
How to Get a Galera Cluster Into Split Brain

"Split Brain" is the term commonly used for a cluster whose nodes have different contents, rather than identical as they should have. Typically, a "split brain" situation is the DBA's nightmare, and the Galera software is designed to avoid it. Galera is very successful in that avoidance, and it needs some special steps by the DBA to achieve "split brain". Here is how to do it - or, for most DBAs, what to avoid doing to not get a split-brain cluster.

Galera's Design

First, let's recall how Galera operates:

  • The Galera software ensures that all nodes participating in a cluster will start from identical contents, by doing a "state snapshot transfer" (SST) of all current data to a newly joining node.
  • When the cluster is running, Galera transfers all changes (transactions) to all cluster nodes and applies them (or rolls back and ignores, in the case of a conflict).
  • If some connections get lost, all nodes check whether they "have quorum" (belong to a majority), and stop serving requests if they don't.
  • When a disconnected node re-joins the cluster, it gets all the changes made in the meantime transferred ("incremental state transfer", IST) and so brings its contents up to date.
  • Should that be impossible, because some of those changes have become unavailable (log purge), a full transfer (SST) is done.
By this design, the Galera software successfully avoids getting into a "split brain" situation.

Of course, the quorum is a well-known concept. The old term for it is "majority consensus", and the approach is built on a simple principle:
In any set (of cluster nodes), there cannot be two (or more) non-overlapping subsets which both contain a majority of the elements.
So if some loss of connectivity splits the cluster into subsets, at most one of them can "have quorum", all others will stop serving requests, and there cannot be two (or more) different directions in which the contents (data) change.
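
Whether a node currently believes it has quorum can be checked on the node itself, via Galera's status variables; a minimal sketch (connection details are placeholders):

    # "Primary" means this node is in the majority component and serves requests;
    # "non-Primary" means it has lost quorum and refuses normal queries.
    mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_status';"

    # number of nodes this node currently sees in its component
    mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"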

What Galera introduced (compared to previous designs of distributed DBMSs) is the efficient transfer of changes and the conflict detection / resolution ("certification" in Galera terms) at "commit" time, which makes the system fast. Previous designs used "distributed locking" or other principles that added latency to many commands and so made those systems slow.

The Story

Let's get back to the "split brain" issue: I said that Galera avoids it, but also that it can be reached. Sounds contradictory? Well, there are more active components than just Galera. Here is a real-world case, as happened to (I won't say "achieved by") a customer:

Originally, they had set up a Galera cluster of three nodes; let's call them A, B, and C. This is started by bringing up node A as a stand-alone node, running MySQL with the "wsrep" plugin. Then, one after the other, nodes B and C are configured to join node A in forming a cluster, and started. As a result, there are three nodes communicating with each other that form the cluster. In addition, HAProxy is running somewhere; it directs the clients to an active cluster node.
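
To make this concrete, here is a sketch of what such a setup typically looks like in the configuration files. The host names ("nodeA", "nodeB", "nodeC") and the provider path are assumptions for illustration, not taken from the customer's system:

    # node A's my.cnf - brought up stand-alone, i.e. bootstrapping a new cluster
    [mysqld]
    wsrep_provider        = /usr/lib/galera/libgalera_smm.so
    wsrep_cluster_name    = my_cluster
    wsrep_cluster_address = gcomm://            # empty list: start as a new cluster

    # node B's and node C's my.cnf - join the cluster via node A
    [mysqld]
    wsrep_provider        = /usr/lib/galera/libgalera_smm.so
    wsrep_cluster_name    = my_cluster
    wsrep_cluster_address = gcomm://nodeA       # join whatever cluster node A belongs to

Keep node A's "gcomm://" line in mind; it plays the central role in what follows.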

So far, so good: The cluster is running, clients connect and issue transactions, everything is ok, and the DBA(s) turn to other tasks ...

Some time later, node A must be stopped to do some hardware maintenance. No problem, nodes B and C are running fine, they have quorum (2 of 3 is a majority), so the system is still available and operations continue. HAProxy detects that node A does not respond, so it directs all clients to B or C. The cluster architecture is serving its purpose of continuous availability even during a maintenance period.

Maintenance is done, node A is rebooted, and its MySQL+Galera server process restarts. It comes up, HAProxy detects it as running and directs clients to it. All seems fine ...

Three hours later, someone becomes suspicious, detects trouble, and node A is stopped. Why? What has happened?

(Some large gap is appropriate here, to give readers a break and let them consider the situation.)

Analysis

What has happened?

Remember what I wrote about the cluster setup: It was

... started by bringing up node A as a stand-alone node, ...
... nodes B and C are configured to join node A ...
... three nodes communicating with each other that form the cluster.

These steps were sufficient to get the nodes up and running. What was missing, however, was to re-configure A from "stand-alone" to "member of cluster with B and C".

As a consequence, when A was restarted after the maintenance, it again came up stand-alone. It did not try to join the cluster (which would have triggered an IST of the meantime changes) or check for quorum (a stand-alone node is self-sufficient).

Based on A's configuration (as read from disk), all was fine.
Based on the concept of the A+B+C cluster, it was a plain, simple split-brain:
Some changes were done on B+C (which still had quorum), while others were done on A only (which was mis-configured).
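
What should have happened, once all three nodes were up, is to persist the complete member list on every node, so that a restarting node tries to join the others instead of bootstrapping a new cluster of its own. A sketch of the missing change, with the same placeholder host names as above:

    # node A's my.cnf, corrected once the cluster is complete:
    wsrep_cluster_address = gcomm://nodeA,nodeB,nodeC    # on restart, join B and C

    # the same full list belongs in B's and C's configuration files as well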

Lesson to learn (or rather, to bring back into active brain memory):
If some configuration is changed at run-time, this change must also be done in the configuration files so that it is used on restart.

The typical example for such changes is a "set global" command modifying some dynamic variable, like "max_connections".
But in a Galera cluster, a node joining the others is also a dynamic configuration change, and it should be reflected in all configuration files as soon as possible. If this isn't done, the consequences might be as described above.
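
To illustrate the "max_connections" case (the value and file location are arbitrary examples): the dynamic change takes effect immediately, but the server does not write it back to any file, so the matching entry must be added to the configuration file by hand:

    -- change the running server at once (lost on the next restart):
    SET GLOBAL max_connections = 500;

    # matching persistent entry in my.cnf, read on the next restart:
    [mysqld]
    max_connections = 500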

Happy Ending

Now, most stories have a happy ending, and this one shouldn't be an exception:

Luckily, the application uses self-generated keys, similar to UUIDs, so the entries created on A did not conflict with those of B+C. Also, there were no changes to existing data, just inserts. So the situation could be corrected by extracting all new data from A, inserting them on B+C, and then resetting A's state so that it asked B+C for an SST. Phew!
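
For that last step, one common way to make a node forget its local state and request a full SST is to invalidate its Galera state file before restarting it with the corrected cluster address. A minimal sketch, assuming the default data directory and service name (both are assumptions):

    # on node A, after its extra data have been extracted and loaded into B/C:
    systemctl stop mysql
    mv /var/lib/mysql/grastate.dat /var/lib/mysql/grastate.dat.old   # discard local state
    # make sure wsrep_cluster_address now lists the other nodes, then:
    systemctl start mysql     # A joins the cluster and receives a full SST from B or C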

Operational Advice

There are some tools available that will do such a transfer. However, it can be done entirely with the standard programs that come with MySQL:

  • "mysqldump" will extract the data from A.
  • Suitable options will make sure this extract does not contain "drop" or "create" commands, and generates "insert ignore" statements.
    Check the documentation for the options "--no-create-db", "--no-create-info", "--skip-add-drop-table", and "--insert-ignore"; a combined example follows below this list.
  • If the old, common data are deleted first, both dumping and loading become faster, and duplicates are reduced / avoided.
  • "mysql" can be used to load these data into B or C.
Note, however, that conflicts will be ignored and not reported. Other tools or approaches might report them.
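
Put together, the transfer can look roughly like this; the database name "appdb", host names, and credentials are placeholders, so adapt them before using anything like it:

    # on node A: dump only the data - no DROP/CREATE statements, inserts as INSERT IGNORE
    mysqldump --no-create-db --no-create-info --skip-add-drop-table --insert-ignore \
              -u root -p appdb > /tmp/appdb_from_A.sql

    # on node B (or C): load the dump; rows that already exist are silently skipped
    mysql -u root -p appdb < /tmp/appdb_from_A.sql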

Had they used auto-increment keys, or had they modified existing data, it would have been much more complicated, and it might even have been impossible to combine all changes without losing some. I leave it to your imagination to think of such scenarios.

To repeat the lesson in DBMS / DBA terms:

  • The most important property of a database is consistency; it must be maintained at all times.
  • For database operations, the configuration on disk (in files) must be consistent with that in RAM (of the running processes), so any runtime change must also be reflected in the configuration files on disk.

Percona's "pt-config-diff" can be used to compare a node's current variables to its configuration file.
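
Its basic invocation compares a configuration file with the variables of a running server and prints those that differ; for example (connection details are placeholders):

    # compare /etc/my.cnf against the local server's current variables
    pt-config-diff /etc/my.cnf h=localhost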

Take care!