[Pacemaker] Split-site cluster in two locations
Robert van Leeuwen
vanleeuwen at stone-it.com
Tue Jan 11 12:04:50 UTC 2011
-----Original message-----
To: The Pacemaker cluster resource manager <pacemaker at oss.clusterlabs.org>;
From: Christoph Herrmann <C.Herrmann at science-computing.de>
Sent: Tue 11-01-2011 10:24
Subject: Re: [Pacemaker] Split-site cluster in two locations
> As long as you have only two computing centers, it doesn't matter whether you
> run a corosync-only piece or whatever on a physical or a virtual machine. The
> question is: how do you configure a four-node (or six-node, any even number
> bigger than two) corosync/pacemaker cluster to continue services if you have
> a blackout in one computing center (you will always lose (at least) one half
> of your nodes), but to shut down everything if you have less than half of the
> nodes available? Are there any best practices on how to deal with clusters in
> two computing centers? Anything like an external quorum node or a quorum
> partition? I'd like to set expected-quorum-votes to "3", but this is not
> possible (with corosync-1.2.6 and pacemaker-1.1.2 on SLES11 SP1). Does
> anybody know why? Currently, the only way I can figure out is to run the
> cluster with no-quorum-policy="ignore", but I don't like that. Any
> suggestions?
Apart from the number of nodes in the datacenter: with two datacenters you have another issue:
how do you know which DC is reachable (from your clients' point of view) when the communication between the DCs fails?
The best fix for this would be a node at a third DC, but you still run into problems with the fencing devices.
I doubt you can remotely power off the non-responding DC :-)
So a split-brain situation is likely to happen at some point.
So for 100% data integrity I think it is best to let the cluster freeze itself...
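For reference, the "freeze" behavior is what Pacemaker's no-quorum-policy option is for. A minimal sketch with the crm shell (property values as documented for Pacemaker 1.1; verify against your installed version):

```shell
# When a partition loses quorum, freeze instead of ignoring the loss:
# resources already running keep running where they are, but Pacemaker
# performs no recovery or movement, so a split brain cannot start a
# second copy of a resource elsewhere.
crm configure property no-quorum-policy=freeze

# For comparison, the policies discussed in this thread:
#   ignore - keep running and keep recovering (split-brain risk)
#   stop   - stop all resources in the quorum-less partition
#   freeze - keep running but do not recover (the safer compromise)
```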
Best Regards,
Robert van Leeuwen