[Pacemaker] 2 node cluster questions

mark - pacemaker list m+pacemaker at nerdish.us
Fri Nov 25 14:27:26 EST 2011


Hi Dirk,

On Fri, Nov 25, 2011 at 6:05 AM, Hellemans Dirk D
<Dirk.Hellemans at hpcds.com> wrote:

> Hello everyone,
>
> I’ve been reading a lot lately about using Corosync/OpenAIS in combination
> with Pacemaker: the SuSE Linux documentation, the Pacemaker and Linux-HA
> websites, interesting blogs, mailing lists, etc. As I’m particularly
> interested in how well two-node clusters (located within the same server
> room) are handled, I was a bit confused by the fact that quorum disks/
> quorum servers are (not yet?) supported/used. Some suggested adding a third
> node which does not actively participate (e.g. only running corosync, or
> with heartbeat but in standby mode). That might be a solution but doesn’t
> “feel” right, especially if you consider multiple two-node clusters... that
> would require a lot of extra “quorum-only nodes”. SBD (storage-based death)
> in combination with a hardware watchdog timer also seemed to provide a
> solution: run it on top of iSCSI storage and you end up with a fencing
> device and some sort of “network-based quorum” as a tiebreaker. If one node
> loses network connectivity, SBD + watchdog will make sure it’s fenced.
>
> I’d love to hear your ideas about two-node cluster setups. What is the best
> way to do it? Any chance we’ll get quorum disks/quorum servers in the
> (near) future?
>


Our experience with a two-node SBD-based cluster wasn't good.  After setup,
we started on failure scenarios.  The first test was to drop network
connectivity for one of the nodes while both could still access storage.
The nodes fenced each other (the "STONITH deathmatch" you can read about),
killing all services and leaving us waiting for both nodes to boot back up.
That first test was such a complete failure that we didn't even proceed
with further checks.  We took a standard PC and built it out as a third
node, giving the cluster true quorum, and now it's rock-solid and behaves
correctly in every failure scenario we throw at it.  For production use,
the very real possibility of two nodes killing each other just wasn't
worth the risk to us.
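For what it's worth, the third node never has to run anything: kept in
standby, it only contributes its quorum vote.  A minimal sketch of that
setup with the crm shell ("quorum-node" is a hypothetical hostname, not
our actual config):

```
# Keep the quorum-only node from ever running resources
# ("quorum-node" is a made-up hostname).
crm node standby quorum-node

# With three votes, losing any single node no longer costs quorum,
# so the strict default policy is safe to keep:
crm configure property no-quorum-policy=stop
```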

If you go with two nodes and SBD, do a lot of testing.  No matter how much
you test, though, if the nodes lose visibility to each other on the network
but can both still see the storage, you've got a race in which the node
that *should* be fenced (the one with its network cables disconnected) can
fence the node that is still 100% healthy and actively serving clients.

Maybe there's a way to configure around that; if so, I'd be interested in
hearing how.
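One thing that might be worth experimenting with (untested on our side, so
treat it as a sketch only) is tying resource placement to network
connectivity with ocf:pacemaker:ping, so the node that loses its uplink at
least stops being a candidate for the resources; whether that also makes it
lose the fencing race, I can't say.  The IP and resource names below are
hypothetical:

```
# crm configure fragment: ping a reference IP (e.g. the default
# gateway) from every node and record the result in the "pingd"
# node attribute.
primitive p_ping ocf:pacemaker:ping \
        params host_list="10.0.0.1" multiplier="1000" \
        op monitor interval="15s" timeout="60s"
clone cl_ping p_ping

# Forbid the (hypothetical) g_services group from running on any
# node that cannot reach the reference IP.
location loc_need_network g_services \
        rule -inf: not_defined pingd or pingd lte 0
```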

Regards,
Mark





>
> In addition, say you’re not using SBD but an IPMI-based fencing solution.
> You lose network connectivity on one of the nodes (I know, they’re
> redundant, but still... sh*t happens ;) ). Does Pacemaker know which of the
> two nodes lost network connectivity? E.g.: node 1 runs an Oracle database,
> node 2 nothing. Node 2 loses network connectivity (e.g. both NICs without
> signal because an errant technician unplugged them ;) )... a split-brain
> situation occurs, but who’ll be fenced? The one running Oracle?? I really
> hope not... because in this case, the cluster can “see” there’s no signal
> on the NICs of node 2. It would be interesting to know more about how
> Pacemaker/corosync makes this kind of decision... how it chooses which node
> will be fenced in case of split brain. Is it chosen randomly? Is it the DC
> which decides? Based on NIC state? I did some quick testing with 2 VMs and
> at first glance, it looks like Pacemaker/corosync always fences the correct
> node, i.e. the node where I unplugged the “virtual” cable.
>
> I’m curious!
>
> Thanks a lot!
>
> Best regards,
>
> Dirk
>
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>
>