[Pacemaker] Two resource nodes + one quorum node

Lars Marowsky-Bree lmb at suse.com
Thu Jun 13 16:10:16 EDT 2013


On 2013-06-13T22:12:23, Andrey Groshev <greenx at yandex.ru> wrote:

> >>>  It doesn't have to be able to run services, it only needs to
> >>> contribute to quorum.
> >>  That is, there is no way to switch the node into standby mode from
> >> a pacemaker script/config?
> > Sure there is:
> >
> > # crm node standby nodename
> >
> Again, that's not what I meant. I know how to switch a node into
> standby mode. What I want is for the "quorum" node to never be a
> candidate for running resources, so that it is clearly a
> tie-breaker-only node.

I'm sorry, I don't understand what you're saying. You asked for how to
switch a node to standby, so I thought that was what I was responding
to.
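
If what you're after is a node that contributes to quorum but can
never be a candidate for running resources: a permanent standby
attribute lives in the configuration and does exactly that. A minimal
sketch (the node name is a placeholder):

  # crm_attribute --node quorum-node --name standby \
      --update on --lifetime forever

With that in place, the policy engine will never put resources on
that node, while it still counts towards quorum.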

> A cluster of two nodes, by definition, cannot be trusted, because
> with two nodes there is no majority to tell which one is right.

That's not quite true. The problems of two-node clusters show up
mostly during testing, where "let's rip out all the cables, bwahahaha"
is considered a valid scenario. It may be, but all links failing at
once is still not the most likely failure in the real world for
normal clusters.

The most likely case of the other node being unreachable? The other node
is dead.

And while "fence loops" can occur, they can also be avoided by some
reasonably easy configuration changes. (Don't start the cluster on
boot, use the recent sbd changes, etc.) Both nodes fencing at once
used to be avoidable via a "start-delay" on the fencing resource,
too.
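
Roughly, that trick looked like this; the agent, resource name and
delay below are illustrative only:

  # crm configure primitive st-sbd stonith:external/sbd \
      op start interval="0" start-delay="15s"

The idea is to stagger when each node's fencing becomes usable after
startup, so both nodes don't shoot at the exact same moment.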

> There should always be a third option: a third node, the default
> gateway or something.

Well, the third-node option exists, as outlined above. I don't much
like it either; I'd much more readily suggest a plain two-node
environment than "just" adding a tie-breaker node, because of the
maintenance overhead that causes.

Pinging the default gateway as a tie-breaker has issues: if both
nodes can ping it, but their cluster comms are broken, you still have
a split-brain scenario. So people don't consider this a "perfect"
fix.
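
For reference, that setup is usually wired up with ocf:pacemaker:ping
plus a location rule. A minimal sketch, where the gateway address and
the resource name g-services are placeholders:

  # crm configure primitive p-ping ocf:pacemaker:ping \
      params host_list="192.168.1.1" multiplier="1000" \
      op monitor interval="10s"
  # crm configure clone cl-ping p-ping
  # crm configure location l-connected g-services \
      rule -inf: not_defined pingd or pingd lte 0

Note that this only steers resource placement towards nodes with
connectivity; as said, it doesn't arbitrate a split on its own.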

With SBD for fencing, hosting one of the SBD paths on an iSCSI server on
a third node/SAN/NAS is effectively such a tie-breaker too. In the RHEL
world, I expect fence_sanlock can do something similar.
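
A sketch of the SBD side, with placeholder device paths; up to three
devices are supported, so one of them can live on that third-party
iSCSI target:

  # sbd -d /dev/disk/by-id/scsi-DEV1 -d /dev/disk/by-id/scsi-DEV2 \
      -d /dev/disk/by-id/scsi-DEV3 create
  # crm configure primitive stonith-sbd stonith:external/sbd

(Plus listing the same devices in SBD_DEVICE in /etc/sysconfig/sbd so
the sbd daemon watches them.)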


We support a large number of clusters running pacemaker/corosync, the
majority of them being two-node clusters. Two-node clusters are way
less trouble than larger ones.
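
The one property a two-node cluster does need is permission to keep
running without quorum, since it can never retain quorum after losing
a node; the usual setting, which is only safe with working fencing,
is:

  # crm configure property no-quorum-policy=ignore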

Clearly, three nodes can be better, but the exaggerated "don't do
two-node environments" misses the mark.


Regards,
    Lars

-- 
Architect Storage/HA
SUSE LINUX Products GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 21284 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde





More information about the Pacemaker mailing list