[Pacemaker] master/slave problems

Uwe Grawert grawert at b1-systems.de
Wed Mar 16 03:47:18 EDT 2011


Hi,

Am 16.03.11 02:27, schrieb Sam Pinar:
> I've set up a two-node cluster for testing using the "Clusters from Scratch -
> Apache, DRBD and GFS2" guide. I've set it up and failover works like a
> charm, but I want one of the nodes to be a master, and to fail resources
> back to the master when it comes back up. At the moment, the resource stays
> on the node it failed over to. Configs:
> 
> 
> node ULTPLN30.DMZ
> node ULTPLN31.DMZ \
>         attributes standby="off"
> primitive ClusterIP ocf:heartbeat:IPaddr2 \
>         params ip="10.110.4.123" cidr_netmask="32" \
>         op monitor interval="30s"
> property $id="cib-bootstrap-options" \
>         dc-version="1.1.5-1.1.el5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f"
> \
>         cluster-infrastructure="openais" \
>         expected-quorum-votes="2" \
>         stonith-enabled="false" \
>         no-quorum-policy="ignore"

if you want to do that, you have to give your resource stickiness to
the node you define as master. Say your master is ULTPLN30.DMZ; then you
have to define a location constraint that sticks your resource to that
node.

The following command assigns a location preference with a score of 10
for your first node. As long as this rule is not outvoted by another
rule, your resource will stick to the defined node.

crm configure location loc_ip_stick_to_master ClusterIP 10: ULTPLN30.DMZ

Have a look at the help for location constraints: crm configure help location
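Note that failback only happens if the location score outweighs the
resource's stickiness: Pacemaker compares the location preference for
ULTPLN30.DMZ against the stickiness the resource has accumulated on the
node it is currently running on. A minimal sketch (resource and node
names are taken from your configuration above; the stickiness value of 5
is just an assumption for illustration):

```shell
# Give all resources a small default stickiness so they do not move on
# every transient score change. Keep it BELOW the location score of 10,
# otherwise the resource will stay where it is instead of failing back.
crm configure rsc_defaults resource-stickiness=5

# Prefer ULTPLN30.DMZ with score 10. Since 10 > 5, the resource moves
# back to ULTPLN30.DMZ when that node rejoins the cluster.
crm configure location loc_ip_stick_to_master ClusterIP 10: ULTPLN30.DMZ
```

If you instead set resource-stickiness higher than the location score
(say 100 vs. 10), you get the behaviour you are seeing now: the resource
stays on the failover node even after the master returns.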



