[Pacemaker] How to set up STONITH in a 2-node active/passive Linux HA Pacemaker cluster?
Florian Haas
florian at hastexo.com
Mon Mar 19 19:26:20 UTC 2012
On Mon, Mar 19, 2012 at 8:14 PM, Mathias Nestler
<mathias.nestler at barzahlen.de> wrote:
> Hi everyone,
>
> I am trying to set up an active/passive (2-node) Linux-HA cluster with Corosync and Pacemaker to keep a PostgreSQL database up and running. It works via DRBD and a service IP. If node1 fails, node2 should take over, and likewise if PG runs on node2 and that node fails. Everything works fine except the STONITH part.
>
> Between the nodes is a dedicated HA connection (10.10.10.X), so I have the following interface configuration:
>
> eth0          eth1         host
> 10.10.10.251  172.10.10.1  node1
> 10.10.10.252  172.10.10.2  node2
>
> STONITH is enabled, and I am testing with the ssh agent to kill nodes.
>
> crm configure property stonith-enabled=true
> crm configure property stonith-action=poweroff
> crm configure rsc_defaults resource-stickiness=100
> crm configure property no-quorum-policy=ignore
>
> crm configure primitive stonith_postgres stonith:external/ssh \
> params hostlist="node1 node2"
> crm configure clone fencing_postgres stonith_postgres
You're missing location constraints, and doing this with 2 primitives
rather than 1 clone is usually cleaner: with one primitive per peer,
you can ban each fencing resource from the node it is meant to fence,
so a node never runs its own fencing device. The example below is for
external/libvirt rather than external/ssh, but you ought to be able to
apply the concept anyhow:
http://www.hastexo.com/resources/hints-and-kinks/fencing-virtual-cluster-nodes
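
Adapted back to external/ssh, that approach looks roughly like this
(resource and constraint names here are illustrative, not taken from
the article above):

  crm configure primitive fence_node1 stonith:external/ssh \
      params hostlist="node1"
  crm configure primitive fence_node2 stonith:external/ssh \
      params hostlist="node2"
  # Ban each fencing resource from the node it fences,
  # so each node only ever fences its peer:
  crm configure location l_fence_node1 fence_node1 -inf: node1
  crm configure location l_fence_node2 fence_node2 -inf: node2

And keep in mind that external/ssh is only suitable for testing; for
production you'll want a real fencing device (IPMI, PDU, etc.).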
Hope this helps.
Cheers,
Florian
--
Need help with High Availability?
http://www.hastexo.com/now