[Pacemaker] two node cluster and shared device

Oliver Heinz oheinz at fbihome.de
Wed May 19 03:27:31 EDT 2010


I'm having problems creating a 2-node cluster with shared storage and a cluster FS 
that behaves the way I want.

The decision should be based on network connectivity. If node-a loses 
network connectivity (e.g. I pull the plugs), node-b should take over all 
resources, because it is still able to ping the switch and the gateway.
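(For the connectivity part, I believe the stock ocf:pacemaker:ping agent can express this: it maintains a node attribute, "pingd" by default, holding a score for the reachable hosts, which a location rule can then test. A rough sketch, with placeholder addresses for the switch and gateway:

```
primitive resPing ocf:pacemaker:ping \
        params host_list="10.0.0.1 10.0.0.254" multiplier="100" \
        op monitor interval="15s"
clone clonePing resPing \
        meta globally-unique="false"
location locFSNeedsNet cloneFS \
        rule -inf: not_defined pingd or pingd lte 0
```

A node that can reach none of the listed hosts gets a -INF score for the filesystem clone, so resources move away from it. This only moves resources, though; it is not a substitute for real fencing.)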

It would be great if the shared storage itself could act as a fencing device 
to avoid FS corruption. If node-b detects that node-a still has access to the 
shared storage, it should fence itself (network communication to node-a is not 
possible, so the lock daemons for the shared FS cannot communicate). But if 
node-a doesn't access the shared device, node-b should take over the resources.
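(This sounds like what sbd ("storage-based death") is meant for: a small shared partition that both nodes watch, used as a poison-pill fencing channel. If I understand it correctly, the setup is roughly: initialize the device once with something like `sbd -d <device> create`, run the sbd daemon on both nodes, then configure the external/sbd STONITH plugin, e.g.:

```
primitive resSBD stonith:external/sbd \
        params sbd_device="/dev/mapper/sbd"
clone cloneSBD resSBD
property stonith-enabled="true"
```

The device path and resource names here are placeholders; I haven't tested this myself.)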

Is this possible, or is it just stupid?
Should I always use a third node (even if it does not share resources) just 
as a tie-breaker?
Can I STONITH based on the ping? Will access to the shared device work after I 
STONITH the other node? With my current config, access to the filesystem hangs 
after I kill a node (with OCFS2 I even get kernel segfaults; with GFS it just 
hangs).

Maybe someone has a working configuration for a two (or three) node / shared 
device / cluster FS setup and wants to share it?


TIA,
Oliver


current config that works fine as long as I don't kill a node:


node server-c \
        attributes standby="off"
node server-d \
        attributes standby="off"
primitive resCLVM ocf:lvm2:clvmd \
        params daemon_timeout="30"
primitive resDATA ocf:heartbeat:LVM \
        params volgrpname="data"
primitive resDLM ocf:pacemaker:controld \
        op monitor interval="120s"
primitive resFS ocf:heartbeat:Filesystem \
        params device="/dev/mapper/data-data" directory="/srv/data" fstype="ocfs2" \
        op monitor interval="120s"
primitive resO2CB ocf:pacemaker:o2cb \
        op monitor interval="120s"
clone cloneCLVM resCLVM \
        meta target-role="Started" interleave="true" ordered="true"
clone cloneDATA resDATA \
        meta interleave="true" ordered="true"
clone cloneDLM resDLM \
        meta globally-unique="false" interleave="true"
clone cloneFS resFS \
        meta interleave="true" ordered="true" is-managed="true" target-role="Started"
clone cloneO2CB resO2CB \
        meta globally-unique="false" interleave="true"
colocation colDATA inf: cloneDATA cloneCLVM
colocation colFSO2CB inf: cloneFS cloneO2CB
colocation colO2CBDLM inf: cloneO2CB cloneDLM
order ordCLVM inf: cloneDLM cloneCLVM
order ordDATA inf: cloneCLVM cloneDATA
order ordDATAFS 0: cloneDATA cloneFS
order ordDLMO2CB 0: cloneDLM cloneO2CB
order ordO2CBFS 0: cloneO2CB cloneFS
property $id="cib-bootstrap-options" \
        dc-version="1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd" \
        cluster-infrastructure="openais" \
        expected-quorum-votes="2" \
        stonith-enabled="false" \
        last-lrm-refresh="1272026744" \
        no-quorum-policy="ignore"



