[Pacemaker] How to put delay in fence_intelmodular for one node only
Gianluca Cecchi
gianluca.cecchi at gmail.com
Sat Jun 21 08:26:17 UTC 2014
Hello,
I have a CentOS 6.5 based cluster with
pacemaker-1.1.10-14.el6_5.3.x86_64
cman-3.0.12.1-59.el6_5.2.x86_64
and configured pacemaker with cman integration.
The nodes are two blades inside an Intel enclosure.
At the moment my configuration has this in cluster.conf
<fencedevices>
<fencedevice name="pcmk" agent="fence_pcmk"/>
</fencedevices>
and this if I run "pcs cluster edit"
<primitive id="Fencing" class="stonith" type="fence_intelmodular">
<instance_attributes id="Fencing-params">
<nvpair id="Fencing-passwd-script" name="passwd_script"
value="/usr/local/bin/fence_pwd.sh"/>
<nvpair id="Fencing-login" name="login" value="snmpv3user"/>
<nvpair id="Fencing-ipaddr" name="ipaddr"
value="192.168.150.150"/>
<nvpair id="Fencing-power_wait" name="power_wait" value="15"/>
<nvpair id="Fencing-snmp_version" name="snmp_version" value="3"/>
<nvpair id="Fencing-snmp_auth_prot" name="snmp_auth_prot"
value="SHA"/>
<nvpair id="Fencing-snmp_sec_level" name="snmp_sec_level"
value="authNoPriv"/>
<nvpair id="Fencing-pcmk_host_list" name="pcmk_host_list"
value="srvmgmt01.localdomain.local,srvmgmt02.localdomain.local"/>
<nvpair id="Fencing-pcmk_host_map" name="pcmk_host_map"
value="srvmgmt01.localdomain.local:5;srvmgmt02.localdomain.local:6"/>
</instance_attributes>
<operations>
<op id="Fencing-monitor-10m" interval="10m" name="monitor"
timeout="300s"/>
</operations>
</primitive>
If I want to set a delay on one of the two nodes, so that it is privileged in
case of split brain, what is the right place to put it, and how?
Or do I have to split the fencing definition into two separate stonith resources?
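In case it helps clarify what I mean by decoupling, here is a rough sketch of
what I imagine (resource names and the 10-second value are just made up by me,
and I am not sure this is the recommended way): two stonith primitives, one per
node, reusing the same SNMP options, with "delay" set only on the device that
fences the node I want to privilege, so that in a fencing race the other node's
shot at it is held back:

```shell
# Hypothetical sketch, not tested: one stonith device per node.
# "delay" goes on the device that FENCES the privileged node
# (srvmgmt01 here), so srvmgmt01 gets extra time to shoot first.
pcs stonith create Fencing-srvmgmt01 fence_intelmodular \
    ipaddr="192.168.150.150" login="snmpv3user" \
    passwd_script="/usr/local/bin/fence_pwd.sh" \
    snmp_version="3" snmp_auth_prot="SHA" snmp_sec_level="authNoPriv" \
    power_wait="15" delay="10" \
    pcmk_host_list="srvmgmt01.localdomain.local" \
    pcmk_host_map="srvmgmt01.localdomain.local:5"

# No delay on the device fencing the non-privileged node.
pcs stonith create Fencing-srvmgmt02 fence_intelmodular \
    ipaddr="192.168.150.150" login="snmpv3user" \
    passwd_script="/usr/local/bin/fence_pwd.sh" \
    snmp_version="3" snmp_auth_prot="SHA" snmp_sec_level="authNoPriv" \
    power_wait="15" \
    pcmk_host_list="srvmgmt02.localdomain.local" \
    pcmk_host_map="srvmgmt02.localdomain.local:6"
```

Is something along these lines correct, or can the delay be expressed inside
the single existing Fencing resource?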
BTW: fencing in general seems OK, but running crm_mon -1 on the two nodes
gives output that I find confusing (see below); is this expected?
[root@srvmgmt01 ~]# crm_mon -1
Last updated: Sat Jun 21 10:24:25 2014
Last change: Thu Jun 12 00:09:21 2014 via crmd on srvmgmt01.localdomain.local
Stack: cman
Current DC: srvmgmt02.localdomain.local - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
2 Nodes configured
4 Resources configured
Online: [ srvmgmt01.localdomain.local srvmgmt02.localdomain.local ]
Master/Slave Set: ms_drbd_kvm-ovirtmgr [p_drbd_kvm-ovirtmgr]
Masters: [ srvmgmt01.localdomain.local ]
Slaves: [ srvmgmt02.localdomain.local ]
p_kvm-ovirtmgr (ocf::heartbeat:VirtualDomain): Started srvmgmt01.localdomain.local
Fencing (stonith:fence_intelmodular): Started srvmgmt02.localdomain.local
[root@srvmgmt02 ~]# crm_mon -1
Last updated: Sat Jun 21 10:24:19 2014
Last change: Thu Jun 12 00:09:21 2014 via crmd on srvmgmt01.localdomain.local
Stack: cman
Current DC: srvmgmt02.localdomain.local - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
2 Nodes configured
4 Resources configured
Online: [ srvmgmt01.localdomain.local srvmgmt02.localdomain.local ]
Master/Slave Set: ms_drbd_kvm-ovirtmgr [p_drbd_kvm-ovirtmgr]
Masters: [ srvmgmt01.localdomain.local ]
Slaves: [ srvmgmt02.localdomain.local ]
p_kvm-ovirtmgr (ocf::heartbeat:VirtualDomain): Started srvmgmt01.localdomain.local
Fencing (stonith:fence_intelmodular): Started srvmgmt02.localdomain.local
Thanks in advance,
Gianluca