[Pacemaker] Fencing of movable VirtualDomains
Daniel Dehennin
daniel.dehennin at baby-gnu.org
Thu Oct 2 18:41:39 CEST 2014
Hello,
I'm setting up a 3-node OpenNebula[1] cluster on Debian Wheezy, using a
SAN for shared storage and KVM as the hypervisor.
The OpenNebula frontend runs as a VM for HA[2].
I had quorum issues when the node running the frontend died, as the
two other nodes lost quorum, so I added a pure quorum node in
standby="on" mode.
My physical hosts are fenced using stonith:external/ipmi, which works
great: one stonith device per node, with an anti-location constraint
keeping each device off the node it fences.
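One of these devices looks roughly like this (a minimal sketch; the
IPMI address and credentials are placeholders):
#+begin_src
primitive Stonith-nebula1-IPMILAN stonith:external/ipmi \
params hostname="nebula1" ipaddr="192.0.2.1" userid="admin" passwd="secret" interface="lanplus" \
op monitor interval="30m"
# Anti-location: nebula1 must never run its own fencing device.
location Stonith-nebula1-IPMILAN-not-on-itself Stonith-nebula1-IPMILAN -inf: nebula1
#+end_src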
I have more trouble fencing the VMs, since they can move.
I tried to define a stonith device per VM and colocate it with the VM
itself, like this:
#+begin_src
primitive ONE-Frontend ocf:heartbeat:VirtualDomain \
params config="/var/lib/one/datastores/one/one.xml" \
op start interval="0" timeout="90" \
op stop interval="0" timeout="100" \
meta target-role="Stopped"
primitive Quorum-Node ocf:heartbeat:VirtualDomain \
params config="/var/lib/one/datastores/one/quorum.xml" \
op start interval="0" timeout="90" \
op stop interval="0" timeout="100" \
meta target-role="Started" is-managed="true"
primitive Stonith-Quorum-Node stonith:external/libvirt \
params hostlist="quorum" hypervisor_uri="qemu:///system"
pcmk_host_list="quorum" pcmk_host_check="static-list" \
op monitor interval="30m" \
meta target-role="Started"
location ONE-Frontend-fenced-by-hypervisor Stonith-ONE-Frontend \
rule $id="ONE-Frontend-fenced-by-hypervisor-rule" inf: #uname ne quorum and #uname ne one
location ONE-Frontend-run-on-hypervisor ONE-Frontend \
rule $id="ONE-Frontend-run-on-hypervisor-rule" 20: #uname eq nebula1 \
rule $id="ONE-Frontend-run-on-hypervisor-rule-0" 30: #uname eq nebula2 \
rule $id="ONE-Frontend-run-on-hypervisor-rule-1" 40: #uname eq nebula3
location Quorum-Node-fenced-by-hypervisor Stonith-Quorum-Node \
rule $id="Quorum-Node-fenced-by-hypervisor-rule" inf: #uname ne quorum and #uname ne one
location Quorum-Node-run-on-hypervisor Quorum-Node \
rule $id="Quorum-Node-run-on-hypervisor-rule" 50: #uname eq nebula1 \
rule $id="Quorum-Node-run-on-hypervisor-rule-0" 40: #uname eq nebula2 \
rule $id="Quorum-Node-run-on-hypervisor-rule-1" 30: #uname eq nebula3
colocation Fence-ONE-Frontend-on-its-hypervisor inf: ONE-Frontend Stonith-ONE-Frontend
colocation Fence-Quorum-Node-on-its-hypervisor inf: Quorum-Node Stonith-Quorum-Node
property $id="cib-bootstrap-options" \
dc-version="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" \
cluster-infrastructure="openais" \
expected-quorum-votes="5" \
stonith-enabled="true" \
last-lrm-refresh="1412242734" \
stonith-timeout="30" \
symmetric-cluster="false"
#+end_src
But I cannot start the Quorum-Node resource; I get the following in the logs:
#+begin_src
info: can_fence_host_with_device: Stonith-nebula2-IPMILAN can not fence quorum: static-list
#+end_src
All the examples I found describe a configuration where each VM stays on
a single hypervisor, in which case libvirt is configured to listen on
TCP and “hypervisor_uri” points to it.
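In that static case, the device definition looks roughly like this (a
sketch, assuming libvirtd on nebula1 listens on TCP and “myvm” is the
VM pinned to it):
#+begin_src
primitive Stonith-myvm stonith:external/libvirt \
params hostlist="myvm" hypervisor_uri="qemu+tcp://nebula1/system" \
pcmk_host_list="myvm" pcmk_host_check="static-list" \
op monitor interval="30m"
#+end_src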
Does anyone have ideas on how to configure stonith:external/libvirt for
movable VMs?
Regards.
Footnotes:
[1] http://opennebula.org/
[2] http://docs.opennebula.org/4.8/advanced_administration/high_availability/oneha.html
--
Daniel Dehennin
Retrieve my GPG key: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
Fingerprint: 3E69 014E 5C23 50E8 9ED6 2AAD CC1E 9E5B 7A6F E2DF