[Pacemaker] chicken-egg-problem with libvirtd and a VM within cluster
Arnold Krille
arnold at arnoldarts.de
Fri Oct 12 21:05:42 UTC 2012
On Fri, 12 Oct 2012 09:22:13 +0200 Florian Haas <florian at hastexo.com>
wrote:
> For most people, this issue doesn't occur on system boot, as libvirtd
> would normally start before corosync, or corosync/pacemaker isn't part
> of the system bootup sequence at all (the latter is preferred for
> two-node clusters to prevent fencing shootouts in case of cluster
> split brain).
Hm, here I start libvirtd (as a clone resource) from pacemaker and don't
have such problems. But if I remember correctly I have a) both order and
colocation constraints tying the VMs to the libvirtd clone, b) location
constraints based on the pingd value and c) my own RA for virtual
machines that reacts differently when libvirtd isn't started yet.
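Roughly along these lines, as a sketch in crm shell syntax (the resource
names, config path and ping host are made up, the stock VirtualDomain RA
stands in for my own agent, and lsb:libvirtd assumes an LSB init script,
so use whatever class your distribution provides):

  primitive p_libvirtd lsb:libvirtd \
      op monitor interval=30s
  clone cl_libvirtd p_libvirtd
  primitive p_ping ocf:pacemaker:ping \
      params host_list="192.168.0.1" multiplier=1000 \
      op monitor interval=15s
  clone cl_ping p_ping
  primitive p_vm_foo ocf:heartbeat:VirtualDomain \
      params config="/etc/libvirt/qemu/foo.xml" \
      op monitor interval=30s
  # a) start/keep the VM only where the libvirtd clone is running
  order o_libvirtd_before_vm inf: cl_libvirtd p_vm_foo
  colocation col_vm_with_libvirtd inf: p_vm_foo cl_libvirtd
  # b) keep the VM off nodes that have lost connectivity (pingd attribute)
  location loc_vm_needs_ping p_vm_foo \
      rule -inf: not_defined pingd or pingd lte 0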
But I also once had to restore a VM from backup, when an OS update
stopped (and didn't restart) libvirtd and pacemaker started a second
instance of the machine on a different host. These days I remember to
put nodes into standby mode before doing upgrades of core components...
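Something like this, as a sketch with the crm shell (the node name is
made up):

  # move all resources off the node before touching libvirtd/pacemaker
  crm node standby node1
  # ... run the OS update, make sure libvirtd is running again ...
  # then let the node take resources again
  crm node online node1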
<snip>
> Also, a running libvirtd is not
> needed, to the best of my knowledge, when the hypervisor in use is Xen
> rather than KVM.
With KVM you can also start virtual machines without libvirtd. You just
have to use qemu-kvm and a lot of command-line parameters.
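For example, something like this (disk image path, memory size and VNC
display are made-up values, adjust to your setup):

  qemu-kvm \
      -name myguest \
      -m 2048 -smp 2 \
      -drive file=/var/lib/libvirt/images/myguest.img,if=virtio \
      -net nic,model=virtio -net user \
      -vnc :1 \
      -daemonize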
Have fun,
Arnold