[Pacemaker] Pacemaker unnecessarily (?) restarts a vm on active node when other node brought out of standby
Andrew Beekhof
andrew at beekhof.net
Wed May 14 06:58:44 UTC 2014
On 14 May 2014, at 5:23 am, Ian <cl-3627 at jusme.com> wrote:
> David Vossel wrote:
>> does setting resource-stickiness help?
>> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#s-resource-options
>
>
> Thanks for the suggestion. I applied resource-stickiness=100 to the vm resource, but it doesn't seem to have any effect (same behavior: the vm and gfs2 filesystem are stopped and restarted when the underlying drbd resource is promoted from master/slave to master/master).
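For reference (and it may well make no difference to this particular case), stickiness can be set either on the single resource, which is roughly what you did, or cluster-wide as a resource default; illustrative pcs syntax:

    pcs resource meta res_vm_nfs_server resource-stickiness=100    # just the vm
    pcs resource defaults resource-stickiness=100                  # default for all resources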
Hmmm, master-max=2... I'd bet that's something the code isn't handling optimally.
Can you attach a crm_report tarball for the period covered by your test?
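Something along these lines should do it (the times below are placeholders; adjust them to bracket the standby/unstandby test):

    crm_report --from "2014-05-13 12:00:00" --to "2014-05-13 14:00:00" /tmp/standby-test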
>
> A bit of searching turned up these, which seem somewhat related:
>
> https://github.com/ClusterLabs/pacemaker/pull/401
> http://bugs.clusterlabs.org/show_bug.cgi?id=5055
>
> Wondering if I have these patches in the stock CentOS release (pacemaker-1.1.10-14.el6_5.2.x86_64)?
Nope.
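If you want to double-check on your own box, the package changelog is one place to look, though not every backported fix is listed there. An illustrative check using the PR/bug numbers above:

    rpm -q --changelog pacemaker | grep -E '401|5055'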
>
>
> # pcs config
> Cluster Name: jusme
> Corosync Nodes:
>
> Pacemaker Nodes:
>  sv06 sv07
>
> Resources:
>  Master: vm_storage_core_dev-master
>   Meta Attrs: master-max=2 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
>   Group: vm_storage_core_dev
>    Resource: res_drbd_vm1 (class=ocf provider=linbit type=drbd)
>     Attributes: drbd_resource=vm1
>     Operations: monitor interval=60s (res_drbd_vm1-monitor-interval-60s)
>  Clone: vm_storage_core-clone
>   Group: vm_storage_core
>    Resource: res_fs_vm1 (class=ocf provider=heartbeat type=Filesystem)
>     Attributes: device=/dev/drbd/by-res/vm1 directory=/data/vm1 fstype=gfs2 options=noatime,nodiratime
>     Operations: monitor interval=60s (res_fs_vm1-monitor-interval-60s)
>  Master: nfs_server_dev-master
>   Meta Attrs: master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
>   Group: nfs_server_dev
>    Resource: res_drbd_live (class=ocf provider=linbit type=drbd)
>     Attributes: drbd_resource=live
>     Operations: monitor interval=60s (res_drbd_live-monitor-interval-60s)
>  Resource: res_vm_nfs_server (class=ocf provider=heartbeat type=VirtualDomain)
>   Attributes: config=/etc/libvirt/qemu/vm09.xml
>   Meta Attrs: resource-stickiness=100
>   Operations: monitor interval=60s (res_vm_nfs_server-monitor-interval-60s)
>
> Stonith Devices:
> Fencing Levels:
>
> Location Constraints:
> Ordering Constraints:
>   promote vm_storage_core_dev-master then start vm_storage_core-clone (Mandatory) (id:order-vm_storage_core_dev-master-vm_storage_core-clone-mandatory)
>   promote nfs_server_dev-master then start res_vm_nfs_server (Mandatory) (id:order-nfs_server_dev-master-res_vm_nfs_server-mandatory)
>   start vm_storage_core-clone then start res_vm_nfs_server (Mandatory) (id:order-vm_storage_core-clone-res_vm_nfs_server-mandatory)
> Colocation Constraints:
>   vm_storage_core-clone with vm_storage_core_dev-master (INFINITY) (rsc-role:Started) (with-rsc-role:Master) (id:colocation-vm_storage_core-clone-vm_storage_core_dev-master-INFINITY)
>   res_vm_nfs_server with nfs_server_dev-master (INFINITY) (rsc-role:Started) (with-rsc-role:Master) (id:colocation-res_vm_nfs_server-nfs_server_dev-master-INFINITY)
>   res_vm_nfs_server with vm_storage_core-clone (INFINITY) (id:colocation-res_vm_nfs_server-vm_storage_core-clone-INFINITY)
>
> Cluster Properties:
>  cluster-infrastructure: cman
>  dc-version: 1.1.10-14.el6_5.2-368c726
>  stonith-enabled: false
>
>
>
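In the meantime, crm_simulate against the live CIB will print the transition the policy engine intends (and, with scores shown, some of the reasoning), which may help pin down why the vm gets restarted. Roughly:

    # -L uses the live cluster state, -s shows allocation scores
    crm_simulate -sL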