[Pacemaker] Pacemaker unnecessarily (?) restarts a vm on active node when other node brought out of standby

Ian cl-3627 at jusme.com
Tue May 13 15:23:52 EDT 2014


David Vossel wrote:
> does setting resource-stickiness help?
> 
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#s-resource-options


Thanks for the suggestion. I applied resource-stickiness=100 to the vm 
resource, but it doesn't seem to have any effect (same behavior: the vm and 
gfs filesystem are stopped and restarted when the underlying drbd resource 
is promoted from master/slave to master/master).
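
(For reference, the stickiness was applied with something along the lines of:

   # pcs resource meta res_vm_nfs_server resource-stickiness=100

which is what shows up in the Meta Attrs of res_vm_nfs_server in the config 
below.)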

A bit of searching turned up these, which seem somewhat related:

   https://github.com/ClusterLabs/pacemaker/pull/401
   http://bugs.clusterlabs.org/show_bug.cgi?id=5055

I'm wondering whether I have these patches in the stock CentOS package 
(pacemaker-1.1.10-14.el6_5.2.x86_64)?
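
(I suppose one way to check would be to grep the package changelog for the 
upstream bug number, e.g. something like:

   # rpm -q --changelog pacemaker | grep -i -B1 -A2 5055

though I'm not sure the Red Hat changelog references the clusterlabs bug 
numbers at all.)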


# pcs config
Cluster Name: jusme
Corosync Nodes:

Pacemaker Nodes:
  sv06 sv07

Resources:
  Master: vm_storage_core_dev-master
   Meta Attrs: master-max=2 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
   Group: vm_storage_core_dev
    Resource: res_drbd_vm1 (class=ocf provider=linbit type=drbd)
     Attributes: drbd_resource=vm1
     Operations: monitor interval=60s (res_drbd_vm1-monitor-interval-60s)
  Clone: vm_storage_core-clone
   Group: vm_storage_core
    Resource: res_fs_vm1 (class=ocf provider=heartbeat type=Filesystem)
     Attributes: device=/dev/drbd/by-res/vm1 directory=/data/vm1 fstype=gfs2 options=noatime,nodiratime
     Operations: monitor interval=60s (res_fs_vm1-monitor-interval-60s)
  Master: nfs_server_dev-master
   Meta Attrs: master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
   Group: nfs_server_dev
    Resource: res_drbd_live (class=ocf provider=linbit type=drbd)
     Attributes: drbd_resource=live
     Operations: monitor interval=60s (res_drbd_live-monitor-interval-60s)
  Resource: res_vm_nfs_server (class=ocf provider=heartbeat type=VirtualDomain)
   Attributes: config=/etc/libvirt/qemu/vm09.xml
   Meta Attrs: resource-stickiness=100
   Operations: monitor interval=60s (res_vm_nfs_server-monitor-interval-60s)

Stonith Devices:
Fencing Levels:

Location Constraints:
Ordering Constraints:
   promote vm_storage_core_dev-master then start vm_storage_core-clone (Mandatory) (id:order-vm_storage_core_dev-master-vm_storage_core-clone-mandatory)
   promote nfs_server_dev-master then start res_vm_nfs_server (Mandatory) (id:order-nfs_server_dev-master-res_vm_nfs_server-mandatory)
   start vm_storage_core-clone then start res_vm_nfs_server (Mandatory) (id:order-vm_storage_core-clone-res_vm_nfs_server-mandatory)
Colocation Constraints:
   vm_storage_core-clone with vm_storage_core_dev-master (INFINITY) (rsc-role:Started) (with-rsc-role:Master) (id:colocation-vm_storage_core-clone-vm_storage_core_dev-master-INFINITY)
   res_vm_nfs_server with nfs_server_dev-master (INFINITY) (rsc-role:Started) (with-rsc-role:Master) (id:colocation-res_vm_nfs_server-nfs_server_dev-master-INFINITY)
   res_vm_nfs_server with vm_storage_core-clone (INFINITY) (id:colocation-res_vm_nfs_server-vm_storage_core-clone-INFINITY)

Cluster Properties:
  cluster-infrastructure: cman
  dc-version: 1.1.10-14.el6_5.2-368c726
  stonith-enabled: false
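
(If it would help, I can also post the allocation scores; as far as I know 
something like:

   # crm_simulate -sL

should show what the policy engine is scoring when the other node comes out 
of standby.)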





