[ClusterLabs] dlm_controld 4.0.4 exits when crmd is fencing another node

Vladislav Bogdanov bubble at hoster-ok.com
Fri Jan 22 16:59:25 CET 2016


Hi David, list,

recently I tried to upgrade dlm from 4.0.2 to 4.0.4 and found that it
no longer handles fencing of a remote node initiated by other cluster components.
First I noticed that during valid fencing due to resource stop failure,
but it is easily reproduced with 'crm node fence XXX'.
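
For reference, this is roughly how I reproduce it on a two-node cluster
(the node names here are just placeholders; 'dlm_tool dump' prints
dlm_controld's debug buffer):

  # on node1, ask pacemaker to fence the peer
  crm node fence node2
  # on node1, watch how dlm_controld reacts
  dlm_tool dump | tail -n 40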

I took logs from both 4.0.2 and 4.0.4 and "normalized" them (stripped the
timestamps) for the part after fencing is initiated by pacemaker.
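
The normalization was just something like this, assuming every line of the
dump starts with a single epoch-seconds field (adjust the pattern if your
format differs):

  sed -i 's/^[0-9]* //' dlm_controld.log.4.0.2
  sed -i 's/^[0-9]* //' dlm_controld.log.4.0.4
  diff -u dlm_controld.log.4.0.2 dlm_controld.log.4.0.4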

That resulted in the following diff:
--- dlm_controld.log.4.0.2 2016-01-22 15:37:42.860999831 +0000
+++ dlm_controld.log.4.0.4 2016-01-22 14:53:23.962999872 +0000
@@ -24,26 +24,11 @@
 clvmd wait for fencing
 fence wait 2 pid 11266 running
 clvmd wait for fencing
-fence result 2 pid 11266 result 0 exit status
-fence wait 2 pid 11266 result 0
-clvmd wait for fencing
-fence status 2 receive 0 from 1 walltime 1453473364 local 1001
-clvmd check_fencing 2 done start 618 fail 1000 fence 1001
-clvmd check_fencing done
-clvmd send_start 1:3 counts 2 1 0 1 1
-clvmd receive_start 1:3 len 76
-clvmd match_change 1:3 matches cg 3
-clvmd wait_messages cg 3 got all 1
-clvmd start_kernel cg 3 member_count 1
+shutdown
+cpg_leave dlm:controld ...
+clear_configfs_nodes rmdir "/sys/kernel/config/dlm/cluster/comms/1"
 dir_member 2
 dir_member 1
-set_members rmdir "/sys/kernel/config/dlm/cluster/spaces/clvmd/nodes/2"
-write "1" to "/sys/kernel/dlm/clvmd/control"
-clvmd prepare_plocks
-dlm:controld ring 1:412 2 memb 1 2
-fence work wait for cluster ringid
-dlm:ls:clvmd ring 1:412 2 memb 1 2
-fence work wait for cluster ringid
-cluster quorum 1 seq 412 nodes 2
-cluster node 2 added seq 412
-set_configfs_node 2 192.168.124.2 local 0
+clear_configfs_space_nodes rmdir "/sys/kernel/config/dlm/cluster/spaces/clvmd/nodes/2"
+clear_configfs_space_nodes rmdir "/sys/kernel/config/dlm/cluster/spaces/clvmd/nodes/1"
+clear_configfs_spaces rmdir "/sys/kernel/config/dlm/cluster/spaces/clvmd"

As the diff shows, 4.0.4 never processes the fence result: instead of
completing check_fencing and restarting the clvmd lockspace, dlm_controld
simply shuts itself down (cpg_leave plus configfs cleanup).

Both builds are against pacemaker 1.1.14 (I rebuilt 4.0.2 as well, to be
sure the bug is not in the stonith API headers).

The systems (2 nodes) run corosync 2.3.5 (libqb 0.17.2) on top of
CentOS 6.7, in virtual machines with fencing configured and working.

I hope this can be fixed easily; feel free to request any further
information (e.g. the original logs).


Best regards,
Vladislav


