[Pacemaker] Pacemaker fencing and DLM/cLVM
Daniel Dehennin
daniel.dehennin at baby-gnu.org
Tue Nov 25 10:45:18 UTC 2014
Daniel Dehennin <daniel.dehennin at baby-gnu.org> writes:
> I'm using Ubuntu 14.04:
>
> - corosync 2.3.3-1ubuntu1
> - pacemaker 1.1.10+git20130802-1ubuntu2.1
>
> I thought everything was integrated in such configuration.
Here is some more information:
- the pacemaker configuration
- the log of the DC nebula1 with marks for each step
- the log of the nebula2 with marks for each step
- the log of the nebula3 with marks for each step
- the output of “dlm_tool ls” and “dlm_tool status” before/during/after
nebula2 fencing
The steps are:
1. All nodes up, cluster down
2. Start corosync on all nodes
3. Start pacemaker on all nodes
4. Start resource ONE-Storage-Clone (dlm, cLVM, VG, GFS2)
5. Crash nebula2
6. Start corosync on nebula2 after reboot
7. Start pacemaker on nebula2 after reboot
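For clarity, the steps above correspond roughly to the following commands (Ubuntu 14.04 sysvinit service names; the crash method in step 5 is just an example, the exact mechanism I used may differ):

```shell
# Steps 2-3: bring the stack up (run on each of nebula1/2/3)
service corosync start
service pacemaker start

# Step 4: start the storage stack (dlm -> cLVM -> VG -> GFS2) from any node
crm resource start ONE-Storage-Clone

# Step 5: hard-crash nebula2 (example method: sysrq-triggered kernel panic)
echo c > /proc/sysrq-trigger

# Steps 6-7: after nebula2 reboots, rejoin it to the cluster (on nebula2)
service corosync start
service pacemaker start
```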
Does anyone understand why DLM did not automatically get the fencing ACK
from stonith?
And why does ONE-Storage-Clone fail to start on nebula2 after the fencing?
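For reference, the manual workaround visible in the 03-* logs was to acknowledge the fencing by hand with dlm_tool; the nodeid below is the one the logs report for nebula2, assuming the nodeid-to-host mapping is as I read it:

```shell
# Inspect the lockspaces: a fenced node shows up as "wait fencing" here
dlm_tool ls
dlm_tool status

# Manually acknowledge that the node has been fenced, unblocking the
# lockspaces (1084811079 = nebula2 in these logs)
dlm_tool fence_ack 1084811079
```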
Regards.
--
Daniel Dehennin
Récupérer ma clef GPG: gpg --recv-keys 0xCC1E9E5B7A6FE2DF
Fingerprint: 3E69 014E 5C23 50E8 9ED6 2AAD CC1E 9E5B 7A6F E2DF
-------------- next part --------------
Attachments (embedded, charset-unspecified text, scrubbed by the archiver):
- pcmk.conf: <https://lists.clusterlabs.org/pipermail/pacemaker/attachments/20141125/90ab8ec5/attachment-0002.conf>
- 01-dlm_tool-ls-working.log: <https://lists.clusterlabs.org/pipermail/pacemaker/attachments/20141125/90ab8ec5/attachment-0018.log>
- 01-dlm_tool-status-working.log: <https://lists.clusterlabs.org/pipermail/pacemaker/attachments/20141125/90ab8ec5/attachment-0019.log>
- 02-dlm_tool-ls-nebula2-fenced.log: <https://lists.clusterlabs.org/pipermail/pacemaker/attachments/20141125/90ab8ec5/attachment-0020.log>
- 02-dlm_tool-status-nebula2-fenced.log: <https://lists.clusterlabs.org/pipermail/pacemaker/attachments/20141125/90ab8ec5/attachment-0021.log>
- 03-dlm_tool-ls-manual-fence_ack-nebula2.log: <https://lists.clusterlabs.org/pipermail/pacemaker/attachments/20141125/90ab8ec5/attachment-0022.log>
- 03-dlm_tool-status-manual-fence_ack-nebula2.log: <https://lists.clusterlabs.org/pipermail/pacemaker/attachments/20141125/90ab8ec5/attachment-0023.log>
- 04-dlm_tool-ls-nebula2-restarted.log: <https://lists.clusterlabs.org/pipermail/pacemaker/attachments/20141125/90ab8ec5/attachment-0024.log>
- 04-dlm_tool-status-nebula2-restarted.log: <https://lists.clusterlabs.org/pipermail/pacemaker/attachments/20141125/90ab8ec5/attachment-0025.log>
-------------- next part --------------
nebula1 is DC
=============
Start corosync
--------------
Nov 25 10:48:43 nebula1 corosync[4872]: [MAIN ] Corosync Cluster Engine ('2.3.3'): started and ready to provide service.
Nov 25 10:48:43 nebula1 corosync[4872]: [MAIN ] Corosync built-in features: dbus testagents rdma watchdog augeas pie relro bindnow
Nov 25 10:48:43 nebula1 corosync[4873]: [TOTEM ] Initializing transport (UDP/IP Unicast).
Nov 25 10:48:43 nebula1 corosync[4873]: [TOTEM ] Initializing transmit/receive security (NSS) crypto: aes256 hash: sha256
Nov 25 10:48:43 nebula1 corosync[4873]: [TOTEM ] The network interface [192.168.231.70] is now up.
Nov 25 10:48:43 nebula1 corosync[4873]: [SERV ] Service engine loaded: corosync configuration map access [0]
Nov 25 10:48:43 nebula1 corosync[4873]: [QB ] server name: cmap
Nov 25 10:48:43 nebula1 corosync[4873]: [SERV ] Service engine loaded: corosync configuration service [1]
Nov 25 10:48:43 nebula1 corosync[4873]: [QB ] server name: cfg
Nov 25 10:48:43 nebula1 corosync[4873]: [SERV ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Nov 25 10:48:43 nebula1 corosync[4873]: [QB ] server name: cpg
Nov 25 10:48:43 nebula1 corosync[4873]: [SERV ] Service engine loaded: corosync profile loading service [4]
Nov 25 10:48:43 nebula1 corosync[4873]: [WD ] No Watchdog, try modprobe <a watchdog>
Nov 25 10:48:43 nebula1 corosync[4873]: [WD ] no resources configured.
Nov 25 10:48:43 nebula1 corosync[4873]: [SERV ] Service engine loaded: corosync watchdog service [7]
Nov 25 10:48:43 nebula1 corosync[4873]: [QUORUM] Using quorum provider corosync_votequorum
Nov 25 10:48:43 nebula1 corosync[4873]: [QUORUM] Waiting for all cluster members. Current votes: 1 expected_votes: 3
Nov 25 10:48:43 nebula1 corosync[4873]: [SERV ] Service engine loaded: corosync vote quorum service v1.0 [5]
Nov 25 10:48:43 nebula1 corosync[4873]: [QB ] server name: votequorum
Nov 25 10:48:43 nebula1 corosync[4873]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Nov 25 10:48:43 nebula1 corosync[4873]: [QB ] server name: quorum
Nov 25 10:48:43 nebula1 corosync[4873]: [TOTEM ] adding new UDPU member {192.168.231.70}
Nov 25 10:48:43 nebula1 corosync[4873]: [TOTEM ] adding new UDPU member {192.168.231.71}
Nov 25 10:48:43 nebula1 corosync[4873]: [TOTEM ] adding new UDPU member {192.168.231.72}
Nov 25 10:48:43 nebula1 corosync[4873]: [TOTEM ] adding new UDPU member {192.168.231.110}
Nov 25 10:48:43 nebula1 corosync[4873]: [TOTEM ] adding new UDPU member {192.168.231.111}
Nov 25 10:48:43 nebula1 corosync[4873]: [TOTEM ] A new membership (192.168.231.70:81380) was formed. Members joined: 1084811078
Nov 25 10:48:43 nebula1 corosync[4873]: [QUORUM] Waiting for all cluster members. Current votes: 1 expected_votes: 3
Nov 25 10:48:43 nebula1 corosync[4873]: message repeated 2 times: [ [QUORUM] Waiting for all cluster members. Current votes: 1 expected_votes: 3]
Nov 25 10:48:43 nebula1 corosync[4873]: [QUORUM] Members[1]: 1084811078
Nov 25 10:48:43 nebula1 corosync[4873]: [MAIN ] Completed service synchronization, ready to provide service.
Nov 25 10:48:43 nebula1 corosync[4873]: [TOTEM ] A new membership (192.168.231.70:81388) was formed. Members joined: 1084811079
Nov 25 10:48:43 nebula1 corosync[4873]: [QUORUM] Waiting for all cluster members. Current votes: 1 expected_votes: 3
Nov 25 10:48:43 nebula1 corosync[4873]: [QUORUM] Waiting for all cluster members. Current votes: 2 expected_votes: 3
Nov 25 10:48:43 nebula1 corosync[4873]: [QUORUM] Waiting for all cluster members. Current votes: 2 expected_votes: 3
Nov 25 10:48:43 nebula1 corosync[4873]: [QUORUM] Members[2]: 1084811078 1084811079
Nov 25 10:48:43 nebula1 corosync[4873]: [MAIN ] Completed service synchronization, ready to provide service.
Nov 25 10:48:43 nebula1 corosync[4873]: [TOTEM ] A new membership (192.168.231.70:81392) was formed. Members joined: 1084811080
Nov 25 10:48:43 nebula1 corosync[4873]: [QUORUM] This node is within the primary component and will provide service.
Nov 25 10:48:43 nebula1 corosync[4873]: [QUORUM] Members[3]: 1084811078 1084811079 1084811080
Nov 25 10:48:43 nebula1 corosync[4873]: [MAIN ] Completed service synchronization, ready to provide service.
Start pacemaker
---------------
Nov 25 10:49:21 nebula1 pacemakerd[5031]: notice: mcp_read_config: Configured corosync to accept connections from group 114: OK (1)
Nov 25 10:49:21 nebula1 pacemakerd[5031]: notice: main: Starting Pacemaker 1.1.10 (Build: 42f2063): generated-manpages agent-manpages ncurses libqb-logging libqb-ipc lha-fencing upstart nagios heartbeat corosync-native snmp libesmtp
Nov 25 10:49:21 nebula1 pacemakerd[5031]: notice: cluster_connect_quorum: Quorum acquired
Nov 25 10:49:21 nebula1 pacemakerd[5031]: notice: corosync_node_name: Unable to get node name for nodeid 1084811078
Nov 25 10:49:21 nebula1 pacemakerd[5031]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:21 nebula1 pacemakerd[5031]: notice: crm_update_peer_state: pcmk_quorum_notification: Node nebula1[1084811078] - state is now member (was (null))
Nov 25 10:49:21 nebula1 pacemakerd[5031]: notice: corosync_node_name: Unable to get node name for nodeid 1084811079
Nov 25 10:49:21 nebula1 pacemakerd[5031]: notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[1084811079] - state is now member (was (null))
Nov 25 10:49:21 nebula1 pacemakerd[5031]: notice: corosync_node_name: Unable to get node name for nodeid 1084811080
Nov 25 10:49:21 nebula1 pacemakerd[5031]: notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[1084811080] - state is now member (was (null))
Nov 25 10:49:21 nebula1 attrd[5036]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Nov 25 10:49:21 nebula1 stonith-ng[5034]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Nov 25 10:49:21 nebula1 attrd[5036]: notice: corosync_node_name: Unable to get node name for nodeid 1084811078
Nov 25 10:49:21 nebula1 attrd[5036]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:21 nebula1 attrd[5036]: notice: main: Starting mainloop...
Nov 25 10:49:21 nebula1 stonith-ng[5034]: notice: corosync_node_name: Unable to get node name for nodeid 1084811078
Nov 25 10:49:21 nebula1 stonith-ng[5034]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:21 nebula1 crmd[5038]: notice: main: CRM Git Version: 42f2063
Nov 25 10:49:21 nebula1 cib[5033]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Nov 25 10:49:21 nebula1 cib[5033]: notice: corosync_node_name: Unable to get node name for nodeid 1084811078
Nov 25 10:49:21 nebula1 cib[5033]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:22 nebula1 stonith-ng[5034]: notice: setup_cib: Watching for stonith topology changes
Nov 25 10:49:22 nebula1 crmd[5038]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Nov 25 10:49:22 nebula1 crmd[5038]: notice: corosync_node_name: Unable to get node name for nodeid 1084811078
Nov 25 10:49:22 nebula1 crmd[5038]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:22 nebula1 crmd[5038]: notice: cluster_connect_quorum: Quorum acquired
Nov 25 10:49:22 nebula1 crmd[5038]: notice: crm_update_peer_state: pcmk_quorum_notification: Node nebula1[1084811078] - state is now member (was (null))
Nov 25 10:49:22 nebula1 crmd[5038]: notice: corosync_node_name: Unable to get node name for nodeid 1084811079
Nov 25 10:49:22 nebula1 crmd[5038]: notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[1084811079] - state is now member (was (null))
Nov 25 10:49:22 nebula1 crmd[5038]: notice: corosync_node_name: Unable to get node name for nodeid 1084811080
Nov 25 10:49:22 nebula1 crmd[5038]: notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[1084811080] - state is now member (was (null))
Nov 25 10:49:22 nebula1 crmd[5038]: notice: corosync_node_name: Unable to get node name for nodeid 1084811078
Nov 25 10:49:22 nebula1 crmd[5038]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:22 nebula1 crmd[5038]: notice: do_started: The local CRM is operational
Nov 25 10:49:22 nebula1 crmd[5038]: notice: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Nov 25 10:49:23 nebula1 stonith-ng[5034]: notice: stonith_device_register: Added 'Stonith-nebula2-IPMILAN' to the device list (1 active devices)
Nov 25 10:49:24 nebula1 stonith-ng[5034]: notice: stonith_device_register: Added 'Stonith-nebula3-IPMILAN' to the device list (2 active devices)
Nov 25 10:49:25 nebula1 stonith-ng[5034]: notice: stonith_device_register: Added 'Stonith-ONE-Frontend' to the device list (3 active devices)
Nov 25 10:49:26 nebula1 stonith-ng[5034]: notice: corosync_node_name: Unable to get node name for nodeid 1084811078
Nov 25 10:49:26 nebula1 stonith-ng[5034]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:43 nebula1 crmd[5038]: warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Nov 25 10:49:43 nebula1 crmd[5038]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Nov 25 10:49:43 nebula1 cib[5033]: notice: corosync_node_name: Unable to get node name for nodeid 1084811078
Nov 25 10:49:43 nebula1 cib[5033]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:43 nebula1 attrd[5036]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Nov 25 10:49:43 nebula1 pengine[5037]: warning: stage6: Scheduling Node one-frontend for STONITH
Nov 25 10:49:43 nebula1 pengine[5037]: warning: stage6: Scheduling Node quorum for STONITH
Nov 25 10:49:43 nebula1 pengine[5037]: notice: LogActions: Start Stonith-nebula1-IPMILAN#011(nebula2)
Nov 25 10:49:43 nebula1 pengine[5037]: notice: LogActions: Start Stonith-nebula2-IPMILAN#011(nebula3)
Nov 25 10:49:43 nebula1 pengine[5037]: notice: LogActions: Start Stonith-nebula3-IPMILAN#011(nebula1)
Nov 25 10:49:43 nebula1 pengine[5037]: warning: process_pe_message: Calculated Transition 0: /var/lib/pacemaker/pengine/pe-warn-0.bz2
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 28: monitor Stonith-nebula1-IPMILAN_monitor_0 on nebula3
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 16: monitor Stonith-nebula1-IPMILAN_monitor_0 on nebula2
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 4: monitor Stonith-nebula1-IPMILAN_monitor_0 on nebula1 (local)
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 29: monitor Stonith-nebula2-IPMILAN_monitor_0 on nebula3
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 17: monitor Stonith-nebula2-IPMILAN_monitor_0 on nebula2
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 5: monitor Stonith-nebula2-IPMILAN_monitor_0 on nebula1 (local)
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 30: monitor Stonith-nebula3-IPMILAN_monitor_0 on nebula3
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 18: monitor Stonith-nebula3-IPMILAN_monitor_0 on nebula2
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 6: monitor Stonith-nebula3-IPMILAN_monitor_0 on nebula1 (local)
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 31: monitor Stonith-ONE-Frontend_monitor_0 on nebula3
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 19: monitor Stonith-ONE-Frontend_monitor_0 on nebula2
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 7: monitor Stonith-ONE-Frontend_monitor_0 on nebula1 (local)
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 32: monitor ONE-Frontend-VM_monitor_0 on nebula3
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 20: monitor ONE-Frontend-VM_monitor_0 on nebula2
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 8: monitor ONE-Frontend-VM_monitor_0 on nebula1 (local)
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 33: monitor Stonith-Quorum-Node_monitor_0 on nebula3
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 21: monitor Stonith-Quorum-Node_monitor_0 on nebula2
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 9: monitor Stonith-Quorum-Node_monitor_0 on nebula1 (local)
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 34: monitor Quorum-Node-VM_monitor_0 on nebula3
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 22: monitor Quorum-Node-VM_monitor_0 on nebula2
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 10: monitor Quorum-Node-VM_monitor_0 on nebula1 (local)
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 35: monitor dlm:0_monitor_0 on nebula3
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 23: monitor dlm:0_monitor_0 on nebula2
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 11: monitor dlm:0_monitor_0 on nebula1 (local)
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 36: monitor clvm:0_monitor_0 on nebula3
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 24: monitor clvm:0_monitor_0 on nebula2
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 12: monitor clvm:0_monitor_0 on nebula1 (local)
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 37: monitor ONE-vg:0_monitor_0 on nebula3
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 25: monitor ONE-vg:0_monitor_0 on nebula2
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 13: monitor ONE-vg:0_monitor_0 on nebula1 (local)
Nov 25 10:49:43 nebula1 crmd[5038]: notice: process_lrm_event: LRM operation clvm_monitor_0 (call=39, rc=7, cib-update=34, confirmed=true) not running
Nov 25 10:49:43 nebula1 crmd[5038]: notice: process_lrm_event: LRM operation dlm_monitor_0 (call=34, rc=7, cib-update=35, confirmed=true) not running
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 38: monitor ONE-Datastores:0_monitor_0 on nebula3
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 26: monitor ONE-Datastores:0_monitor_0 on nebula2
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 14: monitor ONE-Datastores:0_monitor_0 on nebula1 (local)
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_fence_node: Executing reboot fencing operation (78) on one-frontend (timeout=30000)
Nov 25 10:49:43 nebula1 stonith-ng[5034]: notice: handle_request: Client crmd.5038.c82449da wants to fence (reboot) 'one-frontend' with device '(any)'
Nov 25 10:49:43 nebula1 stonith-ng[5034]: notice: initiate_remote_stonith_op: Initiating remote operation reboot for one-frontend: 98290389-2232-4ba1-b4dd-ed4a4d7f46a1 (0)
Nov 25 10:49:43 nebula1 stonith-ng[5034]: notice: can_fence_host_with_device: Stonith-ONE-Frontend can fence one-frontend: static-list
Nov 25 10:49:43 nebula1 stonith-ng[5034]: notice: can_fence_host_with_device: Stonith-nebula2-IPMILAN can not fence one-frontend: static-list
Nov 25 10:49:43 nebula1 stonith-ng[5034]: notice: can_fence_host_with_device: Stonith-nebula3-IPMILAN can not fence one-frontend: static-list
Nov 25 10:49:43 nebula1 LVM(ONE-vg)[5052]: WARNING: LVM Volume one-fs is not available (stopped)
Nov 25 10:49:43 nebula1 stonith-ng[5034]: notice: can_fence_host_with_device: Stonith-ONE-Frontend can fence one-frontend: static-list
Nov 25 10:49:43 nebula1 stonith-ng[5034]: notice: can_fence_host_with_device: Stonith-nebula2-IPMILAN can not fence one-frontend: static-list
Nov 25 10:49:43 nebula1 stonith-ng[5034]: notice: can_fence_host_with_device: Stonith-nebula3-IPMILAN can not fence one-frontend: static-list
Nov 25 10:49:43 nebula1 LVM(ONE-vg)[5052]: INFO: LVM Volume one-fs is offline
Nov 25 10:49:43 nebula1 Filesystem(ONE-Datastores)[5106]: WARNING: Couldn't find device [/dev/one-fs/datastores]. Expected /dev/??? to exist
Nov 25 10:49:43 nebula1 crmd[5038]: notice: process_lrm_event: LRM operation ONE-vg_monitor_0 (call=44, rc=7, cib-update=36, confirmed=true) not running
Nov 25 10:49:43 nebula1 VirtualDomain(Quorum-Node-VM)[5049]: INFO: Configuration file /var/lib/libvirt/qemu/pcmk/quorum.xml not readable during probe.
Nov 25 10:49:43 nebula1 VirtualDomain(ONE-Frontend-VM)[5048]: INFO: Configuration file /var/lib/one/datastores/one/one.xml not readable during probe.
Nov 25 10:49:43 nebula1 crmd[5038]: notice: process_lrm_event: LRM operation ONE-Datastores_monitor_0 (call=56, rc=7, cib-update=37, confirmed=true) not running
Nov 25 10:49:43 nebula1 crmd[5038]: notice: process_lrm_event: LRM operation Quorum-Node-VM_monitor_0 (call=29, rc=7, cib-update=38, confirmed=true) not running
Nov 25 10:49:43 nebula1 crmd[5038]: notice: process_lrm_event: LRM operation ONE-Frontend-VM_monitor_0 (call=21, rc=7, cib-update=39, confirmed=true) not running
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 3: probe_complete probe_complete on nebula1 (local) - no waiting
Nov 25 10:49:43 nebula1 attrd[5036]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Nov 25 10:49:43 nebula1 attrd[5036]: notice: attrd_perform_update: Sent update 4: probe_complete=true
Nov 25 10:49:43 nebula1 attrd[5036]: notice: corosync_node_name: Unable to get node name for nodeid 1084811078
Nov 25 10:49:43 nebula1 attrd[5036]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:43 nebula1 external/libvirt[5189]: notice: Domain one-frontend is already stopped
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 27: probe_complete probe_complete on nebula3 - no waiting
Nov 25 10:49:43 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 15: probe_complete probe_complete on nebula2 - no waiting
Nov 25 10:49:45 nebula1 kernel: [ 452.133936] type=1400 audit(1416908985.815:40): apparmor="STATUS" operation="profile_load" profile="unconfined" name="libvirt-a8069a7b-97fe-4122-85a3-0abbc011f540" pid=5222 comm="apparmor_parser"
Nov 25 10:49:45 nebula1 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 -- --if-exists del-port one-dmz-pub
Nov 25 10:49:45 nebula1 ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 -- --if-exists del-port one-admin
Nov 25 10:49:45 nebula1 external/libvirt[5189]: ERROR: Failed to start domain one-frontend
Nov 25 10:49:45 nebula1 external/libvirt[5189]: ERROR: error: Failed to start domain one-frontend#012error: Failed to open file '/var/lib/one/datastores/one/one.img': No such file or directory
Nov 25 10:49:46 nebula1 stonith-ng[5034]: notice: log_operation: Operation 'reboot' [5110] (call 2 from crmd.5038) for host 'one-frontend' with device 'Stonith-ONE-Frontend' returned: 0 (OK)
Nov 25 10:49:46 nebula1 stonith-ng[5034]: error: crm_abort: crm_glib_handler: Forked child 5238 to record non-fatal assert at logging.c:63 : Source ID 17 was not found when attempting to remove it
Nov 25 10:49:46 nebula1 stonith-ng[5034]: error: crm_abort: crm_glib_handler: Forked child 5239 to record non-fatal assert at logging.c:63 : Source ID 18 was not found when attempting to remove it
Nov 25 10:49:46 nebula1 stonith-ng[5034]: notice: remote_op_done: Operation reboot of one-frontend by nebula1 for crmd.5038 at nebula1.98290389: OK
Nov 25 10:49:46 nebula1 crmd[5038]: notice: tengine_stonith_callback: Stonith operation 2/78:0:0:d458be36-df66-4aff-b201-af76ccbec3aa: OK (0)
Nov 25 10:49:46 nebula1 crmd[5038]: notice: crm_update_peer_state: send_stonith_update: Node one-frontend[0] - state is now lost (was (null))
Nov 25 10:49:46 nebula1 crmd[5038]: notice: te_fence_node: Executing reboot fencing operation (79) on quorum (timeout=30000)
Nov 25 10:49:46 nebula1 crmd[5038]: notice: tengine_stonith_notify: Peer one-frontend was terminated (reboot) by nebula1 for nebula1: OK (ref=98290389-2232-4ba1-b4dd-ed4a4d7f46a1) by client crmd.5038
Nov 25 10:49:46 nebula1 stonith-ng[5034]: notice: handle_request: Client crmd.5038.c82449da wants to fence (reboot) 'quorum' with device '(any)'
Nov 25 10:49:46 nebula1 stonith-ng[5034]: notice: initiate_remote_stonith_op: Initiating remote operation reboot for quorum: 281581d8-fecf-4588-808f-1ead04b7cdd6 (0)
Nov 25 10:49:46 nebula1 stonith-ng[5034]: notice: can_fence_host_with_device: Stonith-ONE-Frontend can not fence quorum: static-list
Nov 25 10:49:46 nebula1 stonith-ng[5034]: notice: can_fence_host_with_device: Stonith-nebula2-IPMILAN can not fence quorum: static-list
Nov 25 10:49:46 nebula1 stonith-ng[5034]: notice: can_fence_host_with_device: Stonith-nebula3-IPMILAN can not fence quorum: static-list
Nov 25 10:49:50 nebula1 stonith-ng[5034]: notice: remote_op_done: Operation reboot of quorum by nebula2 for crmd.5038 at nebula1.281581d8: OK
Nov 25 10:49:50 nebula1 crmd[5038]: notice: tengine_stonith_callback: Stonith operation 3/79:0:0:d458be36-df66-4aff-b201-af76ccbec3aa: OK (0)
Nov 25 10:49:50 nebula1 crmd[5038]: notice: crm_update_peer_state: send_stonith_update: Node quorum[0] - state is now lost (was (null))
Nov 25 10:49:50 nebula1 crmd[5038]: notice: tengine_stonith_notify: Peer quorum was terminated (reboot) by nebula2 for nebula1: OK (ref=281581d8-fecf-4588-808f-1ead04b7cdd6) by client crmd.5038
Nov 25 10:49:50 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 39: start Stonith-nebula1-IPMILAN_start_0 on nebula2
Nov 25 10:49:50 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 41: start Stonith-nebula2-IPMILAN_start_0 on nebula3
Nov 25 10:49:50 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 43: start Stonith-nebula3-IPMILAN_start_0 on nebula1 (local)
Nov 25 10:49:50 nebula1 stonith-ng[5034]: notice: stonith_device_register: Device 'Stonith-nebula3-IPMILAN' already existed in device list (3 active devices)
Nov 25 10:49:51 nebula1 stonith-ng[5034]: error: crm_abort: crm_glib_handler: Forked child 5255 to record non-fatal assert at logging.c:63 : Source ID 23 was not found when attempting to remove it
Nov 25 10:49:51 nebula1 crmd[5038]: notice: process_lrm_event: LRM operation Stonith-nebula3-IPMILAN_start_0 (call=62, rc=0, cib-update=54, confirmed=true) ok
Nov 25 10:49:51 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 44: monitor Stonith-nebula3-IPMILAN_monitor_1800000 on nebula1 (local)
Nov 25 10:49:51 nebula1 stonith-ng[5034]: error: crm_abort: crm_glib_handler: Forked child 5256 to record non-fatal assert at logging.c:63 : Source ID 24 was not found when attempting to remove it
Nov 25 10:49:52 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 40: monitor Stonith-nebula1-IPMILAN_monitor_1800000 on nebula2
Nov 25 10:49:52 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 42: monitor Stonith-nebula2-IPMILAN_monitor_1800000 on nebula3
Nov 25 10:49:52 nebula1 stonith-ng[5034]: error: crm_abort: crm_glib_handler: Forked child 5271 to record non-fatal assert at logging.c:63 : Source ID 25 was not found when attempting to remove it
Nov 25 10:49:52 nebula1 crmd[5038]: notice: process_lrm_event: LRM operation Stonith-nebula3-IPMILAN_monitor_1800000 (call=65, rc=0, cib-update=55, confirmed=false) ok
Nov 25 10:49:52 nebula1 stonith-ng[5034]: error: crm_abort: crm_glib_handler: Forked child 5272 to record non-fatal assert at logging.c:63 : Source ID 26 was not found when attempting to remove it
Nov 25 10:49:54 nebula1 crmd[5038]: notice: run_graph: Transition 0 (Complete=47, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): Complete
Nov 25 10:49:54 nebula1 crmd[5038]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Start resource ONE-Storage-Clone
--------------------------------
Nov 25 10:50:47 nebula1 crmd[5038]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Nov 25 10:50:47 nebula1 cib[5033]: notice: cib:diff: Diff: --- 0.5.50
Nov 25 10:50:47 nebula1 cib[5033]: notice: cib:diff: Diff: +++ 0.6.1 22611fd8cdf48139fb2d4a9b27083364
Nov 25 10:50:47 nebula1 pengine[5037]: notice: LogActions: Start dlm:0#011(nebula1)
Nov 25 10:50:47 nebula1 pengine[5037]: notice: LogActions: Start clvm:0#011(nebula1)
Nov 25 10:50:47 nebula1 pengine[5037]: notice: LogActions: Start ONE-vg:0#011(nebula1)
Nov 25 10:50:47 nebula1 pengine[5037]: notice: LogActions: Start ONE-Datastores:0#011(nebula1)
Nov 25 10:50:47 nebula1 pengine[5037]: notice: LogActions: Start dlm:1#011(nebula2)
Nov 25 10:50:47 nebula1 pengine[5037]: notice: LogActions: Start clvm:1#011(nebula2)
Nov 25 10:50:47 nebula1 pengine[5037]: notice: LogActions: Start ONE-vg:1#011(nebula2)
Nov 25 10:50:47 nebula1 pengine[5037]: notice: LogActions: Start ONE-Datastores:1#011(nebula2)
Nov 25 10:50:47 nebula1 pengine[5037]: notice: LogActions: Start dlm:2#011(nebula3)
Nov 25 10:50:47 nebula1 pengine[5037]: notice: LogActions: Start clvm:2#011(nebula3)
Nov 25 10:50:47 nebula1 pengine[5037]: notice: LogActions: Start ONE-vg:2#011(nebula3)
Nov 25 10:50:47 nebula1 pengine[5037]: notice: LogActions: Start ONE-Datastores:2#011(nebula3)
Nov 25 10:50:47 nebula1 pengine[5037]: notice: process_pe_message: Calculated Transition 1: /var/lib/pacemaker/pengine/pe-input-0.bz2
Nov 25 10:50:47 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 23: start dlm_start_0 on nebula1 (local)
Nov 25 10:50:47 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 35: start dlm:1_start_0 on nebula2
Nov 25 10:50:47 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 47: start dlm:2_start_0 on nebula3
Nov 25 10:50:47 nebula1 kernel: [ 514.041302] sctp: Hash tables configured (established 65536 bind 65536)
Nov 25 10:50:47 nebula1 kernel: [ 514.052482] DLM installed
Nov 25 10:50:47 nebula1 dlm_controld[5303]: 514 dlm_controld 4.0.1 started
Nov 25 10:50:48 nebula1 crmd[5038]: notice: process_lrm_event: LRM operation dlm_start_0 (call=68, rc=0, cib-update=57, confirmed=true) ok
Nov 25 10:50:48 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 24: monitor dlm_monitor_60000 on nebula1 (local)
Nov 25 10:50:48 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 25: start clvm_start_0 on nebula1 (local)
Nov 25 10:50:48 nebula1 crmd[5038]: notice: process_lrm_event: LRM operation dlm_monitor_60000 (call=71, rc=0, cib-update=58, confirmed=false) ok
Nov 25 10:50:48 nebula1 clvmd(clvm)[5312]: INFO: Starting clvm
Nov 25 10:50:48 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 48: monitor dlm:2_monitor_60000 on nebula3
Nov 25 10:50:48 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 49: start clvm:2_start_0 on nebula3
Nov 25 10:50:48 nebula1 clvmd[5328]: CLVMD started
Nov 25 10:50:48 nebula1 kernel: [ 515.161007] dlm: Using TCP for communications
Nov 25 10:50:48 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 36: monitor dlm:1_monitor_60000 on nebula2
Nov 25 10:50:48 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 37: start clvm:1_start_0 on nebula2
Nov 25 10:50:48 nebula1 kernel: [ 515.218143] dlm: connecting to 1084811080
Nov 25 10:50:48 nebula1 kernel: [ 515.219781] dlm: got connection from 1084811080
Nov 25 10:50:48 nebula1 kernel: [ 515.268755] dlm: connecting to 1084811079
Nov 25 10:50:48 nebula1 kernel: [ 515.269417] dlm: got connection from 1084811079
Nov 25 10:50:49 nebula1 clvmd[5328]: Created DLM lockspace for CLVMD.
Nov 25 10:50:49 nebula1 clvmd[5328]: DLM initialisation complete
Nov 25 10:50:49 nebula1 clvmd[5328]: Our local node id is 1084811078
Nov 25 10:50:49 nebula1 clvmd[5328]: Connected to Corosync
Nov 25 10:50:49 nebula1 clvmd[5328]: Cluster LVM daemon started - connected to Corosync
Nov 25 10:50:49 nebula1 clvmd[5328]: Cluster ready, doing some more initialisation
Nov 25 10:50:49 nebula1 clvmd[5328]: starting LVM thread
Nov 25 10:50:49 nebula1 clvmd[5328]: LVM thread function started
Nov 25 10:50:49 nebula1 lvm[5328]: clvmd ready for work
Nov 25 10:50:49 nebula1 lvm[5328]: Sub thread ready for work.
Nov 25 10:50:49 nebula1 lvm[5328]: Using timeout of 60 seconds
Nov 25 10:50:49 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 10:50:49 nebula1 lvm[5328]: confchg callback. 1 joined, 0 left, 1 members
Nov 25 10:50:49 nebula1 lvm[5328]: confchg callback. 1 joined, 0 left, 2 members
Nov 25 10:50:49 nebula1 lvm[5328]: confchg callback. 1 joined, 0 left, 3 members
Nov 25 10:50:51 nebula1 lrmd[5035]: notice: operation_finished: clvm_start_0:5312:stderr [ local socket: connect failed: No such file or directory ]
Nov 25 10:50:51 nebula1 crmd[5038]: notice: process_lrm_event: LRM operation clvm_start_0 (call=73, rc=0, cib-update=59, confirmed=true) ok
Nov 25 10:50:51 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 26: monitor clvm_monitor_60000 on nebula1 (local)
Nov 25 10:50:51 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 27: start ONE-vg_start_0 on nebula1 (local)
Nov 25 10:50:51 nebula1 LVM(ONE-vg)[5349]: INFO: Activating volume group one-fs
Nov 25 10:50:51 nebula1 lvm[5328]: Got new connection on fd 5
Nov 25 10:50:51 nebula1 lvm[5328]: Read on local socket 5, len = 29
Nov 25 10:50:51 nebula1 lvm[5328]: check_all_clvmds_running
Nov 25 10:50:51 nebula1 lvm[5328]: down_callback. node 1084811079, state = 3
Nov 25 10:50:51 nebula1 lvm[5328]: down_callback. node 1084811078, state = 3
Nov 25 10:50:51 nebula1 lvm[5328]: down_callback. node 1084811080, state = 3
Nov 25 10:50:51 nebula1 lvm[5328]: creating pipe, [13, 14]
Nov 25 10:50:51 nebula1 lvm[5328]: Creating pre&post thread
Nov 25 10:50:51 nebula1 lvm[5328]: Created pre&post thread, state = 0
Nov 25 10:50:51 nebula1 lvm[5328]: in sub thread: client = 0x164fa60
Nov 25 10:50:51 nebula1 lvm[5328]: doing PRE command LOCK_VG 'P_#global' at 4 (client=0x164fa60)
Nov 25 10:50:51 nebula1 lvm[5328]: lock_resource 'P_#global', flags=0, mode=4
Nov 25 10:50:51 nebula1 lvm[5328]: lock_resource returning 0, lock_id=1
Nov 25 10:50:51 nebula1 lvm[5328]: Writing status 0 down pipe 14
Nov 25 10:50:51 nebula1 lvm[5328]: Waiting to do post command - state = 0
Nov 25 10:50:51 nebula1 lvm[5328]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:51 nebula1 lvm[5328]: background routine status was 0, sock_client=0x164fa60
Nov 25 10:50:51 nebula1 lvm[5328]: distribute command: XID = 0, flags=0x0 ()
Nov 25 10:50:51 nebula1 lvm[5328]: num_nodes = 3
Nov 25 10:50:51 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x1650040. client=0x164fa60, msg=0x164fb70, len=29, csid=(nil), xid=0
Nov 25 10:50:51 nebula1 lvm[5328]: Sending message to all cluster nodes
Nov 25 10:50:51 nebula1 lvm[5328]: process_work_item: local
Nov 25 10:50:51 nebula1 lvm[5328]: process_local_command: LOCK_VG (0x33) msg=0x164fde0, msglen =29, client=0x164fa60
Nov 25 10:50:51 nebula1 lvm[5328]: do_lock_vg: resource 'P_#global', cmd = 0x4 LCK_VG (WRITE|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:51 nebula1 lvm[5328]: Refreshing context
Nov 25 10:50:51 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811078 for 0. len 29
Nov 25 10:50:51 nebula1 crmd[5038]: notice: process_lrm_event: LRM operation clvm_monitor_60000 (call=77, rc=0, cib-update=60, confirmed=false) ok
Nov 25 10:50:51 nebula1 lvm[5328]: Reply from node 40a8e746: 0 bytes
Nov 25 10:50:51 nebula1 lvm[5328]: Got 1 replies, expecting: 3
Nov 25 10:50:51 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 10:50:51 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811079 for 1084811078. len 18
Nov 25 10:50:51 nebula1 lvm[5328]: Reply from node 40a8e747: 0 bytes
Nov 25 10:50:51 nebula1 lvm[5328]: Got 2 replies, expecting: 3
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811080 for 1084811078. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: Reply from node 40a8e748: 0 bytes
Nov 25 10:50:52 nebula1 lvm[5328]: Got 3 replies, expecting: 3
Nov 25 10:50:52 nebula1 lvm[5328]: Got post command condition...
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting for next pre command
Nov 25 10:50:52 nebula1 lvm[5328]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula1 lvm[5328]: background routine status was 0, sock_client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: Send local reply
Nov 25 10:50:52 nebula1 lvm[5328]: Read on local socket 5, len = 32
Nov 25 10:50:52 nebula1 lvm[5328]: Got pre command condition...
Nov 25 10:50:52 nebula1 lvm[5328]: doing PRE command LOCK_VG 'V_nebula1-vg' at 1 (client=0x164fa60)
Nov 25 10:50:52 nebula1 lvm[5328]: lock_resource 'V_nebula1-vg', flags=0, mode=3
Nov 25 10:50:52 nebula1 lvm[5328]: lock_resource returning 0, lock_id=2
Nov 25 10:50:52 nebula1 lvm[5328]: Writing status 0 down pipe 14
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula1 lvm[5328]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula1 lvm[5328]: background routine status was 0, sock_client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: distribute command: XID = 1, flags=0x1 (LOCAL)
Nov 25 10:50:52 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x164fde0. client=0x164fa60, msg=0x164fb70, len=32, csid=(nil), xid=1
Nov 25 10:50:52 nebula1 lvm[5328]: process_work_item: local
Nov 25 10:50:52 nebula1 lvm[5328]: process_local_command: LOCK_VG (0x33) msg=0x164fe20, msglen =32, client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: do_lock_vg: resource 'V_nebula1-vg', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula1 lvm[5328]: Invalidating cached metadata for VG nebula1-vg
Nov 25 10:50:52 nebula1 lvm[5328]: Reply from node 40a8e746: 0 bytes
Nov 25 10:50:52 nebula1 lvm[5328]: Got 1 replies, expecting: 1
Nov 25 10:50:52 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 10:50:52 nebula1 lvm[5328]: Got post command condition...
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting for next pre command
Nov 25 10:50:52 nebula1 lvm[5328]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula1 lvm[5328]: background routine status was 0, sock_client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: Send local reply
Nov 25 10:50:52 nebula1 lvm[5328]: Read on local socket 5, len = 31
Nov 25 10:50:52 nebula1 lvm[5328]: check_all_clvmds_running
Nov 25 10:50:52 nebula1 lvm[5328]: down_callback. node 1084811079, state = 3
Nov 25 10:50:52 nebula1 lvm[5328]: down_callback. node 1084811078, state = 3
Nov 25 10:50:52 nebula1 lvm[5328]: down_callback. node 1084811080, state = 3
Nov 25 10:50:52 nebula1 lvm[5328]: Got pre command condition...
Nov 25 10:50:52 nebula1 lvm[5328]: Writing status 0 down pipe 14
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula1 lvm[5328]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula1 lvm[5328]: background routine status was 0, sock_client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: distribute command: XID = 2, flags=0x0 ()
Nov 25 10:50:52 nebula1 lvm[5328]: num_nodes = 3
Nov 25 10:50:52 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x1650040. client=0x164fa60, msg=0x164fb70, len=31, csid=(nil), xid=2
Nov 25 10:50:52 nebula1 lvm[5328]: Sending message to all cluster nodes
Nov 25 10:50:52 nebula1 lvm[5328]: process_work_item: local
Nov 25 10:50:52 nebula1 lvm[5328]: process_local_command: SYNC_NAMES (0x2d) msg=0x164fde0, msglen =31, client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: Syncing device names
Nov 25 10:50:52 nebula1 lvm[5328]: Reply from node 40a8e746: 0 bytes
Nov 25 10:50:52 nebula1 lvm[5328]: Got 1 replies, expecting: 3
Nov 25 10:50:52 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811078 for 0. len 31
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811079 for 1084811078. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: Reply from node 40a8e747: 0 bytes
Nov 25 10:50:52 nebula1 lvm[5328]: Got 2 replies, expecting: 3
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811080 for 1084811078. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: Reply from node 40a8e748: 0 bytes
Nov 25 10:50:52 nebula1 lvm[5328]: Got 3 replies, expecting: 3
Nov 25 10:50:52 nebula1 lvm[5328]: Got post command condition...
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting for next pre command
Nov 25 10:50:52 nebula1 lvm[5328]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula1 lvm[5328]: background routine status was 0, sock_client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: Send local reply
Nov 25 10:50:52 nebula1 lvm[5328]: Read on local socket 5, len = 32
Nov 25 10:50:52 nebula1 lvm[5328]: Got pre command condition...
Nov 25 10:50:52 nebula1 lvm[5328]: doing PRE command LOCK_VG 'V_nebula1-vg' at 6 (client=0x164fa60)
Nov 25 10:50:52 nebula1 lvm[5328]: unlock_resource: V_nebula1-vg lockid: 2
Nov 25 10:50:52 nebula1 lvm[5328]: Writing status 0 down pipe 14
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula1 lvm[5328]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula1 lvm[5328]: background routine status was 0, sock_client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: distribute command: XID = 3, flags=0x1 (LOCAL)
Nov 25 10:50:52 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x164fde0. client=0x164fa60, msg=0x164fb70, len=32, csid=(nil), xid=3
Nov 25 10:50:52 nebula1 lvm[5328]: process_work_item: local
Nov 25 10:50:52 nebula1 lvm[5328]: process_local_command: LOCK_VG (0x33) msg=0x164fe20, msglen =32, client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: do_lock_vg: resource 'V_nebula1-vg', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula1 lvm[5328]: Invalidating cached metadata for VG nebula1-vg
Nov 25 10:50:52 nebula1 lvm[5328]: Reply from node 40a8e746: 0 bytes
Nov 25 10:50:52 nebula1 lvm[5328]: Got 1 replies, expecting: 1
Nov 25 10:50:52 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 10:50:52 nebula1 lvm[5328]: Got post command condition...
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting for next pre command
Nov 25 10:50:52 nebula1 lvm[5328]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula1 lvm[5328]: background routine status was 0, sock_client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: Send local reply
Nov 25 10:50:52 nebula1 lvm[5328]: Read on local socket 5, len = 28
Nov 25 10:50:52 nebula1 lvm[5328]: Got pre command condition...
Nov 25 10:50:52 nebula1 lvm[5328]: doing PRE command LOCK_VG 'V_one-fs' at 1 (client=0x164fa60)
Nov 25 10:50:52 nebula1 lvm[5328]: lock_resource 'V_one-fs', flags=0, mode=3
Nov 25 10:50:52 nebula1 lvm[5328]: lock_resource returning 0, lock_id=2
Nov 25 10:50:52 nebula1 lvm[5328]: Writing status 0 down pipe 14
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula1 lvm[5328]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula1 lvm[5328]: background routine status was 0, sock_client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: distribute command: XID = 4, flags=0x1 (LOCAL)
Nov 25 10:50:52 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x164fde0. client=0x164fa60, msg=0x164fb70, len=28, csid=(nil), xid=4
Nov 25 10:50:52 nebula1 lvm[5328]: process_work_item: local
Nov 25 10:50:52 nebula1 lvm[5328]: process_local_command: LOCK_VG (0x33) msg=0x164fe20, msglen =28, client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: do_lock_vg: resource 'V_one-fs', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula1 lvm[5328]: Invalidating cached metadata for VG one-fs
Nov 25 10:50:52 nebula1 lvm[5328]: Reply from node 40a8e746: 0 bytes
Nov 25 10:50:52 nebula1 lvm[5328]: Got 1 replies, expecting: 1
Nov 25 10:50:52 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 10:50:52 nebula1 lvm[5328]: Got post command condition...
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting for next pre command
Nov 25 10:50:52 nebula1 lvm[5328]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula1 lvm[5328]: background routine status was 0, sock_client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: Send local reply
Nov 25 10:50:52 nebula1 lvm[5328]: Read on local socket 5, len = 31
Nov 25 10:50:52 nebula1 lvm[5328]: check_all_clvmds_running
Nov 25 10:50:52 nebula1 lvm[5328]: down_callback. node 1084811079, state = 3
Nov 25 10:50:52 nebula1 lvm[5328]: down_callback. node 1084811078, state = 3
Nov 25 10:50:52 nebula1 lvm[5328]: down_callback. node 1084811080, state = 3
Nov 25 10:50:52 nebula1 lvm[5328]: Got pre command condition...
Nov 25 10:50:52 nebula1 lvm[5328]: Writing status 0 down pipe 14
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula1 lvm[5328]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula1 lvm[5328]: background routine status was 0, sock_client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: distribute command: XID = 5, flags=0x0 ()
Nov 25 10:50:52 nebula1 lvm[5328]: num_nodes = 3
Nov 25 10:50:52 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x1650040. client=0x164fa60, msg=0x164fb70, len=31, csid=(nil), xid=5
Nov 25 10:50:52 nebula1 lvm[5328]: Sending message to all cluster nodes
Nov 25 10:50:52 nebula1 lvm[5328]: process_work_item: local
Nov 25 10:50:52 nebula1 lvm[5328]: process_local_command: SYNC_NAMES (0x2d) msg=0x164fde0, msglen =31, client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: Syncing device names
Nov 25 10:50:52 nebula1 lvm[5328]: Reply from node 40a8e746: 0 bytes
Nov 25 10:50:52 nebula1 lvm[5328]: Got 1 replies, expecting: 3
Nov 25 10:50:52 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811078 for 0. len 31
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811079 for 1084811078. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: Reply from node 40a8e747: 0 bytes
Nov 25 10:50:52 nebula1 lvm[5328]: Got 2 replies, expecting: 3
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811080 for 1084811078. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: Reply from node 40a8e748: 0 bytes
Nov 25 10:50:52 nebula1 lvm[5328]: Got 3 replies, expecting: 3
Nov 25 10:50:52 nebula1 lvm[5328]: Got post command condition...
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting for next pre command
Nov 25 10:50:52 nebula1 lvm[5328]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula1 lvm[5328]: background routine status was 0, sock_client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: Send local reply
Nov 25 10:50:52 nebula1 lvm[5328]: Read on local socket 5, len = 28
Nov 25 10:50:52 nebula1 lvm[5328]: Got pre command condition...
Nov 25 10:50:52 nebula1 lvm[5328]: doing PRE command LOCK_VG 'V_one-fs' at 6 (client=0x164fa60)
Nov 25 10:50:52 nebula1 lvm[5328]: unlock_resource: V_one-fs lockid: 2
Nov 25 10:50:52 nebula1 lvm[5328]: Writing status 0 down pipe 14
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula1 lvm[5328]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula1 lvm[5328]: background routine status was 0, sock_client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: distribute command: XID = 6, flags=0x1 (LOCAL)
Nov 25 10:50:52 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x164fde0. client=0x164fa60, msg=0x164fb70, len=28, csid=(nil), xid=6
Nov 25 10:50:52 nebula1 lvm[5328]: process_work_item: local
Nov 25 10:50:52 nebula1 lvm[5328]: process_local_command: LOCK_VG (0x33) msg=0x164fe20, msglen =28, client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: do_lock_vg: resource 'V_one-fs', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula1 lvm[5328]: Invalidating cached metadata for VG one-fs
Nov 25 10:50:52 nebula1 lvm[5328]: Reply from node 40a8e746: 0 bytes
Nov 25 10:50:52 nebula1 lvm[5328]: Got 1 replies, expecting: 1
Nov 25 10:50:52 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 10:50:52 nebula1 lvm[5328]: Got post command condition...
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting for next pre command
Nov 25 10:50:52 nebula1 lvm[5328]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula1 lvm[5328]: background routine status was 0, sock_client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: Send local reply
Nov 25 10:50:52 nebula1 lvm[5328]: Read on local socket 5, len = 29
Nov 25 10:50:52 nebula1 lvm[5328]: check_all_clvmds_running
Nov 25 10:50:52 nebula1 lvm[5328]: down_callback. node 1084811079, state = 3
Nov 25 10:50:52 nebula1 lvm[5328]: down_callback. node 1084811078, state = 3
Nov 25 10:50:52 nebula1 lvm[5328]: down_callback. node 1084811080, state = 3
Nov 25 10:50:52 nebula1 lvm[5328]: Got pre command condition...
Nov 25 10:50:52 nebula1 lvm[5328]: doing PRE command LOCK_VG 'P_#global' at 6 (client=0x164fa60)
Nov 25 10:50:52 nebula1 lvm[5328]: unlock_resource: P_#global lockid: 1
Nov 25 10:50:52 nebula1 lvm[5328]: Writing status 0 down pipe 14
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula1 lvm[5328]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula1 lvm[5328]: background routine status was 0, sock_client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: distribute command: XID = 7, flags=0x0 ()
Nov 25 10:50:52 nebula1 lvm[5328]: num_nodes = 3
Nov 25 10:50:52 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x1650040. client=0x164fa60, msg=0x164fb70, len=29, csid=(nil), xid=7
Nov 25 10:50:52 nebula1 lvm[5328]: Sending message to all cluster nodes
Nov 25 10:50:52 nebula1 lvm[5328]: process_work_item: local
Nov 25 10:50:52 nebula1 lvm[5328]: process_local_command: LOCK_VG (0x33) msg=0x164fde0, msglen =29, client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: do_lock_vg: resource 'P_#global', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula1 lvm[5328]: Refreshing context
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811078 for 0. len 29
Nov 25 10:50:52 nebula1 lvm[5328]: Reply from node 40a8e746: 0 bytes
Nov 25 10:50:52 nebula1 lvm[5328]: Got 1 replies, expecting: 3
Nov 25 10:50:52 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811079 for 1084811078. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: Reply from node 40a8e747: 0 bytes
Nov 25 10:50:52 nebula1 lvm[5328]: Got 2 replies, expecting: 3
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811080 for 1084811078. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: Reply from node 40a8e748: 0 bytes
Nov 25 10:50:52 nebula1 lvm[5328]: Got 3 replies, expecting: 3
Nov 25 10:50:52 nebula1 lvm[5328]: Got post command condition...
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting for next pre command
Nov 25 10:50:52 nebula1 lvm[5328]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula1 lvm[5328]: background routine status was 0, sock_client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: Send local reply
Nov 25 10:50:52 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 50: monitor clvm:2_monitor_60000 on nebula3
Nov 25 10:50:52 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 51: start ONE-vg:2_start_0 on nebula3
Nov 25 10:50:52 nebula1 lvm[5328]: Read on local socket 5, len = 0
Nov 25 10:50:52 nebula1 lvm[5328]: EOF on local socket: inprogress=0
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting for child thread
Nov 25 10:50:52 nebula1 lvm[5328]: Got pre command condition...
Nov 25 10:50:52 nebula1 lvm[5328]: Subthread finished
Nov 25 10:50:52 nebula1 lvm[5328]: Joined child thread
Nov 25 10:50:52 nebula1 lvm[5328]: ret == 0, errno = 0. removing client
Nov 25 10:50:52 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x164fb70. client=0x164fa60, msg=(nil), len=0, csid=(nil), xid=7
Nov 25 10:50:52 nebula1 lvm[5328]: process_work_item: free fd -1
Nov 25 10:50:52 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 10:50:52 nebula1 LVM(ONE-vg)[5349]: INFO: Reading all physical volumes. This may take a while... Found volume group "nebula1-vg" using metadata type lvm2 Found volume group "one-fs" using metadata type lvm2
Nov 25 10:50:52 nebula1 lvm[5328]: Got new connection on fd 5
Nov 25 10:50:52 nebula1 lvm[5328]: Read on local socket 5, len = 28
Nov 25 10:50:52 nebula1 lvm[5328]: creating pipe, [13, 14]
Nov 25 10:50:52 nebula1 lvm[5328]: Creating pre&post thread
Nov 25 10:50:52 nebula1 lvm[5328]: Created pre&post thread, state = 0
Nov 25 10:50:52 nebula1 lvm[5328]: in sub thread: client = 0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: doing PRE command LOCK_VG 'V_one-fs' at 1 (client=0x164fa60)
Nov 25 10:50:52 nebula1 lvm[5328]: lock_resource 'V_one-fs', flags=0, mode=3
Nov 25 10:50:52 nebula1 lvm[5328]: lock_resource returning 0, lock_id=1
Nov 25 10:50:52 nebula1 lvm[5328]: Writing status 0 down pipe 14
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula1 lvm[5328]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula1 lvm[5328]: background routine status was 0, sock_client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: distribute command: XID = 8, flags=0x1 (LOCAL)
Nov 25 10:50:52 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x164fde0. client=0x164fa60, msg=0x164fb70, len=28, csid=(nil), xid=8
Nov 25 10:50:52 nebula1 lvm[5328]: process_work_item: local
Nov 25 10:50:52 nebula1 lvm[5328]: process_local_command: LOCK_VG (0x33) msg=0x164fe20, msglen =28, client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: do_lock_vg: resource 'V_one-fs', cmd = 0x1 LCK_VG (READ|VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Nov 25 10:50:52 nebula1 lvm[5328]: Invalidating cached metadata for VG one-fs
Nov 25 10:50:52 nebula1 lvm[5328]: Reply from node 40a8e746: 0 bytes
Nov 25 10:50:52 nebula1 lvm[5328]: Got 1 replies, expecting: 1
Nov 25 10:50:52 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 10:50:52 nebula1 lvm[5328]: Got post command condition...
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting for next pre command
Nov 25 10:50:52 nebula1 lvm[5328]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula1 lvm[5328]: background routine status was 0, sock_client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: Send local reply
Nov 25 10:50:52 nebula1 lvm[5328]: Read on local socket 5, len = 84
Nov 25 10:50:52 nebula1 lvm[5328]: check_all_clvmds_running
Nov 25 10:50:52 nebula1 lvm[5328]: down_callback. node 1084811079, state = 3
Nov 25 10:50:52 nebula1 lvm[5328]: down_callback. node 1084811078, state = 3
Nov 25 10:50:52 nebula1 lvm[5328]: down_callback. node 1084811080, state = 3
Nov 25 10:50:52 nebula1 lvm[5328]: Got pre command condition...
Nov 25 10:50:52 nebula1 lvm[5328]: Writing status 0 down pipe 14
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula1 lvm[5328]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula1 lvm[5328]: background routine status was 0, sock_client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: distribute command: XID = 9, flags=0x0 ()
Nov 25 10:50:52 nebula1 lvm[5328]: num_nodes = 3
Nov 25 10:50:52 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x16500a0. client=0x164fa60, msg=0x164fde0, len=84, csid=(nil), xid=9
Nov 25 10:50:52 nebula1 lvm[5328]: Sending message to all cluster nodes
Nov 25 10:50:52 nebula1 lvm[5328]: process_work_item: local
Nov 25 10:50:52 nebula1 lvm[5328]: process_local_command: LOCK_QUERY (0x34) msg=0x164fe40, msglen =84, client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: do_lock_query: resource 'PSXBJgdbJb55UdFcI48VOdE6voIrDm71exNu0QeKultyW71LS8DjWLEdnpgtovv9', mode -1 (?)
Nov 25 10:50:52 nebula1 lvm[5328]: Reply from node 40a8e746: 0 bytes
Nov 25 10:50:52 nebula1 lvm[5328]: Got 1 replies, expecting: 3
Nov 25 10:50:52 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811078 for 0. len 84
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811079 for 1084811078. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: Reply from node 40a8e747: 0 bytes
Nov 25 10:50:52 nebula1 lvm[5328]: Got 2 replies, expecting: 3
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811080 for 1084811078. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: Reply from node 40a8e748: 0 bytes
Nov 25 10:50:52 nebula1 lvm[5328]: Got 3 replies, expecting: 3
Nov 25 10:50:52 nebula1 lvm[5328]: Got post command condition...
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting for next pre command
Nov 25 10:50:52 nebula1 lvm[5328]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula1 lvm[5328]: background routine status was 0, sock_client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: Send local reply
Nov 25 10:50:52 nebula1 lvm[5328]: Read on local socket 5, len = 84
Nov 25 10:50:52 nebula1 lvm[5328]: Got pre command condition...
Nov 25 10:50:52 nebula1 lvm[5328]: Writing status 0 down pipe 14
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula1 lvm[5328]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula1 lvm[5328]: background routine status was 0, sock_client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: distribute command: XID = 10, flags=0x1 (LOCAL)
Nov 25 10:50:52 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x164fe40. client=0x164fa60, msg=0x164fde0, len=84, csid=(nil), xid=10
Nov 25 10:50:52 nebula1 lvm[5328]: process_work_item: local
Nov 25 10:50:52 nebula1 lvm[5328]: process_local_command: LOCK_LV (0x32) msg=0x16500a0, msglen =84, client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: do_lock_lv: resource 'PSXBJgdbJb55UdFcI48VOdE6voIrDm71exNu0QeKultyW71LS8DjWLEdnpgtovv9', cmd = 0x99 LCK_LV_ACTIVATE (READ|LV|NONBLOCK|CLUSTER_VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811080 for 0. len 29
Nov 25 10:50:52 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x164fe80. client=0x6a1d60, msg=0x7fff1dfbce5c, len=29, csid=0x7fff1dfbb82c, xid=0
Nov 25 10:50:52 nebula1 lvm[5328]: lock_resource 'PSXBJgdbJb55UdFcI48VOdE6voIrDm71exNu0QeKultyW71LS8DjWLEdnpgtovv9', flags=1, mode=1
Nov 25 10:50:52 nebula1 lvm[5328]: lock_resource returning 0, lock_id=3
Nov 25 10:50:52 nebula1 lvm[5328]: Command return is 0, critical_section is 0
Nov 25 10:50:52 nebula1 lvm[5328]: Reply from node 40a8e746: 0 bytes
Nov 25 10:50:52 nebula1 lvm[5328]: Got 1 replies, expecting: 1
Nov 25 10:50:52 nebula1 lvm[5328]: process_work_item: remote
Nov 25 10:50:52 nebula1 lvm[5328]: process_remote_command LOCK_VG (0x33) for clientid 0xc000000 XID 0 on node 40a8e748
Nov 25 10:50:52 nebula1 lvm[5328]: Got post command condition...
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting for next pre command
Nov 25 10:50:52 nebula1 lvm[5328]: do_lock_vg: resource 'P_#global', cmd = 0x4 LCK_VG (WRITE|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula1 lvm[5328]: Refreshing context
Nov 25 10:50:52 nebula1 lvm[5328]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula1 lvm[5328]: background routine status was 0, sock_client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: Send local reply
Nov 25 10:50:52 nebula1 lvm[5328]: Read on local socket 5, len = 31
Nov 25 10:50:52 nebula1 lvm[5328]: check_all_clvmds_running
Nov 25 10:50:52 nebula1 lvm[5328]: down_callback. node 1084811079, state = 3
Nov 25 10:50:52 nebula1 lvm[5328]: down_callback. node 1084811078, state = 3
Nov 25 10:50:52 nebula1 lvm[5328]: down_callback. node 1084811080, state = 3
Nov 25 10:50:52 nebula1 lvm[5328]: Got pre command condition...
Nov 25 10:50:52 nebula1 lvm[5328]: Writing status 0 down pipe 14
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula1 lvm[5328]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula1 lvm[5328]: background routine status was 0, sock_client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: distribute command: XID = 11, flags=0x0 ()
Nov 25 10:50:52 nebula1 lvm[5328]: num_nodes = 3
Nov 25 10:50:52 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x1650120. client=0x164fa60, msg=0x164fe40, len=31, csid=(nil), xid=11
Nov 25 10:50:52 nebula1 lvm[5328]: Sending message to all cluster nodes
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811078 for 0. len 31
Nov 25 10:50:52 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 38: monitor clvm:1_monitor_60000 on nebula2
Nov 25 10:50:52 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 39: start ONE-vg:1_start_0 on nebula2
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811079 for 1084811080. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811079 for 1084811078. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: Reply from node 40a8e747: 0 bytes
Nov 25 10:50:52 nebula1 lvm[5328]: Got 1 replies, expecting: 3
Nov 25 10:50:52 nebula1 lvm[5328]: process_work_item: local
Nov 25 10:50:52 nebula1 lvm[5328]: process_local_command: SYNC_NAMES (0x2d) msg=0x164fde0, msglen =31, client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: Syncing device names
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811078 for 1084811080. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811080 for 1084811078. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: Reply from node 40a8e748: 0 bytes
Nov 25 10:50:52 nebula1 lvm[5328]: Got 2 replies, expecting: 3
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811080 for 0. len 31
Nov 25 10:50:52 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x164fe80. client=0x6a1d60, msg=0x7fff1dfbce5c, len=31, csid=0x7fff1dfbb82c, xid=0
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811079 for 1084811080. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: Reply from node 40a8e746: 0 bytes
Nov 25 10:50:52 nebula1 lvm[5328]: Got 3 replies, expecting: 3
Nov 25 10:50:52 nebula1 lvm[5328]: process_work_item: remote
Nov 25 10:50:52 nebula1 lvm[5328]: process_remote_command SYNC_NAMES (0x2d) for clientid 0xc000000 XID 2 on node 40a8e748
Nov 25 10:50:52 nebula1 lvm[5328]: Syncing device names
Nov 25 10:50:52 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 10:50:52 nebula1 lvm[5328]: Got post command condition...
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting for next pre command
Nov 25 10:50:52 nebula1 lvm[5328]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula1 lvm[5328]: background routine status was 0, sock_client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: Send local reply
Nov 25 10:50:52 nebula1 lvm[5328]: Read on local socket 5, len = 28
Nov 25 10:50:52 nebula1 lvm[5328]: Got pre command condition...
Nov 25 10:50:52 nebula1 lvm[5328]: doing PRE command LOCK_VG 'V_one-fs' at 6 (client=0x164fa60)
Nov 25 10:50:52 nebula1 lvm[5328]: unlock_resource: V_one-fs lockid: 1
Nov 25 10:50:52 nebula1 lvm[5328]: Writing status 0 down pipe 14
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula1 lvm[5328]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula1 lvm[5328]: background routine status was 0, sock_client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: distribute command: XID = 12, flags=0x1 (LOCAL)
Nov 25 10:50:52 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x164fde0. client=0x164fa60, msg=0x164fb70, len=28, csid=(nil), xid=12
Nov 25 10:50:52 nebula1 lvm[5328]: process_work_item: local
Nov 25 10:50:52 nebula1 lvm[5328]: process_local_command: LOCK_VG (0x33) msg=0x164fe20, msglen =28, client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: do_lock_vg: resource 'V_one-fs', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Nov 25 10:50:52 nebula1 lvm[5328]: Invalidating cached metadata for VG one-fs
Nov 25 10:50:52 nebula1 lvm[5328]: Reply from node 40a8e746: 0 bytes
Nov 25 10:50:52 nebula1 lvm[5328]: Got 1 replies, expecting: 1
Nov 25 10:50:52 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 10:50:52 nebula1 lvm[5328]: Got post command condition...
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting for next pre command
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811078 for 1084811080. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula1 lvm[5328]: background routine status was 0, sock_client=0x164fa60
Nov 25 10:50:52 nebula1 lvm[5328]: Send local reply
Nov 25 10:50:52 nebula1 lvm[5328]: Read on local socket 5, len = 0
Nov 25 10:50:52 nebula1 lvm[5328]: EOF on local socket: inprogress=0
Nov 25 10:50:52 nebula1 lvm[5328]: Waiting for child thread
Nov 25 10:50:52 nebula1 lvm[5328]: Got pre command condition...
Nov 25 10:50:52 nebula1 lvm[5328]: Subthread finished
Nov 25 10:50:52 nebula1 lvm[5328]: Joined child thread
Nov 25 10:50:52 nebula1 lvm[5328]: ret == 0, errno = 0. removing client
Nov 25 10:50:52 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x164fb70. client=0x164fa60, msg=(nil), len=0, csid=(nil), xid=12
Nov 25 10:50:52 nebula1 lvm[5328]: process_work_item: free fd -1
Nov 25 10:50:52 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 10:50:52 nebula1 LVM(ONE-vg)[5349]: INFO: 1 logical volume(s) in volume group "one-fs" now active
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811080 for 0. len 31
Nov 25 10:50:52 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x164fa60. client=0x6a1d60, msg=0x7fff1dfbce5c, len=31, csid=0x7fff1dfbb82c, xid=0
Nov 25 10:50:52 nebula1 lvm[5328]: process_work_item: remote
Nov 25 10:50:52 nebula1 lvm[5328]: process_remote_command SYNC_NAMES (0x2d) for clientid 0xc000000 XID 5 on node 40a8e748
Nov 25 10:50:52 nebula1 lvm[5328]: Syncing device names
Nov 25 10:50:52 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811079 for 1084811080. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811078 for 1084811080. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811080 for 0. len 29
Nov 25 10:50:52 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x164fa60. client=0x6a1d60, msg=0x7fff1dfbce5c, len=29, csid=0x7fff1dfbb82c, xid=0
Nov 25 10:50:52 nebula1 lvm[5328]: process_work_item: remote
Nov 25 10:50:52 nebula1 lvm[5328]: process_remote_command LOCK_VG (0x33) for clientid 0xc000000 XID 7 on node 40a8e748
Nov 25 10:50:52 nebula1 lvm[5328]: do_lock_vg: resource 'P_#global', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula1 lvm[5328]: Refreshing context
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811079 for 0. len 29
Nov 25 10:50:52 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x164fad0. client=0x6a1d60, msg=0x7fff1dfbce5c, len=29, csid=0x7fff1dfbb82c, xid=0
Nov 25 10:50:52 nebula1 crmd[5038]: notice: process_lrm_event: LRM operation ONE-vg_start_0 (call=79, rc=0, cib-update=61, confirmed=true) ok
Nov 25 10:50:52 nebula1 lvm[5328]: process_work_item: remote
Nov 25 10:50:52 nebula1 lvm[5328]: process_remote_command LOCK_VG (0x33) for clientid 0x5000000 XID 0 on node 40a8e747
Nov 25 10:50:52 nebula1 lvm[5328]: do_lock_vg: resource 'P_#global', cmd = 0x4 LCK_VG (WRITE|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula1 lvm[5328]: Refreshing context
Nov 25 10:50:52 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 28: monitor ONE-vg_monitor_60000 on nebula1 (local)
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811078 for 1084811080. len 18
Nov 25 10:50:52 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 29: start ONE-Datastores_start_0 on nebula1 (local)
Nov 25 10:50:52 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811079 for 1084811080. len 18
Nov 25 10:50:52 nebula1 crmd[5038]: notice: process_lrm_event: LRM operation ONE-vg_monitor_60000 (call=83, rc=0, cib-update=62, confirmed=false) ok
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 10:50:52 nebula1 Filesystem(ONE-Datastores)[5406]: INFO: Running start for /dev/one-fs/datastores on /var/lib/one/datastores
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811079 for 0. len 31
Nov 25 10:50:52 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x164fa60. client=0x6a1d60, msg=0x7fff1dfbce5c, len=31, csid=0x7fff1dfbb82c, xid=0
Nov 25 10:50:52 nebula1 lvm[5328]: process_work_item: remote
Nov 25 10:50:52 nebula1 lvm[5328]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 2 on node 40a8e747
Nov 25 10:50:52 nebula1 lvm[5328]: Syncing device names
Nov 25 10:50:52 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811079 for 0. len 31
Nov 25 10:50:52 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x164fa60. client=0x6a1d60, msg=0x7fff1dfbce5c, len=31, csid=0x7fff1dfbb82c, xid=0
Nov 25 10:50:52 nebula1 lvm[5328]: process_work_item: remote
Nov 25 10:50:52 nebula1 lvm[5328]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 5 on node 40a8e747
Nov 25 10:50:52 nebula1 lvm[5328]: Syncing device names
Nov 25 10:50:52 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811080 for 0. len 84
Nov 25 10:50:52 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x164fa60. client=0x6a1d60, msg=0x7fff1dfbce5c, len=84, csid=0x7fff1dfbb82c, xid=0
Nov 25 10:50:52 nebula1 lvm[5328]: process_work_item: remote
Nov 25 10:50:52 nebula1 lvm[5328]: process_remote_command LOCK_QUERY (0x34) for clientid 0xc000000 XID 9 on node 40a8e748
Nov 25 10:50:52 nebula1 lvm[5328]: do_lock_query: resource 'PSXBJgdbJb55UdFcI48VOdE6voIrDm71exNu0QeKultyW71LS8DjWLEdnpgtovv9', mode 1 (CR)
Nov 25 10:50:52 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811079 for 1084811080. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811078 for 1084811080. len 21
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811079 for 0. len 29
Nov 25 10:50:52 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x164fa60. client=0x6a1d60, msg=0x7fff1dfbce5c, len=29, csid=0x7fff1dfbb82c, xid=0
Nov 25 10:50:52 nebula1 lvm[5328]: process_work_item: remote
Nov 25 10:50:52 nebula1 lvm[5328]: process_remote_command LOCK_VG (0x33) for clientid 0x5000000 XID 7 on node 40a8e747
Nov 25 10:50:52 nebula1 lvm[5328]: do_lock_vg: resource 'P_#global', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula1 lvm[5328]: Refreshing context
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811080 for 0. len 31
Nov 25 10:50:52 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x164fad0. client=0x6a1d60, msg=0x7fff1dfbce5c, len=31, csid=0x7fff1dfbb82c, xid=0
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811079 for 1084811080. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: process_work_item: remote
Nov 25 10:50:52 nebula1 lvm[5328]: process_remote_command SYNC_NAMES (0x2d) for clientid 0xc000000 XID 11 on node 40a8e748
Nov 25 10:50:52 nebula1 lvm[5328]: Syncing device names
Nov 25 10:50:52 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811078 for 1084811080. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 10:50:52 nebula1 kernel: [ 518.636405] GFS2 installed
Nov 25 10:50:52 nebula1 kernel: [ 518.650961] GFS2: fsid=one:datastores: Trying to join cluster "lock_dlm", "one:datastores"
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811079 for 0. len 84
Nov 25 10:50:52 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x164fa60. client=0x6a1d60, msg=0x7fff1dfbce5c, len=84, csid=0x7fff1dfbb82c, xid=0
Nov 25 10:50:52 nebula1 lvm[5328]: process_work_item: remote
Nov 25 10:50:52 nebula1 lvm[5328]: process_remote_command LOCK_QUERY (0x34) for clientid 0x5000000 XID 9 on node 40a8e747
Nov 25 10:50:52 nebula1 lvm[5328]: do_lock_query: resource 'PSXBJgdbJb55UdFcI48VOdE6voIrDm71exNu0QeKultyW71LS8DjWLEdnpgtovv9', mode 1 (CR)
Nov 25 10:50:52 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811078 for 1084811079. len 21
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811080 for 1084811079. len 21
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811079 for 0. len 31
Nov 25 10:50:52 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x164fa60. client=0x6a1d60, msg=0x7fff1dfbce5c, len=31, csid=0x7fff1dfbb82c, xid=0
Nov 25 10:50:52 nebula1 lvm[5328]: process_work_item: remote
Nov 25 10:50:52 nebula1 lvm[5328]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 11 on node 40a8e747
Nov 25 10:50:52 nebula1 lvm[5328]: Syncing device names
Nov 25 10:50:52 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 10:50:52 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 10:50:52 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 52: monitor ONE-vg:2_monitor_60000 on nebula3
Nov 25 10:50:52 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 53: start ONE-Datastores:2_start_0 on nebula3
Nov 25 10:50:52 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 40: monitor ONE-vg:1_monitor_60000 on nebula2
Nov 25 10:50:52 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 41: start ONE-Datastores:1_start_0 on nebula2
Nov 25 10:50:52 nebula1 kernel: [ 519.160069] GFS2: fsid=one:datastores: first mounter control generation 0
Nov 25 10:50:52 nebula1 kernel: [ 519.160074] GFS2: fsid=one:datastores: Joined cluster. Now mounting FS...
Nov 25 10:50:52 nebula1 kernel: [ 519.189600] GFS2: fsid=one:datastores.0: jid=0, already locked for use
Nov 25 10:50:52 nebula1 kernel: [ 519.189603] GFS2: fsid=one:datastores.0: jid=0: Looking at journal...
Nov 25 10:50:52 nebula1 kernel: [ 519.278636] GFS2: fsid=one:datastores.0: jid=0: Done
Nov 25 10:50:52 nebula1 kernel: [ 519.278676] GFS2: fsid=one:datastores.0: jid=1: Trying to acquire journal lock...
Nov 25 10:50:52 nebula1 kernel: [ 519.287411] GFS2: fsid=one:datastores.0: jid=1: Looking at journal...
Nov 25 10:50:53 nebula1 kernel: [ 519.527648] GFS2: fsid=one:datastores.0: jid=1: Done
Nov 25 10:50:53 nebula1 kernel: [ 519.527668] GFS2: fsid=one:datastores.0: jid=2: Trying to acquire journal lock...
Nov 25 10:50:53 nebula1 kernel: [ 519.536777] GFS2: fsid=one:datastores.0: jid=2: Looking at journal...
Nov 25 10:50:53 nebula1 kernel: [ 519.663270] GFS2: fsid=one:datastores.0: jid=2: Done
Nov 25 10:50:53 nebula1 kernel: [ 519.663288] GFS2: fsid=one:datastores.0: jid=3: Trying to acquire journal lock...
Nov 25 10:50:53 nebula1 kernel: [ 519.666539] GFS2: fsid=one:datastores.0: jid=3: Looking at journal...
Nov 25 10:50:53 nebula1 kernel: [ 519.799837] GFS2: fsid=one:datastores.0: jid=3: Done
Nov 25 10:50:53 nebula1 kernel: [ 519.799849] GFS2: fsid=one:datastores.0: first mount done, others may mount
Nov 25 10:50:53 nebula1 crmd[5038]: notice: process_lrm_event: LRM operation ONE-Datastores_start_0 (call=85, rc=0, cib-update=63, confirmed=true) ok
Nov 25 10:50:53 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 30: monitor ONE-Datastores_monitor_20000 on nebula1 (local)
Nov 25 10:50:53 nebula1 crmd[5038]: notice: process_lrm_event: LRM operation ONE-Datastores_monitor_20000 (call=89, rc=0, cib-update=64, confirmed=false) ok
Nov 25 10:50:53 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 54: monitor ONE-Datastores:2_monitor_20000 on nebula3
Nov 25 10:50:54 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 42: monitor ONE-Datastores:1_monitor_20000 on nebula2
Nov 25 10:50:54 nebula1 crmd[5038]: notice: run_graph: Transition 1 (Complete=33, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-0.bz2): Complete
Nov 25 10:50:54 nebula1 crmd[5038]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Crash nebula2 (echo c > /proc/sysrq-trigger)
--------------------------------------------
Nov 25 10:56:28 nebula1 corosync[4873]: [TOTEM ] A processor failed, forming new configuration.
Nov 25 10:56:32 nebula1 corosync[4873]: [TOTEM ] A new membership (192.168.231.70:81396) was formed. Members left: 1084811079
Nov 25 10:56:32 nebula1 lvm[5328]: confchg callback. 0 joined, 1 left, 2 members
Nov 25 10:56:32 nebula1 crmd[5038]: warning: match_down_event: No match for shutdown action on 1084811079
Nov 25 10:56:32 nebula1 crmd[5038]: notice: peer_update_callback: Stonith/shutdown of nebula2 not matched
Nov 25 10:56:32 nebula1 crmd[5038]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Nov 25 10:56:32 nebula1 corosync[4873]: [QUORUM] Members[2]: 1084811078 1084811080
Nov 25 10:56:32 nebula1 corosync[4873]: [MAIN ] Completed service synchronization, ready to provide service.
Nov 25 10:56:32 nebula1 kernel: [ 858.367645] dlm: closing connection to node 1084811079
Nov 25 10:56:32 nebula1 crmd[5038]: notice: crm_update_peer_state: pcmk_quorum_notification: Node nebula2[1084811079] - state is now lost (was member)
Nov 25 10:56:32 nebula1 crmd[5038]: warning: match_down_event: No match for shutdown action on 1084811079
Nov 25 10:56:32 nebula1 crmd[5038]: notice: peer_update_callback: Stonith/shutdown of nebula2 not matched
Nov 25 10:56:32 nebula1 pacemakerd[5031]: notice: crm_update_peer_state: pcmk_quorum_notification: Node nebula2[1084811079] - state is now lost (was member)
Nov 25 10:56:32 nebula1 dlm_controld[5303]: 858 fence request 1084811079 pid 6187 nodedown time 1416909392 fence_all dlm_stonith
Nov 25 10:56:32 nebula1 dlm_controld[5303]: 858 fence result 1084811079 pid 6187 result 1 exit status
Nov 25 10:56:32 nebula1 dlm_controld[5303]: 858 fence status 1084811079 receive 1 from 1084811078 walltime 1416909392 local 858
Nov 25 10:56:32 nebula1 pengine[5037]: warning: pe_fence_node: Node nebula2 will be fenced because our peer process is no longer available
Nov 25 10:56:32 nebula1 pengine[5037]: warning: determine_online_status: Node nebula2 is unclean
Nov 25 10:56:32 nebula1 dlm_controld[5303]: 858 fence status 1084811079 receive 1 from 1084811080 walltime 1416909392 local 858
Nov 25 10:56:32 nebula1 dlm_controld[5303]: 858 fence request 1084811079 no actor
Nov 25 10:56:32 nebula1 pengine[5037]: warning: stage6: Scheduling Node nebula2 for STONITH
Nov 25 10:56:32 nebula1 pengine[5037]: notice: LogActions: Move Stonith-nebula1-IPMILAN#011(Started nebula2 -> nebula3)
Nov 25 10:56:32 nebula1 pengine[5037]: notice: LogActions: Stop dlm:0#011(nebula2)
Nov 25 10:56:32 nebula1 pengine[5037]: notice: LogActions: Stop clvm:0#011(nebula2)
Nov 25 10:56:32 nebula1 pengine[5037]: notice: LogActions: Stop ONE-vg:0#011(nebula2)
Nov 25 10:56:32 nebula1 pengine[5037]: notice: LogActions: Stop ONE-Datastores:0#011(nebula2)
Nov 25 10:56:32 nebula1 pengine[5037]: warning: process_pe_message: Calculated Transition 2: /var/lib/pacemaker/pengine/pe-warn-1.bz2
Nov 25 10:56:33 nebula1 pengine[5037]: warning: pe_fence_node: Node nebula2 will be fenced because the node is no longer part of the cluster
Nov 25 10:56:33 nebula1 pengine[5037]: warning: determine_online_status: Node nebula2 is unclean
Nov 25 10:56:33 nebula1 pengine[5037]: warning: custom_action: Action Stonith-nebula1-IPMILAN_stop_0 on nebula2 is unrunnable (offline)
Nov 25 10:56:33 nebula1 pengine[5037]: warning: custom_action: Action dlm:0_stop_0 on nebula2 is unrunnable (offline)
Nov 25 10:56:33 nebula1 pengine[5037]: warning: custom_action: Action dlm:0_stop_0 on nebula2 is unrunnable (offline)
Nov 25 10:56:33 nebula1 pengine[5037]: warning: custom_action: Action clvm:0_stop_0 on nebula2 is unrunnable (offline)
Nov 25 10:56:33 nebula1 pengine[5037]: warning: custom_action: Action clvm:0_stop_0 on nebula2 is unrunnable (offline)
Nov 25 10:56:33 nebula1 pengine[5037]: warning: custom_action: Action ONE-vg:0_stop_0 on nebula2 is unrunnable (offline)
Nov 25 10:56:33 nebula1 pengine[5037]: warning: custom_action: Action ONE-vg:0_stop_0 on nebula2 is unrunnable (offline)
Nov 25 10:56:33 nebula1 pengine[5037]: warning: custom_action: Action ONE-Datastores:0_stop_0 on nebula2 is unrunnable (offline)
Nov 25 10:56:33 nebula1 pengine[5037]: warning: custom_action: Action ONE-Datastores:0_stop_0 on nebula2 is unrunnable (offline)
Nov 25 10:56:33 nebula1 pengine[5037]: warning: stage6: Scheduling Node nebula2 for STONITH
Nov 25 10:56:33 nebula1 pengine[5037]: notice: LogActions: Move Stonith-nebula1-IPMILAN#011(Started nebula2 -> nebula3)
Nov 25 10:56:33 nebula1 pengine[5037]: notice: LogActions: Stop dlm:0#011(nebula2)
Nov 25 10:56:33 nebula1 pengine[5037]: notice: LogActions: Stop clvm:0#011(nebula2)
Nov 25 10:56:33 nebula1 pengine[5037]: notice: LogActions: Stop ONE-vg:0#011(nebula2)
Nov 25 10:56:33 nebula1 pengine[5037]: notice: LogActions: Stop ONE-Datastores:0#011(nebula2)
Nov 25 10:56:33 nebula1 pengine[5037]: warning: process_pe_message: Calculated Transition 3: /var/lib/pacemaker/pengine/pe-warn-2.bz2
Nov 25 10:56:33 nebula1 crmd[5038]: notice: te_fence_node: Executing reboot fencing operation (75) on nebula2 (timeout=30000)
Nov 25 10:56:33 nebula1 stonith-ng[5034]: notice: handle_request: Client crmd.5038.c82449da wants to fence (reboot) 'nebula2' with device '(any)'
Nov 25 10:56:33 nebula1 stonith-ng[5034]: notice: initiate_remote_stonith_op: Initiating remote operation reboot for nebula2: 34bed18c-c395-4de2-b323-e00208cac6c7 (0)
Nov 25 10:56:33 nebula1 stonith-ng[5034]: notice: can_fence_host_with_device: Stonith-ONE-Frontend can not fence nebula2: static-list
Nov 25 10:56:33 nebula1 stonith-ng[5034]: notice: can_fence_host_with_device: Stonith-nebula2-IPMILAN can fence nebula2: static-list
Nov 25 10:56:33 nebula1 stonith-ng[5034]: notice: can_fence_host_with_device: Stonith-nebula3-IPMILAN can not fence nebula2: static-list
Nov 25 10:56:34 nebula1 stonith-ng[5034]: notice: remote_op_done: Operation reboot of nebula2 by nebula3 for crmd.5038 at nebula1.34bed18c: OK
Nov 25 10:56:34 nebula1 crmd[5038]: notice: tengine_stonith_callback: Stonith operation 4/75:3:0:d458be36-df66-4aff-b201-af76ccbec3aa: OK (0)
Nov 25 10:56:34 nebula1 crmd[5038]: notice: tengine_stonith_notify: Peer nebula2 was terminated (reboot) by nebula3 for nebula1: OK (ref=34bed18c-c395-4de2-b323-e00208cac6c7) by client crmd.5038
Nov 25 10:56:34 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 16: start Stonith-nebula1-IPMILAN_start_0 on nebula3
Nov 25 10:56:35 nebula1 crmd[5038]: notice: run_graph: Transition 3 (Complete=13, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-warn-2.bz2): Stopped
Nov 25 10:56:35 nebula1 pengine[5037]: notice: process_pe_message: Calculated Transition 4: /var/lib/pacemaker/pengine/pe-input-1.bz2
Nov 25 10:56:35 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 17: monitor Stonith-nebula1-IPMILAN_monitor_1800000 on nebula3
Nov 25 10:56:36 nebula1 crmd[5038]: notice: run_graph: Transition 4 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): Complete
Nov 25 10:56:36 nebula1 crmd[5038]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Nov 25 10:57:29 nebula1 dlm_controld[5303]: 915 datastores wait for fencing
Nov 25 10:57:29 nebula1 dlm_controld[5303]: 915 clvmd wait for fencing
Nov 25 11:00:14 nebula1 kernel: [ 1080.756059] INFO: task gfs2_quotad:5492 blocked for more than 120 seconds.
Nov 25 11:00:14 nebula1 kernel: [ 1080.756178] Not tainted 3.13.0-40-generic #69-Ubuntu
Nov 25 11:00:14 nebula1 kernel: [ 1080.756269] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 25 11:00:14 nebula1 kernel: [ 1080.756376] gfs2_quotad D ffff88087fcd4480 0 5492 2 0x00000000
Nov 25 11:00:14 nebula1 kernel: [ 1080.756380] ffff88082b977a98 0000000000000046 ffff880035f89800 ffff88082b977fd8
Nov 25 11:00:14 nebula1 kernel: [ 1080.756384] 0000000000014480 0000000000014480 ffff880035f89800 ffff880035f89800
Nov 25 11:00:14 nebula1 kernel: [ 1080.756386] ffff880852b0f7b0 ffff880852b0f7b8 ffff88082bab40b0 0000000000000014
Nov 25 11:00:14 nebula1 kernel: [ 1080.756389] Call Trace:
Nov 25 11:00:14 nebula1 kernel: [ 1080.756399] [<ffffffff81723649>] schedule+0x29/0x70
Nov 25 11:00:14 nebula1 kernel: [ 1080.756402] [<ffffffff817264d5>] rwsem_down_read_failed+0xf5/0x150
Nov 25 11:00:14 nebula1 kernel: [ 1080.756407] [<ffffffff81370764>] call_rwsem_down_read_failed+0x14/0x30
Nov 25 11:00:14 nebula1 kernel: [ 1080.756409] [<ffffffff81725ca0>] ? down_read+0x20/0x30
Nov 25 11:00:14 nebula1 kernel: [ 1080.756419] [<ffffffffa03d64d2>] dlm_lock+0x52/0x1b0 [dlm]
Nov 25 11:00:14 nebula1 kernel: [ 1080.756423] [<ffffffff8101b689>] ? read_tsc+0x9/0x20
Nov 25 11:00:14 nebula1 kernel: [ 1080.756427] [<ffffffff810cd6aa>] ? __getnstimeofday+0x3a/0xc0
Nov 25 11:00:14 nebula1 kernel: [ 1080.756440] [<ffffffffa0427517>] gdlm_lock+0x1e7/0x330 [gfs2]
Nov 25 11:00:14 nebula1 kernel: [ 1080.756447] [<ffffffffa0427660>] ? gdlm_lock+0x330/0x330 [gfs2]
Nov 25 11:00:14 nebula1 kernel: [ 1080.756455] [<ffffffffa0428120>] ? gdlm_recovery_result+0x130/0x130 [gfs2]
Nov 25 11:00:14 nebula1 kernel: [ 1080.756462] [<ffffffffa040a9f7>] do_xmote+0x187/0x270 [gfs2]
Nov 25 11:00:14 nebula1 kernel: [ 1080.756468] [<ffffffffa040ac26>] run_queue+0x146/0x290 [gfs2]
Nov 25 11:00:14 nebula1 kernel: [ 1080.756475] [<ffffffffa040b0c2>] gfs2_glock_nq+0x1b2/0x430 [gfs2]
Nov 25 11:00:14 nebula1 kernel: [ 1080.756482] [<ffffffffa0424493>] gfs2_statfs_sync+0x53/0x1a0 [gfs2]
Nov 25 11:00:14 nebula1 kernel: [ 1080.756485] [<ffffffff817227dd>] ? schedule_timeout+0x17d/0x2d0
Nov 25 11:00:14 nebula1 kernel: [ 1080.756493] [<ffffffffa0424489>] ? gfs2_statfs_sync+0x49/0x1a0 [gfs2]
Nov 25 11:00:14 nebula1 kernel: [ 1080.756500] [<ffffffffa041b133>] quotad_check_timeo.part.16+0x23/0x90 [gfs2]
Nov 25 11:00:14 nebula1 kernel: [ 1080.756508] [<ffffffffa041d6de>] gfs2_quotad+0x23e/0x2e0 [gfs2]
Nov 25 11:00:14 nebula1 kernel: [ 1080.756512] [<ffffffff810ab0a0>] ? prepare_to_wait_event+0x100/0x100
Nov 25 11:00:14 nebula1 kernel: [ 1080.756520] [<ffffffffa041d4a0>] ? gfs2_wake_up_statfs+0x40/0x40 [gfs2]
Nov 25 11:00:14 nebula1 kernel: [ 1080.756523] [<ffffffff8108b562>] kthread+0xd2/0xf0
Nov 25 11:00:14 nebula1 kernel: [ 1080.756526] [<ffffffff8108b490>] ? kthread_create_on_node+0x1c0/0x1c0
Nov 25 11:00:14 nebula1 kernel: [ 1080.756529] [<ffffffff8172fc7c>] ret_from_fork+0x7c/0xb0
Nov 25 11:00:14 nebula1 kernel: [ 1080.756531] [<ffffffff8108b490>] ? kthread_create_on_node+0x1c0/0x1c0
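The two `wait for fencing` lines and the hung `gfs2_quotad` trace above show the `clvmd` and `datastores` lockspaces blocked until DLM receives a fencing acknowledgement for 1084811079. That state can be confirmed with the same `dlm_tool` commands captured in the attachments (a sketch only; output wording varies between dlm versions):

```shell
# Inspect lockspaces: a blocked lockspace keeps a "wait fencing" /
# "fencing" indication in its state until the dead node's fencing
# is acknowledged to dlm_controld.
dlm_tool ls

# Per-daemon view, including the still-pending fence request for
# the lost nodeid (1084811079 here).
dlm_tool status
```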
Manual fence_ack “dlm_tool fence_ack 1084811079”
------------------------------------------------
Nov 25 11:01:22 nebula1 dlm_controld[5303]: 1148 fence status 1084811079 receive -125 from 1084811080 walltime 1416909682 local 1148
Nov 25 11:01:22 nebula1 kernel: [ 1148.852417] GFS2: fsid=one:datastores.0: recover generation 4 jid 2
Nov 25 11:01:22 nebula1 kernel: [ 1148.852428] GFS2: fsid=one:datastores.0: jid=2: Trying to acquire journal lock...
Nov 25 11:01:22 nebula1 kernel: [ 1148.852800] GFS2: fsid=one:datastores.0: jid=2: Looking at journal...
Nov 25 11:01:22 nebula1 kernel: [ 1148.885739] GFS2: fsid=one:datastores.0: recover jid 2 result success
Nov 25 11:01:22 nebula1 kernel: [ 1148.885752] GFS2: fsid=one:datastores.0: jid=2: Done
Nov 25 11:01:22 nebula1 kernel: [ 1148.885805] GFS2: fsid=one:datastores.0: recover generation 4 done
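The earlier `fence result 1084811079 pid 6187 result 1 exit status` and `fence request 1084811079 no actor` lines suggest dlm_controld's `fence_all dlm_stonith` helper never got a successful result back, even though Pacemaker's own fencing completed at 10:56:34. A hypothetical way to exercise the stonith path outside DLM, assuming Pacemaker's `stonith_admin` CLI is available:

```shell
# Show stonith-ng's record of fencing actions against nebula2; a
# successful reboot entry here, while DLM is still waiting for an
# ack, points at the dlm_stonith <-> stonith-ng handoff.
stonith_admin --history nebula2

# If needed, trigger a fencing action through stonith-ng directly
# (this really reboots the node):
# stonith_admin --fence nebula2
```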
Start corosync on nebula2 after reboot
--------------------------------------
Nov 25 11:04:37 nebula1 corosync[4873]: [TOTEM ] A new membership (192.168.231.70:81400) was formed. Members joined: 1084811079
Nov 25 11:04:37 nebula1 corosync[4873]: [QUORUM] Members[3]: 1084811078 1084811079 1084811080
Nov 25 11:04:37 nebula1 crmd[5038]: notice: crm_update_peer_state: pcmk_quorum_notification: Node nebula2[1084811079] - state is now member (was lost)
Nov 25 11:04:37 nebula1 pacemakerd[5031]: notice: crm_update_peer_state: pcmk_quorum_notification: Node nebula2[1084811079] - state is now member (was lost)
Nov 25 11:04:37 nebula1 corosync[4873]: [MAIN ] Completed service synchronization, ready to provide service.
Start pacemaker on nebula2 after reboot
---------------------------------------
Nov 25 11:04:52 nebula1 crmd[5038]: notice: do_state_transition: State transition S_IDLE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=peer_update_callback ]
Nov 25 11:04:56 nebula1 attrd[5036]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Nov 25 11:04:56 nebula1 attrd[5036]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Nov 25 11:04:57 nebula1 pengine[5037]: notice: LogActions: Start dlm:2#011(nebula2)
Nov 25 11:04:57 nebula1 pengine[5037]: notice: LogActions: Start clvm:2#011(nebula2)
Nov 25 11:04:57 nebula1 pengine[5037]: notice: LogActions: Start ONE-vg:2#011(nebula2)
Nov 25 11:04:57 nebula1 pengine[5037]: notice: LogActions: Start ONE-Datastores:2#011(nebula2)
Nov 25 11:04:57 nebula1 pengine[5037]: notice: process_pe_message: Calculated Transition 5: /var/lib/pacemaker/pengine/pe-input-2.bz2
Nov 25 11:04:57 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 16: monitor Stonith-nebula1-IPMILAN_monitor_0 on nebula2
Nov 25 11:04:57 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 17: monitor Stonith-nebula2-IPMILAN_monitor_0 on nebula2
Nov 25 11:04:57 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 18: monitor Stonith-nebula3-IPMILAN_monitor_0 on nebula2
Nov 25 11:04:57 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 19: monitor Stonith-ONE-Frontend_monitor_0 on nebula2
Nov 25 11:04:57 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 20: monitor ONE-Frontend-VM_monitor_0 on nebula2
Nov 25 11:04:57 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 21: monitor Stonith-Quorum-Node_monitor_0 on nebula2
Nov 25 11:04:57 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 22: monitor Quorum-Node-VM_monitor_0 on nebula2
Nov 25 11:04:57 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 23: monitor dlm:2_monitor_0 on nebula2
Nov 25 11:04:57 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 24: monitor clvm:2_monitor_0 on nebula2
Nov 25 11:04:57 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 25: monitor ONE-vg:2_monitor_0 on nebula2
Nov 25 11:04:57 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 26: monitor ONE-Datastores:2_monitor_0 on nebula2
Nov 25 11:04:57 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 15: probe_complete probe_complete on nebula2 - no waiting
Nov 25 11:04:57 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 66: start dlm:2_start_0 on nebula2
Nov 25 11:04:58 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 67: monitor dlm:2_monitor_60000 on nebula2
Nov 25 11:04:58 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 68: start clvm:2_start_0 on nebula2
Nov 25 11:04:58 nebula1 kernel: [ 1365.032033] dlm: got connection from 1084811079
Nov 25 11:04:59 nebula1 lvm[5328]: confchg callback. 1 joined, 0 left, 3 members
Nov 25 11:05:01 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 69: monitor clvm:2_monitor_60000 on nebula2
Nov 25 11:05:01 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 70: start ONE-vg:2_start_0 on nebula2
Nov 25 11:05:01 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811079 for 0. len 29
Nov 25 11:05:01 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x164fa60. client=0x6a1d60, msg=0x7fff1dfbce5c, len=29, csid=0x7fff1dfbb82c, xid=0
Nov 25 11:05:01 nebula1 lvm[5328]: process_work_item: remote
Nov 25 11:05:01 nebula1 lvm[5328]: process_remote_command LOCK_VG (0x33) for clientid 0x5000000 XID 0 on node 40a8e747
Nov 25 11:05:01 nebula1 lvm[5328]: do_lock_vg: resource 'P_#global', cmd = 0x4 LCK_VG (WRITE|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 11:05:01 nebula1 lvm[5328]: Refreshing context
Nov 25 11:05:01 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 11:05:01 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 11:05:01 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 11:05:01 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811079 for 0. len 31
Nov 25 11:05:01 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x164fa60. client=0x6a1d60, msg=0x7fff1dfbce5c, len=31, csid=0x7fff1dfbb82c, xid=0
Nov 25 11:05:01 nebula1 lvm[5328]: process_work_item: remote
Nov 25 11:05:01 nebula1 lvm[5328]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 2 on node 40a8e747
Nov 25 11:05:01 nebula1 lvm[5328]: Syncing device names
Nov 25 11:05:01 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 11:05:01 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 11:05:01 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 11:05:01 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811079 for 0. len 29
Nov 25 11:05:01 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x164fa60. client=0x6a1d60, msg=0x7fff1dfbce5c, len=29, csid=0x7fff1dfbb82c, xid=0
Nov 25 11:05:01 nebula1 lvm[5328]: process_work_item: remote
Nov 25 11:05:01 nebula1 lvm[5328]: process_remote_command LOCK_VG (0x33) for clientid 0x5000000 XID 4 on node 40a8e747
Nov 25 11:05:01 nebula1 lvm[5328]: do_lock_vg: resource 'P_#global', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 11:05:01 nebula1 lvm[5328]: Refreshing context
Nov 25 11:05:01 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 11:05:01 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 11:05:01 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 11:05:01 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811079 for 0. len 31
Nov 25 11:05:01 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x164fa60. client=0x6a1d60, msg=0x7fff1dfbce5c, len=31, csid=0x7fff1dfbb82c, xid=0
Nov 25 11:05:01 nebula1 lvm[5328]: process_work_item: remote
Nov 25 11:05:01 nebula1 lvm[5328]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 6 on node 40a8e747
Nov 25 11:05:01 nebula1 lvm[5328]: Syncing device names
Nov 25 11:05:01 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 11:05:01 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 11:05:01 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 11:05:01 nebula1 crmd[5038]: warning: status_from_rc: Action 70 (ONE-vg:2_start_0) on nebula2 failed (target: 0 vs. rc: 1): Error
Nov 25 11:05:01 nebula1 crmd[5038]: warning: update_failcount: Updating failcount for ONE-vg on nebula2 after failed start: rc=1 (update=INFINITY, time=1416909901)
Nov 25 11:05:01 nebula1 crmd[5038]: warning: update_failcount: Updating failcount for ONE-vg on nebula2 after failed start: rc=1 (update=INFINITY, time=1416909901)
Nov 25 11:05:01 nebula1 crmd[5038]: notice: run_graph: Transition 5 (Complete=20, Pending=0, Fired=0, Skipped=5, Incomplete=1, Source=/var/lib/pacemaker/pengine/pe-input-2.bz2): Stopped
Nov 25 11:05:01 nebula1 pengine[5037]: warning: unpack_rsc_op: Processing failed op start for ONE-vg:0 on nebula2: unknown error (1)
Nov 25 11:05:01 nebula1 pengine[5037]: notice: LogActions: Recover ONE-vg:0#011(Started nebula2)
Nov 25 11:05:01 nebula1 pengine[5037]: notice: LogActions: Start ONE-Datastores:0#011(nebula2)
Nov 25 11:05:01 nebula1 pengine[5037]: notice: process_pe_message: Calculated Transition 6: /var/lib/pacemaker/pengine/pe-input-3.bz2
Nov 25 11:05:01 nebula1 pengine[5037]: warning: unpack_rsc_op: Processing failed op start for ONE-vg:0 on nebula2: unknown error (1)
Nov 25 11:05:01 nebula1 pengine[5037]: warning: common_apply_stickiness: Forcing ONE-Storage-Clone away from nebula2 after 1000000 failures (max=1000000)
Nov 25 11:05:01 nebula1 pengine[5037]: message repeated 4 times: [ warning: common_apply_stickiness: Forcing ONE-Storage-Clone away from nebula2 after 1000000 failures (max=1000000)]
Nov 25 11:05:01 nebula1 pengine[5037]: notice: LogActions: Stop dlm:0#011(nebula2)
Nov 25 11:05:01 nebula1 pengine[5037]: notice: LogActions: Stop clvm:0#011(nebula2)
Nov 25 11:05:01 nebula1 pengine[5037]: notice: LogActions: Stop ONE-vg:0#011(nebula2)
Nov 25 11:05:01 nebula1 pengine[5037]: notice: process_pe_message: Calculated Transition 7: /var/lib/pacemaker/pengine/pe-input-4.bz2
Nov 25 11:05:01 nebula1 pengine[5037]: warning: unpack_rsc_op: Processing failed op start for ONE-vg:0 on nebula2: unknown error (1)
Nov 25 11:05:01 nebula1 pengine[5037]: warning: common_apply_stickiness: Forcing ONE-Storage-Clone away from nebula2 after 1000000 failures (max=1000000)
Nov 25 11:05:01 nebula1 pengine[5037]: message repeated 4 times: [ warning: common_apply_stickiness: Forcing ONE-Storage-Clone away from nebula2 after 1000000 failures (max=1000000)]
Nov 25 11:05:01 nebula1 pengine[5037]: notice: LogActions: Stop dlm:0#011(nebula2)
Nov 25 11:05:01 nebula1 pengine[5037]: notice: LogActions: Stop clvm:0#011(nebula2)
Nov 25 11:05:01 nebula1 pengine[5037]: notice: LogActions: Stop ONE-vg:0#011(nebula2)
Nov 25 11:05:01 nebula1 pengine[5037]: notice: process_pe_message: Calculated Transition 8: /var/lib/pacemaker/pengine/pe-input-5.bz2
Nov 25 11:05:01 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 3: stop ONE-vg_stop_0 on nebula2
Nov 25 11:05:02 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811079 for 0. len 31
Nov 25 11:05:02 nebula1 lvm[5328]: add_to_lvmqueue: cmd=0x164fa60. client=0x6a1d60, msg=0x7fff1dfbce5c, len=31, csid=0x7fff1dfbb82c, xid=0
Nov 25 11:05:02 nebula1 lvm[5328]: process_work_item: remote
Nov 25 11:05:02 nebula1 lvm[5328]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 9 on node 40a8e747
Nov 25 11:05:02 nebula1 lvm[5328]: Syncing device names
Nov 25 11:05:02 nebula1 lvm[5328]: LVM thread waiting for work
Nov 25 11:05:02 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 11:05:02 nebula1 lvm[5328]: 1084811078 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 11:05:02 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 35: stop clvm_stop_0 on nebula2
Nov 25 11:05:02 nebula1 lvm[5328]: confchg callback. 0 joined, 1 left, 2 members
Nov 25 11:05:03 nebula1 crmd[5038]: notice: te_rsc_command: Initiating action 34: stop dlm_stop_0 on nebula2
Nov 25 11:05:05 nebula1 crmd[5038]: notice: run_graph: Transition 8 (Complete=8, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-5.bz2): Complete
Nov 25 11:05:05 nebula1 crmd[5038]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: cluster-debug-nebula2.log
URL: <https://lists.clusterlabs.org/pipermail/pacemaker/attachments/20141125/90ab8ec5/attachment-0026.log>
-------------- next part --------------
nebula1 is DC
=============
Start corosync
--------------
Nov 25 10:48:43 nebula3 corosync[6219]: [MAIN ] Corosync Cluster Engine ('2.3.3'): started and ready to provide service.
Nov 25 10:48:43 nebula3 corosync[6219]: [MAIN ] Corosync built-in features: dbus testagents rdma watchdog augeas pie relro bindnow
Nov 25 10:48:43 nebula3 corosync[6220]: [TOTEM ] Initializing transport (UDP/IP Unicast).
Nov 25 10:48:43 nebula3 corosync[6220]: [TOTEM ] Initializing transmit/receive security (NSS) crypto: aes256 hash: sha256
Nov 25 10:48:43 nebula3 corosync[6220]: [TOTEM ] The network interface [192.168.231.72] is now up.
Nov 25 10:48:43 nebula3 corosync[6220]: [SERV ] Service engine loaded: corosync configuration map access [0]
Nov 25 10:48:43 nebula3 corosync[6220]: [QB ] server name: cmap
Nov 25 10:48:43 nebula3 corosync[6220]: [SERV ] Service engine loaded: corosync configuration service [1]
Nov 25 10:48:43 nebula3 corosync[6220]: [QB ] server name: cfg
Nov 25 10:48:43 nebula3 corosync[6220]: [SERV ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Nov 25 10:48:43 nebula3 corosync[6220]: [QB ] server name: cpg
Nov 25 10:48:43 nebula3 corosync[6220]: [SERV ] Service engine loaded: corosync profile loading service [4]
Nov 25 10:48:43 nebula3 corosync[6220]: [WD ] Watchdog is now been tickled by corosync.
Nov 25 10:48:43 nebula3 corosync[6220]: [WD ] no resources configured.
Nov 25 10:48:43 nebula3 corosync[6220]: [SERV ] Service engine loaded: corosync watchdog service [7]
Nov 25 10:48:43 nebula3 corosync[6220]: [QUORUM] Using quorum provider corosync_votequorum
Nov 25 10:48:43 nebula3 corosync[6220]: [QUORUM] Waiting for all cluster members. Current votes: 1 expected_votes: 3
Nov 25 10:48:43 nebula3 corosync[6220]: [SERV ] Service engine loaded: corosync vote quorum service v1.0 [5]
Nov 25 10:48:43 nebula3 corosync[6220]: [QB ] server name: votequorum
Nov 25 10:48:43 nebula3 corosync[6220]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Nov 25 10:48:43 nebula3 corosync[6220]: [QB ] server name: quorum
Nov 25 10:48:43 nebula3 corosync[6220]: [TOTEM ] adding new UDPU member {192.168.231.70}
Nov 25 10:48:43 nebula3 corosync[6220]: [TOTEM ] adding new UDPU member {192.168.231.71}
Nov 25 10:48:43 nebula3 corosync[6220]: [TOTEM ] adding new UDPU member {192.168.231.72}
Nov 25 10:48:43 nebula3 corosync[6220]: [TOTEM ] adding new UDPU member {192.168.231.110}
Nov 25 10:48:43 nebula3 corosync[6220]: [TOTEM ] adding new UDPU member {192.168.231.111}
Nov 25 10:48:43 nebula3 corosync[6220]: [TOTEM ] A new membership (192.168.231.72:81376) was formed. Members joined: 1084811080
Nov 25 10:48:43 nebula3 corosync[6220]: [QUORUM] Waiting for all cluster members. Current votes: 1 expected_votes: 3
Nov 25 10:48:43 nebula3 corosync[6220]: message repeated 2 times: [ [QUORUM] Waiting for all cluster members. Current votes: 1 expected_votes: 3]
Nov 25 10:48:43 nebula3 corosync[6220]: [QUORUM] Members[1]: 1084811080
Nov 25 10:48:43 nebula3 corosync[6220]: [MAIN ] Completed service synchronization, ready to provide service.
Nov 25 10:48:43 nebula3 corosync[6220]: [TOTEM ] A new membership (192.168.231.70:81392) was formed. Members joined: 1084811078 1084811079
Nov 25 10:48:43 nebula3 corosync[6220]: [QUORUM] Waiting for all cluster members. Current votes: 2 expected_votes: 3
Nov 25 10:48:43 nebula3 corosync[6220]: [QUORUM] Waiting for all cluster members. Current votes: 2 expected_votes: 3
Nov 25 10:48:43 nebula3 corosync[6220]: [QUORUM] This node is within the primary component and will provide service.
Nov 25 10:48:43 nebula3 corosync[6220]: [QUORUM] Members[3]: 1084811078 1084811079 1084811080
Nov 25 10:48:43 nebula3 corosync[6220]: [MAIN ] Completed service synchronization, ready to provide service.
Start pacemaker
---------------
Nov 25 10:49:31 nebula3 pacemakerd[6229]: notice: mcp_read_config: Configured corosync to accept connections from group 114: OK (1)
Nov 25 10:49:31 nebula3 pacemakerd[6229]: notice: main: Starting Pacemaker 1.1.10 (Build: 42f2063): generated-manpages agent-manpages ncurses libqb-logging libqb-ipc lha-fencing upstart nagios heartbeat corosync-native snmp libesmtp
Nov 25 10:49:31 nebula3 pacemakerd[6229]: notice: cluster_connect_quorum: Quorum acquired
Nov 25 10:49:31 nebula3 pacemakerd[6229]: notice: corosync_node_name: Unable to get node name for nodeid 1084811080
Nov 25 10:49:31 nebula3 pacemakerd[6229]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:31 nebula3 pacemakerd[6229]: notice: corosync_node_name: Unable to get node name for nodeid 1084811078
Nov 25 10:49:31 nebula3 pacemakerd[6229]: notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[1084811078] - state is now member (was (null))
Nov 25 10:49:31 nebula3 pacemakerd[6229]: notice: corosync_node_name: Unable to get node name for nodeid 1084811079
Nov 25 10:49:31 nebula3 pacemakerd[6229]: notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[1084811079] - state is now member (was (null))
Nov 25 10:49:31 nebula3 pacemakerd[6229]: notice: crm_update_peer_state: pcmk_quorum_notification: Node nebula3[1084811080] - state is now member (was (null))
Nov 25 10:49:31 nebula3 attrd[6234]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Nov 25 10:49:31 nebula3 stonith-ng[6232]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Nov 25 10:49:31 nebula3 crmd[6236]: notice: main: CRM Git Version: 42f2063
Nov 25 10:49:31 nebula3 attrd[6234]: notice: corosync_node_name: Unable to get node name for nodeid 1084811080
Nov 25 10:49:31 nebula3 attrd[6234]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:31 nebula3 attrd[6234]: notice: main: Starting mainloop...
Nov 25 10:49:31 nebula3 stonith-ng[6232]: notice: corosync_node_name: Unable to get node name for nodeid 1084811080
Nov 25 10:49:31 nebula3 stonith-ng[6232]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:31 nebula3 cib[6231]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Nov 25 10:49:31 nebula3 cib[6231]: notice: corosync_node_name: Unable to get node name for nodeid 1084811080
Nov 25 10:49:31 nebula3 cib[6231]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:32 nebula3 crmd[6236]: notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Nov 25 10:49:32 nebula3 stonith-ng[6232]: notice: setup_cib: Watching for stonith topology changes
Nov 25 10:49:32 nebula3 crmd[6236]: notice: corosync_node_name: Unable to get node name for nodeid 1084811080
Nov 25 10:49:32 nebula3 crmd[6236]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:32 nebula3 stonith-ng[6232]: notice: corosync_node_name: Unable to get node name for nodeid 1084811080
Nov 25 10:49:32 nebula3 stonith-ng[6232]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:32 nebula3 crmd[6236]: notice: cluster_connect_quorum: Quorum acquired
Nov 25 10:49:32 nebula3 crmd[6236]: notice: corosync_node_name: Unable to get node name for nodeid 1084811078
Nov 25 10:49:32 nebula3 crmd[6236]: notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[1084811078] - state is now member (was (null))
Nov 25 10:49:32 nebula3 crmd[6236]: notice: corosync_node_name: Unable to get node name for nodeid 1084811079
Nov 25 10:49:32 nebula3 crmd[6236]: notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[1084811079] - state is now member (was (null))
Nov 25 10:49:32 nebula3 crmd[6236]: notice: crm_update_peer_state: pcmk_quorum_notification: Node nebula3[1084811080] - state is now member (was (null))
Nov 25 10:49:32 nebula3 crmd[6236]: notice: corosync_node_name: Unable to get node name for nodeid 1084811080
Nov 25 10:49:32 nebula3 crmd[6236]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:32 nebula3 crmd[6236]: notice: do_started: The local CRM is operational
Nov 25 10:49:32 nebula3 crmd[6236]: notice: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Nov 25 10:49:33 nebula3 stonith-ng[6232]: notice: stonith_device_register: Added 'Stonith-nebula1-IPMILAN' to the device list (1 active devices)
Nov 25 10:49:34 nebula3 stonith-ng[6232]: notice: stonith_device_register: Added 'Stonith-nebula2-IPMILAN' to the device list (2 active devices)
Nov 25 10:49:35 nebula3 stonith-ng[6232]: notice: stonith_device_register: Added 'Stonith-ONE-Frontend' to the device list (3 active devices)
Nov 25 10:49:43 nebula3 cib[6231]: notice: corosync_node_name: Unable to get node name for nodeid 1084811080
Nov 25 10:49:43 nebula3 cib[6231]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:43 nebula3 attrd[6234]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Nov 25 10:49:43 nebula3 crmd[6236]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Nov 25 10:49:43 nebula3 crmd[6236]: notice: process_lrm_event: LRM operation dlm_monitor_0 (call=34, rc=7, cib-update=13, confirmed=true) not running
Nov 25 10:49:43 nebula3 stonith-ng[6232]: notice: can_fence_host_with_device: Stonith-nebula1-IPMILAN can not fence one-frontend: static-list
Nov 25 10:49:43 nebula3 stonith-ng[6232]: notice: can_fence_host_with_device: Stonith-nebula2-IPMILAN can not fence one-frontend: static-list
Nov 25 10:49:43 nebula3 stonith-ng[6232]: notice: can_fence_host_with_device: Stonith-ONE-Frontend can fence one-frontend: static-list
Nov 25 10:49:43 nebula3 crmd[6236]: notice: process_lrm_event: LRM operation clvm_monitor_0 (call=39, rc=7, cib-update=14, confirmed=true) not running
Nov 25 10:49:43 nebula3 LVM(ONE-vg)[6250]: WARNING: LVM Volume one-fs is not available (stopped)
Nov 25 10:49:43 nebula3 LVM(ONE-vg)[6250]: INFO: LVM Volume one-fs is offline
Nov 25 10:49:43 nebula3 crmd[6236]: notice: process_lrm_event: LRM operation ONE-vg_monitor_0 (call=44, rc=7, cib-update=15, confirmed=true) not running
Nov 25 10:49:43 nebula3 Filesystem(ONE-Datastores)[6306]: WARNING: Couldn't find device [/dev/one-fs/datastores]. Expected /dev/??? to exist
Nov 25 10:49:43 nebula3 VirtualDomain(Quorum-Node-VM)[6247]: INFO: Configuration file /var/lib/libvirt/qemu/pcmk/quorum.xml not readable during probe.
Nov 25 10:49:43 nebula3 VirtualDomain(ONE-Frontend-VM)[6246]: INFO: Configuration file /var/lib/one/datastores/one/one.xml not readable during probe.
Nov 25 10:49:43 nebula3 crmd[6236]: notice: process_lrm_event: LRM operation Quorum-Node-VM_monitor_0 (call=29, rc=7, cib-update=16, confirmed=true) not running
Nov 25 10:49:43 nebula3 crmd[6236]: notice: process_lrm_event: LRM operation ONE-Frontend-VM_monitor_0 (call=21, rc=7, cib-update=17, confirmed=true) not running
Nov 25 10:49:43 nebula3 crmd[6236]: notice: process_lrm_event: LRM operation ONE-Datastores_monitor_0 (call=56, rc=7, cib-update=18, confirmed=true) not running
Nov 25 10:49:43 nebula3 attrd[6234]: notice: corosync_node_name: Unable to get node name for nodeid 1084811080
Nov 25 10:49:43 nebula3 attrd[6234]: notice: get_node_name: Defaulting to uname -n for the local corosync node name
Nov 25 10:49:43 nebula3 attrd[6234]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Nov 25 10:49:43 nebula3 attrd[6234]: notice: attrd_perform_update: Sent update 5: probe_complete=true
Nov 25 10:49:46 nebula3 stonith-ng[6232]: notice: remote_op_done: Operation reboot of one-frontend by nebula1 for crmd.5038 at nebula1.98290389: OK
Nov 25 10:49:46 nebula3 crmd[6236]: notice: tengine_stonith_notify: Peer one-frontend was terminated (reboot) by nebula1 for nebula1: OK (ref=98290389-2232-4ba1-b4dd-ed4a4d7f46a1) by client crmd.5038
Nov 25 10:49:46 nebula3 crmd[6236]: notice: crm_update_peer_state: tengine_stonith_notify: Node one-frontend[0] - state is now lost (was (null))
Nov 25 10:49:46 nebula3 stonith-ng[6232]: notice: can_fence_host_with_device: Stonith-nebula1-IPMILAN can not fence quorum: static-list
Nov 25 10:49:46 nebula3 stonith-ng[6232]: notice: can_fence_host_with_device: Stonith-nebula2-IPMILAN can not fence quorum: static-list
Nov 25 10:49:46 nebula3 stonith-ng[6232]: notice: can_fence_host_with_device: Stonith-ONE-Frontend can not fence quorum: static-list
Nov 25 10:49:50 nebula3 stonith-ng[6232]: notice: remote_op_done: Operation reboot of quorum by nebula1 for crmd.5038 at nebula1.281581d8: OK
Nov 25 10:49:50 nebula3 crmd[6236]: notice: tengine_stonith_notify: Peer quorum was terminated (reboot) by nebula1 for nebula1: OK (ref=281581d8-fecf-4588-808f-1ead04b7cdd6) by client crmd.5038
Nov 25 10:49:50 nebula3 crmd[6236]: notice: crm_update_peer_state: tengine_stonith_notify: Node quorum[0] - state is now lost (was (null))
Nov 25 10:49:51 nebula3 stonith-ng[6232]: notice: stonith_device_register: Device 'Stonith-nebula2-IPMILAN' already existed in device list (3 active devices)
Nov 25 10:49:52 nebula3 stonith-ng[6232]: error: crm_abort: crm_glib_handler: Forked child 6409 to record non-fatal assert at logging.c:63 : Source ID 15 was not found when attempting to remove it
Nov 25 10:49:52 nebula3 crmd[6236]: notice: process_lrm_event: LRM operation Stonith-nebula2-IPMILAN_start_0 (call=62, rc=0, cib-update=19, confirmed=true) ok
Nov 25 10:49:52 nebula3 stonith-ng[6232]: error: crm_abort: crm_glib_handler: Forked child 6410 to record non-fatal assert at logging.c:63 : Source ID 16 was not found when attempting to remove it
Nov 25 10:49:54 nebula3 stonith-ng[6232]: error: crm_abort: crm_glib_handler: Forked child 6425 to record non-fatal assert at logging.c:63 : Source ID 17 was not found when attempting to remove it
Nov 25 10:49:54 nebula3 crmd[6236]: notice: process_lrm_event: LRM operation Stonith-nebula2-IPMILAN_monitor_1800000 (call=65, rc=0, cib-update=20, confirmed=false) ok
Nov 25 10:49:54 nebula3 stonith-ng[6232]: error: crm_abort: crm_glib_handler: Forked child 6426 to record non-fatal assert at logging.c:63 : Source ID 18 was not found when attempting to remove it
Start resource ONE-Storage-Clone
--------------------------------
Nov 25 10:50:47 nebula3 cibadmin[6435]: notice: crm_log_args: Invoked: cibadmin -p -R -o resources
Nov 25 10:50:47 nebula3 kernel: [ 691.772162] sctp: Hash tables configured (established 65536 bind 65536)
Nov 25 10:50:47 nebula3 kernel: [ 691.787265] DLM installed
Nov 25 10:50:47 nebula3 dlm_controld[6465]: 691 dlm_controld 4.0.1 started
Nov 25 10:50:48 nebula3 crmd[6236]: notice: process_lrm_event: LRM operation dlm_start_0 (call=68, rc=0, cib-update=21, confirmed=true) ok
Nov 25 10:50:48 nebula3 crmd[6236]: notice: process_lrm_event: LRM operation dlm_monitor_60000 (call=71, rc=0, cib-update=22, confirmed=false) ok
Nov 25 10:50:48 nebula3 clvmd(clvm)[6474]: INFO: Starting clvm
Nov 25 10:50:48 nebula3 clvmd[6490]: CLVMD started
Nov 25 10:50:48 nebula3 kernel: [ 692.909127] dlm: Using TCP for communications
Nov 25 10:50:48 nebula3 kernel: [ 692.919584] dlm: connecting to 1084811078
Nov 25 10:50:48 nebula3 kernel: [ 692.921043] dlm: got connection from 1084811078
Nov 25 10:50:48 nebula3 kernel: [ 692.970302] dlm: connecting to 1084811079
Nov 25 10:50:48 nebula3 kernel: [ 692.970639] dlm: got connection from 1084811079
Nov 25 10:50:49 nebula3 clvmd[6490]: Created DLM lockspace for CLVMD.
Nov 25 10:50:49 nebula3 clvmd[6490]: DLM initialisation complete
Nov 25 10:50:49 nebula3 clvmd[6490]: Our local node id is 1084811080
Nov 25 10:50:49 nebula3 clvmd[6490]: Connected to Corosync
Nov 25 10:50:49 nebula3 clvmd[6490]: Cluster LVM daemon started - connected to Corosync
Nov 25 10:50:49 nebula3 clvmd[6490]: Cluster ready, doing some more initialisation
Nov 25 10:50:49 nebula3 clvmd[6490]: starting LVM thread
Nov 25 10:50:49 nebula3 clvmd[6490]: LVM thread function started
Nov 25 10:50:49 nebula3 lvm[6490]: Sub thread ready for work.
Nov 25 10:50:49 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 10:50:49 nebula3 lvm[6490]: clvmd ready for work
Nov 25 10:50:49 nebula3 lvm[6490]: Using timeout of 60 seconds
Nov 25 10:50:49 nebula3 lvm[6490]: confchg callback. 1 joined, 0 left, 2 members
Nov 25 10:50:49 nebula3 lvm[6490]: confchg callback. 1 joined, 0 left, 3 members
Nov 25 10:50:51 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811078 for 0. len 29
Nov 25 10:50:51 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6aa60. client=0x6a1d60, msg=0x7fffc04d1c6c, len=29, csid=0x7fffc04d063c, xid=0
Nov 25 10:50:51 nebula3 lvm[6490]: process_work_item: remote
Nov 25 10:50:51 nebula3 lvm[6490]: process_remote_command LOCK_VG (0x33) for clientid 0x5000000 XID 0 on node 40a8e746
Nov 25 10:50:51 nebula3 lvm[6490]: do_lock_vg: resource 'P_#global', cmd = 0x4 LCK_VG (WRITE|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:51 nebula3 lvm[6490]: Refreshing context
Nov 25 10:50:51 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811079 for 1084811078. len 18
Nov 25 10:50:51 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811080 for 1084811078. len 18
Nov 25 10:50:52 nebula3 lrmd[6233]: notice: operation_finished: clvm_start_0:6474:stderr [ local socket: connect failed: No such file or directory ]
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811078 for 0. len 31
Nov 25 10:50:52 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6aa60. client=0x6a1d60, msg=0x7fffc04d1c6c, len=31, csid=0x7fffc04d063c, xid=0
Nov 25 10:50:52 nebula3 lvm[6490]: process_work_item: remote
Nov 25 10:50:52 nebula3 lvm[6490]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 2 on node 40a8e746
Nov 25 10:50:52 nebula3 lvm[6490]: Syncing device names
Nov 25 10:50:52 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811079 for 1084811078. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811080 for 1084811078. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811078 for 0. len 31
Nov 25 10:50:52 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6aa60. client=0x6a1d60, msg=0x7fffc04d1c6c, len=31, csid=0x7fffc04d063c, xid=0
Nov 25 10:50:52 nebula3 lvm[6490]: process_work_item: remote
Nov 25 10:50:52 nebula3 lvm[6490]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 5 on node 40a8e746
Nov 25 10:50:52 nebula3 lvm[6490]: Syncing device names
Nov 25 10:50:52 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811079 for 1084811078. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811080 for 1084811078. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811078 for 0. len 29
Nov 25 10:50:52 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6aa60. client=0x6a1d60, msg=0x7fffc04d1c6c, len=29, csid=0x7fffc04d063c, xid=0
Nov 25 10:50:52 nebula3 lvm[6490]: process_work_item: remote
Nov 25 10:50:52 nebula3 lvm[6490]: process_remote_command LOCK_VG (0x33) for clientid 0x5000000 XID 7 on node 40a8e746
Nov 25 10:50:52 nebula3 lvm[6490]: do_lock_vg: resource 'P_#global', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula3 lvm[6490]: Refreshing context
Nov 25 10:50:52 nebula3 crmd[6236]: notice: process_lrm_event: LRM operation clvm_start_0 (call=73, rc=0, cib-update=23, confirmed=true) ok
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811079 for 1084811078. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811080 for 1084811078. len 18
Nov 25 10:50:52 nebula3 LVM(ONE-vg)[6512]: INFO: Activating volume group one-fs
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811078 for 0. len 84
Nov 25 10:50:52 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6aa60. client=0x6a1d60, msg=0x7fffc04d1c6c, len=84, csid=0x7fffc04d063c, xid=0
Nov 25 10:50:52 nebula3 lvm[6490]: process_work_item: remote
Nov 25 10:50:52 nebula3 lvm[6490]: process_remote_command LOCK_QUERY (0x34) for clientid 0x5000000 XID 9 on node 40a8e746
Nov 25 10:50:52 nebula3 lvm[6490]: do_lock_query: resource 'PSXBJgdbJb55UdFcI48VOdE6voIrDm71exNu0QeKultyW71LS8DjWLEdnpgtovv9', mode -1 (?)
Nov 25 10:50:52 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811079 for 1084811078. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811080 for 1084811078. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: Got new connection on fd 12
Nov 25 10:50:52 nebula3 lvm[6490]: Read on local socket 12, len = 29
Nov 25 10:50:52 nebula3 lvm[6490]: check_all_clvmds_running
Nov 25 10:50:52 nebula3 lvm[6490]: down_callback. node 1084811079, state = 3
Nov 25 10:50:52 nebula3 lvm[6490]: down_callback. node 1084811078, state = 3
Nov 25 10:50:52 nebula3 lvm[6490]: down_callback. node 1084811080, state = 3
Nov 25 10:50:52 nebula3 lvm[6490]: creating pipe, [13, 14]
Nov 25 10:50:52 nebula3 lvm[6490]: Creating pre&post thread
Nov 25 10:50:52 nebula3 lvm[6490]: Created pre&post thread, state = 0
Nov 25 10:50:52 nebula3 lvm[6490]: in sub thread: client = 0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: doing PRE command LOCK_VG 'P_#global' at 4 (client=0x1a6aa60)
Nov 25 10:50:52 nebula3 lvm[6490]: lock_resource 'P_#global', flags=0, mode=4
Nov 25 10:50:52 nebula3 lvm[6490]: lock_resource returning 0, lock_id=1
Nov 25 10:50:52 nebula3 lvm[6490]: Writing status 0 down pipe 14
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula3 lvm[6490]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: distribute command: XID = 0, flags=0x0 ()
Nov 25 10:50:52 nebula3 lvm[6490]: num_nodes = 3
Nov 25 10:50:52 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6b040. client=0x1a6aa60, msg=0x1a6ab70, len=29, csid=(nil), xid=0
Nov 25 10:50:52 nebula3 lvm[6490]: Sending message to all cluster nodes
Nov 25 10:50:52 nebula3 lvm[6490]: process_work_item: local
Nov 25 10:50:52 nebula3 lvm[6490]: process_local_command: LOCK_VG (0x33) msg=0x1a6ade0, msglen =29, client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: do_lock_vg: resource 'P_#global', cmd = 0x4 LCK_VG (WRITE|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula3 lvm[6490]: Refreshing context
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811080 for 0. len 29
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811078 for 0. len 31
Nov 25 10:50:52 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ae10. client=0x6a1d60, msg=0x7fffc04d1c6c, len=31, csid=0x7fffc04d063c, xid=0
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811079 for 1084811080. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: Reply from node 40a8e747: 0 bytes
Nov 25 10:50:52 nebula3 lvm[6490]: Got 1 replies, expecting: 3
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811079 for 1084811078. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 10:50:52 nebula3 lvm[6490]: Got 2 replies, expecting: 3
Nov 25 10:50:52 nebula3 lvm[6490]: process_work_item: remote
Nov 25 10:50:52 nebula3 lvm[6490]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 11 on node 40a8e746
Nov 25 10:50:52 nebula3 lvm[6490]: Syncing device names
Nov 25 10:50:52 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811078 for 1084811080. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: Reply from node 40a8e746: 0 bytes
Nov 25 10:50:52 nebula3 lvm[6490]: Got 3 replies, expecting: 3
Nov 25 10:50:52 nebula3 lvm[6490]: Got post command condition...
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 10:50:52 nebula3 lvm[6490]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: Send local reply
Nov 25 10:50:52 nebula3 crmd[6236]: notice: process_lrm_event: LRM operation clvm_monitor_60000 (call=77, rc=0, cib-update=24, confirmed=false) ok
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811080 for 1084811078. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: Read on local socket 12, len = 28
Nov 25 10:50:52 nebula3 lvm[6490]: Got pre command condition...
Nov 25 10:50:52 nebula3 lvm[6490]: doing PRE command LOCK_VG 'V_one-fs' at 1 (client=0x1a6aa60)
Nov 25 10:50:52 nebula3 lvm[6490]: lock_resource 'V_one-fs', flags=0, mode=3
Nov 25 10:50:52 nebula3 lvm[6490]: lock_resource returning 0, lock_id=2
Nov 25 10:50:52 nebula3 lvm[6490]: Writing status 0 down pipe 14
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula3 lvm[6490]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: distribute command: XID = 1, flags=0x1 (LOCAL)
Nov 25 10:50:52 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ade0. client=0x1a6aa60, msg=0x1a6ab70, len=28, csid=(nil), xid=1
Nov 25 10:50:52 nebula3 lvm[6490]: process_work_item: local
Nov 25 10:50:52 nebula3 lvm[6490]: process_local_command: LOCK_VG (0x33) msg=0x1a6ae20, msglen =28, client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: do_lock_vg: resource 'V_one-fs', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula3 lvm[6490]: Invalidating cached metadata for VG one-fs
Nov 25 10:50:52 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 10:50:52 nebula3 lvm[6490]: Got 1 replies, expecting: 1
Nov 25 10:50:52 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 10:50:52 nebula3 lvm[6490]: Got post command condition...
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 10:50:52 nebula3 lvm[6490]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: Send local reply
Nov 25 10:50:52 nebula3 lvm[6490]: Read on local socket 12, len = 31
Nov 25 10:50:52 nebula3 lvm[6490]: check_all_clvmds_running
Nov 25 10:50:52 nebula3 lvm[6490]: down_callback. node 1084811079, state = 3
Nov 25 10:50:52 nebula3 lvm[6490]: down_callback. node 1084811078, state = 3
Nov 25 10:50:52 nebula3 lvm[6490]: down_callback. node 1084811080, state = 3
Nov 25 10:50:52 nebula3 lvm[6490]: Got pre command condition...
Nov 25 10:50:52 nebula3 lvm[6490]: Writing status 0 down pipe 14
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula3 lvm[6490]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: distribute command: XID = 2, flags=0x0 ()
Nov 25 10:50:52 nebula3 lvm[6490]: num_nodes = 3
Nov 25 10:50:52 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6b040. client=0x1a6aa60, msg=0x1a6ab70, len=31, csid=(nil), xid=2
Nov 25 10:50:52 nebula3 lvm[6490]: Sending message to all cluster nodes
Nov 25 10:50:52 nebula3 lvm[6490]: process_work_item: local
Nov 25 10:50:52 nebula3 lvm[6490]: process_local_command: SYNC_NAMES (0x2d) msg=0x1a6ade0, msglen =31, client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: Syncing device names
Nov 25 10:50:52 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 10:50:52 nebula3 lvm[6490]: Got 1 replies, expecting: 3
Nov 25 10:50:52 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811080 for 0. len 31
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811079 for 1084811080. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: Reply from node 40a8e747: 0 bytes
Nov 25 10:50:52 nebula3 lvm[6490]: Got 2 replies, expecting: 3
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811078 for 1084811080. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: Reply from node 40a8e746: 0 bytes
Nov 25 10:50:52 nebula3 lvm[6490]: Got 3 replies, expecting: 3
Nov 25 10:50:52 nebula3 lvm[6490]: Got post command condition...
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 10:50:52 nebula3 lvm[6490]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: Send local reply
Nov 25 10:50:52 nebula3 lvm[6490]: Read on local socket 12, len = 28
Nov 25 10:50:52 nebula3 lvm[6490]: Got pre command condition...
Nov 25 10:50:52 nebula3 lvm[6490]: doing PRE command LOCK_VG 'V_one-fs' at 6 (client=0x1a6aa60)
Nov 25 10:50:52 nebula3 lvm[6490]: unlock_resource: V_one-fs lockid: 2
Nov 25 10:50:52 nebula3 lvm[6490]: Writing status 0 down pipe 14
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula3 lvm[6490]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: distribute command: XID = 3, flags=0x1 (LOCAL)
Nov 25 10:50:52 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ade0. client=0x1a6aa60, msg=0x1a6ab70, len=28, csid=(nil), xid=3
Nov 25 10:50:52 nebula3 lvm[6490]: process_work_item: local
Nov 25 10:50:52 nebula3 lvm[6490]: process_local_command: LOCK_VG (0x33) msg=0x1a6ae20, msglen =28, client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: do_lock_vg: resource 'V_one-fs', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula3 lvm[6490]: Invalidating cached metadata for VG one-fs
Nov 25 10:50:52 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 10:50:52 nebula3 lvm[6490]: Got 1 replies, expecting: 1
Nov 25 10:50:52 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 10:50:52 nebula3 lvm[6490]: Got post command condition...
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 10:50:52 nebula3 lvm[6490]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: Send local reply
Nov 25 10:50:52 nebula3 lvm[6490]: Read on local socket 12, len = 32
Nov 25 10:50:52 nebula3 lvm[6490]: Got pre command condition...
Nov 25 10:50:52 nebula3 lvm[6490]: doing PRE command LOCK_VG 'V_nebula3-vg' at 1 (client=0x1a6aa60)
Nov 25 10:50:52 nebula3 lvm[6490]: lock_resource 'V_nebula3-vg', flags=0, mode=3
Nov 25 10:50:52 nebula3 lvm[6490]: lock_resource returning 0, lock_id=2
Nov 25 10:50:52 nebula3 lvm[6490]: Writing status 0 down pipe 14
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula3 lvm[6490]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: distribute command: XID = 4, flags=0x1 (LOCAL)
Nov 25 10:50:52 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ade0. client=0x1a6aa60, msg=0x1a6ab70, len=32, csid=(nil), xid=4
Nov 25 10:50:52 nebula3 lvm[6490]: process_work_item: local
Nov 25 10:50:52 nebula3 lvm[6490]: process_local_command: LOCK_VG (0x33) msg=0x1a6ae20, msglen =32, client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: do_lock_vg: resource 'V_nebula3-vg', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula3 lvm[6490]: Invalidating cached metadata for VG nebula3-vg
Nov 25 10:50:52 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 10:50:52 nebula3 lvm[6490]: Got 1 replies, expecting: 1
Nov 25 10:50:52 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 10:50:52 nebula3 lvm[6490]: Got post command condition...
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 10:50:52 nebula3 lvm[6490]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: Send local reply
Nov 25 10:50:52 nebula3 lvm[6490]: Read on local socket 12, len = 31
Nov 25 10:50:52 nebula3 lvm[6490]: check_all_clvmds_running
Nov 25 10:50:52 nebula3 lvm[6490]: down_callback. node 1084811079, state = 3
Nov 25 10:50:52 nebula3 lvm[6490]: down_callback. node 1084811078, state = 3
Nov 25 10:50:52 nebula3 lvm[6490]: down_callback. node 1084811080, state = 3
Nov 25 10:50:52 nebula3 lvm[6490]: Got pre command condition...
Nov 25 10:50:52 nebula3 lvm[6490]: Writing status 0 down pipe 14
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula3 lvm[6490]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: distribute command: XID = 5, flags=0x0 ()
Nov 25 10:50:52 nebula3 lvm[6490]: num_nodes = 3
Nov 25 10:50:52 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6b040. client=0x1a6aa60, msg=0x1a6ab70, len=31, csid=(nil), xid=5
Nov 25 10:50:52 nebula3 lvm[6490]: Sending message to all cluster nodes
Nov 25 10:50:52 nebula3 lvm[6490]: process_work_item: local
Nov 25 10:50:52 nebula3 lvm[6490]: process_local_command: SYNC_NAMES (0x2d) msg=0x1a6ade0, msglen =31, client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: Syncing device names
Nov 25 10:50:52 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 10:50:52 nebula3 lvm[6490]: Got 1 replies, expecting: 3
Nov 25 10:50:52 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811080 for 0. len 31
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811079 for 1084811080. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: Reply from node 40a8e747: 0 bytes
Nov 25 10:50:52 nebula3 lvm[6490]: Got 2 replies, expecting: 3
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811078 for 1084811080. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: Reply from node 40a8e746: 0 bytes
Nov 25 10:50:52 nebula3 lvm[6490]: Got 3 replies, expecting: 3
Nov 25 10:50:52 nebula3 lvm[6490]: Got post command condition...
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 10:50:52 nebula3 lvm[6490]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: Send local reply
Nov 25 10:50:52 nebula3 lvm[6490]: Read on local socket 12, len = 32
Nov 25 10:50:52 nebula3 lvm[6490]: Got pre command condition...
Nov 25 10:50:52 nebula3 lvm[6490]: doing PRE command LOCK_VG 'V_nebula3-vg' at 6 (client=0x1a6aa60)
Nov 25 10:50:52 nebula3 lvm[6490]: unlock_resource: V_nebula3-vg lockid: 2
Nov 25 10:50:52 nebula3 lvm[6490]: Writing status 0 down pipe 14
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula3 lvm[6490]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: distribute command: XID = 6, flags=0x1 (LOCAL)
Nov 25 10:50:52 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ade0. client=0x1a6aa60, msg=0x1a6ab70, len=32, csid=(nil), xid=6
Nov 25 10:50:52 nebula3 lvm[6490]: process_work_item: local
Nov 25 10:50:52 nebula3 lvm[6490]: process_local_command: LOCK_VG (0x33) msg=0x1a6ae20, msglen =32, client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: do_lock_vg: resource 'V_nebula3-vg', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula3 lvm[6490]: Invalidating cached metadata for VG nebula3-vg
Nov 25 10:50:52 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 10:50:52 nebula3 lvm[6490]: Got 1 replies, expecting: 1
Nov 25 10:50:52 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 10:50:52 nebula3 lvm[6490]: Got post command condition...
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 10:50:52 nebula3 lvm[6490]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: Send local reply
Nov 25 10:50:52 nebula3 lvm[6490]: Read on local socket 12, len = 29
Nov 25 10:50:52 nebula3 lvm[6490]: check_all_clvmds_running
Nov 25 10:50:52 nebula3 lvm[6490]: down_callback. node 1084811079, state = 3
Nov 25 10:50:52 nebula3 lvm[6490]: down_callback. node 1084811078, state = 3
Nov 25 10:50:52 nebula3 lvm[6490]: down_callback. node 1084811080, state = 3
Nov 25 10:50:52 nebula3 lvm[6490]: Got pre command condition...
Nov 25 10:50:52 nebula3 lvm[6490]: doing PRE command LOCK_VG 'P_#global' at 6 (client=0x1a6aa60)
Nov 25 10:50:52 nebula3 lvm[6490]: unlock_resource: P_#global lockid: 1
Nov 25 10:50:52 nebula3 lvm[6490]: Writing status 0 down pipe 14
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula3 lvm[6490]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: distribute command: XID = 7, flags=0x0 ()
Nov 25 10:50:52 nebula3 lvm[6490]: num_nodes = 3
Nov 25 10:50:52 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6b040. client=0x1a6aa60, msg=0x1a6ab70, len=29, csid=(nil), xid=7
Nov 25 10:50:52 nebula3 lvm[6490]: Sending message to all cluster nodes
Nov 25 10:50:52 nebula3 lvm[6490]: process_work_item: local
Nov 25 10:50:52 nebula3 lvm[6490]: process_local_command: LOCK_VG (0x33) msg=0x1a6ade0, msglen =29, client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: do_lock_vg: resource 'P_#global', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula3 lvm[6490]: Refreshing context
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811080 for 0. len 29
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811079 for 0. len 29
Nov 25 10:50:52 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ae10. client=0x6a1d60, msg=0x7fffc04d1c6c, len=29, csid=0x7fffc04d063c, xid=0
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811078 for 1084811080. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: Reply from node 40a8e746: 0 bytes
Nov 25 10:50:52 nebula3 lvm[6490]: Got 1 replies, expecting: 3
Nov 25 10:50:52 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 10:50:52 nebula3 lvm[6490]: Got 2 replies, expecting: 3
Nov 25 10:50:52 nebula3 lvm[6490]: process_work_item: remote
Nov 25 10:50:52 nebula3 lvm[6490]: process_remote_command LOCK_VG (0x33) for clientid 0x5000000 XID 0 on node 40a8e747
Nov 25 10:50:52 nebula3 lvm[6490]: do_lock_vg: resource 'P_#global', cmd = 0x4 LCK_VG (WRITE|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula3 lvm[6490]: Refreshing context
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811079 for 1084811080. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: Reply from node 40a8e747: 0 bytes
Nov 25 10:50:52 nebula3 lvm[6490]: Got 3 replies, expecting: 3
Nov 25 10:50:52 nebula3 lvm[6490]: Got post command condition...
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 10:50:52 nebula3 lvm[6490]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: Send local reply
Nov 25 10:50:52 nebula3 lvm[6490]: Read on local socket 12, len = 0
Nov 25 10:50:52 nebula3 lvm[6490]: EOF on local socket: inprogress=0
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting for child thread
Nov 25 10:50:52 nebula3 lvm[6490]: Got pre command condition...
Nov 25 10:50:52 nebula3 lvm[6490]: Subthread finished
Nov 25 10:50:52 nebula3 lvm[6490]: Joined child thread
Nov 25 10:50:52 nebula3 lvm[6490]: ret == 0, errno = 0. removing client
Nov 25 10:50:52 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ab70. client=0x1a6aa60, msg=(nil), len=0, csid=(nil), xid=7
Nov 25 10:50:52 nebula3 lvm[6490]: process_work_item: free fd -1
Nov 25 10:50:52 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 10:50:52 nebula3 LVM(ONE-vg)[6512]: INFO: Reading all physical volumes. This may take a while... Found volume group "one-fs" using metadata type lvm2 Found volume group "nebula3-vg" using metadata type lvm2
Nov 25 10:50:52 nebula3 lvm[6490]: Got new connection on fd 12
Nov 25 10:50:52 nebula3 lvm[6490]: Read on local socket 12, len = 28
Nov 25 10:50:52 nebula3 lvm[6490]: creating pipe, [13, 14]
Nov 25 10:50:52 nebula3 lvm[6490]: Creating pre&post thread
Nov 25 10:50:52 nebula3 lvm[6490]: Created pre&post thread, state = 0
Nov 25 10:50:52 nebula3 lvm[6490]: in sub thread: client = 0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: doing PRE command LOCK_VG 'V_one-fs' at 1 (client=0x1a6aa60)
Nov 25 10:50:52 nebula3 lvm[6490]: lock_resource 'V_one-fs', flags=0, mode=3
Nov 25 10:50:52 nebula3 lvm[6490]: lock_resource returning 0, lock_id=1
Nov 25 10:50:52 nebula3 lvm[6490]: Writing status 0 down pipe 14
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula3 lvm[6490]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: distribute command: XID = 8, flags=0x1 (LOCAL)
Nov 25 10:50:52 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ade0. client=0x1a6aa60, msg=0x1a6ab70, len=28, csid=(nil), xid=8
Nov 25 10:50:52 nebula3 lvm[6490]: process_work_item: local
Nov 25 10:50:52 nebula3 lvm[6490]: process_local_command: LOCK_VG (0x33) msg=0x1a6ae20, msglen =28, client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: do_lock_vg: resource 'V_one-fs', cmd = 0x1 LCK_VG (READ|VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Nov 25 10:50:52 nebula3 lvm[6490]: Invalidating cached metadata for VG one-fs
Nov 25 10:50:52 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 10:50:52 nebula3 lvm[6490]: Got 1 replies, expecting: 1
Nov 25 10:50:52 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 10:50:52 nebula3 lvm[6490]: Got post command condition...
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 10:50:52 nebula3 lvm[6490]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: Send local reply
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811079 for 0. len 31
Nov 25 10:50:52 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ade0. client=0x6a1d60, msg=0x7fffc04d1c6c, len=31, csid=0x7fffc04d063c, xid=0
Nov 25 10:50:52 nebula3 lvm[6490]: process_work_item: remote
Nov 25 10:50:52 nebula3 lvm[6490]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 2 on node 40a8e747
Nov 25 10:50:52 nebula3 lvm[6490]: Syncing device names
Nov 25 10:50:52 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811079 for 0. len 31
Nov 25 10:50:52 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ade0. client=0x6a1d60, msg=0x7fffc04d1c6c, len=31, csid=0x7fffc04d063c, xid=0
Nov 25 10:50:52 nebula3 lvm[6490]: process_work_item: remote
Nov 25 10:50:52 nebula3 lvm[6490]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 5 on node 40a8e747
Nov 25 10:50:52 nebula3 lvm[6490]: Syncing device names
Nov 25 10:50:52 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: Read on local socket 12, len = 84
Nov 25 10:50:52 nebula3 lvm[6490]: check_all_clvmds_running
Nov 25 10:50:52 nebula3 lvm[6490]: down_callback. node 1084811079, state = 3
Nov 25 10:50:52 nebula3 lvm[6490]: down_callback. node 1084811078, state = 3
Nov 25 10:50:52 nebula3 lvm[6490]: down_callback. node 1084811080, state = 3
Nov 25 10:50:52 nebula3 lvm[6490]: Got pre command condition...
Nov 25 10:50:52 nebula3 lvm[6490]: Writing status 0 down pipe 14
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: distribute command: XID = 9, flags=0x0 ()
Nov 25 10:50:52 nebula3 lvm[6490]: num_nodes = 3
Nov 25 10:50:52 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6b0a0. client=0x1a6aa60, msg=0x1a6ade0, len=84, csid=(nil), xid=9
Nov 25 10:50:52 nebula3 lvm[6490]: Sending message to all cluster nodes
Nov 25 10:50:52 nebula3 lvm[6490]: process_work_item: local
Nov 25 10:50:52 nebula3 lvm[6490]: process_local_command: LOCK_QUERY (0x34) msg=0x1a6ae40, msglen =84, client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: do_lock_query: resource 'PSXBJgdbJb55UdFcI48VOdE6voIrDm71exNu0QeKultyW71LS8DjWLEdnpgtovv9', mode -1 (?)
Nov 25 10:50:52 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 10:50:52 nebula3 lvm[6490]: Got 1 replies, expecting: 3
Nov 25 10:50:52 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811080 for 0. len 84
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811079 for 1084811080. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: Reply from node 40a8e747: 0 bytes
Nov 25 10:50:52 nebula3 lvm[6490]: Got 2 replies, expecting: 3
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811078 for 1084811080. len 21
Nov 25 10:50:52 nebula3 lvm[6490]: Reply from node 40a8e746: 3 bytes
Nov 25 10:50:52 nebula3 lvm[6490]: Got 3 replies, expecting: 3
Nov 25 10:50:52 nebula3 lvm[6490]: Got post command condition...
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 10:50:52 nebula3 lvm[6490]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: Send local reply
Nov 25 10:50:52 nebula3 lvm[6490]: Read on local socket 12, len = 84
Nov 25 10:50:52 nebula3 lvm[6490]: Got pre command condition...
Nov 25 10:50:52 nebula3 lvm[6490]: Writing status 0 down pipe 14
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula3 lvm[6490]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: distribute command: XID = 10, flags=0x1 (LOCAL)
Nov 25 10:50:52 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ae40. client=0x1a6aa60, msg=0x1a6ade0, len=84, csid=(nil), xid=10
Nov 25 10:50:52 nebula3 lvm[6490]: process_work_item: local
Nov 25 10:50:52 nebula3 lvm[6490]: process_local_command: LOCK_LV (0x32) msg=0x1a6b0a0, msglen =84, client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: do_lock_lv: resource 'PSXBJgdbJb55UdFcI48VOdE6voIrDm71exNu0QeKultyW71LS8DjWLEdnpgtovv9', cmd = 0x99 LCK_LV_ACTIVATE (READ|LV|NONBLOCK|CLUSTER_VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811079 for 0. len 29
Nov 25 10:50:52 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ae80. client=0x6a1d60, msg=0x7fffc04d1c6c, len=29, csid=0x7fffc04d063c, xid=0
Nov 25 10:50:52 nebula3 lvm[6490]: lock_resource 'PSXBJgdbJb55UdFcI48VOdE6voIrDm71exNu0QeKultyW71LS8DjWLEdnpgtovv9', flags=1, mode=1
Nov 25 10:50:52 nebula3 lvm[6490]: lock_resource returning 0, lock_id=2
Nov 25 10:50:52 nebula3 lvm[6490]: Command return is 0, critical_section is 0
Nov 25 10:50:52 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 10:50:52 nebula3 lvm[6490]: Got 1 replies, expecting: 1
Nov 25 10:50:52 nebula3 lvm[6490]: process_work_item: remote
Nov 25 10:50:52 nebula3 lvm[6490]: process_remote_command LOCK_VG (0x33) for clientid 0x5000000 XID 7 on node 40a8e747
Nov 25 10:50:52 nebula3 lvm[6490]: Got post command condition...
Nov 25 10:50:52 nebula3 lvm[6490]: do_lock_vg: resource 'P_#global', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 10:50:52 nebula3 lvm[6490]: Refreshing context
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 10:50:52 nebula3 lvm[6490]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: Send local reply
Nov 25 10:50:52 nebula3 lvm[6490]: Read on local socket 12, len = 31
Nov 25 10:50:52 nebula3 lvm[6490]: check_all_clvmds_running
Nov 25 10:50:52 nebula3 lvm[6490]: down_callback. node 1084811079, state = 3
Nov 25 10:50:52 nebula3 lvm[6490]: down_callback. node 1084811078, state = 3
Nov 25 10:50:52 nebula3 lvm[6490]: down_callback. node 1084811080, state = 3
Nov 25 10:50:52 nebula3 lvm[6490]: Got pre command condition...
Nov 25 10:50:52 nebula3 lvm[6490]: Writing status 0 down pipe 14
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula3 lvm[6490]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: distribute command: XID = 11, flags=0x0 ()
Nov 25 10:50:52 nebula3 lvm[6490]: num_nodes = 3
Nov 25 10:50:52 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6b120. client=0x1a6aa60, msg=0x1a6ae40, len=31, csid=(nil), xid=11
Nov 25 10:50:52 nebula3 lvm[6490]: Sending message to all cluster nodes
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811080 for 0. len 31
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811079 for 1084811080. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: Reply from node 40a8e747: 0 bytes
Nov 25 10:50:52 nebula3 lvm[6490]: Got 1 replies, expecting: 3
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811078 for 1084811080. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: Reply from node 40a8e746: 0 bytes
Nov 25 10:50:52 nebula3 lvm[6490]: Got 2 replies, expecting: 3
Nov 25 10:50:52 nebula3 lvm[6490]: process_work_item: local
Nov 25 10:50:52 nebula3 lvm[6490]: process_local_command: SYNC_NAMES (0x2d) msg=0x1a6ade0, msglen =31, client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: Syncing device names
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811079 for 0. len 84
Nov 25 10:50:52 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ae80. client=0x6a1d60, msg=0x7fffc04d1c6c, len=84, csid=0x7fffc04d063c, xid=0
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811078 for 1084811079. len 21
Nov 25 10:50:52 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 10:50:52 nebula3 lvm[6490]: Got 3 replies, expecting: 3
Nov 25 10:50:52 nebula3 lvm[6490]: process_work_item: remote
Nov 25 10:50:52 nebula3 lvm[6490]: process_remote_command LOCK_QUERY (0x34) for clientid 0x5000000 XID 9 on node 40a8e747
Nov 25 10:50:52 nebula3 lvm[6490]: do_lock_query: resource 'PSXBJgdbJb55UdFcI48VOdE6voIrDm71exNu0QeKultyW71LS8DjWLEdnpgtovv9', mode 1 (CR)
Nov 25 10:50:52 nebula3 lvm[6490]: Got post command condition...
Nov 25 10:50:52 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 10:50:52 nebula3 lvm[6490]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: Send local reply
Nov 25 10:50:52 nebula3 lvm[6490]: Read on local socket 12, len = 28
Nov 25 10:50:52 nebula3 lvm[6490]: Got pre command condition...
Nov 25 10:50:52 nebula3 lvm[6490]: doing PRE command LOCK_VG 'V_one-fs' at 6 (client=0x1a6aa60)
Nov 25 10:50:52 nebula3 lvm[6490]: unlock_resource: V_one-fs lockid: 1
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811080 for 1084811079. len 21
Nov 25 10:50:52 nebula3 lvm[6490]: Writing status 0 down pipe 14
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 10:50:52 nebula3 lvm[6490]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: distribute command: XID = 12, flags=0x1 (LOCAL)
Nov 25 10:50:52 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ade0. client=0x1a6aa60, msg=0x1a6ab70, len=28, csid=(nil), xid=12
Nov 25 10:50:52 nebula3 lvm[6490]: process_work_item: local
Nov 25 10:50:52 nebula3 lvm[6490]: process_local_command: LOCK_VG (0x33) msg=0x1a6ae20, msglen =28, client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: do_lock_vg: resource 'V_one-fs', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x4 ( DMEVENTD_MONITOR ), critical_section = 0
Nov 25 10:50:52 nebula3 lvm[6490]: Invalidating cached metadata for VG one-fs
Nov 25 10:50:52 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 10:50:52 nebula3 lvm[6490]: Got 1 replies, expecting: 1
Nov 25 10:50:52 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 10:50:52 nebula3 lvm[6490]: Got post command condition...
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 10:50:52 nebula3 lvm[6490]: read on PIPE 13: 4 bytes: status: 0
Nov 25 10:50:52 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 10:50:52 nebula3 lvm[6490]: Send local reply
Nov 25 10:50:52 nebula3 lvm[6490]: Read on local socket 12, len = 0
Nov 25 10:50:52 nebula3 lvm[6490]: EOF on local socket: inprogress=0
Nov 25 10:50:52 nebula3 lvm[6490]: Waiting for child thread
Nov 25 10:50:52 nebula3 lvm[6490]: Got pre command condition...
Nov 25 10:50:52 nebula3 lvm[6490]: Subthread finished
Nov 25 10:50:52 nebula3 lvm[6490]: Joined child thread
Nov 25 10:50:52 nebula3 lvm[6490]: ret == 0, errno = 0. removing client
Nov 25 10:50:52 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ab70. client=0x1a6aa60, msg=(nil), len=0, csid=(nil), xid=12
Nov 25 10:50:52 nebula3 lvm[6490]: process_work_item: free fd -1
Nov 25 10:50:52 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 10:50:52 nebula3 LVM(ONE-vg)[6512]: INFO: 1 logical volume(s) in volume group "one-fs" now active
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811079 for 0. len 31
Nov 25 10:50:52 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6aa60. client=0x6a1d60, msg=0x7fffc04d1c6c, len=31, csid=0x7fffc04d063c, xid=0
Nov 25 10:50:52 nebula3 lvm[6490]: process_work_item: remote
Nov 25 10:50:52 nebula3 lvm[6490]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 11 on node 40a8e747
Nov 25 10:50:52 nebula3 lvm[6490]: Syncing device names
Nov 25 10:50:52 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 10:50:52 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 10:50:52 nebula3 crmd[6236]: notice: process_lrm_event: LRM operation ONE-vg_start_0 (call=79, rc=0, cib-update=25, confirmed=true) ok
Nov 25 10:50:52 nebula3 crmd[6236]: notice: process_lrm_event: LRM operation ONE-vg_monitor_60000 (call=83, rc=0, cib-update=26, confirmed=false) ok
Nov 25 10:50:52 nebula3 Filesystem(ONE-Datastores)[6569]: INFO: Running start for /dev/one-fs/datastores on /var/lib/one/datastores
Nov 25 10:50:52 nebula3 kernel: [ 696.464815] GFS2 installed
Nov 25 10:50:52 nebula3 kernel: [ 696.478619] GFS2: fsid=one:datastores: Trying to join cluster "lock_dlm", "one:datastores"
Nov 25 10:50:53 nebula3 kernel: [ 697.724083] GFS2: fsid=one:datastores: Joined cluster. Now mounting FS...
Nov 25 10:50:53 nebula3 kernel: [ 697.765245] GFS2: fsid=one:datastores.1: jid=1, already locked for use
Nov 25 10:50:53 nebula3 kernel: [ 697.765250] GFS2: fsid=one:datastores.1: jid=1: Looking at journal...
Nov 25 10:50:53 nebula3 kernel: [ 697.805020] GFS2: fsid=one:datastores.1: jid=1: Done
Nov 25 10:50:53 nebula3 crmd[6236]: notice: process_lrm_event: LRM operation ONE-Datastores_start_0 (call=85, rc=0, cib-update=27, confirmed=true) ok
Nov 25 10:50:53 nebula3 crmd[6236]: notice: process_lrm_event: LRM operation ONE-Datastores_monitor_20000 (call=89, rc=0, cib-update=28, confirmed=false) ok
Crash nebula2 (echo c > /proc/sysrq-trigger)
--------------------------------------------
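An editorial aside for readers reproducing this test: the `echo c > /proc/sysrq-trigger` in the step marker above forces an immediate kernel crash, which is a common way to exercise fencing. It only works when the magic-SysRq bitmask permits it; a minimal check (the crash line itself is deliberately left commented out):

```shell
# Show the current magic-SysRq setting; "c" (crash) only works when the
# kernel.sysrq bitmask enables it (1 enables all SysRq functions).
cat /proc/sys/kernel/sysrq

# WARNING: the next command crashes the kernel immediately. Shown
# commented out; only run it on a node you intend to fence-test.
# echo c > /proc/sysrq-trigger
```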
Nov 25 10:56:28 nebula3 corosync[6220]: [TOTEM ] A processor failed, forming new configuration.
Nov 25 10:56:32 nebula3 corosync[6220]: [TOTEM ] A new membership (192.168.231.70:81396) was formed. Members left: 1084811079
Nov 25 10:56:32 nebula3 lvm[6490]: confchg callback. 0 joined, 1 left, 2 members
Nov 25 10:56:32 nebula3 corosync[6220]: [QUORUM] Members[2]: 1084811078 1084811080
Nov 25 10:56:32 nebula3 corosync[6220]: [MAIN ] Completed service synchronization, ready to provide service.
Nov 25 10:56:32 nebula3 pacemakerd[6229]: notice: crm_update_peer_state: pcmk_quorum_notification: Node nebula2[1084811079] - state is now lost (was member)
Nov 25 10:56:32 nebula3 crmd[6236]: notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[1084811079] - state is now lost (was member)
Nov 25 10:56:32 nebula3 kernel: [ 1036.329677] dlm: closing connection to node 1084811079
Nov 25 10:56:32 nebula3 dlm_controld[6465]: 1035 fence status 1084811079 receive 1 from 1084811078 walltime 1416909392 local 1035
Nov 25 10:56:32 nebula3 dlm_controld[6465]: 1035 fence request 1084811079 pid 7358 nodedown time 1416909392 fence_all dlm_stonith
Nov 25 10:56:32 nebula3 dlm_controld[6465]: 1035 fence result 1084811079 pid 7358 result 1 exit status
Nov 25 10:56:32 nebula3 dlm_controld[6465]: 1035 fence status 1084811079 receive 1 from 1084811080 walltime 1416909392 local 1035
Nov 25 10:56:32 nebula3 dlm_controld[6465]: 1035 fence request 1084811079 no actor
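An editorial note on the five dlm_controld lines above: "fence result ... result 1 exit status" appears to mean the dlm_stonith helper (pid 7358) exited non-zero, and "no actor" that no remaining node could carry out DLM's own fence request — which would explain why DLM never sees a fencing acknowledgement even though stonith-ng reports the reboot as OK just below. A hedged sketch (field positions assume the exact syslog layout shown above) for pulling such failures out of a saved log:

```shell
# Scan a syslog excerpt for dlm_controld "fence result" lines and flag
# helper failures (non-zero exit status). Assumes the syslog layout above:
# $5=daemon, $7/$8="fence result", $9=nodeid, $13=exit status.
scan_fence_results() {
    awk '$5 ~ /^dlm_controld/ && $7 == "fence" && $8 == "result" {
        status = ($13 == 0) ? "OK" : "FAIL"
        print status, "node", $9, "result", $13
    }' "$@"
}

# Sample line taken verbatim from the log above:
scan_fence_results <<'EOF'
Nov 25 10:56:32 nebula3 dlm_controld[6465]: 1035 fence result 1084811079 pid 7358 result 1 exit status
EOF
# A FAIL line means dlm_stonith exited non-zero for that node, so DLM is
# still waiting for a fencing acknowledgement.
```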
Nov 25 10:56:33 nebula3 stonith-ng[6232]: notice: can_fence_host_with_device: Stonith-nebula1-IPMILAN can not fence nebula2: static-list
Nov 25 10:56:33 nebula3 stonith-ng[6232]: notice: can_fence_host_with_device: Stonith-nebula2-IPMILAN can fence nebula2: static-list
Nov 25 10:56:33 nebula3 stonith-ng[6232]: notice: can_fence_host_with_device: Stonith-ONE-Frontend can not fence nebula2: static-list
Nov 25 10:56:33 nebula3 stonith-ng[6232]: notice: can_fence_host_with_device: Stonith-nebula1-IPMILAN can not fence nebula2: static-list
Nov 25 10:56:33 nebula3 stonith-ng[6232]: notice: can_fence_host_with_device: Stonith-nebula2-IPMILAN can fence nebula2: static-list
Nov 25 10:56:33 nebula3 stonith-ng[6232]: notice: can_fence_host_with_device: Stonith-ONE-Frontend can not fence nebula2: static-list
Nov 25 10:56:34 nebula3 stonith-ng[6232]: notice: log_operation: Operation 'reboot' [7359] (call 4 from crmd.5038) for host 'nebula2' with device 'Stonith-nebula2-IPMILAN' returned: 0 (OK)
Nov 25 10:56:34 nebula3 stonith-ng[6232]: error: crm_abort: crm_glib_handler: Forked child 7376 to record non-fatal assert at logging.c:63 : Source ID 20 was not found when attempting to remove it
Nov 25 10:56:34 nebula3 stonith-ng[6232]: error: crm_abort: crm_glib_handler: Forked child 7377 to record non-fatal assert at logging.c:63 : Source ID 21 was not found when attempting to remove it
Nov 25 10:56:34 nebula3 stonith-ng[6232]: notice: remote_op_done: Operation reboot of nebula2 by nebula1 for crmd.5038 at nebula1.34bed18c: OK
Nov 25 10:56:34 nebula3 crmd[6236]: notice: tengine_stonith_notify: Peer nebula2 was terminated (reboot) by nebula1 for nebula1: OK (ref=34bed18c-c395-4de2-b323-e00208cac6c7) by client crmd.5038
Nov 25 10:56:34 nebula3 crmd[6236]: notice: crm_update_peer_state: tengine_stonith_notify: Node nebula2[0] - state is now lost (was (null))
Nov 25 10:56:34 nebula3 stonith-ng[6232]: notice: stonith_device_register: Device 'Stonith-nebula1-IPMILAN' already existed in device list (3 active devices)
Nov 25 10:56:35 nebula3 stonith-ng[6232]: error: crm_abort: crm_glib_handler: Forked child 7428 to record non-fatal assert at logging.c:63 : Source ID 23 was not found when attempting to remove it
Nov 25 10:56:35 nebula3 stonith-ng[6232]: error: crm_abort: crm_glib_handler: Forked child 7435 to record non-fatal assert at logging.c:63 : Source ID 24 was not found when attempting to remove it
Nov 25 10:56:35 nebula3 crmd[6236]: notice: process_lrm_event: LRM operation Stonith-nebula1-IPMILAN_start_0 (call=92, rc=0, cib-update=29, confirmed=true) ok
Nov 25 10:56:36 nebula3 stonith-ng[6232]: error: crm_abort: crm_glib_handler: Forked child 7454 to record non-fatal assert at logging.c:63 : Source ID 25 was not found when attempting to remove it
Nov 25 10:56:36 nebula3 crmd[6236]: notice: process_lrm_event: LRM operation Stonith-nebula1-IPMILAN_monitor_1800000 (call=95, rc=0, cib-update=30, confirmed=false) ok
Nov 25 10:56:36 nebula3 stonith-ng[6232]: error: crm_abort: crm_glib_handler: Forked child 7455 to record non-fatal assert at logging.c:63 : Source ID 26 was not found when attempting to remove it
Nov 25 10:57:29 nebula3 dlm_controld[6465]: 1092 datastores wait for fencing
Nov 25 10:57:29 nebula3 dlm_controld[6465]: 1092 clvmd wait for fencing
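Reading the dlm_controld lines above: the fence request for 1084811079 ran `fence_all dlm_stonith` (pid 7358) and came back with `result 1 exit status` (non-zero), and the retry found `no actor`, so DLM never recorded a successful fence and the `datastores` and `clvmd` lockspaces block on `wait for fencing`. A quick scan over the daemon log makes that verdict easy to spot (a sketch; the sample lines are copied from the log above, and the awk field positions assume this exact message format):

```shell
#!/bin/sh
# Sketch: scan dlm_controld log lines for fence verdicts.
# A "fence result ... result 0" line is the ACK that DLM waits for;
# a non-zero result (or "no actor") leaves lockspaces in "wait for fencing".
LOG=$(cat <<'EOF'
1035 fence request 1084811079 pid 7358 nodedown time 1416909392 fence_all dlm_stonith
1035 fence result 1084811079 pid 7358 result 1 exit status
1035 fence request 1084811079 no actor
1092 datastores wait for fencing
1092 clvmd wait for fencing
EOF
)
# Field 4 is the nodeid, field 8 the dlm_stonith exit status.
echo "$LOG" | awk '/fence result/ { print "node", $4, "exit", $8 }'
# Count lockspaces that are still blocked.
echo "$LOG" | grep -c 'wait for fencing'
```

Here that prints `node 1084811079 exit 1` and a blocked-lockspace count of 2, which matches the symptom: stonith-ng later reboots nebula2 successfully, but dlm_controld's own `dlm_stonith` run had already failed, so DLM keeps waiting.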
Nov 25 10:59:17 nebula3 kernel: [ 1202.342839] INFO: task gfs2_quotad:6646 blocked for more than 120 seconds.
Nov 25 10:59:17 nebula3 kernel: [ 1202.344814] Not tainted 3.13.0-40-generic #69-Ubuntu
Nov 25 10:59:17 nebula3 kernel: [ 1202.346374] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 25 10:59:17 nebula3 kernel: [ 1202.348628] gfs2_quotad D ffff88180f2b4480 0 6646 2 0x00000000
Nov 25 10:59:17 nebula3 kernel: [ 1202.348636] ffff880bfa983a98 0000000000000046 ffff880bfb0b9800 ffff880bfa983fd8
Nov 25 10:59:17 nebula3 kernel: [ 1202.348643] 0000000000014480 0000000000014480 ffff880bfb0b9800 ffff880bfb0b9800
Nov 25 10:59:17 nebula3 kernel: [ 1202.348649] ffff880bf86267b0 ffff880bf86267b8 ffff880bf1bb0570 0000000000000014
Nov 25 10:59:17 nebula3 kernel: [ 1202.348655] Call Trace:
Nov 25 10:59:17 nebula3 kernel: [ 1202.348675] [<ffffffff81723649>] schedule+0x29/0x70
Nov 25 10:59:17 nebula3 kernel: [ 1202.348681] [<ffffffff817264d5>] rwsem_down_read_failed+0xf5/0x150
Nov 25 10:59:17 nebula3 kernel: [ 1202.348708] [<ffffffff81370764>] call_rwsem_down_read_failed+0x14/0x30
Nov 25 10:59:17 nebula3 kernel: [ 1202.348722] [<ffffffff81725ca0>] ? down_read+0x20/0x30
Nov 25 10:59:17 nebula3 kernel: [ 1202.348737] [<ffffffffa04704d2>] dlm_lock+0x52/0x1b0 [dlm]
Nov 25 10:59:17 nebula3 kernel: [ 1202.348746] [<ffffffff8101b689>] ? read_tsc+0x9/0x20
Nov 25 10:59:17 nebula3 kernel: [ 1202.348754] [<ffffffff810cd6aa>] ? __getnstimeofday+0x3a/0xc0
Nov 25 10:59:17 nebula3 kernel: [ 1202.348771] [<ffffffffa04be517>] gdlm_lock+0x1e7/0x330 [gfs2]
Nov 25 10:59:17 nebula3 kernel: [ 1202.348783] [<ffffffffa04be660>] ? gdlm_lock+0x330/0x330 [gfs2]
Nov 25 10:59:17 nebula3 kernel: [ 1202.348802] [<ffffffffa04bf120>] ? gdlm_recovery_result+0x130/0x130 [gfs2]
Nov 25 10:59:17 nebula3 kernel: [ 1202.348820] [<ffffffffa04a19f7>] do_xmote+0x187/0x270 [gfs2]
Nov 25 10:59:17 nebula3 kernel: [ 1202.348833] [<ffffffffa04a1c26>] run_queue+0x146/0x290 [gfs2]
Nov 25 10:59:17 nebula3 kernel: [ 1202.348844] [<ffffffffa04a20c2>] gfs2_glock_nq+0x1b2/0x430 [gfs2]
Nov 25 10:59:17 nebula3 kernel: [ 1202.348856] [<ffffffffa04bb493>] gfs2_statfs_sync+0x53/0x1a0 [gfs2]
Nov 25 10:59:17 nebula3 kernel: [ 1202.348862] [<ffffffff817227dd>] ? schedule_timeout+0x17d/0x2d0
Nov 25 10:59:17 nebula3 kernel: [ 1202.348872] [<ffffffffa04bb489>] ? gfs2_statfs_sync+0x49/0x1a0 [gfs2]
Nov 25 10:59:17 nebula3 kernel: [ 1202.348885] [<ffffffffa04b2133>] quotad_check_timeo.part.16+0x23/0x90 [gfs2]
Nov 25 10:59:17 nebula3 kernel: [ 1202.348896] [<ffffffffa04b46de>] gfs2_quotad+0x23e/0x2e0 [gfs2]
Nov 25 10:59:17 nebula3 kernel: [ 1202.348904] [<ffffffff810ab0a0>] ? prepare_to_wait_event+0x100/0x100
Nov 25 10:59:17 nebula3 kernel: [ 1202.348915] [<ffffffffa04b44a0>] ? gfs2_wake_up_statfs+0x40/0x40 [gfs2]
Nov 25 10:59:17 nebula3 kernel: [ 1202.348923] [<ffffffff8108b562>] kthread+0xd2/0xf0
Nov 25 10:59:17 nebula3 kernel: [ 1202.348927] [<ffffffff8108b490>] ? kthread_create_on_node+0x1c0/0x1c0
Nov 25 10:59:17 nebula3 kernel: [ 1202.348933] [<ffffffff8172fc7c>] ret_from_fork+0x7c/0xb0
Nov 25 10:59:17 nebula3 kernel: [ 1202.348938] [<ffffffff8108b490>] ? kthread_create_on_node+0x1c0/0x1c0
Nov 25 10:59:54 nebula3 lvm[6490]: Got new connection on fd 5
Nov 25 10:59:54 nebula3 lvm[6490]: Read on local socket 5, len = 28
Nov 25 10:59:54 nebula3 lvm[6490]: creating pipe, [12, 13]
Nov 25 10:59:54 nebula3 lvm[6490]: Creating pre&post thread
Nov 25 10:59:54 nebula3 lvm[6490]: Created pre&post thread, state = 0
Nov 25 10:59:54 nebula3 lvm[6490]: in sub thread: client = 0x1a6aa60
Nov 25 10:59:54 nebula3 lvm[6490]: doing PRE command LOCK_VG 'V_one-fs' at 1 (client=0x1a6aa60)
Nov 25 10:59:54 nebula3 lvm[6490]: lock_resource 'V_one-fs', flags=0, mode=3
Nov 25 11:01:17 nebula3 kernel: [ 1322.438106] INFO: task gfs2_quotad:6646 blocked for more than 120 seconds.
Nov 25 11:01:17 nebula3 kernel: [ 1322.440081] Not tainted 3.13.0-40-generic #69-Ubuntu
Nov 25 11:01:17 nebula3 kernel: [ 1322.441648] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Nov 25 11:01:17 nebula3 kernel: [ 1322.443907] gfs2_quotad D ffff88180f2b4480 0 6646 2 0x00000000
Nov 25 11:01:17 nebula3 kernel: [ 1322.443915] ffff880bfa983a98 0000000000000046 ffff880bfb0b9800 ffff880bfa983fd8
Nov 25 11:01:17 nebula3 kernel: [ 1322.443922] 0000000000014480 0000000000014480 ffff880bfb0b9800 ffff880bfb0b9800
Nov 25 11:01:17 nebula3 kernel: [ 1322.443928] ffff880bf86267b0 ffff880bf86267b8 ffff880bf1bb0570 0000000000000014
Nov 25 11:01:17 nebula3 kernel: [ 1322.443935] Call Trace:
Nov 25 11:01:17 nebula3 kernel: [ 1322.443960] [<ffffffff81723649>] schedule+0x29/0x70
Nov 25 11:01:17 nebula3 kernel: [ 1322.443975] [<ffffffff817264d5>] rwsem_down_read_failed+0xf5/0x150
Nov 25 11:01:17 nebula3 kernel: [ 1322.443988] [<ffffffff81370764>] call_rwsem_down_read_failed+0x14/0x30
Nov 25 11:01:17 nebula3 kernel: [ 1322.443992] [<ffffffff81725ca0>] ? down_read+0x20/0x30
Nov 25 11:01:17 nebula3 kernel: [ 1322.444006] [<ffffffffa04704d2>] dlm_lock+0x52/0x1b0 [dlm]
Nov 25 11:01:17 nebula3 kernel: [ 1322.444016] [<ffffffff8101b689>] ? read_tsc+0x9/0x20
Nov 25 11:01:17 nebula3 kernel: [ 1322.444024] [<ffffffff810cd6aa>] ? __getnstimeofday+0x3a/0xc0
Nov 25 11:01:17 nebula3 kernel: [ 1322.444041] [<ffffffffa04be517>] gdlm_lock+0x1e7/0x330 [gfs2]
Nov 25 11:01:17 nebula3 kernel: [ 1322.444062] [<ffffffffa04be660>] ? gdlm_lock+0x330/0x330 [gfs2]
Nov 25 11:01:17 nebula3 kernel: [ 1322.444073] [<ffffffffa04bf120>] ? gdlm_recovery_result+0x130/0x130 [gfs2]
Nov 25 11:01:17 nebula3 kernel: [ 1322.444084] [<ffffffffa04a19f7>] do_xmote+0x187/0x270 [gfs2]
Nov 25 11:01:17 nebula3 kernel: [ 1322.444094] [<ffffffffa04a1c26>] run_queue+0x146/0x290 [gfs2]
Nov 25 11:01:17 nebula3 kernel: [ 1322.444110] [<ffffffffa04a20c2>] gfs2_glock_nq+0x1b2/0x430 [gfs2]
Nov 25 11:01:17 nebula3 kernel: [ 1322.444122] [<ffffffffa04bb493>] gfs2_statfs_sync+0x53/0x1a0 [gfs2]
Nov 25 11:01:17 nebula3 kernel: [ 1322.444128] [<ffffffff817227dd>] ? schedule_timeout+0x17d/0x2d0
Nov 25 11:01:17 nebula3 kernel: [ 1322.444138] [<ffffffffa04bb489>] ? gfs2_statfs_sync+0x49/0x1a0 [gfs2]
Nov 25 11:01:17 nebula3 kernel: [ 1322.444150] [<ffffffffa04b2133>] quotad_check_timeo.part.16+0x23/0x90 [gfs2]
Nov 25 11:01:17 nebula3 kernel: [ 1322.444161] [<ffffffffa04b46de>] gfs2_quotad+0x23e/0x2e0 [gfs2]
Nov 25 11:01:17 nebula3 kernel: [ 1322.444171] [<ffffffff810ab0a0>] ? prepare_to_wait_event+0x100/0x100
Nov 25 11:01:17 nebula3 kernel: [ 1322.444181] [<ffffffffa04b44a0>] ? gfs2_wake_up_statfs+0x40/0x40 [gfs2]
Nov 25 11:01:17 nebula3 kernel: [ 1322.444189] [<ffffffff8108b562>] kthread+0xd2/0xf0
Nov 25 11:01:17 nebula3 kernel: [ 1322.444194] [<ffffffff8108b490>] ? kthread_create_on_node+0x1c0/0x1c0
Nov 25 11:01:17 nebula3 kernel: [ 1322.444200] [<ffffffff8172fc7c>] ret_from_fork+0x7c/0xb0
Nov 25 11:01:17 nebula3 kernel: [ 1322.444204] [<ffffffff8108b490>] ? kthread_create_on_node+0x1c0/0x1c0
Manual fence_ack “dlm_tool fence_ack 1084811079”
------------------------------------------------
Nov 25 11:01:22 nebula3 dlm_controld[6465]: 1325 fence status 1084811079 receive -125 from 1084811080 walltime 1416909682 local 1325
Nov 25 11:01:22 nebula3 lvm[6490]: lock_resource returning 0, lock_id=1
Nov 25 11:01:22 nebula3 lvm[6490]: Writing status 0 down pipe 13
Nov 25 11:01:22 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 11:01:22 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:01:22 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: distribute command: XID = 13, flags=0x1 (LOCAL)
Nov 25 11:01:22 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ade0. client=0x1a6aa60, msg=0x1a6ab70, len=28, csid=(nil), xid=13
Nov 25 11:01:22 nebula3 lvm[6490]: process_work_item: local
Nov 25 11:01:22 nebula3 lvm[6490]: process_local_command: LOCK_VG (0x33) msg=0x1a6ae20, msglen =28, client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: do_lock_vg: resource 'V_one-fs', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 11:01:22 nebula3 lvm[6490]: Invalidating cached metadata for VG one-fs
Nov 25 11:01:22 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 11:01:22 nebula3 lvm[6490]: Got 1 replies, expecting: 1
Nov 25 11:01:22 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 11:01:22 nebula3 lvm[6490]: Got post command condition...
Nov 25 11:01:22 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 11:01:22 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:01:22 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: Send local reply
Nov 25 11:01:22 nebula3 lvm[6490]: Read on local socket 5, len = 31
Nov 25 11:01:22 nebula3 lvm[6490]: Got pre command condition...
Nov 25 11:01:22 nebula3 lvm[6490]: Writing status 0 down pipe 13
Nov 25 11:01:22 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 11:01:22 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:01:22 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: distribute command: XID = 14, flags=0x1 (LOCAL)
Nov 25 11:01:22 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ade0. client=0x1a6aa60, msg=0x1a6ab70, len=31, csid=(nil), xid=14
Nov 25 11:01:22 nebula3 lvm[6490]: process_work_item: local
Nov 25 11:01:22 nebula3 lvm[6490]: process_local_command: SYNC_NAMES (0x2d) msg=0x1a6ae20, msglen =31, client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: Syncing device names
Nov 25 11:01:22 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 11:01:22 nebula3 lvm[6490]: Got 1 replies, expecting: 1
Nov 25 11:01:22 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 11:01:22 nebula3 lvm[6490]: Got post command condition...
Nov 25 11:01:22 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 11:01:22 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:01:22 nebula3 kernel: [ 1327.078274] GFS2: fsid=one:datastores.1: recover generation 4 done
Nov 25 11:01:22 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: Send local reply
Nov 25 11:01:22 nebula3 lvm[6490]: Read on local socket 5, len = 31
Nov 25 11:01:22 nebula3 lvm[6490]: check_all_clvmds_running
Nov 25 11:01:22 nebula3 lvm[6490]: down_callback. node 1084811079, state = 1
Nov 25 11:01:22 nebula3 lvm[6490]: down_callback. node 1084811078, state = 3
Nov 25 11:01:22 nebula3 lvm[6490]: down_callback. node 1084811080, state = 3
Nov 25 11:01:22 nebula3 lvm[6490]: Send local reply
Nov 25 11:01:22 nebula3 lvm[6490]: Read on local socket 5, len = 28
Nov 25 11:01:22 nebula3 lvm[6490]: Got pre command condition...
Nov 25 11:01:22 nebula3 lvm[6490]: doing PRE command LOCK_VG 'V_one-fs' at 6 (client=0x1a6aa60)
Nov 25 11:01:22 nebula3 lvm[6490]: unlock_resource: V_one-fs lockid: 1
Nov 25 11:01:22 nebula3 lvm[6490]: Writing status 0 down pipe 13
Nov 25 11:01:22 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 11:01:22 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:01:22 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: distribute command: XID = 15, flags=0x1 (LOCAL)
Nov 25 11:01:22 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ade0. client=0x1a6aa60, msg=0x1a6ab70, len=28, csid=(nil), xid=15
Nov 25 11:01:22 nebula3 lvm[6490]: process_work_item: local
Nov 25 11:01:22 nebula3 lvm[6490]: process_local_command: LOCK_VG (0x33) msg=0x1a6ae20, msglen =28, client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: do_lock_vg: resource 'V_one-fs', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 11:01:22 nebula3 lvm[6490]: Invalidating cached metadata for VG one-fs
Nov 25 11:01:22 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 11:01:22 nebula3 lvm[6490]: Got 1 replies, expecting: 1
Nov 25 11:01:22 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 11:01:22 nebula3 lvm[6490]: Got post command condition...
Nov 25 11:01:22 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 11:01:22 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:01:22 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: Send local reply
Nov 25 11:01:22 nebula3 lvm[6490]: Read on local socket 5, len = 32
Nov 25 11:01:22 nebula3 lvm[6490]: Got pre command condition...
Nov 25 11:01:22 nebula3 lvm[6490]: doing PRE command LOCK_VG 'V_nebula3-vg' at 1 (client=0x1a6aa60)
Nov 25 11:01:22 nebula3 lvm[6490]: lock_resource 'V_nebula3-vg', flags=0, mode=3
Nov 25 11:01:22 nebula3 lvm[6490]: lock_resource returning 0, lock_id=1
Nov 25 11:01:22 nebula3 lvm[6490]: Writing status 0 down pipe 13
Nov 25 11:01:22 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 11:01:22 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:01:22 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: distribute command: XID = 16, flags=0x1 (LOCAL)
Nov 25 11:01:22 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ade0. client=0x1a6aa60, msg=0x1a6ab70, len=32, csid=(nil), xid=16
Nov 25 11:01:22 nebula3 lvm[6490]: process_work_item: local
Nov 25 11:01:22 nebula3 lvm[6490]: process_local_command: LOCK_VG (0x33) msg=0x1a6ae20, msglen =32, client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: do_lock_vg: resource 'V_nebula3-vg', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 11:01:22 nebula3 lvm[6490]: Invalidating cached metadata for VG nebula3-vg
Nov 25 11:01:22 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 11:01:22 nebula3 lvm[6490]: Got 1 replies, expecting: 1
Nov 25 11:01:22 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 11:01:22 nebula3 lvm[6490]: Got post command condition...
Nov 25 11:01:22 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 11:01:22 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:01:22 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: Send local reply
Nov 25 11:01:22 nebula3 lvm[6490]: Read on local socket 5, len = 31
Nov 25 11:01:22 nebula3 lvm[6490]: Got pre command condition...
Nov 25 11:01:22 nebula3 lvm[6490]: Writing status 0 down pipe 13
Nov 25 11:01:22 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 11:01:22 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:01:22 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: distribute command: XID = 17, flags=0x1 (LOCAL)
Nov 25 11:01:22 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ade0. client=0x1a6aa60, msg=0x1a6ab70, len=31, csid=(nil), xid=17
Nov 25 11:01:22 nebula3 lvm[6490]: process_work_item: local
Nov 25 11:01:22 nebula3 lvm[6490]: process_local_command: SYNC_NAMES (0x2d) msg=0x1a6ae20, msglen =31, client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: Syncing device names
Nov 25 11:01:22 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 11:01:22 nebula3 lvm[6490]: Got 1 replies, expecting: 1
Nov 25 11:01:22 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 11:01:22 nebula3 lvm[6490]: Got post command condition...
Nov 25 11:01:22 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 11:01:22 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:01:22 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: Send local reply
Nov 25 11:01:22 nebula3 lvm[6490]: Read on local socket 5, len = 31
Nov 25 11:01:22 nebula3 lvm[6490]: Got pre command condition...
Nov 25 11:01:22 nebula3 lvm[6490]: Writing status 0 down pipe 13
Nov 25 11:01:22 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 11:01:22 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:01:22 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: distribute command: XID = 18, flags=0x1 (LOCAL)
Nov 25 11:01:22 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ade0. client=0x1a6aa60, msg=0x1a6ab70, len=31, csid=(nil), xid=18
Nov 25 11:01:22 nebula3 lvm[6490]: process_work_item: local
Nov 25 11:01:22 nebula3 lvm[6490]: process_local_command: SYNC_NAMES (0x2d) msg=0x1a6ae20, msglen =31, client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: Syncing device names
Nov 25 11:01:22 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 11:01:22 nebula3 lvm[6490]: Got 1 replies, expecting: 1
Nov 25 11:01:22 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 11:01:22 nebula3 lvm[6490]: Got post command condition...
Nov 25 11:01:22 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 11:01:22 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:01:22 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: Send local reply
Nov 25 11:01:22 nebula3 lvm[6490]: Read on local socket 5, len = 31
Nov 25 11:01:22 nebula3 lvm[6490]: Got pre command condition...
Nov 25 11:01:22 nebula3 lvm[6490]: Writing status 0 down pipe 13
Nov 25 11:01:22 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 11:01:22 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:01:22 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: distribute command: XID = 19, flags=0x1 (LOCAL)
Nov 25 11:01:22 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ade0. client=0x1a6aa60, msg=0x1a6ab70, len=31, csid=(nil), xid=19
Nov 25 11:01:22 nebula3 lvm[6490]: process_work_item: local
Nov 25 11:01:22 nebula3 lvm[6490]: process_local_command: SYNC_NAMES (0x2d) msg=0x1a6ae20, msglen =31, client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: Syncing device names
Nov 25 11:01:22 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 11:01:22 nebula3 lvm[6490]: Got 1 replies, expecting: 1
Nov 25 11:01:22 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 11:01:22 nebula3 lvm[6490]: Got post command condition...
Nov 25 11:01:22 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 11:01:22 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:01:22 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: Send local reply
Nov 25 11:01:22 nebula3 lvm[6490]: Read on local socket 5, len = 31
Nov 25 11:01:22 nebula3 lvm[6490]: Got pre command condition...
Nov 25 11:01:22 nebula3 lvm[6490]: Writing status 0 down pipe 13
Nov 25 11:01:22 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 11:01:22 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:01:22 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: distribute command: XID = 20, flags=0x1 (LOCAL)
Nov 25 11:01:22 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ade0. client=0x1a6aa60, msg=0x1a6ab70, len=31, csid=(nil), xid=20
Nov 25 11:01:22 nebula3 lvm[6490]: process_work_item: local
Nov 25 11:01:22 nebula3 lvm[6490]: process_local_command: SYNC_NAMES (0x2d) msg=0x1a6ae20, msglen =31, client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: Syncing device names
Nov 25 11:01:22 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 11:01:22 nebula3 lvm[6490]: Got 1 replies, expecting: 1
Nov 25 11:01:22 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 11:01:22 nebula3 lvm[6490]: Got post command condition...
Nov 25 11:01:22 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 11:01:22 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:01:22 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: Send local reply
Nov 25 11:01:22 nebula3 lvm[6490]: Read on local socket 5, len = 31
Nov 25 11:01:22 nebula3 lvm[6490]: check_all_clvmds_running
Nov 25 11:01:22 nebula3 lvm[6490]: down_callback. node 1084811079, state = 1
Nov 25 11:01:22 nebula3 lvm[6490]: down_callback. node 1084811078, state = 3
Nov 25 11:01:22 nebula3 lvm[6490]: down_callback. node 1084811080, state = 3
Nov 25 11:01:22 nebula3 lvm[6490]: Send local reply
Nov 25 11:01:22 nebula3 lvm[6490]: Read on local socket 5, len = 32
Nov 25 11:01:22 nebula3 lvm[6490]: Got pre command condition...
Nov 25 11:01:22 nebula3 lvm[6490]: doing PRE command LOCK_VG 'V_nebula3-vg' at 6 (client=0x1a6aa60)
Nov 25 11:01:22 nebula3 lvm[6490]: unlock_resource: V_nebula3-vg lockid: 1
Nov 25 11:01:22 nebula3 lvm[6490]: Writing status 0 down pipe 13
Nov 25 11:01:22 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 11:01:22 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:01:22 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: distribute command: XID = 21, flags=0x1 (LOCAL)
Nov 25 11:01:22 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ade0. client=0x1a6aa60, msg=0x1a6ab70, len=32, csid=(nil), xid=21
Nov 25 11:01:22 nebula3 lvm[6490]: process_work_item: local
Nov 25 11:01:22 nebula3 lvm[6490]: process_local_command: LOCK_VG (0x33) msg=0x1a6ae20, msglen =32, client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: do_lock_vg: resource 'V_nebula3-vg', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 11:01:22 nebula3 lvm[6490]: Invalidating cached metadata for VG nebula3-vg
Nov 25 11:01:22 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 11:01:22 nebula3 lvm[6490]: Got 1 replies, expecting: 1
Nov 25 11:01:22 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 11:01:22 nebula3 lvm[6490]: Got post command condition...
Nov 25 11:01:22 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 11:01:22 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:01:22 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:01:22 nebula3 lvm[6490]: Send local reply
Nov 25 11:01:22 nebula3 lvm[6490]: Read on local socket 5, len = 0
Nov 25 11:01:22 nebula3 lvm[6490]: EOF on local socket: inprogress=0
Nov 25 11:01:22 nebula3 lvm[6490]: Waiting for child thread
Nov 25 11:01:22 nebula3 lvm[6490]: Got pre command condition...
Nov 25 11:01:22 nebula3 lvm[6490]: Subthread finished
Nov 25 11:01:22 nebula3 lvm[6490]: Joined child thread
Nov 25 11:01:22 nebula3 lvm[6490]: ret == 0, errno = 0. removing client
Nov 25 11:01:22 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ab70. client=0x1a6aa60, msg=(nil), len=0, csid=(nil), xid=21
Nov 25 11:01:22 nebula3 lvm[6490]: process_work_item: free fd -1
Nov 25 11:01:22 nebula3 lvm[6490]: LVM thread waiting for work
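The `check_all_clvmds_running` callbacks above are also telling: clvmd still sees node 1084811079 in state 1 while 1084811078 and 1084811080 are in state 3, so cluster-wide lock requests cannot be distributed to all members. A small sketch of that check, replaying the states logged above (reading state 3 as "clvmd up" is my assumption from the clvmd debug output, not something the log states explicitly):

```shell
#!/bin/sh
# Sketch: replay the down_callback node states clvmd logged and report
# whether every known clvmd is up (state 3). Values copied from the log.
STATES='1084811079 1
1084811078 3
1084811080 3'
echo "$STATES" | awk '
  $2 != 3 { down = down " " $1 }
  END { if (down) print "clvmd down on:" down; else print "all clvmds running" }'
```

With the states above this reports `clvmd down on: 1084811079`, i.e. clvmd on nebula3 knows nebula2's daemon is gone even though corosync membership has already stabilised at two nodes.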
Start corosync on nebula2 after reboot
--------------------------------------
Nov 25 11:04:37 nebula3 corosync[6220]: [TOTEM ] A new membership (192.168.231.70:81400) was formed. Members joined: 1084811079
Nov 25 11:04:37 nebula3 corosync[6220]: [QUORUM] Members[3]: 1084811078 1084811079 1084811080
Nov 25 11:04:37 nebula3 corosync[6220]: [MAIN ] Completed service synchronization, ready to provide service.
Nov 25 11:04:37 nebula3 pacemakerd[6229]: notice: crm_update_peer_state: pcmk_quorum_notification: Node nebula2[1084811079] - state is now member (was lost)
Nov 25 11:04:37 nebula3 crmd[6236]: notice: corosync_node_name: Unable to get node name for nodeid 1084811079
Nov 25 11:04:37 nebula3 crmd[6236]: notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[1084811079] - state is now member (was lost)
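As an aside, the recurring `corosync_node_name: Unable to get node name for nodeid 1084811079` notice (and the `Node (null)[1084811079]` peer updates) usually just means pacemaker cannot map the nodeid to a name through the corosync nodelist. It is cosmetic with respect to the fencing problem, but it goes away if each `node` entry in corosync.conf carries an explicit `name:`. A sketch, with the nodeid-to-name mapping taken from the logs but the ring addresses invented for illustration:

```
nodelist {
    node {
        ring0_addr: 192.168.231.70   # example address, adjust to your ring
        name: nebula1
        nodeid: 1084811078
    }
    node {
        ring0_addr: 192.168.231.71   # example address
        name: nebula2
        nodeid: 1084811079
    }
    node {
        ring0_addr: 192.168.231.72   # example address
        name: nebula3
        nodeid: 1084811080
    }
}
```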
Start pacemaker on nebula2 after reboot
---------------------------------------
Nov 25 11:04:58 nebula3 kernel: [ 1543.378684] dlm: got connection from 1084811079
Nov 25 11:04:59 nebula3 lvm[6490]: confchg callback. 1 joined, 0 left, 3 members
Nov 25 11:05:01 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811079 for 0. len 29
Nov 25 11:05:01 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6aa60. client=0x6a1d60, msg=0x7fffc04d1c6c, len=29, csid=0x7fffc04d063c, xid=0
Nov 25 11:05:01 nebula3 lvm[6490]: process_work_item: remote
Nov 25 11:05:01 nebula3 lvm[6490]: process_remote_command LOCK_VG (0x33) for clientid 0x5000000 XID 0 on node 40a8e747
Nov 25 11:05:01 nebula3 lvm[6490]: do_lock_vg: resource 'P_#global', cmd = 0x4 LCK_VG (WRITE|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 11:05:01 nebula3 lvm[6490]: Refreshing context
Nov 25 11:05:01 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 11:05:01 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 11:05:01 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 11:05:01 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811079 for 0. len 31
Nov 25 11:05:01 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6aa60. client=0x6a1d60, msg=0x7fffc04d1c6c, len=31, csid=0x7fffc04d063c, xid=0
Nov 25 11:05:01 nebula3 lvm[6490]: process_work_item: remote
Nov 25 11:05:01 nebula3 lvm[6490]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 2 on node 40a8e747
Nov 25 11:05:01 nebula3 lvm[6490]: Syncing device names
Nov 25 11:05:01 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 11:05:01 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 11:05:01 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 11:05:01 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811079 for 0. len 29
Nov 25 11:05:01 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6aa60. client=0x6a1d60, msg=0x7fffc04d1c6c, len=29, csid=0x7fffc04d063c, xid=0
Nov 25 11:05:01 nebula3 lvm[6490]: process_work_item: remote
Nov 25 11:05:01 nebula3 lvm[6490]: process_remote_command LOCK_VG (0x33) for clientid 0x5000000 XID 4 on node 40a8e747
Nov 25 11:05:01 nebula3 lvm[6490]: do_lock_vg: resource 'P_#global', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 11:05:01 nebula3 lvm[6490]: Refreshing context
Nov 25 11:05:01 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 11:05:01 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 11:05:01 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 11:05:01 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811079 for 0. len 31
Nov 25 11:05:01 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6aa60. client=0x6a1d60, msg=0x7fffc04d1c6c, len=31, csid=0x7fffc04d063c, xid=0
Nov 25 11:05:01 nebula3 lvm[6490]: process_work_item: remote
Nov 25 11:05:01 nebula3 lvm[6490]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 6 on node 40a8e747
Nov 25 11:05:01 nebula3 lvm[6490]: Syncing device names
Nov 25 11:05:01 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 11:05:01 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 11:05:01 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 11:05:02 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811079 for 0. len 31
Nov 25 11:05:02 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6aa60. client=0x6a1d60, msg=0x7fffc04d1c6c, len=31, csid=0x7fffc04d063c, xid=0
Nov 25 11:05:02 nebula3 lvm[6490]: process_work_item: remote
Nov 25 11:05:02 nebula3 lvm[6490]: process_remote_command SYNC_NAMES (0x2d) for clientid 0x5000000 XID 9 on node 40a8e747
Nov 25 11:05:02 nebula3 lvm[6490]: Syncing device names
Nov 25 11:05:02 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 11:05:02 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811078 for 1084811079. len 18
Nov 25 11:05:02 nebula3 lvm[6490]: 1084811080 got message from nodeid 1084811080 for 1084811079. len 18
Nov 25 11:05:02 nebula3 lvm[6490]: confchg callback. 0 joined, 1 left, 2 members
Nov 25 11:05:55 nebula3 lvm[6490]: Got new connection on fd 5
Nov 25 11:05:55 nebula3 lvm[6490]: Read on local socket 5, len = 28
Nov 25 11:05:55 nebula3 lvm[6490]: creating pipe, [12, 13]
Nov 25 11:05:55 nebula3 lvm[6490]: Creating pre&post thread
Nov 25 11:05:55 nebula3 lvm[6490]: Created pre&post thread, state = 0
Nov 25 11:05:55 nebula3 lvm[6490]: in sub thread: client = 0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: doing PRE command LOCK_VG 'V_one-fs' at 1 (client=0x1a6aa60)
Nov 25 11:05:55 nebula3 lvm[6490]: lock_resource 'V_one-fs', flags=0, mode=3
Nov 25 11:05:55 nebula3 lvm[6490]: lock_resource returning 0, lock_id=1
Nov 25 11:05:55 nebula3 lvm[6490]: Writing status 0 down pipe 13
Nov 25 11:05:55 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 11:05:55 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:05:55 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: distribute command: XID = 22, flags=0x1 (LOCAL)
Nov 25 11:05:55 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ade0. client=0x1a6aa60, msg=0x1a6ab70, len=28, csid=(nil), xid=22
Nov 25 11:05:55 nebula3 lvm[6490]: process_work_item: local
Nov 25 11:05:55 nebula3 lvm[6490]: process_local_command: LOCK_VG (0x33) msg=0x1a6ae20, msglen =28, client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: do_lock_vg: resource 'V_one-fs', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 11:05:55 nebula3 lvm[6490]: Invalidating cached metadata for VG one-fs
Nov 25 11:05:55 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 11:05:55 nebula3 lvm[6490]: Got 1 replies, expecting: 1
Nov 25 11:05:55 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 11:05:55 nebula3 lvm[6490]: Got post command condition...
Nov 25 11:05:55 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 11:05:55 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:05:55 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: Send local reply
Nov 25 11:05:55 nebula3 lvm[6490]: Read on local socket 5, len = 31
Nov 25 11:05:55 nebula3 lvm[6490]: Got pre command condition...
Nov 25 11:05:55 nebula3 lvm[6490]: Writing status 0 down pipe 13
Nov 25 11:05:55 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 11:05:55 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:05:55 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: distribute command: XID = 23, flags=0x1 (LOCAL)
Nov 25 11:05:55 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ade0. client=0x1a6aa60, msg=0x1a6ab70, len=31, csid=(nil), xid=23
Nov 25 11:05:55 nebula3 lvm[6490]: process_work_item: local
Nov 25 11:05:55 nebula3 lvm[6490]: process_local_command: SYNC_NAMES (0x2d) msg=0x1a6ae20, msglen =31, client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: Syncing device names
Nov 25 11:05:55 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 11:05:55 nebula3 lvm[6490]: Got 1 replies, expecting: 1
Nov 25 11:05:55 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 11:05:55 nebula3 lvm[6490]: Got post command condition...
Nov 25 11:05:55 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 11:05:55 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:05:55 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: Send local reply
Nov 25 11:05:55 nebula3 lvm[6490]: Read on local socket 5, len = 31
Nov 25 11:05:55 nebula3 lvm[6490]: check_all_clvmds_running
Nov 25 11:05:55 nebula3 lvm[6490]: down_callback. node 1084811079, state = 1
Nov 25 11:05:55 nebula3 lvm[6490]: down_callback. node 1084811078, state = 3
Nov 25 11:05:55 nebula3 lvm[6490]: down_callback. node 1084811080, state = 3
Nov 25 11:05:55 nebula3 lvm[6490]: Send local reply
Nov 25 11:05:55 nebula3 lvm[6490]: Read on local socket 5, len = 28
Nov 25 11:05:55 nebula3 lvm[6490]: Got pre command condition...
Nov 25 11:05:55 nebula3 lvm[6490]: doing PRE command LOCK_VG 'V_one-fs' at 6 (client=0x1a6aa60)
Nov 25 11:05:55 nebula3 lvm[6490]: unlock_resource: V_one-fs lockid: 1
Nov 25 11:05:55 nebula3 lvm[6490]: Writing status 0 down pipe 13
Nov 25 11:05:55 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 11:05:55 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:05:55 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: distribute command: XID = 24, flags=0x1 (LOCAL)
Nov 25 11:05:55 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ade0. client=0x1a6aa60, msg=0x1a6ab70, len=28, csid=(nil), xid=24
Nov 25 11:05:55 nebula3 lvm[6490]: process_work_item: local
Nov 25 11:05:55 nebula3 lvm[6490]: process_local_command: LOCK_VG (0x33) msg=0x1a6ae20, msglen =28, client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: do_lock_vg: resource 'V_one-fs', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 11:05:55 nebula3 lvm[6490]: Invalidating cached metadata for VG one-fs
Nov 25 11:05:55 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 11:05:55 nebula3 lvm[6490]: Got 1 replies, expecting: 1
Nov 25 11:05:55 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 11:05:55 nebula3 lvm[6490]: Got post command condition...
Nov 25 11:05:55 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 11:05:55 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:05:55 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: Send local reply
Nov 25 11:05:55 nebula3 lvm[6490]: Read on local socket 5, len = 32
Nov 25 11:05:55 nebula3 lvm[6490]: Got pre command condition...
Nov 25 11:05:55 nebula3 lvm[6490]: doing PRE command LOCK_VG 'V_nebula3-vg' at 1 (client=0x1a6aa60)
Nov 25 11:05:55 nebula3 lvm[6490]: lock_resource 'V_nebula3-vg', flags=0, mode=3
Nov 25 11:05:55 nebula3 lvm[6490]: lock_resource returning 0, lock_id=1
Nov 25 11:05:55 nebula3 lvm[6490]: Writing status 0 down pipe 13
Nov 25 11:05:55 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 11:05:55 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:05:55 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: distribute command: XID = 25, flags=0x1 (LOCAL)
Nov 25 11:05:55 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ade0. client=0x1a6aa60, msg=0x1a6ab70, len=32, csid=(nil), xid=25
Nov 25 11:05:55 nebula3 lvm[6490]: process_work_item: local
Nov 25 11:05:55 nebula3 lvm[6490]: process_local_command: LOCK_VG (0x33) msg=0x1a6ae20, msglen =32, client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: do_lock_vg: resource 'V_nebula3-vg', cmd = 0x1 LCK_VG (READ|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 11:05:55 nebula3 lvm[6490]: Invalidating cached metadata for VG nebula3-vg
Nov 25 11:05:55 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 11:05:55 nebula3 lvm[6490]: Got 1 replies, expecting: 1
Nov 25 11:05:55 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 11:05:55 nebula3 lvm[6490]: Got post command condition...
Nov 25 11:05:55 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 11:05:55 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:05:55 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: Send local reply
Nov 25 11:05:55 nebula3 lvm[6490]: Read on local socket 5, len = 31
Nov 25 11:05:55 nebula3 lvm[6490]: Got pre command condition...
Nov 25 11:05:55 nebula3 lvm[6490]: Writing status 0 down pipe 13
Nov 25 11:05:55 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 11:05:55 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:05:55 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: distribute command: XID = 26, flags=0x1 (LOCAL)
Nov 25 11:05:55 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ade0. client=0x1a6aa60, msg=0x1a6ab70, len=31, csid=(nil), xid=26
Nov 25 11:05:55 nebula3 lvm[6490]: process_work_item: local
Nov 25 11:05:55 nebula3 lvm[6490]: process_local_command: SYNC_NAMES (0x2d) msg=0x1a6ae20, msglen =31, client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: Syncing device names
Nov 25 11:05:55 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 11:05:55 nebula3 lvm[6490]: Got 1 replies, expecting: 1
Nov 25 11:05:55 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 11:05:55 nebula3 lvm[6490]: Got post command condition...
Nov 25 11:05:55 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 11:05:55 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:05:55 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: Send local reply
Nov 25 11:05:55 nebula3 lvm[6490]: Read on local socket 5, len = 31
Nov 25 11:05:55 nebula3 lvm[6490]: Got pre command condition...
Nov 25 11:05:55 nebula3 lvm[6490]: Writing status 0 down pipe 13
Nov 25 11:05:55 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 11:05:55 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:05:55 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: distribute command: XID = 27, flags=0x1 (LOCAL)
Nov 25 11:05:55 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ade0. client=0x1a6aa60, msg=0x1a6ab70, len=31, csid=(nil), xid=27
Nov 25 11:05:55 nebula3 lvm[6490]: process_work_item: local
Nov 25 11:05:55 nebula3 lvm[6490]: process_local_command: SYNC_NAMES (0x2d) msg=0x1a6ae20, msglen =31, client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: Syncing device names
Nov 25 11:05:55 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 11:05:55 nebula3 lvm[6490]: Got 1 replies, expecting: 1
Nov 25 11:05:55 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 11:05:55 nebula3 lvm[6490]: Got post command condition...
Nov 25 11:05:55 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 11:05:55 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:05:55 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: Send local reply
Nov 25 11:05:55 nebula3 lvm[6490]: Read on local socket 5, len = 31
Nov 25 11:05:55 nebula3 lvm[6490]: Got pre command condition...
Nov 25 11:05:55 nebula3 lvm[6490]: Writing status 0 down pipe 13
Nov 25 11:05:55 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 11:05:55 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:05:55 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: distribute command: XID = 28, flags=0x1 (LOCAL)
Nov 25 11:05:55 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ade0. client=0x1a6aa60, msg=0x1a6ab70, len=31, csid=(nil), xid=28
Nov 25 11:05:55 nebula3 lvm[6490]: process_work_item: local
Nov 25 11:05:55 nebula3 lvm[6490]: process_local_command: SYNC_NAMES (0x2d) msg=0x1a6ae20, msglen =31, client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: Syncing device names
Nov 25 11:05:55 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 11:05:55 nebula3 lvm[6490]: Got 1 replies, expecting: 1
Nov 25 11:05:55 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 11:05:55 nebula3 lvm[6490]: Got post command condition...
Nov 25 11:05:55 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 11:05:55 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:05:55 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: Send local reply
Nov 25 11:05:55 nebula3 lvm[6490]: Read on local socket 5, len = 31
Nov 25 11:05:55 nebula3 lvm[6490]: Got pre command condition...
Nov 25 11:05:55 nebula3 lvm[6490]: Writing status 0 down pipe 13
Nov 25 11:05:55 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 11:05:55 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:05:55 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: distribute command: XID = 29, flags=0x1 (LOCAL)
Nov 25 11:05:55 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ade0. client=0x1a6aa60, msg=0x1a6ab70, len=31, csid=(nil), xid=29
Nov 25 11:05:55 nebula3 lvm[6490]: process_work_item: local
Nov 25 11:05:55 nebula3 lvm[6490]: process_local_command: SYNC_NAMES (0x2d) msg=0x1a6ae20, msglen =31, client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: Syncing device names
Nov 25 11:05:55 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 11:05:55 nebula3 lvm[6490]: Got 1 replies, expecting: 1
Nov 25 11:05:55 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 11:05:55 nebula3 lvm[6490]: Got post command condition...
Nov 25 11:05:55 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 11:05:55 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:05:55 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: Send local reply
Nov 25 11:05:55 nebula3 lvm[6490]: Read on local socket 5, len = 31
Nov 25 11:05:55 nebula3 lvm[6490]: check_all_clvmds_running
Nov 25 11:05:55 nebula3 lvm[6490]: down_callback. node 1084811079, state = 1
Nov 25 11:05:55 nebula3 lvm[6490]: down_callback. node 1084811078, state = 3
Nov 25 11:05:55 nebula3 lvm[6490]: down_callback. node 1084811080, state = 3
Nov 25 11:05:55 nebula3 lvm[6490]: Send local reply
Nov 25 11:05:55 nebula3 lvm[6490]: Read on local socket 5, len = 32
Nov 25 11:05:55 nebula3 lvm[6490]: Got pre command condition...
Nov 25 11:05:55 nebula3 lvm[6490]: doing PRE command LOCK_VG 'V_nebula3-vg' at 6 (client=0x1a6aa60)
Nov 25 11:05:55 nebula3 lvm[6490]: unlock_resource: V_nebula3-vg lockid: 1
Nov 25 11:05:55 nebula3 lvm[6490]: Writing status 0 down pipe 13
Nov 25 11:05:55 nebula3 lvm[6490]: Waiting to do post command - state = 0
Nov 25 11:05:55 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:05:55 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: distribute command: XID = 30, flags=0x1 (LOCAL)
Nov 25 11:05:55 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ade0. client=0x1a6aa60, msg=0x1a6ab70, len=32, csid=(nil), xid=30
Nov 25 11:05:55 nebula3 lvm[6490]: process_work_item: local
Nov 25 11:05:55 nebula3 lvm[6490]: process_local_command: LOCK_VG (0x33) msg=0x1a6ae20, msglen =32, client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: do_lock_vg: resource 'V_nebula3-vg', cmd = 0x6 LCK_VG (UNLOCK|VG), flags = 0x0 ( ), critical_section = 0
Nov 25 11:05:55 nebula3 lvm[6490]: Invalidating cached metadata for VG nebula3-vg
Nov 25 11:05:55 nebula3 lvm[6490]: Reply from node 40a8e748: 0 bytes
Nov 25 11:05:55 nebula3 lvm[6490]: Got 1 replies, expecting: 1
Nov 25 11:05:55 nebula3 lvm[6490]: LVM thread waiting for work
Nov 25 11:05:55 nebula3 lvm[6490]: Got post command condition...
Nov 25 11:05:55 nebula3 lvm[6490]: Waiting for next pre command
Nov 25 11:05:55 nebula3 lvm[6490]: read on PIPE 12: 4 bytes: status: 0
Nov 25 11:05:55 nebula3 lvm[6490]: background routine status was 0, sock_client=0x1a6aa60
Nov 25 11:05:55 nebula3 lvm[6490]: Send local reply
Nov 25 11:05:55 nebula3 lvm[6490]: Read on local socket 5, len = 0
Nov 25 11:05:55 nebula3 lvm[6490]: EOF on local socket: inprogress=0
Nov 25 11:05:55 nebula3 lvm[6490]: Waiting for child thread
Nov 25 11:05:55 nebula3 lvm[6490]: Got pre command condition...
Nov 25 11:05:55 nebula3 lvm[6490]: Subthread finished
Nov 25 11:05:55 nebula3 lvm[6490]: Joined child thread
Nov 25 11:05:55 nebula3 lvm[6490]: ret == 0, errno = 0. removing client
Nov 25 11:05:55 nebula3 lvm[6490]: add_to_lvmqueue: cmd=0x1a6ab70. client=0x1a6aa60, msg=(nil), len=0, csid=(nil), xid=30
Nov 25 11:05:55 nebula3 lvm[6490]: process_work_item: free fd -1
Nov 25 11:05:55 nebula3 lvm[6490]: LVM thread waiting for work