[Pacemaker] DRBD+OCFS2+Pacemaker on Ubuntu 12.04, DRBD via Pacemaker doesn't start when Corosync is invoked

kamal kishi kamal.kishi at gmail.com
Fri May 16 07:32:57 EDT 2014


Hi Emi,

I changed the DRBD and Pacemaker configs as attached,
but still no results.

The exit code is 3 now.

I have attached the logs too.
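
For reference, here is the failing step from the log reproduced by hand, plus a
quick check of whether the userland tools and the kernel module agree on a
version (a minimal sketch, assuming the stock drbd8-utils tooling on Ubuntu
12.04, run on both nodes):

  # the command the linbit resource agent calls, as shown in the log
  drbdadm -c /etc/drbd.conf syncer r0

  # compare the drbdadm userland version with the loaded kernel module
  drbdadm --version
  cat /proc/drbd
  dpkg -l | grep -i drbd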



On Thu, May 15, 2014 at 2:48 PM, emmanuel segura <emi2fast at gmail.com> wrote:

> Are you sure your DRBD works by hand? This is from your log:
>
> May 15 12:26:04 server1 lrmd: [1211]: info: RA output:
> (Cluster-FS-DRBD:0:start:stderr) Command 'drbdsetup new-resource r0'
> terminated with exit code 20
> May 15 12:26:04 server1 drbd[1808]: ERROR: r0: Called drbdadm -c
> /etc/drbd.conf new-resource r0
> May 15 12:26:04 server1 drbd[1808]: ERROR: r0: Exit code 20
> May 15 12:26:04 server1 drbd[1808]: ERROR: r0: Command output:
>
>
> 2014-05-15 10:37 GMT+02:00 kamal kishi <kamal.kishi at gmail.com>:
>
> Hi Emi,
>>
>> The document also says the following:
>>
>> "It is also possible to use drbd.conf as a flat configuration file
>> without any include statements at all. Such a configuration, however,
>> quickly becomes cluttered and hard to manage, which is why the
>> multiple-file approach is the preferred one.
>>
>> Regardless of which approach you employ, you should always make sure that
>> drbd.conf, and any other files it includes, are *exactly identical* on
>> all participating cluster nodes."
>>
>> I've tried it that way too, but to no avail.
>>
>> I tried using:
>>
>> params device="/dev/drbd0" directory="/cluster" fstype="ocfs2" \
>>
>> instead of
>>
>> params device="/dev/drbd/by-res/r0" directory="/cluster" fstype="ocfs2" \
>>
>> even that failed.
>>
>> But my doubt is this: I'm able to work with DRBD manually without any
>> issue, so why can't I do so via Pacemaker?
>>
>> Is there any useful info in the logs?
>>
>> I did not find any, so I'm asking.
>>
>>
>>
>> On Thu, May 15, 2014 at 1:51 PM, emmanuel segura <emi2fast at gmail.com> wrote:
>>
>>> You haven't declared your DRBD resource r0 in the configuration; read this:
>>> http://www.drbd.org/users-guide/s-configure-resource.html
>>>
>>>
>>> 2014-05-15 9:33 GMT+02:00 kamal kishi <kamal.kishi at gmail.com>:
>>>
>>>>  Hi All,
>>>>
>>>> My configuration is simple and straightforward: Ubuntu 12.04 is used to run
>>>> Pacemaker.
>>>> Pacemaker runs DRBD and OCFS2.
>>>>
>>>> DRBD can be started manually, without any errors or issues, in a
>>>> primary/primary configuration.
>>>>
>>>> (NOTE: This configuration is being set up as the base for an
>>>> active-active Xen configuration, hence "become-primary-on both;" is
>>>> used in the DRBD config.)
>>>>
>>>> Configuration attached :
>>>> 1. DRBD
>>>> 2. Pacemaker
>>>>
>>>> Log attached :
>>>> Syslog1 - Server 1
>>>> Syslog2 - Server 2
>>>>
>>>> I hope to get a solution to this.
>>>>
>>>> --
>>>> Regards,
>>>> Kamal Kishore B V
>>>>
>>>>
>>>
>>>
>>> --
>>> this is my life, and I live it for as long as God wills
>>>
>>>
>>
>>
>> --
>> Regards,
>> Kamal Kishore B V
>>
>>
>
>
> --
> this is my life, and I live it for as long as God wills
>
>


-- 
Regards,
Kamal Kishore B V
-------------- next part --------------
global { usage-count no; }
common { syncer { rate 15M; } }
resource r0 {
    protocol C;
    startup {
        become-primary-on both;
    }
    disk {
        fencing resource-only;
    }
    handlers {
        fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    }
    net {
        allow-two-primaries;
        #cram-hmac-alg sha1;
        shared-secret "kalki";
    }
    on server1 {
        device /dev/drbd0;
        disk /dev/sda3;
        address 192.168.0.92:7788;
        meta-disk internal;
    }
    on server2 {
        device /dev/drbd0;
        disk /dev/sda3;
        address 192.168.0.93:7788;
        meta-disk internal;
    }
}
-------------- next part --------------
crm configure
property no-quorum-policy=ignore
property stonith-enabled=false
property default-resource-stickiness=1000
commit
bye

crm configure
primitive Cluster-FS-DRBD ocf:linbit:drbd \
        params drbd_resource="r0" \
        op start interval="0" timeout="240s" \
        op stop interval="0" timeout="100s" \
        op monitor interval="29s" role="Master" \
        op monitor interval="31s" role="Slave" \
        meta migration-threshold="3" failure-timeout="60s"

ms Cluster-FS-DRBD-Master Cluster-FS-DRBD \
        meta resource-stickiness="100" master-max="2" notify="true" interleave="true" target-role="started"

primitive Cluster-FS-Mount ocf:heartbeat:Filesystem \
        params device="/dev/drbd0" directory="/cluster" fstype="ocfs2" \
        op monitor interval="10s" timeout="60s" \
        op start interval="0" timeout="90s" \
        op stop interval="0" timeout="60s" \
        meta migration-threshold="3" failure-timeout="60s"

clone Cluster-FS-Mount-Clone Cluster-FS-Mount \
        meta interleave="true" ordered="true" target-role="started"

colocation colOCFS2-with-DRBDMaster inf: Cluster-FS-Mount-Clone Cluster-FS-DRBD-Master:Master

order Cluster-FS-After-DRBD inf: \
        Cluster-FS-DRBD-Master:promote \
        Cluster-FS-Mount-Clone:start
commit
-------------- next part --------------
May 16 16:42:32 server1 cib: [2478]: info: cib_stats: Processed 77 operations (0.00us average, 0% utilization) in the last 10min
May 16 16:42:50 server1 crmd: [2482]: info: update_dc: Unset DC server2
May 16 16:42:50 server1 crmd: [2482]: info: do_state_transition: State transition S_NOT_DC -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
May 16 16:42:50 server1 crmd: [2482]: info: update_dc: Set DC to server2 (3.0.5)
May 16 16:42:50 server1 crmd: [2482]: info: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
May 16 16:42:50 server1 attrd: [2480]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
May 16 16:42:50 server1 attrd: [2480]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
May 16 16:42:50 server1 crmd: [2482]: info: do_lrm_rsc_op: Performing key=7:3:7:8bcf8e15-a13a-4ab3-b651-dd8e89f3d9fc op=Cluster-FS-DRBD:0_monitor_0 )
May 16 16:42:50 server1 lrmd: [2479]: info: rsc:Cluster-FS-DRBD:0 probe[2] (pid 2493)
May 16 16:42:50 server1 crmd: [2482]: info: do_lrm_rsc_op: Performing key=8:3:7:8bcf8e15-a13a-4ab3-b651-dd8e89f3d9fc op=Cluster-FS-Mount:0_monitor_0 )
May 16 16:42:50 server1 lrmd: [2479]: info: rsc:Cluster-FS-Mount:0 probe[3] (pid 2494)
May 16 16:42:50 server1 lrmd: [2479]: info: operation monitor[3] on Cluster-FS-Mount:0 for client 2482: pid 2494 exited with return code 7
May 16 16:42:50 server1 crmd: [2482]: info: process_lrm_event: LRM operation Cluster-FS-Mount:0_monitor_0 (call=3, rc=7, cib-update=14, confirmed=true) not running
May 16 16:42:50 server1 lrmd: [2479]: info: RA output: (Cluster-FS-DRBD:0:probe:stderr) Could not connect to 'drbd' generic netlink family
May 16 16:42:50 server1 crm_attribute: [2553]: info: Invoked: crm_attribute -N server1 -n master-Cluster-FS-DRBD:0 -l reboot -D 
May 16 16:42:50 server1 lrmd: [2479]: info: operation monitor[2] on Cluster-FS-DRBD:0 for client 2482: pid 2493 exited with return code 7
May 16 16:42:50 server1 crmd: [2482]: info: process_lrm_event: LRM operation Cluster-FS-DRBD:0_monitor_0 (call=2, rc=7, cib-update=15, confirmed=true) not running
May 16 16:42:50 server1 crmd: [2482]: info: do_lrm_rsc_op: Performing key=9:3:0:8bcf8e15-a13a-4ab3-b651-dd8e89f3d9fc op=Cluster-FS-DRBD:0_start_0 )
May 16 16:42:50 server1 lrmd: [2479]: info: rsc:Cluster-FS-DRBD:0 start[4] (pid 2554)
May 16 16:42:50 server1 lrmd: [2479]: info: RA output: (Cluster-FS-DRBD:0:start:stderr) Could not connect to 'drbd' generic netlink family
May 16 16:42:50 server1 lrmd: [2479]: info: RA output: (Cluster-FS-DRBD:0:start:stdout) 
May 16 16:42:50 server1 lrmd: [2479]: info: RA output: (Cluster-FS-DRBD:0:start:stderr) drbdadm: Unknown command 'syncer'
May 16 16:42:50 server1 drbd[2554]: ERROR: r0: Called drbdadm -c /etc/drbd.conf syncer r0
May 16 16:42:50 server1 drbd[2554]: ERROR: r0: Exit code 3
May 16 16:42:50 server1 drbd[2554]: ERROR: r0: Command output: 
May 16 16:42:50 server1 lrmd: [2479]: info: RA output: (Cluster-FS-DRBD:0:start:stdout) 
May 16 16:42:50 server1 lrmd: [2479]: info: operation start[4] on Cluster-FS-DRBD:0 for client 2482: pid 2554 exited with return code 1
May 16 16:42:50 server1 crmd: [2482]: info: process_lrm_event: LRM operation Cluster-FS-DRBD:0_start_0 (call=4, rc=1, cib-update=16, confirmed=true) unknown error
May 16 16:42:50 server1 attrd: [2480]: notice: attrd_ais_dispatch: Update relayed from server2
May 16 16:42:50 server1 attrd: [2480]: notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-Cluster-FS-DRBD:0 (INFINITY)
May 16 16:42:50 server1 crmd: [2482]: info: do_lrm_rsc_op: Performing key=49:3:0:8bcf8e15-a13a-4ab3-b651-dd8e89f3d9fc op=Cluster-FS-DRBD:0_notify_0 )
May 16 16:42:50 server1 lrmd: [2479]: info: rsc:Cluster-FS-DRBD:0 notify[5] (pid 2612)
May 16 16:42:50 server1 attrd: [2480]: notice: attrd_perform_update: Sent update 21: fail-count-Cluster-FS-DRBD:0=INFINITY
May 16 16:42:50 server1 attrd: [2480]: notice: attrd_ais_dispatch: Update relayed from server2
May 16 16:42:50 server1 attrd: [2480]: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-Cluster-FS-DRBD:0 (1400238770)
May 16 16:42:50 server1 attrd: [2480]: notice: attrd_perform_update: Sent update 24: last-failure-Cluster-FS-DRBD:0=1400238770
May 16 16:42:50 server1 lrmd: [2479]: info: RA output: (Cluster-FS-DRBD:0:notify:stderr) Could not connect to 'drbd' generic netlink family
May 16 16:42:50 server1 kernel: [29460.904629] block drbd0: Starting worker thread (from drbdsetup-83 [2646])
May 16 16:42:50 server1 kernel: [29460.904710] block drbd0: disk( Diskless -> Attaching ) 
May 16 16:42:50 server1 kernel: [29460.906449] block drbd0: Found 4 transactions (4 active extents) in activity log.
May 16 16:42:50 server1 kernel: [29460.906453] block drbd0: Method to ensure write ordering: flush
May 16 16:42:50 server1 kernel: [29460.906458] block drbd0: drbd_bm_resize called with capacity == 78122592
May 16 16:42:50 server1 kernel: [29460.906686] block drbd0: resync bitmap: bits=9765324 words=152584 pages=299
May 16 16:42:50 server1 kernel: [29460.906689] block drbd0: size = 37 GB (39061296 KB)
May 16 16:42:50 server1 kernel: [29460.923678] block drbd0: bitmap READ of 299 pages took 5 jiffies
May 16 16:42:50 server1 kernel: [29460.923849] block drbd0: recounting of set bits took additional 0 jiffies
May 16 16:42:50 server1 kernel: [29460.923851] block drbd0: 0 KB (0 bits) marked out-of-sync by on disk bit-map.
May 16 16:42:50 server1 kernel: [29460.923857] block drbd0: disk( Attaching -> Consistent ) 
May 16 16:42:50 server1 kernel: [29460.923859] block drbd0: attached to UUIDs 23A83AE9AB70041C:0000000000000000:0001000000000000:0001000000000004
May 16 16:42:50 server1 kernel: [29460.941924] block drbd0: conn( StandAlone -> Unconnected ) 
May 16 16:42:50 server1 kernel: [29460.941938] block drbd0: Starting receiver thread (from drbd0_worker [2647])
May 16 16:42:50 server1 kernel: [29460.942855] block drbd0: receiver (re)started
May 16 16:42:50 server1 kernel: [29460.942862] block drbd0: conn( Unconnected -> WFConnection ) 
May 16 16:42:50 server1 lrmd: [2479]: info: RA output: (Cluster-FS-DRBD:0:notify:stdout) drbdsetup-83 0 disk /dev/sda3 /dev/sda3 internal --set-defaults --create-device --fencing=resource-only 
May 16 16:42:50 server1 lrmd: [2479]: info: RA output: (Cluster-FS-DRBD:0:notify:stdout) drbdsetup-83 0 syncer --set-defaults --create-device --rate=15M 
May 16 16:42:50 server1 lrmd: [2479]: info: RA output: (Cluster-FS-DRBD:0:notify:stdout) drbdsetup-83 0 net ipv4:192.168.0.92:7788 ipv4:192.168.0.93:7788 C --set-defaults --create-device --allow-two-primaries --shared-secret=kalki 
May 16 16:42:50 server1 lrmd: [2479]: info: operation notify[5] on Cluster-FS-DRBD:0 for client 2482: pid 2612 exited with return code 0
May 16 16:42:50 server1 crmd: [2482]: info: send_direct_ack: ACK'ing resource op Cluster-FS-DRBD:0_notify_0 from 49:3:0:8bcf8e15-a13a-4ab3-b651-dd8e89f3d9fc: lrm_invoke-lrmd-1400238770-10
May 16 16:42:50 server1 crmd: [2482]: info: process_lrm_event: LRM operation Cluster-FS-DRBD:0_notify_0 (call=5, rc=0, cib-update=0, confirmed=true) ok
May 16 16:42:50 server1 crmd: [2482]: info: do_lrm_rsc_op: Performing key=45:4:0:8bcf8e15-a13a-4ab3-b651-dd8e89f3d9fc op=Cluster-FS-DRBD:0_notify_0 )
May 16 16:42:50 server1 lrmd: [2479]: info: rsc:Cluster-FS-DRBD:0 notify[6] (pid 2667)
May 16 16:42:50 server1 lrmd: [2479]: info: operation notify[6] on Cluster-FS-DRBD:0 for client 2482: pid 2667 exited with return code 0
May 16 16:42:50 server1 crmd: [2482]: info: send_direct_ack: ACK'ing resource op Cluster-FS-DRBD:0_notify_0 from 45:4:0:8bcf8e15-a13a-4ab3-b651-dd8e89f3d9fc: lrm_invoke-lrmd-1400238770-11
May 16 16:42:50 server1 crmd: [2482]: info: process_lrm_event: LRM operation Cluster-FS-DRBD:0_notify_0 (call=6, rc=0, cib-update=0, confirmed=true) ok
May 16 16:42:50 server1 crmd: [2482]: info: do_lrm_rsc_op: Performing key=2:4:0:8bcf8e15-a13a-4ab3-b651-dd8e89f3d9fc op=Cluster-FS-DRBD:0_stop_0 )
May 16 16:42:50 server1 lrmd: [2479]: info: rsc:Cluster-FS-DRBD:0 stop[7] (pid 2690)
May 16 16:42:51 server1 lrmd: [2479]: info: RA output: (Cluster-FS-DRBD:0:stop:stderr) Could not connect to 'drbd' generic netlink family
May 16 16:42:51 server1 lrmd: [2479]: info: RA output: (Cluster-FS-DRBD:0:stop:stdout) 
May 16 16:42:51 server1 kernel: [29461.073006] block drbd0: conn( WFConnection -> Disconnecting ) 
May 16 16:42:51 server1 kernel: [29461.073024] block drbd0: Discarding network configuration.
May 16 16:42:51 server1 kernel: [29461.073048] block drbd0: Connection closed
May 16 16:42:51 server1 kernel: [29461.073052] block drbd0: conn( Disconnecting -> StandAlone ) 
May 16 16:42:51 server1 kernel: [29461.073073] block drbd0: receiver terminated
May 16 16:42:51 server1 kernel: [29461.073076] block drbd0: Terminating drbd0_receiver
May 16 16:42:51 server1 kernel: [29461.073083] block drbd0: disk( Consistent -> Failed ) 
May 16 16:42:51 server1 kernel: [29461.073096] block drbd0: Sending state for detaching disk failed
May 16 16:42:51 server1 kernel: [29461.073104] block drbd0: disk( Failed -> Diskless ) 
May 16 16:42:51 server1 kernel: [29461.073146] block drbd0: drbd_bm_resize called with capacity == 0
May 16 16:42:51 server1 kernel: [29461.073206] block drbd0: worker terminated
May 16 16:42:51 server1 kernel: [29461.073208] block drbd0: Terminating drbd0_worker
May 16 16:42:51 server1 crm_attribute: [2720]: info: Invoked: crm_attribute -N server1 -n master-Cluster-FS-DRBD:0 -l reboot -D 
May 16 16:42:51 server1 lrmd: [2479]: info: RA output: (Cluster-FS-DRBD:0:stop:stdout) 
May 16 16:42:51 server1 lrmd: [2479]: info: operation stop[7] on Cluster-FS-DRBD:0 for client 2482: pid 2690 exited with return code 0
May 16 16:42:51 server1 crmd: [2482]: info: process_lrm_event: LRM operation Cluster-FS-DRBD:0_stop_0 (call=7, rc=0, cib-update=17, confirmed=true) ok
May 16 16:52:32 server1 cib: [2478]: info: cib_stats: Processed 60 operations (0.00us average, 0% utilization) in the last 10min
May 16 16:57:51 server1 crmd: [2482]: info: handle_failcount_op: Removing failcount for Cluster-FS-DRBD:0
May 16 16:57:51 server1 attrd: [2480]: notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-Cluster-FS-DRBD:0 (<null>)
May 16 16:57:51 server1 attrd: [2480]: notice: attrd_perform_update: Sent delete 28: node=server1, attr=fail-count-Cluster-FS-DRBD:0, id=<n/a>, set=(null), section=status
May 16 16:57:51 server1 crmd: [2482]: info: do_lrm_rsc_op: Performing key=7:5:0:8bcf8e15-a13a-4ab3-b651-dd8e89f3d9fc op=Cluster-FS-DRBD:0_start_0 )
May 16 16:57:51 server1 lrmd: [2479]: info: rsc:Cluster-FS-DRBD:0 start[8] (pid 2722)
May 16 16:57:51 server1 attrd: [2480]: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-Cluster-FS-DRBD:0 (<null>)
May 16 16:57:51 server1 attrd: [2480]: notice: attrd_perform_update: Sent delete 30: node=server1, attr=last-failure-Cluster-FS-DRBD:0, id=<n/a>, set=(null), section=status
May 16 16:57:51 server1 attrd: [2480]: notice: attrd_perform_update: Sent delete 32: node=server1, attr=fail-count-Cluster-FS-DRBD:0, id=<n/a>, set=(null), section=status
May 16 16:57:51 server1 attrd: [2480]: notice: attrd_perform_update: Sent delete 35: node=server1, attr=last-failure-Cluster-FS-DRBD:0, id=<n/a>, set=(null), section=status
May 16 16:57:51 server1 lrmd: [2479]: info: RA output: (Cluster-FS-DRBD:0:start:stderr) Could not connect to 'drbd' generic netlink family
May 16 16:57:51 server1 lrmd: [2479]: info: RA output: (Cluster-FS-DRBD:0:start:stdout) 
May 16 16:57:51 server1 lrmd: [2479]: info: RA output: (Cluster-FS-DRBD:0:start:stderr) drbdadm: Unknown command 'syncer'
May 16 16:57:51 server1 drbd[2722]: ERROR: r0: Called drbdadm -c /etc/drbd.conf syncer r0
May 16 16:57:51 server1 drbd[2722]: ERROR: r0: Exit code 3
May 16 16:57:51 server1 drbd[2722]: ERROR: r0: Command output: 
May 16 16:57:51 server1 lrmd: [2479]: info: RA output: (Cluster-FS-DRBD:0:start:stdout) 
May 16 16:57:51 server1 lrmd: [2479]: info: operation start[8] on Cluster-FS-DRBD:0 for client 2482: pid 2722 exited with return code 1
May 16 16:57:51 server1 crmd: [2482]: info: process_lrm_event: LRM operation Cluster-FS-DRBD:0_start_0 (call=8, rc=1, cib-update=18, confirmed=true) unknown error
May 16 16:57:51 server1 attrd: [2480]: notice: attrd_ais_dispatch: Update relayed from server2
May 16 16:57:51 server1 attrd: [2480]: notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-Cluster-FS-DRBD:0 (INFINITY)
May 16 16:57:51 server1 attrd: [2480]: notice: attrd_perform_update: Sent update 39: fail-count-Cluster-FS-DRBD:0=INFINITY
May 16 16:57:51 server1 attrd: [2480]: notice: attrd_ais_dispatch: Update relayed from server2
May 16 16:57:51 server1 attrd: [2480]: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-Cluster-FS-DRBD:0 (1400239670)
May 16 16:57:51 server1 attrd: [2480]: notice: attrd_perform_update: Sent update 42: last-failure-Cluster-FS-DRBD:0=1400239670
May 16 16:57:51 server1 crmd: [2482]: info: do_lrm_rsc_op: Performing key=47:5:0:8bcf8e15-a13a-4ab3-b651-dd8e89f3d9fc op=Cluster-FS-DRBD:0_notify_0 )
May 16 16:57:51 server1 lrmd: [2479]: info: rsc:Cluster-FS-DRBD:0 notify[9] (pid 2779)
May 16 16:57:51 server1 lrmd: [2479]: info: RA output: (Cluster-FS-DRBD:0:notify:stderr) Could not connect to 'drbd' generic netlink family
May 16 16:57:51 server1 kernel: [30361.680452] block drbd0: Starting worker thread (from drbdsetup-83 [2811])
May 16 16:57:51 server1 kernel: [30361.680627] block drbd0: disk( Diskless -> Attaching ) 
May 16 16:57:51 server1 kernel: [30361.682452] block drbd0: Found 4 transactions (4 active extents) in activity log.
May 16 16:57:51 server1 kernel: [30361.682455] block drbd0: Method to ensure write ordering: flush
May 16 16:57:51 server1 kernel: [30361.682459] block drbd0: drbd_bm_resize called with capacity == 78122592
May 16 16:57:51 server1 kernel: [30361.682689] block drbd0: resync bitmap: bits=9765324 words=152584 pages=299
May 16 16:57:51 server1 kernel: [30361.682692] block drbd0: size = 37 GB (39061296 KB)
May 16 16:57:51 server1 kernel: [30361.688375] block drbd0: bitmap READ of 299 pages took 2 jiffies
May 16 16:57:51 server1 kernel: [30361.688545] block drbd0: recounting of set bits took additional 0 jiffies
May 16 16:57:51 server1 kernel: [30361.688548] block drbd0: 0 KB (0 bits) marked out-of-sync by on disk bit-map.
May 16 16:57:51 server1 kernel: [30361.688553] block drbd0: disk( Attaching -> Consistent ) 
May 16 16:57:51 server1 kernel: [30361.688556] block drbd0: attached to UUIDs 23A83AE9AB70041C:0000000000000000:0001000000000000:0001000000000004
May 16 16:57:51 server1 kernel: [30361.705091] block drbd0: conn( StandAlone -> Unconnected ) 
May 16 16:57:51 server1 kernel: [30361.705104] block drbd0: Starting receiver thread (from drbd0_worker [2812])
May 16 16:57:51 server1 kernel: [30361.705170] block drbd0: receiver (re)started
May 16 16:57:51 server1 kernel: [30361.705177] block drbd0: conn( Unconnected -> WFConnection ) 
May 16 16:57:51 server1 lrmd: [2479]: info: RA output: (Cluster-FS-DRBD:0:notify:stdout) drbdsetup-83 0 disk /dev/sda3 /dev/sda3 internal --set-defaults --create-device --fencing=resource-only 
May 16 16:57:51 server1 lrmd: [2479]: info: RA output: (Cluster-FS-DRBD:0:notify:stdout) drbdsetup-83 0 syncer --set-defaults --create-device --rate=15M 
May 16 16:57:51 server1 lrmd: [2479]: info: RA output: (Cluster-FS-DRBD:0:notify:stdout) drbdsetup-83 0 net ipv4:192.168.0.92:7788 ipv4:192.168.0.93:7788 C --set-defaults --create-device --allow-two-primaries --shared-secret=kalki 
May 16 16:57:51 server1 lrmd: [2479]: info: operation notify[9] on Cluster-FS-DRBD:0 for client 2482: pid 2779 exited with return code 0
May 16 16:57:51 server1 crmd: [2482]: info: send_direct_ack: ACK'ing resource op Cluster-FS-DRBD:0_notify_0 from 47:5:0:8bcf8e15-a13a-4ab3-b651-dd8e89f3d9fc: lrm_invoke-lrmd-1400239671-12
May 16 16:57:51 server1 crmd: [2482]: info: process_lrm_event: LRM operation Cluster-FS-DRBD:0_notify_0 (call=9, rc=0, cib-update=0, confirmed=true) ok
May 16 16:57:51 server1 crmd: [2482]: info: do_lrm_rsc_op: Performing key=45:6:0:8bcf8e15-a13a-4ab3-b651-dd8e89f3d9fc op=Cluster-FS-DRBD:0_notify_0 )
May 16 16:57:51 server1 lrmd: [2479]: info: rsc:Cluster-FS-DRBD:0 notify[10] (pid 2832)
May 16 16:57:51 server1 lrmd: [2479]: info: operation notify[10] on Cluster-FS-DRBD:0 for client 2482: pid 2832 exited with return code 0
May 16 16:57:51 server1 crmd: [2482]: info: send_direct_ack: ACK'ing resource op Cluster-FS-DRBD:0_notify_0 from 45:6:0:8bcf8e15-a13a-4ab3-b651-dd8e89f3d9fc: lrm_invoke-lrmd-1400239671-13
May 16 16:57:51 server1 crmd: [2482]: info: process_lrm_event: LRM operation Cluster-FS-DRBD:0_notify_0 (call=10, rc=0, cib-update=0, confirmed=true) ok
May 16 16:57:51 server1 crmd: [2482]: info: do_lrm_rsc_op: Performing key=2:6:0:8bcf8e15-a13a-4ab3-b651-dd8e89f3d9fc op=Cluster-FS-DRBD:0_stop_0 )
May 16 16:57:51 server1 lrmd: [2479]: info: rsc:Cluster-FS-DRBD:0 stop[11] (pid 2855)
May 16 16:57:51 server1 lrmd: [2479]: info: RA output: (Cluster-FS-DRBD:0:stop:stderr) Could not connect to 'drbd' generic netlink family
May 16 16:57:51 server1 lrmd: [2479]: info: RA output: (Cluster-FS-DRBD:0:stop:stdout) 
May 16 16:57:51 server1 kernel: [30361.844841] block drbd0: conn( WFConnection -> Disconnecting ) 
May 16 16:57:51 server1 kernel: [30361.844861] block drbd0: Discarding network configuration.
May 16 16:57:51 server1 kernel: [30361.844881] block drbd0: Connection closed
May 16 16:57:51 server1 kernel: [30361.844885] block drbd0: conn( Disconnecting -> StandAlone ) 
May 16 16:57:51 server1 kernel: [30361.844905] block drbd0: receiver terminated
May 16 16:57:51 server1 kernel: [30361.844907] block drbd0: Terminating drbd0_receiver
May 16 16:57:51 server1 kernel: [30361.844928] block drbd0: disk( Consistent -> Failed ) 
May 16 16:57:51 server1 kernel: [30361.844939] block drbd0: Sending state for detaching disk failed
May 16 16:57:51 server1 kernel: [30361.844947] block drbd0: disk( Failed -> Diskless ) 
May 16 16:57:51 server1 kernel: [30361.844988] block drbd0: drbd_bm_resize called with capacity == 0
May 16 16:57:51 server1 kernel: [30361.845051] block drbd0: worker terminated
May 16 16:57:51 server1 kernel: [30361.845053] block drbd0: Terminating drbd0_worker
May 16 16:57:51 server1 crm_attribute: [2885]: info: Invoked: crm_attribute -N server1 -n master-Cluster-FS-DRBD:0 -l reboot -D 
May 16 16:57:51 server1 lrmd: [2479]: info: RA output: (Cluster-FS-DRBD:0:stop:stdout) 
May 16 16:57:51 server1 lrmd: [2479]: info: operation stop[11] on Cluster-FS-DRBD:0 for client 2482: pid 2855 exited with return code 0
May 16 16:57:51 server1 crmd: [2482]: info: process_lrm_event: LRM operation Cluster-FS-DRBD:0_stop_0 (call=11, rc=0, cib-update=19, confirmed=true) ok
-------------- next part --------------
May 16 16:42:49 server2 cib: [2611]: info: cib_replace_notify: Replaced: 0.29.27 -> 0.30.1 from <null>
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: - <cib admin_epoch="0" epoch="29" num_updates="27" />
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: + <cib epoch="30" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.5" update-origin="server2" update-client="cibadmin" cib-last-written="Fri May 16 16:38:34 2014" have-quorum="1" dc-uuid="server2" >
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +   <configuration >
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +     <resources >
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +       <master id="Cluster-FS-DRBD-Master" __crm_diff_marker__="added:top" >
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +         <meta_attributes id="Cluster-FS-DRBD-Master-meta_attributes" >
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +           <nvpair id="Cluster-FS-DRBD-Master-meta_attributes-resource-stickiness" name="resource-stickiness" value="100" />
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +           <nvpair id="Cluster-FS-DRBD-Master-meta_attributes-master-max" name="master-max" value="2" />
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +           <nvpair id="Cluster-FS-DRBD-Master-meta_attributes-notify" name="notify" value="true" />
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +           <nvpair id="Cluster-FS-DRBD-Master-meta_attributes-interleave" name="interleave" value="true" />
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +           <nvpair id="Cluster-FS-DRBD-Master-meta_attributes-target-role" name="target-role" value="started" />
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +         </meta_attributes>
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +         <primitive class="ocf" id="Cluster-FS-DRBD" provider="linbit" type="drbd" >
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +           <instance_attributes id="Cluster-FS-DRBD-instance_attributes" >
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +             <nvpair id="Cluster-FS-DRBD-instance_attributes-drbd_resource" name="drbd_resource" value="r0" />
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +           </instance_attributes>
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +           <operations >
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +             <op id="Cluster-FS-DRBD-start-0" interval="0" name="start" timeout="240s" />
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +             <op id="Cluster-FS-DRBD-stop-0" interval="0" name="stop" timeout="100s" />
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +             <op id="Cluster-FS-DRBD-monitor-29s" interval="29s" name="monitor" role="Master" />
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +             <op id="Cluster-FS-DRBD-monitor-31s" interval="31s" name="monitor" role="Slave" />
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +           </operations>
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +           <meta_attributes id="Cluster-FS-DRBD-meta_attributes" >
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +             <nvpair id="Cluster-FS-DRBD-meta_attributes-migration-threshold" name="migration-threshold" value="3" />
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +             <nvpair id="Cluster-FS-DRBD-meta_attributes-failure-timeout" name="failure-timeout" value="60s" />
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +           </meta_attributes>
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +         </primitive>
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +       </master>
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +       <clone id="Cluster-FS-Mount-Clone" __crm_diff_marker__="added:top" >
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +         <meta_attributes id="Cluster-FS-Mount-Clone-meta_attributes" >
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +           <nvpair id="Cluster-FS-Mount-Clone-meta_attributes-interleave" name="interleave" value="true" />
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +           <nvpair id="Cluster-FS-Mount-Clone-meta_attributes-ordered" name="ordered" value="true" />
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +           <nvpair id="Cluster-FS-Mount-Clone-meta_attributes-target-role" name="target-role" value="started" />
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +         </meta_attributes>
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +         <primitive class="ocf" id="Cluster-FS-Mount" provider="heartbeat" type="Filesystem" >
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +           <instance_attributes id="Cluster-FS-Mount-instance_attributes" >
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +             <nvpair id="Cluster-FS-Mount-instance_attributes-device" name="device" value="/dev/drbd0" />
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +             <nvpair id="Cluster-FS-Mount-instance_attributes-directory" name="directory" value="/cluster" />
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +             <nvpair id="Cluster-FS-Mount-instance_attributes-fstype" name="fstype" value="ocfs2" />
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +           </instance_attributes>
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +           <operations >
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +             <op id="Cluster-FS-Mount-monitor-10s" interval="10s" name="monitor" timeout="60s" />
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +             <op id="Cluster-FS-Mount-start-0" interval="0" name="start" timeout="90s" />
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +             <op id="Cluster-FS-Mount-stop-0" interval="0" name="stop" timeout="60s" />
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +           </operations>
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +           <meta_attributes id="Cluster-FS-Mount-meta_attributes" >
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +             <nvpair id="Cluster-FS-Mount-meta_attributes-migration-threshold" name="migration-threshold" value="3" />
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +             <nvpair id="Cluster-FS-Mount-meta_attributes-failure-timeout" name="failure-timeout" value="60s" />
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +           </meta_attributes>
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +         </primitive>
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +       </clone>
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +     </resources>
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +     <constraints >
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +       <rsc_colocation id="colOCFS2-with-DRBDMaster" rsc="Cluster-FS-Mount-Clone" score="INFINITY" with-rsc="Cluster-FS-DRBD-Master" with-rsc-role="Master" __crm_diff_marker__="added:top" />
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +       <rsc_order first="Cluster-FS-DRBD-Master" first-action="promote" id="Cluster-FS-After-DRBD" score="INFINITY" then="Cluster-FS-Mount-Clone" then-action="start" __crm_diff_marker__="added:top" />
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +     </constraints>
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: +   </configuration>
May 16 16:42:49 server2 cib: [2611]: info: cib:diff: + </cib>
May 16 16:42:49 server2 cib: [2611]: info: cib_process_request: Operation complete: op cib_replace for section 'all' (origin=local/cibadmin/2, version=0.30.1): ok (rc=0)
May 16 16:42:49 server2 crmd: [2615]: info: abort_transition_graph: te_update_diff:124 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.30.1) : Non-status change
May 16 16:42:49 server2 crmd: [2615]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
May 16 16:42:49 server2 crmd: [2615]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
May 16 16:42:49 server2 crmd: [2615]: info: do_pe_invoke: Query 92: Requesting the current CIB: S_POLICY_ENGINE
May 16 16:42:49 server2 crmd: [2615]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
May 16 16:42:49 server2 crmd: [2615]: info: update_dc: Unset DC server2
May 16 16:42:49 server2 attrd: [2613]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
May 16 16:42:49 server2 cib: [2611]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/90, version=0.30.2): ok (rc=0)
May 16 16:42:49 server2 crmd: [2615]: info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
May 16 16:42:49 server2 crmd: [2615]: info: do_dc_takeover: Taking over DC status for this partition
May 16 16:42:49 server2 cib: [2611]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/93, version=0.30.5): ok (rc=0)
May 16 16:42:49 server2 cib: [2611]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/94, version=0.30.6): ok (rc=0)
May 16 16:42:49 server2 cib: [2611]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/96, version=0.30.8): ok (rc=0)
May 16 16:42:49 server2 cib: [2611]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/98, version=0.30.9): ok (rc=0)
May 16 16:42:49 server2 crmd: [2615]: info: do_dc_join_offer_all: join-6: Waiting on 2 outstanding join acks
May 16 16:42:49 server2 crmd: [2615]: info: ais_dispatch_message: Membership 156: quorum retained
May 16 16:42:49 server2 crmd: [2615]: info: crmd_ais_dispatch: Setting expected votes to 2
May 16 16:42:49 server2 crmd: [2615]: info: update_dc: Set DC to server2 (3.0.5)
May 16 16:42:49 server2 crmd: [2615]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
May 16 16:42:49 server2 crmd: [2615]: info: config_query_callback: Checking for expired actions every 900000ms
May 16 16:42:49 server2 crmd: [2615]: info: config_query_callback: Sending expected-votes=2 to corosync
May 16 16:42:49 server2 cib: [2611]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/101, version=0.30.10): ok (rc=0)
May 16 16:42:49 server2 crmd: [2615]: info: ais_dispatch_message: Membership 156: quorum retained
May 16 16:42:49 server2 crmd: [2615]: info: crmd_ais_dispatch: Setting expected votes to 2
May 16 16:42:49 server2 cib: [2611]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/104, version=0.30.11): ok (rc=0)
May 16 16:42:49 server2 crmd: [2615]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
May 16 16:42:49 server2 crmd: [2615]: info: do_state_transition: All 2 cluster nodes responded to the join offer.
May 16 16:42:49 server2 crmd: [2615]: info: do_dc_join_finalize: join-6: Syncing the CIB from server2 to the rest of the cluster
May 16 16:42:49 server2 cib: [2611]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/105, version=0.30.11): ok (rc=0)
May 16 16:42:49 server2 cib: [2611]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/106, version=0.30.12): ok (rc=0)
May 16 16:42:49 server2 cib: [2611]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/107, version=0.30.13): ok (rc=0)
May 16 16:42:49 server2 crmd: [2615]: info: do_dc_join_ack: join-6: Updating node state to member for server1
May 16 16:42:49 server2 crmd: [2615]: info: do_dc_join_ack: join-6: Updating node state to member for server2
May 16 16:42:49 server2 cib: [2611]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='server1']/lrm (origin=local/crmd/108, version=0.30.15): ok (rc=0)
May 16 16:42:49 server2 crmd: [2615]: info: erase_xpath_callback: Deletion of "//node_state[@uname='server1']/lrm": ok (rc=0)
May 16 16:42:49 server2 crmd: [2615]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
May 16 16:42:49 server2 crmd: [2615]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
May 16 16:42:49 server2 crmd: [2615]: info: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
May 16 16:42:49 server2 crmd: [2615]: info: crm_update_quorum: Updating quorum status to true (call=114)
May 16 16:42:49 server2 crmd: [2615]: info: abort_transition_graph: do_te_invoke:167 - Triggered transition abort (complete=1) : Peer Cancelled
May 16 16:42:49 server2 crmd: [2615]: info: do_pe_invoke: Query 115: Requesting the current CIB: S_POLICY_ENGINE
May 16 16:42:49 server2 cib: [2611]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='server2']/lrm (origin=local/crmd/110, version=0.30.17): ok (rc=0)
May 16 16:42:49 server2 crmd: [2615]: info: erase_xpath_callback: Deletion of "//node_state[@uname='server2']/lrm": ok (rc=0)
May 16 16:42:49 server2 cib: [2611]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/112, version=0.30.19): ok (rc=0)
May 16 16:42:49 server2 attrd: [2613]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
May 16 16:42:49 server2 attrd: [2613]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
May 16 16:42:49 server2 cib: [2611]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/114, version=0.30.21): ok (rc=0)
May 16 16:42:49 server2 crmd: [2615]: info: do_pe_invoke_callback: Invoking the PE: query=115, ref=pe_calc-dc-1400238769-43, seq=156, quorate=1
May 16 16:42:49 server2 pengine: [2614]: notice: unpack_config: On loss of CCM Quorum: Ignore
May 16 16:42:49 server2 pengine: [2614]: notice: RecurringOp:  Start recurring monitor (31s) for Cluster-FS-DRBD:0 on server1
May 16 16:42:49 server2 pengine: [2614]: notice: RecurringOp:  Start recurring monitor (31s) for Cluster-FS-DRBD:1 on server2
May 16 16:42:49 server2 pengine: [2614]: notice: RecurringOp:  Start recurring monitor (31s) for Cluster-FS-DRBD:0 on server1
May 16 16:42:49 server2 pengine: [2614]: notice: RecurringOp:  Start recurring monitor (31s) for Cluster-FS-DRBD:1 on server2
May 16 16:42:49 server2 pengine: [2614]: notice: LogActions: Start   Cluster-FS-DRBD:0#011(server1)
May 16 16:42:49 server2 pengine: [2614]: notice: LogActions: Start   Cluster-FS-DRBD:1#011(server2)
May 16 16:42:49 server2 pengine: [2614]: notice: LogActions: Leave   Cluster-FS-Mount:0#011(Stopped)
May 16 16:42:49 server2 pengine: [2614]: notice: LogActions: Leave   Cluster-FS-Mount:1#011(Stopped)
May 16 16:42:49 server2 crmd: [2615]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
May 16 16:42:49 server2 crmd: [2615]: info: unpack_graph: Unpacked transition 3: 19 actions in 19 synapses
May 16 16:42:49 server2 crmd: [2615]: info: do_te_invoke: Processing graph 3 (ref=pe_calc-dc-1400238769-43) derived from /var/lib/pengine/pe-input-35.bz2
May 16 16:42:49 server2 crmd: [2615]: info: te_rsc_command: Initiating action 7: monitor Cluster-FS-DRBD:0_monitor_0 on server1
May 16 16:42:49 server2 crmd: [2615]: info: te_rsc_command: Initiating action 4: monitor Cluster-FS-DRBD:1_monitor_0 on server2 (local)
May 16 16:42:49 server2 crmd: [2615]: info: do_lrm_rsc_op: Performing key=4:3:7:8bcf8e15-a13a-4ab3-b651-dd8e89f3d9fc op=Cluster-FS-DRBD:1_monitor_0 )
May 16 16:42:50 server2 lrmd: [2612]: info: rsc:Cluster-FS-DRBD:1 probe[2] (pid 2715)
May 16 16:42:50 server2 crmd: [2615]: info: te_pseudo_action: Pseudo action 15 fired and confirmed
May 16 16:42:50 server2 crmd: [2615]: info: te_rsc_command: Initiating action 8: monitor Cluster-FS-Mount:0_monitor_0 on server1
May 16 16:42:50 server2 crmd: [2615]: info: te_rsc_command: Initiating action 5: monitor Cluster-FS-Mount:0_monitor_0 on server2 (local)
May 16 16:42:50 server2 crmd: [2615]: info: do_lrm_rsc_op: Performing key=5:3:7:8bcf8e15-a13a-4ab3-b651-dd8e89f3d9fc op=Cluster-FS-Mount:0_monitor_0 )
May 16 16:42:50 server2 lrmd: [2612]: info: rsc:Cluster-FS-Mount:0 probe[3] (pid 2716)
May 16 16:42:50 server2 crmd: [2615]: info: te_pseudo_action: Pseudo action 16 fired and confirmed
May 16 16:42:50 server2 crmd: [2615]: info: te_pseudo_action: Pseudo action 13 fired and confirmed
May 16 16:42:50 server2 lrmd: [2612]: info: operation monitor[3] on Cluster-FS-Mount:0 for client 2615: pid 2716 exited with return code 7
May 16 16:42:50 server2 crmd: [2615]: info: match_graph_event: Action Cluster-FS-Mount:0_monitor_0 (8) confirmed on server1 (rc=0)
May 16 16:42:50 server2 crmd: [2615]: info: process_lrm_event: LRM operation Cluster-FS-Mount:0_monitor_0 (call=3, rc=7, cib-update=116, confirmed=true) not running
May 16 16:42:50 server2 crmd: [2615]: info: match_graph_event: Action Cluster-FS-Mount:0_monitor_0 (5) confirmed on server2 (rc=0)
May 16 16:42:50 server2 lrmd: [2612]: info: RA output: (Cluster-FS-DRBD:1:probe:stderr) Could not connect to 'drbd' generic netlink family
May 16 16:42:50 server2 crm_attribute: [2775]: info: Invoked: crm_attribute -N server2 -n master-Cluster-FS-DRBD:1 -l reboot -D 
May 16 16:42:50 server2 lrmd: [2612]: info: operation monitor[2] on Cluster-FS-DRBD:1 for client 2615: pid 2715 exited with return code 7
May 16 16:42:50 server2 crmd: [2615]: info: process_lrm_event: LRM operation Cluster-FS-DRBD:1_monitor_0 (call=2, rc=7, cib-update=117, confirmed=true) not running
May 16 16:42:50 server2 crmd: [2615]: info: match_graph_event: Action Cluster-FS-DRBD:1_monitor_0 (4) confirmed on server2 (rc=0)
May 16 16:42:50 server2 crmd: [2615]: info: te_rsc_command: Initiating action 3: probe_complete probe_complete on server2 (local) - no waiting
May 16 16:42:50 server2 crmd: [2615]: info: match_graph_event: Action Cluster-FS-DRBD:0_monitor_0 (7) confirmed on server1 (rc=0)
May 16 16:42:50 server2 crmd: [2615]: info: te_rsc_command: Initiating action 6: probe_complete probe_complete on server1 - no waiting
May 16 16:42:50 server2 crmd: [2615]: info: te_pseudo_action: Pseudo action 2 fired and confirmed
May 16 16:42:50 server2 crmd: [2615]: info: te_rsc_command: Initiating action 9: start Cluster-FS-DRBD:0_start_0 on server1
May 16 16:42:50 server2 crmd: [2615]: info: te_rsc_command: Initiating action 11: start Cluster-FS-DRBD:1_start_0 on server2 (local)
May 16 16:42:50 server2 crmd: [2615]: info: do_lrm_rsc_op: Performing key=11:3:0:8bcf8e15-a13a-4ab3-b651-dd8e89f3d9fc op=Cluster-FS-DRBD:1_start_0 )
May 16 16:42:50 server2 lrmd: [2612]: info: rsc:Cluster-FS-DRBD:1 start[4] (pid 2776)
May 16 16:42:50 server2 pengine: [2614]: notice: process_pe_message: Transition 3: PEngine Input stored in: /var/lib/pengine/pe-input-35.bz2
May 16 16:42:50 server2 lrmd: [2612]: info: RA output: (Cluster-FS-DRBD:1:start:stderr) Could not connect to 'drbd' generic netlink family
May 16 16:42:50 server2 lrmd: [2612]: info: RA output: (Cluster-FS-DRBD:1:start:stdout) 
May 16 16:42:50 server2 lrmd: [2612]: info: RA output: (Cluster-FS-DRBD:1:start:stderr) drbdadm: Unknown command 'syncer'
May 16 16:42:50 server2 drbd[2776]: ERROR: r0: Called drbdadm -c /etc/drbd.conf syncer r0
May 16 16:42:50 server2 drbd[2776]: ERROR: r0: Exit code 3
May 16 16:42:50 server2 drbd[2776]: ERROR: r0: Command output: 
May 16 16:42:50 server2 lrmd: [2612]: info: RA output: (Cluster-FS-DRBD:1:start:stdout) 
May 16 16:42:50 server2 lrmd: [2612]: info: operation start[4] on Cluster-FS-DRBD:1 for client 2615: pid 2776 exited with return code 1
May 16 16:42:50 server2 crmd: [2615]: info: process_lrm_event: LRM operation Cluster-FS-DRBD:1_start_0 (call=4, rc=1, cib-update=118, confirmed=true) unknown error
May 16 16:42:50 server2 crmd: [2615]: WARN: status_from_rc: Action 9 (Cluster-FS-DRBD:0_start_0) on server1 failed (target: 0 vs. rc: 1): Error
May 16 16:42:50 server2 crmd: [2615]: WARN: update_failcount: Updating failcount for Cluster-FS-DRBD:0 on server1 after failed start: rc=1 (update=INFINITY, time=1400238770)
May 16 16:42:50 server2 crmd: [2615]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=Cluster-FS-DRBD:0_last_failure_0, magic=0:1;9:3:0:8bcf8e15-a13a-4ab3-b651-dd8e89f3d9fc, cib=0.30.29) : Event failed
May 16 16:42:50 server2 crmd: [2615]: info: update_abort_priority: Abort priority upgraded from 0 to 1
May 16 16:42:50 server2 crmd: [2615]: info: update_abort_priority: Abort action done superceeded by restart
May 16 16:42:50 server2 crmd: [2615]: info: match_graph_event: Action Cluster-FS-DRBD:0_start_0 (9) confirmed on server1 (rc=4)
May 16 16:42:50 server2 crmd: [2615]: WARN: status_from_rc: Action 11 (Cluster-FS-DRBD:1_start_0) on server2 failed (target: 0 vs. rc: 1): Error
May 16 16:42:50 server2 crmd: [2615]: WARN: update_failcount: Updating failcount for Cluster-FS-DRBD:1 on server2 after failed start: rc=1 (update=INFINITY, time=1400238770)
May 16 16:42:50 server2 crmd: [2615]: info: abort_transition_graph: match_graph_event:277 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=Cluster-FS-DRBD:1_last_failure_0, magic=0:1;11:3:0:8bcf8e15-a13a-4ab3-b651-dd8e89f3d9fc, cib=0.30.30) : Event failed
May 16 16:42:50 server2 crmd: [2615]: info: match_graph_event: Action Cluster-FS-DRBD:1_start_0 (11) confirmed on server2 (rc=4)
May 16 16:42:50 server2 crmd: [2615]: info: te_pseudo_action: Pseudo action 14 fired and confirmed
May 16 16:42:50 server2 crmd: [2615]: info: te_pseudo_action: Pseudo action 17 fired and confirmed
May 16 16:42:50 server2 crmd: [2615]: info: te_rsc_command: Initiating action 49: notify Cluster-FS-DRBD:0_post_notify_start_0 on server1
May 16 16:42:50 server2 crmd: [2615]: info: te_rsc_command: Initiating action 50: notify Cluster-FS-DRBD:1_post_notify_start_0 on server2 (local)
May 16 16:42:50 server2 attrd: [2613]: notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-Cluster-FS-DRBD:1 (INFINITY)
May 16 16:42:50 server2 crmd: [2615]: info: do_lrm_rsc_op: Performing key=50:3:0:8bcf8e15-a13a-4ab3-b651-dd8e89f3d9fc op=Cluster-FS-DRBD:1_notify_0 )
May 16 16:42:50 server2 lrmd: [2612]: info: rsc:Cluster-FS-DRBD:1 notify[5] (pid 2832)
May 16 16:42:50 server2 attrd: [2613]: notice: attrd_perform_update: Sent update 30: fail-count-Cluster-FS-DRBD:1=INFINITY
May 16 16:42:50 server2 crmd: [2615]: info: abort_transition_graph: te_update_diff:164 - Triggered transition abort (complete=0, tag=nvpair, id=status-server2-fail-count-Cluster-FS-DRBD.1, name=fail-count-Cluster-FS-DRBD:1, value=INFINITY, magic=NA, cib=0.30.31) : Transient attribute: update
May 16 16:42:50 server2 crmd: [2615]: info: update_abort_priority: Abort priority upgraded from 1 to 1000000
May 16 16:42:50 server2 crmd: [2615]: info: update_abort_priority: 'Event failed' abort superceeded
May 16 16:42:50 server2 attrd: [2613]: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-Cluster-FS-DRBD:1 (1400238770)
May 16 16:42:50 server2 attrd: [2613]: notice: attrd_perform_update: Sent update 33: last-failure-Cluster-FS-DRBD:1=1400238770
May 16 16:42:50 server2 crmd: [2615]: info: abort_transition_graph: te_update_diff:164 - Triggered transition abort (complete=0, tag=nvpair, id=status-server2-last-failure-Cluster-FS-DRBD.1, name=last-failure-Cluster-FS-DRBD:1, value=1400238770, magic=NA, cib=0.30.32) : Transient attribute: update
May 16 16:42:50 server2 crmd: [2615]: info: abort_transition_graph: te_update_diff:164 - Triggered transition abort (complete=0, tag=nvpair, id=status-server1-fail-count-Cluster-FS-DRBD.0, name=fail-count-Cluster-FS-DRBD:0, value=INFINITY, magic=NA, cib=0.30.33) : Transient attribute: update
May 16 16:42:50 server2 crmd: [2615]: info: abort_transition_graph: te_update_diff:164 - Triggered transition abort (complete=0, tag=nvpair, id=status-server1-last-failure-Cluster-FS-DRBD.0, name=last-failure-Cluster-FS-DRBD:0, value=1400238770, magic=NA, cib=0.30.34) : Transient attribute: update
May 16 16:42:50 server2 lrmd: [2612]: info: RA output: (Cluster-FS-DRBD:1:notify:stderr) Could not connect to 'drbd' generic netlink family
May 16 16:42:50 server2 kernel: [15438.689757] block drbd0: Starting worker thread (from drbdsetup-83 [2866])
May 16 16:42:50 server2 kernel: [15438.689854] block drbd0: disk( Diskless -> Attaching ) 
May 16 16:42:50 server2 kernel: [15438.690349] block drbd0: ASSERT( from_tnr - cnr + i - from == mx+1 ) in /build/buildd/linux-3.2.0/drivers/block/drbd/drbd_actlog.c:468
May 16 16:42:50 server2 kernel: [15438.691325] block drbd0: Found 4 transactions (142 active extents) in activity log.
May 16 16:42:50 server2 kernel: [15438.691328] block drbd0: Method to ensure write ordering: flush
May 16 16:42:50 server2 kernel: [15438.691333] block drbd0: drbd_bm_resize called with capacity == 78122592
May 16 16:42:50 server2 kernel: [15438.691582] block drbd0: resync bitmap: bits=9765324 words=152584 pages=299
May 16 16:42:50 server2 kernel: [15438.691585] block drbd0: size = 37 GB (39061296 KB)
May 16 16:42:50 server2 kernel: [15438.713985] block drbd0: bitmap READ of 299 pages took 6 jiffies
May 16 16:42:50 server2 kernel: [15438.714171] block drbd0: recounting of set bits took additional 0 jiffies
May 16 16:42:50 server2 kernel: [15438.714174] block drbd0: 0 KB (0 bits) marked out-of-sync by on disk bit-map.
May 16 16:42:50 server2 kernel: [15438.714181] block drbd0: disk( Attaching -> Consistent ) 
May 16 16:42:50 server2 kernel: [15438.714183] block drbd0: attached to UUIDs 23A83AE9AB70041C:0000000000000000:0001000000000000:0001000000000004
May 16 16:42:50 server2 crmd: [2615]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1400238770-10 from server1
May 16 16:42:50 server2 crmd: [2615]: info: match_graph_event: Action Cluster-FS-DRBD:0_notify_0 (49) confirmed on server1 (rc=0)
May 16 16:42:50 server2 kernel: [15438.730683] block drbd0: conn( StandAlone -> Unconnected ) 
May 16 16:42:50 server2 kernel: [15438.731808] block drbd0: Starting receiver thread (from drbd0_worker [2867])
May 16 16:42:50 server2 kernel: [15438.732978] block drbd0: receiver (re)started
May 16 16:42:50 server2 kernel: [15438.732986] block drbd0: conn( Unconnected -> WFConnection ) 
May 16 16:42:50 server2 lrmd: [2612]: info: RA output: (Cluster-FS-DRBD:1:notify:stdout) drbdsetup-83 0 disk /dev/sda3 /dev/sda3 internal --set-defaults --create-device --fencing=resource-only 
May 16 16:42:50 server2 lrmd: [2612]: info: RA output: (Cluster-FS-DRBD:1:notify:stdout) drbdsetup-83 0 syncer --set-defaults --create-device --rate=15M 
May 16 16:42:50 server2 lrmd: [2612]: info: RA output: (Cluster-FS-DRBD:1:notify:stdout) drbdsetup-83 0 net ipv4:192.168.0.93:7788 ipv4:192.168.0.92:7788 C --set-defaults --create-device --allow-two-primaries --shared-secret=kalki 
May 16 16:42:50 server2 lrmd: [2612]: info: operation notify[5] on Cluster-FS-DRBD:1 for client 2615: pid 2832 exited with return code 0
May 16 16:42:50 server2 crmd: [2615]: info: send_direct_ack: ACK'ing resource op Cluster-FS-DRBD:1_notify_0 from 50:3:0:8bcf8e15-a13a-4ab3-b651-dd8e89f3d9fc: lrm_invoke-lrmd-1400238770-54
May 16 16:42:50 server2 crmd: [2615]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1400238770-54 from server2
May 16 16:42:50 server2 crmd: [2615]: info: match_graph_event: Action Cluster-FS-DRBD:1_notify_0 (50) confirmed on server2 (rc=0)
May 16 16:42:50 server2 crmd: [2615]: info: process_lrm_event: LRM operation Cluster-FS-DRBD:1_notify_0 (call=5, rc=0, cib-update=0, confirmed=true) ok
May 16 16:42:50 server2 crmd: [2615]: info: te_pseudo_action: Pseudo action 18 fired and confirmed
May 16 16:42:50 server2 crmd: [2615]: info: run_graph: ====================================================
May 16 16:42:50 server2 crmd: [2615]: notice: run_graph: Transition 3 (Complete=17, Pending=0, Fired=0, Skipped=2, Incomplete=0, Source=/var/lib/pengine/pe-input-35.bz2): Stopped
May 16 16:42:50 server2 crmd: [2615]: info: te_graph_trigger: Transition 3 is now complete
May 16 16:42:50 server2 crmd: [2615]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
May 16 16:42:50 server2 crmd: [2615]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
May 16 16:42:50 server2 crmd: [2615]: info: do_pe_invoke: Query 119: Requesting the current CIB: S_POLICY_ENGINE
May 16 16:42:50 server2 crmd: [2615]: info: do_pe_invoke_callback: Invoking the PE: query=119, ref=pe_calc-dc-1400238770-55, seq=156, quorate=1
May 16 16:42:50 server2 pengine: [2614]: notice: unpack_config: On loss of CCM Quorum: Ignore
May 16 16:42:50 server2 pengine: [2614]: WARN: unpack_rsc_op: Processing failed op Cluster-FS-DRBD:1_last_failure_0 on server2: unknown error (1)
May 16 16:42:50 server2 pengine: [2614]: WARN: unpack_rsc_op: Processing failed op Cluster-FS-DRBD:0_last_failure_0 on server1: unknown error (1)
May 16 16:42:50 server2 pengine: [2614]: WARN: common_apply_stickiness: Forcing Cluster-FS-DRBD-Master away from server2 after 1000000 failures (max=3)
May 16 16:42:50 server2 pengine: [2614]: WARN: common_apply_stickiness: Forcing Cluster-FS-DRBD-Master away from server2 after 1000000 failures (max=3)
May 16 16:42:50 server2 pengine: [2614]: WARN: common_apply_stickiness: Forcing Cluster-FS-DRBD-Master away from server1 after 1000000 failures (max=3)
May 16 16:42:50 server2 pengine: [2614]: WARN: common_apply_stickiness: Forcing Cluster-FS-DRBD-Master away from server1 after 1000000 failures (max=3)
May 16 16:42:50 server2 pengine: [2614]: notice: LogActions: Stop    Cluster-FS-DRBD:0#011(server1)
May 16 16:42:50 server2 pengine: [2614]: notice: LogActions: Stop    Cluster-FS-DRBD:1#011(server2)
May 16 16:42:50 server2 pengine: [2614]: notice: LogActions: Leave   Cluster-FS-Mount:0#011(Stopped)
May 16 16:42:50 server2 pengine: [2614]: notice: LogActions: Leave   Cluster-FS-Mount:1#011(Stopped)
May 16 16:42:50 server2 crmd: [2615]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
May 16 16:42:50 server2 crmd: [2615]: info: unpack_graph: Unpacked transition 4: 11 actions in 11 synapses
May 16 16:42:50 server2 crmd: [2615]: info: do_te_invoke: Processing graph 4 (ref=pe_calc-dc-1400238770-55) derived from /var/lib/pengine/pe-input-36.bz2
May 16 16:42:50 server2 crmd: [2615]: info: te_pseudo_action: Pseudo action 15 fired and confirmed
May 16 16:42:50 server2 crmd: [2615]: info: te_rsc_command: Initiating action 45: notify Cluster-FS-DRBD:0_pre_notify_stop_0 on server1
May 16 16:42:50 server2 crmd: [2615]: info: te_rsc_command: Initiating action 46: notify Cluster-FS-DRBD:1_pre_notify_stop_0 on server2 (local)
May 16 16:42:50 server2 crmd: [2615]: info: do_lrm_rsc_op: Performing key=46:4:0:8bcf8e15-a13a-4ab3-b651-dd8e89f3d9fc op=Cluster-FS-DRBD:1_notify_0 )
May 16 16:42:50 server2 lrmd: [2612]: info: rsc:Cluster-FS-DRBD:1 notify[6] (pid 2884)
May 16 16:42:50 server2 crmd: [2615]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1400238770-11 from server1
May 16 16:42:50 server2 crmd: [2615]: info: match_graph_event: Action Cluster-FS-DRBD:0_notify_0 (45) confirmed on server1 (rc=0)
May 16 16:42:50 server2 pengine: [2614]: notice: process_pe_message: Transition 4: PEngine Input stored in: /var/lib/pengine/pe-input-36.bz2
May 16 16:42:50 server2 lrmd: [2612]: info: operation notify[6] on Cluster-FS-DRBD:1 for client 2615: pid 2884 exited with return code 0
May 16 16:42:50 server2 crmd: [2615]: info: send_direct_ack: ACK'ing resource op Cluster-FS-DRBD:1_notify_0 from 46:4:0:8bcf8e15-a13a-4ab3-b651-dd8e89f3d9fc: lrm_invoke-lrmd-1400238770-58
May 16 16:42:50 server2 crmd: [2615]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1400238770-58 from server2
May 16 16:42:50 server2 crmd: [2615]: info: match_graph_event: Action Cluster-FS-DRBD:1_notify_0 (46) confirmed on server2 (rc=0)
May 16 16:42:50 server2 crmd: [2615]: info: process_lrm_event: LRM operation Cluster-FS-DRBD:1_notify_0 (call=6, rc=0, cib-update=0, confirmed=true) ok
May 16 16:42:50 server2 crmd: [2615]: info: te_pseudo_action: Pseudo action 16 fired and confirmed
May 16 16:42:50 server2 crmd: [2615]: info: te_pseudo_action: Pseudo action 13 fired and confirmed
May 16 16:42:50 server2 crmd: [2615]: info: te_rsc_command: Initiating action 2: stop Cluster-FS-DRBD:0_stop_0 on server1
May 16 16:42:50 server2 crmd: [2615]: info: te_rsc_command: Initiating action 1: stop Cluster-FS-DRBD:1_stop_0 on server2 (local)
May 16 16:42:50 server2 crmd: [2615]: info: do_lrm_rsc_op: Performing key=1:4:0:8bcf8e15-a13a-4ab3-b651-dd8e89f3d9fc op=Cluster-FS-DRBD:1_stop_0 )
May 16 16:42:50 server2 lrmd: [2612]: info: rsc:Cluster-FS-DRBD:1 stop[7] (pid 2910)
May 16 16:42:50 server2 lrmd: [2612]: info: RA output: (Cluster-FS-DRBD:1:stop:stderr) Could not connect to 'drbd' generic netlink family
May 16 16:42:50 server2 lrmd: [2612]: info: RA output: (Cluster-FS-DRBD:1:stop:stdout) 
May 16 16:42:50 server2 kernel: [15438.859437] block drbd0: conn( WFConnection -> Disconnecting ) 
May 16 16:42:50 server2 kernel: [15438.859454] block drbd0: Discarding network configuration.
May 16 16:42:50 server2 kernel: [15438.859472] block drbd0: Connection closed
May 16 16:42:50 server2 kernel: [15438.859476] block drbd0: conn( Disconnecting -> StandAlone ) 
May 16 16:42:50 server2 kernel: [15438.859497] block drbd0: receiver terminated
May 16 16:42:50 server2 kernel: [15438.859499] block drbd0: Terminating drbd0_receiver
May 16 16:42:50 server2 kernel: [15438.859513] block drbd0: disk( Consistent -> Failed ) 
May 16 16:42:50 server2 kernel: [15438.859527] block drbd0: Sending state for detaching disk failed
May 16 16:42:50 server2 kernel: [15438.859535] block drbd0: disk( Failed -> Diskless ) 
May 16 16:42:50 server2 kernel: [15438.859586] block drbd0: drbd_bm_resize called with capacity == 0
May 16 16:42:50 server2 kernel: [15438.859673] block drbd0: worker terminated
May 16 16:42:50 server2 kernel: [15438.859677] block drbd0: Terminating drbd0_worker
May 16 16:42:50 server2 crm_attribute: [2940]: info: Invoked: crm_attribute -N server2 -n master-Cluster-FS-DRBD:1 -l reboot -D 
May 16 16:42:50 server2 crmd: [2615]: info: match_graph_event: Action Cluster-FS-DRBD:0_stop_0 (2) confirmed on server1 (rc=0)
May 16 16:42:50 server2 lrmd: [2612]: info: operation stop[7] on Cluster-FS-DRBD:1 for client 2615: pid 2910 exited with return code 0
May 16 16:42:50 server2 lrmd: [2612]: info: RA output: (Cluster-FS-DRBD:1:stop:stdout) 
May 16 16:42:50 server2 crmd: [2615]: info: process_lrm_event: LRM operation Cluster-FS-DRBD:1_stop_0 (call=7, rc=0, cib-update=120, confirmed=true) ok
May 16 16:42:50 server2 crmd: [2615]: info: match_graph_event: Action Cluster-FS-DRBD:1_stop_0 (1) confirmed on server2 (rc=0)
May 16 16:42:50 server2 crmd: [2615]: info: te_pseudo_action: Pseudo action 14 fired and confirmed
May 16 16:42:50 server2 crmd: [2615]: info: te_pseudo_action: Pseudo action 17 fired and confirmed
May 16 16:42:50 server2 crmd: [2615]: info: te_pseudo_action: Pseudo action 18 fired and confirmed
May 16 16:42:50 server2 crmd: [2615]: info: te_pseudo_action: Pseudo action 3 fired and confirmed
May 16 16:42:50 server2 crmd: [2615]: info: run_graph: ====================================================
May 16 16:42:50 server2 crmd: [2615]: notice: run_graph: Transition 4 (Complete=11, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-36.bz2): Complete
May 16 16:42:50 server2 crmd: [2615]: info: te_graph_trigger: Transition 4 is now complete
May 16 16:42:50 server2 crmd: [2615]: info: notify_crmd: Transition 4 status: done - <null>
May 16 16:42:50 server2 crmd: [2615]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
May 16 16:42:50 server2 crmd: [2615]: info: do_state_transition: Starting PEngine Recheck Timer
May 16 16:52:10 server2 cib: [2611]: info: cib_stats: Processed 60 operations (833.00us average, 0% utilization) in the last 10min

