<div dir="ltr">Thank you Vladislav.<div><br></div><div>I have configured resource level fencing on drbd and removed wfc-timeout and defr-wfc-timeout (is this required?). My drbd configuration is now:</div><div><br></div><div><div>resource pg {</div><div>  device /dev/drbd0;</div><div>  disk /dev/vdb;</div><div>  meta-disk internal;</div><div>  disk {</div><div>    fencing resource-only;</div><div>    on-io-error detach;</div><div>    resync-rate 40M;</div><div>  }</div><div>  handlers {</div><div>    fence-peer &quot;/usr/lib/drbd/crm-fence-peer.sh&quot;;</div><div>    after-resync-target &quot;/usr/lib/drbd/crm-unfence-peer.sh&quot;;</div><div>    split-brain &quot;/usr/lib/drbd/notify-split-brain.sh nkbm&quot;;</div><div>  }</div><div>  on node01 {</div><div>    address <a href="http://10.2.136.52:7789">10.2.136.52:7789</a>;</div><div>  }</div><div>  on node02 {</div><div>    address <a href="http://10.2.136.55:7789">10.2.136.55:7789</a>;</div><div>  }</div><div>  net {</div><div>    verify-alg md5;</div><div>    after-sb-0pri discard-zero-changes;</div><div>    after-sb-1pri discard-secondary;</div><div>    after-sb-2pri disconnect;</div><div>  }</div><div>}</div></div><div><br></div><div>Failover works on my initial test (restarting both nodes alternately - this always works). Will wait for a couple of hours after doing a failover test again (Which always fail on my previous setup).</div><div><br></div><div>Thank you!</div><div>Kiam</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Sep 11, 2014 at 2:14 PM, Vladislav Bogdanov <span dir="ltr">&lt;<a href="mailto:bubble@hoster-ok.com" target="_blank">bubble@hoster-ok.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">11.09.2014 05:57, Norbert Kiam Maclang wrote:<br>
> > Is this something to do with quorum? But I already set
>
> You'd need to configure fencing at the DRBD resource level:
>
> http://www.drbd.org/users-guide-emb/s-pacemaker-fencing.html#s-pacemaker-fencing-cib
>
> >
> > property no-quorum-policy="ignore" \
> > expected-quorum-votes="1"
> >
> > Thanks in advance,
> > Kiam
> >
> > On Thu, Sep 11, 2014 at 10:09 AM, Norbert Kiam Maclang
> > <norbert.kiam.maclang@gmail.com> wrote:
> >
> >     Hi,
> >
> >     Please help me understand what is causing this problem. I have a
> >     2-node cluster of KVM VMs; each VM (Ubuntu 14.04) runs on a
> >     separate hypervisor on a separate physical machine. Everything
> >     worked well during testing (I restarted the VMs alternately), but
> >     after a day, when I kill the other node, corosync and pacemaker
> >     always hang on the surviving node. Date and time on the VMs are
> >     in sync, I use unicast, tcpdump shows the two nodes exchanging
> >     traffic, DRBD is healthy, and crm_mon shows a good status before
> >     I kill the other node. Below are my configurations and the
> >     versions I use:
> >
> >     corosync             2.3.3-1ubuntu1
> >     crmsh                1.2.5+hg1034-1ubuntu3
> >     drbd8-utils          2:8.4.4-1ubuntu1
> >     libcorosync-common4  2.3.3-1ubuntu1
> >     libcrmcluster4       1.1.10+git20130802-1ubuntu2
> >     libcrmcommon3        1.1.10+git20130802-1ubuntu2
> >     libcrmservice1       1.1.10+git20130802-1ubuntu2
> >     pacemaker            1.1.10+git20130802-1ubuntu2
> >     pacemaker-cli-utils  1.1.10+git20130802-1ubuntu2
> >     postgresql-9.3       9.3.5-0ubuntu0.14.04.1
> >
> >     # /etc/corosync/corosync.conf:
> >     totem {
> >             version: 2
> >             token: 3000
> >             token_retransmits_before_loss_const: 10
> >             join: 60
> >             consensus: 3600
> >             vsftype: none
> >             max_messages: 20
> >             clear_node_high_bit: yes
> >             secauth: off
> >             threads: 0
> >             rrp_mode: none
> >             interface {
> >                     member {
> >                             memberaddr: 10.2.136.56
> >                     }
> >                     member {
> >                             memberaddr: 10.2.136.57
> >                     }
> >                     ringnumber: 0
> >                     bindnetaddr: 10.2.136.0
> >                     mcastport: 5405
> >             }
> >             transport: udpu
> >     }
> >
> >     amf {
> >             mode: disabled
> >     }
> >
> >     quorum {
> >             provider: corosync_votequorum
> >             expected_votes: 1
> >     }
> >
> >     aisexec {
> >             user:  root
> >             group: root
> >     }
> >
> >     logging {
> >             fileline: off
> >             to_stderr: yes
> >             to_logfile: no
> >             to_syslog: yes
> >             syslog_facility: daemon
> >             debug: off
> >             timestamp: on
> >             logger_subsys {
> >                     subsys: AMF
> >                     debug: off
> >                     tags: enter|leave|trace1|trace2|trace3|trace4|trace6
> >             }
> >     }
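
(On corosync 2.x, a two-node cluster is normally declared with votequorum's two_node flag rather than by forcing expected_votes down to 1. A minimal sketch of that quorum stanza:

    quorum {
            provider: corosync_votequorum
            two_node: 1
            # two_node implies wait_for_all: both nodes must be seen
            # together once at startup before the cluster is quorate
    }

With two_node set, the surviving node retains quorum when its peer dies, without any need for expected_votes: 1.)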
> >
> >     # /etc/corosync/service.d/pcmk:
> >     service {
> >             name: pacemaker
> >             ver: 1
> >     }
> >
> >     # /etc/drbd.d/global_common.conf:
> >     global {
> >             usage-count no;
> >     }
> >
> >     common {
> >             net {
> >                     protocol C;
> >             }
> >     }
> >
> >     # /etc/drbd.d/pg.res:
> >     resource pg {
> >       device /dev/drbd0;
> >       disk /dev/vdb;
> >       meta-disk internal;
> >       startup {
> >         wfc-timeout 15;
> >         degr-wfc-timeout 60;
> >       }
> >       disk {
> >         on-io-error detach;
> >         resync-rate 40M;
> >       }
> >       on node01 {
> >         address 10.2.136.56:7789;
> >       }
> >       on node02 {
> >         address 10.2.136.57:7789;
> >       }
> >       net {
> >         verify-alg md5;
> >         after-sb-0pri discard-zero-changes;
> >         after-sb-1pri discard-secondary;
> >         after-sb-2pri disconnect;
> >       }
> >     }
> >
> >     # Pacemaker configuration:
> >     node $id="167938104" node01
> >     node $id="167938105" node02
> >     primitive drbd_pg ocf:linbit:drbd \
> >       params drbd_resource="pg" \
> >       op monitor interval="29s" role="Master" \
> >       op monitor interval="31s" role="Slave"
> >     primitive fs_pg ocf:heartbeat:Filesystem \
> >       params device="/dev/drbd0" directory="/var/lib/postgresql/9.3/main" fstype="ext4"
> >     primitive ip_pg ocf:heartbeat:IPaddr2 \
> >       params ip="10.2.136.59" cidr_netmask="24" nic="eth0"
> >     primitive lsb_pg lsb:postgresql
> >     group PGServer fs_pg lsb_pg ip_pg
> >     ms ms_drbd_pg drbd_pg \
> >       meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
> >     colocation pg_on_drbd inf: PGServer ms_drbd_pg:Master
> >     order pg_after_drbd inf: ms_drbd_pg:promote PGServer:start
> >     property $id="cib-bootstrap-options" \
> >       dc-version="1.1.10-42f2063" \
> >       cluster-infrastructure="corosync" \
> >       stonith-enabled="false" \
> >       no-quorum-policy="ignore"
> >     rsc_defaults $id="rsc-options" \
> >       resource-stickiness="100"
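
(Since both cluster nodes are KVM guests, stonith-enabled="false" could eventually be replaced by real fencing against the hypervisors, on top of the DRBD-level fencing discussed in this thread. A rough sketch using the external/libvirt plugin shipped with cluster-glue; the hypervisor URIs are placeholders for the real hosts, and the -inf rules keep each node from running its own fencing device:

    primitive st_node01 stonith:external/libvirt \
      params hostlist="node01" hypervisor_uri="qemu+ssh://hv01.example.com/system" \
      op monitor interval="60s"
    primitive st_node02 stonith:external/libvirt \
      params hostlist="node02" hypervisor_uri="qemu+ssh://hv02.example.com/system" \
      op monitor interval="60s"
    location st_node01_placement st_node01 -inf: node01
    location st_node02_placement st_node02 -inf: node02
    property stonith-enabled="true"
)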
> >
> >     # Logs on node01:
> >     Sep 10 10:25:33 node01 crmd[1019]:   notice: peer_update_callback: Our peer on the DC is dead
> >     Sep 10 10:25:33 node01 crmd[1019]:   notice: do_state_transition: State transition S_NOT_DC -> S_ELECTION [ input=I_ELECTION cause=C_CRMD_STATUS_CALLBACK origin=peer_update_callback ]
> >     Sep 10 10:25:33 node01 crmd[1019]:   notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
> >     Sep 10 10:25:33 node01 corosync[940]:   [TOTEM ] A new membership (10.2.136.56:52) was formed. Members left: 167938105
> >     Sep 10 10:25:45 node01 kernel: [74452.740024] d-con pg: PingAck did not arrive in time.
> >     Sep 10 10:25:45 node01 kernel: [74452.740169] d-con pg: peer( Primary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown )
> >     Sep 10 10:25:45 node01 kernel: [74452.740987] d-con pg: asender terminated
> >     Sep 10 10:25:45 node01 kernel: [74452.740999] d-con pg: Terminating drbd_a_pg
> >     Sep 10 10:25:45 node01 kernel: [74452.741235] d-con pg: Connection closed
> >     Sep 10 10:25:45 node01 kernel: [74452.741259] d-con pg: conn( NetworkFailure -> Unconnected )
> >     Sep 10 10:25:45 node01 kernel: [74452.741260] d-con pg: receiver terminated
> >     Sep 10 10:25:45 node01 kernel: [74452.741261] d-con pg: Restarting receiver thread
> >     Sep 10 10:25:45 node01 kernel: [74452.741262] d-con pg: receiver (re)started
> >     Sep 10 10:25:45 node01 kernel: [74452.741269] d-con pg: conn( Unconnected -> WFConnection )
> >     Sep 10 10:26:12 node01 lrmd[1016]:  warning: child_timeout_callback: drbd_pg_monitor_31000 process (PID 8445) timed out
> >     Sep 10 10:26:12 node01 lrmd[1016]:  warning: operation_finished: drbd_pg_monitor_31000:8445 - timed out after 20000ms
> >     Sep 10 10:26:12 node01 crmd[1019]:    error: process_lrm_event: LRM operation drbd_pg_monitor_31000 (30) Timed Out (timeout=20000ms)
> >     Sep 10 10:26:32 node01 crmd[1019]:  warning: cib_rsc_callback: Resource update 23 failed: (rc=-62) Timer expired
> >     Sep 10 10:27:03 node01 lrmd[1016]:  warning: child_timeout_callback: drbd_pg_monitor_31000 process (PID 8693) timed out
> >     Sep 10 10:27:03 node01 lrmd[1016]:  warning: operation_finished: drbd_pg_monitor_31000:8693 - timed out after 20000ms
> >     Sep 10 10:27:54 node01 lrmd[1016]:  warning: child_timeout_callback: drbd_pg_monitor_31000 process (PID 8938) timed out
> >     Sep 10 10:27:54 node01 lrmd[1016]:  warning: operation_finished: drbd_pg_monitor_31000:8938 - timed out after 20000ms
> >     Sep 10 10:28:33 node01 crmd[1019]:    error: crm_timer_popped: Integration Timer (I_INTEGRATED) just popped in state S_INTEGRATION! (180000ms)
> >     Sep 10 10:28:33 node01 crmd[1019]:  warning: do_state_transition: Progressed to state S_FINALIZE_JOIN after C_TIMER_POPPED
> >     Sep 10 10:28:33 node01 crmd[1019]:  warning: do_state_transition: 1 cluster nodes failed to respond to the join offer.
> >     Sep 10 10:28:33 node01 crmd[1019]:   notice: crmd_join_phase_log: join-1: node02=none
> >     Sep 10 10:28:33 node01 crmd[1019]:   notice: crmd_join_phase_log: join-1: node01=welcomed
> >     Sep 10 10:28:45 node01 lrmd[1016]:  warning: child_timeout_callback: drbd_pg_monitor_31000 process (PID 9185) timed out
> >     Sep 10 10:28:45 node01 lrmd[1016]:  warning: operation_finished: drbd_pg_monitor_31000:9185 - timed out after 20000ms
> >     Sep 10 10:29:36 node01 lrmd[1016]:  warning: child_timeout_callback: drbd_pg_monitor_31000 process (PID 9432) timed out
> >     Sep 10 10:29:36 node01 lrmd[1016]:  warning: operation_finished: drbd_pg_monitor_31000:9432 - timed out after 20000ms
> >     Sep 10 10:30:27 node01 lrmd[1016]:  warning: child_timeout_callback: drbd_pg_monitor_31000 process (PID 9680) timed out
> >     Sep 10 10:30:27 node01 lrmd[1016]:  warning: operation_finished: drbd_pg_monitor_31000:9680 - timed out after 20000ms
> >     Sep 10 10:31:18 node01 lrmd[1016]:  warning: child_timeout_callback: drbd_pg_monitor_31000 process (PID 9927) timed out
> >     Sep 10 10:31:18 node01 lrmd[1016]:  warning: operation_finished: drbd_pg_monitor_31000:9927 - timed out after 20000ms
> >     Sep 10 10:32:09 node01 lrmd[1016]:  warning: child_timeout_callback: drbd_pg_monitor_31000 process (PID 10174) timed out
> >     Sep 10 10:32:09 node01 lrmd[1016]:  warning: operation_finished: drbd_pg_monitor_31000:10174 - timed out after 20000ms
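
(The repeating lrmd timeouts above are hitting Pacemaker's 20-second default operation timeout; the monitor ops in the configuration set only interval and role. While the underlying hang is investigated, an explicit, larger timeout can at least be declared on the drbd monitors -- a sketch, with an arbitrary 60s value:

    primitive drbd_pg ocf:linbit:drbd \
      params drbd_resource="pg" \
      op monitor interval="29s" role="Master" timeout="60s" \
      op monitor interval="31s" role="Slave" timeout="60s"
)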
> >
> >     # crm_mon on node01 before I kill the other VM:
> >     Stack: corosync
> >     Current DC: node02 (167938104) - partition with quorum
> >     Version: 1.1.10-42f2063
> >     2 Nodes configured
> >     5 Resources configured
> >
> >     Online: [ node01 node02 ]
> >
> >      Resource Group: PGServer
> >          fs_pg      (ocf::heartbeat:Filesystem):    Started node02
> >          lsb_pg     (lsb:postgresql):       Started node02
> >          ip_pg      (ocf::heartbeat:IPaddr2):       Started node02
> >      Master/Slave Set: ms_drbd_pg [drbd_pg]
> >          Masters: [ node02 ]
> >          Slaves: [ node01 ]
> >
> >     Thank you,
> >     Kiam
_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org