Hello Andrew,

When your VirtualDomain resource fails, use crm_mon -of to see which resource operation reported the problem.
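
For example, something along these lines (the pe-input file name below is only a placeholder, and the directory can vary by distribution, so adjust both to what you find on your nodes):

==========================================================
# Show cluster status including the operation history (-o) and
# per-resource fail counts (-f); a failed operation is listed
# with its resource name and return code.
crm_mon -of

# Once you know roughly when the restart happened, you can also
# replay the policy-engine input recorded for that transition to
# see why the cluster scheduled it (file number is a placeholder):
crm_simulate -x /var/lib/pengine/pe-input-100.bz2 -S
==========================================================
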
2012/6/19 Andrew Martin <amartin@xes-inc.com>:

Hi Emmanuel,

Thanks for the idea. I looked through the rest of the log, and these "return code 8" errors on the ocf:linbit:drbd resources are occurring at other intervals (e.g. today) when the VirtualDomain resource is unaffected. This seems to indicate that these soft errors do not trigger a restart of the VirtualDomain resource. Is there anything else in the log that could indicate what caused this, or is there somewhere else I can look?
Thanks,

Andrew

----------------------------------------------------------------------
From: "emmanuel segura" <emi2fast@gmail.com>
To: "The Pacemaker cluster resource manager" <pacemaker@oss.clusterlabs.org>
Sent: Tuesday, June 19, 2012 9:57:19 AM
Subject: Re: [Pacemaker] Why Did Pacemaker Restart this VirtualDomain Resource?

I didn't see any error in your config; the only thing I noticed is this:
==========================================================
Jun 14 15:35:27 vmhost1 lrmd: [3853]: info: rsc:p_drbd_vmstore:0 monitor[55] (pid 12323)
Jun 14 15:35:27 vmhost1 lrmd: [3853]: info: rsc:p_drbd_mount2:0 monitor[53] (pid 12324)
Jun 14 15:35:27 vmhost1 lrmd: [3853]: info: operation monitor[55] on p_drbd_vmstore:0 for client 3856: pid 12323 exited with return code 8
Jun 14 15:35:27 vmhost1 lrmd: [3853]: info: operation monitor[53] on p_drbd_mount2:0 for client 3856: pid 12324 exited with return code 8
Jun 14 15:35:31 vmhost1 lrmd: [3853]: info: rsc:p_drbd_mount1:0 monitor[54] (pid 12396)
==========================================================
It could be a DRBD problem, but to tell you the truth, I'm not sure. The OCF return codes are documented here:

http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/s-ocf-return-codes.html
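
If I read that table correctly, return code 8 is OCF_RUNNING_MASTER, the expected monitor result for a DRBD instance that is currently Primary, so those lines by themselves are probably harmless. One way to cross-check the DRBD side, assuming standard drbd-utils and the resource names from your config:

==========================================================
# Ask DRBD which role each resource holds; the active node
# should report Primary/Secondary for each one.
drbdadm role vmstore
drbdadm role mount1
drbdadm role mount2

# Or, on DRBD 8.x, inspect connection and disk state for all
# resources at once:
cat /proc/drbd
==========================================================
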
2012/6/19 Andrew Martin <amartin@xes-inc.com>:
> Hello,
>
> I have a 3 node Pacemaker+Heartbeat cluster (two real nodes and one
> "standby" quorum node) with Ubuntu 10.04 LTS on the nodes and using the
> Pacemaker+Heartbeat packages from the Ubuntu HA Team PPA
> (https://launchpad.net/~ubuntu-ha-maintainers/+archive/ppa).
> I have configured 3 DRBD resources, a filesystem mount, and a KVM-based
> virtual machine (using the VirtualDomain resource). I have constraints in
> place so that the DRBD devices must become primary and the filesystem must
> be mounted before the VM can start:
>
> node $id="1ab0690c-5aa0-4d9c-ae4e-b662e0ca54e5" vmhost1
> node $id="219e9bf6-ea99-41f4-895f-4c2c5c78484a" quorumnode \
>         attributes standby="on"
> node $id="645e09b4-aee5-4cec-a241-8bd4e03a78c3" vmhost2
> primitive p_drbd_mount2 ocf:linbit:drbd \
>         params drbd_resource="mount2" \
>         op start interval="0" timeout="240" \
>         op stop interval="0" timeout="100" \
>         op monitor interval="10" role="Master" timeout="30" \
>         op monitor interval="20" role="Slave" timeout="30"
> primitive p_drbd_mount1 ocf:linbit:drbd \
>         params drbd_resource="mount1" \
>         op start interval="0" timeout="240" \
>         op stop interval="0" timeout="100" \
>         op monitor interval="10" role="Master" timeout="30" \
>         op monitor interval="20" role="Slave" timeout="30"
> primitive p_drbd_vmstore ocf:linbit:drbd \
>         params drbd_resource="vmstore" \
>         op start interval="0" timeout="240" \
>         op stop interval="0" timeout="100" \
>         op monitor interval="10" role="Master" timeout="30" \
>         op monitor interval="20" role="Slave" timeout="30"
> primitive p_fs_vmstore ocf:heartbeat:Filesystem \
>         params device="/dev/drbd0" directory="/mnt/storage/vmstore" fstype="ext4" \
>         op start interval="0" timeout="60" \
>         op stop interval="0" timeout="60" \
>         op monitor interval="20" timeout="40"
> primitive p_ping ocf:pacemaker:ping \
>         params name="p_ping" host_list="192.168.1.25 192.168.1.26" multiplier="1000" \
>         op start interval="0" timeout="60" \
>         op monitor interval="20s" timeout="60"
> primitive p_sysadmin_notify ocf:heartbeat:MailTo \
>         params email="alert@example.com" \
>         params subject="Pacemaker Change" \
>         op start interval="0" timeout="10" \
>         op stop interval="0" timeout="10" \
>         op monitor interval="10" timeout="10"
> primitive p_vm_myvm ocf:heartbeat:VirtualDomain \
>         params config="/mnt/storage/vmstore/config/myvm.xml" \
>         meta allow-migrate="false" target-role="Started" is-managed="true" \
>         op start interval="0" timeout="180" \
>         op stop interval="0" timeout="180" \
>         op monitor interval="10" timeout="30"
> primitive stonithquorumnode stonith:external/webpowerswitch \
>         params wps_ipaddr="192.168.3.100" wps_port="x" wps_username="xxx" wps_password="xxx" hostname_to_stonith="quorumnode"
> primitive stonithvmhost1 stonith:external/webpowerswitch \
>         params wps_ipaddr="192.168.3.100" wps_port="x" wps_username="xxx" wps_password="xxx" hostname_to_stonith="vmhost1"
> primitive stonithvmhost2 stonith:external/webpowerswitch \
>         params wps_ipaddr="192.168.3.100" wps_port="x" wps_username="xxx" wps_password="xxx" hostname_to_stonith="vmhost2"
> group g_vm p_fs_vmstore p_vm_myvm
> ms ms_drbd_mount2 p_drbd_mount2 \
>         meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
> ms ms_drbd_mount1 p_drbd_mount1 \
>         meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
> ms ms_drbd_vmstore p_drbd_vmstore \
>         meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
> clone cl_ping p_ping \
>         meta interleave="true"
> clone cl_sysadmin_notify p_sysadmin_notify
> location loc_run_on_most_connected g_vm \
>         rule $id="loc_run_on_most_connected-rule" p_ping: defined p_ping
> location loc_st_nodescan stonithquorumnode -inf: vmhost1
> location loc_st_vmhost1 stonithvmhost1 -inf: vmhost1
> location loc_st_vmhost2 stonithvmhost2 -inf: vmhost2
> colocation c_drbd_libvirt_vm inf: g_vm ms_drbd_vmstore:Master ms_drbd_tools:Master ms_drbd_crm:Master
> order o_drbd-fs-vm inf: ms_drbd_vmstore:promote ms_drbd_tools:promote ms_drbd_crm:promote g_vm:start
> property $id="cib-bootstrap-options" \
>         dc-version="1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c" \
>         cluster-infrastructure="Heartbeat" \
>         stonith-enabled="true" \
>         no-quorum-policy="freeze" \
>         last-lrm-refresh="1337746179"
>
> This has been working well; however, last week Pacemaker all of a sudden
> stopped the p_vm_myvm resource and then started it up again. I have
> attached the relevant section of /var/log/daemon.log - I am unable to
> determine what caused Pacemaker to restart this resource. Based on the log,
> could you tell me what event triggered this?
>
> Thanks,
>
> Andrew
_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
-- 
This is my life, and I live it as long as God wills.