Thanks Andrew :-)

2012/6/21 Andrew Beekhof <andrew@beekhof.net>
On Thu, Jun 21, 2012 at 12:11 AM, emmanuel segura <emi2fast@gmail.com> wrote:
> I don't know, but the failure is in the operation lx0_monitor_0, so I'm
> asking someone with more experience than me: does Pacemaker run a monitor
> operation before a start?

Always.
We never start a resource unless we know for sure it's not already
running somewhere.
That's what we use non-recurring monitor operations for.

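For context: such a probe is an ordinary monitor call with interval=0, and
an agent signals a cleanly stopped resource with exit code 7
(OCF_NOT_RUNNING). A minimal sketch of that contract — a hypothetical
skeleton, not the actual VirtualDomain agent ("my-hypothetical-daemon" is
made up):

    #!/bin/sh
    # Hypothetical OCF-style agent; illustrates probe semantics only.
    case "$1" in
    monitor)
        # On a probe, Pacemaker only asks: is the resource active here?
        if pgrep -f my-hypothetical-daemon >/dev/null 2>&1; then
            exit 0    # OCF_SUCCESS: found running on this node
        else
            exit 7    # OCF_NOT_RUNNING: cleanly stopped -- not an error
        fi
        ;;
    esac

So rc=7 from lx0_monitor_0 in the logs below is the expected "not running
here" answer to the probe, not a failure.
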
>
> Maybe when you restarted the resource something went wrong, the resource
> failed, and after that it was blocked:
>
> ================
> on-fail="block"
> ================
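>
> If that's what happened, it should show up in the failcount; one way to
> check it and to clear it (standard pacemaker/crmsh CLI, as a sketch):
>
>     crm_mon -1 -f             # one-shot status including failcounts
>     crm resource cleanup lx0  # reset failcount and lx0's operation history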
>
> 2012/6/20 Kadlecsik József <kadlecsik.jozsef@wigner.mta.hu>
>>
>> On Wed, 20 Jun 2012, emmanuel segura wrote:
>>
>> > Why do you say there is no error, given these messages?
>> > =========================================================
>> > Jun 20 11:57:25 atlas4 lrmd: [17568]: info: operation monitor[35] on lx0
>> > for client 17571: pid 30179 exited with return code 7
>> > Jun 20 11:57:25 atlas4 crmd: [17571]: debug: create_operation_update:
>> > do_update_resource: Updating resouce lx0 after complete monitor op
>> > (interval=0)
>> > Jun 20 11:57:25 atlas4 crmd: [17571]: info: process_lrm_event: LRM
>> > operation lx0_monitor_0 (call=35, rc=7, cib-update=61, confirmed=true)
>> > not running
>>
>> I interpreted those lines as checking that the resource hasn't been
>> started yet (confirmed=true). And indeed, it's not running, so the return
>> code is OCF_NOT_RUNNING.
>>
>> There's no log message about an attempt to start the resource.
>>
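>> (One way to double-check that, as a sketch -- assuming the cluster logs
>> land in /var/log/syslog on these Debian nodes:
>>
>>     grep 'lx0_start_0' /var/log/syslog   # no matches = no start op issued
>> )
>>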
>> Best regards,
>> Jozsef
>>
>> > 2012/6/20 Kadlecsik József <kadlecsik.jozsef@wigner.mta.hu>
>> >       Hello,
>> >
>> >       Somehow, after a "crm resource restart" which did *not* start the
>> >       resource but only stopped it, a VirtualDomain resource cannot be
>> >       started anymore. The most baffling part is that I do not see any
>> >       error message. The resource in question, named 'lx0', can be
>> >       started directly via virsh/libvirt, and libvirtd is running on
>> >       all cluster nodes.
>> >
>> >       We run corosync 1.4.2-1~bpo60+1, pacemaker 1.1.6-2~bpo60+1
>> >       (debian).
>> >
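>> >       (The manual check that works looks like this -- a sketch, assuming
>> >       the libvirt domain is also named "lx0":
>> >
>> >           virsh -c qemu:///system domstate lx0   # reports "shut off"
>> >           virsh -c qemu:///system start lx0      # boots the guest fine
>> >       )
>> >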
>> >       # crm status
>> >       ============
>> >       Last updated: Wed Jun 20 15:14:44 2012
>> >       Last change: Wed Jun 20 14:07:40 2012 via cibadmin on atlas0
>> >       Stack: openais
>> >       Current DC: atlas0 - partition with quorum
>> >       Version: 1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c
>> >       7 Nodes configured, 7 expected votes
>> >       18 Resources configured.
>> >       ============
>> >
>> >       Online: [ atlas0 atlas1 atlas2 atlas3 atlas4 atlas5 atlas6 ]
>> >
>> >        kerberos        (ocf::heartbeat:VirtualDomain): Started atlas0
>> >        stonith-atlas3  (stonith:ipmilan):              Started atlas4
>> >        stonith-atlas1  (stonith:ipmilan):              Started atlas4
>> >        stonith-atlas2  (stonith:ipmilan):              Started atlas4
>> >        stonith-atlas0  (stonith:ipmilan):              Started atlas4
>> >        stonith-atlas4  (stonith:ipmilan):              Started atlas3
>> >        mailman         (ocf::heartbeat:VirtualDomain): Started atlas6
>> >        indico          (ocf::heartbeat:VirtualDomain): Started atlas0
>> >        papi            (ocf::heartbeat:VirtualDomain): Started atlas1
>> >        wwwd            (ocf::heartbeat:VirtualDomain): Started atlas2
>> >        webauth         (ocf::heartbeat:VirtualDomain): Started atlas3
>> >        caladan         (ocf::heartbeat:VirtualDomain): Started atlas4
>> >        radius          (ocf::heartbeat:VirtualDomain): Started atlas5
>> >        mail0           (ocf::heartbeat:VirtualDomain): Started atlas6
>> >        stonith-atlas5  (stonith:apcmastersnmp):        Started atlas4
>> >        stonith-atlas6  (stonith:apcmastersnmp):        Started atlas4
>> >        w0              (ocf::heartbeat:VirtualDomain): Started atlas2
>> >
>> >       # crm resource show
>> >        kerberos        (ocf::heartbeat:VirtualDomain) Started
>> >        stonith-atlas3  (stonith:ipmilan) Started
>> >        stonith-atlas1  (stonith:ipmilan) Started
>> >        stonith-atlas2  (stonith:ipmilan) Started
>> >        stonith-atlas0  (stonith:ipmilan) Started
>> >        stonith-atlas4  (stonith:ipmilan) Started
>> >        mailman         (ocf::heartbeat:VirtualDomain) Started
>> >        indico          (ocf::heartbeat:VirtualDomain) Started
>> >        papi            (ocf::heartbeat:VirtualDomain) Started
>> >        wwwd            (ocf::heartbeat:VirtualDomain) Started
>> >        webauth         (ocf::heartbeat:VirtualDomain) Started
>> >        caladan         (ocf::heartbeat:VirtualDomain) Started
>> >        radius          (ocf::heartbeat:VirtualDomain) Started
>> >        mail0           (ocf::heartbeat:VirtualDomain) Started
>> >        stonith-atlas5  (stonith:apcmastersnmp) Started
>> >        stonith-atlas6  (stonith:apcmastersnmp) Started
>> >        w0              (ocf::heartbeat:VirtualDomain) Started
>> >        lx0             (ocf::heartbeat:VirtualDomain) Stopped
>> >
>> >       # crm configure show
>> >       node atlas0 \
>> >              attributes standby="false" \
>> >              utilization memory="24576"
>> >       node atlas1 \
>> >              attributes standby="false" \
>> >              utilization memory="24576"
>> >       node atlas2 \
>> >              attributes standby="false" \
>> >              utilization memory="24576"
>> >       node atlas3 \
>> >              attributes standby="false" \
>> >              utilization memory="24576"
>> >       node atlas4 \
>> >              attributes standby="false" \
>> >              utilization memory="24576"
>> >       node atlas5 \
>> >              attributes standby="off" \
>> >              utilization memory="20480"
>> >       node atlas6 \
>> >              attributes standby="off" \
>> >              utilization memory="20480"
>> >       primitive caladan ocf:heartbeat:VirtualDomain \
>> >              params config="/etc/libvirt/crm/caladan.xml" hypervisor="qemu:///system" \
>> >              meta allow-migrate="true" target-role="Started" is-managed="true" \
>> >              op start interval="0" timeout="120s" \
>> >              op stop interval="0" timeout="120s" \
>> >              op monitor interval="10s" timeout="40s" depth="0" \
>> >              op migrate_to interval="0" timeout="240s" on-fail="block" \
>> >              op migrate_from interval="0" timeout="240s" on-fail="block" \
>> >              utilization memory="4608"
>> >       primitive indico ocf:heartbeat:VirtualDomain \
>> >              params config="/etc/libvirt/crm/indico.xml" hypervisor="qemu:///system" \
>> >              meta allow-migrate="true" target-role="Started" is-managed="true" \
>> >              op start interval="0" timeout="120s" \
>> >              op stop interval="0" timeout="120s" \
>> >              op monitor interval="10s" timeout="40s" depth="0" \
>> >              op migrate_to interval="0" timeout="240s" on-fail="block" \
>> >              op migrate_from interval="0" timeout="240s" on-fail="block" \
>> >              utilization memory="5120"
>> >       primitive kerberos ocf:heartbeat:VirtualDomain \
>> >              params config="/etc/libvirt/qemu/kerberos.xml" hypervisor="qemu:///system" \
>> >              meta allow-migrate="true" target-role="Started" is-managed="true" \
>> >              op start interval="0" timeout="120s" \
>> >              op stop interval="0" timeout="120s" \
>> >              op monitor interval="10s" timeout="40s" depth="0" \
>> >              op migrate_to interval="0" timeout="240s" on-fail="block" \
>> >              op migrate_from interval="0" timeout="240s" on-fail="block" \
>> >              utilization memory="4608"
>> >       primitive lx0 ocf:heartbeat:VirtualDomain \
>> >              params config="/etc/libvirt/crm/lx0.xml" hypervisor="qemu:///system" \
>> >              meta allow-migrate="true" target-role="Started" is-managed="true" \
>> >              op start interval="0" timeout="120s" \
>> >              op stop interval="0" timeout="120s" \
>> >              op monitor interval="10s" timeout="40s" depth="0" \
>> >              op migrate_to interval="0" timeout="240s" on-fail="block" \
>> >              op migrate_from interval="0" timeout="240s" on-fail="block" \
>> >              utilization memory="4608"
>> >       primitive mail0 ocf:heartbeat:VirtualDomain \
>> >              params config="/etc/libvirt/crm/mail0.xml" hypervisor="qemu:///system" \
>> >              meta allow-migrate="true" target-role="Started" is-managed="true" \
>> >              op start interval="0" timeout="120s" \
>> >              op stop interval="0" timeout="120s" \
>> >              op monitor interval="10s" timeout="40s" depth="0" \
>> >              op migrate_to interval="0" timeout="240s" on-fail="block" \
>> >              op migrate_from interval="0" timeout="240s" on-fail="block" \
>> >              utilization memory="4608"
>> >       primitive mailman ocf:heartbeat:VirtualDomain \
>> >              params config="/etc/libvirt/crm/mailman.xml" hypervisor="qemu:///system" \
>> >              meta allow-migrate="true" target-role="Started" is-managed="true" \
>> >              op start interval="0" timeout="120s" \
>> >              op stop interval="0" timeout="120s" \
>> >              op monitor interval="10s" timeout="40s" depth="0" \
>> >              op migrate_to interval="0" timeout="240s" on-fail="block" \
>> >              op migrate_from interval="0" timeout="240s" on-fail="block" \
>> >              utilization memory="5120"
>> >       primitive papi ocf:heartbeat:VirtualDomain \
>> >              params config="/etc/libvirt/crm/papi.xml" hypervisor="qemu:///system" \
>> >              meta allow-migrate="true" target-role="Started" is-managed="true" \
>> >              op start interval="0" timeout="120s" \
>> >              op stop interval="0" timeout="120s" \
>> >              op monitor interval="10s" timeout="40s" depth="0" \
>> >              op migrate_to interval="0" timeout="240s" on-fail="block" \
>> >              op migrate_from interval="0" timeout="240s" on-fail="block" \
>> >              utilization memory="6144"
>> >       primitive radius ocf:heartbeat:VirtualDomain \
>> >              params config="/etc/libvirt/crm/radius.xml" hypervisor="qemu:///system" \
>> >              meta allow-migrate="true" target-role="Started" is-managed="true" \
>> >              op start interval="0" timeout="120s" \
>> >              op stop interval="0" timeout="120s" \
>> >              op monitor interval="10s" timeout="40s" depth="0" \
>> >              op migrate_to interval="0" timeout="240s" on-fail="block" \
>> >              op migrate_from interval="0" timeout="240s" on-fail="block" \
>> >              utilization memory="4608"
>> >       primitive stonith-atlas0 stonith:ipmilan \
>> >              params hostname="atlas0" ipaddr="192.168.40.20" port="623" auth="md5" priv="admin" login="root" password="XXXXX" \
>> >              op start interval="0" timeout="120s" \
>> >              meta target-role="Started"
>> >       primitive stonith-atlas1 stonith:ipmilan \
>> >              params hostname="atlas1" ipaddr="192.168.40.21" port="623" auth="md5" priv="admin" login="root" password="XXXX" \
>> >              op start interval="0" timeout="120s" \
>> >              meta target-role="Started"
>> >       primitive stonith-atlas2 stonith:ipmilan \
>> >              params hostname="atlas2" ipaddr="192.168.40.22" port="623" auth="md5" priv="admin" login="root" password="XXXX" \
>> >              op start interval="0" timeout="120s" \
>> >              meta target-role="Started"
>> >       primitive stonith-atlas3 stonith:ipmilan \
>> >              params hostname="atlas3" ipaddr="192.168.40.23" port="623" auth="md5" priv="admin" login="root" password="XXXX" \
>> >              op start interval="0" timeout="120s" \
>> >              meta target-role="Started"
>> >       primitive stonith-atlas4 stonith:ipmilan \
>> >              params hostname="atlas4" ipaddr="192.168.40.24" port="623" auth="md5" priv="admin" login="root" password="XXXX" \
>> >              op start interval="0" timeout="120s" \
>> >              meta target-role="Started"
>> >       primitive stonith-atlas5 stonith:apcmastersnmp \
>> >              params ipaddr="192.168.40.252" port="161" community="XXXX" pcmk_host_list="atlas5" pcmk_host_check="static-list"
>> >       primitive stonith-atlas6 stonith:apcmastersnmp \
>> >              params ipaddr="192.168.40.252" port="161" community="XXXX" pcmk_host_list="atlas6" pcmk_host_check="static-list"
>> >       primitive w0 ocf:heartbeat:VirtualDomain \
>> >              params config="/etc/libvirt/crm/w0.xml" hypervisor="qemu:///system" \
>> >              meta allow-migrate="true" target-role="Started" \
>> >              op start interval="0" timeout="120s" \
>> >              op stop interval="0" timeout="120s" \
>> >              op monitor interval="10s" timeout="40s" depth="0" \
>> >              op migrate_to interval="0" timeout="240s" on-fail="block" \
>> >              op migrate_from interval="0" timeout="240s" on-fail="block" \
>> >              utilization memory="4608"
>> >       primitive webauth ocf:heartbeat:VirtualDomain \
>> >              params config="/etc/libvirt/crm/webauth.xml" hypervisor="qemu:///system" \
>> >              meta allow-migrate="true" target-role="Started" is-managed="true" \
>> >              op start interval="0" timeout="120s" \
>> >              op stop interval="0" timeout="120s" \
>> >              op monitor interval="10s" timeout="40s" depth="0" \
>> >              op migrate_to interval="0" timeout="240s" on-fail="block" \
>> >              op migrate_from interval="0" timeout="240s" on-fail="block" \
>> >              utilization memory="4608"
>> >       primitive wwwd ocf:heartbeat:VirtualDomain \
>> >              params config="/etc/libvirt/crm/wwwd.xml" hypervisor="qemu:///system" \
>> >              meta allow-migrate="true" target-role="Started" is-managed="true" \
>> >              op start interval="0" timeout="120s" \
>> >              op stop interval="0" timeout="120s" \
>> >              op monitor interval="10s" timeout="40s" depth="0" \
>> >              op migrate_to interval="0" timeout="240s" on-fail="block" \
>> >              op migrate_from interval="0" timeout="240s" on-fail="block" \
>> >              utilization memory="5120"
>> >       location location-stonith-atlas0 stonith-atlas0 -inf: atlas0
>> >       location location-stonith-atlas1 stonith-atlas1 -inf: atlas1
>> >       location location-stonith-atlas2 stonith-atlas2 -inf: atlas2
>> >       location location-stonith-atlas3 stonith-atlas3 -inf: atlas3
>> >       location location-stonith-atlas4 stonith-atlas4 -inf: atlas4
>> >       location location-stonith-atlas5 stonith-atlas5 -inf: atlas5
>> >       location location-stonith-atlas6 stonith-atlas6 -inf: atlas6
>> >       property $id="cib-bootstrap-options" \
>> >              dc-version="1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c" \
>> >              cluster-infrastructure="openais" \
>> >              expected-quorum-votes="7" \
>> >              stonith-enabled="true" \
>> >              no-quorum-policy="stop" \
>> >              last-lrm-refresh="1340193431" \
>> >              symmetric-cluster="true" \
>> >              maintenance-mode="false" \
>> >              stop-all-resources="false" \
>> >              is-managed-default="true" \
>> >              placement-strategy="balanced"
>> >
>> >       # crm_verify -L -VV
>> >       [...]
>> >       crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Leave   w0              (Started atlas2)
>> >       crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Leave   stonith-atlas6  (Started atlas4)
>> >       crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Leave   stonith-atlas5  (Started atlas4)
>> >       crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Leave   stonith-atlas4  (Started atlas3)
>> >       crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Leave   stonith-atlas3  (Started atlas4)
>> >       crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Leave   stonith-atlas2  (Started atlas4)
>> >       crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Leave   stonith-atlas1  (Started atlas4)
>> >       crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Leave   stonith-atlas0  (Started atlas4)
>> >       crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Start   lx0             (atlas4)
>> >
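>> >       (To see why the pending start never runs, one could replay the
>> >       live CIB through the policy engine and inspect the allocation
>> >       scores -- a sketch, assuming the CLI tools shipped with this
>> >       pacemaker release:
>> >
>> >           crm_simulate -L -s   # -L: live CIB, -s: show allocation scores
>> >       )
>> >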
>> >       I have tried deleting the resource and adding it again; that did
>> >       not help. The corresponding log entries:
>> >
>> >       Jun 20 11:57:25 atlas4 crmd: [17571]: info: delete_resource: Removing resource lx0 for 28654_crm_resource (internal) on atlas0
>> >       Jun 20 11:57:25 atlas4 lrmd: [17568]: debug: lrmd_rsc_destroy: removing resource lx0
>> >       Jun 20 11:57:25 atlas4 crmd: [17571]: debug: delete_rsc_entry: sync: Sending delete op for lx0
>> >       Jun 20 11:57:25 atlas4 crmd: [17571]: info: notify_deleted: Notifying 28654_crm_resource on atlas0 that lx0 was deleted
>> >       Jun 20 11:57:25 atlas4 crmd: [17571]: WARN: decode_transition_key: Bad UUID (crm-resource-28654) in sscanf result (3) for 0:0:crm-resource-28654
>> >       Jun 20 11:57:25 atlas4 crmd: [17571]: debug: create_operation_update: send_direct_ack: Updating resouce lx0 after complete delete op (interval=60000)
>> >       Jun 20 11:57:25 atlas4 crmd: [17571]: info: send_direct_ack: ACK'ing resource op lx0_delete_60000 from 0:0:crm-resource-28654: lrm_invoke-lrmd-1340186245-16
>> >       Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] mcasted message added to pending queue
>> >       Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] mcasted message added to pending queue
>> >       Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] Delivering 10d5 to 10d7
>> >       Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] Delivering MCAST message with seq 10d6 to pending delivery queue
>> >       Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] Delivering MCAST message with seq 10d7 to pending delivery queue
>> >       Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] Received ringid(192.168.40.60:22264) seq 10d6
>> >       Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] Received ringid(192.168.40.60:22264) seq 10d7
>> >       Jun 20 11:57:25 atlas4 crmd: [17571]: debug: notify_deleted: Triggering a refresh after 28654_crm_resource deleted lx0 from the LRM
>> >       Jun 20 11:57:25 atlas4 cib: [17567]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='last-lrm-refresh'] (/cib/configuration/crm_config/cluster_property_set/nvpair[6])
>> >
>> >       Jun 20 11:57:25 atlas4 lrmd: [17568]: debug: on_msg_add_rsc:client [17571] adds resource lx0
>> >       Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] Delivering 149e to 149f
>> >       Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] Delivering MCAST message with seq 149f to pending delivery queue
>> >       Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] Received ringid(192.168.40.60:22264) seq 14a0
>> >       Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] Delivering 149f to 14a0
>> >       Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] Delivering MCAST message with seq 14a0 to pending delivery queue
>> >       Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] releasing messages up to and including 149e
>> >       Jun 20 11:57:25 atlas4 crmd: [17571]: info: do_lrm_rsc_op: Performing key=26:10266:7:e7426ec7-3bae-4a4b-a4ae-c3f80f17e058 op=lx0_monitor_0 )
>> >       Jun 20 11:57:25 atlas4 lrmd: [17568]: debug: on_msg_perform_op:2396: copying parameters for rsc lx0
>> >       Jun 20 11:57:25 atlas4 lrmd: [17568]: debug: on_msg_perform_op: add an operation operation monitor[35] on lx0 for client 17571, its parameters: crm_feature_set=[3.0.5] config=[/etc/libvirt/crm/lx0.xml] CRM_meta_timeout=[20000] hypervisor=[qemu:///system]  to the operation list.
>> >       Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] releasing messages up to and including 149f
>> >       Jun 20 11:57:25 atlas4 lrmd: [17568]: info: rsc:lx0 probe[35] (pid 30179)
>> >       Jun 20 11:57:25 atlas4 VirtualDomain[30179]: INFO: Domain name "lx0" saved to /var/run/resource-agents/VirtualDomain-lx0.state.
>> >       Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] releasing messages up to and including 14bc
>> >       Jun 20 11:57:25 atlas4 VirtualDomain[30179]: DEBUG: Virtual domain lx0 is currently shut off.
>> >       Jun 20 11:57:25 atlas4 lrmd: [17568]: WARN: Managed lx0:monitor process 30179 exited with return code 7.
>> >       Jun 20 11:57:25 atlas4 lrmd: [17568]: info: operation monitor[35] on lx0 for client 17571: pid 30179 exited with return code 7
>> >       Jun 20 11:57:25 atlas4 crmd: [17571]: debug: create_operation_update: do_update_resource: Updating resouce lx0 after complete monitor op (interval=0)
>> >       Jun 20 11:57:25 atlas4 crmd: [17571]: info: process_lrm_event: LRM operation lx0_monitor_0 (call=35, rc=7, cib-update=61, confirmed=true) not running
>> >       Jun 20 11:57:25 atlas4 crmd: [17571]: debug: update_history_cache: Appending monitor op to history for 'lx0'
>> >       Jun 20 11:57:25 atlas4 crmd: [17571]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
>> >
>> >       What can be wrong in the setup/configuration? And what on earth
>> >       happened?
>> >
>> >       Best regards,
>> >       Jozsef
>> >       --
>> >       E-mail : kadlecsik.jozsef@wigner.mta.hu
>> >       PGP key: http://www.kfki.hu/~kadlec/pgp_public_key.txt
>> >       Address: Wigner Research Centre for Physics, Hungarian Academy of Sciences
>> >               H-1525 Budapest 114, POB. 49, Hungary
>> >
>> >
>> > --
>> > this is my life and I live it for as long as God wills
>> >
>>
>> --
>> E-mail : kadlecsik.jozsef@wigner.mta.hu
>> PGP key: http://www.kfki.hu/~kadlec/pgp_public_key.txt
>> Address: Wigner Research Centre for Physics, Hungarian Academy of Sciences
>>         H-1525 Budapest 114, POB. 49, Hungary
>>
>
>
> --
> this is my life and I live it for as long as God wills
>

_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


--
this is my life and I live it for as long as God wills