Hello,<br><br>Why do you say there is no error in the message?<br>=========================================================<br>Jun 20 11:57:25 atlas4 lrmd: [17568]: info: operation monitor[35] on lx0<br>
for client 17571: pid 30179 exited with return code 7<br>
Jun 20 11:57:25 atlas4 crmd: [17571]: debug: create_operation_update:<br>
do_update_resource: Updating resouce lx0 after complete monitor op<br>
(interval=0)<br>
Jun 20 11:57:25 atlas4 crmd: [17571]: info: process_lrm_event: LRM<br>
operation lx0_monitor_0 (call=35, rc=7, cib-update=61, confirmed=true) not<br>
running<br>=========================================================<br><br><div class="gmail_quote">2012/6/20 Kadlecsik József <span dir="ltr">&lt;<a href="mailto:kadlecsik.jozsef@wigner.mta.hu" target="_blank">kadlecsik.jozsef@wigner.mta.hu</a>&gt;</span><br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hello,<br>
<br>
Somehow, after a &quot;crm resource restart&quot; which did *not* restart the<br>resource but only stopped it, a VirtualDomain resource cannot be started<br>anymore. The most baffling part is that I do not see any error message. The<br>resource in question, named &#39;lx0&#39;, can be started directly via<br>virsh/libvirt, and libvirtd is running on all cluster nodes.<br>
<br>
We run corosync 1.4.2-1~bpo60+1 and pacemaker 1.1.6-2~bpo60+1 (Debian).<br>
<br>
# crm status<br>
============<br>
Last updated: Wed Jun 20 15:14:44 2012<br>
Last change: Wed Jun 20 14:07:40 2012 via cibadmin on atlas0<br>
Stack: openais<br>
Current DC: atlas0 - partition with quorum<br>
Version: 1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c<br>
7 Nodes configured, 7 expected votes<br>
18 Resources configured.<br>
============<br>
<br>
Online: [ atlas0 atlas1 atlas2 atlas3 atlas4 atlas5 atlas6 ]<br>
<br>
 kerberos       (ocf::heartbeat:VirtualDomain): Started atlas0<br>
 stonith-atlas3 (stonith:ipmilan):      Started atlas4<br>
 stonith-atlas1 (stonith:ipmilan):      Started atlas4<br>
 stonith-atlas2 (stonith:ipmilan):      Started atlas4<br>
 stonith-atlas0 (stonith:ipmilan):      Started atlas4<br>
 stonith-atlas4 (stonith:ipmilan):      Started atlas3<br>
 mailman        (ocf::heartbeat:VirtualDomain): Started atlas6<br>
 indico (ocf::heartbeat:VirtualDomain): Started atlas0<br>
 papi   (ocf::heartbeat:VirtualDomain): Started atlas1<br>
 wwwd   (ocf::heartbeat:VirtualDomain): Started atlas2<br>
 webauth        (ocf::heartbeat:VirtualDomain): Started atlas3<br>
 caladan        (ocf::heartbeat:VirtualDomain): Started atlas4<br>
 radius (ocf::heartbeat:VirtualDomain): Started atlas5<br>
 mail0  (ocf::heartbeat:VirtualDomain): Started atlas6<br>
 stonith-atlas5 (stonith:apcmastersnmp):        Started atlas4<br>
 stonith-atlas6 (stonith:apcmastersnmp):        Started atlas4<br>
 w0     (ocf::heartbeat:VirtualDomain): Started atlas2<br>
<br>
# crm resource show<br>
 kerberos       (ocf::heartbeat:VirtualDomain) Started<br>
 stonith-atlas3 (stonith:ipmilan) Started<br>
 stonith-atlas1 (stonith:ipmilan) Started<br>
 stonith-atlas2 (stonith:ipmilan) Started<br>
 stonith-atlas0 (stonith:ipmilan) Started<br>
 stonith-atlas4 (stonith:ipmilan) Started<br>
 mailman        (ocf::heartbeat:VirtualDomain) Started<br>
 indico (ocf::heartbeat:VirtualDomain) Started<br>
 papi   (ocf::heartbeat:VirtualDomain) Started<br>
 wwwd   (ocf::heartbeat:VirtualDomain) Started<br>
 webauth        (ocf::heartbeat:VirtualDomain) Started<br>
 caladan        (ocf::heartbeat:VirtualDomain) Started<br>
 radius (ocf::heartbeat:VirtualDomain) Started<br>
 mail0  (ocf::heartbeat:VirtualDomain) Started<br>
 stonith-atlas5 (stonith:apcmastersnmp) Started<br>
 stonith-atlas6 (stonith:apcmastersnmp) Started<br>
 w0     (ocf::heartbeat:VirtualDomain) Started<br>
 lx0    (ocf::heartbeat:VirtualDomain) Stopped<br>
<br>
# crm configure show<br>
node atlas0 \<br>
        attributes standby=&quot;false&quot; \<br>
        utilization memory=&quot;24576&quot;<br>
node atlas1 \<br>
        attributes standby=&quot;false&quot; \<br>
        utilization memory=&quot;24576&quot;<br>
node atlas2 \<br>
        attributes standby=&quot;false&quot; \<br>
        utilization memory=&quot;24576&quot;<br>
node atlas3 \<br>
        attributes standby=&quot;false&quot; \<br>
        utilization memory=&quot;24576&quot;<br>
node atlas4 \<br>
        attributes standby=&quot;false&quot; \<br>
        utilization memory=&quot;24576&quot;<br>
node atlas5 \<br>
        attributes standby=&quot;off&quot; \<br>
        utilization memory=&quot;20480&quot;<br>
node atlas6 \<br>
        attributes standby=&quot;off&quot; \<br>
        utilization memory=&quot;20480&quot;<br>
primitive caladan ocf:heartbeat:VirtualDomain \<br>
        params config=&quot;/etc/libvirt/crm/caladan.xml&quot; hypervisor=&quot;qemu:///system&quot; \<br>
        meta allow-migrate=&quot;true&quot; target-role=&quot;Started&quot; is-managed=&quot;true&quot; \<br>
        op start interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        op stop interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        op monitor interval=&quot;10s&quot; timeout=&quot;40s&quot; depth=&quot;0&quot; \<br>
        op migrate_to interval=&quot;0&quot; timeout=&quot;240s&quot; on-fail=&quot;block&quot; \<br>
        op migrate_from interval=&quot;0&quot; timeout=&quot;240s&quot; on-fail=&quot;block&quot; \<br>
        utilization memory=&quot;4608&quot;<br>
primitive indico ocf:heartbeat:VirtualDomain \<br>
        params config=&quot;/etc/libvirt/crm/indico.xml&quot; hypervisor=&quot;qemu:///system&quot; \<br>
        meta allow-migrate=&quot;true&quot; target-role=&quot;Started&quot; is-managed=&quot;true&quot; \<br>
        op start interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        op stop interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        op monitor interval=&quot;10s&quot; timeout=&quot;40s&quot; depth=&quot;0&quot; \<br>
        op migrate_to interval=&quot;0&quot; timeout=&quot;240s&quot; on-fail=&quot;block&quot; \<br>
        op migrate_from interval=&quot;0&quot; timeout=&quot;240s&quot; on-fail=&quot;block&quot; \<br>
        utilization memory=&quot;5120&quot;<br>
primitive kerberos ocf:heartbeat:VirtualDomain \<br>
        params config=&quot;/etc/libvirt/qemu/kerberos.xml&quot; hypervisor=&quot;qemu:///system&quot; \<br>
        meta allow-migrate=&quot;true&quot; target-role=&quot;Started&quot; is-managed=&quot;true&quot; \<br>
        op start interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        op stop interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        op monitor interval=&quot;10s&quot; timeout=&quot;40s&quot; depth=&quot;0&quot; \<br>
        op migrate_to interval=&quot;0&quot; timeout=&quot;240s&quot; on-fail=&quot;block&quot; \<br>
        op migrate_from interval=&quot;0&quot; timeout=&quot;240s&quot; on-fail=&quot;block&quot; \<br>
        utilization memory=&quot;4608&quot;<br>
primitive lx0 ocf:heartbeat:VirtualDomain \<br>
        params config=&quot;/etc/libvirt/crm/lx0.xml&quot; hypervisor=&quot;qemu:///system&quot; \<br>
        meta allow-migrate=&quot;true&quot; target-role=&quot;Started&quot; is-managed=&quot;true&quot; \<br>
        op start interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        op stop interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        op monitor interval=&quot;10s&quot; timeout=&quot;40s&quot; depth=&quot;0&quot; \<br>
        op migrate_to interval=&quot;0&quot; timeout=&quot;240s&quot; on-fail=&quot;block&quot; \<br>
        op migrate_from interval=&quot;0&quot; timeout=&quot;240s&quot; on-fail=&quot;block&quot; \<br>
        utilization memory=&quot;4608&quot;<br>
primitive mail0 ocf:heartbeat:VirtualDomain \<br>
        params config=&quot;/etc/libvirt/crm/mail0.xml&quot; hypervisor=&quot;qemu:///system&quot; \<br>
        meta allow-migrate=&quot;true&quot; target-role=&quot;Started&quot; is-managed=&quot;true&quot; \<br>
        op start interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        op stop interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        op monitor interval=&quot;10s&quot; timeout=&quot;40s&quot; depth=&quot;0&quot; \<br>
        op migrate_to interval=&quot;0&quot; timeout=&quot;240s&quot; on-fail=&quot;block&quot; \<br>
        op migrate_from interval=&quot;0&quot; timeout=&quot;240s&quot; on-fail=&quot;block&quot; \<br>
        utilization memory=&quot;4608&quot;<br>
primitive mailman ocf:heartbeat:VirtualDomain \<br>
        params config=&quot;/etc/libvirt/crm/mailman.xml&quot; hypervisor=&quot;qemu:///system&quot; \<br>
        meta allow-migrate=&quot;true&quot; target-role=&quot;Started&quot; is-managed=&quot;true&quot; \<br>
        op start interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        op stop interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        op monitor interval=&quot;10s&quot; timeout=&quot;40s&quot; depth=&quot;0&quot; \<br>
        op migrate_to interval=&quot;0&quot; timeout=&quot;240s&quot; on-fail=&quot;block&quot; \<br>
        op migrate_from interval=&quot;0&quot; timeout=&quot;240s&quot; on-fail=&quot;block&quot; \<br>
        utilization memory=&quot;5120&quot;<br>
primitive papi ocf:heartbeat:VirtualDomain \<br>
        params config=&quot;/etc/libvirt/crm/papi.xml&quot; hypervisor=&quot;qemu:///system&quot; \<br>
        meta allow-migrate=&quot;true&quot; target-role=&quot;Started&quot; is-managed=&quot;true&quot; \<br>
        op start interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        op stop interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        op monitor interval=&quot;10s&quot; timeout=&quot;40s&quot; depth=&quot;0&quot; \<br>
        op migrate_to interval=&quot;0&quot; timeout=&quot;240s&quot; on-fail=&quot;block&quot; \<br>
        op migrate_from interval=&quot;0&quot; timeout=&quot;240s&quot; on-fail=&quot;block&quot; \<br>
        utilization memory=&quot;6144&quot;<br>
primitive radius ocf:heartbeat:VirtualDomain \<br>
        params config=&quot;/etc/libvirt/crm/radius.xml&quot; hypervisor=&quot;qemu:///system&quot; \<br>
        meta allow-migrate=&quot;true&quot; target-role=&quot;Started&quot; is-managed=&quot;true&quot; \<br>
        op start interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        op stop interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        op monitor interval=&quot;10s&quot; timeout=&quot;40s&quot; depth=&quot;0&quot; \<br>
        op migrate_to interval=&quot;0&quot; timeout=&quot;240s&quot; on-fail=&quot;block&quot; \<br>
        op migrate_from interval=&quot;0&quot; timeout=&quot;240s&quot; on-fail=&quot;block&quot; \<br>
        utilization memory=&quot;4608&quot;<br>
primitive stonith-atlas0 stonith:ipmilan \<br>
        params hostname=&quot;atlas0&quot; ipaddr=&quot;192.168.40.20&quot; port=&quot;623&quot;<br>
auth=&quot;md5&quot; priv=&quot;admin&quot; login=&quot;root&quot; password=&quot;XXXXX&quot; \<br>
        op start interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        meta target-role=&quot;Started&quot;<br>
primitive stonith-atlas1 stonith:ipmilan \<br>
        params hostname=&quot;atlas1&quot; ipaddr=&quot;192.168.40.21&quot; port=&quot;623&quot;<br>
auth=&quot;md5&quot; priv=&quot;admin&quot; login=&quot;root&quot; password=&quot;XXXX&quot; \<br>
        op start interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        meta target-role=&quot;Started&quot;<br>
primitive stonith-atlas2 stonith:ipmilan \<br>
        params hostname=&quot;atlas2&quot; ipaddr=&quot;192.168.40.22&quot; port=&quot;623&quot;<br>
auth=&quot;md5&quot; priv=&quot;admin&quot; login=&quot;root&quot; password=&quot;XXXX&quot; \<br>
        op start interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        meta target-role=&quot;Started&quot;<br>
primitive stonith-atlas3 stonith:ipmilan \<br>
        params hostname=&quot;atlas3&quot; ipaddr=&quot;192.168.40.23&quot; port=&quot;623&quot;<br>
auth=&quot;md5&quot; priv=&quot;admin&quot; login=&quot;root&quot; password=&quot;XXXX&quot; \<br>
        op start interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        meta target-role=&quot;Started&quot;<br>
primitive stonith-atlas4 stonith:ipmilan \<br>
        params hostname=&quot;atlas4&quot; ipaddr=&quot;192.168.40.24&quot; port=&quot;623&quot;<br>
auth=&quot;md5&quot; priv=&quot;admin&quot; login=&quot;root&quot; password=&quot;XXXX&quot; \<br>
        op start interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        meta target-role=&quot;Started&quot;<br>
primitive stonith-atlas5 stonith:apcmastersnmp \<br>
        params ipaddr=&quot;192.168.40.252&quot; port=&quot;161&quot; community=&quot;XXXX&quot;<br>
pcmk_host_list=&quot;atlas5&quot; pcmk_host_check=&quot;static-list&quot;<br>
primitive stonith-atlas6 stonith:apcmastersnmp \<br>
        params ipaddr=&quot;192.168.40.252&quot; port=&quot;161&quot; community=&quot;XXXX&quot;<br>
pcmk_host_list=&quot;atlas6&quot; pcmk_host_check=&quot;static-list&quot;<br>
primitive w0 ocf:heartbeat:VirtualDomain \<br>
        params config=&quot;/etc/libvirt/crm/w0.xml&quot; hypervisor=&quot;qemu:///system&quot; \<br>
        meta allow-migrate=&quot;true&quot; target-role=&quot;Started&quot; \<br>
        op start interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        op stop interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        op monitor interval=&quot;10s&quot; timeout=&quot;40s&quot; depth=&quot;0&quot; \<br>
        op migrate_to interval=&quot;0&quot; timeout=&quot;240s&quot; on-fail=&quot;block&quot; \<br>
        op migrate_from interval=&quot;0&quot; timeout=&quot;240s&quot; on-fail=&quot;block&quot; \<br>
        utilization memory=&quot;4608&quot;<br>
primitive webauth ocf:heartbeat:VirtualDomain \<br>
        params config=&quot;/etc/libvirt/crm/webauth.xml&quot; hypervisor=&quot;qemu:///system&quot; \<br>
        meta allow-migrate=&quot;true&quot; target-role=&quot;Started&quot; is-managed=&quot;true&quot; \<br>
        op start interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        op stop interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        op monitor interval=&quot;10s&quot; timeout=&quot;40s&quot; depth=&quot;0&quot; \<br>
        op migrate_to interval=&quot;0&quot; timeout=&quot;240s&quot; on-fail=&quot;block&quot; \<br>
        op migrate_from interval=&quot;0&quot; timeout=&quot;240s&quot; on-fail=&quot;block&quot; \<br>
        utilization memory=&quot;4608&quot;<br>
primitive wwwd ocf:heartbeat:VirtualDomain \<br>
        params config=&quot;/etc/libvirt/crm/wwwd.xml&quot; hypervisor=&quot;qemu:///system&quot; \<br>
        meta allow-migrate=&quot;true&quot; target-role=&quot;Started&quot; is-managed=&quot;true&quot; \<br>
        op start interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        op stop interval=&quot;0&quot; timeout=&quot;120s&quot; \<br>
        op monitor interval=&quot;10s&quot; timeout=&quot;40s&quot; depth=&quot;0&quot; \<br>
        op migrate_to interval=&quot;0&quot; timeout=&quot;240s&quot; on-fail=&quot;block&quot; \<br>
        op migrate_from interval=&quot;0&quot; timeout=&quot;240s&quot; on-fail=&quot;block&quot; \<br>
        utilization memory=&quot;5120&quot;<br>
location location-stonith-atlas0 stonith-atlas0 -inf: atlas0<br>
location location-stonith-atlas1 stonith-atlas1 -inf: atlas1<br>
location location-stonith-atlas2 stonith-atlas2 -inf: atlas2<br>
location location-stonith-atlas3 stonith-atlas3 -inf: atlas3<br>
location location-stonith-atlas4 stonith-atlas4 -inf: atlas4<br>
location location-stonith-atlas5 stonith-atlas5 -inf: atlas5<br>
location location-stonith-atlas6 stonith-atlas6 -inf: atlas6<br>
property $id=&quot;cib-bootstrap-options&quot; \<br>
        dc-version=&quot;1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c&quot; \<br>
        cluster-infrastructure=&quot;openais&quot; \<br>
        expected-quorum-votes=&quot;7&quot; \<br>
        stonith-enabled=&quot;true&quot; \<br>
        no-quorum-policy=&quot;stop&quot; \<br>
        last-lrm-refresh=&quot;1340193431&quot; \<br>
        symmetric-cluster=&quot;true&quot; \<br>
        maintenance-mode=&quot;false&quot; \<br>
        stop-all-resources=&quot;false&quot; \<br>
        is-managed-default=&quot;true&quot; \<br>
        placement-strategy=&quot;balanced&quot;<br>
<br>
# crm_verify -L -VV<br>
[...]<br>
crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Leave   w0<br>
(Started atlas2)<br>
crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Leave<br>
stonith-atlas6       (Started atlas4)<br>
crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Leave<br>
stonith-atlas5       (Started atlas4)<br>
crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Leave<br>
stonith-atlas4       (Started atlas3)<br>
crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Leave<br>
stonith-atlas3       (Started atlas4)<br>
crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Leave<br>
stonith-atlas2       (Started atlas4)<br>
crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Leave<br>
stonith-atlas1       (Started atlas4)<br>
crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Leave<br>
stonith-atlas0       (Started atlas4)<br>
crm_verify[19320]: 2012/06/20_15:25:50 notice: LogActions: Start   lx0<br>
(atlas4)<br>
<br>
I have tried to delete the resource and add it again; that did not help.<br>
The corresponding log entries:<br>
<br>
Jun 20 11:57:25 atlas4 crmd: [17571]: info: delete_resource: Removing<br>
resource lx0 for 28654_crm_resource (internal) on atlas0<br>
Jun 20 11:57:25 atlas4 lrmd: [17568]: debug: lrmd_rsc_destroy: removing<br>
resource lx0<br>
Jun 20 11:57:25 atlas4 crmd: [17571]: debug: delete_rsc_entry: sync:<br>
Sending delete op for lx0<br>
Jun 20 11:57:25 atlas4 crmd: [17571]: info: notify_deleted: Notifying<br>
28654_crm_resource on atlas0 that lx0 was deleted<br>
Jun 20 11:57:25 atlas4 crmd: [17571]: WARN: decode_transition_key: Bad<br>
UUID (crm-resource-28654) in sscanf result (3) for 0:0:crm-resource-28654<br>
Jun 20 11:57:25 atlas4 crmd: [17571]: debug: create_operation_update:<br>
send_direct_ack: Updating resouce lx0 after complete delete op<br>
(interval=60000)<br>
Jun 20 11:57:25 atlas4 crmd: [17571]: info: send_direct_ack: ACK&#39;ing<br>
resource op lx0_delete_60000 from 0:0:crm-resource-28654:<br>
lrm_invoke-lrmd-1340186245-16<br>
Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] mcasted message added<br>
to pending queue<br>
Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] mcasted message added<br>
to pending queue<br>
Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] Delivering 10d5 to 10d7<br>
Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] Delivering MCAST<br>
message with seq 10d6 to pending delivery queue<br>
Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] Delivering MCAST<br>
message with seq 10d7 to pending delivery queue<br>
Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] Received<br>
ringid(192.168.40.60:22264) seq 10d6<br>
Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] Received<br>
ringid(192.168.40.60:22264) seq 10d7<br>
Jun 20 11:57:25 atlas4 crmd: [17571]: debug: notify_deleted: Triggering a<br>
refresh after 28654_crm_resource deleted lx0 from the LRM<br>
Jun 20 11:57:25 atlas4 cib: [17567]: debug: cib_process_xpath: Processing<br>
cib_query op for<br>
//cib/configuration/crm_config//cluster_property_set//nvpair[@name=&#39;last-lrm-refresh&#39;]<br>
(/cib/configuration/crm_config/cluster_property_set/nvpair[6])<br>
<br>
<br>
Jun 20 11:57:25 atlas4 lrmd: [17568]: debug: on_msg_add_rsc:client [17571]<br>
adds resource lx0<br>
Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] Delivering 149e to 149f<br>
Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] Delivering MCAST<br>
message with seq 149f to pending delivery queue<br>
Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] Received<br>
ringid(192.168.40.60:22264) seq 14a0<br>
Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] Delivering 149f to 14a0<br>
Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] Delivering MCAST<br>
message with seq 14a0 to pending delivery queue<br>
Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] releasing messages up<br>
to and including 149e<br>
Jun 20 11:57:25 atlas4 crmd: [17571]: info: do_lrm_rsc_op: Performing<br>
key=26:10266:7:e7426ec7-3bae-4a4b-a4ae-c3f80f17e058 op=lx0_monitor_0 )<br>
Jun 20 11:57:25 atlas4 lrmd: [17568]: debug: on_msg_perform_op:2396:<br>
copying parameters for rsc lx0<br>
Jun 20 11:57:25 atlas4 lrmd: [17568]: debug: on_msg_perform_op: add an<br>
operation operation monitor[35] on lx0 for client 17571, its parameters:<br>
crm_feature_set=[3.0.5] config=[/etc/libvirt/crm/lx0.xml]<br>
CRM_meta_timeout=[20000] hypervisor=[qemu:///system]  to the operation<br>
list.<br>
Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] releasing messages up<br>
to and including 149f<br>
Jun 20 11:57:25 atlas4 lrmd: [17568]: info: rsc:lx0 probe[35] (pid 30179)<br>
Jun 20 11:57:25 atlas4 VirtualDomain[30179]: INFO: Domain name &quot;lx0&quot; saved<br>
to /var/run/resource-agents/VirtualDomain-lx0.state.<br>
Jun 20 11:57:25 atlas4 corosync[17530]:   [TOTEM ] releasing messages up<br>
to and including 14bc<br>
Jun 20 11:57:25 atlas4 VirtualDomain[30179]: DEBUG: Virtual domain lx0 is<br>
currently shut off.<br>
Jun 20 11:57:25 atlas4 lrmd: [17568]: WARN: Managed lx0:monitor process<br>
30179 exited with return code 7.<br>
Jun 20 11:57:25 atlas4 lrmd: [17568]: info: operation monitor[35] on lx0<br>
for client 17571: pid 30179 exited with return code 7<br>
Jun 20 11:57:25 atlas4 crmd: [17571]: debug: create_operation_update:<br>
do_update_resource: Updating resouce lx0 after complete monitor op<br>
(interval=0)<br>
Jun 20 11:57:25 atlas4 crmd: [17571]: info: process_lrm_event: LRM<br>
operation lx0_monitor_0 (call=35, rc=7, cib-update=61, confirmed=true) not<br>
running<br>
Jun 20 11:57:25 atlas4 crmd: [17571]: debug: update_history_cache:<br>
Appending monitor op to history for &#39;lx0&#39;<br>
Jun 20 11:57:25 atlas4 crmd: [17571]: debug: get_xpath_object: No match<br>
for //cib_update_result//diff-added//crm_config in<br>
/notify/cib_update_result/diff<br>
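When scanning longer logs for these events, the call and rc fields of a process_lrm_event line can be pulled out mechanically. A sketch with the log line inlined so it runs standalone (against a live node you would grep it out of syslog instead):

```shell
#!/bin/sh
# Extract the call number and return code from a process_lrm_event log line.
# The line is inlined here purely so the snippet is self-contained.
line='crmd: [17571]: info: process_lrm_event: LRM operation lx0_monitor_0 (call=35, rc=7, cib-update=61, confirmed=true) not running'

rc=$(expr "$line" : '.*rc=\([0-9]*\)')
call=$(expr "$line" : '.*call=\([0-9]*\)')

echo "call=$call rc=$rc"   # -> call=35 rc=7
```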
<br>
What can be wrong in the setup/configuration? And what on earth<br>
happened?<br>
<br>
Best regards,<br>
Jozsef<br>
--<br>
E-mail : <a href="mailto:kadlecsik.jozsef@wigner.mta.hu">kadlecsik.jozsef@wigner.mta.hu</a><br>
PGP key: <a href="http://www.kfki.hu/%7Ekadlec/pgp_public_key.txt" target="_blank">http://www.kfki.hu/~kadlec/pgp_public_key.txt</a><br>
Address: Wigner Research Centre for Physics, Hungarian Academy of Sciences<br>
         H-1525 Budapest 114, POB. 49, Hungary<br>
<br>
_______________________________________________<br>
Pacemaker mailing list: <a href="mailto:Pacemaker@oss.clusterlabs.org">Pacemaker@oss.clusterlabs.org</a><br>
<a href="http://oss.clusterlabs.org/mailman/listinfo/pacemaker" target="_blank">http://oss.clusterlabs.org/mailman/listinfo/pacemaker</a><br>
<br>
Project Home: <a href="http://www.clusterlabs.org" target="_blank">http://www.clusterlabs.org</a><br>
Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
Bugs: <a href="http://bugs.clusterlabs.org" target="_blank">http://bugs.clusterlabs.org</a><br>
</blockquote></div><br><br clear="all"><br>-- <br>this is my life and I live it for as long as God wills<br>