<div dir="ltr">No joy with ipport sadly<div><br></div><div>&lt;nvpair id=&quot;st-rhevm-instance_attributes-ipport&quot; name=&quot;ipport&quot; value=&quot;443&quot;/&gt;<br></div><div>&lt;nvpair id=&quot;st-rhevm-instance_attributes-shell_timeout&quot; name=&quot;shell_timeout&quot; value=&quot;10&quot;/&gt; <br>

</div><div><br></div><div style>Can  you share the changes you made to fence_rhevm for the API change? I&#39;ve got what *should* be the latest packages from the HA channel on both systems.</div></div><div class="gmail_extra">

<br><br><div class="gmail_quote">On Wed, May 22, 2013 at 11:34 AM, Andrew Beekhof <span dir="ltr">&lt;<a href="mailto:andrew@beekhof.net" target="_blank">andrew@beekhof.net</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">

<div class="HOEnZb"><div class="h5"><br>
On 22/05/2013, at 7:31 PM, John McCabe &lt;<a href="mailto:john@johnmccabe.net">john@johnmccabe.net</a>&gt; wrote:<br>
<br>
> Hi,
> I've been trying to get fence_rhevm (fence-agents-3.1.5-25.el6_4.2.x86_64) working within pacemaker (pacemaker-1.1.8-7.el6.x86_64), but I can't get it to work as intended. Using fence_rhevm on the command line works as expected, as does stonith_admin, but fencing fails from within pacemaker (triggered by deliberately killing corosync on the node to be fenced):
>
> May 21 22:21:32 defiant corosync[1245]:   [TOTEM ] A processor failed, forming new configuration.
> May 21 22:21:34 defiant corosync[1245]:   [QUORUM] Members[1]: 1
> May 21 22:21:34 defiant corosync[1245]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
> May 21 22:21:34 defiant kernel: dlm: closing connection to node 2
> May 21 22:21:34 defiant corosync[1245]:   [CPG   ] chosen downlist: sender r(0) ip(10.10.25.152) ; members(old:2 left:1)
> May 21 22:21:34 defiant corosync[1245]:   [MAIN  ] Completed service synchronization, ready to provide service.
> May 21 22:21:34 defiant crmd[1749]:   notice: crm_update_peer_state: cman_event_callback: Node enterprise[2] - state is now lost
> May 21 22:21:34 defiant crmd[1749]:  warning: match_down_event: No match for shutdown action on enterprise
> May 21 22:21:34 defiant crmd[1749]:   notice: peer_update_callback: Stonith/shutdown of enterprise not matched
> May 21 22:21:34 defiant crmd[1749]:   notice: do_state_transition: State transition S_IDLE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
> May 21 22:21:34 defiant fenced[1302]: fencing node enterprise
> May 21 22:21:34 defiant logger: fence_pcmk[2219]: Requesting Pacemaker fence enterprise (reset)
> May 21 22:21:34 defiant stonith_admin[2220]:   notice: crm_log_args: Invoked: stonith_admin --reboot enterprise --tolerance 5s
> May 21 22:21:35 defiant attrd[1747]:   notice: attrd_local_callback: Sending full refresh (origin=crmd)
> May 21 22:21:35 defiant attrd[1747]:   notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
> May 21 22:21:36 defiant pengine[1748]:   notice: unpack_config: On loss of CCM Quorum: Ignore
> May 21 22:21:36 defiant pengine[1748]:   notice: process_pe_message: Calculated Transition 64: /var/lib/pacemaker/pengine/pe-input-60.bz2
> May 21 22:21:36 defiant crmd[1749]:   notice: run_graph: Transition 64 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-60.bz2): Complete
> May 21 22:21:36 defiant crmd[1749]:   notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
> May 21 22:21:44 defiant logger: fence_pcmk[2219]: Call to fence enterprise (reset) failed with rc=255
> May 21 22:21:45 defiant fenced[1302]: fence enterprise dev 0.0 agent fence_pcmk result: error from agent
> May 21 22:21:45 defiant fenced[1302]: fence enterprise failed
> May 21 22:21:48 defiant fenced[1302]: fencing node enterprise
> May 21 22:21:48 defiant logger: fence_pcmk[2239]: Requesting Pacemaker fence enterprise (reset)
> May 21 22:21:48 defiant stonith_admin[2240]:   notice: crm_log_args: Invoked: stonith_admin --reboot enterprise --tolerance 5s
> May 21 22:21:58 defiant logger: fence_pcmk[2239]: Call to fence enterprise (reset) failed with rc=255
> May 21 22:21:58 defiant fenced[1302]: fence enterprise dev 0.0 agent fence_pcmk result: error from agent
> May 21 22:21:58 defiant fenced[1302]: fence enterprise failed
> May 21 22:22:01 defiant fenced[1302]: fencing node enterprise
>
> corosync.log also shows "warning: match_down_event:  No match for shutdown action on enterprise" and "notice: peer_update_callback:      Stonith/shutdown of enterprise not matched":
>
> May 21 22:21:32 corosync [TOTEM ] A processor failed, forming new configuration.
> May 21 22:21:34 corosync [QUORUM] Members[1]: 1
> May 21 22:21:34 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
> May 21 22:21:34 [1749] defiant       crmd:     info: cman_event_callback:       Membership 296: quorum retained
> May 21 22:21:34 [1744] defiant        cib:     info: pcmk_cpg_membership:       Left[5.0] cib.2
> May 21 22:21:34 [1744] defiant        cib:     info: crm_update_peer_proc:      pcmk_cpg_membership: Node enterprise[2] - corosync-cpg is now offline
> May 21 22:21:34 [1744] defiant        cib:     info: pcmk_cpg_membership:       Member[5.0] cib.1
> May 21 22:21:34 [1745] defiant stonith-ng:     info: pcmk_cpg_membership:       Left[5.0] stonith-ng.2
> May 21 22:21:34 [1745] defiant stonith-ng:     info: crm_update_peer_proc:      pcmk_cpg_membership: Node enterprise[2] - corosync-cpg is now offline
> May 21 22:21:34 corosync [CPG   ] chosen downlist: sender r(0) ip(10.10.25.152) ; members(old:2 left:1)
> May 21 22:21:34 corosync [MAIN  ] Completed service synchronization, ready to provide service.
> May 21 22:21:34 [1745] defiant stonith-ng:     info: pcmk_cpg_membership:       Member[5.0] stonith-ng.1
> May 21 22:21:34 [1749] defiant       crmd:   notice: crm_update_peer_state:     cman_event_callback: Node enterprise[2] - state is now lost
> May 21 22:21:34 [1749] defiant       crmd:     info: peer_update_callback:      enterprise is now lost (was member)
> May 21 22:21:34 [1744] defiant        cib:     info: cib_process_request:       Operation complete: op cib_modify for section nodes (origin=local/crmd/150, version=0.22.3): OK (rc=0)
> May 21 22:21:34 [1749] defiant       crmd:     info: pcmk_cpg_membership:       Left[5.0] crmd.2
> May 21 22:21:34 [1749] defiant       crmd:     info: crm_update_peer_proc:      pcmk_cpg_membership: Node enterprise[2] - corosync-cpg is now offline
> May 21 22:21:34 [1749] defiant       crmd:     info: peer_update_callback:      Client enterprise/peer now has status [offline] (DC=true)
> May 21 22:21:34 [1749] defiant       crmd:  warning: match_down_event:  No match for shutdown action on enterprise
> May 21 22:21:34 [1749] defiant       crmd:   notice: peer_update_callback:      Stonith/shutdown of enterprise not matched
> May 21 22:21:34 [1749] defiant       crmd:     info: crm_update_peer_expected:  peer_update_callback: Node enterprise[2] - expected state is now down
> May 21 22:21:34 [1749] defiant       crmd:     info: abort_transition_graph:    peer_update_callback:211 - Triggered transition abort (complete=1) : Node failure
> May 21 22:21:34 [1749] defiant       crmd:     info: pcmk_cpg_membership:       Member[5.0] crmd.1
> May 21 22:21:34 [1749] defiant       crmd:   notice: do_state_transition:       State transition S_IDLE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
> May 21 22:21:34 [1749] defiant       crmd:     info: abort_transition_graph:    do_te_invoke:163 - Triggered transition abort (complete=1) : Peer Halt
> May 21 22:21:34 [1749] defiant       crmd:     info: join_make_offer:   Making join offers based on membership 296
> May 21 22:21:34 [1749] defiant       crmd:     info: do_dc_join_offer_all:      join-7: Waiting on 1 outstanding join acks
> May 21 22:21:34 [1749] defiant       crmd:     info: update_dc:         Set DC to defiant (3.0.7)
> May 21 22:21:34 [1749] defiant       crmd:     info: do_state_transition:       State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
> May 21 22:21:34 [1749] defiant       crmd:     info: do_dc_join_finalize:       join-7: Syncing the CIB from defiant to the rest of the cluster
> May 21 22:21:34 [1744] defiant        cib:     info: cib_process_request:       Operation complete: op cib_sync for section 'all' (origin=local/crmd/154, version=0.22.5): OK (rc=0)
> May 21 22:21:34 [1744] defiant        cib:     info: cib_process_request:       Operation complete: op cib_modify for section nodes (origin=local/crmd/155, version=0.22.6): OK (rc=0)
> May 21 22:21:34 [1749] defiant       crmd:     info: stonith_action_create:     Initiating action metadata for agent fence_rhevm (target=(null))
> May 21 22:21:35 [1749] defiant       crmd:     info: do_dc_join_ack:    join-7: Updating node state to member for defiant
> May 21 22:21:35 [1749] defiant       crmd:     info: erase_status_tag:  Deleting xpath: //node_state[@uname='defiant']/lrm
> May 21 22:21:35 [1744] defiant        cib:     info: cib_process_request:       Operation complete: op cib_delete for section //node_state[@uname='defiant']/lrm (origin=local/crmd/156, version=0.22.7): OK (rc=0)
> May 21 22:21:35 [1749] defiant       crmd:     info: do_state_transition:       State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
> May 21 22:21:35 [1749] defiant       crmd:     info: abort_transition_graph:    do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
> May 21 22:21:35 [1747] defiant      attrd:   notice: attrd_local_callback:      Sending full refresh (origin=crmd)
> May 21 22:21:35 [1747] defiant      attrd:   notice: attrd_trigger_update:      Sending flush op to all hosts for: probe_complete (true)
> May 21 22:21:35 [1744] defiant        cib:     info: cib_process_request:       Operation complete: op cib_modify for section nodes (origin=local/crmd/158, version=0.22.9): OK (rc=0)
> May 21 22:21:35 [1744] defiant        cib:     info: cib_process_request:       Operation complete: op cib_modify for section cib (origin=local/crmd/160, version=0.22.11): OK (rc=0)
> May 21 22:21:36 [1748] defiant    pengine:     info: unpack_config:     Startup probes: enabled
> May 21 22:21:36 [1748] defiant    pengine:   notice: unpack_config:     On loss of CCM Quorum: Ignore
> May 21 22:21:36 [1748] defiant    pengine:     info: unpack_config:     Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
> May 21 22:21:36 [1748] defiant    pengine:     info: unpack_domains:    Unpacking domains
> May 21 22:21:36 [1748] defiant    pengine:     info: determine_online_status_fencing:   Node defiant is active
> May 21 22:21:36 [1748] defiant    pengine:     info: determine_online_status:   Node defiant is online
> May 21 22:21:36 [1748] defiant    pengine:     info: native_print:      st-rhevm        (stonith:fence_rhevm):  Started defiant
> May 21 22:21:36 [1748] defiant    pengine:     info: LogActions:        Leave   st-rhevm        (Started defiant)
> May 21 22:21:36 [1748] defiant    pengine:   notice: process_pe_message:        Calculated Transition 64: /var/lib/pacemaker/pengine/pe-input-60.bz2
> May 21 22:21:36 [1749] defiant       crmd:     info: do_state_transition:       State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
> May 21 22:21:36 [1749] defiant       crmd:     info: do_te_invoke:      Processing graph 64 (ref=pe_calc-dc-1369171296-118) derived from /var/lib/pacemaker/pengine/pe-input-60.bz2
> May 21 22:21:36 [1749] defiant       crmd:   notice: run_graph:         Transition 64 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-60.bz2): Complete
> May 21 22:21:36 [1749] defiant       crmd:   notice: do_state_transition:       State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
>
> I can get the node enterprise to fence as expected from the command line with:
>
> stonith_admin --reboot enterprise --tolerance 5s
>
> fence_rhevm -o reboot -a <hypervisor ip> -l <user>@<domain> -p <password> -n enterprise -z
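Worth noting: stonith-ng hands the agent its options on stdin as key=value lines rather than as command-line flags, so a closer reproduction of what pacemaker actually runs is something like the sketch below (same placeholders as your command above; I haven't run this exact invocation here):

# rough sketch: stdin-style invocation, mirroring how stonith-ng calls fence agents
fence_rhevm <<EOF
action=reboot
ipaddr=<hypervisor ip>
login=<user>@<domain>
passwd=<password>
ssl=1
port=enterprise
EOF

If that form fails where the flag form works, the interesting difference is in the options, not in the agent's API handling.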
>
> My config is as follows:
>
> cluster.conf -----------------------------------
>
> <?xml version="1.0"?>
> <cluster config_version="1" name="cluster">
>   <logging debug="off"/>
>   <clusternodes>
>     <clusternode name="defiant" nodeid="1">
>       <fence>
>         <method name="pcmk-redirect">
>           <device name="pcmk" port="defiant"/>
>         </method>
>       </fence>
>     </clusternode>
>     <clusternode name="enterprise" nodeid="2">
>       <fence>
>         <method name="pcmk-redirect">
>           <device name="pcmk" port="enterprise"/>
>         </method>
>       </fence>
>     </clusternode>
>   </clusternodes>
>   <fencedevices>
>     <fencedevice name="pcmk" agent="fence_pcmk"/>
>   </fencedevices>
>   <cman two_node="1" expected_votes="1">
>   </cman>
> </cluster>
>
> pacemaker cib ---------------------------------
>
> Stonith device created with:
>
> pcs stonith create st-rhevm fence_rhevm login="<user>@<domain>" passwd="<password>" ssl=1 ipaddr="<hypervisor ip>" verbose=1 debug="/tmp/debug.log"
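One thing I'd check while you're there: with no pcmk_host_list set, stonith-ng has to work out the targets from the agent's dynamic list. Pinning the hosts is a cheap experiment; a sketch, assuming your pcs supports "stonith update" and that the node names match the RHEV VM names:

# sketch: pin the fenceable hosts instead of relying on the agent's list action
pcs stonith update st-rhevm pcmk_host_list="defiant enterprise" pcmk_host_check="static-list"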


>
> <cib epoch="18" num_updates="88" admin_epoch="0" validate-with="pacemaker-1.2" update-origin="defiant" update-client="cibadmin" cib-last-written="Tue May 21 07:55:31 2013" crm_feature_set="3.0.7" have-quorum="1" dc-uuid="defiant">
>   <configuration>
>     <crm_config>
>       <cluster_property_set id="cib-bootstrap-options">
>         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.8-7.el6-394e906"/>
>         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="cman"/>
>         <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore"/>
>         <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="true"/>
>       </cluster_property_set>
>     </crm_config>
>     <nodes>
>       <node id="defiant" uname="defiant"/>
>       <node id="enterprise" uname="enterprise"/>
>     </nodes>
>     <resources>
>       <primitive class="stonith" id="st-rhevm" type="fence_rhevm">
>         <instance_attributes id="st-rhevm-instance_attributes">
>           <nvpair id="st-rhevm-instance_attributes-login" name="login" value="<user>@<domain>"/>
>           <nvpair id="st-rhevm-instance_attributes-passwd" name="passwd" value="<password>"/>
>           <nvpair id="st-rhevm-instance_attributes-debug" name="debug" value="/tmp/debug.log"/>
>           <nvpair id="st-rhevm-instance_attributes-ssl" name="ssl" value="1"/>
>           <nvpair id="st-rhevm-instance_attributes-verbose" name="verbose" value="1"/>
>           <nvpair id="st-rhevm-instance_attributes-ipaddr" name="ipaddr" value="<hypervisor ip>"/>
>         </instance_attributes>
>       </primitive>

Mine is:

      <primitive id="Fencing" class="stonith" type="fence_rhevm">
        <instance_attributes id="Fencing-params">
          <nvpair id="Fencing-ipport" name="ipport" value="443"/>
          <nvpair id="Fencing-shell_timeout" name="shell_timeout" value="10"/>
          <nvpair id="Fencing-passwd" name="passwd" value="{pass}"/>
          <nvpair id="Fencing-ipaddr" name="ipaddr" value="{ip}"/>
          <nvpair id="Fencing-ssl" name="ssl" value="1"/>
          <nvpair id="Fencing-login" name="login" value="{user}@{domain}"/>
        </instance_attributes>
        <operations>
          <op id="Fencing-monitor-120s" interval="120s" name="monitor" timeout="120s"/>
          <op id="Fencing-stop-0" interval="0" name="stop" timeout="60s"/>
          <op id="Fencing-start-0" interval="0" name="start" timeout="60s"/>
        </operations>
      </primitive>

Maybe ipport is important?
Also, there was a RHEVM API change recently; I had to update the fence_rhevm agent before it would work again.
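If you want to confirm what your manager is exposing before touching the agent, listing the VMs over the REST API directly is a quick check; a rough sketch with placeholder host and credentials, not something I've run against your setup:

# sketch: query the RHEV-M REST API by hand (-k skips cert verification)
curl -k -u '<user>@<domain>:<password>' -H 'Accept: application/xml' 'https://<rhevm host>:443/api/vms'

If that returns the VM list as XML, the API endpoint itself is fine and the agent's request/parsing side is where to look.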
>     </resources>
>     <constraints/>
>   </configuration>
>   <status>
>     <node_state id="defiant" uname="defiant" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
>       <transient_attributes id="defiant">
>         <instance_attributes id="status-defiant">
>           <nvpair id="status-defiant-probe_complete" name="probe_complete" value="true"/>
>         </instance_attributes>
>       </transient_attributes>
>       <lrm id="defiant">
>         <lrm_resources>
>           <lrm_resource id="st-rhevm" type="fence_rhevm" class="stonith">
>             <lrm_rsc_op id="st-rhevm_last_0" operation_key="st-rhevm_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.7" transition-key="2:1:0:1e7972e8-6f9a-4325-b9c3-3d7e2950d996" transition-magic="0:0;2:1:0:1e7972e8-6f9a-4325-b9c3-3d7e2950d996" call-id="14" rc-code="0" op-status="0" interval="0" last-run="1369119332" last-rc-change="0" exec-time="232" queue-time="0" op-digest="3bc7e1ce413fe37998a289f77f85d159"/>
>           </lrm_resource>
>         </lrm_resources>
>       </lrm>
>     </node_state>
>     <node_state id="enterprise" uname="enterprise" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
>       <lrm id="enterprise">
>         <lrm_resources>
>           <lrm_resource id="st-rhevm" type="fence_rhevm" class="stonith">
>             <lrm_rsc_op id="st-rhevm_last_0" operation_key="st-rhevm_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.7" transition-key="5:59:7:8170c498-f66b-4974-b3c0-c17eb45ba5cb" transition-magic="0:7;5:59:7:8170c498-f66b-4974-b3c0-c17eb45ba5cb" call-id="5" rc-code="7" op-status="0" interval="0" last-run="1369170800" last-rc-change="0" exec-time="4" queue-time="0" op-digest="3bc7e1ce413fe37998a289f77f85d159"/>
>           </lrm_resource>
>         </lrm_resources>
>       </lrm>
>       <transient_attributes id="enterprise">
>         <instance_attributes id="status-enterprise">
>           <nvpair id="status-enterprise-probe_complete" name="probe_complete" value="true"/>
>         </instance_attributes>
>       </transient_attributes>
>     </node_state>
>   </status>
> </cib>
>
> The debug log output from fence_rhevm doesn't appear to show pacemaker trying to request the reboot, only a vms command sent to the hypervisor, which responds with XML listing the VMs.
>
> I can't quite see why it's failing. Are you aware of any issues with fence_rhevm (fence-agents-3.1.5-25.el6_4.2.x86_64) not working with pacemaker (pacemaker-1.1.8-7.el6.x86_64) on RHEL6.4?
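One more data point worth collecting: ask stonith-ng directly which devices it believes can fence the node, e.g.:

# sketch: list devices stonith-ng considers capable of fencing "enterprise"
stonith_admin --list enterprise

If st-rhevm doesn't show up there, the device registration or host list is the problem rather than the agent itself.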
>
> All the best,
> /John
_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org