I saw this in your log:

=====================================================
Feb 13 10:32:07 nodeb stonith-ng: [22991]: notice: stonith_device_action: Device fence-fcma not found
Feb 13 10:32:07 nodeb stonith-ng: [22991]: info: stonith_command: Processed st_execute from lrmd: rc=-12
Feb 13 10:32:07 nodeb stonith-ng: [22991]: notice: stonith_device_action: Device fence-fcmb not found
Feb 13 10:32:07 nodeb stonith-ng: [22991]: info: stonith_command: Processed st_execute from lrmd: rc=-12
Feb 13 10:32:07 nodeb crmd: [22996]: info: process_lrm_event: LRM operation fence-fcma_monitor_0 (call=8, rc=7, cib-update=30, confirmed=true) not running
Feb 13 10:32:07 nodeb crmd: [22996]: info: process_lrm_event: LRM operation fence-fcmb_monitor_0 (call=9, rc=7, cib-update=31, confirmed=true) not running
=====================================================
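In other words, stonith-ng on nodeb reports that no devices named fence-fcma or fence-fcmb are registered at the time the probes run. As a quick cross-check (a generic sketch, not commands taken from your log), you could compare what stonith-ng actually has registered against the stonith primitives defined in the CIB:

    # fencing devices stonith-ng currently has registered on this node
    stonith_admin --list-registered

    # stonith primitives that exist in the configuration
    crm_resource --list | grep -i fence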
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><u></u>

  
    
  
  <div text="#000000" bgcolor="#ffffff">
    On 02/12/2012 04:55 PM, Andreas Kurz wrote:
        op monitor role="Master" interval="30s"
        op monitor role="Slave"  interval="31s"

    ... ipmi fencing device capable of fencing more than one node?
    Andreas-

    I applied both changes you mentioned, but the behavior is unchanged.
    Here is my current configuration:

    node nodea \
            attributes standby="off"
    node nodeb \
            attributes standby="off"
    primitive ClusterIP ocf:heartbeat:IPaddr2 \
            params ip="192.168.1.3" cidr_netmask="32" \
            op monitor interval="30s"
    primitive datafs ocf:heartbeat:Filesystem \
            params device="/dev/drbd0" directory="/data" fstype="ext3" \
            meta target-role="Started"
    primitive drbd0 ocf:linbit:drbd \
            params drbd_resource="drbd0" \
            op monitor interval="31s" role="Slave" \
            op monitor interval="30s" role="Master"
    primitive drbd1 ocf:linbit:drbd \
            params drbd_resource="drbd1" \
            op monitor interval="31s" role="Slave" \
            op monitor interval="30s" role="Master"
    primitive fence-nodea stonith:fence_ipmilan \
            params pcmk_host_list="nodeb" ipaddr="xxx.xxx.xxx.xxx" login="xxxxxxx" passwd="xxxxxxxx" lanplus="1" timeout="4" auth="md5" \
            op monitor interval="60s"
    primitive fence-nodeb stonith:fence_ipmilan \
            params pcmk_host_list="nodea" ipaddr="xxx.xxx.xxx.xxx" login="xxxxxxx" passwd="xxxxxxxx" lanplus="1" timeout="4" auth="md5" \
            op monitor interval="60s"
    primitive httpd ocf:heartbeat:apache \
            params configfile="/etc/httpd/conf/httpd.conf" \
            op monitor interval="1min"
    primitive patchfs ocf:heartbeat:Filesystem \
            params device="/dev/drbd1" directory="/patch" fstype="ext3" \
            meta target-role="Started"
    group web datafs patchfs ClusterIP httpd
    ms drbd0clone drbd0 \
            meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" target-role="Master"
    ms drbd1clone drbd1 \
            meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" target-role="Master"
    location fence-on-nodea fence-nodea \
            rule $id="fence-on-nodea-rule" -inf: #uname ne nodea
    location fence-on-nodeb fence-nodeb \
            rule $id="fence-on-nodeb-rule" -inf: #uname ne nodeb
    colocation datafs-with-drbd0 inf: web drbd0clone:Master
    colocation patchfs-with-drbd1 inf: web drbd1clone:Master
    order datafs-after-drbd0 inf: drbd0clone:promote web:start
    order patchfs-after-drbd1 inf: drbd1clone:promote web:start
    property $id="cib-bootstrap-options" \
            dc-version="1.1.6-3.el6-a02c0f19a00c1eb2527ad38f146ebc0834814558" \
            cluster-infrastructure="openais" \
            expected-quorum-votes="2" \
            stonith-enabled="false" \
            no-quorum-policy="ignore" \
            last-lrm-refresh="1328556424"
    rsc_defaults $id="rsc-options" \
            resource-stickiness="100"

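    (Aside, and not something raised in the thread: a configuration like the one
    above can be sanity-checked against the running cluster with Pacemaker's own
    verifier, which prints any warnings or errors it finds.)

        # validate the live CIB; -V makes warnings visible
        crm_verify --live-check -V
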
    If the cluster is fully down and I start corosync and pacemaker on one
    node, the cluster fences the other node, but the services do not come up
    until the cluster-recheck-interval expires. I have attached the
    corosync.log from this latest test.
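
    For reference (a generic illustration, not output from this cluster):
    cluster-recheck-interval is an ordinary cluster property, so it can be
    queried, and temporarily shortened while debugging, from the command line.

        # query the current value; Pacemaker's default is 15min if it was never set
        crm_attribute --type crm_config --name cluster-recheck-interval --query

        # example only: shorten the interval temporarily while testing
        crm configure property cluster-recheck-interval="2min"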

    -Davin

--
This is my life, and I live it for as long as God wills.