<div dir="ltr">Sure.  Here&#39;s the full config:<div><br></div><div><div>&lt;cib epoch=&quot;28&quot; num_updates=&quot;34&quot; admin_epoch=&quot;0&quot; validate-with=&quot;pacemaker-1.2&quot; cib-last-written=&quot;Thu Oct  3 16:26:39 2013&quot; crm_feature_set=&quot;3.0.6&quot; update-origin=&quot;test-vm-2&quot; update-client=&quot;cibadmin&quot; have-quorum=&quot;1&quot; dc-uuid=&quot;test-vm-1&quot;&gt;</div>

<div>  &lt;configuration&gt;</div><div>    &lt;crm_config&gt;</div><div>      &lt;cluster_property_set id=&quot;cib-bootstrap-options&quot;&gt;</div><div>        &lt;nvpair id=&quot;cib-bootstrap-options-dc-version&quot; name=&quot;dc-version&quot; value=&quot;1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff&quot;/&gt;</div>

<div>        &lt;nvpair id=&quot;cib-bootstrap-options-cluster-infrastructure&quot; name=&quot;cluster-infrastructure&quot; value=&quot;openais&quot;/&gt;</div><div>        &lt;nvpair id=&quot;cib-bootstrap-options-expected-quorum-votes&quot; name=&quot;expected-quorum-votes&quot; value=&quot;2&quot;/&gt;</div>

<div>        &lt;nvpair id=&quot;cib-bootstrap-options-stonith-enabled&quot; name=&quot;stonith-enabled&quot; value=&quot;false&quot;/&gt;</div><div>        &lt;nvpair id=&quot;cib-bootstrap-options-no-quorum-policy&quot; name=&quot;no-quorum-policy&quot; value=&quot;ignore&quot;/&gt;</div>

<div>      &lt;/cluster_property_set&gt;</div><div>    &lt;/crm_config&gt;</div><div>    &lt;nodes&gt;</div><div>      &lt;node id=&quot;test-vm-1&quot; type=&quot;normal&quot; uname=&quot;test-vm-1&quot;/&gt;</div><div>
      &lt;node id=&quot;test-vm-2&quot; type=&quot;normal&quot; uname=&quot;test-vm-2&quot;/&gt;</div>
<div>    &lt;/nodes&gt;</div><div>    &lt;resources&gt;</div><div>      &lt;group id=&quot;nfs_resources&quot;&gt;</div><div>        &lt;meta_attributes id=&quot;nfs_resources-meta_attributes&quot;&gt;</div><div>          &lt;nvpair id=&quot;nfs_resources-meta_attributes-target-role&quot; name=&quot;target-role&quot; value=&quot;Started&quot;/&gt;</div>

<div>        &lt;/meta_attributes&gt;</div><div>        &lt;primitive class=&quot;ocf&quot; id=&quot;nfs_fs&quot; provider=&quot;heartbeat&quot; type=&quot;Filesystem&quot;&gt;</div><div>          &lt;instance_attributes id=&quot;nfs_fs-instance_attributes&quot;&gt;</div>

<div>            &lt;nvpair id=&quot;nfs_fs-instance_attributes-device&quot; name=&quot;device&quot; value=&quot;/dev/drbd1&quot;/&gt;</div><div>            &lt;nvpair id=&quot;nfs_fs-instance_attributes-directory&quot; name=&quot;directory&quot; value=&quot;/export/data/&quot;/&gt;</div>

<div>            &lt;nvpair id=&quot;nfs_fs-instance_attributes-fstype&quot; name=&quot;fstype&quot; value=&quot;ext3&quot;/&gt;</div><div>            &lt;nvpair id=&quot;nfs_fs-instance_attributes-options&quot; name=&quot;options&quot; value=&quot;noatime,nodiratime&quot;/&gt;</div>

<div>          &lt;/instance_attributes&gt;</div><div>          &lt;operations&gt;</div><div>            &lt;op id=&quot;nfs_fs-start-0&quot; interval=&quot;0&quot; name=&quot;start&quot; timeout=&quot;60&quot;/&gt;</div>

<div>            &lt;op id=&quot;nfs_fs-stop-0&quot; interval=&quot;0&quot; name=&quot;stop&quot; timeout=&quot;120&quot;/&gt;</div><div>          &lt;/operations&gt;</div><div>        &lt;/primitive&gt;</div><div>        &lt;primitive class=&quot;ocf&quot; id=&quot;nfs_ip&quot; provider=&quot;heartbeat&quot; type=&quot;IPaddr2&quot;&gt;</div>

<div>          &lt;instance_attributes id=&quot;nfs_ip-instance_attributes&quot;&gt;</div><div>            &lt;nvpair id=&quot;nfs_ip-instance_attributes-ip&quot; name=&quot;ip&quot; value=&quot;192.168.25.205&quot;/&gt;</div>

<div>            &lt;nvpair id=&quot;nfs_ip-instance_attributes-cidr_netmask&quot; name=&quot;cidr_netmask&quot; value=&quot;32&quot;/&gt;</div><div>          &lt;/instance_attributes&gt;</div><div>          &lt;operations&gt;</div>

<div>            &lt;op id=&quot;nfs_ip-monitor-10s&quot; interval=&quot;10s&quot; name=&quot;monitor&quot;/&gt;</div><div>          &lt;/operations&gt;</div><div>          &lt;meta_attributes id=&quot;nfs_ip-meta_attributes&quot;&gt;</div>

<div>            &lt;nvpair id=&quot;nfs_ip-meta_attributes-is-managed&quot; name=&quot;is-managed&quot; value=&quot;true&quot;/&gt;</div><div>          &lt;/meta_attributes&gt;</div><div>        &lt;/primitive&gt;</div>
<div>
        &lt;primitive class=&quot;lsb&quot; id=&quot;nfs&quot; type=&quot;nfs-kernel-server&quot;&gt;</div><div>          &lt;operations&gt;</div><div>            &lt;op id=&quot;nfs-monitor-5s&quot; interval=&quot;5s&quot; name=&quot;monitor&quot;/&gt;</div>

<div>            &lt;op id=&quot;nfs-start-0&quot; interval=&quot;0&quot; name=&quot;start&quot; timeout=&quot;120&quot;/&gt;</div><div>            &lt;op id=&quot;nfs-stop-0&quot; interval=&quot;0&quot; name=&quot;stop&quot; timeout=&quot;120&quot;/&gt;</div>

<div>          &lt;/operations&gt;</div><div>        &lt;/primitive&gt;</div><div>      &lt;/group&gt;</div><div>      &lt;master id=&quot;ms-drbd_r0&quot;&gt;</div><div>        &lt;meta_attributes id=&quot;ms-drbd_r0-meta_attributes&quot;&gt;</div>

<div>          &lt;nvpair id=&quot;ms-drbd_r0-meta_attributes-clone-max&quot; name=&quot;clone-max&quot; value=&quot;2&quot;/&gt;</div><div>          &lt;nvpair id=&quot;ms-drbd_r0-meta_attributes-notify&quot; name=&quot;notify&quot; value=&quot;true&quot;/&gt;</div>

<div>          &lt;nvpair id=&quot;ms-drbd_r0-meta_attributes-globally-unique&quot; name=&quot;globally-unique&quot; value=&quot;false&quot;/&gt;</div><div>          &lt;nvpair id=&quot;ms-drbd_r0-meta_attributes-target-role&quot; name=&quot;target-role&quot; value=&quot;Master&quot;/&gt;</div>

<div>        &lt;/meta_attributes&gt;</div><div>        &lt;primitive class=&quot;ocf&quot; id=&quot;drbd_r0&quot; provider=&quot;heartbeat&quot; type=&quot;drbd&quot;&gt;</div><div>          &lt;instance_attributes id=&quot;drbd_r0-instance_attributes&quot;&gt;</div>

<div>            &lt;nvpair id=&quot;drbd_r0-instance_attributes-drbd_resource&quot; name=&quot;drbd_resource&quot; value=&quot;r0&quot;/&gt;</div><div>          &lt;/instance_attributes&gt;</div><div>          &lt;operations&gt;</div>

<div>            &lt;op id=&quot;drbd_r0-monitor-59s&quot; interval=&quot;59s&quot; name=&quot;monitor&quot; role=&quot;Master&quot; timeout=&quot;30s&quot;/&gt;</div><div>            &lt;op id=&quot;drbd_r0-monitor-60s&quot; interval=&quot;60s&quot; name=&quot;monitor&quot; role=&quot;Slave&quot; timeout=&quot;30s&quot;/&gt;</div>

<div>          &lt;/operations&gt;</div><div>        &lt;/primitive&gt;</div><div>      &lt;/master&gt;</div><div>    &lt;/resources&gt;</div><div>    &lt;constraints&gt;</div><div>      &lt;rsc_colocation id=&quot;drbd-nfs-ha&quot; rsc=&quot;ms-drbd_r0&quot; rsc-role=&quot;Master&quot; score=&quot;INFINITY&quot; with-rsc=&quot;nfs_resources&quot;/&gt;</div>

<div>      &lt;rsc_order id=&quot;drbd-before-nfs&quot; first=&quot;ms-drbd_r0&quot; first-action=&quot;promote&quot; score=&quot;INFINITY&quot; then=&quot;nfs_resources&quot; then-action=&quot;start&quot;/&gt;</div><div>

    &lt;/constraints&gt;</div><div>    &lt;rsc_defaults&gt;</div><div>      &lt;meta_attributes id=&quot;rsc-options&quot;&gt;</div><div>        &lt;nvpair id=&quot;rsc-options-resource-stickiness&quot; name=&quot;resource-stickiness&quot; value=&quot;100&quot;/&gt;</div>

<div>      &lt;/meta_attributes&gt;</div><div>    &lt;/rsc_defaults&gt;</div><div>  &lt;/configuration&gt;</div><div>  &lt;status&gt;</div><div>    &lt;node_state id=&quot;test-vm-1&quot; uname=&quot;test-vm-1&quot; ha=&quot;active&quot; in_ccm=&quot;true&quot; crmd=&quot;online&quot; join=&quot;member&quot; expected=&quot;member&quot; crm-debug-origin=&quot;do_state_transition&quot; shutdown=&quot;0&quot;&gt;</div>

<div>      &lt;transient_attributes id=&quot;test-vm-1&quot;&gt;</div><div>        &lt;instance_attributes id=&quot;status-test-vm-1&quot;&gt;</div><div>          &lt;nvpair id=&quot;status-test-vm-1-fail-count-drbd_r0.1&quot; name=&quot;fail-count-drbd_r0:1&quot; value=&quot;1&quot;/&gt;</div>

<div>          &lt;nvpair id=&quot;status-test-vm-1-last-failure-drbd_r0.1&quot; name=&quot;last-failure-drbd_r0:1&quot; value=&quot;1380831442&quot;/&gt;</div><div>          &lt;nvpair id=&quot;status-test-vm-1-master-drbd_r0.0&quot; name=&quot;master-drbd_r0:0&quot; value=&quot;100&quot;/&gt;</div>

<div>          &lt;nvpair id=&quot;status-test-vm-1-probe_complete&quot; name=&quot;probe_complete&quot; value=&quot;true&quot;/&gt;</div><div>        &lt;/instance_attributes&gt;</div><div>      &lt;/transient_attributes&gt;</div>

<div>      &lt;lrm id=&quot;test-vm-1&quot;&gt;</div><div>        &lt;lrm_resources&gt;</div><div>          &lt;lrm_resource id=&quot;drbd_r0:0&quot; type=&quot;drbd&quot; class=&quot;ocf&quot; provider=&quot;heartbeat&quot;&gt;</div>

<div>            &lt;lrm_rsc_op id=&quot;drbd_r0:0_last_failure_0&quot; operation_key=&quot;drbd_r0:0_monitor_0&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;build_active_RAs&quot; crm_feature_set=&quot;3.0.6&quot; transition-key=&quot;7:4:7:1b4a3ae4-b013-45d1-a865-9b3b3deecf5f&quot; transition-magic=&quot;0:8;7:4:7:1b4a3ae4-b013-45d1-a865-9b3b3deecf5f&quot; call-id=&quot;32&quot; rc-code=&quot;8&quot; op-status=&quot;0&quot; interval=&quot;0&quot; op-digest=&quot;c0e018b73fdf522b6cdd355e125af15e&quot;/&gt;</div>

<div>            &lt;lrm_rsc_op id=&quot;drbd_r0:0_monitor_59000&quot; operation_key=&quot;drbd_r0:0_monitor_59000&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;build_active_RAs&quot; crm_feature_set=&quot;3.0.6&quot; transition-key=&quot;20:5:8:1b4a3ae4-b013-45d1-a865-9b3b3deecf5f&quot; transition-magic=&quot;0:8;20:5:8:1b4a3ae4-b013-45d1-a865-9b3b3deecf5f&quot; call-id=&quot;35&quot; rc-code=&quot;8&quot; op-status=&quot;0&quot; interval=&quot;59000&quot; op-digest=&quot;6f5adcd7f1211cdfc17850827b8582c5&quot;/&gt;</div>

<div>          &lt;/lrm_resource&gt;</div><div>          &lt;lrm_resource id=&quot;nfs&quot; type=&quot;nfs-kernel-server&quot; class=&quot;lsb&quot;&gt;</div><div>            &lt;lrm_rsc_op id=&quot;nfs_last_0&quot; operation_key=&quot;nfs_start_0&quot; operation=&quot;start&quot; crm-debug-origin=&quot;build_active_RAs&quot; crm_feature_set=&quot;3.0.6&quot; transition-key=&quot;14:8:0:1b4a3ae4-b013-45d1-a865-9b3b3deecf5f&quot; transition-magic=&quot;0:0;14:8:0:1b4a3ae4-b013-45d1-a865-9b3b3deecf5f&quot; call-id=&quot;39&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;0&quot; op-digest=&quot;f2317cad3d54cec5d7d7aa7d0bf35cf8&quot;/&gt;</div>

<div>            &lt;lrm_rsc_op id=&quot;nfs_last_failure_0&quot; operation_key=&quot;nfs_monitor_0&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;build_active_RAs&quot; crm_feature_set=&quot;3.0.6&quot; transition-key=&quot;6:4:7:1b4a3ae4-b013-45d1-a865-9b3b3deecf5f&quot; transition-magic=&quot;0:0;6:4:7:1b4a3ae4-b013-45d1-a865-9b3b3deecf5f&quot; call-id=&quot;31&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;0&quot; op-digest=&quot;f2317cad3d54cec5d7d7aa7d0bf35cf8&quot;/&gt;</div>

<div>            &lt;lrm_rsc_op id=&quot;nfs_monitor_5000&quot; operation_key=&quot;nfs_monitor_5000&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;build_active_RAs&quot; crm_feature_set=&quot;3.0.6&quot; transition-key=&quot;2:8:0:1b4a3ae4-b013-45d1-a865-9b3b3deecf5f&quot; transition-magic=&quot;0:0;2:8:0:1b4a3ae4-b013-45d1-a865-9b3b3deecf5f&quot; call-id=&quot;40&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;5000&quot; op-digest=&quot;4811cef7f7f94e3a35a70be7916cb2fd&quot;/&gt;</div>

<div>          &lt;/lrm_resource&gt;</div><div>          &lt;lrm_resource id=&quot;nfs_ip&quot; type=&quot;IPaddr2&quot; class=&quot;ocf&quot; provider=&quot;heartbeat&quot;&gt;</div><div>            &lt;lrm_rsc_op id=&quot;nfs_ip_last_failure_0&quot; operation_key=&quot;nfs_ip_monitor_0&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;build_active_RAs&quot; crm_feature_set=&quot;3.0.6&quot; transition-key=&quot;5:4:7:1b4a3ae4-b013-45d1-a865-9b3b3deecf5f&quot; transition-magic=&quot;0:0;5:4:7:1b4a3ae4-b013-45d1-a865-9b3b3deecf5f&quot; call-id=&quot;30&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;0&quot; op-digest=&quot;570cd25774b1ead32cb1840813adbe21&quot;/&gt;</div>

<div>            &lt;lrm_rsc_op id=&quot;nfs_ip_monitor_10000&quot; operation_key=&quot;nfs_ip_monitor_10000&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;build_active_RAs&quot; crm_feature_set=&quot;3.0.6&quot; transition-key=&quot;8:5:0:1b4a3ae4-b013-45d1-a865-9b3b3deecf5f&quot; transition-magic=&quot;0:0;8:5:0:1b4a3ae4-b013-45d1-a865-9b3b3deecf5f&quot; call-id=&quot;33&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;10000&quot; op-digest=&quot;bc929bfa78c3086ebd199cf0110b87bf&quot;/&gt;</div>

<div>          &lt;/lrm_resource&gt;</div><div>          &lt;lrm_resource id=&quot;nfs_fs&quot; type=&quot;Filesystem&quot; class=&quot;ocf&quot; provider=&quot;heartbeat&quot;&gt;</div><div>            &lt;lrm_rsc_op id=&quot;nfs_fs_last_failure_0&quot; operation_key=&quot;nfs_fs_monitor_0&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;build_active_RAs&quot; crm_feature_set=&quot;3.0.6&quot; transition-key=&quot;4:4:7:1b4a3ae4-b013-45d1-a865-9b3b3deecf5f&quot; transition-magic=&quot;0:0;4:4:7:1b4a3ae4-b013-45d1-a865-9b3b3deecf5f&quot; call-id=&quot;29&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;0&quot; op-digest=&quot;c0a40c0015f71e8b20b5359e12f25eb5&quot;/&gt;</div>

<div>          &lt;/lrm_resource&gt;</div><div>        &lt;/lrm_resources&gt;</div><div>      &lt;/lrm&gt;</div><div>    &lt;/node_state&gt;</div><div>    &lt;node_state id=&quot;test-vm-2&quot; uname=&quot;test-vm-2&quot; ha=&quot;active&quot; in_ccm=&quot;true&quot; crmd=&quot;online&quot; join=&quot;member&quot; crm-debug-origin=&quot;do_update_resource&quot; expected=&quot;member&quot; shutdown=&quot;0&quot;&gt;</div>

<div>      &lt;lrm id=&quot;test-vm-2&quot;&gt;</div><div>        &lt;lrm_resources&gt;</div><div>          &lt;lrm_resource id=&quot;nfs&quot; type=&quot;nfs-kernel-server&quot; class=&quot;lsb&quot;&gt;</div><div>            &lt;lrm_rsc_op id=&quot;nfs_last_0&quot; operation_key=&quot;nfs_monitor_0&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.6&quot; transition-key=&quot;10:14:7:1b4a3ae4-b013-45d1-a865-9b3b3deecf5f&quot; transition-magic=&quot;0:7;10:14:7:1b4a3ae4-b013-45d1-a865-9b3b3deecf5f&quot; call-id=&quot;4&quot; rc-code=&quot;7&quot; op-status=&quot;0&quot; interval=&quot;0&quot; last-run=&quot;1380832563&quot; last-rc-change=&quot;1380832563&quot; exec-time=&quot;210&quot; queue-time=&quot;0&quot; op-digest=&quot;f2317cad3d54cec5d7d7aa7d0bf35cf8&quot;/&gt;</div>

<div>          &lt;/lrm_resource&gt;</div><div>          &lt;lrm_resource id=&quot;nfs_ip&quot; type=&quot;IPaddr2&quot; class=&quot;ocf&quot; provider=&quot;heartbeat&quot;&gt;</div><div>            &lt;lrm_rsc_op id=&quot;nfs_ip_last_0&quot; operation_key=&quot;nfs_ip_monitor_0&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.6&quot; transition-key=&quot;9:14:7:1b4a3ae4-b013-45d1-a865-9b3b3deecf5f&quot; transition-magic=&quot;0:7;9:14:7:1b4a3ae4-b013-45d1-a865-9b3b3deecf5f&quot; call-id=&quot;3&quot; rc-code=&quot;7&quot; op-status=&quot;0&quot; interval=&quot;0&quot; last-run=&quot;1380832563&quot; last-rc-change=&quot;1380832563&quot; exec-time=&quot;490&quot; queue-time=&quot;0&quot; op-digest=&quot;570cd25774b1ead32cb1840813adbe21&quot;/&gt;</div>

<div>          &lt;/lrm_resource&gt;</div><div>          &lt;lrm_resource id=&quot;nfs_fs&quot; type=&quot;Filesystem&quot; class=&quot;ocf&quot; provider=&quot;heartbeat&quot;&gt;</div><div>            &lt;lrm_rsc_op id=&quot;nfs_fs_last_0&quot; operation_key=&quot;nfs_fs_monitor_0&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.6&quot; transition-key=&quot;8:14:7:1b4a3ae4-b013-45d1-a865-9b3b3deecf5f&quot; transition-magic=&quot;0:7;8:14:7:1b4a3ae4-b013-45d1-a865-9b3b3deecf5f&quot; call-id=&quot;2&quot; rc-code=&quot;7&quot; op-status=&quot;0&quot; interval=&quot;0&quot; last-run=&quot;1380832563&quot; last-rc-change=&quot;1380832563&quot; exec-time=&quot;690&quot; queue-time=&quot;0&quot; op-digest=&quot;c0a40c0015f71e8b20b5359e12f25eb5&quot;/&gt;</div>

<div>          &lt;/lrm_resource&gt;</div><div>          &lt;lrm_resource id=&quot;drbd_r0:1&quot; type=&quot;drbd&quot; class=&quot;ocf&quot; provider=&quot;heartbeat&quot;&gt;</div><div>            &lt;lrm_rsc_op id=&quot;drbd_r0:1_last_0&quot; operation_key=&quot;drbd_r0:1_start_0&quot; operation=&quot;start&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.6&quot; transition-key=&quot;26:14:0:1b4a3ae4-b013-45d1-a865-9b3b3deecf5f&quot; transition-magic=&quot;0:0;26:14:0:1b4a3ae4-b013-45d1-a865-9b3b3deecf5f&quot; call-id=&quot;6&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;0&quot; last-run=&quot;1380832564&quot; last-rc-change=&quot;1380832564&quot; exec-time=&quot;840&quot; queue-time=&quot;0&quot; op-digest=&quot;c0e018b73fdf522b6cdd355e125af15e&quot;/&gt;</div>

<div>            &lt;lrm_rsc_op id=&quot;drbd_r0:1_monitor_60000&quot; operation_key=&quot;drbd_r0:1_monitor_60000&quot; operation=&quot;monitor&quot; crm-debug-origin=&quot;do_update_resource&quot; crm_feature_set=&quot;3.0.6&quot; transition-key=&quot;25:15:0:1b4a3ae4-b013-45d1-a865-9b3b3deecf5f&quot; transition-magic=&quot;0:0;25:15:0:1b4a3ae4-b013-45d1-a865-9b3b3deecf5f&quot; call-id=&quot;8&quot; rc-code=&quot;0&quot; op-status=&quot;0&quot; interval=&quot;60000&quot; last-rc-change=&quot;1380832565&quot; exec-time=&quot;310&quot; queue-time=&quot;10&quot; op-digest=&quot;6f5adcd7f1211cdfc17850827b8582c5&quot;/&gt;</div>

<div>          &lt;/lrm_resource&gt;</div><div>        &lt;/lrm_resources&gt;</div><div>      &lt;/lrm&gt;</div><div>      &lt;transient_attributes id=&quot;test-vm-2&quot;&gt;</div><div>        &lt;instance_attributes id=&quot;status-test-vm-2&quot;&gt;</div>

<div>          &lt;nvpair id=&quot;status-test-vm-2-probe_complete&quot; name=&quot;probe_complete&quot; value=&quot;true&quot;/&gt;</div><div>          &lt;nvpair id=&quot;status-test-vm-2-master-drbd_r0.1&quot; name=&quot;master-drbd_r0:1&quot; value=&quot;75&quot;/&gt;</div>

<div>        &lt;/instance_attributes&gt;</div><div>      &lt;/transient_attributes&gt;</div><div>    &lt;/node_state&gt;</div><div>  &lt;/status&gt;</div><div>&lt;/cib&gt;</div></div></div><div class="gmail_extra"><br><br>
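For what it's worth, my hand translation of the constraints section into crm shell syntax is roughly:

    colocation drbd-nfs-ha inf: ms-drbd_r0:Master nfs_resources
    order drbd-before-nfs inf: ms-drbd_r0:promote nfs_resources:start

(That's just my reading of the XML above; "crm configure show" on the cluster is the authoritative version.)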

<div class="gmail_quote">On Thu, Oct 3, 2013 at 5:06 PM, Andreas Kurz <span dir="ltr">&lt;<a href="mailto:andreas@hastexo.com" target="_blank">andreas@hastexo.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">

<div class="HOEnZb"><div class="h5">On 2013-10-03 22:12, David Parker wrote:<br>
&gt; Thanks, Andrew.  The goal was to use either Pacemaker and Corosync 1.x<br>
&gt; from the Debain packages, or use both compiled from source.  So, with<br>
&gt; the compiled version, I was hoping to avoid CMAN.  However, it seems the<br>
&gt; packaged version of Pacemaker doesn&#39;t support CMAN anyway, so it&#39;s moot.<br>
&gt;<br>
&gt; I rebuilt my VMs from scratch, re-installed Pacemaker and Corosync from<br>
&gt; the Debian packages, but I&#39;m still having an odd problem.  Here is the<br>
&gt; config portion of my CIB:<br>
&gt;<br>
&gt;     &lt;crm_config&gt;<br>
&gt;       &lt;cluster_property_set id=&quot;cib-bootstrap-options&quot;&gt;<br>
&gt;         &lt;nvpair id=&quot;cib-bootstrap-options-dc-version&quot; name=&quot;dc-version&quot;<br>
&gt; value=&quot;1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff&quot;/&gt;<br>
&gt;         &lt;nvpair id=&quot;cib-bootstrap-options-cluster-infrastructure&quot;<br>
&gt; name=&quot;cluster-infrastructure&quot; value=&quot;openais&quot;/&gt;<br>
&gt;         &lt;nvpair id=&quot;cib-bootstrap-options-expected-quorum-votes&quot;<br>
&gt; name=&quot;expected-quorum-votes&quot; value=&quot;2&quot;/&gt;<br>
&gt;         &lt;nvpair id=&quot;cib-bootstrap-options-stonith-enabled&quot;<br>
&gt; name=&quot;stonith-enabled&quot; value=&quot;false&quot;/&gt;<br>
&gt;         &lt;nvpair id=&quot;cib-bootstrap-options-no-quorum-policy&quot;<br>
&gt; name=&quot;no-quorum-policy&quot; value=&quot;ignore&quot;/&gt;<br>
&gt;       &lt;/cluster_property_set&gt;<br>
&gt;     &lt;/crm_config&gt;<br>
&gt;<br>
&gt; I set no-quorum-policy=ignore based on the documentation example for a<br>
&gt; 2-node cluster.  But when Pacemaker starts up on the first node, the<br>
&gt; DRBD resource is in slave mode and none of the other resources are<br>
&gt; started (they depend on DRBD being master), and I see these lines in the<br>
&gt; log:<br>
&gt;<br>
&gt; Oct 03 15:29:18 test-vm-1 pengine: [3742]: notice: unpack_config: On<br>
&gt; loss of CCM Quorum: Ignore<br>
&gt; Oct 03 15:29:18 test-vm-1 pengine: [3742]: notice: LogActions: Start<br>
&gt; nfs_fs   (test-vm-1 - blocked)<br>
&gt; Oct 03 15:29:18 test-vm-1 pengine: [3742]: notice: LogActions: Start<br>
&gt; nfs_ip   (test-vm-1 - blocked)<br>
&gt; Oct 03 15:29:18 test-vm-1 pengine: [3742]: notice: LogActions: Start<br>
&gt; nfs      (test-vm-1 - blocked)<br>
&gt; Oct 03 15:29:18 test-vm-1 pengine: [3742]: notice: LogActions: Start<br>
&gt; drbd_r0:0        (test-vm-1)<br>
&gt;<br>
&gt; I&#39;m assuming the NFS resources show &quot;blocked&quot; because the resource they<br>
&gt; depend on is not in the correct state.<br>
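>
> In case it's useful, this is how I've been poking at the stuck state: a
> one-shot status with fail counts, plus the policy engine's allocation
> scores.  These are just the commands I've been using, so adjust to taste:
>
> # crm_mon -1f
> # crm_simulate -sL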
>
> Even when the second node (test-vm-2) comes online, the state of these
> resources does not change.  I can shut down and re-start Pacemaker over
> and over again on test-vm-2, but nothing changes.  However... and this
> is where it gets weird... if I shut down Pacemaker on test-vm-1, then
> all of the resources immediately fail over to test-vm-2 and start
> correctly.  And I see these lines in the log:
>
> Oct 03 15:44:26 test-vm-1 pengine: [5305]: notice: unpack_config: On loss of CCM Quorum: Ignore
> Oct 03 15:44:26 test-vm-1 pengine: [5305]: notice: stage6: Scheduling Node test-vm-1 for shutdown
> Oct 03 15:44:26 test-vm-1 pengine: [5305]: notice: LogActions: Start nfs_fs   (test-vm-2)
> Oct 03 15:44:26 test-vm-1 pengine: [5305]: notice: LogActions: Start nfs_ip   (test-vm-2)
> Oct 03 15:44:26 test-vm-1 pengine: [5305]: notice: LogActions: Start nfs      (test-vm-2)
> Oct 03 15:44:26 test-vm-1 pengine: [5305]: notice: LogActions: Stop drbd_r0:0        (test-vm-1)
> Oct 03 15:44:26 test-vm-1 pengine: [5305]: notice: LogActions: Promote drbd_r0:1        (Slave -> Master test-vm-2)
>
> After that, I can generally move the resources back and forth, and even
> fail them over by hard-failing a node, without any problems.  The real
> problem is that this isn't consistent, though.  Every once in a while,
> I'll hard-fail a node and the other one will go into this "stuck" state
> where Pacemaker knows it lost a node, but DRBD will stay in slave mode
> and the other resources will never start.  It seems to happen quite
> randomly.  Then, even if I restart Pacemaker on both nodes, or reboot
> them altogether, I run into the startup issue mentioned previously.
>
> Any ideas?

Yes, share your complete resource configuration ;-)

Regards,
Andreas

>
>     Thanks,
>     Dave
>
>
>
> On Wed, Oct 2, 2013 at 1:01 AM, Andrew Beekhof <andrew@beekhof.net> wrote:
>
>
>     On 02/10/2013, at 5:24 AM, David Parker <dparker@utica.edu> wrote:
>
>     > Thanks, I did a little Googling and found the git repository for pcs.
>
>     pcs won't help you rebuild pacemaker with cman support (or corosync 2.x support) turned on though.
>
>
>     > Is there any way to make a two-node cluster work with the stock Debian packages, though?  It seems odd that this would be impossible.
>
>     it really depends how the Debian maintainers built pacemaker.
>     by the sounds of it, it only supports the pacemaker plugin mode for corosync 1.x
>
>     >
>     >
>     > On Tue, Oct 1, 2013 at 3:16 PM, Larry Brigman <larry.brigman@gmail.com> wrote:
>     > pcs is another package you will need to install.
>     >
>     > On Oct 1, 2013 9:04 AM, "David Parker" <dparker@utica.edu> wrote:
>     > Hello,
>     >
>     > Sorry for the delay in my reply.  I've been doing a lot of experimentation, but so far I've had no luck.
>     >
>     > Thanks for the suggestion, but it seems I'm not able to use CMAN.  I'm running Debian Wheezy with Corosync and Pacemaker installed via apt-get.  When I installed CMAN and set up a cluster.conf file, Pacemaker refused to start and said that CMAN was not supported.  When CMAN is not installed, Pacemaker starts up fine, but I see these lines in the log:
>     >
>     > Sep 30 23:36:29 test-vm-1 crmd: [6941]: ERROR: init_quorum_connection: The Corosync quorum API is not supported in this build
>     > Sep 30 23:36:29 test-vm-1 pacemakerd: [6932]: ERROR: pcmk_child_exit: Child process crmd exited (pid=6941, rc=100)
>     > Sep 30 23:36:29 test-vm-1 pacemakerd: [6932]: WARN: pcmk_child_exit: Pacemaker child process crmd no longer wishes to be respawned. Shutting ourselves down.
>     >
>     > So, then I checked to see which plugins are supported:
>     >
>     > # pacemakerd -F
>     > Pacemaker 1.1.7 (Build: ee0730e13d124c3d58f00016c3376a1de5323cff)
>     >  Supporting:  generated-manpages agent-manpages ncurses heartbeat corosync-plugin snmp libesmtp
>     >
>     > Am I correct in believing that this Pacemaker package has been compiled without support for any quorum API?  If so, does anyone know if there is a Debian package which has the correct support?
>     >
>     > I also tried compiling LibQB, Corosync and Pacemaker from source via git, following the instructions documented here:
>     >
>     > http://clusterlabs.org/wiki/SourceInstall
>     >
>     > I was hopeful that this would work, because as I understand it, Corosync 2.x no longer uses CMAN.  Everything compiled and started fine, but the compiled version of Pacemaker did not include either the 'crm' or 'pcs' commands.  Do I need to install something else in order to get one of these?
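>     >
>     > (From what I can tell, the crm shell has been split out into its own "crmsh" project and is packaged separately from Pacemaker these days, so something like "apt-get install crmsh" may be all that's missing.  I haven't verified which crmsh version pairs with this Pacemaker, though.)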
>     >
>     > Any and all help is greatly appreciated!
>     >
>     >     Thanks,
>     >     Dave
>     >
>     >
>     > On Wed, Sep 25, 2013 at 6:08 AM, David Lang <david@lang.hm> wrote:
>     > the cluster is trying to reach a quorum (the majority of the nodes talking to each other) and that is never going to happen with only one node. so you have to disable this.
>     >
>     > try putting
>     > <cman two_node="1" expected_votes="1" transport="udpu"/>
>     > in your cluster.conf
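>     >
>     > (that's for the cman stack. if you end up on corosync 2.x instead, the rough equivalent lives in corosync.conf rather than cluster.conf, something like:
>     >
>     > quorum {
>     >         provider: corosync_votequorum
>     >         two_node: 1
>     > }
>     >
>     > but check the votequorum man page for your build.)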
>     >
>     > David Lang
>     >
>     >  On Tue, 24 Sep 2013, David Parker wrote:
>     >
>     > Date: Tue, 24 Sep 2013 11:48:59 -0400
>     > From: David Parker <dparker@utica.edu>
>     > Reply-To: The Pacemaker cluster resource manager <pacemaker@oss.clusterlabs.org>
>     > To: The Pacemaker cluster resource manager <pacemaker@oss.clusterlabs.org>
>     > Subject: Re: [Pacemaker] Corosync won't recover when a node fails
>     >
>     >
>     > I forgot to mention, OS is Debian Wheezy 64-bit, Corosync and Pacemaker
>     > installed from packages via apt-get, and there are no local firewall rules
>     > in place:
>     >
>     > # iptables -L
>     > Chain INPUT (policy ACCEPT)
>     > target     prot opt source               destination
>     >
>     > Chain FORWARD (policy ACCEPT)
>     > target     prot opt source               destination
>     >
>     > Chain OUTPUT (policy ACCEPT)
>     > target     prot opt source               destination
>     >
>     >
>     > On Tue, Sep 24, 2013 at 11:41 AM, David Parker <dparker@utica.edu> wrote:
>     >
>     > Hello,
>     >
>     > I have a 2-node cluster using Corosync and Pacemaker, where the nodes are
>     > actually two VirtualBox VMs on the same physical machine.  I have some
>     > resources set up in Pacemaker, and everything works fine if I move them in
>     > a controlled way with the "crm_resource -r <resource> --move --node <node>"
>     > command.
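>     > (If I understand the docs right, --move works by injecting a location constraint, so after each test I clear it again with something like:
>     >
>     > # crm_resource -r <resource> --un-move
>     >
>     > before trying the next failover.)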
>     >
>     > However, when I hard-fail one of the nodes via the "poweroff" command in
>     > VirtualBox, which "pulls the plug" on the VM, the resources do not move,
>     > and I see the following output in the log on the remaining node:
>     >
>     > Sep 24 11:20:30 corosync [TOTEM ] The token was lost in the OPERATIONAL state.
>     > Sep 24 11:20:30 corosync [TOTEM ] A processor failed, forming new configuration.
>     > Sep 24 11:20:30 corosync [TOTEM ] entering GATHER state from 2.
>     > Sep 24 11:20:31 test-vm-2 lrmd: [2503]: debug: rsc:drbd_r0:0 monitor[31] (pid 8495)
>     > drbd[8495]:     2013/09/24_11:20:31 WARNING: This resource agent is deprecated and may be removed in a future release. See the man page for details. To suppress this warning, set the "ignore_deprecation" resource parameter to true.
>     > drbd[8495]:     2013/09/24_11:20:31 WARNING: This resource agent is deprecated and may be removed in a future release. See the man page for details. To suppress this warning, set the "ignore_deprecation" resource parameter to true.
>     > drbd[8495]:     2013/09/24_11:20:31 DEBUG: r0: Calling drbdadm -c /etc/drbd.conf role r0
>     > drbd[8495]:     2013/09/24_11:20:31 DEBUG: r0: Exit code 0
>     > drbd[8495]:     2013/09/24_11:20:31 DEBUG: r0: Command output: Secondary/Primary
>     > drbd[8495]:     2013/09/24_11:20:31 DEBUG: r0: Calling drbdadm -c /etc/drbd.conf cstate r0
>     > drbd[8495]:     2013/09/24_11:20:31 DEBUG: r0: Exit code 0
>     > drbd[8495]:     2013/09/24_11:20:31 DEBUG: r0: Command output: Connected
>     > drbd[8495]:     2013/09/24_11:20:31 DEBUG: r0 status: Secondary/Primary Secondary Primary Connected
>     > Sep 24 11:20:31 test-vm-2 lrmd: [2503]: info: operation monitor[31] on drbd_r0:0 for client 2506: pid 8495 exited with return code 0
>     > Sep 24 11:20:32 corosync [TOTEM ] entering GATHER state from 0.
>     > Sep 24 11:20:34 corosync [TOTEM ] The consensus timeout expired.
>     > Sep 24 11:20:34 corosync [TOTEM ] entering GATHER state from 3.
>     > Sep 24 11:20:36 corosync [TOTEM ] The consensus timeout expired.
>     > Sep 24 11:20:36 corosync [TOTEM ] entering GATHER state from 3.
>     > Sep 24 11:20:38 corosync [TOTEM ] The consensus timeout expired.
>     > Sep 24 11:20:38 corosync [TOTEM ] entering GATHER state from 3.
>     > Sep 24 11:20:40 corosync [TOTEM ] The consensus timeout expired.
>     > Sep 24 11:20:40 corosync [TOTEM ] entering GATHER state from 3.
>     > Sep 24 11:20:40 corosync [TOTEM ] Totem is unable to form a cluster because of an operating system or network fault. The most common cause of this message is that the local firewall is configured improperly.
>     > Sep 24 11:20:43 corosync [TOTEM ] The consensus timeout expired.
>     > Sep 24 11:20:43 corosync [TOTEM ] entering GATHER state from 3.
>     > Sep 24 11:20:43 corosync [TOTEM ] Totem is unable to form a cluster because of an operating system or network fault. The most common cause of this message is that the local firewall is configured improperly.
>     > Sep 24 11:20:45 corosync [TOTEM ] The consensus timeout expired.
>     > Sep 24 11:20:45 corosync [TOTEM ] entering GATHER state from 3.
>     > Sep 24 11:20:45 corosync [TOTEM ] Totem is unable to form a cluster because of an operating system or network fault. The most common cause of this message is that the local firewall is configured improperly.
>     > Sep 24 11:20:47 corosync [TOTEM ] The consensus timeout expired.
>     >
>     > Those last 3 messages just repeat over and over, the cluster never
>     > recovers, and the resources never move.  "crm_mon" reports that the
>     > resources are still running on the dead node, and shows no indication that
>     > anything has gone wrong.
>     >
>     > Does anyone know what the issue could be?  My expectation was that the
>     > remaining node would become the sole member of the cluster, take over the
>     > resources, and everything would keep running.
>     >
>     > For reference, my corosync.conf file is below:
>     >
>     > compatibility: whitetank
>     >
>     > totem {
>     >         version: 2
>     >         secauth: off
>     >         interface {
>     >                 member {
>     >                         memberaddr: 192.168.25.201
>     >                 }
>     >                 member {
>     >                         memberaddr: 192.168.25.202
>     >                 }
>     >                 ringnumber: 0
>     >                 bindnetaddr: 192.168.25.0
>     >                 mcastport: 5405
>     >         }
>     >         transport: udpu
>     > }
>     >
>     > logging {
>     >         fileline: off
>     >         to_logfile: yes
>     >         to_syslog: yes
>     >         debug: on
>     >         logfile: /var/log/cluster/corosync.log
>     >         timestamp: on
>     >         logger_subsys {
>     >                 subsys: AMF
>     >                 debug: on
>     >         }
>     > }
>     >
>     >
>     > Thanks!
>     > Dave
>     >
>     > --
>     > Dave Parker
>     > Systems Administrator
>     > Utica College
>     > Integrated Information Technology Services
>     > (315) 792-3229
>     > Registered Linux User #408177
>     >
>
<span class="HOEnZb"><font color="#888888"><br>
<br>
--<br>
Need help with Pacemaker?<br>
<a href="http://www.hastexo.com/now" target="_blank">http://www.hastexo.com/now</a><br>
<br>
<br>
_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


--
Dave Parker
Systems Administrator
Utica College
Integrated Information Technology Services
(315) 792-3229
Registered Linux User #408177