Hello,

I am using a cluster with the following configuration:

[root@MCG1 neha]# crm configure show
node $id="0686a4d1-c9de-4334-8d33-1a9f6f0755dd" ggns2mexsatsdp22
node $id="76246d46-f0e4-4ba8-9179-d60aa7c697c8" ggns2mexsatsdp23
node $id="9d59c9e6-24e0-4684-94ab-c07af7e7a2f0" mcg1 \
        attributes standby="off"
node $id="fb3f06f0-05bf-42ef-a312-c072f589918a" mcg2 \
        attributes standby="off"
primitive ClusterIP ocf:mcg:MCG_VIPaddr_RA \
        params ip="192.168.113.77" cidr_netmask="255.255.255.0" nic="eth0:1" \
        op monitor interval="40" timeout="20"
primitive RM ocf:mcg:RM_RA \
        op monitor interval="60" role="Master" timeout="30" on-fail="restart" \
        op monitor interval="40" role="Slave" timeout="40" on-fail="restart"
primitive Tmgr ocf:mcg:TM_RA \
        op monitor interval="60" role="Master" timeout="30" on-fail="restart" \
        op monitor interval="40" role="Slave" timeout="40" on-fail="restart"
primitive pimd ocf:mcg:PIMD_RA \
        op monitor interval="60" role="Master" timeout="30" on-fail="standby" \
        op monitor interval="40" role="Slave" timeout="40" on-fail="restart"
ms ms_RM RM \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" target-role="Started"
ms ms_Tmgr Tmgr \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" target-role="Started"
ms ms_pimd pimd \
        meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true" target-role="Started"
colocation ip_with_RM inf: ClusterIP ms_RM:Master
colocation ip_with_Tmgr inf: ClusterIP ms_Tmgr:Master
colocation ip_with_pimd inf: ClusterIP ms_pimd:Master
order TM-after-RM inf: ms_RM:promote ms_Tmgr:start
order ip-after-pimd inf: ms_pimd:promote ClusterIP:start
order pimd-after-TM inf: ms_Tmgr:promote ms_pimd:start
property $id="cib-bootstrap-options" \
        dc-version="1.0.11-55a5f5be61c367cbd676c2f0ec4f1c62b38223d7" \
        cluster-infrastructure="Heartbeat" \
        no-quorum-policy="ignore" \
        stonith-enabled="false"
rsc_defaults $id="rsc-options" \
        resource-stickiness="100" \
        migration-threshold="3"
When I execute "crm node standby" on the Active node, the resources are stopped on both the Active and the Standby node.

As per my understanding, this should stop the resources only on the current Active node, and the master/slave resources on the Standby node should be promoted to Master.
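For reference, this is roughly the sequence I run and how I observe the state (assuming mcg1 is the currently Active node; substitute the actual node name):

```shell
# Put the Active node into standby; I expect its resources to stop
# and ms_RM / ms_Tmgr / ms_pimd to be promoted on the peer node.
crm node standby mcg1

# One-shot snapshot of resource state during the transition.
crm_mon -1

# Bring the node back online afterwards.
crm node online mcg1
```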
Please comment.

Thanks and regards,
Neha