Hi

We have built a cluster on top of the SLES 11 SP1 stack, which manages various Xen VMs.

In the development phase we used some test VM resources, which have since been removed from the resource list. However, I still see some remnants of these old resources in the log files and would like to clean this up.

e.g. I see:

Dec 22 12:27:18 node2 pengine: [6262]: info: get_failcount: hvm1 has failed 1 times on node2
Dec 22 12:27:18 node2 pengine: [6262]: notice: common_apply_stickiness: hvm1 can fail 999999 more times on node2 before being forced off
Dec 22 12:27:18 node2 attrd: [6261]: info: attrd_trigger_update: Sending flush op to all hosts for: fail-count-hvm1 (1)
Dec 22 12:27:18 node2 attrd: [6261]: info: attrd_trigger_update: Sending flush op to all hosts for: last-failure-hvm1 (1322579680)

hvm1 was a VM from that test phase.
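
For the fail-count and last-failure attributes I assume something along these lines from the crm shell would clear them (untested here, and I'm not sure the commands will still accept a resource that is no longer configured):

crm resource failcount hvm1 show node2
crm resource failcount hvm1 delete node2

But that presumably only touches the transient attributes, not the lrm_resource history shown below.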

If I do a dump of the CIB, I find this section:

  <status>
    <node_state uname="node2" ha="active" in_ccm="true" crmd="online" join="member" expected="member" shutdown="0" id="node2" crm-debug-origin="do_state_transition">
      <lrm id="node2">
        <lrm_resources>
...
          <lrm_resource id="hvm1" type="Xen" class="ocf" provider="heartbeat">
            <lrm_rsc_op id="hvm1_monitor_0" operation="monitor" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.2" transition-key="20:11:7:1fd9e9b1-610e-4768-abd5-35ea3ce45c4d" transition-magic="0:7;20:11:7:1fd9e9b1-610e-4768-abd5-35ea3ce45c4d" call-id="27" rc-code="7" op-status="0" interval="0" last-run="1322130825" last-rc-change="1322130825" exec-time="550" queue-time="0" op-digest="71594dc818f53dfe034bb5e84c6d80fb"/>
            <lrm_rsc_op id="hvm1_stop_0" operation="stop" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.2" transition-key="61:511:0:abda911e-05ed-4e11-8e25-ab03a1bfd7b7" transition-magic="0:0;61:511:0:abda911e-05ed-4e11-8e25-ab03a1bfd7b7" call-id="56" rc-code="0" op-status="0" interval="0" last-run="1322580820" last-rc-change="1322580820" exec-time="164320" queue-time="0" op-digest="71594dc818f53dfe034bb5e84c6d80fb"/>
            <lrm_rsc_op id="hvm1_start_0" operation="start" crm-debug-origin="build_active_RAs" crm_feature_set="3.0.2" transition-key="59:16:0:1fd9e9b1-610e-4768-abd5-35ea3ce45c4d" transition-magic="0:0;59:16:0:1fd9e9b1-610e-4768-abd5-35ea3ce45c4d" call-id="30" rc-code="0" op-status="0" interval="0" last-run="1322131559" last-rc-change="1322131559" exec-time="470" queue-time="0" op-digest="71594dc818f53dfe034bb5e84c6d80fb"/>
          </lrm_resource>
...
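
One thing I considered, but have not dared to run against the live cluster, is deleting just that status fragment with a targeted cibadmin call, roughly:

cibadmin --delete --obj_type status --xml-text '<lrm_resource id="hvm1"/>'

I'm guessing at the exact invocation here, and I don't know whether touching the status section directly like that is safe, or whether the DC would simply rebuild it.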

I tried

cibadmin -Q > tmp.xml
vi tmp.xml
cibadmin --replace --xml-file tmp.xml

but this does not do the job, I guess because the problematic bits are in the status section.
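
The only other idea that comes to mind is a per-resource cleanup, along the lines of

crm resource cleanup hvm1 node2

or the lower-level

crm_resource --cleanup --resource hvm1

but since hvm1 no longer exists in the configuration section, I'm not sure the cleanup commands will even accept the name.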

Any clue how to clean this up properly, preferably without any cluster downtime?

Thanks,
Kevin

Version info:

node2 # rpm -qa | egrep "heartbeat|pacemaker|cluster|openais"
libopenais3-1.1.2-0.5.19
pacemaker-mgmt-2.0.0-0.2.19
openais-1.1.2-0.5.19
cluster-network-kmp-xen-1.4_2.6.32.12_0.6-2.1.73
libpacemaker3-1.1.2-0.2.1
drbd-heartbeat-8.3.7-0.4.15
cluster-glue-1.0.5-0.5.1
drbd-pacemaker-8.3.7-0.4.15
cluster-network-kmp-default-1.4_2.6.32.12_0.6-2.1.73
pacemaker-1.1.2-0.2.1
yast2-cluster-2.15.0-8.6.19
pacemaker-mgmt-client-2.0.0-0.2.19