[Pacemaker] version compatibility between pcs and pacemaker

K Mehta kiranmehta1981 at gmail.com
Tue May 27 01:45:54 EDT 2014


No specific reason. I need to know which combinations are expected to work.
Using pcs 0.9.26 with the new Pacemaker 1.1.10, and pcs 0.9.90 with the old
Pacemaker, both run into issues with resource deletion.

Below are the commands executed with pcs 0.9.26 and Pacemaker 1.1.10:

[root@vsanqa11 tmp]# rpm -qa | grep pcs; rpm -qa | grep pacemaker; rpm -qa | grep corosync; rpm -qa | grep libqb
pcs-0.9.26-10.el6.noarch
pacemaker-cli-1.1.10-14.el6_5.3.x86_64
pacemaker-libs-1.1.10-14.el6_5.3.x86_64
pacemaker-1.1.10-14.el6_5.3.x86_64
pacemaker-cluster-libs-1.1.10-14.el6_5.3.x86_64
corosynclib-1.4.1-17.el6_5.1.x86_64
corosync-1.4.1-17.el6_5.1.x86_64
libqb-devel-0.16.0-2.el6.x86_64
libqb-0.16.0-2.el6.x86_64


[root@vsanqa11 tmp]# pcs status
Last updated: Mon May 26 22:39:42 2014
Last change: Mon May 26 22:39:01 2014 via cibadmin on vsanqa11
Stack: cman
Current DC: vsanqa11 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
2 Nodes configured
2 Resources configured


Online: [ vsanqa11 vsanqa12 ]

Full list of resources:

 Master/Slave Set: ms-9fc36888-cf2a-417a-907c-db3f5e9b7a8a [vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a]
     Masters: [ vsanqa12 ]
     Slaves: [ vsanqa11 ]



[root@vsanqa11 tmp]# pcs config
Corosync Nodes:

Pacemaker Nodes:
 vsanqa11 vsanqa12

Resources:

Location Constraints:
  Resource: ms-9fc36888-cf2a-417a-907c-db3f5e9b7a8a
    Enabled on: vsanqa11
    Enabled on: vsanqa12
  Resource: vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a
    Enabled on: vsanqa11
    Enabled on: vsanqa12
Ordering Constraints:
Colocation Constraints:

Cluster Properties:
 dc-version: 1.1.10-14.el6_5.3-368c726
 cluster-infrastructure: cman
 last-lrm-refresh: 1401098102
 expected-quorum-votes: 2
 stonith-enabled: false
 no-quorum-policy: ignore



[root@vsanqa11 tmp]# pcs resource show ms-9fc36888-cf2a-417a-907c-db3f5e9b7a8a
Resource: ms-9fc36888-cf2a-417a-907c-db3f5e9b7a8a
  cluster_uuid: 9fc36888-cf2a-417a-907c-db3f5e9b7a8a
  clone-max: 2
  globally-unique: false
  target-role: started
  op monitor interval=31s role=Slave timeout=100s
[root@vsanqa11 tmp]# pcs resource show vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a
Resource: vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a
  cluster_uuid: 9fc36888-cf2a-417a-907c-db3f5e9b7a8a
  op monitor interval=31s role=Slave timeout=100s
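
(For reference, a sketch of how a master/slave pair like this is typically
created with pcs 0.9.x. The agent and option values below are inferred from
the show/config output above, not from the original setup commands, so
treat this as an illustration only:

  pcs resource create vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a \
      ocf:heartbeat:vgc-cm-agent.ocf \
      cluster_uuid=9fc36888-cf2a-417a-907c-db3f5e9b7a8a \
      op monitor interval=31s role=Slave timeout=100s
  pcs resource master ms-9fc36888-cf2a-417a-907c-db3f5e9b7a8a \
      vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a \
      clone-max=2 globally-unique=false target-role=started

The delete below is run against the master resource id first.)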


[root@vsanqa11 tmp]# pcs resource delete ms-9fc36888-cf2a-417a-907c-db3f5e9b7a8a
ERROR: Unable to update cib
Call cib_replace failed (-203): Update does not conform to the configured schema
<cib admin_epoch="0" cib-last-written="Mon May 26 22:40:45 2014"
crm_feature_set="3.0.7" dc-uuid="vsanqa11" epoch="10550" have-quorum="1"
num_updates="1" update-client="cibadmin" update-origin="vsanqa11"
validate-with="pacemaker-1.2">
  <configuration>
    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <nvpair id="cib-bootstrap-options-dc-version" name="dc-version"
value="1.1.10-14.el6_5.3-368c726"/>
        <nvpair id="cib-bootstrap-options-cluster-infrastructure"
name="cluster-infrastructure" value="cman"/>
        <nvpair id="cib-bootstrap-options-last-lrm-refresh"
name="last-lrm-refresh" value="1401098102"/>
        <nvpair id="cib-bootstrap-options-expected-quorum-votes"
name="expected-quorum-votes" value="2"/>
        <nvpair id="cib-bootstrap-options-stonith-enabled"
name="stonith-enabled" value="false"/>
        <nvpair id="cib-bootstrap-options-no-quorum-policy"
name="no-quorum-policy" value="ignore"/>
      </cluster_property_set>
    </crm_config>
    <nodes>
      <node id="vsanqa11" uname="vsanqa11"/>
      <node id="vsanqa12" uname="vsanqa12"/>
    </nodes>
    <resources/>
    <constraints>
      <rsc_location
id="location-vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a-vsanqa11-INFINITY"
node="vsanqa11" rsc="vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a"
score="INFINITY"/>
      <rsc_location
id="location-vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a-vsanqa12-INFINITY"
node="vsanqa12" rsc="vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a"
score="INFINITY"/>
      <rsc_location
id="location-ms-9fc36888-cf2a-417a-907c-db3f5e9b7a8a-vsanqa11-INFINITY"
node="vsanqa11" rsc="ms-9fc36888-cf2a-417a-907c-db3f5e9b7a8a"
score="INFINITY"/>
      <rsc_location
id="location-ms-9fc36888-cf2a-417a-907c-db3f5e9b7a8a-vsanqa12-INFINITY"
node="vsanqa12" rsc="ms-9fc36888-cf2a-417a-907c-db3f5e9b7a8a"
score="INFINITY"/>
    </constraints>
    <rsc_defaults>
      <meta_attributes id="rsc_defaults-options">
        <nvpair id="rsc_defaults-options-resource-stickiness"
name="resource-stickiness" value="100"/>
        <nvpair id="rsc_defaults-options-timeout" name="timeout"
value="100s"/>
      </meta_attributes>
    </rsc_defaults>
  </configuration>
  <status>
    <node_state crm-debug-origin="do_update_resource" crmd="online"
expected="member" id="vsanqa11" in_ccm="true" join="member"
uname="vsanqa11">
      <transient_attributes id="vsanqa11">
        <instance_attributes id="status-vsanqa11">
          <nvpair id="status-vsanqa11-probe_complete" name="probe_complete"
value="true"/>
          <nvpair
id="status-vsanqa11-master-vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a"
name="master-vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a" value="4"/>
        </instance_attributes>
      </transient_attributes>
      <lrm id="vsanqa11">
        <lrm_resources>
          <lrm_resource id="vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a"
type="vgc-cm-agent.ocf" class="ocf" provider="heartbeat">
            <lrm_rsc_op
id="vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a_last_0"
operation_key="vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a_start_0"
operation="start" crm-debug-origin="do_update_resource"
crm_feature_set="3.0.7"
transition-key="7:195:0:79ecdaeb-e637-4fdf-b8e8-ebfc7e2eca39"
transition-magic="0:0;7:195:0:79ecdaeb-e637-4fdf-b8e8-ebfc7e2eca39"
call-id="290" rc-code="0" op-status="0" interval="0" last-run="1401169141"
last-rc-change="1401169141" exec-time="1103" queue-time="0"
op-digest="494c7d757cceae4f35487404ebc12a10"/>
            <lrm_rsc_op
id="vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a_monitor_31000"
operation_key="vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a_monitor_31000"
operation="monitor" crm-debug-origin="do_update_resource"
crm_feature_set="3.0.7"
transition-key="8:195:0:79ecdaeb-e637-4fdf-b8e8-ebfc7e2eca39"
transition-magic="0:0;8:195:0:79ecdaeb-e637-4fdf-b8e8-ebfc7e2eca39"
call-id="293" rc-code="0" op-status="0" interval="31000"
last-rc-change="1401169142" exec-time="76" queue-time="0"
op-digest="f81bd1d7870f6bb69f88312740132d65"/>
          </lrm_resource>
        </lrm_resources>
      </lrm>
    </node_state>
    <node_state crm-debug-origin="do_update_resource" crmd="online"
expected="member" id="vsanqa12" in_ccm="true" join="member"
uname="vsanqa12">
      <transient_attributes id="vsanqa12">
        <instance_attributes id="status-vsanqa12">
          <nvpair id="status-vsanqa12-probe_complete" name="probe_complete"
value="true"/>
          <nvpair
id="status-vsanqa12-master-vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a"
name="master-vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a" value="5"/>
        </instance_attributes>
      </transient_attributes>
      <lrm id="vsanqa12">
        <lrm_resources>
          <lrm_resource id="vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a"
type="vgc-cm-agent.ocf" class="ocf" provider="heartbeat">
            <lrm_rsc_op
id="vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a_last_0"
operation_key="vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a_promote_0"
operation="promote" crm-debug-origin="do_update_resource"
crm_feature_set="3.0.7"
transition-key="11:196:0:79ecdaeb-e637-4fdf-b8e8-ebfc7e2eca39"
transition-magic="0:0;11:196:0:79ecdaeb-e637-4fdf-b8e8-ebfc7e2eca39"
call-id="275" rc-code="0" op-status="0" interval="0" last-run="1401169174"
last-rc-change="1401169174" exec-time="196" queue-time="0"
op-digest="494c7d757cceae4f35487404ebc12a10"/>
          </lrm_resource>
        </lrm_resources>
      </lrm>
    </node_state>
  </status>
</cib>
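
The schema complaint above suggests that the replacement CIB submitted by
the older pcs fails validation against the pacemaker-1.2 schema recorded in
validate-with. A sketch of commands that may help narrow this down or work
around it, using the ids from the dump above (exact option support varies
between pacemaker builds):

  cibadmin --query > /tmp/cib.xml      # dump the live CIB
  grep validate-with /tmp/cib.xml      # confirm the schema version in use
  crm_verify --xml-file /tmp/cib.xml   # re-run schema validation offline

  # Possible workaround: remove the master/slave element directly with
  # cibadmin instead of going through pcs:
  cibadmin --delete --xml-text \
      '<master id="ms-9fc36888-cf2a-417a-907c-db3f5e9b7a8a"/>'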


**** Deleting the base (non-clone) resource works; however, it stays in
ORPHANED state for a few seconds:

[root@vsanqa11 tmp]# pcs resource delete vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a; pcs status
Removing Constraint -
location-ms-9fc36888-cf2a-417a-907c-db3f5e9b7a8a-vsanqa11-INFINITY
Removing Constraint -
location-ms-9fc36888-cf2a-417a-907c-db3f5e9b7a8a-vsanqa12-INFINITY
Removing Constraint -
location-vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a-vsanqa11-INFINITY
Removing Constraint -
location-vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a-vsanqa12-INFINITY
Deleting Resource - vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a
Last updated: Mon May 26 22:41:08 2014
Last change: Mon May 26 22:41:07 2014 via cibadmin on vsanqa11
Stack: cman
Current DC: vsanqa11 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
2 Nodes configured
0 Resources configured


Online: [ vsanqa11 vsanqa12 ]

Full list of resources:

 vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a (ocf::heartbeat:vgc-cm-agent.ocf):       ORPHANED Master [ vsanqa11 vsanqa12 ]

[root@vsanqa11 tmp]# pcs status
Last updated: Mon May 26 22:41:18 2014
Last change: Mon May 26 22:41:07 2014 via cibadmin on vsanqa11
Stack: cman
Current DC: vsanqa11 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
2 Nodes configured
0 Resources configured


Online: [ vsanqa11 vsanqa12 ]

Full list of resources:


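When a deleted resource lingers as ORPHANED like this, a manual cleanup
normally clears it right away instead of waiting for the next cluster
recheck. A sketch, using the resource id from above (pcs resource cleanup
may require a newer pcs; crm_resource ships with pacemaker-cli):

  pcs resource cleanup vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a
  # equivalently, with pacemaker's own tool:
  crm_resource --cleanup --resource vha-9fc36888-cf2a-417a-907c-db3f5e9b7a8a
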

On Tue, May 27, 2014 at 11:01 AM, Andrew Beekhof <andrew at beekhof.net> wrote:

>
> On 27 May 2014, at 2:34 pm, K Mehta <kiranmehta1981 at gmail.com> wrote:
>
> > I have seen that 0.9.26 works with 1.1.8 pacemaker and 0.9.90 works with
> > 1.1.10 pacemaker.
> > However, with 0.9.90 pcs and 1.1.8 pacemaker, pcs resource delete
> > <multistate resource name> fails with the error "CIB update failed
> > because of schema error".
>
> Any specific reason to stay on 1.1.8?
>
> >
> >
> > On Tue, May 27, 2014 at 5:28 AM, Andrew Beekhof <andrew at beekhof.net> wrote:
> >
> > On 26 May 2014, at 5:15 pm, K Mehta <kiranmehta1981 at gmail.com> wrote:
> >
> > > pcs versions 0.9.26 and 0.9.90
> > > pacemaker versions 1.1.8 and 1.1.10
> > >
> > > Which pcs versions are expected to work with which pacemaker versions?
> >
> > I think for the most part, all versions will work together.
> > There may be the odd command that requires a flag exposed by newer
> > versions of pacemaker, but that should be minimal.
> >
> > >
> > >
> > > Regards,
> > >  Kiran
> >
> >
>
>
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>
>