[Pacemaker] [Patch] Changing a resource ID can affect how colocation is applied.
Andrew Beekhof
andrew at beekhof.net
Tue Aug 7 11:24:06 UTC 2012
The problem with this approach is that the ordering of the constraints in
the CIB is not preserved between the nodes.
I will follow up further on the bugzilla.
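For context, here is a minimal sketch of the kind of CIB colocation constraints involved. This is a hypothetical reconstruction: the actual constraint section is not included in this thread, and the resource IDs below simply mirror the first test pattern quoted underneath.

```xml
<!-- Hypothetical sketch only: the real constraints from pe-input-423
     are not shown in this thread. IDs mirror the first pattern below. -->
<constraints>
  <!-- Keep vipCheck with the Master instance of msPostgresql -->
  <rsc_colocation id="col-vipCheck-master" rsc="vipCheck"
                  with-rsc="msPostgresql" with-rsc-role="Master"
                  score="INFINITY"/>
  <!-- Keep the master-group with the Master instance as well -->
  <rsc_colocation id="col-group-master" rsc="master-group"
                  with-rsc="msPostgresql" with-rsc-role="Master"
                  score="INFINITY"/>
</constraints>
```

If the order in which such constraints are applied depends on how they happen to sort (for example by ID) rather than on their document order, then renaming a resource can change the placement result, which matches the symptom reported below.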
On Wed, Aug 1, 2012 at 11:35 AM, <renayama19661014 at ybb.ne.jp> wrote:
> Hi All,
>
> When we changed only the ID of a resource, we confirmed that a different colocation decision was made.
> Because of this problem, the placement of resources can change based solely on the resource ID name.
>
>
> The first pattern) The expected state transition is produced. (pe-input-423)
>
> [root at drbd2 trac2114]# ptest -x pe-input-423 -VVV
> ptest[13220]: 2012/08/01_10:29:16 notice: unpack_config: On loss of CCM Quorum: Ignore
> ptest[13220]: 2012/08/01_10:29:16 WARN: unpack_nodes: Blind faith: not fencing unseen nodes
> ptest[13220]: 2012/08/01_10:29:16 WARN: unpack_rsc_op: Processing failed op postgresql:0_monitor_9000 on 02-sl6: not running (7)
> ptest[13220]: 2012/08/01_10:29:16 notice: native_print: vipCheck (ocf::pacemaker:Dummy): Started 02-sl6
> ptest[13220]: 2012/08/01_10:29:16 notice: native_print: vipCheckSupport (ocf::pacemaker:Dummy): Started 02-sl6
> ptest[13220]: 2012/08/01_10:29:16 notice: group_print: Resource Group: master-group
> ptest[13220]: 2012/08/01_10:29:16 notice: native_print: vip-master (ocf::heartbeat:IPaddr2): Started 02-sl6
> ptest[13220]: 2012/08/01_10:29:16 notice: native_print: vip-rep (ocf::heartbeat:IPaddr2): Started 02-sl6
> ptest[13220]: 2012/08/01_10:29:16 notice: clone_print: Master/Slave Set: msPostgresql
> ptest[13220]: 2012/08/01_10:29:16 notice: native_print: postgresql:0 (ocf::heartbeat:pgsql): Slave 02-sl6 FAILED
> ptest[13220]: 2012/08/01_10:29:16 notice: short_print: Slaves: [ 03-sl6 ]
> ptest[13220]: 2012/08/01_10:29:16 notice: clone_print: Clone Set: clnDiskd1
> ptest[13220]: 2012/08/01_10:29:16 notice: short_print: Started: [ 02-sl6 03-sl6 ]
> ptest[13220]: 2012/08/01_10:29:16 notice: clone_print: Clone Set: clnDiskd2
> ptest[13220]: 2012/08/01_10:29:16 notice: short_print: Started: [ 02-sl6 03-sl6 ]
> ptest[13220]: 2012/08/01_10:29:16 notice: clone_print: Clone Set: clnPingd
> ptest[13220]: 2012/08/01_10:29:16 notice: short_print: Started: [ 02-sl6 03-sl6 ]
> ptest[13220]: 2012/08/01_10:29:16 WARN: common_apply_stickiness: Forcing msPostgresql away from 02-sl6 after 1 failures (max=1)
> ptest[13220]: 2012/08/01_10:29:16 WARN: common_apply_stickiness: Forcing msPostgresql away from 02-sl6 after 1 failures (max=1)
> ptest[13220]: 2012/08/01_10:29:16 notice: RecurringOp: Start recurring monitor (10s) for vip-master on 03-sl6
> ptest[13220]: 2012/08/01_10:29:16 notice: RecurringOp: Start recurring monitor (10s) for vip-rep on 03-sl6
> ptest[13220]: 2012/08/01_10:29:16 notice: RecurringOp: Start recurring monitor (9s) for postgresql:1 on 03-sl6
> ptest[13220]: 2012/08/01_10:29:16 notice: RecurringOp: Start recurring monitor (9s) for postgresql:1 on 03-sl6
> ptest[13220]: 2012/08/01_10:29:16 notice: LogActions: Move resource vipCheck (Started 02-sl6 -> 03-sl6)
> ptest[13220]: 2012/08/01_10:29:16 notice: LogActions: Move resource vipCheckSupport (Started 02-sl6 -> 03-sl6)
> ptest[13220]: 2012/08/01_10:29:16 notice: LogActions: Move resource vip-master (Started 02-sl6 -> 03-sl6)
> ptest[13220]: 2012/08/01_10:29:16 notice: LogActions: Move resource vip-rep (Started 02-sl6 -> 03-sl6)
> ptest[13220]: 2012/08/01_10:29:16 notice: LogActions: Stop resource postgresql:0 (02-sl6)
> ptest[13220]: 2012/08/01_10:29:16 notice: LogActions: Promote postgresql:1 (Slave -> Master 03-sl6)
> ptest[13220]: 2012/08/01_10:29:16 notice: LogActions: Leave resource prmDiskd1:0 (Started 02-sl6)
> ptest[13220]: 2012/08/01_10:29:16 notice: LogActions: Leave resource prmDiskd1:1 (Started 03-sl6)
> ptest[13220]: 2012/08/01_10:29:16 notice: LogActions: Leave resource prmDiskd2:0 (Started 02-sl6)
> ptest[13220]: 2012/08/01_10:29:16 notice: LogActions: Leave resource prmDiskd2:1 (Started 03-sl6)
> ptest[13220]: 2012/08/01_10:29:16 notice: LogActions: Leave resource pingCheck:0 (Started 02-sl6)
> ptest[13220]: 2012/08/01_10:29:16 notice: LogActions: Leave resource pingCheck:1 (Started 03-sl6)
>
>
> The second pattern) A different state transition is produced solely because the resource IDs differ. (pe-input-396)
> * I renamed the resource vipCheck to gtmproxy1.
> * I renamed the resource vipCheckSupport to gtmproxy1Support.
>
> [root at drbd2 trac2114]# ptest -x pe-input-396 -VVV
> ptest[13221]: 2012/08/01_10:29:36 notice: unpack_config: On loss of CCM Quorum: Ignore
> ptest[13221]: 2012/08/01_10:29:36 WARN: unpack_nodes: Blind faith: not fencing unseen nodes
> ptest[13221]: 2012/08/01_10:29:36 WARN: unpack_rsc_op: Processing failed op datanode1:0_monitor_9000 on 02-sl6: not running (7)
> ptest[13221]: 2012/08/01_10:29:36 notice: native_print: gtmproxy1 (ocf::pacemaker:Dummy): Started 02-sl6
> ptest[13221]: 2012/08/01_10:29:36 notice: native_print: gtmproxy1Support (ocf::pacemaker:Dummy): Started 02-sl6
> ptest[13221]: 2012/08/01_10:29:36 notice: group_print: Resource Group: master-group1
> ptest[13221]: 2012/08/01_10:29:36 notice: native_print: vip-master1 (ocf::heartbeat:IPaddr2): Started 02-sl6
> ptest[13221]: 2012/08/01_10:29:36 notice: native_print: vip-rep1 (ocf::heartbeat:IPaddr2): Started 02-sl6
> ptest[13221]: 2012/08/01_10:29:36 notice: clone_print: Master/Slave Set: msDatanode1
> ptest[13221]: 2012/08/01_10:29:36 notice: native_print: datanode1:0 (ocf::heartbeat:pgsql): Slave 02-sl6 FAILED
> ptest[13221]: 2012/08/01_10:29:36 notice: short_print: Slaves: [ 03-sl6 ]
> ptest[13221]: 2012/08/01_10:29:36 notice: clone_print: Clone Set: clnDiskd1
> ptest[13221]: 2012/08/01_10:29:36 notice: short_print: Started: [ 02-sl6 03-sl6 ]
> ptest[13221]: 2012/08/01_10:29:36 notice: clone_print: Clone Set: clnDiskd2
> ptest[13221]: 2012/08/01_10:29:36 notice: short_print: Started: [ 02-sl6 03-sl6 ]
> ptest[13221]: 2012/08/01_10:29:36 notice: clone_print: Clone Set: clnPingd
> ptest[13221]: 2012/08/01_10:29:36 notice: short_print: Started: [ 02-sl6 03-sl6 ]
> ptest[13221]: 2012/08/01_10:29:36 WARN: common_apply_stickiness: Forcing msDatanode1 away from 02-sl6 after 1 failures (max=1)
> ptest[13221]: 2012/08/01_10:29:36 WARN: common_apply_stickiness: Forcing msDatanode1 away from 02-sl6 after 1 failures (max=1)
> ptest[13221]: 2012/08/01_10:29:36 notice: RecurringOp: Start recurring monitor (9s) for datanode1:1 on 03-sl6
> ptest[13221]: 2012/08/01_10:29:36 notice: RecurringOp: Start recurring monitor (9s) for datanode1:1 on 03-sl6
> ptest[13221]: 2012/08/01_10:29:36 notice: LogActions: Stop resource gtmproxy1 (02-sl6)
> ptest[13221]: 2012/08/01_10:29:36 notice: LogActions: Stop resource gtmproxy1Support (Started 02-sl6)
> ptest[13221]: 2012/08/01_10:29:36 notice: LogActions: Stop resource vip-master1 (02-sl6)
> ptest[13221]: 2012/08/01_10:29:36 notice: LogActions: Stop resource vip-rep1 (02-sl6)
> ptest[13221]: 2012/08/01_10:29:36 notice: LogActions: Stop resource datanode1:0 (02-sl6)
> ptest[13221]: 2012/08/01_10:29:36 notice: LogActions: Promote datanode1:1 (Slave -> Master 03-sl6)
> ptest[13221]: 2012/08/01_10:29:36 notice: LogActions: Leave resource prmDiskd1:0 (Started 02-sl6)
> ptest[13221]: 2012/08/01_10:29:36 notice: LogActions: Leave resource prmDiskd1:1 (Started 03-sl6)
> ptest[13221]: 2012/08/01_10:29:36 notice: LogActions: Leave resource prmDiskd2:0 (Started 02-sl6)
> ptest[13221]: 2012/08/01_10:29:36 notice: LogActions: Leave resource prmDiskd2:1 (Started 03-sl6)
> ptest[13221]: 2012/08/01_10:29:36 notice: LogActions: Leave resource pingCheck:0 (Started 02-sl6)
> ptest[13221]: 2012/08/01_10:29:36 notice: LogActions: Leave resource pingCheck:1 (Started 03-sl6)
>
>
> I made a patch (trac2114.patch) that solves this problem (against ClusterLabs-pacemaker-1.0-Pacemaker-1.0.12-19-g489cf4e).
>
> Please review the contents of my patch and apply it to the repository,
> or solve the problem with a better correction.
>
> * I have not confirmed the behavior on Pacemaker 1.1.
> * A similar correction may be needed for Pacemaker 1.1.
>
> Best Regards,
> Hideo Yamauchi.
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>