[Pacemaker] [Patch] Changing a resource's ID can change how colocation is applied

renayama19661014 at ybb.ne.jp renayama19661014 at ybb.ne.jp
Sun Aug 5 19:20:50 EDT 2012


Hi All,

I registered this problem with Bugzilla.

 * http://bugs.clusterlabs.org/show_bug.cgi?id=5089

Best Regards,
Hideo Yamauchi.

--- On Wed, 2012/8/1, renayama19661014 at ybb.ne.jp <renayama19661014 at ybb.ne.jp> wrote:

> Hi All,
> 
> We confirmed that, when only the ID of a resource changes, colocation is applied differently.
> Because of this problem, the placement of resources can differ based solely on the resource's ID name.
> 
> 
> The first pattern) The expected state transition is produced. (pe-input-423)
> 
> [root at drbd2 trac2114]# ptest -x pe-input-423 -VVV
> ptest[13220]: 2012/08/01_10:29:16 notice: unpack_config: On loss of CCM Quorum: Ignore
> ptest[13220]: 2012/08/01_10:29:16 WARN: unpack_nodes: Blind faith: not fencing unseen nodes
> ptest[13220]: 2012/08/01_10:29:16 WARN: unpack_rsc_op: Processing failed op postgresql:0_monitor_9000 on 02-sl6: not running (7)
> ptest[13220]: 2012/08/01_10:29:16 notice: native_print: vipCheck        (ocf::pacemaker:Dummy): Started 02-sl6
> ptest[13220]: 2012/08/01_10:29:16 notice: native_print: vipCheckSupport (ocf::pacemaker:Dummy): Started 02-sl6
> ptest[13220]: 2012/08/01_10:29:16 notice: group_print:  Resource Group: master-group
> ptest[13220]: 2012/08/01_10:29:16 notice: native_print:      vip-master (ocf::heartbeat:IPaddr2):       Started 02-sl6
> ptest[13220]: 2012/08/01_10:29:16 notice: native_print:      vip-rep    (ocf::heartbeat:IPaddr2):       Started 02-sl6
> ptest[13220]: 2012/08/01_10:29:16 notice: clone_print:  Master/Slave Set: msPostgresql
> ptest[13220]: 2012/08/01_10:29:16 notice: native_print:      postgresql:0       (ocf::heartbeat:pgsql): Slave 02-sl6 FAILED
> ptest[13220]: 2012/08/01_10:29:16 notice: short_print:      Slaves: [ 03-sl6 ]
> ptest[13220]: 2012/08/01_10:29:16 notice: clone_print:  Clone Set: clnDiskd1
> ptest[13220]: 2012/08/01_10:29:16 notice: short_print:      Started: [ 02-sl6 03-sl6 ]
> ptest[13220]: 2012/08/01_10:29:16 notice: clone_print:  Clone Set: clnDiskd2
> ptest[13220]: 2012/08/01_10:29:16 notice: short_print:      Started: [ 02-sl6 03-sl6 ]
> ptest[13220]: 2012/08/01_10:29:16 notice: clone_print:  Clone Set: clnPingd
> ptest[13220]: 2012/08/01_10:29:16 notice: short_print:      Started: [ 02-sl6 03-sl6 ]
> ptest[13220]: 2012/08/01_10:29:16 WARN: common_apply_stickiness: Forcing msPostgresql away from 02-sl6 after 1 failures (max=1)
> ptest[13220]: 2012/08/01_10:29:16 WARN: common_apply_stickiness: Forcing msPostgresql away from 02-sl6 after 1 failures (max=1)
> ptest[13220]: 2012/08/01_10:29:16 notice: RecurringOp:  Start recurring monitor (10s) for vip-master on 03-sl6
> ptest[13220]: 2012/08/01_10:29:16 notice: RecurringOp:  Start recurring monitor (10s) for vip-rep on 03-sl6
> ptest[13220]: 2012/08/01_10:29:16 notice: RecurringOp:  Start recurring monitor (9s) for postgresql:1 on 03-sl6
> ptest[13220]: 2012/08/01_10:29:16 notice: RecurringOp:  Start recurring monitor (9s) for postgresql:1 on 03-sl6
> ptest[13220]: 2012/08/01_10:29:16 notice: LogActions: Move    resource vipCheck (Started 02-sl6 -> 03-sl6)
> ptest[13220]: 2012/08/01_10:29:16 notice: LogActions: Move    resource vipCheckSupport  (Started 02-sl6 -> 03-sl6)
> ptest[13220]: 2012/08/01_10:29:16 notice: LogActions: Move    resource vip-master       (Started 02-sl6 -> 03-sl6)
> ptest[13220]: 2012/08/01_10:29:16 notice: LogActions: Move    resource vip-rep  (Started 02-sl6 -> 03-sl6)
> ptest[13220]: 2012/08/01_10:29:16 notice: LogActions: Stop    resource postgresql:0     (02-sl6)
> ptest[13220]: 2012/08/01_10:29:16 notice: LogActions: Promote postgresql:1      (Slave -> Master 03-sl6)
> ptest[13220]: 2012/08/01_10:29:16 notice: LogActions: Leave   resource prmDiskd1:0      (Started 02-sl6)
> ptest[13220]: 2012/08/01_10:29:16 notice: LogActions: Leave   resource prmDiskd1:1      (Started 03-sl6)
> ptest[13220]: 2012/08/01_10:29:16 notice: LogActions: Leave   resource prmDiskd2:0      (Started 02-sl6)
> ptest[13220]: 2012/08/01_10:29:16 notice: LogActions: Leave   resource prmDiskd2:1      (Started 03-sl6)
> ptest[13220]: 2012/08/01_10:29:16 notice: LogActions: Leave   resource pingCheck:0      (Started 02-sl6)
> ptest[13220]: 2012/08/01_10:29:16 notice: LogActions: Leave   resource pingCheck:1      (Started 03-sl6)
> 
> 
> The second pattern) A different state transition is produced, although only the resource IDs differ. (pe-input-396)
>  * I renamed the resource vipCheck to gtmproxy1.
>  * I renamed the resource vipCheckSupport to gtmproxy1Support.
> 
> [root at drbd2 trac2114]# ptest -x pe-input-396 -VVV
> ptest[13221]: 2012/08/01_10:29:36 notice: unpack_config: On loss of CCM Quorum: Ignore
> ptest[13221]: 2012/08/01_10:29:36 WARN: unpack_nodes: Blind faith: not fencing unseen nodes
> ptest[13221]: 2012/08/01_10:29:36 WARN: unpack_rsc_op: Processing failed op datanode1:0_monitor_9000 on 02-sl6: not running (7)
> ptest[13221]: 2012/08/01_10:29:36 notice: native_print: gtmproxy1       (ocf::pacemaker:Dummy): Started 02-sl6
> ptest[13221]: 2012/08/01_10:29:36 notice: native_print: gtmproxy1Support        (ocf::pacemaker:Dummy): Started 02-sl6
> ptest[13221]: 2012/08/01_10:29:36 notice: group_print:  Resource Group: master-group1
> ptest[13221]: 2012/08/01_10:29:36 notice: native_print:      vip-master1        (ocf::heartbeat:IPaddr2):       Started 02-sl6
> ptest[13221]: 2012/08/01_10:29:36 notice: native_print:      vip-rep1   (ocf::heartbeat:IPaddr2):       Started 02-sl6
> ptest[13221]: 2012/08/01_10:29:36 notice: clone_print:  Master/Slave Set: msDatanode1
> ptest[13221]: 2012/08/01_10:29:36 notice: native_print:      datanode1:0        (ocf::heartbeat:pgsql): Slave 02-sl6 FAILED
> ptest[13221]: 2012/08/01_10:29:36 notice: short_print:      Slaves: [ 03-sl6 ]
> ptest[13221]: 2012/08/01_10:29:36 notice: clone_print:  Clone Set: clnDiskd1
> ptest[13221]: 2012/08/01_10:29:36 notice: short_print:      Started: [ 02-sl6 03-sl6 ]
> ptest[13221]: 2012/08/01_10:29:36 notice: clone_print:  Clone Set: clnDiskd2
> ptest[13221]: 2012/08/01_10:29:36 notice: short_print:      Started: [ 02-sl6 03-sl6 ]
> ptest[13221]: 2012/08/01_10:29:36 notice: clone_print:  Clone Set: clnPingd
> ptest[13221]: 2012/08/01_10:29:36 notice: short_print:      Started: [ 02-sl6 03-sl6 ]
> ptest[13221]: 2012/08/01_10:29:36 WARN: common_apply_stickiness: Forcing msDatanode1 away from 02-sl6 after 1 failures (max=1)
> ptest[13221]: 2012/08/01_10:29:36 WARN: common_apply_stickiness: Forcing msDatanode1 away from 02-sl6 after 1 failures (max=1)
> ptest[13221]: 2012/08/01_10:29:36 notice: RecurringOp:  Start recurring monitor (9s) for datanode1:1 on 03-sl6
> ptest[13221]: 2012/08/01_10:29:36 notice: RecurringOp:  Start recurring monitor (9s) for datanode1:1 on 03-sl6
> ptest[13221]: 2012/08/01_10:29:36 notice: LogActions: Stop    resource gtmproxy1        (02-sl6)
> ptest[13221]: 2012/08/01_10:29:36 notice: LogActions: Stop    resource gtmproxy1Support (Started 02-sl6)
> ptest[13221]: 2012/08/01_10:29:36 notice: LogActions: Stop    resource vip-master1      (02-sl6)
> ptest[13221]: 2012/08/01_10:29:36 notice: LogActions: Stop    resource vip-rep1 (02-sl6)
> ptest[13221]: 2012/08/01_10:29:36 notice: LogActions: Stop    resource datanode1:0      (02-sl6)
> ptest[13221]: 2012/08/01_10:29:36 notice: LogActions: Promote datanode1:1       (Slave -> Master 03-sl6)
> ptest[13221]: 2012/08/01_10:29:36 notice: LogActions: Leave   resource prmDiskd1:0      (Started 02-sl6)
> ptest[13221]: 2012/08/01_10:29:36 notice: LogActions: Leave   resource prmDiskd1:1      (Started 03-sl6)
> ptest[13221]: 2012/08/01_10:29:36 notice: LogActions: Leave   resource prmDiskd2:0      (Started 02-sl6)
> ptest[13221]: 2012/08/01_10:29:36 notice: LogActions: Leave   resource prmDiskd2:1      (Started 03-sl6)
> ptest[13221]: 2012/08/01_10:29:36 notice: LogActions: Leave   resource pingCheck:0      (Started 02-sl6)
> ptest[13221]: 2012/08/01_10:29:36 notice: LogActions: Leave   resource pingCheck:1      (Started 03-sl6)
> 
> 
> I made a patch (trac2114.patch) that fixes this problem (against ClusterLabs-pacemaker-1.0-Pacemaker-1.0.12-19-g489cf4e).
> 
> Please review the contents of my patch and apply it to the repository,
> or fix the problem with a better correction.
> 
>  * I have not verified this behavior on Pacemaker 1.1.
>  * A similar correction may be needed for Pacemaker 1.1.
> 
> Best Regards,
> Hideo Yamauchi.
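For readers trying to follow the quoted report: the renaming it describes would involve colocation constraints of roughly the following shape. This is an assumed illustration only, not the original CIB (the actual configuration is attached to bug 5089 above); the resource names are taken from the report, but the constraint IDs, roles, and scores are guesses.

```xml
<!-- Assumed illustration only; the real CIB is attached to bug 5089. -->
<constraints>
  <!-- vipCheck (renamed to gtmproxy1 in the second pattern) colocated
       with the master of the master/slave set. Between the two patterns
       only the rsc IDs differ; roles and scores are identical. -->
  <rsc_colocation id="col-vipCheck-master" rsc="vipCheck"
                  with-rsc="msPostgresql" with-rsc-role="Master"
                  score="INFINITY"/>
  <rsc_colocation id="col-support-vipCheck" rsc="vipCheckSupport"
                  with-rsc="vipCheck" score="INFINITY"/>
  <rsc_colocation id="col-group-master" rsc="master-group"
                  with-rsc="msPostgresql" with-rsc-role="Master"
                  score="INFINITY"/>
</constraints>
```

If the policy engine were behaving correctly, renaming vipCheck/vipCheckSupport while leaving everything else unchanged should not alter the computed transition; the two ptest runs quoted above show that it does (Move to 03-sl6 in the first pattern versus Stop in the second).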



