[Pacemaker] Late attribute update by attrd causes a resource to start on the standby node

Andrew Beekhof andrew at beekhof.net
Wed Jan 19 08:22:35 UTC 2011


Catching up on old email...
I see you've filed a bug for this one, I'll follow up there.

On Thu, Dec 2, 2010 at 2:16 AM,  <renayama19661014 at ybb.ne.jp> wrote:
> Hi Andrew,
>
>> > Step 1) Make the 192.168.40.3 address unreachable, so that pings to it fail.
>>
>> Not sure I understand this, can you rephrase?
>
> Sorry....
>
> For pingd, we monitor the following two addresses:
>
>  * 192.168.4.2
>  * 192.168.4.3
>
> The problem occurs when one of these addresses cannot be reached.
> When the cluster can reach both addresses, the resource reliably starts on the srv01 node.
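
For illustration only, here is a minimal sketch in crm shell syntax of what two
pingd clones monitoring these two addresses might look like. The actual
trac1383.crm is attached to the bug report and is not shown here; apart from the
clone names clnPingd1/clnPingd2 taken from the status output below, the resource
names, attribute names, and parameter values are assumptions:

    # Hypothetical example, not the attached trac1383.crm
    primitive prmPingd1 ocf:pacemaker:pingd \
            params name="default_ping_set1" host_list="192.168.4.2" multiplier="100" \
            op monitor interval="10s"
    primitive prmPingd2 ocf:pacemaker:pingd \
            params name="default_ping_set2" host_list="192.168.4.3" multiplier="100" \
            op monitor interval="10s"
    clone clnPingd1 prmPingd1
    clone clnPingd2 prmPingd2

Each clone has pingd write a node attribute (via attrd) reflecting whether its
address is reachable from that node; location rules can then key resource
placement off those attributes.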
>
> Best Regards,
> Hideo Yamauchi.
>
>
> --- Andrew Beekhof <andrew at beekhof.net> wrote:
>
>> On Mon, Nov 29, 2010 at 3:18 AM,  <renayama19661014 at ybb.ne.jp> wrote:
>> > Hi,
>> >
>> > We built a two-node cluster.
>> > The configuration is slightly complicated in that it includes two pingd
>> > clones.
>> >
>> > We reproduced the phenomenon with the following procedure.
>> >
>> > Step 1) Make the 192.168.40.3 address unreachable, so that pings to it fail.
>>
>> Not sure I understand this, can you rephrase?
>>
>> > Step 2) Start both nodes and load trac1383.crm.
>> >
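
As a usage note (assuming the crm shell is being used; other tools such as
cibadmin would also work), a configuration file like trac1383.crm is typically
applied to the running cluster with:

    # merge the configuration in the file into the live CIB
    crm configure load update trac1383.crm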
>> > ============
>> > Last updated: Mon Nov 29 10:35:09 2010
>> > Stack: Heartbeat
>> > Current DC: srv02 (27456b6d-bb8e-445b-9baf-725a6b9417c6) - partition with quorum
>> > Version: 1.0.10-b2e39d318fda501e2fcf223c2d039b721f3679a9
>> > 2 Nodes configured, unknown expected votes
>> > 12 Resources configured.
>> > ============
>> >
>> > Online: [ srv01 srv02 ]
>> >
>> >  Resource Group: TESTgroup1
>> >      TESTIPaddr         (ocf::heartbeat:IPaddr2):       Started srv02
>> >      TESTDummy01        (ocf::pacemaker:Dummy):         Started srv02
>> >      TESTDummy02        (ocf::pacemaker:Dummy):         Started srv02
>> >      TESTDummy03        (ocf::pacemaker:Dummy):         Started srv02
>> >  Resource Group: groupStonith1
>> >      prmStonithN1-1     (stonith:external/stonith-helper):      Started srv02
>> >      prmStonithN1-2     (stonith:external/ssh):         Started srv02
>> >      prmStonithN1-3     (stonith:meatware):             Started srv02
>> >  Resource Group: groupStonith2
>> >      prmStonithN2-1     (stonith:external/stonith-helper):      Started srv01
>> >      prmStonithN2-2     (stonith:external/ssh):         Started srv01
>> >      prmStonithN2-3     (stonith:meatware):             Started srv01
>> >  Clone Set: clnPingd1
>> >      Started: [ srv01 srv02 ]
>> >  Clone Set: clnPingd2
>> >      Started: [ srv01 srv02 ]
>> >  Clone Set: clnTESTmssd
>> >      Started: [ srv01 srv02 ]
>> >  Clone Set: clnTESTopsc
>> >      Started: [ srv01 srv02 ]
>> >  Clone Set: clncrond
>> >      Started: [ srv01 srv02 ]
>> >  Clone Set: clnportmap
>> >      Started: [ srv01 srv02 ]
>> >  Clone Set: clnsnmpd
>> >      Started: [ srv01 srv02 ]
>> >  Clone Set: clnvsftpd
>> >      Started: [ srv01 srv02 ]
>> >  Clone Set: clnxinetd
>> >      Started: [ srv01 srv02 ]
>> >
>> >
>> > We expected the TESTgroup1 resources to start on the active node (srv01),
>> > but they were started on the standby node (srv02) instead.
>> >
>> > The problem seems to be that the attrd attribute update for the active
>> > node arrives late for some reason.
>> >
>> > Can we avoid this problem through configuration?
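
To make the suspected race concrete, here is a minimal sketch of the kind of
pingd-based location rule that would produce this placement. It is not taken
from the attached trac1383.crm; the rule name, attribute name, and threshold
follow the hypothetical pingd sketch earlier in this message:

    # TESTgroup1 must not run on a node whose connectivity attribute is
    # missing or below the expected value.
    location loc-TESTgroup1-connectivity TESTgroup1 \
            rule -inf: not_defined default_ping_set1 or default_ping_set1 lt 100

With a rule like this, if attrd has not yet written the pingd attribute for
srv01 when the policy engine first places TESTgroup1, the attribute is seen as
not_defined on srv01, the rule forbids srv01, and the group is started on
srv02, even though srv01 is the intended node once its attribute arrives.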
>> >
>> >  * An hb_report archive is attached to Bugzilla:
>> >  * http://developerbugs.linux-foundation.org/show_bug.cgi?id=2528
>> >
>> > Best Regards,
>> > Hideo Yamauchi.
>> >
>
>
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker
>


