[Pacemaker] Restarting resources even when they remain on the same node

Andrew Beekhof andrew at beekhof.net
Wed Jun 30 06:24:50 EDT 2010


On Tue, Jun 29, 2010 at 11:32 AM, marc genou <marcgenou at gmail.com> wrote:
> yep ok:
> node $id="492edbea-b2e9-40a5-9208-bacb1bbad124" openvz1
> node $id="60ed0c20-c1ae-4b72-8781-10b4ff99f75b" openvz2
> primitive ClusterIP ocf:heartbeat:IPaddr2 \
>         params ip="10.10.12.250" cidr_netmask="16" \
>         op monitor interval="30s"
> primitive OWP lsb:owp \
>         op monitor interval="1min"
> primitive VZ lsb:vz \
>         op monitor interval="1min"
> primitive VZData ocf:linbit:drbd \
>         params drbd_resource="vzpart" \
>         op monitor interval="60s"
> primitive VZFS ocf:heartbeat:Filesystem \
>         params device="/dev/drbd/by-res/vzpart" directory="/vz" fstype="ext3"
> ms VZDataClone VZData \
>         meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
> location prefer-openvz1 VZDataClone inf: openvz1
> colocation OWP-with-ip inf: OWP VZ VZFS VZDataClone:Master ClusterIP
> order VZFS-after-VZData inf: ClusterIP VZDataClone:promote VZFS:start VZ OWP

If VZDataClone was being demoted/promoted, then I'd expect VZFS, VZ,
and OWP to all be restarted.
Perhaps log a bug and include an hb_report covering the time period of
this scenario.
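
For reference, a minimal sketch of generating such a report with hb_report
(from cluster-glue); the time window and destination path below are only
placeholders and should be adjusted to the period in which the restarts
were actually observed:

    # Collect logs, the CIB and PE inputs for the given period from all nodes.
    # -f/-t give the start/end time; the last argument names the report
    # (a tarball is created from it).
    hb_report -f "2010-06-29 11:00" -t "2010-06-29 11:30" /tmp/restart-report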

> property $id="cib-bootstrap-options" \
>         dc-version="1.0.9-89bd754939df5150de7cd76835f98fe90851b677" \
>         cluster-infrastructure="Heartbeat" \
>         stonith-enabled="false" \
>         no-quorum-policy="ignore" \
>         default-action-timeout="60s"
>
> Last updated: Tue Jun 29 11:06:19 2010
> Stack: Heartbeat
> Current DC: openvz1 (492edbea-b2e9-40a5-9208-bacb1bbad124) - partition with quorum
> Version: 1.0.9-89bd754939df5150de7cd76835f98fe90851b677
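
The status above looks like trimmed crm_mon output. To also see which
operations the cluster actually executed on each node, and therefore whether
a demote/promote of VZDataClone really took place, crm_mon can produce a
one-shot snapshot that includes the operation history. A sketch, assuming
the crm_mon shipped with 1.0:

    # One-shot cluster status including per-node operation history,
    # showing which start/stop/promote/demote actions were run.
    crm_mon -1 -o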
>
>
> On Tue, Jun 29, 2010 at 11:30 AM, Andrew Beekhof <andrew at beekhof.net> wrote:
>>
>> On Tue, Jun 29, 2010 at 11:07 AM, marc genou <marcgenou at gmail.com> wrote:
>> > Hi again,
>> > I am testing an active/passive configuration and I noticed that when I
>> > power off the active node, the resources move to the other node; that is
>> > expected. I also added a location constraint so that when I switch the
>> > original node back on, all the resources move back to it; that is also
>> > expected. But if the resources are running on the active node and I power
>> > off the passive one, all the resources are restarted on the active node
>> > even though they stay there.
>> > Is this normal behaviour? How can we avoid it?
>>
>> The answer to both depends on your Pacemaker version and configuration,
>> neither of which was included ;-)
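
For completeness, a sketch of how that information can be gathered with the
standard tools; the exact output depends on the installed version:

    # Dump the current CRM configuration in crm shell syntax.
    crm configure show
    # One-shot cluster status; the header includes the Pacemaker version.
    crm_mon -1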
>>
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs:
> http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker
>