[Pacemaker] Colocation constraint to External Managed Resource
Lars Ellenberg
lars.ellenberg at linbit.com
Fri Oct 11 23:53:01 UTC 2013
On Thu, Oct 10, 2013 at 06:20:54PM +0200, Robert H. wrote:
> Hello,
>
> On 10.10.2013 16:18, Andreas Kurz wrote:
>
> >You configured a monitor operation for this unmanaged resource?
>
> Yes, and some parts work as expected; however, some behaviour is strange.
>
> Config (relevant part only):
> ----------------------------
>
> primitive mysql-percona lsb:mysql \
> op start enabled="false" interval="0" \
> op stop enabled="false" interval="0" \
> op monitor enabled="true" timeout="20s" interval="10s" \
You probably also want to monitor even while pacemaker thinks this
resource is supposed to be stopped:
op monitor interval=11s timeout=20s role=Stopped
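For illustration, the primitive with both monitors might then look
roughly like this (my untested sketch; note that the two recurring
monitors must use distinct intervals, hence 11s rather than 10s):

    primitive mysql-percona lsb:mysql \
        op start enabled="false" interval="0" \
        op stop enabled="false" interval="0" \
        op monitor enabled="true" timeout="20s" interval="10s" \
        op monitor enabled="true" timeout="20s" interval="11s" role="Stopped" \
        meta migration-threshold="2" failure-timeout="30s" is-managed="false"

With the role="Stopped" monitor in place, pacemaker periodically probes
nodes where it believes the resource to be stopped, so an instance
started behind its back is noticed without a manual reprobe.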
> meta migration-threshold="2" failure-timeout="30s"
> is-managed="false"
> clone CLONE-percona mysql-percona \
> meta clone-max="2" clone-node-max="1" is-managed="false"
> location clone-percona-placement CLONE-percona \
> rule $id="clone-percona-placement-rule" -inf: #uname ne NODE1 and #uname ne NODE2
> colocation APP-dev2-private-percona-withip inf: IP CLONE-percona
>
>
> Test:
> ----
>
> I start with both Percona XtraDB machines running:
>
> IP-dev2-privatevip1 (ocf::heartbeat:IPaddr2): Started NODE2
> Clone Set: CLONE-percona [mysql-percona] (unmanaged)
> mysql-percona:0 (lsb:mysql): Started NODE1 (unmanaged)
> mysql-percona:1 (lsb:mysql): Started NODE2 (unmanaged)
>
> shell# /etc/init.d/mysql stop    (on NODE2)
>
> ... Pacemaker reacts as expected ...
>
> IP-dev2-privatevip1 (ocf::heartbeat:IPaddr2): Started NODE1
> Clone Set: CLONE-percona [mysql-percona] (unmanaged)
> mysql-percona:0 (lsb:mysql): Started NODE1 (unmanaged)
> mysql-percona:1 (lsb:mysql): Started NODE2 (unmanaged) FAILED
>
> ... then I wait ...
> ... after some time (~1 min), the resource is shown as running ...
>
> IP-dev2-privatevip1 (ocf::heartbeat:IPaddr2): Started NODE1
> Clone Set: CLONE-percona [mysql-percona] (unmanaged)
> mysql-percona:0 (lsb:mysql): Started NODE1 (unmanaged)
> mysql-percona:1 (lsb:mysql): Started NODE2 (unmanaged)
>
> But it is definitely not running:
>
> shell# /etc/init.d/mysql status
> MySQL (Percona XtraDB Cluster) is not running    [FAILED]
>
> When I run probe "crm resource reprobe" it switches to:
>
> IP-dev2-privatevip1 (ocf::heartbeat:IPaddr2): Started NODE1
> Clone Set: CLONE-percona [mysql-percona] (unmanaged)
> mysql-percona:0 (lsb:mysql): Started NODE1 (unmanaged)
> Stopped: [ mysql-percona:1 ]
>
> Then when I start it again:
>
> /etc/init.d/mysql start    (on NODE2)
>
> It stays this way:
>
> IP-dev2-privatevip1 (ocf::heartbeat:IPaddr2): Started NODE1
> Clone Set: CLONE-percona [mysql-percona] (unmanaged)
> mysql-percona:0 (lsb:mysql): Started NODE1 (unmanaged)
> Stopped: [ mysql-percona:1 ]
>
> Only a manual "reprobe" helps:
>
> IP-dev2-privatevip1 (ocf::heartbeat:IPaddr2): Started NODE1
> Clone Set: CLONE-percona [mysql-percona] (unmanaged)
> mysql-percona:0 (lsb:mysql): Started NODE1 (unmanaged)
> mysql-percona:1 (lsb:mysql): Started NODE2 (unmanaged)
>
> Same thing happens when I reboot NODE2 (or the other way around).
>
> ---
>
> I would expect crm_mon to ALWAYS reflect the local state; this
> looks like a bug to me.
crm_mon reflects what is in the cib. If no one re-populates the cib
with the current state of the world, what it shows will be stale.
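For example, a re-population can be triggered by hand (crmsh syntax;
the lower-level equivalent is crm_resource --reprobe, and option names
may vary between versions):

    crm resource reprobe

The role="Stopped" monitor suggested above makes the cluster re-check
periodically on its own, so the cib does not go stale in the first place.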
--
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com
DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.