[Pacemaker] Colocation constraint to External Managed Resource
Robert H.
pacemaker at elconas.de
Mon Oct 14 10:07:40 UTC 2013
Hi,
one more note:
When I clean up the resource, the monitor operation is triggered and
the result is as expected:
[root@NODE2 ~]# crm_resource --resource mysql-percona --cleanup --node NODE2
Cleaning up mysql-percona:0 on NODE2
Waiting for 1 replies from the CRMd. OK
Clone Set: CLONE-percona [mysql-percona] (unmanaged)
mysql-percona:0 (lsb:mysql): Started NODE1 (unmanaged)
mysql-percona:1 (lsb:mysql): Started NODE2 (unmanaged)
I assumed that failure-timeout="xxx" would cause the cleanup to be
done automatically. Am I wrong?
Can I tell Pacemaker to perform a "cleanup" automatically from time to
time (I don't want to use cron...)?
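If I understand the behaviour correctly, failure-timeout expiry is only evaluated when the cluster rechecks its state, so one knob worth looking at (just a sketch; the 60s value is an illustration, not a recommendation) is the cluster-recheck-interval property:

```shell
# Sketch: lower the cluster-wide recheck interval so that expired
# failure-timeouts are noticed sooner (the default is 15 minutes).
# The 60s value here is only an example.
crm configure property cluster-recheck-interval="60s"
```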
Regards,
Robert
On 14.10.2013 at 11:30, Robert H. wrote:
>> You probably also want to monitor even if pacemaker thinks this is
>> supposed to be stopped.
>>
>> op monitor interval=11s timeout=20s role=Stopped
>>
>
> I added this:
>
> primitive mysql-percona lsb:mysql \
> op start enabled="false" interval="0" \
> op stop enabled="false" interval="0" \
> op monitor enabled="true" timeout="20s" interval="10s" \
> op monitor enabled="true" timeout="20s" interval="11s" role="Stopped" \
> meta migration-threshold="2" failure-timeout="30s" is-managed="false"
>
> However after a reboot of NODE2, the resource stays at:
>
> Clone Set: CLONE-percona [mysql-percona] (unmanaged)
> mysql-percona:0 (lsb:mysql): Started NODE1 (unmanaged)
> Stopped: [ mysql-percona:1 ]
>
> But mysql is running:
>
> [root@NODE2 ~]# /etc/init.d/mysql status
> MySQL (Percona XtraDB Cluster) running (2619) [ OK ]
> [root@NODE2 ~]# echo $?
> 0
>
> .. hmm, being confused :/
>
>
>> crm_mon reflects what is in the cib. If no-one re-populates the cib
>> with the current state of the world, what it shows will be stale.
>
> How can I force this ?
>
> Regards,
> Robert
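For reference, a manual re-probe can be triggered with the cleanup command already shown at the top of this thread (resource and node names are this thread's example); the re-detected state is then written back to the CIB, so crm_mon stops showing the stale entry:

```shell
# Re-probe the resource's actual state on one node; the result replaces
# the stale status entry in the CIB.
crm_resource --resource mysql-percona --cleanup --node NODE2
```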
--
Robert