[Pacemaker] Time-based resource stickiness not working cleanly
Velayutham, Prakash
Prakash.Velayutham at cchmc.org
Tue Jun 26 20:33:56 UTC 2012
Hi,
Initially I had the following resource setup.
1. Clone -> Group -> Primitives (ocf:pacemaker:controld, ocf:ocfs2:o2cb)
2. Clone -> Primitives (ocf:heartbeat:Filesystem)
3. Group -> Primitives (ocf:heartbeat:IPaddr2, ocf:heartbeat:mysql)
I had a resource colocation where 3, 2 and 1 would all need to run on the same node.
I had a resource location preference for 3 on node 1.
I had an order constraint of 1 -> 2 -> 3
With that setup, I had the problem of resources getting restarted whenever any node in the cluster rebooted. I added the time-based rule later and saw the same behavior, so the restarts are probably not caused by the time-based rule, as you suggested.
I have since changed my configuration to the following:
1. Clone -> group -> primitives (1 - ocf:pacemaker:controld, 2 - ocf:ocfs2:o2cb, 3 - ocf:heartbeat:Filesystem)
2. Group -> Primitives (ocf:heartbeat:IPaddr2, ocf:heartbeat:mysql)
I have a resource colocation constraint so that 2 runs where primitive 3 of the Clone above is running.
I have a resource location preference for 2 on node 1.
I have an order constraint of primitive 3 of Clone -> 2.
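For reference, the constraint portion would look roughly like this in crm shell syntax (just a sketch; the resource and node names base-clone, mysql-group, and node1 are made up for illustration, and the primitive/group definitions are omitted):

```
# Hypothetical names; primitive and clone definitions not shown.
colocation mysql-with-fs inf: mysql-group base-clone
order fs-before-mysql inf: base-clone mysql-group
location mysql-prefers-node1 mysql-group 100: node1
```

As far as I know, constraints can only reference the clone (or the group it contains) as a whole, not an individual primitive inside it, and setting interleave="true" on the clone is often needed so that dependents on one node are not restarted when clone instances on other nodes stop. Worth verifying against the Pacemaker documentation for your version.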
Now I am unable to start the MySQL resource (2) on node 1 at all, because of what looks like a file-locking problem:
InnoDB: Unable to lock ./ibdata1, error: 38
[previous line repeated 25 more times]
120626 15:01:48 InnoDB: Unable to open the first data file
InnoDB: Error in opening ./ibdata1
120626 15:01:48 InnoDB: Operating system error number 38 in a file operation.
InnoDB: Error number 38 means 'Function not implemented'.
InnoDB: Some operating system error numbers are described at
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/operating-system-error-codes.html
120626 15:01:48 InnoDB: Could not open or create data files.
120626 15:01:48 InnoDB: If you tried to add new data files, and it failed here,
120626 15:01:48 InnoDB: you should now edit innodb_data_file_path in my.cnf back
120626 15:01:48 InnoDB: to what it was, and remove the new ibdata files InnoDB created
120626 15:01:48 InnoDB: in this failed attempt. InnoDB only wrote those files full of
120626 15:01:48 InnoDB: zeros, but did not yet use them in any way. But be careful: do not
120626 15:01:48 InnoDB: remove old data files which contain your precious data!
120626 15:01:48 [ERROR] Plugin 'InnoDB' init function returned error.
120626 15:01:48 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
120626 15:01:48 [ERROR] Unknown/unsupported storage engine: InnoDB
120626 15:01:48 [ERROR] Aborting
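For what it's worth, OS error 38 on Linux is ENOSYS ("Function not implemented"), which can be confirmed quickly (this snippet is just a sanity check, not from the thread):

```python
# Look up what OS error number 38 means on Linux.
import errno
import os

print(errno.errorcode[38])  # ENOSYS
print(os.strerror(38))      # Function not implemented
```

ENOSYS from a lock attempt usually means the filesystem does not support the locking call InnoDB uses (flock/fcntl), rather than that another process holds the lock; I believe older OCFS2 versions lacked flock support, so that may be worth checking for your kernel.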
Any ideas? Can a resource order constraint reference a primitive that is part of a clone resource? Is that even supported?
Thanks,
Prakash
On Jun 26, 2012, at 1:48 PM, Phil Frost wrote:
On 06/26/2012 12:59 PM, Velayutham, Prakash wrote:
Hi,
I have a Corosync (1.3.0-5.6.1) / Pacemaker (1.1.5-5.5.5) cluster where I am using a Time-based rule for resource stickiness (http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/s-rules-cluster-options.html).
Everything works as expected, except that the resources get stopped and restarted on the same node during core hours whenever any other node in the cluster is rebooted. The stickiness works, but I would prefer that the resources not be affected this way. Does anyone know what I have configured wrong?
If you delete your time-based rules, do you still have these undesired restarts? Maybe the restarts are due to something else entirely, like maybe an order constraint on a clone. There was a bug some time ago regarding this, but I've found configurations where it's still a problem. More here:
https://developerbugs.linuxfoundation.org/show_bug.cgi?id=2153