[Pacemaker] Resource colocation with a clone
Brice Figureau
brice+ha at daysofwonder.com
Thu Aug 13 11:31:40 UTC 2009
Hi Diego,
On Wed, 2009-08-12 at 18:06 -0400, Diego Julian Remolina wrote:
> > Now that's where things get interesting: I want to relocate the IP
> > address to another node _if_ nginx fails on the node on which the IP
> > runs.
>
> Are you already addressing network connectivity with pingd or
> something else? In your model it would be possible to move the IP
> address to a node with nginx running but no network connectivity.
Yes, I'm using pingd.
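For reference, a typical pingd setup would look something like this (the
host_list, multiplier and constraint name are placeholders here, not my
actual values):

# measure connectivity on every node
primitive pingd ocf:pacemaker:pingd \
        params host_list="172.16.10.1" multiplier="100" \
        op monitor interval="15s"
clone cl_pingd pingd \
        meta globally-unique="false"
# keep the vip off nodes without connectivity
location vip-connected vip \
        rule -inf: not_defined pingd or pingd lte 0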
> Do you need the services running at all times on all nodes with the IP
> changing from one to another for some other reason? Please explain
> this reason. If you only need to run apache and nginx on a node
> connected to the network and capable of serving, then the model will
> be different.
Yes, I want at least the apaches to run on every node, as they serve as
upstreams for at least one of the nginx instances (the one running where
the IP runs).
But I'd prefer to have nginx running everywhere as well, so that there
is as little downtime as possible when failing over the IP.
Also, in the future I might have several IPs running on different nodes
of the cluster. Each IP would also need those nginx instances; see the
sketch below.
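Each extra IP would then just get its own colocation with the nginx
clone (using the clone name from the config below; vip2 and its address
are hypothetical):

primitive vip2 ocf:heartbeat:IPaddr2 \
        params ip="172.16.10.165" nic="eth1" cidr_netmask="255.255.255.0" \
        op monitor interval="10s"
colocation vip2-nginx inf: vip2 www_nginx
# optionally push the vips apart so they prefer different nodes
colocation vips-apart -1000: vip2 vip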
> You have to be careful with colocation in the sense that it sometimes
> will prevent a resource from running in both nodes.
>
> You can see a colocation example for my fileserver with drbd posted in
> a previous message to the list called: "Master/Slave resource cannot
> start" I was missing the cloned resource and had a bad colocation
> rule. After that was pointed out to me, I fixed my configuration which
> is posted in a reply later down the road. That configuration is
> working well for me.
Thanks, I've read your config.
I tried an alternative configuration this morning, with 2 clones instead
of one clone of a group (this is a test config, it has no pingd):
node $id="560dd187-f1d0-408b-b695-8d857d5c5e4d" glusterfs1
node $id="e46f6b38-1a89-45ca-a516-7d84ffc8ecf9" glusterfs2
primitive apache ocf:heartbeat:apache \
        params httpd="/usr/sbin/apache2" \
               configfile="/etc/apache2/apache2.conf" \
               envfiles="/etc/apache2/envvars" \
        op monitor interval="30s"
primitive nginx ocf:heartbeat:Nginx \
        op monitor interval="30s"
primitive vip ocf:heartbeat:IPaddr2 \
        params ip="172.16.10.164" nic="eth1" cidr_netmask="255.255.255.0" \
        op monitor interval="10s"
clone www_apache apache \
        meta globally-unique="false"
clone www_nginx nginx \
        meta globally-unique="false"
colocation vip-nginx inf: vip www_nginx
property $id="cib-bootstrap-options" \
        dc-version="1.0.4-2ec1d189f9c23093bf9239a980534b661baf782d" \
        cluster-infrastructure="Heartbeat" \
        stonith-enabled="false" \
        last-lrm-refresh="1246553424"
rsc_defaults $id="rsc-options" \
        migration-threshold="10"
So I'm colocating the vip with nginx: it can only run on nodes running
nginx. Since every node except those with a failcount of 10 should be
running nginx, I _think_ this solves my issue.
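To verify that, I can watch the failcounts and clean them up after
fixing a problem, e.g. with this test config's resource names:

crm_mon -f                  # show failcounts per node
crm resource cleanup nginx  # reset the failcount once nginx is fixed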
I'm also wondering whether I should lower migration-threshold to 1 and
add a failure-timeout.
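Something along these lines (the 60s value is arbitrary):

rsc_defaults $id="rsc-options" \
        migration-threshold="1" \
        failure-timeout="60s"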
I wasn't sure colocating with a clone would work, because as I see it
(possibly a wrong view), the clone is always running on some nodes of
the cluster. That is, I wasn't sure whether the colocation rule checks
the status of the clone node by node, or considers the clone as a whole.
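If that per-node interpretation is right, an order constraint would
probably be a natural companion, so the vip is only started once the
local nginx instance is up (the constraint name is made up):

order nginx-before-vip inf: www_nginx vip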
Thanks,
--
Brice Figureau
My Blog: http://www.masterzen.fr/