[Pacemaker] Re: Constraining clones per node
Michael Schwartzkopff
misch at multinet.de
Mon Nov 30 18:56:22 UTC 2009
On Monday, 30 November 2009 at 14:40:19, Jens.Braeuer at rohde-schwarz.com wrote:
> > > My environment consists of multiple servers (~40), each with one or
> > > more cpu-cores. I have two application-types called A and B (services
> > > like e.g. apache), that each use one cpu core. A is mission critical,
> > > B is optional. So what I want to express is that there should be 20
> > > A's and the remaining cpus may be used by B's. When a node executing
> > > A's fails, it is perfectly ok to shut down B's to make cpu cores
> > > available for A's to be started.
> > >
> > > Any idea how to do this?
> >
> > In Pacemaker, resources have a meta attribute "priority". If there are
> > not enough nodes available to run all resources, the resources with
> > higher priority are run.
> >
> > So make a clone that starts A 20 times, with resource A having a
> > priority of 20. Make a clone of B with B having a priority of 10.
>
> Your suggestion is to do something like this (xml totally untested and
> for sure with syntax-errors.. :-)
>
> <clone id="A" clone-max="20" priority="20">
> ...
> </clone>
> <clone id="B" clone-max="60" priority="10">
> ...
> </clone>
>
> Right?
> But how do I constrain the sum of A's and B's running on one node to the
> number of cpu-cores available?
>
> best regards,
> Jens Braeuer
Hi,

the capacity utilization feature would be exactly what you need. For more
information see:

http://www.gossamer-threads.com/lists/linuxha/pacemaker/59053?search_string=capacity;#59053

The only problem is that this feature is not yet implemented in the
sources, at least as far as I know.
@beekhof: Am I right? I grepped Pacemaker-Devel-e0bbe832b7ba for
utilization but did not find anything.
For now you would have to write external scripts that

- check the number of resources running on one node and write that result
  to an attribute (e.g. res-count="3" or res-count="5"), and
- add a location constraint so that the resources run only on a node with
  res-count less than or equal to 4, e.g.:

location locA resA \
        rule $id="locA-rule" -inf: res-count gt 4

and the same for resource B.
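For reference, the equivalent location constraint in CIB XML might look
roughly like this (an untested sketch; the ids are illustrative):

<rsc_location id="locA" rsc="resA">
  <rule id="locA-rule" score="-INFINITY">
    <expression id="locA-expr" attribute="res-count"
                operation="gt" value="4" type="number"/>
  </rule>
</rsc_location>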
Some other thoughts:

- The clones of resources A and B should be globally unique. Otherwise two
  instances of resA cannot run on the same node.
- Check whether openais/corosync can cope with 40 nodes in the cluster.
- The dynamic update of the res-count attribute could be done externally
  or during any monitor operation.
- The res-count attribute is updated with the attrd_updater command.
- Clone of resource A: clone-max=16, clone-node-max=4.
- Perhaps my solution from above is not so good, since all resources would
  move away from the node if you have 5 resources on one node.
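A minimal sketch of such an update script, runnable from cron or a monitor
hook (the counting method and grep pattern are my assumptions, not from
this thread; adapt them to your resource names):

#!/bin/sh
# Count the resource instances crm_mon reports as started on this node.
COUNT=$(crm_mon -1 | grep -c "Started $(uname -n)")
# Publish the count as a transient node attribute named res-count.
attrd_updater -n res-count -U "$COUNT"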
All these are rough estimates and wild guesses. You would need a detailed
review of these ideas, or you can wait until the capacity utilization is
in the code.
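Putting the priority suggestion from above into crm shell syntax might
look like this (an untested sketch; resA and resB stand for your primitive
definitions):

clone cloneA resA \
        meta clone-max="20" clone-node-max="4" globally-unique="true" priority="20"
clone cloneB resB \
        meta clone-max="60" clone-node-max="4" globally-unique="true" priority="10"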
Greetings,
--
Dr. Michael Schwartzkopff
MultiNET Services GmbH
Address: Bretonischer Ring 7; 85630 Grasbrunn; Germany
Tel: +49 - 89 - 45 69 11 0
Fax: +49 - 89 - 45 69 11 21
mob: +49 - 174 - 343 28 75
mail: misch at multinet.de
web: www.multinet.de
Registered office: 85630 Grasbrunn
Commercial register: Amtsgericht München HRB 114375
Managing directors: Günter Jurgeneit, Hubert Martens
---
PGP Fingerprint: F919 3919 FF12 ED5A 2801 DEA6 AA77 57A4 EDD8 979B
Skype: misch42