[Pacemaker] Managing Virtual Machine's resource
Xinwei Hu
hxinwei at gmail.com
Wed May 21 16:05:11 UTC 2008
2008/5/21 Andrew Beekhof <beekhof at gmail.com>:
>
> On May 21, 2008, at 12:31 PM, Xinwei Hu wrote:
>
>> 2008/5/21 Andrew Beekhof <beekhof at gmail.com>:
>>>
>>> On May 21, 2008, at 8:10 AM, Nitin wrote:
>>>
>>>> On Wed, 2008-05-21 at 11:13 +0800, Xinwei Hu wrote:
>>>>>
>>>>> We had a deployment of this kind running for more than half a year.
>>>>> Two lessons we've learned so far:
>>>>>
>>>>> 1. Starting a "standalone" heartbeat inside the VM is the best option
>>>>> so far, i.e. "ucast lo 127.0.0.1" in ha.cf.
>>>>> It's the simplest way to get monitoring and restarting inside the VM.
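(To make that concrete: a minimal ha.cf for such a single-node "cluster"
inside the VM might look like the sketch below. The node name and logging
settings are illustrative; the ucast line is the important bit.)

    # /etc/ha.d/ha.cf inside the VM -- a single-node "cluster"
    ucast lo 127.0.0.1   # heartbeat only talks to itself over loopback
    node vm-guest1       # illustrative; must match the VM's uname -n
    crm respawn          # run the CRM so resources get monitored/restarted
    logfacility local0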
>>>>
>>>> Yes, we thought of that, but we want to use it as a last resort.
>>>>
>>>>> 2. Manage VMs as Xen resources in a cluster of all dom0s.
>>>>> However:
>>>>> a. VMs might be migrated between dom0s at any time, so setting a
>>>>> dom0 as a parameter to the STONITH plugin is not ideal in practice.
>>>>> (The same problem applies to VMware ESX Server as well.)
>>>
>>> Why even set a dom0 name?
>>> Just have a clone that only runs on physical machines (use a node
>>> attribute) and have the STONITH plugin examine its own list of nodes
>>> it can fence (the vmware STONITH plugin does something like this).
>>>
>>> It makes no sense for VMs to run a STONITH resource (assuming they're
>>> part of the cluster - I'm not 100% sure what you're proposing), since
>>> the only method of communication with the STONITH device (dom0) is
>>> inherently unreliable (i.e. ssh/telnet).
>>
>> I'm not talking about a mixed cluster of VMs and physical servers. ;)
>
> Ok, I misinterpreted the bit about "a cluster of all dom0s"
>
>>
>> So yes, you still have to set the dom0 name for STONITH.
>> Plus, I didn't manage to run pacemaker on VMware ESX Server. :P
>
> It's technically possible - I do it all the time.
>
>>
>>
>>> Sidenote: Yes, I have been known to advocate using ssh-based STONITH
>>> plugins - but only in cases where there is no alternative.
>>> In this case there is a viable alternative: the dom0 hosts.
>>>
>>> Writing a STONITH plugin is not the hard part of having a mixed
>>> physical+VM cluster... it gets "interesting" when you start thinking
>>> about cases like: what happens when a cluster split occurs and a
>>> minority of the physical nodes is able to shoot the larger set of
>>> physical nodes because it hosts enough VMs to steal quorum? Or: can
>>> VMs shoot physical machines? How does a VM know not to shoot the
>>> machine it's running on?
>>>
>> Yes, and that's why I switched back to the simplest solution :)
>
> In that case, I suggest writing something similar to the vmware stonith
> module... either one specific to Xen, or ideally one that uses libvirt
> so it can be used for all types of VMs.
>
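Something along those lines could be quite thin. A rough sketch of an
external/* style plugin driven by virsh (illustrative only: the getinfo-*
and getconfignames metadata subcommands plus all error handling are
omitted, and hostlist is assumed to arrive via the plugin's environment,
as usual for external plugins):

    #!/bin/sh
    # sketch: external STONITH plugin backed by libvirt
    case "$1" in
    gethosts)                 # list the nodes this device can fence
        echo $hostlist ;;
    reset)                    # hard-reset the guest
        virsh destroy "$2" && virsh start "$2" ;;
    off)
        virsh destroy "$2" ;;
    on)
        virsh start "$2" ;;
    status)                   # is the hypervisor reachable at all?
        virsh version >/dev/null 2>&1 ;;
    *)
        exit 1 ;;
    esac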
>>
>>
>>>
>>>>> b. VMs are a typical example of why we'd better support
>>>>> "inheritance" for RAs. A VM's RA can only tell whether the VM is
>>>>> running, but there are different ways to tell whether the OS inside
>>>>> (Linux, BSD, Windows, NetWare) is healthy.
>>>
>>> I don't understand how attribute inheritance could possibly hide the
>>> differences between operating systems.
>>>
>> I mean function inheritance and overriding.
>
> I've still no idea how this helps - and I don't believe this is what Lon was
> suggesting (though I may be mistaken)
>
Correct me if I misunderstand anything ;)
a.
The resource/vm.sh agent from rgmanager that Lon mentioned is similar
to resource/Xen from heartbeat, and agents/xvm from fence is similar
to stonith/external/xen0. Both are already available to pacemaker.
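For reference, wiring the heartbeat Xen RA into pacemaker looks roughly
like this with the crm shell (the VM name and config path below are
placeholders; see "crm ra info ocf:heartbeat:Xen" for the full
parameter list):

    crm configure primitive vm1 ocf:heartbeat:Xen \
        params xmfile="/etc/xen/vm1" \
        op monitor interval="30s" timeout="60s" \
        meta allow-migrate="true"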
b.
The "inherit" attribute in rgmanager means that an RA instance
"inherits" parameters from its parent resource. It's a little helpful,
because sometimes we do have to double-check that the parameters passed
to different RAs are the same.
But this is not what I was talking about ;)
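(For the archives: that attribute lives in the RA's XML metadata. From
memory it looks roughly like the fragment below, so treat the names as
illustrative rather than exact rgmanager syntax:)

    <parameter name="mountpoint" inherit="mountpoint">
        <!-- value is filled in from the parent fs resource, so it
             doesn't have to be repeated on every nested child -->
        <content type="string"/>
    </parameter>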
c.
I'm looking for a way to "inherit" RAs.
Take JeOS for example. We may have a different RA for each kind of
JeOS, but all of them differ only slightly in a few parameters and
monitoring methods. We could also have one huge RA to handle them
all, but that would be very scary to maintain and evolve (e.g. the
Filesystem RA).
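What I have in mind is something like the sketch below -- pure
pseudo-RA, nothing in the current OCF API supports it, and every name
in it is made up:

    #!/bin/sh
    # hypothetical "derived" RA: reuse a generic Xen base, override only
    # the health check for one particular JeOS image
    . /usr/lib/ocf/resource.d/heartbeat/Xen.base  # made-up base providing
                                                  # xen_monitor, xen_dispatch
                                                  # and the OCF_* exit codes
    jeos_monitor() {
        xen_monitor || return $?                  # base check: domU running?
        # OS-specific check, e.g. poll a service this JeOS always runs
        # (guest_ip is a made-up RA parameter):
        wget -q -O /dev/null "http://$OCF_RESKEY_guest_ip/" \
            || return $OCF_NOT_RUNNING
    }
    case "$1" in
        monitor) jeos_monitor ;;
        *)       xen_dispatch "$1" ;;             # everything else inherited
    esac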
Em, sorry if I am off topic in this thread. ;P