[Pacemaker] Does pingd work on openais?
Lars Marowsky-Bree
lmb at suse.de
Wed Mar 19 15:29:08 UTC 2008
On 2008-03-19T11:20:46, Atanas Dyulgerov <atanas.dyulgerov at postpath.com> wrote:
> The case when a node loses connectivity to the cluster but still remains
> connected to the shared resource: the other nodes, which retain quorum,
> can lock the shared storage resource to stop the errant node from accessing
> it. This fencing method does not require communication with the failed node.
> That's what RHCS does, I believe.
Right, but that only works if the resource supports it. If you're
looking at DR-style setups with replicated storage, it doesn't really
apply - the storage links will likely be severed as well.
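The storage-based fencing idea described above can be sketched in a few lines. This is a purely illustrative Python sketch, not Pacemaker or RHCS code; the function names and the `revoke_access` callback (which in practice might drop a node's SCSI-3 persistent reservation key) are assumptions:

```python
# Illustrative sketch: the partition that retains quorum revokes the
# errant node's access to shared storage, with no communication to the
# failed node required. All names here are hypothetical.

def has_quorum(live_votes: int, expected_votes: int) -> bool:
    """A partition has quorum when it holds a strict majority of votes."""
    return live_votes > expected_votes // 2

def fence_by_storage(partition_nodes, all_nodes, revoke_access):
    """If our partition has quorum, revoke storage access for all others.

    Returns the list of nodes that were fenced (empty if we are in the
    minority partition, which must never fence anyone).
    """
    if not has_quorum(len(partition_nodes), len(all_nodes)):
        return []
    errant = [n for n in all_nodes if n not in partition_nodes]
    for node in errant:
        revoke_access(node)  # e.g. eject its persistent-reservation key
    return errant
```

The key property is the quorum check up front: only the majority side acts, so both partitions cannot fence each other simultaneously.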
Again, I'm not saying this isn't a nice feature, but it's not as important
technically. We'll implement it in the future, but you can have working
clusters even without it.
> >WAN clusters require the concept of self-fencing after loss of site
> >quorum.
>
> Has any self-fencing method been implemented for production use so far? I
> would like to test it...
No, I didn't say we had that implemented. DR - metro- and wide-area
clusters - is not one of heartbeat/Pacemaker's strengths right now. We
plan to address this in the future.
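The self-fencing concept mentioned above - a node or site that loses quorum takes itself down rather than risk split-brain - can be sketched as follows. This is a hypothetical illustration of the idea (the approach behind watchdog-based schemes), not an actual Pacemaker mechanism; the grace-period logic and names are assumptions:

```python
# Illustrative sketch of self-fencing on loss of site quorum: a node
# polls its quorum status, and once quorum has been absent for longer
# than a grace period, it decides to fence itself (e.g. by letting a
# hardware watchdog expire). Names are hypothetical.

def should_self_fence(quorum_history, grace_ticks):
    """Decide whether to self-fence.

    quorum_history: booleans, one per poll tick, newest last
                    (True = quorum was visible at that tick).
    grace_ticks:    how many consecutive quorum-less ticks to tolerate.
    """
    lost = 0
    for seen in reversed(quorum_history):
        if seen:
            break
        lost += 1
    return lost > grace_ticks
```

The grace period matters: a transient membership change should not take the site down, but a sustained loss of quorum must, because the surviving site will assume the lost site is no longer touching shared state.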
> The application I'm running in the cluster is a mail server. I benchmarked
> server performance running on GNBD (ext3), iSCSI (ext3) and NFS.
> Performance was measured with LoadSim: http://www.msexchange.org/tutorials/Simulating-Stress-Exchange-2003-LoadSim.html
> If the LoadSim results would be meaningful to you, I can share them.
Thanks, that's interesting. I'll try to find some resources to run
benchmarks of our own, because this _is_ a little bit counterintuitive.
Regards,
Lars
--
Teamlead Kernel, SuSE Labs, Research and Development
SUSE LINUX Products GmbH, GF: Markus Rex, HRB 16746 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde