[Pacemaker] RFC: What part of the XML configuration do you hate the most?
Lars Marowsky-Bree
lmb at suse.de
Tue Jun 24 14:02:06 UTC 2008
On 2008-06-24T15:48:12, Dejan Muhamedagic <dejanmm at fastmail.fm> wrote:
> > But to be precise, we have two scenarios to configure:
> > a) monitor failure -> stop -> restart on the same node;
> >    after the Nth monitor failure -> stop -> fail over to another node
> > b) monitor failure (N times in a row) -> stop -> fail over to another node
> >
> > The current pacemaker behaves as in a), I think, but b) is also
> > useful when you want to ignore a transient error.
>
> The b) part has already been discussed on the list, and it's
> supposed to be implemented in lrmd. I don't have the API defined
> yet, but I've thought about something like
>
> max-total-failures (how many times a monitor may fail)
> max-consecutive-failures (how many times in a row a monitor may fail)
>
> These should probably be attributes defined on the monitor
> operation level.
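In CIB terms I suppose that would look roughly like the below; the
attribute names are hypothetical, since the API isn't defined yet:

  <primitive id="myapp" class="ocf" provider="heartbeat" type="IPaddr2">
    <operations>
      <op id="myapp-monitor" name="monitor" interval="10s" timeout="20s"
          max-total-failures="10" max-consecutive-failures="3"/>
    </operations>
  </primitive>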
The "ignore failure reports" clashes a bit with the "react to failures
ASAP" requirement.
It is my belief that this should be handled by the RA, not by the LRM
or the CRM. The monitor op implementation is the place to handle this.
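If the RA absorbed transient errors itself, the configuration side
would just be an ordinary instance attribute; the parameter name below
is made up, and the RA's monitor action would have to implement the
tolerance logic internally:

  <primitive id="myapp" class="ocf" provider="heartbeat" type="myapp">
    <instance_attributes id="myapp-ia">
      <attributes>
        <nvpair id="myapp-ia-tolerance"
                name="max_consecutive_monitor_failures" value="3"/>
      </attributes>
    </instance_attributes>
  </primitive>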
Beyond that, I strongly feel that "transient errors" are a bad
foundation to build clusters on.
> > - we want to shut down the services gracefully as long as possible.
> Well, if the stop op failed, one can't do anything but shut down,
> right?
There's also the side implication of "as long as possible"; graceful
fail-over is much slower, and impacts recovery times.
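Note that the escalation is already expressible per-op; if I recall
the defaults right, with STONITH enabled a failed stop escalates to
fencing anyway, which is equivalent to writing:

  <op id="myapp-stop" name="stop" timeout="60s" on_fail="fence"/>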
> > - rebooting the failed node may lose the evidence of the
> > real cause of a failure. We want to preserve it where possible,
> > to investigate later and to ensure that all problems are resolved.
> >
> > We think that, ideally, when a resource fails the node would
> > first try to go to 'standby' state, and only if that failed
> > would it escalate to STONITH and power off.
>
> Perhaps another on_fail action. But I still don't see how that
> could help.
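I assume what's meant is something like a standby value for on_fail
on the monitor op (whether the CRM actually accepts such a value
is another question):

  <op id="myapp-monitor" name="monitor" interval="10s" on_fail="standby"/>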
But that's what already happens. If a resource fails, it gets stopped;
only if that doesn't work will the node be fenced, and there's no
way around fencing at that point.
How would an explicit standby state help here?
> > 6) node fencing when a connectivity failure is detected by pingd.
> > Currently we have to have pingd constraints for all resources.
> > It would be helpful, both to simplify the config and the recovery
> > operation, if we could configure the behaviour to be the same as
> > for a resource failure.
> Agreed. Just not sure how this could be implemented. Perhaps an
> RA which would monitor the attributes created by pingd and for
> which one could set on_fail to fence.
Uhm. This contradicts the previous request.
If pingd reports loss of connectivity, the resources get stopped and
moved elsewhere (i.e., the node ends up in an implicit standby state).
It will only get fenced if the stop ops fail.
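That behaviour comes from the usual per-resource pingd constraint,
roughly like the below (the attribute name "pingd" is pingd's
default; adjust to your setup):

  <rsc_location id="myapp-connectivity" rsc="myapp">
    <rule id="myapp-pingd-rule" score="-INFINITY" boolean_op="or">
      <expression id="myapp-pingd-undef"
                  attribute="pingd" operation="not_defined"/>
      <expression id="myapp-pingd-zero"
                  attribute="pingd" operation="lte" value="0"/>
    </rule>
  </rsc_location>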
What would fencing the node do better?
Regards,
Lars
--
Teamlead Kernel, SuSE Labs, Research and Development
SUSE LINUX Products GmbH, GF: Markus Rex, HRB 16746 (AG Nürnberg)
"Experience is the name everyone gives to their mistakes." -- Oscar Wilde