[ClusterLabs] pacemaker with sbd fails to start if node reboots too fast.
Andrei Borzenkov
arvidjaar at gmail.com
Wed Nov 22 14:01:04 EST 2017
SLES12 SP2 with pacemaker 1.1.15-21.1-e174ec8; two-node cluster with
VMs on vSphere using a shared VMDK as the SBD device. During basic
tests (killing corosync and forcing STONITH), pacemaker was not
started after reboot. In the logs I see during boot:
Nov 22 16:04:56 sapprod01s crmd[3151]: crit: We were allegedly
just fenced by sapprod01p for sapprod01p
Nov 22 16:04:56 sapprod01s pacemakerd[3137]: warning: The crmd
process (3151) can no longer be respawned,
Nov 22 16:04:56 sapprod01s pacemakerd[3137]: notice: Shutting down Pacemaker
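After the node comes back up I can inspect its slot on the shared
device like this (the device path here is just an example, not my
actual device):

    # show slot assignments and any pending fence messages on the SBD device
    sbd -d /dev/sdX list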
SBD timeouts are 60s for watchdog and 120s for msgwait. It seems that
stonith with SBD always takes the full msgwait (at least, visually the
host is not declared OFFLINE until 120s have passed). But the VM
reboots lightning fast and is up and running long before the timeout
expires.
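For reference, I read those timeouts from the device header (device
path again illustrative):

    # dump the SBD header, including watchdog and msgwait timeouts
    sbd -d /dev/sdX dump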
I think I have seen a similar report already. Is this something that
can be fixed by SBD/pacemaker tuning?
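For example, I wonder whether delaying sbd startup, so that a
fast-rebooting node waits out the fencing window before pacemaker
comes up, would help. If I read the sysconfig comments correctly,
that would be something like this (untested on my side):

    # in /etc/sysconfig/sbd: delay sbd start so the node does not
    # rejoin the cluster before the pending fence operation completes
    SBD_DELAY_START=yes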
I can provide full logs tomorrow if needed.
TIA
-andrei