[Pacemaker] "ERROR: Wrong stack o2cb" when trying to start o2cb service in Pacemaker cluster

Andreas Kurz andreas at hastexo.com
Wed Jun 20 15:39:26 UTC 2012


On 06/20/2012 03:49 PM, David Guyot wrote:
> Actually, yes, I start DRBD manually, because this is currently a test
> configuration which relies on OpenVPN for the communication between
> these 2 nodes. I have no order and colocation constraints because I'm
> discovering this software and trying to configure it step by step, to
> make the resources work before ordering them (nevertheless, I just tried
> to configure DLM/O2CB constraints, but they fail, apparently because
> they rely on O2CB, which causes the problem I wrote you about). And I
> have no OCFS2 mounts because I was under the assumption that OCFS2
> wouldn't mount partitions without O2CB and DLM, which seems to be right:

In fact it won't work without constraints, even if you are only testing:
e.g. controld and o2cb must run on the same node (on both nodes, of
course) and controld must run before o2cb.
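
Untested and only a sketch, but with the resource names from your config
it would look roughly like this (the group keeps controld and o2cb
together and starts controld first, the clone runs it on both nodes, and
the constraints tie it to one of your DRBD resources; repeat the
constraints for the other ms resources):

  group g_ocfs2mgmt p_controld p_o2cb
  clone cl_ocfs2mgmt g_ocfs2mgmt \
      meta interleave="true"
  colocation col_o2cb_on_drbd inf: cl_ocfs2mgmt ms_drbd_ocfs2_pgsql:Master
  order o_drbd_before_o2cb inf: ms_drbd_ocfs2_pgsql:promote cl_ocfs2mgmt:start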

And the error message you showed in a previous mail:

2012/06/20_09:04:35 ERROR: Wrong stack o2cb

... implies that you are already running the native ocfs2 cluster stack
outside of Pacemaker. Did you do an "/etc/init.d/ocfs2 stop" before
starting your cluster tests, and is it still stopped? If it is stopped, a
cleanup of the cl_ocfs2mgmt resource should start it ... provided there
are no other errors.

Did you install the dlm-pcmk and ocfs2-tools-pacemaker packages from
backports?
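
A quick way to check all of the above (untested sketch; the Debian init
script paths and the sysfs file are assumptions on my side):

  # native stack must be down and out of the boot sequence
  /etc/init.d/ocfs2 stop
  /etc/init.d/o2cb stop
  update-rc.d -f ocfs2 remove
  update-rc.d -f o2cb remove

  # only present once the ocfs2 stack glue module is loaded;
  # "pcmk" means the Pacemaker-aware stack, "o2cb" means the
  # native stack is (still) in use
  cat /sys/fs/ocfs2/cluster_stack

  # the Pacemaker glue packages
  dpkg -l dlm-pcmk ocfs2-tools-pacemaker

  # then retry the clone
  crm resource cleanup cl_ocfs2mgmt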

> 
> root at Malastare:/home/david# crm_mon --one-shot -VroA
> ============
> Last updated: Wed Jun 20 15:32:50 2012
> Last change: Wed Jun 20 15:28:34 2012 via crm_shadow on Malastare
> Stack: openais
> Current DC: Vindemiatrix - partition with quorum
> Version: 1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff
> 2 Nodes configured, 2 expected votes
> 14 Resources configured.
> ============
> 
> Online: [ Vindemiatrix Malastare ]
> 
> Full list of resources:
> 
>  soapi-fencing-malastare    (stonith:external/ovh):    Started Vindemiatrix
>  soapi-fencing-vindemiatrix    (stonith:external/ovh):    Started Malastare
>  Master/Slave Set: ms_drbd_ocfs2_pgsql [p_drbd_ocfs2_pgsql]
>      Masters: [ Malastare Vindemiatrix ]
>  Master/Slave Set: ms_drbd_ocfs2_backupvi [p_drbd_ocfs2_backupvi]
>      Masters: [ Malastare Vindemiatrix ]
>  Master/Slave Set: ms_drbd_ocfs2_svn [p_drbd_ocfs2_svn]
>      Masters: [ Malastare Vindemiatrix ]
>  Master/Slave Set: ms_drbd_ocfs2_www [p_drbd_ocfs2_www]
>      Masters: [ Malastare Vindemiatrix ]
>  Clone Set: cl_ocfs2mgmt [g_ocfs2mgmt]
>      Stopped: [ g_ocfs2mgmt:0 g_ocfs2mgmt:1 ]
> 
> Node Attributes:
> * Node Vindemiatrix:
>     + master-p_drbd_ocfs2_backupvi:1      : 10000    
>     + master-p_drbd_ocfs2_pgsql:1         : 10000    
>     + master-p_drbd_ocfs2_svn:1           : 10000    
>     + master-p_drbd_ocfs2_www:1           : 10000    
> * Node Malastare:
>     + master-p_drbd_ocfs2_backupvi:0      : 10000    
>     + master-p_drbd_ocfs2_pgsql:0         : 10000    
>     + master-p_drbd_ocfs2_svn:0           : 10000    
>     + master-p_drbd_ocfs2_www:0           : 10000    
> 
> Operations:
> * Node Vindemiatrix:
>    p_drbd_ocfs2_pgsql:1: migration-threshold=1000000
>     + (4) probe: rc=8 (master)
>    p_drbd_ocfs2_backupvi:1: migration-threshold=1000000
>     + (5) probe: rc=8 (master)
>    p_drbd_ocfs2_svn:1: migration-threshold=1000000
>     + (6) probe: rc=8 (master)
>    p_drbd_ocfs2_www:1: migration-threshold=1000000
>     + (7) probe: rc=8 (master)
>    soapi-fencing-malastare: migration-threshold=1000000
>     + (10) start: rc=0 (ok)
>    p_o2cb:1: migration-threshold=1000000
>     + (9) probe: rc=5 (not installed)
> * Node Malastare:
>    p_drbd_ocfs2_pgsql:0: migration-threshold=1000000
>     + (4) probe: rc=8 (master)
>    p_drbd_ocfs2_backupvi:0: migration-threshold=1000000
>     + (5) probe: rc=8 (master)
>    p_drbd_ocfs2_svn:0: migration-threshold=1000000
>     + (6) probe: rc=8 (master)
>    soapi-fencing-vindemiatrix: migration-threshold=1000000
>     + (10) start: rc=0 (ok)
>    p_drbd_ocfs2_www:0: migration-threshold=1000000
>     + (7) probe: rc=8 (master)
>    p_o2cb:0: migration-threshold=1000000
>     + (9) probe: rc=5 (not installed)
> 
> Failed actions:
>     p_o2cb:1_monitor_0 (node=Vindemiatrix, call=9, rc=5,
> status=complete): not installed
>     p_o2cb:0_monitor_0 (node=Malastare, call=9, rc=5, status=complete):
> not installed
> root at Malastare:/home/david# mount -t ocfs2 /dev/drbd1 /media/ocfs/
> mount.ocfs2: Cluster stack specified does not match the one currently
> running while trying to join the group
> 
> Concerning the notify meta-attribute, I didn't configure it because it
> wasn't even mentioned in the official DRBD guide (
> http://www.drbd.org/users-guide-8.3/s-ocfs2-pacemaker.html), and I don't
> know what it does, so, by default, I stupidly followed the official
> guide. What does this meta-attribute set? If you know a better guide,
> could you please tell me about it, so I can check my config against that
> other guide?

Well, then this is a documentation bug ... you will find the correct
configuration in the same guide, in the section where Pacemaker
integration is described ... "notify" sends out notification messages
before and after an instance of the DRBD OCF RA executes an action (like
start, stop, promote, demote) ... that allows the other instances to
react.
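
In crm syntax it is just one more meta attribute on each of your ms
resources, e.g. (and the same for the other three):

  ms ms_drbd_ocfs2_pgsql p_drbd_ocfs2_pgsql \
      meta master-max="2" clone-max="2" notify="true"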

Regards,
Andreas

-- 
Need help with Pacemaker?
http://www.hastexo.com/now


> 
> And, last but not least, I run Debian Squeeze with kernel 3.2.13-grsec-xxxx-grs-ipv6-64.
> 
> Thank you in advance.
> 
> Kind regards.
> 
> PS: if you find me a bit rude, please accept my apologies; I've been
> working on this for weeks following the official DRBD guide, and it's
> frustrating to ask for help as a last resort and be answered with
> something that sounds like "What's this bloody mess!?!" to my tired
> nerve cells. Once again, please accept my apologies.
> 
> On 20/06/2012 15:09, Andreas Kurz wrote:
>> On 06/20/2012 02:22 PM, David Guyot wrote:
>>> Hello.
>>>
>>> Oops, an omission.
>>>
>>> Here comes my Pacemaker config :
>>> root at Malastare:/home/david# crm configure show
>>> node Malastare
>>> node Vindemiatrix
>>> primitive p_controld ocf:pacemaker:controld
>>> primitive p_drbd_ocfs2_backupvi ocf:linbit:drbd \
>>>     params drbd_resource="backupvi"
>>> primitive p_drbd_ocfs2_pgsql ocf:linbit:drbd \
>>>     params drbd_resource="postgresql"
>>> primitive p_drbd_ocfs2_svn ocf:linbit:drbd \
>>>     params drbd_resource="svn"
>>> primitive p_drbd_ocfs2_www ocf:linbit:drbd \
>>>     params drbd_resource="www"
>>> primitive p_o2cb ocf:pacemaker:o2cb \
>>>     meta target-role="Started"
>>> primitive soapi-fencing-malastare stonith:external/ovh \
>>>     params reversedns="ns208812.ovh.net"
>>> primitive soapi-fencing-vindemiatrix stonith:external/ovh \
>>>     params reversedns="ns235795.ovh.net"
>>> ms ms_drbd_ocfs2_backupvi p_drbd_ocfs2_backupvi \
>>>     meta master-max="2" clone-max="2"
>>> ms ms_drbd_ocfs2_pgsql p_drbd_ocfs2_pgsql \
>>>     meta master-max="2" clone-max="2"
>>> ms ms_drbd_ocfs2_svn p_drbd_ocfs2_svn \
>>>     meta master-max="2" clone-max="2"
>>> ms ms_drbd_ocfs2_www p_drbd_ocfs2_www \
>>>     meta master-max="2" clone-max="2"
>>> location stonith-malastare soapi-fencing-malastare -inf: Malastare
>>> location stonith-vindemiatrix soapi-fencing-vindemiatrix -inf: Vindemiatrix
>>> property $id="cib-bootstrap-options" \
>>>     dc-version="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" \
>>>     cluster-infrastructure="openais" \
>>>     expected-quorum-votes="2"
>>>
>> I have absolutely no idea why your configuration can run at all without
>> more errors ... do you start the drbd resources manually before the cluster?
>>
>> You are missing the notify meta-attribute for all your DRBD ms
>> resources, you have no order and colocation constraints or groups at all
>> and you don't clone controld and o2cb ... and there are no ocfs2 mounts?
>>
>> Also quite important: what distribution are you using?
>>
>>> The STONITH resources are custom ones which use my provider's SOAP API
>>> to electrically reboot fenced nodes.
>>>
>>> Concerning the web page you told me about, I tried to insert the
>>> environment variable it refers to, but it did not solve the problem:
>> Really have a look at the crm configuration snippet on that page and
>> read manuals about setting up DRBD in Pacemaker.
>>
>> Regards,
>> Andreas
>>
>>> root at Malastare:/home/david# crm_mon --one-shot -VroA
>>> ============
>>> Last updated: Wed Jun 20 14:14:41 2012
>>> Last change: Wed Jun 20 09:22:39 2012 via cibadmin on Malastare
>>> Stack: openais
>>> Current DC: Vindemiatrix - partition with quorum
>>> Version: 1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff
>>> 2 Nodes configured, 2 expected votes
>>> 12 Resources configured.
>>> ============
>>>
>>> Online: [ Vindemiatrix Malastare ]
>>>
>>> Full list of resources:
>>>
>>>  soapi-fencing-malastare    (stonith:external/ovh):    Stopped
>>>  p_controld    (ocf::pacemaker:controld):    Started Malastare
>>>  p_o2cb    (ocf::pacemaker:o2cb):    Started Vindemiatrix FAILED
>>>  soapi-fencing-vindemiatrix    (stonith:external/ovh):    Stopped
>>>  Master/Slave Set: ms_drbd_ocfs2_pgsql [p_drbd_ocfs2_pgsql]
>>>      Masters: [ Vindemiatrix Malastare ]
>>>  Master/Slave Set: ms_drbd_ocfs2_backupvi [p_drbd_ocfs2_backupvi]
>>>      Masters: [ Vindemiatrix Malastare ]
>>>  Master/Slave Set: ms_drbd_ocfs2_svn [p_drbd_ocfs2_svn]
>>>      Masters: [ Vindemiatrix Malastare ]
>>>  Master/Slave Set: ms_drbd_ocfs2_www [p_drbd_ocfs2_www]
>>>      Masters: [ Vindemiatrix Malastare ]
>>>
>>> Node Attributes:
>>> * Node Vindemiatrix:
>>>     + master-p_drbd_ocfs2_backupvi:0      : 10000    
>>>     + master-p_drbd_ocfs2_pgsql:0         : 10000    
>>>     + master-p_drbd_ocfs2_svn:0           : 10000    
>>>     + master-p_drbd_ocfs2_www:0           : 10000    
>>> * Node Malastare:
>>>     + master-p_drbd_ocfs2_backupvi:1      : 10000    
>>>     + master-p_drbd_ocfs2_pgsql:1         : 10000    
>>>     + master-p_drbd_ocfs2_svn:1           : 10000    
>>>     + master-p_drbd_ocfs2_www:1           : 10000    
>>>
>>> Operations:
>>> * Node Vindemiatrix:
>>>    p_o2cb: migration-threshold=1000000 fail-count=1000000
>>>     + (11) start: rc=5 (not installed)
>>>    p_drbd_ocfs2_pgsql:0: migration-threshold=1000000
>>>     + (6) probe: rc=8 (master)
>>>    p_drbd_ocfs2_backupvi:0: migration-threshold=1000000
>>>     + (7) probe: rc=8 (master)
>>>    p_drbd_ocfs2_svn:0: migration-threshold=1000000
>>>     + (8) probe: rc=8 (master)
>>>    p_drbd_ocfs2_www:0: migration-threshold=1000000
>>>     + (9) probe: rc=8 (master)
>>> * Node Malastare:
>>>    p_controld: migration-threshold=1000000
>>>     + (10) start: rc=0 (ok)
>>>    p_o2cb: migration-threshold=1000000
>>>     + (4) probe: rc=5 (not installed)
>>>    p_drbd_ocfs2_pgsql:1: migration-threshold=1000000
>>>     + (6) probe: rc=8 (master)
>>>    p_drbd_ocfs2_backupvi:1: migration-threshold=1000000
>>>     + (7) probe: rc=8 (master)
>>>    p_drbd_ocfs2_svn:1: migration-threshold=1000000
>>>     + (8) probe: rc=8 (master)
>>>    p_drbd_ocfs2_www:1: migration-threshold=1000000
>>>     + (9) probe: rc=8 (master)
>>>
>>> Failed actions:
>>>     p_o2cb_start_0 (node=Vindemiatrix, call=11, rc=5, status=complete):
>>> not installed
>>>     p_o2cb_monitor_0 (node=Malastare, call=4, rc=5, status=complete):
>>> not installed
>>>
>>> Thank you in advance for your help!
>>>
>>> Kind regards.
>>>
>>> On 20/06/2012 14:02, Andreas Kurz wrote:
>>>> On 06/20/2012 01:43 PM, David Guyot wrote:
>>>>> Hello, everybody.
>>>>>
>>>>> I'm trying to configure Pacemaker to use DRBD + OCFS2 storage, but
>>>>> I'm stuck with DRBD and controld up and o2cb doggedly displaying "not
>>>>> installed" errors. To do this, I followed the DRBD guide (
>>>>> http://www.drbd.org/users-guide-8.3/ch-ocfs2.html), with the difference
>>>>> that I was forced to disable DRBD fencing because it was interfering
>>>>> with Pacemaker fencing and stopping each node as often as it could.
>>>> Unfortunately you didn't share your Pacemaker configuration, but you
>>>> definitely must not start any ocfs2 init script; let everything be
>>>> managed by the cluster manager.
>>>>
>>>> Here is a brief setup description, which also mentions the tune.ocfs2
>>>> call to make once the Pacemaker stack is running:
>>>>
>>>> http://www.hastexo.com/resources/hints-and-kinks/ocfs2-pacemaker-debianubuntu
>>>>
>>>> And once this is running as expected, you really want to reactivate
>>>> the DRBD fencing configuration.
>>>>
>>>> Regards,
>>>> Andreas
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>>
> 
> 
> 
> 
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
> 
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
> 




