[Pacemaker] colocation issues (isn't it always)
Pavlos Parissis
pavlos.parissis at gmail.com
Mon Dec 13 22:19:48 UTC 2010
On 13 December 2010 06:09, Patrick H. <pacemaker at feystorm.net> wrote:
>
> So colocation is biting me in the ass again and I can't figure this one out.
> I have a group of iSCSI devices that feed into an md RAID device, which in turn becomes an LVM device that gets mounted and finally exported by NFS. Throughout this whole project I've had resources trying to start on nodes where the resources they depend on weren't even running, but those cases usually happened when I failed something and the resources tried to migrate. Right now I'm having problems simply starting the final resource, the NFS export.
>
> All the resources start on the same node except for the ocf:heartbeat:nfsserver resource. For some reason it refuses to start on that node, even though I have a colocation rule.
> All the other resources are on node 'nas01', but every time I start up the nfsserver resource it starts on 'nas02'. I've stopped it, cleaned it up, and started it back up, all to no effect. I've even tried reversing the order of the resources in the colocation rule, as the last time I had this problem it was because they were backwards.
>
> Below is info from the crm utility, and attached is the output of `cibadmin -Q`.
> I'm using the pacemaker-1.1.1-1.fc13 package on RHEL 6.
>
> crm(live)# configure show
> node nas01
> node nas02
> node nas03
> primitive filesystem_sdb1 ocf:etc:fs \
> params uuid="2c8b70de-09a7-4cf1-864d-346c0602d6e1" mount_path="/mnt/iscsi-sdb1" \
> meta target-role="Started"
> primitive iSCSI_nas01_sdb1 ocf:heartbeat:iscsi \
> params portal="165.212.101.241" target="iqn.165.212.101.240:nas01.sdb1" \
> op monitor interval="30s" \
> meta target-role="Started"
> primitive iSCSI_nas02_sdb1 ocf:heartbeat:iscsi \
> params portal="165.212.101.242" target="iqn.165.212.101.240:nas02.sdb1" \
> op monitor interval="30s" \
> meta target-role="Started"
> primitive iSCSI_nas03_sdb1 ocf:heartbeat:iscsi \
> params portal="165.212.101.243" target="iqn.165.212.101.240:nas03.sdb1" \
> op monitor interval="30s" \
> meta target-role="Started"
> primitive lvm-iSCSI_sdb1 ocf:etc:lvm \
> params vg_name="vg-md-iscsi-sdb1" lv_name="lv-vg-md-iscsi-sdb1" \
> meta target-role="Started"
> primitive md_iSCSI_sdb1 ocf:etc:md \
> params name="nas01:iscsi-sdb1" uuid="abddf4ea:06524e68:8e2b8f4e:1b24c56d" \
> meta target-role="Started" \
> op monitor interval="5s"
> primitive nfs_sdb1 ocf:heartbeat:nfsserver \
> params nfs_init_script="/etc/init.d/nfs" nfs_shared_infodir="/mnt/iscsi-sdb1/nfs" nfs_notify_cmd="/bin/true" nfs_ip="*" \
> meta target-role="Started"
> group gr-iSCSI_sdb1 iSCSI_nas01_sdb1 iSCSI_nas02_sdb1 iSCSI_nas03_sdb1
> colocation co-sdb1 inf: nfs_sdb1 filesystem_sdb1 lvm-iSCSI_sdb1 md_iSCSI_sdb1 gr-iSCSI_sdb1
> order or-sdb1 inf: gr-iSCSI_sdb1 md_iSCSI_sdb1 lvm-iSCSI_sdb1 filesystem_sdb1 nfs_sdb1
> property $id="cib-bootstrap-options" \
> dc-version="1.1.1-972b9a5f68606f632893fceed658efa085062f55" \
> cluster-infrastructure="openais" \
> expected-quorum-votes="3" \
> stonith-enabled="false" \
> no-quorum-policy="ignore" \
> default-resource-stickiness="INFINITY" \
> last-lrm-refresh="1292215929"
>
>
> crm(live)# status
> ============
> Last updated: Mon Dec 13 05:04:52 2010
> Stack: openais
> Current DC: nas03 - partition with quorum
> Version: 1.1.1-972b9a5f68606f632893fceed658efa085062f55
> 3 Nodes configured, 3 expected votes
> 5 Resources configured.
> ============
>
> Online: [ nas02 nas01 nas03 ]
>
> Resource Group: gr-iSCSI_sdb1
> iSCSI_nas01_sdb1 (ocf::heartbeat:iscsi): Started nas01
> iSCSI_nas02_sdb1 (ocf::heartbeat:iscsi): Started nas01
> iSCSI_nas03_sdb1 (ocf::heartbeat:iscsi): Started nas01
> lvm-iSCSI_sdb1 (ocf::etc:lvm): Started nas01
> filesystem_sdb1 (ocf::etc:fs): Started nas01
> md_iSCSI_sdb1 (ocf::etc:md): Started nas01
> nfs_sdb1 (ocf::heartbeat:nfsserver): Started nas02
>
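One thing to check: when you list more than two resources in a single colocation statement, the crm shell compiles it into a resource set in the CIB, and resource-set semantics are not the same as a chain of plain two-resource colocations (early 1.1 releases also had bugs in this area), which may be why nfs_sdb1 keeps landing elsewhere. If you want to keep separate constraints, a sketch of the equivalent pairwise chain would be (`colocation inf: A B` places A on the node where B runs; the co-* ids are illustrative):

colocation co-nfs-with-fs inf: nfs_sdb1 filesystem_sdb1
colocation co-fs-with-lvm inf: filesystem_sdb1 lvm-iSCSI_sdb1
colocation co-lvm-with-md inf: lvm-iSCSI_sdb1 md_iSCSI_sdb1
colocation co-md-with-iscsi inf: md_iSCSI_sdb1 gr-iSCSI_sdb1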
That said, if you put all of them in a single group, with nfs_sdb1 as the last member, you will get what you want with a much simpler configuration.
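A minimal sketch of what I mean, reusing your resource names (the group name is illustrative). One flat group replaces gr-iSCSI_sdb1 plus both constraints, since groups cannot be nested and a group already implies ordering and colocation between consecutive members:

group gr-sdb1 iSCSI_nas01_sdb1 iSCSI_nas02_sdb1 iSCSI_nas03_sdb1 \
    md_iSCSI_sdb1 lvm-iSCSI_sdb1 filesystem_sdb1 nfs_sdb1

Members start in the listed order, stop in reverse, and each one runs on the node where the previous one runs, which is exactly the stack you described.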
BTW, I reproduced your configuration (with ocf:heartbeat:Dummy standing in for your agents), and in my case all resources started on the same node:
============
Last updated: Mon Dec 13 23:18:40 2010
Stack: Heartbeat
Current DC: node-02 (07b89fad-0626-480d-8660-238f9372bc4b) - partition with quorum
Version: 1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b
2 Nodes configured, unknown expected votes
5 Resources configured.
============
Online: [ node-02 node-01 ]
filesystem_sdb1 (ocf::heartbeat:Dummy): Started node-02
lvm-iSCSI_sdb1 (ocf::heartbeat:Dummy): Started node-02
md_iSCSI_sdb1 (ocf::heartbeat:Dummy): Started node-02
nfs_sdb1 (ocf::heartbeat:Dummy): Started node-02
Resource Group: gr-iSCSI_sdb1
iSCSI_nas01_sdb1 (ocf::heartbeat:Dummy): Started node-02
iSCSI_nas02_sdb1 (ocf::heartbeat:Dummy): Started node-02
iSCSI_nas03_sdb1 (ocf::heartbeat:Dummy): Started node-02
root at node-01:~# crm configure show
node $id="07b89fad-0626-480d-8660-238f9372bc4b" node-02
node $id="954d162e-0d7b-4e28-a6e3-8d6eee73034e" node-01
primitive filesystem_sdb1 ocf:heartbeat:Dummy \
op monitor interval="40" \
meta target-role="Started"
primitive iSCSI_nas01_sdb1 ocf:heartbeat:Dummy \
op monitor interval="40"
primitive iSCSI_nas02_sdb1 ocf:heartbeat:Dummy \
op monitor interval="40"
primitive iSCSI_nas03_sdb1 ocf:heartbeat:Dummy \
op monitor interval="40"
primitive lvm-iSCSI_sdb1 ocf:heartbeat:Dummy \
op monitor interval="40"
primitive md_iSCSI_sdb1 ocf:heartbeat:Dummy \
op monitor interval="40"
primitive nfs_sdb1 ocf:heartbeat:Dummy \
op monitor interval="40"
group gr-iSCSI_sdb1 iSCSI_nas01_sdb1 iSCSI_nas02_sdb1 iSCSI_nas03_sdb1 \
meta target-role="Started"
colocation co-sdb1 inf: nfs_sdb1 filesystem_sdb1 lvm-iSCSI_sdb1 md_iSCSI_sdb1 gr-iSCSI_sdb1
order or-sdb1 inf: gr-iSCSI_sdb1 md_iSCSI_sdb1 lvm-iSCSI_sdb1 filesystem_sdb1 nfs_sdb1
property $id="cib-bootstrap-options" \
dc-version="1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b" \
cluster-infrastructure="Heartbeat" \
last-lrm-refresh="1292245651" \
default-resource-stickiness="INFINITY" \
stonith-enabled="false"
root at node-01:~# ptest -LsVV
Allocation scores:
native_color: nfs_sdb1 allocation score on node-02: 1000000
native_color: nfs_sdb1 allocation score on node-01: 0
native_color: filesystem_sdb1 allocation score on node-02: 1000000
native_color: filesystem_sdb1 allocation score on node-01: -1000000
native_color: lvm-iSCSI_sdb1 allocation score on node-02: 1000000
native_color: lvm-iSCSI_sdb1 allocation score on node-01: -1000000
native_color: md_iSCSI_sdb1 allocation score on node-02: 1000000
native_color: md_iSCSI_sdb1 allocation score on node-01: -1000000
group_color: gr-iSCSI_sdb1 allocation score on node-02: 0
group_color: gr-iSCSI_sdb1 allocation score on node-01: 0
group_color: iSCSI_nas01_sdb1 allocation score on node-02: 1000000
group_color: iSCSI_nas01_sdb1 allocation score on node-01: 0
group_color: iSCSI_nas02_sdb1 allocation score on node-02: 1000000
group_color: iSCSI_nas02_sdb1 allocation score on node-01: 0
group_color: iSCSI_nas03_sdb1 allocation score on node-02: 1000000
group_color: iSCSI_nas03_sdb1 allocation score on node-01: 0
native_color: iSCSI_nas01_sdb1 allocation score on node-02: 1000000
native_color: iSCSI_nas01_sdb1 allocation score on node-01: -1000000
native_color: iSCSI_nas02_sdb1 allocation score on node-02: 1000000
native_color: iSCSI_nas02_sdb1 allocation score on node-01: -1000000
native_color: iSCSI_nas03_sdb1 allocation score on node-02: 1000000
native_color: iSCSI_nas03_sdb1 allocation score on node-01: -1000000
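For reference when reading those scores: Pacemaker represents INFINITY internally as 1000000, so the 1000000 and -1000000 entries above are the INFINITY scores from default-resource-stickiness="INFINITY" and the inf: colocation, which is why everything is pinned to node-02. To inspect the scores for a single resource you can just filter the output (a hypothetical example):

ptest -LsVV 2>/dev/null | grep nfs_sdb1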