[ClusterLabs] multiple drives look like load balancing - but why, and it is causing trouble
Streeter, Michelle N
michelle.n.streeter at boeing.com
Wed Aug 26 18:46:44 UTC 2015
I have a two-node cluster. Both nodes are virtual and have five shared drives attached via a SAS controller. For some reason, the cluster shows half of the drives started on each node. I am not sure whether this is what is called split brain; it certainly looks like load balancing, but I did not set up load balancing. On my client I only see the data for the shares that are running on the active cluster node, but all of the shares should be on that node. Any suggestions as to why this is happening? Is there a setting so that everything runs on only one node at a time? (My status and config are below, and a constraint sketch I am wondering about is at the end.)
pcs cluster status:
Cluster name: CNAS
Last updated: Wed Aug 26 13:35:47 2015
Last change: Wed Aug 26 13:28:55 2015
Stack: classic openais (with plugin)
Current DC: nas02 - partition with quorum
Version: 1.1.11-97629de
2 Nodes configured, 2 expected votes
11 Resources configured
Online: [ nas01 nas02 ]
Full list of resources:
 NAS (ocf::heartbeat:IPaddr2): Started nas01
 Resource Group: datag
     datashare (ocf::heartbeat:Filesystem): Started nas02
     dataserver (ocf::heartbeat:nfsserver): Started nas02
 Resource Group: oomtlg
     oomtlshare (ocf::heartbeat:Filesystem): Started nas01
     oomtlserver (ocf::heartbeat:nfsserver): Started nas01
 Resource Group: oomtrg
     oomtrshare (ocf::heartbeat:Filesystem): Started nas02
     oomtrserver (ocf::heartbeat:nfsserver): Started nas02
 Resource Group: oomblg
     oomblshare (ocf::heartbeat:Filesystem): Started nas01
     oomblserver (ocf::heartbeat:nfsserver): Started nas01
 Resource Group: oombrg
     oombrshare (ocf::heartbeat:Filesystem): Started nas02
     oombrserver (ocf::heartbeat:nfsserver): Started nas02
pcs config show:
Cluster Name: CNAS
Corosync Nodes:
nas01 nas02
Pacemaker Nodes:
nas01 nas02
Resources:
Resource: NAS (class=ocf provider=heartbeat type=IPaddr2)
Attributes: ip=192.168.56.110 cidr_netmask=24
Operations: start interval=0s timeout=20s (NAS-start-timeout-20s)
stop interval=0s timeout=20s (NAS-stop-timeout-20s)
monitor interval=10s timeout=20s (NAS-monitor-interval-10s)
Group: datag
Resource: datashare (class=ocf provider=heartbeat type=Filesystem)
Attributes: device=/dev/sdb1 directory=/data fstype=ext4
Operations: start interval=0s timeout=60 (datashare-start-timeout-60)
stop interval=0s timeout=60 (datashare-stop-timeout-60)
monitor interval=20 timeout=40 (datashare-monitor-interval-20)
Resource: dataserver (class=ocf provider=heartbeat type=nfsserver)
Attributes: nfs_shared_infodir=/data/nfsinfo nfs_no_notify=true
Operations: start interval=0s timeout=40 (dataserver-start-timeout-40)
stop interval=0s timeout=20s (dataserver-stop-timeout-20s)
monitor interval=10 timeout=20s (dataserver-monitor-interval-10)
Group: oomtlg
Resource: oomtlshare (class=ocf provider=heartbeat type=Filesystem)
Attributes: device=/dev/sdc1 directory=/oomtl fstype=ext4
Operations: start interval=0s timeout=60 (oomtlshare-start-timeout-60)
stop interval=0s timeout=60 (oomtlshare-stop-timeout-60)
monitor interval=20 timeout=40 (oomtlshare-monitor-interval-20)
Resource: oomtlserver (class=ocf provider=heartbeat type=nfsserver)
Attributes: nfs_shared_infodir=/oomtl/nfsinfo nfs_no_notify=true
Operations: start interval=0s timeout=40 (oomtlserver-start-timeout-40)
stop interval=0s timeout=20s (oomtlserver-stop-timeout-20s)
monitor interval=10 timeout=20s (oomtlserver-monitor-interval-10)
Group: oomtrg
Resource: oomtrshare (class=ocf provider=heartbeat type=Filesystem)
Attributes: device=/dev/sdd1 directory=/oomtr fstype=ext4
Operations: start interval=0s timeout=60 (oomtrshare-start-timeout-60)
stop interval=0s timeout=60 (oomtrshare-stop-timeout-60)
monitor interval=20 timeout=40 (oomtrshare-monitor-interval-20)
Resource: oomtrserver (class=ocf provider=heartbeat type=nfsserver)
Attributes: nfs_shared_infodir=/oomtr/nfsinfo nfs_no_notify=true
Operations: start interval=0s timeout=40 (oomtrserver-start-timeout-40)
stop interval=0s timeout=20s (oomtrserver-stop-timeout-20s)
monitor interval=10 timeout=20s (oomtrserver-monitor-interval-10)
Group: oomblg
Resource: oomblshare (class=ocf provider=heartbeat type=Filesystem)
Attributes: device=/dev/sde1 directory=/oombl fstype=ext4
Operations: start interval=0s timeout=60 (oomblshare-start-timeout-60)
stop interval=0s timeout=60 (oomblshare-stop-timeout-60)
monitor interval=20 timeout=40 (oomblshare-monitor-interval-20)
Resource: oomblserver (class=ocf provider=heartbeat type=nfsserver)
Attributes: nfs_shared_infodir=/oombl/nfsinfo nfs_no_notify=true
Operations: start interval=0s timeout=40 (oomblserver-start-timeout-40)
stop interval=0s timeout=20s (oomblserver-stop-timeout-20s)
monitor interval=10 timeout=20s (oomblserver-monitor-interval-10)
Group: oombrg
Resource: oombrshare (class=ocf provider=heartbeat type=Filesystem)
Attributes: device=/dev/sdf1 directory=/oombr fstype=ext4
Operations: start interval=0s timeout=60 (oombrshare-start-timeout-60)
stop interval=0s timeout=60 (oombrshare-stop-timeout-60)
monitor interval=20 timeout=40 (oombrshare-monitor-interval-20)
Resource: oombrserver (class=ocf provider=heartbeat type=nfsserver)
Attributes: nfs_shared_infodir=/oombr/nfsinfo nfs_no_notify=true
Operations: start interval=0s timeout=40 (oombrserver-start-timeout-40)
stop interval=0s timeout=20s (oombrserver-stop-timeout-20s)
monitor interval=10 timeout=20s (oombrserver-monitor-interval-10)
Stonith Devices:
Fencing Levels:
Location Constraints:
Ordering Constraints:
Colocation Constraints:
Cluster Properties:
cluster-infrastructure: classic openais (with plugin)
dc-version: 1.1.11-97629de
expected-quorum-votes: 2
no-quorum-policy: ignore
stonith-enabled: false
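I notice the Colocation Constraints section above is empty. Is the fix to colocate each resource group with the floating NAS address, so that everything follows it to a single node? The following is just a sketch I put together from the pcs documentation, using my resource names from the config above; I have not applied it yet:

    # Tie every NFS group to whichever node holds the NAS address
    pcs constraint colocation add datag with NAS INFINITY
    pcs constraint colocation add oomtlg with NAS INFINITY
    pcs constraint colocation add oomtrg with NAS INFINITY
    pcs constraint colocation add oomblg with NAS INFINITY
    pcs constraint colocation add oombrg with NAS INFINITY

    # Check the resulting constraints
    pcs constraint colocation show

Or is there a single cluster-wide setting that keeps all resources on one node, so the per-group constraints are not needed?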
Michelle Streeter
ASC2 MCS - SDE/ACL/SDL/EDL OKC Software Engineer
The Boeing Company