[Pacemaker] Two node cluster and no hardware device for stonith.
emmanuel segura
emi2fast at gmail.com
Tue Jan 27 10:48:14 UTC 2015
Sorry, but I forgot to tell you: you need to know that fence_scsi
doesn't reboot the evicted node, so you can combine fence_vmware with
fence_scsi as the second option.
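
A minimal sketch of how the two agents could be stacked as fencing levels
(the fence-vmware resource name is an assumption; iscsi-stonith-device is
the name Andrea uses further down in this thread):

# level 1: power fencing through vCenter/ESX (actually reboots the node)
pcs stonith level add 1 serverHA1 fence-vmware
pcs stonith level add 1 serverHA2 fence-vmware
# level 2: fall back to SCSI reservation fencing if level 1 fails
pcs stonith level add 2 serverHA1 iscsi-stonith-device
pcs stonith level add 2 serverHA2 iscsi-stonith-device
# list the configured levels
pcs stonith level
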
2015-01-27 11:44 GMT+01:00 emmanuel segura <emi2fast at gmail.com>:
> In a normal situation every node can write to your file system; fence_scsi
> is only used when your cluster is in split-brain, when one node doesn't
> communicate with the other node, so I don't think that is a good idea.
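>
> For illustration, this is roughly what the agent does to the evicted node
> during a fence (a sketch, run manually, using the device path from Andrea's
> message below):
>
> # remove serverHA2's registration so it can no longer write to the device
> fence_scsi -d /dev/disk/by-id/scsi-36e843b608e55bb8d6d72d43bfdbc47d4 \
>     -n serverHA2 -o off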
>
>
> 2015-01-27 11:35 GMT+01:00 Andrea <a.bacchi at codices.com>:
>> Andrea <a.bacchi at ...> writes:
>>
>>>
>>> Michael Schwartzkopff <ms <at> ...> writes:
>>>
>>> >
>>> > On Thursday, 22 January 2015, 10:03:38, E. Kuemmerle wrote:
>>> > > On 21.01.2015 11:18 Digimer wrote:
>>> > > > On 21/01/15 08:13 AM, Andrea wrote:
>>> > > >> > Hi All,
>>> > > >> >
>>> > > >> > I have a question about stonith
>>> > > >> > In my scenario, I have to create a 2-node cluster, but I don't
>>> >
>>> > Are you sure that you do not have fencing hardware? Perhaps you just did
>>> > not configure it? Please read the manual of your BIOS and check your
>>> > system board for an IPMI interface.
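>>> >
>>> > If the board does have IPMI, a minimal sketch of an IPMI-based stonith
>>> > resource could look like this (the address, user and password below are
>>> > placeholders, not real values):
>>> >
>>> > pcs stonith create fence-ipmi-serverHA1 fence_ipmilan \
>>> >     ipaddr=10.0.0.11 login=admin passwd=secret lanplus=1 \
>>> >     pcmk_host_list=serverHA1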
>>> >
>>>
>>> > > >> > In my test, when I simulate a network failure, split brain occurs,
>>> > > >> > and when the network comes back, one node kills the other node
>>> > > >> > -log on node 1:
>>> > > >> > Jan 21 11:45:28 corosync [CMAN ] memb: Sending KILL to node 2
>>> > > >> >
>>> > > >> > -log on node 2:
>>> > > >> > Jan 21 11:45:28 corosync [CMAN ] memb: got KILL for node 2
>>> >
>>> > That is how fencing works.
>>> >
>>> > Kind regards,
>>> >
>>> > Michael Schwartzkopff
>>> >
>>>
>>> Hi All
>>>
>>> Many thanks for your replies.
>>> I will update my scenario to ask about adding some devices for stonith:
>>> - Option 1
>>> I will ask for two VMware virtual machines, so I can try fence_vmware (a
>>> sketch follows after this list).
>>> - Option 2
>>> The project may need shared storage. In that case, the shared storage will
>>> be a NAS that I can add to my nodes via iSCSI, so I can try fence_scsi.
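>>>
>>> For option 1, a fence_vmware_soap resource might look roughly like this
>>> (the vCenter address, credentials and VM names are placeholders, not a
>>> real configuration):
>>>
>>> pcs stonith create fence-vmware fence_vmware_soap \
>>>     ipaddr=vcenter.example.com login=fenceuser passwd=secret ssl=1 \
>>>     pcmk_host_map="serverHA1:VM_serverHA1;serverHA2:VM_serverHA2"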
>>>
>>> I will post updates here.
>>>
>>> Many thanks to all for support
>>> Andrea
>>>
>>
>>
>>
>> Some news:
>>
>> - Option 2
>> In the customer environment I configured an iSCSI target that our project
>> will use as a cluster filesystem:
>>
>> # run on one node only ([ONE]): create the clustered VG and the GFS2 filesystem
>> [ONE]pvcreate /dev/sdb
>> [ONE]vgcreate -Ay -cy cluster_vg /dev/sdb
>> [ONE]lvcreate -L*G -n cluster_lv cluster_vg
>> # -j2 = one journal per node; the name before ":" must match the cluster name
>> [ONE]mkfs.gfs2 -j2 -p lock_dlm -t ProjectHA:ArchiveFS /dev/cluster_vg/cluster_lv
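>>
>> As a quick sanity check (a sketch; [BOTH] here just means "run on both
>> nodes", and it assumes dlm/clvmd are already running), the volume group
>> should show up as clustered and be visible on each node:
>>
>> [BOTH]vgdisplay cluster_vg | grep -i clustered
>> [BOTH]lvs cluster_vg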
>>
>> Now I can add a Filesystem resource:
>>
>> [ONE]pcs resource create clusterfs Filesystem \
>>     device="/dev/cluster_vg/cluster_lv" directory="/var/mountpoint" \
>>     fstype="gfs2" options="noatime" op monitor interval=10s clone interleave=true
>>
>> and I can read and write from both nodes.
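>>
>> (To double-check from the cluster side, one way, as a sketch, is to confirm
>> the clone is started and the filesystem is mounted on each node:)
>>
>> [BOTH]pcs status resources
>> [BOTH]mount | grep /var/mountpoint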
>>
>>
>> Now I'd like to use this device with fence_scsi.
>> Is that OK? Because I see this in the man page:
>> "The fence_scsi agent works by having each node in the cluster register a
>> unique key with the SCSI device(s). Once registered, a single node will
>> become the reservation holder by creating a "write exclusive,
>> registrants only" reservation on the device(s). The result is that only
>> registered nodes may write to the device(s)"
>> That's no good for me; I need both nodes to be able to write to the device.
>> So, do I need another device to use with fence_scsi? In that case I will try
>> to create two partitions, sdb1 and sdb2, on this device, and use sdb1 as
>> clusterfs and sdb2 for fencing.
>>
>>
>> If I try to test this manually, before any operation I get:
>> [ONE]sg_persist -n --read-keys
>> --device=/dev/disk/by-id/scsi-36e843b608e55bb8d6d72d43bfdbc47d4
>> PR generation=0x27, 1 registered reservation key follows:
>> 0x98343e580002734d
>>
>>
>> Then I try to set serverHA1's key:
>> [serverHA1]fence_scsi -d
>> /dev/disk/by-id/scsi-36e843b608e55bb8d6d72d43bfdbc47d4 -f /tmp/miolog.txt -n
>> serverHA1 -o on
>>
>> But nothing has changed:
>> [ONE]sg_persist -n --read-keys
>> --device=/dev/disk/by-id/scsi-36e843b608e55bb8d6d72d43bfdbc47d4
>> PR generation=0x27, 1 registered reservation key follows:
>> 0x98343e580002734d
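>>
>> (For reference, the current reservation holder, as opposed to the registered
>> keys, can be read with the --read-reservation variant of the same command:)
>>
>> sg_persist -n --in --read-reservation \
>>     --device=/dev/disk/by-id/scsi-36e843b608e55bb8d6d72d43bfdbc47d4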
>>
>>
>> and in the log:
>> gen 26 17:53:27 fence_scsi: [debug] main::do_register_ignore
>> (node_key=4d5a0001, dev=/dev/sde)
>> gen 26 17:53:27 fence_scsi: [debug] main::do_reset (dev=/dev/sde, status=6)
>> gen 26 17:53:27 fence_scsi: [debug] main::do_register_ignore (err=0)
>>
>> The same happens when I try on serverHA2.
>> Is this normal?
>>
>>
>> In any case, I try to create a stonith device:
>> [ONE]pcs stonith create iscsi-stonith-device fence_scsi
>> pcmk_host_list="serverHA1 serverHA2"
>> devices=/dev/disk/by-id/scsi-36e843b608e55bb8d6d72d43bfdbc47d4 meta
>> provides=unfencing
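>>
>> (If stonith is not already enabled cluster-wide, something like this would
>> be needed too; a sketch, not taken from the actual configuration:)
>>
>> pcs property set stonith-enabled=true
>> pcs stonith show iscsi-stonith-device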
>>
>> and the cluster status is OK:
>> [ONE] pcs status
>> Cluster name: MyCluHA
>> Last updated: Tue Jan 27 11:21:48 2015
>> Last change: Tue Jan 27 10:46:57 2015
>> Stack: cman
>> Current DC: serverHA1 - partition with quorum
>> Version: 1.1.11-97629de
>> 2 Nodes configured
>> 5 Resources configured
>>
>>
>> Online: [ serverHA1 serverHA2 ]
>>
>> Full list of resources:
>>
>> Clone Set: ping-clone [ping]
>> Started: [ serverHA1 serverHA2 ]
>> Clone Set: clusterfs-clone [clusterfs]
>> Started: [ serverHA1 serverHA2 ]
>> iscsi-stonith-device (stonith:fence_scsi): Started serverHA1
>>
>>
>>
>> How can I test this from a remote connection?
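>>
>> (One way to exercise the device from any cluster node, as a sketch, is to
>> fence a node on purpose and check that it loses write access to the shared
>> disk:)
>>
>> [serverHA1]pcs stonith fence serverHA2
>> # serverHA2 should then be unable to write to the device until it is
>> # unfenced and rejoins the cluster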
>>
>>
>> Andrea
>>
>> _______________________________________________
>> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
>> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>>
>> Project Home: http://www.clusterlabs.org
>> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>> Bugs: http://bugs.clusterlabs.org
>
>
>
> --
> this is my life and I live it as long as God wills
--
this is my life and I live it as long as God wills