[Pacemaker] iSCSI migration too slow (disk errors), what to do ...
Jelle de Jong
jelledejong at powercraft.nl
Fri Jun 10 15:28:54 UTC 2011
Hello everybody,
I have been testing two clusters: one for iSCSI and one for the KVM hosts.
The problem is that most of my KVM guest file systems get corrupted when
I migrate my iSCSI target while the guests are under heavy disk load.
I am at the point of giving up. I have done a lot of testing and it keeps
failing. Please help.
I use bonnie++ on one of the KVM guests, and siege from another network,
to test the connectivity to HTTP services running on the KVM guests.
I am using crm node standby on the iSCSI target cluster to test the
migration (I can also use crm resource migrate/move rg_iscsi), so it is
not even a worst-case scenario for the migration.
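For reference, the test procedure is roughly this (godfrey is one of my
iSCSI target nodes, rg_iscsi the resource group; commands as I run them):

```
# put the active iSCSI target node in standby, forcing the migration
crm node standby godfrey
# ... watch the kvm guests while bonnie++/siege are running ...
# bring the node back afterwards
crm node online godfrey

# alternative: move just the resource group instead of the whole node
crm resource migrate rg_iscsi
```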
The migration on the clusters works, but in the meantime the KVM guests
are dying... When the I/O load is low, migration seems to go fine...
# on both of my kvm host nodes:
/etc/lvm/lvm.conf
write_cache_state = 0
/etc/iscsi/iscsid.conf
node.session.timeo.replacement_timeout = 480
node.conn[0].timeo.noop_out_interval = 15
node.conn[0].timeo.noop_out_timeout = 30
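If I read these open-iscsi settings correctly (please correct me if I am
wrong), the timing budget works out roughly as sketched below: a dead
session is noticed after about noop_out_interval + noop_out_timeout
seconds, and the initiator then queues I/O for up to replacement_timeout
seconds before failing it up to the guest with disk errors.

```python
# Sketch of my understanding of the iscsid.conf timeouts above.
noop_out_interval = 15      # seconds between NOP-Out pings
noop_out_timeout = 30       # seconds to wait for the NOP-In reply
replacement_timeout = 480   # seconds to queue I/O while the session is down

# worst-case delay before the path is considered down
detection_window = noop_out_interval + noop_out_timeout
print(detection_window)     # 45

# the target migration has to complete (and the session recover)
# within this many seconds, or the guests start seeing I/O errors
print(replacement_timeout)  # 480
```

So with these values the migration should have almost eight minutes of
headroom, which makes the corruption under load all the more puzzling.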
# example of a running KVM guest using virtio disks with caching disabled
/usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 1024 -smp
2,sockets=2,cores=1,threads=1 -name sylvia.powercraft.nl -uuid
57eacaa4-4337-c626-45e3-f9aeb77aced1 -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/sylvia.powercraft.nl.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc -boot
order=c,menu=off -drive
file=/dev/lvm1-vol/kvm05-disk,if=none,id=drive-virtio-disk0,boot=on,format=raw,cache=none
-device
virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
-device
virtio-net-pci,vlan=0,id=net0,mac=52:54:00:35:5b:ab,bus=pci.0,addr=0x3
-net tap,fd=20,vlan=0,name=hostnet0 -chardev pty,id=charserial0 -device
isa-serial,chardev=charserial0,id=serial0 -usb -device
usb-tablet,id=input0 -vnc 127.0.0.1:4 -k en-us -vga cirrus -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
# iscsi target on drbd/iscsi cluster
primitive iscsi0_target ocf:heartbeat:iSCSITarget \
	params implementation="tgt" \
		iqn="iqn.2011-04.nl.powercraft:storage.iscsi0" tid="1" \
		allowed_initiators="192.168.24.1 192.168.24.17 192.168.24.18" \
		additional_parameters="DefaultTime2Retain=60 DefaultTime2Wait=5" \
	op stop interval="0" timeout="30s" \
	op monitor interval="10s"
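After a migration I check (assuming the tgt implementation configured
above) whether the target actually came up on the new node, and whether
the initiators logged back in:

```
# on the node that should now host the target, list the tgt targets
tgtadm --lld iscsi --mode target --op show

# and on an initiator (kvm host), verify the session is logged in again
iscsiadm -m session
```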
root@godfrey:~# crm configure show
http://paste.debian.net/119445/
root@hennessy:~# cat /etc/iscsi/iscsid.conf
http://paste.debian.net/119447/
The underlying network should be fast enough: all Cisco and Intel
1000 Mb/s devices.
Thanks in advance,
Kind regards,
Jelle