[Pacemaker] Why is a node on which no failure occurred declared "lost"?
yusuke iida
yusk.iida at gmail.com
Fri Jan 31 07:20:24 UTC 2014
Hi all,
I am measuring the performance of Pacemaker with the following combination of components:
Pacemaker-1.1.11.rc1
libqb-0.16.0
corosync-2.3.2
All nodes are KVM virtual machines.
After starting 14 nodes, I forcibly stopped one node, vm01, using
"virsh destroy vm01" on the KVM host.
Then, in addition to the forcibly stopped node, several other nodes were
also separated from the cluster.
Around the same time, corosync output a large number of "Retransmit List:"
log messages.
Why are nodes on which no failure has occurred declared "lost"?
Please advise if there is a problem somewhere in my setup.
I have attached a report from when the problem occurred.
https://drive.google.com/file/d/0BwMFJItoO-fVMkFWWWlQQldsSFU/edit?usp=sharing
Regards,
Yusuke
--
----------------------------------------
METRO SYSTEMS CO., LTD
Yusuke Iida
Mail: yusk.iida at gmail.com
----------------------------------------