[Pacemaker] Cluster Test Suite CTS Pacemaker 1.2 not doing anything
    Koch, Sebastian 
    Sebastian.Koch at netzwerk.de
       
    Thu Feb  4 18:16:27 UTC 2010
    
    
  
Hi,
 
I have a problem testing my cluster. I have a working two-node setup that
looks like this:
 
============
Last updated: Thu Feb  4 19:08:38 2010
Stack: openais
Current DC: prolog01-node1 - partition with quorum
Version: 1.0.7-54d7869bfe3691eb723b1d47810e5585d8246b58
2 Nodes configured, 3 expected votes
2 Resources configured.
============
 
Online: [ prolog01-node2 prolog01-node1 ]
 
 Master/Slave Set: ms_drbd_mysql0
     Masters: [ prolog01-node2 ]
     Slaves: [ prolog01-node1 ]
 Resource Group: grp_MySQL
     res_Filesystem     (ocf::heartbeat:Filesystem):    Started prolog01-node2
     res_ClusterIP      (ocf::heartbeat:IPaddr2):       Started prolog01-node2
     res_MySQL  (lsb:mysql):    Started prolog01-node2
     res_Apache (lsb:apache2):  Started prolog01-node2
 
and a third machine with the same Pacemaker version installed. I have
configured SSH keys (roughly as sketched after the test output below), syslog
and the hosts files; every node can ping and log into the others. When I start
the tests with the following command:
 
./CTSlab.py --nodes 'prolog01-node1 prolog01-node2' --benchmark \
    --stack ais --logfile /var/log/messages --schema pacemaker-1.0 3
Feb 04 19:10:32 Random seed is: 1265307032
Feb 04 19:10:32 >>>>>>>>>>>>>>>> BEGINNING 3 TESTS
Feb 04 19:10:32 System log files: /var/log/messages
Feb 04 19:10:32 Stack:            openais (whitetank)
Feb 04 19:10:32 Schema:           pacemaker-1.0
Feb 04 19:10:32 Random Seed:      1265307032
Feb 04 19:10:32 Enable Stonith:   1
Feb 04 19:10:32 Enable Fencing:   1
Feb 04 19:10:32 Enable Standby:   1
Feb 04 19:10:32 Enable Resources: 0
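 
For reference, the key setup I mentioned above is roughly along these lines
(assuming the stock OpenSSH ssh-keygen and ssh-copy-id tools; adjust key type
and paths as needed), since as far as I understand the machine running
CTSlab.py has to be able to log into every cluster node as root without a
password:
 
# rough sketch only: give the test machine passwordless root access to the nodes
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa     # key with an empty passphrase
for n in prolog01-node1 prolog01-node2; do
    ssh-copy-id root@"$n"                    # install the key for root on each node
done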
 
Nothing happens: the run announces that it is beginning the tests, but then it
never gets any further. I can see some messages in the logfile, and it seems
that the script does not reach the other nodes, even though running
ping -nq -c1 -w1 prolog01-node1 by hand succeeds (an ssh-level check is
sketched after the log excerpt below). Here is the relevant part of
/var/log/messages:
 
Feb  4 19:10:22 prolog01-node3 CTS: debug: cmd: target=localhost, rc=0: ping -nq -c1 -w1 prolog01-node1 >/dev/null 2>&1
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[CIBResource]: 0
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[CIBfilename]: None
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[CMclass]: CM_ais.crm_whitetank
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[ClobberCIB]: 0
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[DoBSC]:  0
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[DoFencing]: 1
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[DoStandby]: 1
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[DoStonith]: 1
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[IPBase]: 127.0.0.10
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[ListTests]: 0
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[LogFileName]: /var/log/messages
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[RandSeed]: 1265307032
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[RecvLoss]: 0.0
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[Schema]: pacemaker-1.0
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[Stack]:  openais (whitetank)
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[SyslogFacility]: None
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[XmitLoss]: 0.0
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[all-once]: 0
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[at-boot]: 1
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[benchmark]: 1
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[experimental-tests]: 0
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[logger]: [<__main__.StdErrLog instance at 0x1b9b4d0>, <__main__.SysLog instance at 0x1befe18>]
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[logrestartcmd]: /etc/init.d/syslog-ng restart 2>&1 > /dev/null
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[loop-minutes]: 60
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[loop-tests]: 1
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[nodes]: ['prolog01-node1', 'prolog01-node2']
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[oprofile]: []
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[stonith-params]: hostlist=all
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[stonith-type]: external/ssh
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[unsafe-tests]: 1
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[use_logd]: 0
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[valgrind-opts]: --leak-check=full --show-reachable=yes --trace-children=no --num-callers=25 --gen-suppressions=all --suppressions=/usr/share/pacemaker/cts/cts.supp
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[valgrind-prefix]: None
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[valgrind-procs]: cib crmd attrd pengine
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[valgrind-tests]: 0
Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[warn-inactive]: 0
Feb  4 19:10:32 prolog01-node3 CTS: >>>>>>>>>>>>>>>> BEGINNING 3 TESTS
Feb  4 19:10:32 prolog01-node3 CTS: System log files: /var/log/messages
Feb  4 19:10:32 prolog01-node3 CTS: Stack:            openais (whitetank)
Feb  4 19:10:32 prolog01-node3 CTS: Schema:           pacemaker-1.0
Feb  4 19:10:32 prolog01-node3 CTS: Random Seed:      1265307032
Feb  4 19:10:32 prolog01-node3 CTS: Enable Stonith:   1
Feb  4 19:10:32 prolog01-node3 CTS: Enable Fencing:   1
Feb  4 19:10:32 prolog01-node3 CTS: Enable Standby:   1
Feb  4 19:10:32 prolog01-node3 CTS: Enable Resources: 0
Feb  4 19:10:32 prolog01-node3 CTS: debug: cmd: target=localhost, rc=0: ping -nq -c1 -w1 prolog01-node1 >/dev/null 2>&1
Feb  4 19:11:02 prolog01-node3 CTS: debug: Waiting for node prolog01-node1 to come up
Feb  4 19:11:02 prolog01-node3 CTS: debug: cmd: target=localhost, rc=0: ping -nq -c1 -w1 prolog01-node1 >/dev/null 2>&1
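 
To rule out the ssh side (as far as I understand, CTS runs its commands on the
nodes over ssh as root rather than relying on ping alone), a check along the
following lines should show whether non-interactive root login actually works;
BatchMode just makes ssh fail instead of prompting for a password:
 
# minimal check, not taken from the run above: confirm non-interactive root ssh
for n in prolog01-node1 prolog01-node2; do
    if ssh -o BatchMode=yes -l root "$n" true; then
        echo "$n: non-interactive root ssh works"
    else
        echo "$n: non-interactive root ssh FAILED"
    fi
done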
 
I have been trying for days to get this running, but I am out of ideas.
Hopefully some of you can help me or show me how to set up a test scenario.
 
Thanks in advance for your help.
Best regards from Germany,
 
Sebastian Koch