[Pacemaker] Cluster Test Suite CTS Pacemaker 1.2 not doing anything

Koch, Sebastian Sebastian.Koch at netzwerk.de
Fri Feb 5 06:03:55 EST 2010


Hi, thanks for your answer.

Only the CTS script itself is running; it has no child processes:

8435 ?        Ss     0:00  \_ sshd: root at pts/0
 8437 pts/0    Ss     0:00  |   \_ -bash
 8459 pts/0    S+     0:00  |       \_ /usr/bin/python ./CTSlab.py --nodes cluster01-node1 cluster01-node2 --benchmark --stack ais --logfile /var/log/messages --schema pacem
 8462 ?        Ss     0:00  \_ sshd: root at pts/1
 8464 pts/1    Ss     0:00      \_ -bash
 8479 pts/1    R+     0:00          \_ ps axf

Quite strange.
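Since CTS drives the cluster nodes over ssh as root, I also checked that the
exerciser can reach both nodes non-interactively (a quick sanity check; the
hostnames are the ones from my setup):

  ssh -o BatchMode=yes root@cluster01-node1 uname -n
  ssh -o BatchMode=yes root@cluster01-node2 uname -n

Both come back with the node name and no password prompt.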

Sebastian Koch
                                                         

NETZWERK GmbH

Phone:  +49.711.220 5498 81
Mobile: +49.160.907 908 30
Fax:  +49.711.220 5499 27
Email: sebastian.koch at netzwerk.de
Web:  www.netzwerk.de
NETZWERK GmbH, Kurze Str. 40, 70794 Filderstadt-Bonlanden
Managing Directors: Siegfried Herner, Hans-Baldung Luley, Olaf Müller-Haberland
Registered office: Filderstadt-Bonlanden, Amtsgericht Stuttgart HRB 225547, WEEE Reg. No. DE 185 622 492

-----Original Message-----
From: Andrew Beekhof [mailto:andrew at beekhof.net]
Sent: Thursday, 4 February 2010 21:50
To: pacemaker at oss.clusterlabs.org
Subject: Re: [Pacemaker] Cluster Test Suite CTS Pacemaker 1.2 not doing anything

If you run "ps axf", are there any child processes of CTS?

On Thu, Feb 4, 2010 at 7:16 PM, Koch, Sebastian
<Sebastian.Koch at netzwerk.de> wrote:
> Hi,
>
>
>
> I have a problem testing my cluster. I have a working two-node setup like
> this:
>
> ============
> Last updated: Thu Feb  4 19:08:38 2010
> Stack: openais
> Current DC: prolog01-node1 - partition with quorum
> Version: 1.0.7-54d7869bfe3691eb723b1d47810e5585d8246b58
> 2 Nodes configured, 3 expected votes
> 2 Resources configured.
> ============
>
> Online: [ prolog01-node2 prolog01-node1 ]
>
>  Master/Slave Set: ms_drbd_mysql0
>      Masters: [ prolog01-node2 ]
>      Slaves: [ prolog01-node1 ]
>  Resource Group: grp_MySQL
>      res_Filesystem     (ocf::heartbeat:Filesystem):    Started prolog01-node2
>      res_ClusterIP      (ocf::heartbeat:IPaddr2):       Started prolog01-node2
>      res_MySQL  (lsb:mysql):    Started prolog01-node2
>      res_Apache (lsb:apache2):  Started prolog01-node2
>
>
> and a third machine with the same Pacemaker version installed. I configured
> ssh keys, syslog and the hosts files. Every node can ping and log in to each
> other. When I start the tests with the following command:
>
>
>
> ./CTSlab.py --nodes 'prolog01-node1 prolog01-node2' --benchmark --stack ais
> --logfile /var/log/messages --schema pacemaker-1.0 3
>
> Feb 04 19:10:32 Random seed is: 1265307032
> Feb 04 19:10:32 >>>>>>>>>>>>>>>> BEGINNING 3 TESTS
> Feb 04 19:10:32 System log files: /var/log/messages
> Feb 04 19:10:32 Stack:            openais (whitetank)
> Feb 04 19:10:32 Schema:           pacemaker-1.0
> Feb 04 19:10:32 Random Seed:      1265307032
> Feb 04 19:10:32 Enable Stonith:   1
> Feb 04 19:10:32 Enable Fencing:   1
> Feb 04 19:10:32 Enable Standby:   1
> Feb 04 19:10:32 Enable Resources: 0
>
> Nothing happens: it says that it is starting, but then nothing follows. I can
> see some messages in the logfile, and it seems that the script does not reach
> the other nodes. If I run "ping -nq -c1 -w1 prolog01-node1" it succeeds.
>
>
>
> Feb  4 19:10:22 prolog01-node3 CTS: debug: cmd: target=localhost, rc=0: ping -nq -c1 -w1 prolog01-node1 >/dev/null 2>&1
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[CIBResource]:    0
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[CIBfilename]:    None
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[CMclass]: CM_ais.crm_whitetank
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[ClobberCIB]:     0
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[DoBSC]:  0
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[DoFencing]:      1
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[DoStandby]:      1
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[DoStonith]:      1
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[IPBase]: 127.0.0.10
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[ListTests]:      0
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[LogFileName]: /var/log/messages
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[RandSeed]: 1265307032
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[RecvLoss]:       0.0
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[Schema]: pacemaker-1.0
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[Stack]:  openais (whitetank)
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[SyslogFacility]: None
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[XmitLoss]:       0.0
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[all-once]:       0
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[at-boot]:        1
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[benchmark]:      1
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[experimental-tests]:     0
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[logger]: [<__main__.StdErrLog instance at 0x1b9b4d0>, <__main__.SysLog instance at 0x1befe18>]
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[logrestartcmd]: /etc/init.d/syslog-ng restart 2>&1 > /dev/null
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[loop-minutes]:   60
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[loop-tests]:     1
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[nodes]: ['prolog01-node1', 'prolog01-node2']
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[oprofile]:       []
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[stonith-params]: hostlist=all
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[stonith-type]: external/ssh
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[unsafe-tests]:   1
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[use_logd]:       0
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[valgrind-opts]: --leak-check=full --show-reachable=yes --trace-children=no --num-callers=25 --gen-suppressions=all --suppressions=/usr/share/pacemaker/cts/cts.supp
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[valgrind-prefix]:        None
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[valgrind-procs]: cib crmd attrd pengine
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[valgrind-tests]: 0
> Feb  4 19:10:32 prolog01-node3 CTS: debug: Environment[warn-inactive]:  0
> Feb  4 19:10:32 prolog01-node3 CTS: >>>>>>>>>>>>>>>> BEGINNING 3 TESTS
> Feb  4 19:10:32 prolog01-node3 CTS: System log files: /var/log/messages
> Feb  4 19:10:32 prolog01-node3 CTS: Stack:            openais (whitetank)
> Feb  4 19:10:32 prolog01-node3 CTS: Schema:           pacemaker-1.0
> Feb  4 19:10:32 prolog01-node3 CTS: Random Seed:      1265307032
> Feb  4 19:10:32 prolog01-node3 CTS: Enable Stonith:   1
> Feb  4 19:10:32 prolog01-node3 CTS: Enable Fencing:   1
> Feb  4 19:10:32 prolog01-node3 CTS: Enable Standby:   1
> Feb  4 19:10:32 prolog01-node3 CTS: Enable Resources: 0
> Feb  4 19:10:32 prolog01-node3 CTS: debug: cmd: target=localhost, rc=0: ping -nq -c1 -w1 prolog01-node1 >/dev/null 2>&1
> Feb  4 19:11:02 prolog01-node3 CTS: debug: Waiting for node prolog01-node1 to come up
> Feb  4 19:11:02 prolog01-node3 CTS: debug: cmd: target=localhost, rc=0: ping -nq -c1 -w1 prolog01-node1 >/dev/null 2>&1
>
>
>
>  I have been trying for days to get this running, but I have no clue anymore.
> Hopefully some of you can help me or instruct me on how to set up a test
> scenario.
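>
> In case it is relevant, this is roughly how I set up the ssh keys on the
> third machine (a sketch from memory, since CTS needs passwordless root ssh
> to the cluster nodes; the exact ssh-copy-id invocation may differ on your
> distribution):
>
>   ssh-keygen -t rsa
>   ssh-copy-id root@prolog01-node1
>   ssh-copy-id root@prolog01-node2
>   ssh root@prolog01-node1 true   # should return without a password prompt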
>
>
>
> Thanks in advance for your help.
>
> Best Regards from Germany
>
>
>
> Sebastian Koch
>

_______________________________________________
Pacemaker mailing list
Pacemaker at oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker



