[Pacemaker] Fw: Cluster resources failing to move
Tommy Cooper
tomcooper83 at yahoo.com
Mon Mar 4 22:23:34 UTC 2013
Is this the correct way to do it?
primitive p_asterisk ocf:heartbeat:asterisk \
params user="root" group="root" maxfiles="65536" \
op start interval="0" timeout="30s" \
op monitor interval="10s" timeout="30s" \
op stop interval="0" timeout="30s" migration-threshold="1"
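For comparison: migration-threshold is a resource meta attribute rather than an operation attribute, so crmsh would normally expect it in a meta block instead of on the stop op. A sketch with the same values:

primitive p_asterisk ocf:heartbeat:asterisk \
params user="root" group="root" maxfiles="65536" \
op start interval="0" timeout="30s" \
op monitor interval="10s" timeout="30s" \
op stop interval="0" timeout="30s" \
meta migration-threshold="1"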
I tried stopping the asterisk service using service asterisk stop. I repeated that at least four times, but the service keeps restarting on the same node.
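One quick way to see whether those failures are actually being counted (stock pacemaker tooling; -1 prints a single snapshot, -f includes fail counts):

crm_mon -1 -f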
----- Forwarded Message -----
From: emmanuel segura <emi2fast at gmail.com>
To: Tommy Cooper <tomcooper83 at yahoo.com>; The Pacemaker cluster resource manager <pacemaker at oss.clusterlabs.org>
Sent: Monday, March 4, 2013 11:05 PM
Subject: Re: [Pacemaker] Cluster resources failing to move
From Suse Docs
7.4.2. Cleaning Up Resources
A resource will be automatically restarted if it fails, but each failure raises the resource's failcount. If a migration-threshold has been set for that resource, the node will no longer be allowed to run the resource as soon as the number of failures has reached the migration threshold.
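A minimal example of resetting that failcount, assuming the resource and node names from the config quoted below (crm shell):

crm resource cleanup p_asterisk
crm resource failcount p_asterisk show node1.localdomain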
2013/3/4 Tommy Cooper <tomcooper83 at yahoo.com>
I have removed the order and colocation statements but I am still getting the same results: Asterisk keeps restarting on the same server. How can I switch to the other server when asterisk fails? I used those statements to make sure that both services run on the same server and that the virtual IP is started before asterisk.
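As a one-off failover test, independent of failure handling, the group can also be moved by hand. A sketch, assuming the node names from the config below:

crm resource move voip node2.localdomain
crm resource unmove voip

unmove clears the location constraint that move adds, so the cluster is free to place the group again.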
>
>----- Forwarded Message -----
>From: Jake Smith <jsmith at argotec.com>
>To: Tommy Cooper <tomcooper83 at yahoo.com>; The Pacemaker cluster resource manager <pacemaker at oss.clusterlabs.org>
>
>Sent: Monday, March 4, 2013 10:00 PM
>Subject: Re: [Pacemaker] Fw: Cluster resources failing to move
>
>----- Original Message -----
>> From: "Tommy Cooper" <tomcooper83 at yahoo.com>
>> To: pacemaker at oss.clusterlabs.org
>
>> Sent: Monday, March 4, 2013 3:51:03 PM
>
>> Subject: [Pacemaker] Fw: Cluster resources failing to move
>>
>> Thank you for your prompt reply. I actually wanted to create an
>> active/passive cluster, so that if either the network or Asterisk
>> fails, the services can be migrated to the other server. As I
>> already stated earlier, the current config notifies me when asterisk
>> is down but does not start asterisk on the other server.
>
>
>Did asterisk restart on the same server? <- this is what I would expect pacemaker to do.
>
>Removing the colocation (and order) statements didn't have any effect?
>
>>
>>
>
>> ----- Forwarded Message -----
>> From: Jake Smith <jsmith at argotec.com>
>> To: Tommy Cooper <tomcooper83 at yahoo.com>; The Pacemaker cluster
>> resource manager <pacemaker at oss.clusterlabs.org>
>> Sent: Monday, March 4, 2013 9:29 PM
>> Subject: Re: [Pacemaker] Cluster resources failing to move
>>
>>
>> ----- Original Message -----
>> > From: "Tommy Cooper" < tomcooper83 at yahoo.com >
>> > To: pacemaker at oss.clusterlabs.org
>> > Sent: Monday, March 4, 2013 2:19:22 PM
>> > Subject: [Pacemaker] Cluster resources failing to move
>> >
>> > Hi,
>> >
>> >
>> > I am trying to configure a 2-node cluster using pacemaker 1.1.7
>> > and corosync 1.4.1. I want pacemaker to provide the virtual IP
>> > (192.168.1.115), monitor Asterisk (PBX) and fail over to the other
>> > server. If I switch off pacemaker and/or corosync, the cluster
>> > resources switch to the other node. I have also configured the
>> > res_corosync.so module in Asterisk. However, if I stop asterisk
>> > using service *service name* stop, the following error is shown:
>> >
>> > Failed actions:
>> > p_asterisk_monitor_10000 (node=node1.localdomain, call=10, rc=7,
>> > status=complete): not running
>> >
>>
>> What do you want/expect to happen when you stop asterisk that
>> doesn't? The monitor showing not running (failed) is expected if
>> some outside event stopped the resource.
>>
>> > Corosync configuration:
>> >
>> > compatibility: whitetank
>> > totem {
>> > version: 2
>> > secauth: off
>> > interface {
>> > member {
>> > memberaddr: 192.168.1.113
>> > }
>> > member {
>> > memberaddr: 192.168.1.114
>> > }
>> > ringnumber: 0
>> > bindnetaddr: 192.168.1.0
>> > mcastport: 5405
>> > ttl: 1
>> > }
>> > transport: udpu
>> > }
>> > logging {
>> > fileline: off
>> > to_logfile: yes
>> > to_syslog: yes
>> > debug: on
>> > logfile: /var/log/cluster/corosync.log
>> > debug: off
>> > timestamp: on
>> > logger_subsys {
>> > subsys: AMF
>> > debug: off
>> > }
>> > }
>> >
>> > amf {
>> > mode: disabled
>> > }
>> > quorum {
>> > provider: corosync_votequorum
>> > expected_votes: 3
>> > }
>> >
>> > crm configure show:
>> >
>> > node node1.localdomain
>> > node node2.localdomain
>> > primitive failover-ip ocf:heartbeat:IPaddr2 \
>> > params ip="192.168.1.115" cidr_netmask="24" nic="eth6" \
>> > op start interval="0" timeout="30" \
>> > op monitor interval="1s" timeout="30" start-delay="0" \
>> > op stop interval="0" timeout="30s" \
>> > meta target-role="started"
>> > primitive p_asterisk ocf:heartbeat:asterisk \
>> > params user="root" group="root" maxfiles="65536" \
>> > op start interval="0" timeout="30s" \
>> > op monitor interval="10s" timeout="30s" \
>> > op stop interval="0" timeout="30s"
>> > group voip failover-ip p_asterisk
>>
>> You don't need these colocation and order statements if you have the
>> resources grouped - remove them. The group is a syntax shortcut for
>> writing order and colocation statements so the above is enforcing an
>> order of ip then asterisk and a colocation of asterisk with ip. Also
>> the colocation below is backwards and *might* be causing your
>> issues.
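>>
>> For illustration, a sketch with your resource names (the constraint
>> names are just examples): in crmsh, "colocation X inf: A B" places A
>> on the node where B runs, so the constraints the group already
>> implies are
>>
>> colocation asterisk_with_ip inf: p_asterisk failover-ip
>> order ip_before_asterisk inf: failover-ip p_asterisk
>>
>> Note that the operand order in that colocation is the reverse of the
>> one in your config below.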
>>
>> HTH
>> Jake
>>
>> > colocation asterisk_cluster inf: failover-ip p_asterisk
>> > order start_order inf: failover-ip p_asterisk
>> > property $id="cib-bootstrap-options" \
>> > dc-version="1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14" \
>> > cluster-infrastructure="openais" \
>> > expected-quorum-votes="2" \
>> > stonith-enabled="false" \
>> > no-quorum-policy="ignore"
>> > rsc_defaults $id="rsc-options" \
>> > resource-stickiness="100"
>> >
--
this is my life and I live it for as long as God wills