[Pacemaker] running same resource on both nodes through clone
ESWAR RAO
eswar7028 at gmail.com
Fri Jun 7 16:30:41 UTC 2013
Hi Dejan,
Thanks for the info.
Please correct me if my understanding below is wrong:
With a clone, we can achieve an active/active model, as I understood from the
Pacemaker guide. Isn't that hot-standby mode? So the resources would be up and
running on both nodes.
In my setup, I am just unable to associate a VIP with that active/active
model.
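Would something like the following work? This is just an untested sketch that
reuses my resource names (the constraint IDs are made up); the idea is that
only the VIP's placement depends on the clones and not the other way around:

# crm configure colocation vip-with-d1 inf: ha_vip oc_d1_clone
# crm configure colocation vip-with-d2 inf: ha_vip oc_d2_clone
# crm configure order d1-before-vip inf: oc_d1_clone ha_vip
# crm configure order d2-before-vip inf: oc_d2_clone ha_vip

That way the clones should stay free to run on both nodes, while the VIP runs
on one node where the clone instances are active and moves if they fail there.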
Thanks
Eswar
On Fri, Jun 7, 2013 at 9:04 PM, Dejan Muhamedagic <dejanmm at fastmail.fm> wrote:
> Hi,
>
> On Fri, Jun 07, 2013 at 08:47:19PM +0530, ESWAR RAO wrote:
> > Hi Dejan,
> >
> > Thanks for the response.
> >
> > In our setup, we want the resources to start on the 2 nodes (active/active)
> > so that the downtime would be less.
>
> Hot-standby, I guess. I think there was a discussion about it,
> but I cannot recall the details. Sorry.
>
> Thanks,
>
> Dejan
>
> > All clients connect to the VIP. If the resource on one node goes down,
> > I expect the VIP to be moved to the other node, and since the resource is
> > already running on that node, the downtime would be small.
> >
> > I thought of configuring them with is-managed=false so that Pacemaker
> > wouldn't restart the resources on the failed node.
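> > (If I understand it correctly, that would be something like the commands
> > below, although an unmanaged resource is then not started, stopped or
> > recovered by Pacemaker at all, so I am not sure it is the right approach:
> >
> > # crm resource unmanage oc_d1
> > # crm resource unmanage oc_d2
> >
> > or equivalently meta is-managed="false" on the primitives.)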
> >
> > Thanks
> > Eswar
> >
> >
> > On Fri, Jun 7, 2013 at 7:32 PM, Dejan Muhamedagic <dejanmm at fastmail.fm> wrote:
> >
> > > Hi,
> > >
> > > On Fri, Jun 07, 2013 at 12:49:49PM +0530, ESWAR RAO wrote:
> > > > Hi All,
> > > >
> > > > I am trying to run the same RA on both nodes using a clone.
> > > > My setup is a 2-node cluster with Heartbeat + Pacemaker.
> > > >
> > > > The RAs aren't started automatically;
> > > > they are started through Pacemaker only.
> > > >
> > > > +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > > > # crm configure primitive ha_vip ocf:IPaddr2 params ip=192.168.101.205
> > > >   cidr_netmask=32 nic=eth1 op monitor interval=30s
> > > >
> > > > # crm configure primitive oc_d1 lsb::testd1 meta allow-migrate="true"
> > > >   migration-threshold="1" failure-timeout="30s" op monitor interval="3s"
> > > > # crm configure clone oc_d1_clone oc_d1 meta clone-max="2"
> > > >   clone-node-max="1" globally-unique="false" interleave="true"
> > > >
> > > > # crm configure primitive oc_d2 lsb::testd2 meta allow-migrate="true"
> > > >   migration-threshold="3" failure-timeout="30s" op monitor interval="5s"
> > > > # crm configure clone oc_d2_clone oc_d2 meta clone-max="2"
> > > >   clone-node-max="1" globally-unique="false" interleave="true"
> > > >
> > > > # crm configure colocation oc-ha_vip inf: ha_vip oc_d1_clone oc_d2_clone
> > > > +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > > >
> > > > I observe that the RAs are not getting started on the other node.
> > > >
> > > > ha_vip (ocf::heartbeat:IPaddr2): Started ubuntu190
> > > > Clone Set: oc_d1_clone [oc_d1]
> > > > Started: [ ubuntu190 ]
> > > > Stopped: [ oc_d1:1 ]
> > > > Clone Set: oc_d2_clone [oc_d2]
> > > > Started: [ ubuntu190 ]
> > > > Stopped: [ oc_d2:1 ]
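> > > > (Is there a way to see why the second instances stay stopped, e.g. the
> > > > allocation scores, with something like
> > > >
> > > > # crm_simulate -sL
> > > >
> > > > if that is the right tool for it?)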
> > > >
> > > >
> > > > But if I remove the colocation constraint, then the RAs do start on
> > > > both nodes.
> > > > But without the colocation, if any RA fails, the VIP will not migrate,
> > > > which is bad.
> > >
> > > Can you explain why you need the oc_* resources running on both
> > > nodes while at the same time they depend on the IP address, which is
> > > not cloned? That looks to me like a condition which is simply impossible
> > > to meet.
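> > > (Just to spell out the direction, with placeholder names c1, A and B: a
> > > simple
> > >
> > >   colocation c1 inf: A B
> > >
> > > places A where B runs, i.e. A depends on B. Putting three resources into
> > > one colocation creates a resource set instead, whose semantics are easy
> > > to get backwards, so separate two-resource constraints are usually
> > > easier to reason about.)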
> > >
> > > Thanks,
> > >
> > > Dejan
> > >
> > > > Can someone help me out with this issue?
> > > >
> > > >
> > > > Thanks
> > > > Eswar
>
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>