[ClusterLabs] can't create master/slave resource
Klaus Wenninger
kwenning at redhat.com
Wed Sep 20 04:57:54 EDT 2017
On 09/20/2017 10:40 AM, Tiemen Ruiten wrote:
> Thank you very much for the detailed explanation. We will look for
> another way to determine master/slave status of this application then.
What you could still try is to write a kind of OCF wrapper for your
systemd service: leave starting and stopping to systemd (controlled via
systemctl) and implement whatever is needed on top of that to control the
master/slave state in the OCF resource agent.
If I remember correctly, there have been threads on this mailing list about
creating OCF wrappers for systemd services.
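As a rough illustration of that wrapper idea, a skeleton could look like the
sketch below. Everything here is hypothetical: the unit name japp@ivr.service
is taken from the thread as a placeholder, and the promote/demote bodies are
stubs - a real agent also needs a meta-data action, validation, and
master-score handling (e.g. via crm_master), which are omitted.

```shell
#!/bin/sh
# Sketch of an OCF wrapper around a systemd unit: start/stop/monitor are
# delegated to systemctl, promote/demote run application-specific commands.
# NOT a complete agent - meta-data and master-score handling are omitted.

UNIT="japp@ivr.service"        # placeholder: the wrapped systemd unit

# Standard OCF exit codes (normally provided by ocf-shellfuncs)
OCF_SUCCESS=0
OCF_ERR_GENERIC=1
OCF_ERR_UNIMPLEMENTED=3
OCF_NOT_RUNNING=7

wrapper_start() {
    systemctl start "$UNIT" || return $OCF_ERR_GENERIC
    return $OCF_SUCCESS
}

wrapper_stop() {
    systemctl stop "$UNIT" || return $OCF_ERR_GENERIC
    return $OCF_SUCCESS
}

wrapper_monitor() {
    # A real multi-state agent must additionally return OCF_RUNNING_MASTER (8)
    # when this instance currently holds the master role; how to detect that
    # is application-specific.
    systemctl is-active --quiet "$UNIT" || return $OCF_NOT_RUNNING
    return $OCF_SUCCESS
}

wrapper_promote() {
    # placeholder: run whatever makes the application become master
    return $OCF_SUCCESS
}

wrapper_demote() {
    # placeholder: the inverse of promote
    return $OCF_SUCCESS
}

case "$1" in
    start)   wrapper_start ;;
    stop)    wrapper_stop ;;
    monitor) wrapper_monitor ;;
    promote) wrapper_promote ;;
    demote)  wrapper_demote ;;
    *)       exit $OCF_ERR_UNIMPLEMENTED ;;
esac
```

Such a script would be installed under /usr/lib/ocf/resource.d/&lt;provider&gt;/
and then created as ocf:&lt;provider&gt;:&lt;name&gt; with pcs, after which
"pcs resource master" should accept it, since it advertises promote and demote.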
Regards,
Klaus
>
> On 20 September 2017 at 09:20, Tomas Jelinek <tojeline at redhat.com> wrote:
>
>
>
>     On 20. 9. 2017 at 09:03, Tomas Jelinek wrote:
>
> Hi,
>
> systemd resources cannot be used as master/slave resources. In
> order to use a resource as a master/slave, the resource must
> support promote and demote actions [1], which systemd
> resources don't.
>
> # pcs resource create test systemd:postfix
> # pcs resource master test
> # pcs cluster verify -V
>         error: native_unpack: Resource test:0 is of type systemd and therefore cannot be used as a master/slave resource
>         error: create_child_clone: Failed unpacking resource test
>         error: unpack_resources: Failed unpacking master test-master
>         Errors found during check: config not valid
>
>         You need to use an OCF resource agent (with promote and demote
>         actions implemented) for your resource for this to work.
>
>         Because the resource cannot be unpacked by pacemaker, it is
>         not shown in the "pcs status" output - pcs doesn't get any
>         info about it from pacemaker. This issue has already been
>         discussed: Pacemaker will provide info when such errors occur,
>         so pcs will be able to display it. [2]
>
>         The resource may not be running but it is still defined in the
>         configuration:
>         # pcs resource --full
>          Master: test-master
>           Resource: test (class=systemd type=postfix)
>            Operations: monitor interval=60 timeout=100 (test-monitor-interval-60)
>                        start interval=0s timeout=100 (test-start-interval-0s)
>                        stop interval=0s timeout=100 (test-stop-interval-0s)
>
> That's why you get an error that the id already exists.
>
>
>     I'm going to file a bug against pcs so it won't be possible to
>     turn systemd resources into master/slave resources, as that is not
>     supported anyway.
>
>
>     https://bugzilla.redhat.com/show_bug.cgi?id=1493416
>
>
>
> Regards,
> Tomas
>
>         [1]: http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#_requirements_for_multi_state_resource_agents
>
>         [2]: https://bugzilla.redhat.com/show_bug.cgi?id=1447951
>
>
>         On 19. 9. 2017 at 17:13, Tiemen Ruiten wrote:
>
> Hello,
>
> We have a 3-node cluster (CentOS 7.4) with several systemd
> resources configured. One of them I would like to create
> as a master/slave resource, so following the RedHat
> documentation:
>
>             pcs resource create ivr systemd:japp@ivr - works, the
>             service is started on one of the nodes.
>             pcs resource master ivr-master ivr - doesn't work as
>             expected: the service is stopped and the output of pcs
>             resource show doesn't list it anymore. However, if I try
>             the command again, I get an error saying the resource ivr
>             already exists! I have to delete the resource and recreate
>             it to get the service to run.
>
> pacemaker-libs-1.1.16-12.el7_4.2.x86_64
> pacemaker-cluster-libs-1.1.16-12.el7_4.2.x86_64
> pacemaker-1.1.16-12.el7_4.2.x86_64
> pacemaker-cli-1.1.16-12.el7_4.2.x86_64
> corosynclib-2.4.0-9.el7_4.2.x86_64
> corosync-2.4.0-9.el7_4.2.x86_64
>
> Am I doing something wrong?
>
> --
> Tiemen Ruiten
> Systems Engineer
> R&D Media
>
>
>             _______________________________________________
>             Users mailing list: Users at clusterlabs.org
>             http://lists.clusterlabs.org/mailman/listinfo/users
>
>             Project Home: http://www.clusterlabs.org
>             Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>             Bugs: http://bugs.clusterlabs.org
>
>
>
>
> --
> Tiemen Ruiten
> Systems Engineer
> R&D Media
>
>
--
Klaus Wenninger
Senior Software Engineer, EMEA ENG Openstack Infrastructure
Red Hat
kwenning at redhat.com