[ClusterLabs] [ClusterLabs Developers] Resource Agent language discussion
Andrew Beekhof
andrew at beekhof.net
Tue Aug 11 01:30:03 UTC 2015
> On 8 Aug 2015, at 1:14 am, Jehan-Guillaume de Rorthais <jgdr at dalibo.com> wrote:
>
> Hi Jan,
>
> On Fri, 7 Aug 2015 15:36:57 +0200
> Jan Pokorný <jpokorny at redhat.com> wrote:
>
>> On 07/08/15 12:09 +0200, Jehan-Guillaume de Rorthais wrote:
>>> Now, I would like to discuss the language used to write an RA for
>>> Pacemaker. I have never seen a discussion or page about this so far.
>>
>> It wasn't in such a "heretic :)" tone, but a few months back I tried to
>> show that it is extremely hard (if not impossible in some instances) to
>> write bullet-proof code in bash (or POSIX shell, for that matter),
>> because it is so cumbersome to move back and forth between
>> "whitespace-delimited words as a single argument" and "words as
>> standalone arguments", combined with the madness of quoting being
>> sometimes desired and sometimes counterproductive (what if one indeed
>> wants to pass quotation marks as legitimate characters within the
>> passed value, etc.):
>>
>> http://clusterlabs.org/pipermail/users/2015-May/000403.html
>> (also on the developers list, but with fewer replies and broken threading:
>> http://clusterlabs.org/pipermail/developers/2015-May/000023.html).
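>>
>> A tiny illustration of that back-and-forth (the path is made up): once a
>> value may contain whitespace, every hop between "one argument" and
>> "separate arguments" needs its own quoting discipline, and getting any
>> hop wrong silently changes the word splitting:
>>
>>     dir="/var/lib/my data"        # value containing a space
>>     ls $dir                       # wrong: split into two arguments
>>     ls "$dir"                     # right: passed as one argument
>>     set -- "$dir" "$dir/tmp"      # keep two words as separate arguments...
>>     ls "$@"                       # ...and hand them over intact
>>     ls "$*"                       # wrong again: re-joined into a single word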
>
> Thanks for the links and history. You add some more arguments to my points :)
>
>>> HINT: I don't want to discuss (nor troll about) which language is the
>>> best. I would like to know why **ALL** the RAs are written in
>>> bash
>>
>> I would expect the original influence was the init scripts (as RAs
>> are mostly just enriched variants to support more flexible
>> configuration and better diagnostics back to the cluster stack),
>> which in turn were born having simplicity and ease of debugging
>> (maintainability) in mind.
>
> That sounds legitimate. And bash is still appropriate for some simple RAs.
>
> But for the very same arguments of ease of debugging and maintainability
> (and many others), complex RAs shouldn't be written in shell.
You can and should use whatever language you like for your own private RAs.
But if you want it accepted and maintained by the resource-agents project, you would be advised to use the language they have standardised on.
As always, the people doing the work get to make the rules.
>
>>> and whether there are traps (hidden deep in ocf-shellfuncs, for
>>> instance) to avoid when using a different language. And is it
>>> acceptable to include new libs for other languages?
>>
>> https://github.com/ClusterLabs/resource-agents/blob/v3.9.6/doc/dev-guides/ra-dev-guide.txt#L33
>> doesn't make any assumption about the target language besides stating
>> what the common one is.
>
> Yes, I know that page. But this dev guide focuses on shell and makes some
> assumptions about ocf-shellfuncs.
>
> I'll take the same example as in my previous message: there is nothing
> about the best practice for logging. In the "Script variables" section,
> some variables come from the environment, others from ocf-shellfuncs.
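>
> For reference, the shell-side convention the guide leaves implicit looks
> roughly like this (a minimal sketch; the exact preamble varies from agent
> to agent):
>
>     : ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
>     . "${OCF_FUNCTIONS_DIR}/ocf-shellfuncs"
>
>     # OCF_RESOURCE_INSTANCE is set in the environment by the cluster;
>     # ocf_log and HA_RSCTMP come from the sourced shell library.
>     ocf_log info "${OCF_RESOURCE_INSTANCE}: probing state in ${HA_RSCTMP}"
>     ocf_log err  "${OCF_RESOURCE_INSTANCE}: something went wrong"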
>
>>> We rewrote the RA in Perl, mostly because of me. I was fed up with
>>> bash/sh limitations AND syntax AND useless code complexity for some easy
>>> tasks AND traps (return codes, etc.). In my opinion, bash/sh are fine if
>>> your RA code is short and simple, which was mostly the case back in the
>>> days of heartbeat, which was stateless only. But it became a nightmare
>>> with multi-state agents struggling with complex code to fit Pacemaker's
>>> behavior. Have a look at the mysql or pgsql agents.
>>>
>>> Moreover, with bash, I ran into some weird behaviors (timeouts) from the
>>> RA a few months ago involving runuser/su/sudo and systemd/pamd. All
>>> three have implications or side effects deep in the system that you need
>>> to take care of. A language able to seteuid/setuid after forking is much
>>> more natural and clean for dropping root privileges and starting the
>>> daemon (PostgreSQL refuses to start as root and is not able to drop its
>>> privileges to another system user itself).
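>>>
>>> For the record, the shell-side pattern in question looks like this (the
>>> user name and paths are only illustrative): the daemon is started through
>>> su, which opens a full PAM session, and that is exactly where the
>>> systemd/pamd interactions mentioned above can bite.
>>>
>>>     # su runs the PAM session stack (pam_systemd & co.) before pg_ctl
>>>     su - postgres -c "/usr/bin/pg_ctl start -D /var/lib/pgsql/data -w"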
>>
>> Another disadvantage of shell scripts is that many processes are
>> frequently spawned for simple changes within the filesystem and for
>> string parsing/reformatting, which in turn creates a dependency on plenty
>> of external executables.
>
> True. Either you pipe many small programs together, forking all of them
> (cat|grep|cut|...), sometimes with behavior that differs from one system
> to another, or you use a more complex one that most people don't want to
> hear about any more (sed, awk, perl, ...). In the latter case, you not
> only have to master bash, but other languages as well.
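>
> A small illustration of the first option (the file name and key are made
> up): extracting one value the pipeline way forks three extra processes,
> while the pure-shell version forks none but needs noticeably more ceremony:
>
>     # pipeline style: three forks for a single assignment
>     port=$(cat /etc/myapp.conf | grep '^port=' | cut -d= -f2)
>
>     # pure POSIX shell: no forks, but more code to read
>     port=""
>     while IFS='=' read -r key value; do
>         [ "$key" = "port" ] && port=$value
>     done < /etc/myapp.conf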
>
>>> Now, we are far from having enterprise-class, certified code; our RA
>>> passed its very first tests successfully only yesterday, but here is
>>> some quick feedback. The downside of picking a language other than
>>> bash/sh is that there is no OCF module/library available for it. This is
>>> quite inconvenient when you need system-specific variables or logging
>>> shortcuts that are only defined in ocf-shellfuncs (and, I would guess,
>>> patched by packagers?).
>>>
>>> For instance, I had to "capture" the values of $HA_SBIN_DIR and
>>> $HA_RSCTMP from my Perl code.
>>
>> There could be a shell wrapper that would put these values into the
>> environment and then execute the target itself, for its own use (a
>> generic solution for an arbitrary executable). That's not applicable to
>> "procedural knowledge" (logging, etc.), though, as you mention below.
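>>
>> A minimal sketch of such a wrapper (the target path is hypothetical, and
>> the list of exported variables would have to match whatever the real
>> agent consumes):
>>
>>     #!/bin/sh
>>     # Pull in the usual OCF defaults (HA_SBIN_DIR, HA_RSCTMP, ...),
>>     # then hand everything over to the non-shell implementation.
>>     : ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
>>     . "${OCF_FUNCTIONS_DIR}/ocf-shellfuncs"
>>
>>     export HA_SBIN_DIR HA_RSCTMP
>>     exec /usr/lib/ocf/resource.d/acme/pgsqlms.real "$@"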
>
> Yes.
>
> What should we do next? Should we spin off an "ocf-perl-common" module from our
> agent and feed it with such pieces ported from ocf-shellfuncs?
>
> _______________________________________________
> Developers mailing list
> Developers at clusterlabs.org
> http://clusterlabs.org/mailman/listinfo/developers