[Pacemaker] crm shell issues

Dejan Muhamedagic dejanmm at fastmail.fm
Tue Sep 18 16:21:46 CEST 2012


Hi,

On Fri, Sep 14, 2012 at 02:54:56PM +0300, Borislav Borisov wrote:
> Hi all, Dejan,
> 
> 
> I am struggling to get the latest crmsh version (812:b58a3398bf11) to work
> with the latest pacemaker version () and so far I've encountered a couple
> of issues.

There's quite a bit of new code in v1.1.8 which obviously
behaves in a slightly different way.
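
By the way, if you need the exact pacemaker version for a report,
the tools should print it, e.g.:

    # cibadmin --version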

> The first one, which was already discussed on the list, is: INFO: object
> Cluster-Server-1 cannot be represented in the CLI notation. Since you
> never replied to what Vladislav Bogdanov reported in his last message, I
> just added the type="normal" attribute using crm configure edit xml to
> work around the issue.
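> 
> For reference, after the edit the node entry looks something like this
> (the id and uname here are just the node name; the actual id may differ
> on other clusters):
> 
>   <node id="Cluster-Server-1" uname="Cluster-Server-1" type="normal"/>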
> 
> The next thing that I encountered was, I believe, already discussed
> earlier this year:
> 
> > crm(live)configure# primitive dummy ocf:heartbeat:Dummy
> > ERROR: pengine:metadata: could not parse meta-data:
> >
> 
> Which was fixed with the following patch:
> 
> > diff -r b58a3398bf11 configure.ac
> > --- a/configure.ac      Thu Sep 13 12:19:56 2012 +0200
> > +++ b/configure.ac      Fri Sep 14 14:35:17 2012 +0300
> > @@ -190,11 +190,9 @@
> >  AC_DEFINE_UNQUOTED(CRM_DTD_DIRECTORY,"$CRM_DTD_DIRECTORY", Where to keep CIB configuration files)
> >  AC_SUBST(CRM_DTD_DIRECTORY)
> >
> > -dnl Eventually move out of the heartbeat dir tree and create compatability code
> > -dnl CRM_DAEMON_DIR=$libdir/pacemaker
> > -GLUE_DAEMON_DIR=`extract_header_define $GLUE_HEADER GLUE_DAEMON_DIR`
> > -AC_DEFINE_UNQUOTED(GLUE_DAEMON_DIR,"$GLUE_DAEMON_DIR", Location for Pacemaker daemons)
> > -AC_SUBST(GLUE_DAEMON_DIR)
> > +CRM_DAEMON_DIR=`$PKGCONFIG pcmk --variable=daemondir`
> > +AC_DEFINE_UNQUOTED(CRM_DAEMON_DIR,"$CRM_DAEMON_DIR", Location for the Pacemaker daemons)
> > +AC_SUBST(CRM_DAEMON_DIR)
> >
> >  CRM_CACHE_DIR=${localstatedir}/cache/crm
> >  AC_DEFINE_UNQUOTED(CRM_CACHE_DIR,"$CRM_CACHE_DIR", Where crm shell keeps the cache)
> > diff -r b58a3398bf11 modules/vars.py.in
> > --- a/modules/vars.py.in        Thu Sep 13 12:19:56 2012 +0200
> > +++ b/modules/vars.py.in        Fri Sep 14 14:35:17 2012 +0300
> > @@ -200,7 +200,7 @@
> >      crm_schema_dir = "@CRM_DTD_DIRECTORY@"
> >      pe_dir = "@PE_STATE_DIR@"
> >      crm_conf_dir = "@CRM_CONFIG_DIR@"
> > -    crm_daemon_dir = "@GLUE_DAEMON_DIR@"
> > +    crm_daemon_dir = "@CRM_DAEMON_DIR@"
> >      crm_daemon_user = "@CRM_DAEMON_USER@"
> >      crm_version = "@VERSION@ (Build @BUILD_VERSION@)"
> >

Yes, the daemons moved to another location and the glue has been
completely removed anyway. Thanks for the patch, though I'm not
sure I can take it as is.
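
FWIW, assuming the pcmk.pc file is installed, the pkg-config call
in the patch resolves the directory like this (the path below is
just an example, it varies by platform):

    # pkg-config pcmk --variable=daemondir
    /usr/lib/pacemaker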

>  What came next was:
> 
> > ERROR: running cibadmin -Ql -o rsc_defaults: Call cib_query failed (-6): No such device or address
> >
> Configuring any of the rsc_defaults parameters solves that problem.

A different error code. It used to be twenty something, iirc.
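
For anybody else who runs into this, setting any rsc_defaults
property creates the missing section, e.g. (the stickiness value
here is arbitrary):

    # crm configure rsc_defaults resource-stickiness=0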

> The last thing I encountered was the inability to add an LSB resource.
> 
> > crm(live)# ra
> > crm(live)ra# list lsb
> > acpid                   apache2                 apcupsd
> > atd                     bootlogd                bootlogs
> > bootmisc.sh             checkfs.sh              checkroot.sh
> > clamav-freshclam        cman
> > console-setup           corosync                corosync-notifyd
> > cron                    ctdb                    dbus
> > drbd                    halt                    hdparm
> > hostname.sh             hwclock.sh
> > hwclockfirst.sh         ifupdown                ifupdown-clean
> > iptables                iscsi-scst              kbd
> > keyboard-setup          killprocs               ldirectord
> > logd                    lvm2
> > mdadm                   mdadm-raid              minidlna
> > module-init-tools       mountall-bootclean.sh   mountall.sh
> > mountdevsubfs.sh        mountkernfs.sh          mountnfs-bootclean.sh
> > mountnfs.sh             mountoverflowtmp
> > mpt-statusd             mrmonitor               mrmonitor.dpkg-old
> > msm_profile             mtab.sh                 netatalk
> > networking              nfs-common              nfs-kernel-server
> > ntp                     openais
> > openhpid                pacemaker               procps
> > proftpd                 quota                   quotarpc
> > rc                      rc.local                rcS
> > reboot                  rmnologin
> > rpcbind                 rsync                   rsyslog
> > samba                   screen-cleanup          scst
> > sendsigs                single                  smartd
> > smartmontools           snmpd
> > ssh                     stop-bootlogd           stop-bootlogd-single
> > stor_agent              sudo                    sysstat
> > tdm2                    udev                    udev-mtab
> > umountfs                umountnfs.sh
> > umountroot              ups-monitor             urandom
> > vivaldiframeworkd       winbind                 x11-common
> > xinetd
> > crm(live)ra# end
> > crm(live)# configure
> > crm(live)configure# primitive testlsb lsb:nfs-kernel-server
> > ERROR: lsb:nfs-kernel-server: could not parse meta-data:
> > ERROR: lsb:nfs-kernel-server: no such resource agent
> >
> 
> Since I need this for my testing I stopped here.  I do not know how
> adequate my patch for the daemon dir is, but it did the job. The lsb
> issue I just couldn't tackle.

lrmadmin/lrmd used to put together meta-data for the lsb class
agents, because they cannot produce it themselves. There's now a
new lrmd and I'm not sure if there's any replacement for
lrmadmin.
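
For reference, the old cluster-glue invocation went, if I remember
the syntax correctly, something like this:

    # lrmadmin -M lsb nfs-kernel-server NULL

but with the glue gone that's obviously not an option anymore.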

Cheers,

Dejan

> Cheers,
> 
> Borislav
