Hi all, Dejan,

I am struggling to get the latest crmsh version (812:b58a3398bf11) to work
with the latest pacemaker version, and so far I have run into a couple of
issues.

The first one was already discussed on the list:

  INFO: object Cluster-Server-1 cannot be represented in the CLI notation.

Since you never replied to what Vladislav Bogdanov reported in his last
message, I just added the type="normal" parameter using crm edit xml to work
around the issue.
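
For the record, the node entry ends up looking roughly like this after the
edit (the uname value here is only my assumption, based on the object name
above):

  <node id="Cluster-Server-1" uname="Cluster-Server-1" type="normal"/>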

The next thing I encountered was, I believe, discussed on the list earlier
this year:

  crm(live)configure# primitive dummy ocf:heartbeat:Dummy
  ERROR: pengine:metadata: could not parse meta-data:

I fixed it with the following patch:

diff -r b58a3398bf11 configure.ac
--- a/configure.ac      Thu Sep 13 12:19:56 2012 +0200
+++ b/configure.ac      Fri Sep 14 14:35:17 2012 +0300
@@ -190,11 +190,9 @@
 AC_DEFINE_UNQUOTED(CRM_DTD_DIRECTORY,"$CRM_DTD_DIRECTORY", Where to keep CIB configuration files)
 AC_SUBST(CRM_DTD_DIRECTORY)
 
-dnl Eventually move out of the heartbeat dir tree and create compatability code
-dnl CRM_DAEMON_DIR=$libdir/pacemaker
-GLUE_DAEMON_DIR=`extract_header_define $GLUE_HEADER GLUE_DAEMON_DIR`
-AC_DEFINE_UNQUOTED(GLUE_DAEMON_DIR,"$GLUE_DAEMON_DIR", Location for Pacemaker daemons)
-AC_SUBST(GLUE_DAEMON_DIR)
+CRM_DAEMON_DIR=`$PKGCONFIG pcmk --variable=daemondir`
+AC_DEFINE_UNQUOTED(CRM_DAEMON_DIR,"$CRM_DAEMON_DIR", Location for the Pacemaker daemons)
+AC_SUBST(CRM_DAEMON_DIR)
 
 CRM_CACHE_DIR=${localstatedir}/cache/crm
 AC_DEFINE_UNQUOTED(CRM_CACHE_DIR,"$CRM_CACHE_DIR", Where crm shell keeps the cache)
diff -r b58a3398bf11 modules/vars.py.in
--- a/modules/vars.py.in        Thu Sep 13 12:19:56 2012 +0200
+++ b/modules/vars.py.in        Fri Sep 14 14:35:17 2012 +0300
@@ -200,7 +200,7 @@
     crm_schema_dir = "@CRM_DTD_DIRECTORY@"
     pe_dir = "@PE_STATE_DIR@"
     crm_conf_dir = "@CRM_CONFIG_DIR@"
-    crm_daemon_dir = "@GLUE_DAEMON_DIR@"
+    crm_daemon_dir = "@CRM_DAEMON_DIR@"
     crm_daemon_user = "@CRM_DAEMON_USER@"
     crm_version = "@VERSION@ (Build @BUILD_VERSION@)"
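
For what it's worth, both halves of the change can be checked by hand: the
pkg-config variable the patch relies on, and the metadata call that crmsh
makes against the pengine binary. Something along these lines (the
/usr/lib/pacemaker path is only an example and depends on how pacemaker was
built):

  # pkg-config pcmk --variable=daemondir
  /usr/lib/pacemaker
  # /usr/lib/pacemaker/pengine metadata | head -n 3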

What came next was:

  ERROR: running cibadmin -Ql -o rsc_defaults: Call cib_query failed (-6): No such device or address

Configuring any of the rsc_defaults parameters solves that problem.
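
For example, any property will do; resource-stickiness below is just an
arbitrary pick:

  crm(live)configure# rsc_defaults resource-stickiness=0
  crm(live)configure# commit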

The last thing I encountered was the inability to add an LSB resource:

  crm(live)# ra
  crm(live)ra# list lsb
  acpid apache2 apcupsd atd bootlogd bootlogs bootmisc.sh checkfs.sh
  checkroot.sh clamav-freshclam cman console-setup corosync corosync-notifyd
  cron ctdb dbus drbd halt hdparm hostname.sh hwclock.sh hwclockfirst.sh
  ifupdown ifupdown-clean iptables iscsi-scst kbd keyboard-setup killprocs
  ldirectord logd lvm2 mdadm mdadm-raid minidlna module-init-tools
  mountall-bootclean.sh mountall.sh mountdevsubfs.sh mountkernfs.sh
  mountnfs-bootclean.sh mountnfs.sh mountoverflowtmp mpt-statusd mrmonitor
  mrmonitor.dpkg-old msm_profile mtab.sh netatalk networking nfs-common
  nfs-kernel-server ntp openais openhpid pacemaker procps proftpd quota
  quotarpc rc rc.local rcS reboot rmnologin rpcbind rsync rsyslog samba
  screen-cleanup scst sendsigs single smartd smartmontools snmpd ssh
  stop-bootlogd stop-bootlogd-single stor_agent sudo sysstat tdm2 udev
  udev-mtab umountfs umountnfs.sh umountroot ups-monitor urandom
  vivaldiframeworkd winbind x11-common xinetd
  crm(live)ra# end
  crm(live)# configure
  crm(live)configure# primitive testlsb lsb:nfs-kernel-server
  ERROR: lsb:nfs-kernel-server: could not parse meta-data:
  ERROR: lsb:nfs-kernel-server: no such resource agent

Since I need this for my testing, I stopped here. I do not know how adequate
my patch for the daemon dir is, but it did the job. The LSB problem I just
could not tackle.

Cheers,

Borislav