[Pacemaker] resource agent starting out-of-order

AP pacemaker at inml.weebeastie.net
Thu Mar 3 01:05:39 EST 2011


Hi,

I'm having deep issues with my cluster setup. Everything works OK until
I add a VirtualDomain RA. Then things go pear-shaped: the resource seems
to ignore the "order" constraint in the crm config and starts as soon as
it can.

The crm config is provided below. Basically, p-vd_vg.test1 attempts to
start despite p-libvirtd not being started and p-drbd_vg.test1 not
being master (or slave for that matter, i.e. it's not configured at all).
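
(In case it helps anyone look at why, I believe the allocation scores and
the planned transition can be inspected against the live CIB with ptest,
e.g.:

ptest -L -s

though I'm not 100% sure of the exact flags on 1.1.5, so treat that as a
sketch.)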

Eventually p-libvirtd and p-drbd_vg.test1 start, p-vd_vg.test1 attempts
to start as well, and pengine on the node where p-vd_vg.test1 is already
running complains with:

Mar  3 16:49:16 breadnut pengine: [2097]: ERROR: native_create_actions: Resource p-vd_vg.test1 (ocf::VirtualDomain) is active on 2 nodes attempting recovery
Mar  3 16:49:16 breadnut pengine: [2097]: WARN: See http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active for more information.

Then mass slaughter occurs and p-vd_vg.test1 is restarted where it was
running previously whilst the other node gets an error for it.

Essentially I cannot restart the 2nd node without it breaking the 1st.
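
(Presumably the failed state on the other node can be cleared afterwards
with:

crm resource cleanup p-vd_vg.test1

but that does nothing about the resource starting too early in the first
place.)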

Now, as I understand it, a lone primitive will run once on any node - this
is just fine by me.

colo-vd_vg.test1 indicates that p-vd_vg.test1 should run where ms-drbd_vg.test1
is master. ms-drbd_vg.test1 should only be master where clone-libvirtd is
started.
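
(If the three-resource colocation turns out to be the culprit, I assume
the same intent could be expressed as two pairwise constraints - names
here invented for illustration:

colocation colo-vd-on-drbd inf: p-vd_vg.test1 ms-drbd_vg.test1:Master
colocation colo-vd-on-libvirtd inf: p-vd_vg.test1 clone-libvirtd

though I haven't confirmed whether that behaves any differently.)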

order-vg.test1 indicates that ms-drbd_vg.test1 should start after clone-lvm_gh
has started (successfully). (This used to have a promote for ms-drbd_vg.test1,
but then ms-drbd_vg.test1 would be demoted, not stopped, on shutdown, which
would cause clone-lvm_gh to error out on stop.)

order-vd_vg.test1 indicates p-vd_vg.test1 should only start where
ms-drbd_vg.test1 and clone-libvirtd have both successfully started (the
order of their starting being irrelevant).
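
(Likewise, if the resource-set syntax in order-vd_vg.test1 is what is
misbehaving, I believe the pairwise equivalent - again with invented
names - would be:

order o-drbd-before-vd inf: ms-drbd_vg.test1:start p-vd_vg.test1:start
order o-libvirtd-before-vd inf: clone-libvirtd:start p-vd_vg.test1:start

which should express the same "both started first, in either order"
intent.)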

cli-standby-p-vd_vg.test1 was put there automatically when I migrated
p-vd_vg.test1 about the place.
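
(As I understand it, that constraint can be dropped either with

crm resource unmigrate p-vd_vg.test1

or by deleting it directly with "crm configure delete
cli-standby-p-vd_vg.test1"; I've left it in here for completeness.)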

This happens with or without fencing, and with fencing configured as below
or as just a single primitive with both nodes in the hostlist.
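
(By "a single primitive" I mean something along these lines, with no
location constraints, assuming meatware takes a space-separated hostlist,
which I believe it does:

primitive fencing stonith:meatware \
        params hostlist="breadnut breadnut2" \
        op monitor interval="10" timeout="60s"

Either way the behaviour is the same.)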

Help with this would be awesome and appreciated. I do not know what I am
missing here. The config makes sense to me, so I don't even know where
to start poking and prodding. I'm flailing.

Config and s/w version list are below:

OS: Debian Squeeze
Kernel: 2.6.37.2

PACKAGES:

ii  cluster-agents                      1:1.0.4-0ubuntu1~custom1     The reusable cluster components for Linux HA
ii  cluster-glue                        1.0.7-3ubuntu1~custom1       The reusable cluster components for Linux HA
ii  corosync                            1.3.0-1ubuntu1~custom1       Standards-based cluster framework (daemon and modules)
ii  libccs3                             3.1.0-0ubuntu1~custom1       Red Hat cluster suite - cluster configuration libraries
ii  libcib1                             1.1.5-0ubuntu1~ppa1~custom1  The Pacemaker libraries - CIB
ii  libcman3                            3.1.0-0ubuntu1~custom1       Red Hat cluster suite - cluster manager libraries
ii  libcorosync4                        1.3.0-1ubuntu1~custom1       Standards-based cluster framework (libraries)
ii  libcrmcluster1                      1.1.5-0ubuntu1~ppa1~custom1  The Pacemaker libraries - CRM
ii  libcrmcommon2                       1.1.5-0ubuntu1~ppa1~custom1  The Pacemaker libraries - common CRM
ii  libfence4                           3.1.0-0ubuntu1~custom1       Red Hat cluster suite - fence client library
ii  liblrm2                             1.0.7-3ubuntu1~custom1       Reusable cluster libraries -- liblrm2
ii  libpe-rules2                        1.1.5-0ubuntu1~ppa1~custom1  The Pacemaker libraries - rules for P-Engine
ii  libpe-status3                       1.1.5-0ubuntu1~ppa1~custom1  The Pacemaker libraries - status for P-Engine
ii  libpengine3                         1.1.5-0ubuntu1~ppa1~custom1  The Pacemaker libraries - P-Engine
ii  libpils2                            1.0.7-3ubuntu1~custom1       Reusable cluster libraries -- libpils2
ii  libplumb2                           1.0.7-3ubuntu1~custom1       Reusable cluster libraries -- libplumb2
ii  libplumbgpl2                        1.0.7-3ubuntu1~custom1       Reusable cluster libraries -- libplumbgpl2
ii  libstonith1                         1.0.7-3ubuntu1~custom1       Reusable cluster libraries -- libstonith1
ii  libstonithd1                        1.1.5-0ubuntu1~ppa1~custom1  The Pacemaker libraries - stonith
ii  libtransitioner1                    1.1.5-0ubuntu1~ppa1~custom1  The Pacemaker libraries - transitioner
ii  pacemaker                           1.1.5-0ubuntu1~ppa1~custom1  HA cluster resource manager

CONFIG:

node breadnut
node breadnut2 \
        attributes standby="off"
primitive fencing-bn stonith:meatware \
        params hostlist="breadnut" \
        op start interval="0" timeout="60s" \
        op stop interval="0" timeout="70s" \
        op monitor interval="10" timeout="60s"
primitive fencing-bn2 stonith:meatware \
        params hostlist="breadnut2" \
        op start interval="0" timeout="60s" \
        op stop interval="0" timeout="70s" \
        op monitor interval="10" timeout="60s"
primitive p-drbd_vg.test1 ocf:linbit:drbd \
        params drbd_resource="vg.test1" \
        operations $id="ops-drbd_vg.test1" \
        op start interval="0" timeout="240s" \
        op stop interval="0" timeout="100s" \
        op monitor interval="20" role="Master" timeout="20s" \
        op monitor interval="30" role="Slave" timeout="20s"
primitive p-libvirtd ocf:local:libvirtd \
        meta allow-migrate="off" \
        op start interval="0" timeout="200s" \
        op stop interval="0" timeout="100s" \
        op monitor interval="10" timeout="200s"
primitive p-lvm_gh ocf:heartbeat:LVM \
        params volgrpname="gh" \
        meta allow-migrate="off" \
        op start interval="0" timeout="90s" \
        op stop interval="0" timeout="100s" \
        op monitor interval="10" timeout="100s"
primitive p-vd_vg.test1 ocf:heartbeat:VirtualDomain \
        params config="/etc/libvirt/qemu/vg.test1.xml" \
        params migration_transport="tcp" \
        meta allow-migrate="true" is-managed="true" \
        op start interval="0" timeout="120s" \
        op stop interval="0" timeout="120s" \
        op migrate_to interval="0" timeout="120s" \
        op migrate_from interval="0" timeout="120s" \
        op monitor interval="10s" timeout="120s"
ms ms-drbd_vg.test1 p-drbd_vg.test1 \
        meta resource-stickiness="100" notify="true" master-max="2" target-role="Master"
clone clone-libvirtd p-libvirtd \
        meta interleave="true"
clone clone-lvm_gh p-lvm_gh \
        meta interleave="true"
location cli-standby-p-vd_vg.test1 p-vd_vg.test1 \
        rule $id="cli-standby-rule-p-vd_vg.test1" -inf: #uname eq breadnut2
location loc-fencing-bn fencing-bn -inf: breadnut
location loc-fencing-bn2 fencing-bn2 -inf: breadnut2
colocation colo-vd_vg.test1 inf: p-vd_vg.test1:Started ms-drbd_vg.test1:Master clone-libvirtd:Started
order order-vd_vg.test1 inf: ( ms-drbd_vg.test1:start clone-libvirtd:start ) p-vd_vg.test1:start
order order-vg.test1 inf: clone-lvm_gh:start ms-drbd_vg.test1:start
property $id="cib-bootstrap-options" \
        dc-version="1.1.5-01e86afaaa6d4a8c4836f68df80ababd6ca3902f" \
        cluster-infrastructure="openais" \
        default-resource-stickiness="1000" \
        stonith-enabled="true" \
        expected-quorum-votes="2" \
        no-quorum-policy="ignore" \
        last-lrm-refresh="1299128317"

LOG (breadnut2):
Mar  3 16:48:53 breadnut2 corosync[20685]:   [MAIN  ] Corosync Cluster Engine ('1.3.0'): started and ready to provide service.
Mar  3 16:48:53 breadnut2 corosync[20685]:   [MAIN  ] Corosync built-in features: nss
Mar  3 16:48:53 breadnut2 corosync[20685]:   [MAIN  ] Successfully read main configuration file '/etc/corosync/corosync.conf'.
Mar  3 16:48:53 breadnut2 corosync[20685]:   [TOTEM ] Initializing transport (UDP/IP Multicast).
Mar  3 16:48:53 breadnut2 corosync[20685]:   [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Mar  3 16:48:53 breadnut2 corosync[20685]:   [TOTEM ] The network interface [192.168.3.3] is now up.
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: process_ais_conf: Reading configure
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: config_find_init: Local handle: 4730966301143465987 for logging
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: config_find_next: Processing additional logging options...
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: get_config_opt: Found 'off' for option: debug
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: get_config_opt: Found 'no' for option: to_logfile
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: get_config_opt: Found 'yes' for option: to_syslog
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: get_config_opt: Found 'daemon' for option: syslog_facility
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: config_find_init: Local handle: 7739444317642555396 for quorum
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: config_find_next: No additional configuration supplied for: quorum
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: get_config_opt: No default for option: provider
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: config_find_init: Local handle: 5650605097994944517 for service
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: config_find_next: Processing additional service options...
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: get_config_opt: Found '0' for option: ver
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: get_config_opt: Defaulting to 'pcmk' for option: clustername
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: get_config_opt: Defaulting to 'no' for option: use_logd
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: get_config_opt: Defaulting to 'no' for option: use_mgmtd
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: pcmk_startup: CRM: Initialized
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] Logging: Initialized pcmk_startup
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: pcmk_startup: Maximum core file size is: 18446744073709551615
Mar  3 16:48:53 breadnut2 attrd: [20695]: info: Invoked: /usr/lib/heartbeat/attrd
Mar  3 16:48:53 breadnut2 lrmd: [20694]: info: enabling coredumps
Mar  3 16:48:53 breadnut2 crmd: [20697]: info: Invoked: /usr/lib/heartbeat/crmd
Mar  3 16:48:53 breadnut2 pengine: [20696]: info: Invoked: /usr/lib/heartbeat/pengine
Mar  3 16:48:53 breadnut2 stonith-ng: [20692]: info: Invoked: /usr/lib/heartbeat/stonithd
Mar  3 16:48:53 breadnut2 cib: [20693]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: pcmk_startup: Service: 9
Mar  3 16:48:53 breadnut2 lrmd: [20694]: WARN: Core dumps could be lost if multiple dumps occur.
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: pcmk_startup: Local hostname: breadnut2
Mar  3 16:48:53 breadnut2 lrmd: [20694]: WARN: Consider setting non-default value in /proc/sys/kernel/core_pattern (or equivalent) for maximum supportability
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: pcmk_update_nodeid: Local node id: 50571456
Mar  3 16:48:53 breadnut2 lrmd: [20694]: WARN: Consider setting /proc/sys/kernel/core_uses_pid (or equivalent) to 1 for maximum supportability
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: update_member: Creating entry for node 50571456 born on 0
Mar  3 16:48:53 breadnut2 lrmd: [20694]: info: Started.
Mar  3 16:48:53 breadnut2 cib: [20693]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.xml (digest: /var/lib/heartbeat/crm/cib.xml.sig)
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: update_member: 0x7f9124000ad0 Node 50571456 now known as breadnut2 (was: (null))
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: update_member: Node breadnut2 now has 1 quorum votes (was 0)
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: update_member: Node 50571456/breadnut2 is now: member
Mar  3 16:48:53 breadnut2 crmd: [20697]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
Mar  3 16:48:53 breadnut2 crmd: [20697]: info: main: CRM Hg Version: 01e86afaaa6d4a8c4836f68df80ababd6ca3902f
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: spawn_child: Forked child 20692 for process stonith-ng
Mar  3 16:48:53 breadnut2 attrd: [20695]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/hacluster
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: spawn_child: Forked child 20693 for process cib
Mar  3 16:48:53 breadnut2 stonith-ng: [20692]: info: crm_log_init_worker: Changed active directory to /var/lib/heartbeat/cores/root
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: spawn_child: Forked child 20694 for process lrmd
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: spawn_child: Forked child 20695 for process attrd
Mar  3 16:48:53 breadnut2 attrd: [20695]: info: main: Starting up
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: spawn_child: Forked child 20696 for process pengine
Mar  3 16:48:53 breadnut2 attrd: [20695]: info: get_cluster_type: Cluster type is: 'openais'.
Mar  3 16:48:53 breadnut2 stonith-ng: [20692]: info: get_cluster_type: Cluster type is: 'openais'.
Mar  3 16:48:53 breadnut2 crmd: [20697]: info: crmd_init: Starting crmd
Mar  3 16:48:53 breadnut2 attrd: [20695]: info: crm_cluster_connect: Connecting to cluster infrastructure: classic openais (with plugin)
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: spawn_child: Forked child 20697 for process crmd
Mar  3 16:48:53 breadnut2 stonith-ng: [20692]: info: crm_cluster_connect: Connecting to cluster infrastructure: classic openais (with plugin)
Mar  3 16:48:53 breadnut2 attrd: [20695]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
Mar  3 16:48:53 breadnut2 corosync[20685]:   [SERV  ] Service engine loaded: Pacemaker Cluster Manager 1.1.5
Mar  3 16:48:53 breadnut2 corosync[20685]:   [SERV  ] Service engine loaded: corosync extended virtual synchrony service
Mar  3 16:48:53 breadnut2 stonith-ng: [20692]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
Mar  3 16:48:53 breadnut2 corosync[20685]:   [SERV  ] Service engine loaded: corosync configuration service
Mar  3 16:48:53 breadnut2 corosync[20685]:   [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01
Mar  3 16:48:53 breadnut2 corosync[20685]:   [SERV  ] Service engine loaded: corosync cluster config database access v1.01
Mar  3 16:48:53 breadnut2 corosync[20685]:   [SERV  ] Service engine loaded: corosync profile loading service
Mar  3 16:48:53 breadnut2 corosync[20685]:   [SERV  ] Service engine loaded: corosync cluster quorum service v0.1
Mar  3 16:48:53 breadnut2 corosync[20685]:   [MAIN  ] Compatibility mode set to whitetank.  Using V1 and V2 of the synchronization engine.
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] notice: pcmk_peer_update: Transitional membership event on ring 188: memb=0, new=0, lost=0
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] notice: pcmk_peer_update: Stable membership event on ring 188: memb=1, new=1, lost=0
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: pcmk_peer_update: NEW:  breadnut2 50571456
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: pcmk_peer_update: MEMB: breadnut2 50571456
Mar  3 16:48:53 breadnut2 cib: [20693]: info: validate_with_relaxng: Creating RNG parser context
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: update_member: Node breadnut2 now has process list: 00000000000000000000000000111312 (1118994)
Mar  3 16:48:53 breadnut2 corosync[20685]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
Mar  3 16:48:53 breadnut2 corosync[20685]:   [CPG   ] chosen downlist from node r(0) ip(192.168.3.3)
Mar  3 16:48:53 breadnut2 corosync[20685]:   [MAIN  ] Completed service synchronization, ready to provide service.
Mar  3 16:48:53 breadnut2 attrd: [20695]: info: init_ais_connection_classic: AIS connection established
Mar  3 16:48:53 breadnut2 stonith-ng: [20692]: info: init_ais_connection_classic: AIS connection established
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: pcmk_ipc: Recorded connection 0x67dab0 for attrd/20695
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: pcmk_ipc: Recorded connection 0x7f9124000ee0 for stonith-ng/20692
Mar  3 16:48:53 breadnut2 attrd: [20695]: info: get_ais_nodeid: Server details: id=50571456 uname=breadnut2 cname=pcmk
Mar  3 16:48:53 breadnut2 attrd: [20695]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
Mar  3 16:48:53 breadnut2 attrd: [20695]: info: crm_new_peer: Node breadnut2 now has id: 50571456
Mar  3 16:48:53 breadnut2 attrd: [20695]: info: crm_new_peer: Node 50571456 is now known as breadnut2
Mar  3 16:48:53 breadnut2 attrd: [20695]: info: main: Cluster connection active
Mar  3 16:48:53 breadnut2 stonith-ng: [20692]: info: get_ais_nodeid: Server details: id=50571456 uname=breadnut2 cname=pcmk
Mar  3 16:48:53 breadnut2 stonith-ng: [20692]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
Mar  3 16:48:53 breadnut2 stonith-ng: [20692]: info: crm_new_peer: Node breadnut2 now has id: 50571456
Mar  3 16:48:53 breadnut2 stonith-ng: [20692]: info: crm_new_peer: Node 50571456 is now known as breadnut2
Mar  3 16:48:53 breadnut2 attrd: [20695]: info: main: Accepting attribute updates
Mar  3 16:48:53 breadnut2 attrd: [20695]: info: main: Starting mainloop...
Mar  3 16:48:53 breadnut2 stonith-ng: [20692]: info: main: Starting stonith-ng mainloop
Mar  3 16:48:53 breadnut2 cib: [20693]: info: startCib: CIB Initialization completed successfully
Mar  3 16:48:53 breadnut2 cib: [20693]: info: get_cluster_type: Cluster type is: 'openais'.
Mar  3 16:48:53 breadnut2 cib: [20693]: info: crm_cluster_connect: Connecting to cluster infrastructure: classic openais (with plugin)
Mar  3 16:48:53 breadnut2 cib: [20693]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
Mar  3 16:48:53 breadnut2 cib: [20693]: info: init_ais_connection_classic: AIS connection established
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: pcmk_ipc: Recorded connection 0x682e30 for cib/20693
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: pcmk_ipc: Sending membership update 188 to cib
Mar  3 16:48:53 breadnut2 cib: [20693]: info: get_ais_nodeid: Server details: id=50571456 uname=breadnut2 cname=pcmk
Mar  3 16:48:53 breadnut2 cib: [20693]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
Mar  3 16:48:53 breadnut2 cib: [20693]: info: crm_new_peer: Node breadnut2 now has id: 50571456
Mar  3 16:48:53 breadnut2 cib: [20693]: info: crm_new_peer: Node 50571456 is now known as breadnut2
Mar  3 16:48:53 breadnut2 cib: [20693]: info: cib_init: Starting cib mainloop
Mar  3 16:48:53 breadnut2 cib: [20693]: info: ais_dispatch_message: Membership 188: quorum still lost
Mar  3 16:48:53 breadnut2 cib: [20693]: info: crm_update_peer: Node breadnut2: id=50571456 state=member (new) addr=r(0) ip(192.168.3.3)  (new) votes=1 (new) born=0 seen=188 proc=00000000000000000000000000111312 (new)
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] notice: pcmk_peer_update: Transitional membership event on ring 192: memb=1, new=0, lost=0
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: pcmk_peer_update: memb: breadnut2 50571456
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] notice: pcmk_peer_update: Stable membership event on ring 192: memb=2, new=1, lost=0
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: update_member: Creating entry for node 17017024 born on 192
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: update_member: Node 17017024/unknown is now: member
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: pcmk_peer_update: NEW:  .pending. 17017024
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: pcmk_peer_update: MEMB: .pending. 17017024
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: pcmk_peer_update: MEMB: breadnut2 50571456
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: send_member_notification: Sending membership update 192 to 1 children
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: update_member: 0x7f9124000ad0 Node 50571456 ((null)) born on: 192
Mar  3 16:48:53 breadnut2 corosync[20685]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
Mar  3 16:48:53 breadnut2 cib: [20693]: info: ais_dispatch_message: Membership 192: quorum still lost
Mar  3 16:48:53 breadnut2 cib: [20693]: info: crm_new_peer: Node <null> now has id: 17017024
Mar  3 16:48:53 breadnut2 cib: [20693]: info: crm_update_peer: Node (null): id=17017024 state=member (new) addr=r(0) ip(192.168.3.1)  votes=0 born=0 seen=192 proc=00000000000000000000000000000000
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: update_member: 0x67d2d0 Node 17017024 (breadnut) born on: 4
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: update_member: 0x67d2d0 Node 17017024 now known as breadnut (was: (null))
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: update_member: Node breadnut now has process list: 00000000000000000000000000111312 (1118994)
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: update_member: Node breadnut now has 1 quorum votes (was 0)
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] info: send_member_notification: Sending membership update 192 to 1 children
Mar  3 16:48:53 breadnut2 cib: [20693]: notice: ais_dispatch_message: Membership 192: quorum acquired
Mar  3 16:48:53 breadnut2 cib: [20693]: info: crm_get_peer: Node 17017024 is now known as breadnut
Mar  3 16:48:53 breadnut2 cib: [20693]: info: crm_update_peer: Node breadnut: id=17017024 state=member addr=r(0) ip(192.168.3.1)  votes=1 (new) born=4 seen=192 proc=00000000000000000000000000111312 (new)
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] WARN: route_ais_message: Sending message to local.crmd failed: ipc delivery failed (rc=-2)
Mar  3 16:48:53 breadnut2 corosync[20685]:   [CPG   ] chosen downlist from node r(0) ip(192.168.3.1)
Mar  3 16:48:53 breadnut2 corosync[20685]:   [MAIN  ] Completed service synchronization, ready to provide service.
Mar  3 16:48:53 breadnut2 cib: [20693]: info: cib_process_diff: Diff 0.128.71 -> 0.128.72 not applied to 0.128.0: current "num_updates" is less than required
Mar  3 16:48:53 breadnut2 cib: [20693]: info: cib_server_process_diff: Requesting re-sync from peer
Mar  3 16:48:53 breadnut2 cib: [20693]: notice: cib_server_process_diff: Not applying diff 0.128.72 -> 0.128.73 (sync in progress)
Mar  3 16:48:53 breadnut2 cib: [20693]: notice: cib_server_process_diff: Not applying diff 0.128.73 -> 0.128.74 (sync in progress)
Mar  3 16:48:53 breadnut2 cib: [20693]: notice: cib_server_process_diff: Not applying diff 0.128.74 -> 0.128.75 (sync in progress)
Mar  3 16:48:53 breadnut2 cib: [20693]: notice: cib_server_process_diff: Not applying diff 0.128.75 -> 0.128.76 (sync in progress)
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] WARN: route_ais_message: Sending message to local.crmd failed: ipc delivery failed (rc=-2)
Mar  3 16:48:53 breadnut2 cib: [20693]: info: cib_replace_notify: Replaced: -1.-1.-1 -> 0.128.76 from breadnut
Mar  3 16:48:53 breadnut2 corosync[20685]:   [pcmk  ] WARN: route_ais_message: Sending message to local.crmd failed: ipc delivery failed (rc=-2)
Mar  3 16:48:54 breadnut2 crmd: [20697]: info: do_cib_control: CIB connection established
Mar  3 16:48:54 breadnut2 crmd: [20697]: info: get_cluster_type: Cluster type is: 'openais'.
Mar  3 16:48:54 breadnut2 crmd: [20697]: info: crm_cluster_connect: Connecting to cluster infrastructure: classic openais (with plugin)
Mar  3 16:48:54 breadnut2 crmd: [20697]: info: init_ais_connection_classic: Creating connection to our Corosync plugin
Mar  3 16:48:54 breadnut2 crmd: [20697]: info: init_ais_connection_classic: AIS connection established
Mar  3 16:48:54 breadnut2 corosync[20685]:   [pcmk  ] info: pcmk_ipc: Recorded connection 0x7f9124005240 for crmd/20697
Mar  3 16:48:54 breadnut2 corosync[20685]:   [pcmk  ] info: pcmk_ipc: Sending membership update 192 to crmd
Mar  3 16:48:54 breadnut2 crmd: [20697]: info: get_ais_nodeid: Server details: id=50571456 uname=breadnut2 cname=pcmk
Mar  3 16:48:54 breadnut2 crmd: [20697]: info: init_ais_connection_once: Connection to 'classic openais (with plugin)': established
Mar  3 16:48:54 breadnut2 crmd: [20697]: info: crm_new_peer: Node breadnut2 now has id: 50571456
Mar  3 16:48:54 breadnut2 crmd: [20697]: info: crm_new_peer: Node 50571456 is now known as breadnut2
Mar  3 16:48:54 breadnut2 crmd: [20697]: info: ais_status_callback: status: breadnut2 is now unknown
Mar  3 16:48:54 breadnut2 crmd: [20697]: info: do_ha_control: Connected to the cluster
Mar  3 16:48:54 breadnut2 crmd: [20697]: info: do_started: Delaying start, no membership data (0000000000100000)
Mar  3 16:48:54 breadnut2 crmd: [20697]: info: crmd_init: Starting crmd's mainloop
Mar  3 16:48:54 breadnut2 crmd: [20697]: info: config_query_callback: Shutdown escalation occurs after: 1200000ms
Mar  3 16:48:54 breadnut2 crmd: [20697]: info: config_query_callback: Checking for expired actions every 900000ms
Mar  3 16:48:54 breadnut2 crmd: [20697]: info: config_query_callback: Sending expected-votes=2 to corosync
Mar  3 16:48:54 breadnut2 crmd: [20697]: notice: ais_dispatch_message: Membership 192: quorum acquired
Mar  3 16:48:54 breadnut2 crmd: [20697]: info: crm_new_peer: Node breadnut now has id: 17017024
Mar  3 16:48:54 breadnut2 crmd: [20697]: info: crm_new_peer: Node 17017024 is now known as breadnut
Mar  3 16:48:54 breadnut2 crmd: [20697]: info: ais_status_callback: status: breadnut is now unknown
Mar  3 16:48:54 breadnut2 crmd: [20697]: info: ais_status_callback: status: breadnut is now member (was unknown)
Mar  3 16:48:54 breadnut2 crmd: [20697]: info: crm_update_peer: Node breadnut: id=17017024 state=member (new) addr=r(0) ip(192.168.3.1)  votes=1 born=4 seen=192 proc=00000000000000000000000000111312
Mar  3 16:48:54 breadnut2 crmd: [20697]: notice: crmd_peer_update: Status update: Client breadnut2/crmd now has status [online] (DC=<null>)
Mar  3 16:48:54 breadnut2 crmd: [20697]: info: ais_status_callback: status: breadnut2 is now member (was unknown)
Mar  3 16:48:54 breadnut2 crmd: [20697]: info: crm_update_peer: Node breadnut2: id=50571456 state=member (new) addr=r(0) ip(192.168.3.3)  (new) votes=1 (new) born=192 seen=192 proc=00000000000000000000000000111312 (new)
Mar  3 16:48:54 breadnut2 crmd: [20697]: info: do_started: The local CRM is operational
Mar  3 16:48:54 breadnut2 crmd: [20697]: info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Mar  3 16:48:55 breadnut2 crmd: [20697]: info: ais_dispatch_message: Membership 192: quorum retained
Mar  3 16:48:55 breadnut2 crmd: [20697]: info: te_connect_stonith: Attempting connection to fencing daemon...
Mar  3 16:48:56 breadnut2 crmd: [20697]: info: te_connect_stonith: Connected
Mar  3 16:48:56 breadnut2 crmd: [20697]: info: update_dc: Set DC to breadnut (3.0.5)
Mar  3 16:48:56 breadnut2 crmd: [20697]: info: update_attrd: Connecting to attrd...
Mar  3 16:48:56 breadnut2 crmd: [20697]: info: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: find_hash_entry: Creating hash entry for terminate
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: find_hash_entry: Creating hash entry for shutdown
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: attrd_local_callback: Sending full refresh (origin=crmd)
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: attrd_perform_update: Delaying operation shutdown=<null>: cib not connected
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: attrd_perform_update: Delaying operation terminate=<null>: cib not connected
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: attrd_perform_update: Delaying operation shutdown=<null>: cib not connected
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: attrd_perform_update: Delaying operation terminate=<null>: cib not connected
Mar  3 16:48:56 breadnut2 crmd: [20697]: info: erase_xpath_callback: Deletion of "//node_state[@uname='breadnut2']/transient_attributes": ok (rc=0)
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: crm_new_peer: Node breadnut now has id: 17017024
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: crm_new_peer: Node 17017024 is now known as breadnut
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: find_hash_entry: Creating hash entry for master-p-drbd_vg.test1:0
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: attrd_perform_update: Delaying operation master-p-drbd_vg.test1:0=<null>: cib not connected
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: find_hash_entry: Creating hash entry for master-p-drbd_vg.test1:1
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: attrd_perform_update: Delaying operation master-p-drbd_vg.test1:1=<null>: cib not connected
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: find_hash_entry: Creating hash entry for last-failure-p-libvirtd:0
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: attrd_perform_update: Delaying operation last-failure-p-libvirtd:0=<null>: cib not connected
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: find_hash_entry: Creating hash entry for last-failure-p-libvirtd:1
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: attrd_perform_update: Delaying operation last-failure-p-libvirtd:1=<null>: cib not connected
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: attrd_perform_update: Delaying operation terminate=<null>: cib not connected
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: find_hash_entry: Creating hash entry for fail-count-p-libvirtd:0
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: attrd_perform_update: Delaying operation fail-count-p-libvirtd:0=<null>: cib not connected
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: find_hash_entry: Creating hash entry for fail-count-p-libvirtd:1
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: attrd_perform_update: Delaying operation fail-count-p-libvirtd:1=<null>: cib not connected
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: attrd_perform_update: Delaying operation shutdown=<null>: cib not connected
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: find_hash_entry: Creating hash entry for last-failure-p-vd_vg.test1
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: attrd_perform_update: Delaying operation last-failure-p-vd_vg.test1=<null>: cib not connected
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: find_hash_entry: Creating hash entry for fail-count-p-vd_vg.test1
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: attrd_perform_update: Delaying operation fail-count-p-vd_vg.test1=<null>: cib not connected
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: find_hash_entry: Creating hash entry for probe_complete
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: attrd_perform_update: Delaying operation probe_complete=<null>: cib not connected
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: find_hash_entry: Creating hash entry for last-failure-p-lvm_gh:1
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: attrd_perform_update: Delaying operation last-failure-p-lvm_gh:1=<null>: cib not connected
Mar  3 16:48:56 breadnut2 crmd: [20697]: info: do_lrm_rsc_op: Performing key=10:579:7:5bf60c00-da90-45b3-a6f7-4c5451fa2dde op=p-libvirtd:1_monitor_0 )
Mar  3 16:48:56 breadnut2 lrmd: [20694]: info: rsc:p-libvirtd:1:2: probe
Mar  3 16:48:56 breadnut2 crmd: [20697]: info: do_lrm_rsc_op: Performing key=11:579:7:5bf60c00-da90-45b3-a6f7-4c5451fa2dde op=p-lvm_gh:1_monitor_0 )
Mar  3 16:48:56 breadnut2 lrmd: [20694]: info: rsc:p-lvm_gh:1:3: probe
Mar  3 16:48:56 breadnut2 crmd: [20697]: info: do_lrm_rsc_op: Performing key=12:579:7:5bf60c00-da90-45b3-a6f7-4c5451fa2dde op=p-drbd_vg.test1:1_monitor_0 )
Mar  3 16:48:56 breadnut2 lrmd: [20694]: info: rsc:p-drbd_vg.test1:1:4: probe
Mar  3 16:48:56 breadnut2 crmd: [20697]: info: do_lrm_rsc_op: Performing key=13:579:7:5bf60c00-da90-45b3-a6f7-4c5451fa2dde op=p-vd_vg.test1_monitor_0 )
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: find_hash_entry: Creating hash entry for fail-count-p-lvm_gh:1
Mar  3 16:48:56 breadnut2 lrmd: [20694]: info: rsc:p-vd_vg.test1:5: probe
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: attrd_perform_update: Delaying operation fail-count-p-lvm_gh:1=<null>: cib not connected
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: find_hash_entry: Creating hash entry for last-failure-p-lvm_gh:0
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: attrd_perform_update: Delaying operation last-failure-p-lvm_gh:0=<null>: cib not connected
Mar  3 16:48:56 breadnut2 lrmd: [20694]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
Mar  3 16:48:56 breadnut2 crmd: [20697]: info: do_lrm_rsc_op: Performing key=14:579:7:5bf60c00-da90-45b3-a6f7-4c5451fa2dde op=fencing-bn_monitor_0 )
Mar  3 16:48:56 breadnut2 lrmd: [20694]: info: rsc:fencing-bn:6: probe
Mar  3 16:48:56 breadnut2 lrmd: [20694]: notice: lrmd_rsc_new(): No lrm_rprovider field in message
Mar  3 16:48:56 breadnut2 crmd: [20697]: info: do_lrm_rsc_op: Performing key=15:579:7:5bf60c00-da90-45b3-a6f7-4c5451fa2dde op=fencing-bn2_monitor_0 )
Mar  3 16:48:56 breadnut2 lrmd: [20694]: info: rsc:fencing-bn2:7: probe
Mar  3 16:48:56 breadnut2 libvirtd[20705]: INFO: /usr/sbin/libvirtd not running
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: find_hash_entry: Creating hash entry for fail-count-p-lvm_gh:0
Mar  3 16:48:56 breadnut2 attrd: [20695]: info: attrd_perform_update: Delaying operation fail-count-p-lvm_gh:0=<null>: cib not connected
Mar  3 16:48:56 breadnut2 stonith-ng: [20692]: notice: stonith_device_action: Device fencing-bn not found
Mar  3 16:48:56 breadnut2 stonith-ng: [20692]: info: stonith_command: Processed st_execute from lrmd: rc=-11
Mar  3 16:48:56 breadnut2 stonith-ng: [20692]: notice: stonith_device_action: Device fencing-bn2 not found
Mar  3 16:48:56 breadnut2 stonith-ng: [20692]: info: stonith_command: Processed st_execute from lrmd: rc=-11
Mar  3 16:48:56 breadnut2 crmd: [20697]: info: process_lrm_event: LRM operation p-libvirtd:1_monitor_0 (call=2, rc=7, cib-update=7, confirmed=true) not running
Mar  3 16:48:56 breadnut2 crmd: [20697]: info: process_lrm_event: LRM operation fencing-bn_monitor_0 (call=6, rc=7, cib-update=8, confirmed=true) not running
Mar  3 16:48:56 breadnut2 crmd: [20697]: info: process_lrm_event: LRM operation fencing-bn2_monitor_0 (call=7, rc=7, cib-update=9, confirmed=true) not running
Mar  3 16:48:56 breadnut2 lrmd: [20694]: info: RA output: (p-vd_vg.test1:probe:stderr) error: unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: No such file or directory#012error: failed to connect to the hypervisor
Mar  3 16:48:56 breadnut2 lrmd: [20694]: info: RA output: (p-vd_vg.test1:probe:stderr) error: unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: No such file or directory#012error: failed to connect to the hypervisor
Mar  3 16:48:56 breadnut2 VirtualDomain[20709]: INFO: Virtual domain vg.test1 currently has no state, retrying.
Mar  3 16:48:56 breadnut2 LVM[20706]: INFO: LVM Volume gh is offline
Mar  3 16:48:56 breadnut2 crmd: [20697]: info: process_lrm_event: LRM operation p-lvm_gh:1_monitor_0 (call=3, rc=7, cib-update=10, confirmed=true) not running
Mar  3 16:48:56 breadnut2 crm_attribute: [20772]: info: Invoked: crm_attribute -N breadnut2 -n master-p-drbd_vg.test1:1 -l reboot -D
Mar  3 16:48:56 breadnut2 crmd: [20697]: info: process_lrm_event: LRM operation p-drbd_vg.test1:1_monitor_0 (call=4, rc=7, cib-update=11, confirmed=true) not running
Mar  3 16:48:57 breadnut2 lrmd: [20694]: info: RA output: (p-vd_vg.test1:probe:stderr) error: unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: No such file or directory#012error: failed to connect to the hypervisor
Mar  3 16:48:57 breadnut2 VirtualDomain[20709]: INFO: Virtual domain vg.test1 currently has no state, retrying.
Mar  3 16:48:58 breadnut2 attrd: [20695]: info: cib_connect: Connected to the CIB after 1 signon attempts
Mar  3 16:48:58 breadnut2 attrd: [20695]: info: cib_connect: Sending full refresh
Mar  3 16:48:58 breadnut2 attrd: [20695]: info: attrd_trigger_update: Sending flush op to all hosts for: master-p-drbd_vg.test1:0 (<null>)
Mar  3 16:48:58 breadnut2 attrd: [20695]: info: attrd_trigger_update: Sending flush op to all hosts for: master-p-drbd_vg.test1:1 (<null>)
Mar  3 16:48:58 breadnut2 attrd: [20695]: info: attrd_trigger_update: Sending flush op to all hosts for: last-failure-p-libvirtd:0 (<null>)
Mar  3 16:48:58 breadnut2 attrd: [20695]: info: attrd_trigger_update: Sending flush op to all hosts for: last-failure-p-libvirtd:1 (<null>)
Mar  3 16:48:58 breadnut2 attrd: [20695]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)
Mar  3 16:48:58 breadnut2 attrd: [20695]: info: attrd_trigger_update: Sending flush op to all hosts for: fail-count-p-libvirtd:0 (<null>)
Mar  3 16:48:58 breadnut2 attrd: [20695]: info: attrd_trigger_update: Sending flush op to all hosts for: fail-count-p-libvirtd:1 (<null>)
Mar  3 16:48:58 breadnut2 attrd: [20695]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)
Mar  3 16:48:58 breadnut2 attrd: [20695]: info: attrd_trigger_update: Sending flush op to all hosts for: last-failure-p-vd_vg.test1 (<null>)
Mar  3 16:48:58 breadnut2 attrd: [20695]: info: attrd_trigger_update: Sending flush op to all hosts for: fail-count-p-vd_vg.test1 (<null>)
Mar  3 16:48:58 breadnut2 attrd: [20695]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (<null>)
Mar  3 16:48:58 breadnut2 attrd: [20695]: info: attrd_trigger_update: Sending flush op to all hosts for: last-failure-p-lvm_gh:1 (<null>)
Mar  3 16:48:58 breadnut2 attrd: [20695]: info: attrd_trigger_update: Sending flush op to all hosts for: fail-count-p-lvm_gh:1 (<null>)
Mar  3 16:48:58 breadnut2 attrd: [20695]: info: attrd_trigger_update: Sending flush op to all hosts for: last-failure-p-lvm_gh:0 (<null>)
Mar  3 16:48:58 breadnut2 attrd: [20695]: info: attrd_trigger_update: Sending flush op to all hosts for: fail-count-p-lvm_gh:0 (<null>)
Mar  3 16:48:58 breadnut2 lrmd: [20694]: info: RA output: (p-vd_vg.test1:probe:stderr) error: unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: No such file or directory#012error:
Mar  3 16:48:58 breadnut2 lrmd: [20694]: info: RA output: (p-vd_vg.test1:probe:stderr) failed to connect to the hypervisor
Mar  3 16:48:58 breadnut2 VirtualDomain[20709]: INFO: Virtual domain vg.test1 currently has no state, retrying.
Mar  3 16:48:59 breadnut2 lrmd: [20694]: info: RA output: (p-vd_vg.test1:probe:stderr) error: unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: No such file or directory#012error: failed to connect to the hypervisor
Mar  3 16:48:59 breadnut2 VirtualDomain[20709]: INFO: Virtual domain vg.test1 currently has no state, retrying.
Mar  3 16:49:00 breadnut2 lrmd: [20694]: info: RA output: (p-vd_vg.test1:probe:stderr) error: unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: No such file or directory#012error: failed to connect to the hypervisor
Mar  3 16:49:00 breadnut2 VirtualDomain[20709]: INFO: Virtual domain vg.test1 currently has no state, retrying.
Mar  3 16:49:01 breadnut2 lrmd: [20694]: info: RA output: (p-vd_vg.test1:probe:stderr) error: unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: No such file or directory#012error: failed to connect to the hypervisor
Mar  3 16:49:01 breadnut2 VirtualDomain[20709]: INFO: Virtual domain vg.test1 currently has no state, retrying.
Mar  3 16:49:02 breadnut2 lrmd: [20694]: info: RA output: (p-vd_vg.test1:probe:stderr) error: unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: No such file or directory#012error: failed to connect to the hypervisor
Mar  3 16:49:02 breadnut2 VirtualDomain[20709]: INFO: Virtual domain vg.test1 currently has no state, retrying.
Mar  3 16:49:03 breadnut2 lrmd: [20694]: info: RA output: (p-vd_vg.test1:probe:stderr) error: unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: No such file or directory#012error: failed to connect to the hypervisor
Mar  3 16:49:03 breadnut2 VirtualDomain[20709]: INFO: Virtual domain vg.test1 currently has no state, retrying.
... etc ...
Mar  3 16:49:16 breadnut2 VirtualDomain[20709]: INFO: Virtual domain vg.test1 currently has no state, retrying.
Mar  3 16:49:16 breadnut2 lrmd: [20694]: WARN: p-vd_vg.test1:monitor process (PID 20709) timed out (try 1).  Killing with signal SIGTERM (15).
Mar  3 16:49:16 breadnut2 lrmd: [20694]: WARN: operation monitor[5] on ocf::VirtualDomain::p-vd_vg.test1 for client 20697, its parameters: crm_feature_set=[3.0.5] config=[/etc/libvirt/qemu/vg.test1.xml] CRM_meta_timeout=[20000] migration_transport=[tcp] : pid [20709] timed out
Mar  3 16:49:16 breadnut2 crmd: [20697]: ERROR: process_lrm_event: LRM operation p-vd_vg.test1_monitor_0 (5) Timed Out (timeout=20000ms)
Mar  3 16:49:16 breadnut2 attrd: [20695]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Mar  3 16:49:16 breadnut2 attrd: [20695]: info: attrd_perform_update: Sent update 34: probe_complete=true
Mar  3 16:49:16 breadnut2 crmd: [20697]: info: do_lrm_rsc_op: Performing key=6:580:0:5bf60c00-da90-45b3-a6f7-4c5451fa2dde op=p-vd_vg.test1_stop_0 )
Mar  3 16:49:16 breadnut2 lrmd: [20694]: info: rsc:p-vd_vg.test1:8: stop
Mar  3 16:49:16 breadnut2 crmd: [20697]: info: do_lrm_rsc_op: Performing key=59:580:0:5bf60c00-da90-45b3-a6f7-4c5451fa2dde op=fencing-bn_start_0 )
Mar  3 16:49:16 breadnut2 lrmd: [20694]: info: rsc:fencing-bn:9: start
Mar  3 16:49:16 breadnut2 crmd: [20697]: info: do_lrm_rsc_op: Performing key=13:580:0:5bf60c00-da90-45b3-a6f7-4c5451fa2dde op=p-libvirtd:1_start_0 )
Mar  3 16:49:16 breadnut2 lrmd: [20694]: info: rsc:p-libvirtd:1:10: start
Mar  3 16:49:16 breadnut2 crmd: [20697]: info: do_lrm_rsc_op: Performing key=21:580:0:5bf60c00-da90-45b3-a6f7-4c5451fa2dde op=p-lvm_gh:1_start_0 )
Mar  3 16:49:16 breadnut2 lrmd: [20694]: info: rsc:p-lvm_gh:1:11: start
Mar  3 16:49:16 breadnut2 stonith-ng: [20692]: info: stonith_device_register: Added 'fencing-bn' to the device list (1 active devices)
Mar  3 16:49:16 breadnut2 stonith-ng: [20692]: info: stonith_command: Processed st_device_register from lrmd: rc=0
Mar  3 16:49:16 breadnut2 lrmd: [20694]: info: stonith_api_device_metadata: looking up meatware/heartbeat metadata
Mar  3 16:49:16 breadnut2 crmd: [20697]: info: process_lrm_event: LRM operation fencing-bn_start_0 (call=9, rc=0, cib-update=13, confirmed=true) ok
Mar  3 16:49:16 breadnut2 lrmd: [20694]: info: RA output: (p-vd_vg.test1:stop:stderr) error: unable to connect to '/var/run/libvirt/libvirt-sock', libvirtd may need to be started: No such file or directory#012error: failed to connect to the hypervisor
Mar  3 16:49:16 breadnut2 crmd: [20697]: info: do_lrm_rsc_op: Performing key=60:580:0:5bf60c00-da90-45b3-a6f7-4c5451fa2dde op=fencing-bn_monitor_10000 )
Mar  3 16:49:16 breadnut2 lrmd: [20694]: info: rsc:fencing-bn:12: monitor
Mar  3 16:49:16 breadnut2 LVM[20890]: INFO: Activating volume group gh
Mar  3 16:49:16 breadnut2 stonith: [20934]: info: parse config info info=breadnut
Mar  3 16:49:16 breadnut2 stonith: [20934]: info: meatware device OK.
Mar  3 16:49:16 breadnut2 crmd: [20697]: info: process_lrm_event: LRM operation fencing-bn_monitor_10000 (call=12, rc=0, cib-update=14, confirmed=false) ok
Mar  3 16:49:17 breadnut2 LVM[20890]: INFO: Reading all physical volumes. This may take a while... Found volume group "gh" using metadata type lvm2
Mar  3 16:49:17 breadnut2 LVM[20890]: INFO: 2 logical volume(s) in volume group "gh" now active
Mar  3 16:49:17 breadnut2 crmd: [20697]: info: process_lrm_event: LRM operation p-lvm_gh:1_start_0 (call=11, rc=0, cib-update=15, confirmed=true) ok
Mar  3 16:49:17 breadnut2 crmd: [20697]: info: do_lrm_rsc_op: Performing key=22:580:0:5bf60c00-da90-45b3-a6f7-4c5451fa2dde op=p-lvm_gh:1_monitor_10000 )
Mar  3 16:49:17 breadnut2 lrmd: [20694]: info: rsc:p-lvm_gh:1:13: monitor
Mar  3 16:49:17 breadnut2 crmd: [20697]: info: process_lrm_event: LRM operation p-lvm_gh:1_monitor_10000 (call=13, rc=0, cib-update=16, confirmed=false) ok
Mar  3 16:49:17 breadnut2 libvirtd: 16:49:17.302: 20904: warning : qemudStartup:1331 : Unable to create cgroup for driver: No such device or address
Mar  3 16:49:17 breadnut2 kernel: lo: Disabled Privacy Extensions
Mar  3 16:49:17 breadnut2 kernel: sit0: Disabled Privacy Extensions
Mar  3 16:49:17 breadnut2 kernel: ip6tnl0: Disabled Privacy Extensions
Mar  3 16:49:17 breadnut2 libvirtd: 16:49:17.403: 20904: warning : lxcStartup:2128 : Unable to create cgroup for driver: No such device or address
Mar  3 16:49:17 breadnut2 VirtualDomain[20887]: INFO: Domain vg.test1 already stopped.
Mar  3 16:49:17 breadnut2 crmd: [20697]: info: process_lrm_event: LRM operation p-libvirtd:1_start_0 (call=10, rc=0, cib-update=17, confirmed=true) ok
Mar  3 16:49:17 breadnut2 crmd: [20697]: info: process_lrm_event: LRM operation p-vd_vg.test1_stop_0 (call=8, rc=0, cib-update=18, confirmed=true) ok
Mar  3 16:49:17 breadnut2 crmd: [20697]: info: do_lrm_rsc_op: Performing key=14:580:0:5bf60c00-da90-45b3-a6f7-4c5451fa2dde op=p-libvirtd:1_monitor_10000 )
Mar  3 16:49:17 breadnut2 lrmd: [20694]: info: rsc:p-libvirtd:1:14: monitor
Mar  3 16:49:17 breadnut2 crmd: [20697]: info: process_lrm_event: LRM operation p-libvirtd:1_monitor_10000 (call=14, rc=0, cib-update=19, confirmed=false) ok
Mar  3 16:49:22 breadnut2 crmd: [20697]: info: do_lrm_rsc_op: Performing key=29:581:0:5bf60c00-da90-45b3-a6f7-4c5451fa2dde op=p-drbd_vg.test1:1_start_0 )
Mar  3 16:49:22 breadnut2 lrmd: [20694]: info: rsc:p-drbd_vg.test1:1:15: start
Mar  3 16:49:22 breadnut2 lrmd: [20694]: info: RA output: (p-drbd_vg.test1:1:start:stdout)         allow-two-primaries;
Mar  3 16:49:22 breadnut2 lrmd: [20694]: info: RA output: (p-drbd_vg.test1:1:start:stdout) Found valid meta data in the expected location, 10737414144 bytes into /dev/mapper/gh-vg.test1.
Mar  3 16:49:22 breadnut2 lrmd: [20694]: info: RA output: (p-drbd_vg.test1:1:start:stdout)
Mar  3 16:49:22 breadnut2 kernel: block drbd0: Starting worker thread (from kworker/u:0 [5])
Mar  3 16:49:22 breadnut2 kernel: block drbd0: disk( Diskless -> Attaching )
Mar  3 16:49:22 breadnut2 kernel: block drbd0: Found 25 transactions (25 active extents) in activity log.
Mar  3 16:49:22 breadnut2 kernel: block drbd0: Method to ensure write ordering: flush
Mar  3 16:49:22 breadnut2 kernel: block drbd0: max_segment_size ( = BIO size ) = 65536
Mar  3 16:49:22 breadnut2 kernel: block drbd0: drbd_bm_resize called with capacity == 20970808
Mar  3 16:49:22 breadnut2 kernel: block drbd0: resync bitmap: bits=2621351 words=40959
Mar  3 16:49:22 breadnut2 kernel: block drbd0: size = 10 GB (10485404 KB)
Mar  3 16:49:22 breadnut2 kernel: block drbd0: recounting of set bits took additional 0 jiffies
Mar  3 16:49:22 breadnut2 kernel: block drbd0: 0 KB (0 bits) marked out-of-sync by on disk bit-map.
Mar  3 16:49:22 breadnut2 kernel: block drbd0: disk( Attaching -> Outdated )
Mar  3 16:49:22 breadnut2 lrmd: [20694]: info: RA output: (p-drbd_vg.test1:1:start:stdout)
Mar  3 16:49:22 breadnut2 lrmd: [20694]: info: RA output: (p-drbd_vg.test1:1:start:stdout)
Mar  3 16:49:22 breadnut2 kernel: block drbd0: conn( StandAlone -> Unconnected )
Mar  3 16:49:22 breadnut2 kernel: block drbd0: Starting receiver thread (from drbd0_worker [21048])
Mar  3 16:49:22 breadnut2 kernel: block drbd0: receiver (re)started
Mar  3 16:49:22 breadnut2 kernel: block drbd0: conn( Unconnected -> WFConnection )
Mar  3 16:49:22 breadnut2 crm_attribute: [21069]: info: Invoked: crm_attribute -N breadnut2 -n master-p-drbd_vg.test1:1 -l reboot -D
Mar  3 16:49:22 breadnut2 lrmd: [20694]: info: RA output: (p-drbd_vg.test1:1:start:stdout)
Mar  3 16:49:22 breadnut2 crmd: [20697]: info: process_lrm_event: LRM operation p-drbd_vg.test1:1_start_0 (call=15, rc=0, cib-update=20, confirmed=true) ok
Mar  3 16:49:22 breadnut2 crmd: [20697]: info: do_lrm_rsc_op: Performing key=67:581:0:5bf60c00-da90-45b3-a6f7-4c5451fa2dde op=p-drbd_vg.test1:1_notify_0 )
Mar  3 16:49:22 breadnut2 lrmd: [20694]: info: rsc:p-drbd_vg.test1:1:16: notify
Mar  3 16:49:22 breadnut2 lrmd: [20694]: info: RA output: (p-drbd_vg.test1:1:notify:stdout)
Mar  3 16:49:22 breadnut2 crmd: [20697]: info: process_lrm_event: LRM operation p-drbd_vg.test1:1_notify_0 (call=16, rc=0, cib-update=21, confirmed=true) ok
Mar  3 16:49:22 breadnut2 crmd: [20697]: info: do_lrm_rsc_op: Performing key=80:582:0:5bf60c00-da90-45b3-a6f7-4c5451fa2dde op=p-drbd_vg.test1:1_notify_0 )
Mar  3 16:49:22 breadnut2 lrmd: [20694]: info: rsc:p-drbd_vg.test1:1:17: notify
Mar  3 16:49:22 breadnut2 crmd: [20697]: info: process_lrm_event: LRM operation p-drbd_vg.test1:1_notify_0 (call=17, rc=0, cib-update=22, confirmed=true) ok
Mar  3 16:49:22 breadnut2 crmd: [20697]: info: do_lrm_rsc_op: Performing key=81:582:0:5bf60c00-da90-45b3-a6f7-4c5451fa2dde op=p-drbd_vg.test1:1_notify_0 )
Mar  3 16:49:22 breadnut2 lrmd: [20694]: info: rsc:p-drbd_vg.test1:1:18: notify
Mar  3 16:49:22 breadnut2 crm_attribute: [21157]: info: Invoked: crm_attribute -N breadnut2 -n master-p-drbd_vg.test1:1 -l reboot -D
Mar  3 16:49:22 breadnut2 lrmd: [20694]: info: RA output: (p-drbd_vg.test1:1:notify:stdout)
Mar  3 16:49:22 breadnut2 crmd: [20697]: info: process_lrm_event: LRM operation p-drbd_vg.test1:1_notify_0 (call=18, rc=0, cib-update=23, confirmed=true) ok
Mar  3 16:49:22 breadnut2 kernel: block drbd0: Handshake successful: Agreed network protocol version 95
Mar  3 16:49:22 breadnut2 kernel: block drbd0: conn( WFConnection -> WFReportParams )
Mar  3 16:49:22 breadnut2 kernel: block drbd0: Starting asender thread (from drbd0_receiver [21062])
Mar  3 16:49:22 breadnut2 kernel: block drbd0: data-integrity-alg: <not-used>
Mar  3 16:49:22 breadnut2 kernel: block drbd0: max_segment_size ( = BIO size ) = 65536
Mar  3 16:49:22 breadnut2 kernel: block drbd0: drbd_sync_handshake:
Mar  3 16:49:22 breadnut2 kernel: block drbd0: self 16FF46C6ACAAB338:0000000000000000:C2A87D695EF6CEB0:9B2FF04823DD4AD7 bits:0 flags:0
Mar  3 16:49:22 breadnut2 kernel: block drbd0: peer B35839BF9A81D9DF:16FF46C6ACAAB339:C2A87D695EF6CEB0:9B2FF04823DD4AD7 bits:109 flags:0
Mar  3 16:49:22 breadnut2 kernel: block drbd0: uuid_compare()=-1 by rule 50
Mar  3 16:49:22 breadnut2 kernel: block drbd0: peer( Unknown -> Primary ) conn( WFReportParams -> WFBitMapT ) pdsk( DUnknown -> UpToDate )
Mar  3 16:49:22 breadnut2 kernel: block drbd0: conn( WFBitMapT -> WFSyncUUID )
Mar  3 16:49:22 breadnut2 kernel: block drbd0: helper command: /sbin/drbdadm before-resync-target minor-0
Mar  3 16:49:22 breadnut2 kernel: block drbd0: helper command: /sbin/drbdadm before-resync-target minor-0 exit code 0 (0x0)
Mar  3 16:49:22 breadnut2 kernel: block drbd0: conn( WFSyncUUID -> SyncTarget ) disk( Outdated -> Inconsistent )
Mar  3 16:49:22 breadnut2 kernel: block drbd0: Began resync as SyncTarget (will sync 436 KB [109 bits set]).
Mar  3 16:49:22 breadnut2 kernel: block drbd0: Resync done (total 1 sec; paused 0 sec; 436 K/sec)
Mar  3 16:49:22 breadnut2 kernel: block drbd0: conn( SyncTarget -> Connected ) disk( Inconsistent -> UpToDate )
Mar  3 16:49:22 breadnut2 kernel: block drbd0: helper command: /sbin/drbdadm after-resync-target minor-0
Mar  3 16:49:22 breadnut2 kernel: block drbd0: helper command: /sbin/drbdadm after-resync-target minor-0 exit code 0 (0x0)
Mar  3 16:49:23 breadnut2 crmd: [20697]: info: do_lrm_rsc_op: Performing key=35:583:0:5bf60c00-da90-45b3-a6f7-4c5451fa2dde op=p-drbd_vg.test1:1_monitor_30000 )
Mar  3 16:49:23 breadnut2 lrmd: [20694]: info: rsc:p-drbd_vg.test1:1:19: monitor
Mar  3 16:49:23 breadnut2 attrd: [20695]: info: attrd_trigger_update: Sending flush op to all hosts for: master-p-drbd_vg.test1:1 (10000)
Mar  3 16:49:23 breadnut2 attrd: [20695]: info: attrd_perform_update: Sent update 40: master-p-drbd_vg.test1:1=10000
Mar  3 16:49:23 breadnut2 crmd: [20697]: info: process_lrm_event: LRM operation p-drbd_vg.test1:1_monitor_30000 (call=19, rc=0, cib-update=24, confirmed=false) ok
Mar  3 16:49:23 breadnut2 lrmd: [20694]: info: cancel_op: operation monitor[19] on ocf::drbd::p-drbd_vg.test1:1 for client 20697, its parameters: CRM_meta_clone=[1] CRM_meta_timeout=[20000] CRM_meta_notify_slave_resource=[ ] CRM_meta_notify_active_resource=[ ] CRM_meta_notify_demote_uname=[ ] drbd_resource=[vg.test1] CRM_meta_notify_inactive_resource=[p-drbd_vg.test1:0 p-drbd_vg.test1:1 ] CRM_meta_master_node_max=[1] CRM_meta_notify_stop_resource=[ ] CRM_meta_notify_master_resource=[ ] CRM_meta_clone_node_max=[1] CRM_meta_notify_demote_resource=[ ] CRM_meta_clone cancelled
Mar  3 16:49:23 breadnut2 crmd: [20697]: info: send_direct_ack: ACK'ing resource op p-drbd_vg.test1:1_monitor_30000 from 9:584:0:5bf60c00-da90-45b3-a6f7-4c5451fa2dde: lrm_invoke-lrmd-1299131363-4
Mar  3 16:49:23 breadnut2 crmd: [20697]: info: process_lrm_event: LRM operation p-drbd_vg.test1:1_monitor_30000 (call=19, status=1, cib-update=0, confirmed=true) Cancelled
Mar  3 16:49:23 breadnut2 crmd: [20697]: info: do_lrm_rsc_op: Performing key=82:584:0:5bf60c00-da90-45b3-a6f7-4c5451fa2dde op=p-drbd_vg.test1:1_notify_0 )
Mar  3 16:49:23 breadnut2 lrmd: [20694]: info: rsc:p-drbd_vg.test1:1:20: notify
Mar  3 16:49:23 breadnut2 crmd: [20697]: info: process_lrm_event: LRM operation p-drbd_vg.test1:1_notify_0 (call=20, rc=0, cib-update=26, confirmed=true) ok
Mar  3 16:49:23 breadnut2 crmd: [20697]: info: do_lrm_rsc_op: Performing key=37:584:0:5bf60c00-da90-45b3-a6f7-4c5451fa2dde op=p-drbd_vg.test1:1_promote_0 )
Mar  3 16:49:23 breadnut2 lrmd: [20694]: info: rsc:p-drbd_vg.test1:1:21: promote
Mar  3 16:49:23 breadnut2 lrmd: [20694]: info: RA output: (p-drbd_vg.test1:1:promote:stdout)         allow-two-primaries;
Mar  3 16:49:23 breadnut2 kernel: block drbd0: role( Secondary -> Primary )
Mar  3 16:49:23 breadnut2 lrmd: [20694]: info: RA output: (p-drbd_vg.test1:1:promote:stdout)
Mar  3 16:49:23 breadnut2 crmd: [20697]: info: process_lrm_event: LRM operation p-drbd_vg.test1:1_promote_0 (call=21, rc=0, cib-update=27, confirmed=true) ok
Mar  3 16:49:23 breadnut2 crmd: [20697]: info: do_lrm_rsc_op: Performing key=83:584:0:5bf60c00-da90-45b3-a6f7-4c5451fa2dde op=p-drbd_vg.test1:1_notify_0 )
Mar  3 16:49:23 breadnut2 lrmd: [20694]: info: rsc:p-drbd_vg.test1:1:22: notify
Mar  3 16:49:23 breadnut2 lrmd: [20694]: info: RA output: (p-drbd_vg.test1:1:notify:stdout)
Mar  3 16:49:23 breadnut2 crmd: [20697]: info: process_lrm_event: LRM operation p-drbd_vg.test1:1_notify_0 (call=22, rc=0, cib-update=28, confirmed=true) ok
Mar  3 16:49:23 breadnut2 crmd: [20697]: info: do_lrm_rsc_op: Performing key=38:584:8:5bf60c00-da90-45b3-a6f7-4c5451fa2dde op=p-drbd_vg.test1:1_monitor_20000 )
Mar  3 16:49:23 breadnut2 lrmd: [20694]: info: rsc:p-drbd_vg.test1:1:23: monitor
Mar  3 16:49:23 breadnut2 crmd: [20697]: info: process_lrm_event: LRM operation p-drbd_vg.test1:1_monitor_20000 (call=23, rc=8, cib-update=29, confirmed=false) master
Mar  3 16:49:27 breadnut2 stonith: [21313]: info: parse config info info=breadnut
Mar  3 16:49:27 breadnut2 stonith: [21313]: info: meatware device OK.
Mar  3 16:49:37 breadnut2 stonith: [21334]: info: parse config info info=breadnut
Mar  3 16:49:37 breadnut2 stonith: [21334]: info: meatware device OK.
Mar  3 16:49:47 breadnut2 stonith: [21383]: info: parse config info info=breadnut
Mar  3 16:49:47 breadnut2 stonith: [21383]: info: meatware device OK.
Mar  3 16:49:57 breadnut2 stonith: [21404]: info: parse config info info=breadnut