[Pacemaker] Pacemaker with Xen 4.3 problem
Tobias Reineck
tobias.reineck at hotmail.de
Wed Jul 9 09:01:50 UTC 2014
Hello,
here is the log output:
#############################################################################################################################
2014-07-09T10:49:01.315764+02:00 xen01 crmd[31294]: notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
2014-07-09T10:49:01.479820+02:00 xen01 crm_verify[31299]: notice: crm_log_args: Invoked: crm_verify -V -p
2014-07-09T10:49:17.135725+02:00 xen01 crmd[31359]: notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
2014-07-09T10:49:32.683094+02:00 xen01 crmd[31367]: notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
2014-07-09T10:52:33.063416+02:00 xen01 crmd[31668]: notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
2014-07-09T10:52:33.224051+02:00 xen01 crm_verify[31673]: notice: crm_log_args: Invoked: crm_verify -V -p
2014-07-09T10:52:33.378325+02:00 xen01 pengine[31686]: notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
2014-07-09T10:52:33.466427+02:00 xen01 crmd[3446]: notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
2014-07-09T10:52:33.480118+02:00 xen01 pengine[3445]: notice: unpack_config: On loss of CCM Quorum: Ignore
2014-07-09T10:52:33.480151+02:00 xen01 pengine[3445]: notice: LogActions: Start dnsdhcp#011(xen02.domain.dom)
2014-07-09T10:52:33.480161+02:00 xen01 pengine[3445]: notice: process_pe_message: Calculated Transition 227: /var/lib/pacemaker/pengine/pe-input-240.bz2
2014-07-09T10:52:33.480431+02:00 xen01 crmd[3446]: notice: te_rsc_command: Initiating action 7: monitor dnsdhcp_monitor_0 on xen02.domain.dom
2014-07-09T10:52:33.481059+02:00 xen01 crmd[3446]: notice: te_rsc_command: Initiating action 5: monitor dnsdhcp_monitor_0 on xen01.domain.dom (local)
2014-07-09T10:52:33.586987+02:00 xen01 crmd[3446]: notice: process_lrm_event: Operation dnsdhcp_monitor_0: not running (node=xen01.domain.dom, call=102, rc=7, cib-update=380, confirmed=true)
2014-07-09T10:52:33.611876+02:00 xen01 crmd[3446]: notice: te_rsc_command: Initiating action 4: probe_complete probe_complete-xen01.domain.dom on xen01.domain.dom (local) - no waiting
2014-07-09T10:52:33.810913+02:00 xen01 crmd[3446]: notice: te_rsc_command: Initiating action 6: probe_complete probe_complete-xen02.domain.dom on xen02.domain.dom - no waiting
2014-07-09T10:52:33.813788+02:00 xen01 crmd[3446]: notice: te_rsc_command: Initiating action 10: start dnsdhcp_start_0 on xen02.domain.dom
2014-07-09T10:52:33.975340+02:00 xen01 crmd[3446]: warning: status_from_rc: Action 10 (dnsdhcp_start_0) on xen02.domain.dom failed (target: 0 vs. rc: 1): Error
2014-07-09T10:52:33.975412+02:00 xen01 crmd[3446]: warning: update_failcount: Updating failcount for dnsdhcp on xen02.domain.dom after failed start: rc=1 (update=INFINITY, time=1404895953)
2014-07-09T10:52:33.979271+02:00 xen01 crmd[3446]: notice: abort_transition_graph: Transition aborted by dnsdhcp_start_0 'modify' on xen02.domain.dom: Event failed (magic=0:1;10:227:0:37f37c0c-b063-4225-a380-a41137f7d460, cib=0.94.3, source=match_graph_event:344, 0)
2014-07-09T10:52:33.984242+02:00 xen01 crmd[3446]: warning: update_failcount: Updating failcount for dnsdhcp on xen02.domain.dom after failed start: rc=1 (update=INFINITY, time=1404895953)
2014-07-09T10:52:33.985790+02:00 xen01 crmd[3446]: warning: status_from_rc: Action 10 (dnsdhcp_start_0) on xen02.domain.dom failed (target: 0 vs. rc: 1): Error
2014-07-09T10:52:33.987069+02:00 xen01 crmd[3446]: warning: update_failcount: Updating failcount for dnsdhcp on xen02.domain.dom after failed start: rc=1 (update=INFINITY, time=1404895953)
2014-07-09T10:52:33.988034+02:00 xen01 crmd[3446]: warning: update_failcount: Updating failcount for dnsdhcp on xen02.domain.dom after failed start: rc=1 (update=INFINITY, time=1404895953)
2014-07-09T10:52:33.988729+02:00 xen01 crmd[3446]: notice: run_graph: Transition 227 (Complete=6, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-240.bz2): Stopped
2014-07-09T10:52:33.989334+02:00 xen01 pengine[3445]: notice: unpack_config: On loss of CCM Quorum: Ignore
2014-07-09T10:52:33.990014+02:00 xen01 pengine[3445]: warning: unpack_rsc_op_failure: Processing failed op start for dnsdhcp on xen02.domain.dom: unknown error (1)
2014-07-09T10:52:33.990615+02:00 xen01 pengine[3445]: warning: unpack_rsc_op_failure: Processing failed op start for dnsdhcp on xen02.domain.dom: unknown error (1)
2014-07-09T10:52:33.991355+02:00 xen01 pengine[3445]: notice: LogActions: Recover dnsdhcp#011(Started xen02.domain.dom)
2014-07-09T10:52:33.992005+02:00 xen01 pengine[3445]: notice: process_pe_message: Calculated Transition 228: /var/lib/pacemaker/pengine/pe-input-241.bz2
2014-07-09T10:52:34.040477+02:00 xen01 pengine[3445]: notice: unpack_config: On loss of CCM Quorum: Ignore
2014-07-09T10:52:34.042715+02:00 xen01 pengine[3445]: warning: unpack_rsc_op_failure: Processing failed op start for dnsdhcp on xen02.domain.dom: unknown error (1)
2014-07-09T10:52:34.044920+02:00 xen01 pengine[3445]: warning: unpack_rsc_op_failure: Processing failed op start for dnsdhcp on xen02.domain.dom: unknown error (1)
2014-07-09T10:52:34.047177+02:00 xen01 pengine[3445]: warning: common_apply_stickiness: Forcing dnsdhcp away from xen02.domain.dom after 1000000 failures (max=1000000)
2014-07-09T10:52:34.049493+02:00 xen01 pengine[3445]: notice: LogActions: Recover dnsdhcp#011(Started xen02.domain.dom -> xen01.domain.dom)
2014-07-09T10:52:34.051670+02:00 xen01 crmd[3446]: notice: te_rsc_command: Initiating action 2: stop dnsdhcp_stop_0 on xen02.domain.dom
2014-07-09T10:52:34.054610+02:00 xen01 pengine[3445]: notice: process_pe_message: Calculated Transition 229: /var/lib/pacemaker/pengine/pe-input-242.bz2
2014-07-09T10:52:39.566582+02:00 xen01 crmd[3446]: notice: te_rsc_command: Initiating action 9: start dnsdhcp_start_0 on xen01.domain.dom (local)
2014-07-09T10:52:39.679297+02:00 xen01 lrmd[3443]: notice: operation_finished: dnsdhcp_start_0:31734:stderr [ /root/xen_storage/dns_dhcp/dnsdhcp.xen:24: config parsing error near `dnsdhcp': syntax error, unexpected IDENT, expecting STRING or NUMBER or '[' ]
2014-07-09T10:52:39.680299+02:00 xen01 lrmd[3443]: notice: operation_finished: dnsdhcp_start_0:31734:stderr [ Failed to parse config: Invalid argument ]
2014-07-09T10:52:39.719162+02:00 xen01 crmd[3446]: notice: process_lrm_event: Operation dnsdhcp_start_0: unknown error (node=xen01.domain.dom, call=103, rc=1, cib-update=384, confirmed=true)
2014-07-09T10:52:39.720276+02:00 xen01 crmd[3446]: notice: process_lrm_event: xen01.domain.dom-dnsdhcp_start_0:103 [ /root/xen_storage/dns_dhcp/dnsdhcp.xen:24: config parsing error near `dnsdhcp': syntax error, unexpected IDENT, expecting STRING or NUMBER or '['\nFailed to parse config: Invalid argument\n ]
2014-07-09T10:52:39.721666+02:00 xen01 crmd[3446]: warning: status_from_rc: Action 9 (dnsdhcp_start_0) on xen01.domain.dom failed (target: 0 vs. rc: 1): Error
2014-07-09T10:52:39.722533+02:00 xen01 crmd[3446]: warning: update_failcount: Updating failcount for dnsdhcp on xen01.domain.dom after failed start: rc=1 (update=INFINITY, time=1404895959)
2014-07-09T10:52:39.723280+02:00 xen01 attrd[3444]: notice: attrd_trigger_update: Sending flush op to all hosts for: fail-count-dnsdhcp (INFINITY)
2014-07-09T10:52:39.724021+02:00 xen01 attrd[3444]: notice: attrd_perform_update: Sent update 317: fail-count-dnsdhcp=INFINITY
2014-07-09T10:52:39.724752+02:00 xen01 attrd[3444]: notice: attrd_trigger_update: Sending flush op to all hosts for: last-failure-dnsdhcp (1404895959)
2014-07-09T10:52:39.725445+02:00 xen01 attrd[3444]: notice: attrd_perform_update: Sent update 319: last-failure-dnsdhcp=1404895959
2014-07-09T10:52:39.726159+02:00 xen01 crmd[3446]: notice: abort_transition_graph: Transition aborted by dnsdhcp_start_0 'modify' on xen01.domain.dom: Event failed (magic=0:1;9:229:0:37f37c0c-b063-4225-a380-a41137f7d460, cib=0.94.7, source=match_graph_event:344, 0)
2014-07-09T10:52:39.727152+02:00 xen01 crmd[3446]: warning: update_failcount: Updating failcount for dnsdhcp on xen01.domain.dom after failed start: rc=1 (update=INFINITY, time=1404895959)
2014-07-09T10:52:39.727872+02:00 xen01 crmd[3446]: warning: status_from_rc: Action 9 (dnsdhcp_start_0) on xen01.domain.dom failed (target: 0 vs. rc: 1): Error
2014-07-09T10:52:39.730441+02:00 xen01 crmd[3446]: warning: update_failcount: Updating failcount for dnsdhcp on xen01.domain.dom after failed start: rc=1 (update=INFINITY, time=1404895959)
2014-07-09T10:52:39.731249+02:00 xen01 crmd[3446]: warning: update_failcount: Updating failcount for dnsdhcp on xen01.domain.dom after failed start: rc=1 (update=INFINITY, time=1404895959)
2014-07-09T10:52:39.731907+02:00 xen01 crmd[3446]: notice: abort_transition_graph: Transition aborted by status-1-fail-count-dnsdhcp, fail-count-dnsdhcp=INFINITY: Transient attribute change (create cib=0.94.8, source=te_update_diff:391, path=/cib/status/node_state[@id='1']/transient_attributes[@id='1']/instance_attributes[@id='status-1'], 0)
2014-07-09T10:52:39.733108+02:00 xen01 crmd[3446]: notice: run_graph: Transition 229 (Complete=3, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-242.bz2): Stopped
2014-07-09T10:52:39.734870+02:00 xen01 pengine[3445]: notice: unpack_config: On loss of CCM Quorum: Ignore
2014-07-09T10:52:39.736966+02:00 xen01 pengine[3445]: warning: unpack_rsc_op_failure: Processing failed op start for dnsdhcp on xen01.domain.dom: unknown error (1)
2014-07-09T10:52:39.737874+02:00 xen01 pengine[3445]: warning: unpack_rsc_op_failure: Processing failed op start for dnsdhcp on xen01.domain.dom: unknown error (1)
2014-07-09T10:52:39.738615+02:00 xen01 pengine[3445]: warning: unpack_rsc_op_failure: Processing failed op start for dnsdhcp on xen02.domain.dom: unknown error (1)
2014-07-09T10:52:39.739491+02:00 xen01 pengine[3445]: warning: common_apply_stickiness: Forcing dnsdhcp away from xen01.domain.dom after 1000000 failures (max=1000000)
2014-07-09T10:52:39.740337+02:00 xen01 pengine[3445]: warning: common_apply_stickiness: Forcing dnsdhcp away from xen02.domain.dom after 1000000 failures (max=1000000)
2014-07-09T10:52:39.741456+02:00 xen01 pengine[3445]: notice: LogActions: Stop dnsdhcp#011(xen01.domain.dom)
2014-07-09T10:52:39.742298+02:00 xen01 pengine[3445]: notice: process_pe_message: Calculated Transition 230: /var/lib/pacemaker/pengine/pe-input-243.bz2
2014-07-09T10:52:39.743362+02:00 xen01 crmd[3446]: notice: te_rsc_command: Initiating action 2: stop dnsdhcp_stop_0 on xen01.domain.dom (local)
2014-07-09T10:52:45.211574+02:00 xen01 Xen(dnsdhcp)[31780]: INFO: Xen domain dnsdhcp already stopped.
2014-07-09T10:52:45.230365+02:00 xen01 lrmd[3443]: notice: operation_finished: dnsdhcp_stop_0:31780:stderr [ dnsdhcp is an invalid domain identifier (rc=-6) ]
2014-07-09T10:52:45.232665+02:00 xen01 crmd[3446]: notice: process_lrm_event: Operation dnsdhcp_stop_0: ok (node=xen01.domain.dom, call=104, rc=0, cib-update=386, confirmed=true)
2014-07-09T10:52:45.235199+02:00 xen01 crmd[3446]: notice: run_graph: Transition 230 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-243.bz2): Complete
2014-07-09T10:52:45.236029+02:00 xen01 crmd[3446]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
#############################################################################################################################
here is the Xen VM config file:
#############################################################################################################################
builder='hvm'
name='dnsdhcp'
vcpus='1'
cpus='1'
memory='768'
disk=['file:/root/xen_storage/dns_dhcp/dns_dhcp.img,xvda,w']
vif=['type=paravirtualized, bridge=xenbr0, model=e1000, vifname=dns_dhcp, mac=00:16:3E:52:4C:38']
boot='c'
acpi='1'
apic='1'
viridian='1'
stdvga='0'
vnc='1'
vnclisten='0.0.0.0'
sdl='0'
usbdevice='tablet'
xen_platform_pci='1'
keymap='de'
on_poweroff='destroy'
on_reboot='restart'
on_crash='restart'
#############################################################################################################################
I don't know why Pacemaker reports the following:
2014-07-09T10:52:39.679297+02:00 xen01 lrmd[3443]: notice: operation_finished: dnsdhcp_start_0:31734:stderr [ /root/xen_storage/dns_dhcp/dnsdhcp.xen:24: config parsing error near `dnsdhcp': syntax error, unexpected IDENT, expecting STRING or NUMBER or '[' ]
and
2014-07-09T10:52:33.990014+02:00 xen01 pengine[3445]: warning: unpack_rsc_op_failure: Processing failed op start for dnsdhcp on xen02.domain.dom: unknown error (1)
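As a side note, the same error should be reproducible outside Pacemaker by letting xl parse the file directly (just a guess at how to narrow it down; the path is taken from the log line above, and -n is meant as xl's dry-run switch, so nothing is actually started):

# dry run: only parse the config, do not create the domain
xl create -n /root/xen_storage/dns_dhcp/dnsdhcp.xen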
Best regards
T. Reineck
Date: Wed, 9 Jul 2014 09:37:10 +0200
From: alxgomz at gmail.com
To: pacemaker at oss.clusterlabs.org
Subject: Re: [Pacemaker] Pacemaker with Xen 4.3 problem
Actually I did it for the stonith resource agent external:xen0.
xm and xl are supposed to be semantically very close, and as far as I can see the ocf:heartbeat:Xen agent doesn't use any xm command that shouldn't also work with xl.
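If you want to double-check that on your installation (assuming the usual resource-agents path), you can list every place the agent invokes xm:

# show every xm invocation in the heartbeat Xen agent
grep -n '\bxm\b' /usr/lib/ocf/resource.d/heartbeat/Xen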
What error do you have when using xl instead of xm?
Regards.
2014-07-09 8:39 GMT+02:00 Tobias Reineck <tobias.reineck at hotmail.de>:
Hello,
do you mean the "Xen" script in /usr/lib/ocf/resource.d/heartbeat/?
I also tried replacing all "xm" with "xl", but with no success.
Is it possible for you to show me your Xen RA resource?
Best regards
T. Reineck
Date: Tue, 8 Jul 2014 22:27:59 +0200
From: alxgomz at gmail.com
To: pacemaker at oss.clusterlabs.org
Subject: Re: [Pacemaker] Pacemaker with Xen 4.3 problem
IIRC the Xen RA uses 'xm'. However, fixing the RA is trivial and worked for me (if you're using the same RA).
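Roughly like this (an untested sketch, assuming the agent calls xm literally rather than through a shell variable; keep a backup so you can compare against later resource-agents updates):

# back up the agent, then swap xm for xl
cp /usr/lib/ocf/resource.d/heartbeat/Xen /usr/lib/ocf/resource.d/heartbeat/Xen.orig
sed -i 's/\bxm\b/xl/g' /usr/lib/ocf/resource.d/heartbeat/Xen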
On 2014-07-08 21:39, "Tobias Reineck" <tobias.reineck at hotmail.de> wrote:
Hello,
I am trying to build a Xen HA cluster with Pacemaker/Corosync.
Xen 4.3 works on all nodes, and Xen live migration also works fine.
Pacemaker also works with the cluster virtual IP.
But when I try to bring a Xen OCF heartbeat resource online, an error appears:
######################
Failed actions:
xen_dns_ha_start_0 on xen01.domain.dom 'unknown error' (1): call=31, status=complete, last-rc-change='Sun Jul 6 15:02:25 2014', queued=0ms, exec=555ms
xen_dns_ha_start_0 on xen02.domain.dom 'unknown error' (1): call=10, status=complete, last-rc-change='Sun Jul 6 15:15:09 2014', queued=0ms, exec=706ms
######################
I added the resource with the command
crm configure primitive xen_dns_ha ocf:heartbeat:Xen \
params xmfile="/root/xen_storage/dns_dhcp/dns_dhcp.xen" \
op monitor interval="10s" \
op start interval="0s" timeout="30s" \
op stop interval="0s" timeout="300s"
In /var/log/messages, the following error is printed:
2014-07-08T21:09:19.885239+02:00 xen01 lrmd[3443]: notice: operation_finished: xen_dns_ha_stop_0:18214:stderr [ Error: Unable to connect to xend: No such file or directory. Is xend running? ]
I use Xen 4.3 with the xl toolstack, without xend.
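For reference, I understand the agent can also be traced by hand outside Pacemaker, roughly like this (a sketch; OCF_RESKEY_xmfile matches the xmfile parameter from the primitive above, OCF_ROOT is the standard OCF base directory):

export OCF_ROOT=/usr/lib/ocf
export OCF_RESKEY_xmfile=/root/xen_storage/dns_dhcp/dns_dhcp.xen
bash -x /usr/lib/ocf/resource.d/heartbeat/Xen start   # trace shows whether xm or xl gets called
echo $?                                               # OCF exit code, 0 means the start succeeded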
Is it possible to use Pacemaker with Xen 4.3?
Can anybody please help me?
Best regards
T. Reineck
_______________________________________________
Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker
Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org