[Pacemaker] Fwd: VirtualDomain broken for live migration.
Cédric Dufour - Idiap Research Institute
cedric.dufour at idiap.ch
Tue Aug 19 09:34:48 CEST 2014
Hello,
I don't know about the "--live" flag not being available any longer (which version of Libvirt are you using?) but maybe I can help with the "email" part.
Until now, I had been using a Pacemaker resource *group* made of a "VirtualDomain" primitive and a "MailTo" primitive (see the sketch below). It did work - notification e-mails were sent whenever VMs moved - but with the following caveats:
- any error in the "MailTo" primitive would be considered "critical" (which they aren't, in my opinion) by Pacemaker, resulting in node fencing
- in the context of a large cluster (several hundred VMs), each VM would result in two resources, thus doubling the CIB size/load
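For illustration, that per-VM setup looked something like the following (shown with pcs; all resource and domain names are hypothetical):

  pcs resource create vm-example ocf:heartbeat:VirtualDomain \
      config=/etc/libvirt/qemu/example.xml --group grp-vm-example
  pcs resource create vm-example-mail ocf:heartbeat:MailTo \
      email=ops@example.org subject="[SYSTEM:HA][VM:example]" --group grp-vm-example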
I recently merged the "VirtualDomain" and "MailTo" resource agents into a single "LibvirtQemu" RA (see attached file).
This new RA sends notification e-mails which: 1. provide better information about what exactly happened (e.g. graceful vs. forced stop, or live migration); 2. do not result in node fencing in case of e-mail problems (which fits my requirements, but maybe not yours).
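For example, a live migration results in a notification along these lines (hostnames hypothetical; the format is taken from the attached script):

  Subject: [SYSTEM:HA][VM:example] 2014-08-19 09:00:00 MIGRATE on node1 (to node2)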
Beware that I also simplified the code by assuming a local qemu hypervisor (the "hypervisor" parameter is gone); I did so because I experienced strange delays when the "VirtualDomain" RA ran the "virsh ... uri" command to acquire a sensible default value for that parameter (which always resulted in "qemu:///system").
Since it is a simple bash script, you can easily modify it to fit your requirements (or adapt it to missing/new Libvirt flags). It should be installed in .../resource.d/my-custom-ra/..., with the Pacemaker XML updated from provider="heartbeat" to provider="my-custom-ra".
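For instance (paths and provider name are examples only; adjust to your distribution):

  install -D -m 0755 LibvirtQemu /usr/lib/ocf/resource.d/my-custom-ra/LibvirtQemu
  pcs resource create vm-example ocf:my-custom-ra:LibvirtQemu \
      config=/etc/libvirt/qemu/example.xml email=ops@example.org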
Hope it helps.
Best,
Cédric
On 18/08/14 22:37, Steven Hale wrote:
> Dear all,
>
> I'm in the process of setting up my first four-node cluster. I'm
> using CentOS7 with PCS/Pacemaker/Corosync.
>
> I've got everything set up with shared storage using GlusterFS. The
> cluster is running and I'm in the process of adding resources. My
> intention for the cluster is to use it to host virtual machines. I
> want the cluster to be able to live-migrate VMs between hosts. I'm
> not interested in monitoring resources inside the guests, just knowing
> that the guest is running or not is fine.
>
> I've got all the virtualization working with libvirt using KVM. Live
> migration works fine. Now I'm trying to make it work through the
> cluster.
>
> I am using the VirtualDomain resource in heartbeat. I can add and
> remove VMs. It works. But the live migration feature is broken.
> Looking at the source, the fault is on this line:
>
> virsh ${VIRSH_OPTIONS} migrate --live $DOMAIN_NAME ${remoteuri} ${migrateuri}
>
> I guess virsh must have changed at some point, because the "--live"
> flag does not exist any more. I can make it work with the following
> change:
>
> virsh ${VIRSH_OPTIONS} migrate --p2p --tunnelled $DOMAIN_NAME ${remoteuri} ${migrateuri}
>
> This works, at least for my case where I'm tunnelling the migration
> over SSH. But it's not a real bug fix because it's going to need
> extra logic somewhere to determine whether it needs to add the
> "--tunnelled" flag or not, and whatever other flags are required.
>
> I see that the VirtualDomain resource hasn't been worked on in over
> four years. Similarly, the Wiki page has had no updates in this time.
>
> http://www.linux-ha.org/wiki/VirtualDomain_%28resource_agent%29
>
> Is this project still in active development? Is anyone actually
> working on this? While I could do the work to fix the VirtualDomain
> resource to work with the latest version of virsh, I don't see the
> point if the project is dead. I gather Heartbeat became what is now
> Pacemaker, but there doesn't seem to be a new up-to-date version of
> VirtualDomain included with Pacemaker.
>
> Indeed even the Pacemaker documentation seems completely out of date.
> I spent hours working with ClusterMon and these pages
>
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/ch07.html
> https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/High_Availability_Add-On_Reference/s1-eventnotification-HAAR.html
>
> just trying to get my cluster to send notification emails. It was
> only when I looked at the ClusterMon source and the man page for
> crm_mon that I realised the documentation is completely wrong and
> ClusterMon has no ability at all to send emails. The "extra_options"
> field lists options that crm_mon doesn't even show as supported!
>
> What does everybody else use for managing virtual machines on a
> Pacemaker cluster? If heartbeat VirtualDomain is no longer supported,
> can anyone point me in the direction of something that is still in
> development?
>
> Thanks for any help and advice anyone can offer.
>
> Steve.
>
-------------- next part --------------
#!/bin/bash
#
# License: GNU General Public License (GPL)
#
# Resource Agent for domains managed by the libvirt API.
# Requires a running libvirt daemon (libvirtd).
#
# (c) 2008-2010 Florian Haas, Dejan Muhamedagic,
# and Linux-HA contributors
#
# 2014.08.11: Cedric Dufour <cedric.dufour at idiap.ch>
# Simplified version of 'VirtualDomain' OCF script.
# (Partially) integrated 'MailTo' OCF script
#
# Usage: ${0} {start|stop|status|monitor|migrate_to|migrate_from|meta-data|validate-all}
#
#######################################################################
# Initialization:
: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
# Defaults
OCF_RESKEY_force_stop_default=0
OCF_RESKEY_email_subject_default='[SYSTEM:HA][VM:%domain_name%]'
: ${OCF_RESKEY_force_stop=${OCF_RESKEY_force_stop_default}}
: ${OCF_RESKEY_email_subject=${OCF_RESKEY_email_subject_default}}
#######################################################################
usage() {
echo "USAGE: ${0##*/} {start|stop|status|monitor|migrate_to|migrate_from|meta-data|validate-all}"
}
meta_data() {
cat <<EOF
<?xml version="1.0"?>
<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
<resource-agent name="LibvirtQemu">
<version>1.1</version>
<longdesc lang="en">
Resource agent for a libvirt (qemu) virtual domain.
</longdesc>
<shortdesc lang="en">Manages qemu virtual domains through the libvirt virtualization framework</shortdesc>
<parameters>
<parameter name="config" unique="1" required="1">
<longdesc lang="en">
Absolute path to the libvirt (qemu) configuration file (corresponding to the desired virtual domain).
</longdesc>
<shortdesc lang="en">Libvirt (qemu) configuration file</shortdesc>
<content type="string" default="" />
</parameter>
<parameter name="force_stop" unique="0" required="0">
<longdesc lang="en">
Always forcefully shut down ("destroy") the domain on stop. The default
behavior is to resort to a forceful shutdown only after a graceful
shutdown attempt has failed. You should only set this to true if
your virtual domain (or your virtualization backend) does not support
graceful shutdown.
</longdesc>
<shortdesc lang="en">Always force shutdown on stop</shortdesc>
<content type="boolean" default="${OCF_RESKEY_force_stop_default}" />
</parameter>
<parameter name="migration_transport" unique="0" required="0">
<longdesc lang="en">
Transport used to connect to the remote hypervisor while
migrating. Please refer to the libvirt documentation for details on
transports available. If this parameter is omitted, the resource will
use libvirt's default transport to connect to the remote hypervisor.
</longdesc>
<shortdesc lang="en">Remote hypervisor transport</shortdesc>
<content type="string" default="" />
</parameter>
<parameter name="migration_network_suffix" unique="0" required="0">
<longdesc lang="en">
Use a dedicated migration network. The migration URI is composed by
adding this parameter's value to the end of the node name. If the node
name happens to be an FQDN (as opposed to an unqualified host name),
insert the suffix immediately prior to the first period (.) in the FQDN.
Note: Be sure this composed host name is locally resolvable and the
associated IP is reachable through the favored network.
</longdesc>
<shortdesc lang="en">Migration network host name suffix</shortdesc>
<content type="string" default="" />
</parameter>
<parameter name="monitor_scripts" unique="0" required="0">
<longdesc lang="en">
To additionally monitor services within the virtual domain, add this
parameter with a list of scripts to monitor.
Note: when monitor scripts are used, the start and migrate_from operations
will complete only when all monitor scripts have completed successfully.
Be sure to set the timeout of these operations to accommodate this delay.
</longdesc>
<shortdesc lang="en">Space-separated list of monitor scripts</shortdesc>
<content type="string" default="" />
</parameter>
<parameter name="email" unique="0" required="0">
<longdesc lang="en">
Space-separated list of operators' E-mail addresses (to send status notifications to).
</longdesc>
<shortdesc lang="en">Space-separated E-mail addresses</shortdesc>
<content type="string" default="" />
</parameter>
<parameter name="email_subject" unique="0" required="0">
<longdesc lang="en">
The subject of the status notification E-mails.
The '%domain_name%' macro shall be replaced with the actual virtual domain name.
</longdesc>
<shortdesc lang="en">E-mail subject</shortdesc>
<content type="string" default="[SYSTEM:HA][VM:%domain_name%]" />
</parameter>
</parameters>
<actions>
<action name="start" timeout="30" />
<action name="stop" timeout="90" />
<action name="status" depth="0" timeout="30" interval="60" />
<action name="monitor" depth="0" timeout="30" interval="60" />
<action name="migrate_from" timeout="90" />
<action name="migrate_to" timeout="90" />
<action name="meta-data" timeout="5" />
<action name="validate-all" timeout="5" />
</actions>
</resource-agent>
EOF
}
# Options to be passed to virsh
VIRSH_OPTIONS="--quiet"
# State file where to record the domain name
STATEFILE="${HA_RSCTMP}/LibvirtQemu-${OCF_RESOURCE_INSTANCE}.state"
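# Send a notification e-mail to the configured recipients; the first (and
# only) argument is used both as the subject and the body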
LibvirtQemu_EmailSend() {
${MAILCMD} -s "${1}" "${OCF_RESKEY_email}" << EOF
${1}
EOF
return $?
}
LibvirtQemu_Define() {
local virsh_output
local domain_name
# Note: passing in the domain name from outside the script is
# intended for testing and debugging purposes only. Don't do this
# in production, instead let the script figure out the domain name
# from the config file. You have been warned.
if [ -z "${DOMAIN_NAME}" ]; then
# Spin until we have a domain name
while true; do
virsh_output="$(virsh ${VIRSH_OPTIONS} define ${OCF_RESKEY_config})"
domain_name="$(echo "${virsh_output}" | sed -e 's/Domain \(.*\) defined from .*$/\1/')"
[ -n "${domain_name}" ] && break
ocf_log debug "Domain not defined yet; retrying."
sleep 1
done
echo "${domain_name}" > "${STATEFILE}"
ocf_log info "Domain name '${domain_name}' saved to state file '${STATEFILE}'."
else
ocf_log warn "Domain name '${DOMAIN_NAME}' already defined; overriding configuration file '${OCF_RESKEY_config}' (this should NOT be done in production!)."
fi
}
LibvirtQemu_Cleanup_Statefile() {
rm -f "${STATEFILE}"
[ $? -ne 0 ] && ocf_log warn "Failed to remove state file '${STATEFILE}' during '${__OCF_ACTION}'."
}
LibvirtQemu_Status() {
local try=0
local status
rc=${OCF_ERR_GENERIC}
status='no state'
while [ "${status}" == 'no state' ]; do
try=$(( ${try} + 1 ))
status="$(virsh ${VIRSH_OPTIONS} domstate ${DOMAIN_NAME})"
case "${status}" in
'shut off')
# shut off: domain is defined, but not started
ocf_log debug "Domain '${DOMAIN_NAME}' is currently in state '${status}'."
rc=${OCF_NOT_RUNNING}
;;
'running'|'paused'|'idle'|'in shutdown'|'blocked')
# running: domain is currently actively consuming cycles
# paused: domain is paused (suspended)
# idle: domain is running but idle
# in shutdown: domain is being (gracefully) shut down
# blocked: synonym for idle used by legacy Xen versions
ocf_log debug "Domain '${DOMAIN_NAME}' is currently in state '${status}'."
rc=${OCF_SUCCESS}
;;
''|'no state')
# Empty string may be returned when virsh does not
# receive a reply from libvirtd.
# "no state" may occur when the domain is currently
# being migrated (on the migration target only), or
# whenever virsh can't reliably obtain the domain
# state.
status='no state'
if [ "${__OCF_ACTION}" == 'stop' ] && [ ${try} -ge 3 ]; then
# During the stop operation, we want to bail out
# quickly, so as to be able to force-stop (destroy)
# the domain if necessary.
ocf_log err "Domain '${DOMAIN_NAME}' has no state during stop operation; bailing out."
return ${OCF_ERR_GENERIC}
else
# During all other actions, we just wait and try
# again, relying on the CRM/LRM to time us out if
# this takes too long.
ocf_log info "Domain '${DOMAIN_NAME}' currently has no state; retrying."
sleep 1
fi
;;
*)
# any other output is unexpected.
ocf_log err "Domain '${DOMAIN_NAME}' has unknown state ('${status}')!"
;;
esac
done
return ${rc}
}
LibvirtQemu_Start() {
if LibvirtQemu_Status; then
ocf_log info "Domain '${DOMAIN_NAME}' is already running."
return ${OCF_SUCCESS}
fi
virsh ${VIRSH_OPTIONS} start ${DOMAIN_NAME}
rc=$?
if [ ${rc} -ne 0 ]; then
ocf_log error "Failed to start domain '${DOMAIN_NAME}'."
return ${OCF_ERR_GENERIC}
fi
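# Wait for the domain to come up (and its monitor scripts, if any, to
# succeed); the CRM/LRM will time us out if this takes too long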
while ! LibvirtQemu_Monitor; do
sleep 1
done
if [ -n "${OCF_RESKEY_email}" ]; then
LibvirtQemu_EmailSend "${OCF_RESKEY_email_subject//%domain_name%/${DOMAIN_NAME}} $(date +'%Y-%m-%d %H:%M:%S') START on $(uname -n)"
fi
return ${OCF_SUCCESS}
}
LibvirtQemu_Stop() {
local status
local shutdown_timeout
local out ex
LibvirtQemu_Status
status=$?
case ${status} in
${OCF_SUCCESS})
if ! ocf_is_true ${OCF_RESKEY_force_stop}; then
# Issue a graceful shutdown request
ocf_log info "Issuing graceful shutdown request for domain '${DOMAIN_NAME}'."
virsh ${VIRSH_OPTIONS} shutdown ${DOMAIN_NAME}
# The "shutdown_timeout" we use here is the operation
# timeout specified in the CIB, minus 5 seconds
shutdown_timeout=$(( ${SECONDS} + (${OCF_RESKEY_CRM_meta_timeout}/1000)-5 ))
# Loop on status until we reach ${shutdown_timeout}
while [ ${SECONDS} -lt ${shutdown_timeout} ]; do
LibvirtQemu_Status
status=$?
case ${status} in
${OCF_NOT_RUNNING})
# This was a graceful shutdown. Clean
# up and return.
LibvirtQemu_Cleanup_Statefile
if [ -n "${OCF_RESKEY_email}" ]; then
LibvirtQemu_EmailSend "${OCF_RESKEY_email_subject//%domain_name%/${DOMAIN_NAME}} $(date +'%Y-%m-%d %H:%M:%S') STOP (graceful) on $(uname -n)"
fi
return ${OCF_SUCCESS}
;;
${OCF_SUCCESS})
# Domain is still running, keep
# waiting (until shutdown_timeout
# expires)
sleep 1
;;
*)
# Something went wrong. Bail out and
# resort to forced stop (destroy).
break;
;;
esac
done
fi
;;
${OCF_NOT_RUNNING})
ocf_log info "Domain '${DOMAIN_NAME}' already stopped."
return ${OCF_SUCCESS}
;;
esac
# OK. Now if the above graceful shutdown hasn't worked, kill
# off the domain with destroy. If that too does not work,
# have the LRM time us out.
ocf_log info "Issuing forced shutdown (destroy) request for domain '${DOMAIN_NAME}'."
out="$(virsh ${VIRSH_OPTIONS} destroy ${DOMAIN_NAME} 2>&1)"
ex=$?
echo "${out}" >&2
# unconditionally clean up.
LibvirtQemu_Cleanup_Statefile
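# Match on the concatenated exit code and output: a "domain is not running"
# error means the domain was already gone (the intended outcome); any other
# non-zero exit is a failure; on a zero exit, wait until the domain actually
# reports not running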
case ${ex}${out} in
*'error:'*'domain is not running'*)
: # unexpected path to the intended outcome, all is well
;;
[!0]*)
return ${OCF_ERR_GENERIC}
;;
0*)
while [ ${status} != ${OCF_NOT_RUNNING} ]; do
LibvirtQemu_Status
status=$?
done
;;
esac
if [ -n "${OCF_RESKEY_email}" ]; then
LibvirtQemu_EmailSend "${OCF_RESKEY_email_subject//%domain_name%/${DOMAIN_NAME}} $(date +'%Y-%m-%d %H:%M:%S') STOP (forced) on $(uname -n)"
fi
return ${OCF_SUCCESS}
}
LibvirtQemu_Migrate_To() {
local target_node
local remoteuri
local transport_suffix
local migrateuri
local migrateport
local migrate_target
target_node="${OCF_RESKEY_CRM_meta_migrate_target}"
if LibvirtQemu_Status; then
# Find out the remote hypervisor to connect to. That is, turn
# something like "qemu://foo:9999/system" into
# "qemu+tcp://bar:9999/system"
if [ -n "${OCF_RESKEY_migration_transport}" ]; then
transport_suffix="+${OCF_RESKEY_migration_transport}"
fi
# A typical migration URI via a special migration network looks
# like "tcp://bar-mig:49152". The port would be randomly chosen
# by libvirt from the range 49152-49215 if omitted, at least since
# version 0.7.4 ...
if [ -n "${OCF_RESKEY_migration_network_suffix}" ]; then
# Hostname might be a FQDN
migrate_target=$(echo ${target_node} | sed -e "s,^\([^.]\+\),\1${OCF_RESKEY_migration_network_suffix},")
# For quite ancient libvirt versions a migration port is needed
# and the URI must not contain the "//". Newer versions can handle
# the "bad" URI.
migrateport=$(( 49152 + $(ocf_maybe_random) % 64 ))
migrateuri="tcp:${migrate_target}:${migrateport}"
fi
remoteuri="qemu${transport_suffix}://${target_node}/system"
# OK, we know where to connect to. Now do the actual migration.
ocf_log info "Migrating domain '${DOMAIN_NAME}' to node '${target_node}' ('${remoteuri}' via '${migrateuri}')."
virsh ${VIRSH_OPTIONS} migrate --live ${DOMAIN_NAME} ${remoteuri} ${migrateuri}
rc=$?
if [ ${rc} -ne 0 ]; then
ocf_log err "Migration of domain '${DOMAIN_NAME}' to node '${target_node}' ('${remoteuri}' via '${migrateuri}') failed: ${rc}"
return ${OCF_ERR_GENERIC}
else
ocf_log info "Migration of domain '${DOMAIN_NAME}' to node '${target_node}' succeeded."
LibvirtQemu_Cleanup_Statefile
if [ -n "${OCF_RESKEY_email}" ]; then
LibvirtQemu_EmailSend "${OCF_RESKEY_email_subject//%domain_name%/${DOMAIN_NAME}} $(date +'%Y-%m-%d %H:%M:%S') MIGRATE on $(uname -n) (to ${target_node})"
fi
return ${OCF_SUCCESS}
fi
else
ocf_log err "${DOMAIN_NAME}: migrate_to: Not active locally!"
return ${OCF_ERR_GENERIC}
fi
}
LibvirtQemu_Migrate_From() {
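# On the migration target, wait until the incoming domain is up and passes
# monitoring before reporting success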
while ! LibvirtQemu_Monitor; do
sleep 1
done
ocf_log info "Migration of domain '${DOMAIN_NAME}' from '${OCF_RESKEY_CRM_meta_migrate_source}' succeeded."
if [ -n "${OCF_RESKEY_email}" ]; then
LibvirtQemu_EmailSend "${OCF_RESKEY_email_subject//%domain_name%/${DOMAIN_NAME}} $(date +'%Y-%m-%d %H:%M:%S') MIGRATE on $(uname -n) (from ${OCF_RESKEY_CRM_meta_migrate_source})"
fi
return ${OCF_SUCCESS}
}
LibvirtQemu_Monitor() {
# First, check the domain status. If that returns anything other
# than ${OCF_SUCCESS}, something is definitely wrong.
LibvirtQemu_Status
rc=$?
if [ ${rc} -eq ${OCF_SUCCESS} ]; then
# OK, the generic status check turned out fine. Now, if we
# have monitor scripts defined, run them one after another.
for script in ${OCF_RESKEY_monitor_scripts}; do
script_output="$( ${script} 2>&1)"
script_rc=$?
if [ ${script_rc} -ne ${OCF_SUCCESS} ]; then
# A monitor script returned a non-success exit
# code. Stop iterating over the list of scripts, log a
# warning message, and propagate ${OCF_ERR_GENERIC}.
ocf_log warn "Monitor script '${script}' for domain '${DOMAIN_NAME}' failed; '${script_output}' [rc=${script_rc}]"
rc=${OCF_ERR_GENERIC}
break
else
ocf_log debug "Monitor script '${script}' for domain '${DOMAIN_NAME}' succeeded; '${script_output}' [rc=0]"
fi
done
fi
return ${rc}
}
LibvirtQemu_Validate_All() {
# Required binaries:
for binary in virsh sed; do
check_binary ${binary}
done
if [ -z "${MAILCMD}" ]; then
ocf_log err "MAILCMD variable not set"
exit ${OCF_ERR_INSTALLED}
fi
check_binary "${MAILCMD}"
if [ -z "${OCF_RESKEY_config}" ]; then
ocf_log err "Missing configuration parameter 'config'."
return ${OCF_ERR_CONFIGURED}
fi
# check if we can read the config file (otherwise we're unable to
# deduce ${DOMAIN_NAME} from it, see below)
if [ ! -r "${OCF_RESKEY_config}" ]; then
if ocf_is_probe; then
ocf_log info "Configuration file '${OCF_RESKEY_config}' not readable during probe."
else
ocf_log err "Configuration file '${OCF_RESKEY_config}' does not exist or is not readable."
return ${OCF_ERR_INSTALLED}
fi
fi
}
if [ $# -ne 1 ]; then
usage
exit ${OCF_ERR_ARGS}
fi
case ${1} in
meta-data)
meta_data
exit ${OCF_SUCCESS}
;;
usage)
usage
exit ${OCF_SUCCESS}
;;
esac
# Everything except usage and meta-data must pass the validate test
LibvirtQemu_Validate_All || exit $?
# During a probe, it is permissible for the config file to not be
# readable (it might be on shared storage not available during the
# probe). In that case, LibvirtQemu_Define can't work and we're
# unable to get the domain name. Thus, we also can't check whether the
# domain is running. The only thing we can do here is to assume that
# it is not running.
if [ ! -r "${OCF_RESKEY_config}" ]; then
ocf_is_probe && exit ${OCF_NOT_RUNNING}
[ "${__OCF_ACTION}" == 'stop' ] && exit ${OCF_SUCCESS}
fi
# Define the domain on startup, and re-define whenever someone deleted
# the state file, or touched the config.
if [ ! -e "${STATEFILE}" ] || [ "${OCF_RESKEY_config}" -nt "${STATEFILE}" ]; then
LibvirtQemu_Define
fi
# By now, we should definitely be able to read from the state file.
# If not, something went wrong.
if [ ! -r "${STATEFILE}" ]; then
ocf_log err "State file '${STATEFILE}' not found or unreadable; cannot determine domain name."
exit ${OCF_ERR_GENERIC}
fi
# Finally, retrieve the domain name from the state file.
DOMAIN_NAME="$(cat "${STATEFILE}" 2>/dev/null)"
if [ -z "${DOMAIN_NAME}" ]; then
ocf_log err "State file '${STATEFILE}' is empty; cannot determine domain name."
exit ${OCF_ERR_GENERIC}
fi
case ${1} in
start)
LibvirtQemu_Start
;;
stop)
LibvirtQemu_Stop
;;
migrate_to)
LibvirtQemu_Migrate_To
;;
migrate_from)
LibvirtQemu_Migrate_From
;;
status)
LibvirtQemu_Status
;;
monitor)
LibvirtQemu_Monitor
;;
validate-all)
;;
*)
usage
exit ${OCF_ERR_UNIMPLEMENTED}
;;
esac
exit $?