[Pacemaker] pacemaker + drbd + mysql = confusion
Andrew Beekhof
andrew at beekhof.net
Tue Apr 27 18:54:38 UTC 2010
On Tue, Apr 27, 2010 at 6:21 PM, Oliver Hoffmann <oh at dom.de> wrote:
> Hi all!
>
>
> I had a working two-node-drbd-cluster with the following config. (Ubuntu
> 10.04 Server amd64, upgraded today)
>
> node $id="05570095-f264-41ae-a609-768fd4a3b7e8" store2
> node $id="e790fa15-1f91-4442-94e9-bf411519c4f8" store1
> primitive drbd0 ocf:linbit:drbd \
> params drbd_resource="raid" \
> op monitor interval="15s" \
> op start interval="0" timeout="240s" \
> op stop interval="0" timeout="100s"
> primitive fs_raid ocf:heartbeat:Filesystem \
> params device="/dev/drbd0" directory="/mnt/raid" fstype="ext4" \
> op start interval="0" timeout="60s" \
> op stop interval="0" timeout="60s"
> ms ms_drbd0 drbd0 \
> meta master-max="1" master-node-max="1" clone-max="2" \
> clone-node-max="1" notify="true" target-role="Started"
> location ms_drbd0-master-on-store1 ms_drbd0 \
> rule $id="ms_drbd0-master-on-store1-rule" $role="master" 100: #uname eq store1
> colocation fs_raid-on-ms-drbd0 inf: fs_raid ms_drbd0:Master
> order ms_drbd0-before-fs_raid inf: ms_drbd0:promote fs_raid:start
> property $id="cib-bootstrap-options" \
> dc-version="1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd" \
> cluster-infrastructure="Heartbeat" \
> no-quorum-policy="ignore" \
> stonith-enabled="false"
>
> Then I wanted to add mysql and later postgresql. I found several sample
> configs but the problem is that the drbd resource is always named
> "mysql" which is very confusing.
>
> Basically I would add something like this:
>
> primitive mysql ocf:heartbeat:mysql
> order mysql_after_drbd inf: fs_raid:promote mysql:start
>
> right?
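Almost -- note that fs_raid is a plain Filesystem resource, so it has no
promote action; ordering against fs_raid:start is what you want. A sketch of
the mysql pieces (the resource names and the datadir parameter here are
illustrative, adjust them to your setup):

  primitive mysqld ocf:heartbeat:mysql \
      params datadir="/mnt/raid/mysql" \
      op monitor interval="30s"
  colocation mysqld-on-fs_raid inf: mysqld fs_raid
  order fs_raid-before-mysqld inf: fs_raid:start mysqld:start

The colocation keeps mysqld on whichever node has the filesystem mounted, and
the order makes sure the mount is up before mysqld starts.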
>
> After playing around a while I messed up the whole thing. When I
> paste my old config without any mysql settings I get:
>
> WARNING: resource drbd_mysql is running, can't delete it
> WARNING: resource mysqld is running, can't delete it
> WARNING: resource ms_drbd_mysql is running, can't delete it
>
> How can I start over again?
cibadmin --erase will do it; the crm shell equivalent is "crm configure erase".
Then reboot both nodes to make sure everything is stopped before
adding resources back.
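For example (assuming the cluster is still running on at least one node):

  cibadmin --erase --force    # wipes all resources, constraints and properties
  # or, from the crm shell:
  crm configure erase

Either way, check with "crm_mon -1" afterwards that nothing is left running.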
>
> I had no success with postgresql either. The configuration went ok but
> then I got:
>
> pgsql_start_0 (node=store1, call=6, rc=5, status=complete): not
> installed
>
> What is missing?
Impossible to say without logs, or really any details on how pgsql was defined.
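For what it's worth, rc=5 is OCF_ERR_INSTALLED, i.e. the resource agent
decided that the software it manages is missing on that node. On store1 you
could check, for example:

  # is the agent itself present?
  ls /usr/lib/ocf/resource.d/heartbeat/pgsql
  # are the postgres binaries where the agent (or your params) expect them?
  which postgres pg_ctl

The logs on store1 (daemon.log, given your logfacility setting) should name
the exact check that failed.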
>
> Thank you for hints and your patience!
>
> Regards,
>
> Oliver
>
>
> ###########################
>
> My other files:
>
> /etc/ha.d/ha.cf
> # Logging
> debug 1
> use_logd false
> logfacility daemon
>
> # Misc Options
> traditional_compression off
> compression bz2
> coredumps true
> ################ Start Pacemaker #######################
> crm yes
> ################ Start Pacemaker #######################
>
> # Communications
> udpport 691
> bcast eth1
> autojoin any
>
> # Thresholds (in seconds)
> keepalive 1
> warntime 6
> deadtime 10
> initdead 15
>
>
> /etc/openais/openais.conf
> totem {
> version: 2
> token: 3000
> token_retransmits_before_loss_const: 10
> join: 60
> consensus: 1500
> vsftype: none
> max_messages: 20
> clear_node_high_bit: yes
> secauth: on
> threads: 0
> rrp_mode: passive
> interface {
> ringnumber: 0
> bindnetaddr: 192.168.1.0
> mcastaddr: 239.94.1.1
> mcastport: 5405
> }
> }
> logging {
> to_stderr: yes
> debug: on
> timestamp: on
> to_file: no
> to_syslog: yes
> syslog_facility: daemon
> }
> amf {
> mode: disabled
> }
> service {
> ver: 0
> name: pacemaker
> use_mgmtd: yes
> }
> aisexec {
> user: root
> group: root
> }
>
>
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>