<div dir="ltr">Thank you, Andrew!<div>You were right, removing that rule helped me.</div><div><br></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">2014-06-27 10:08 GMT+04:00 Andrew Beekhof <span dir="ltr">&lt;<a href="mailto:andrew@beekhof.net" target="_blank">andrew@beekhof.net</a>&gt;</span>:<br>

<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class=""><br>
On 10 Jun 2014, at 10:44 pm, Виталий Туровец &lt;<a href="mailto:corebug@corebug.net">corebug@corebug.net</a>&gt; wrote:<br>
<br>
> Hello there again!
> Here you are: http://pastebin.com/bUaNQHs1
> It's also identical on both nodes.
> Thank you!
>
>
> 2014-06-10 3:20 GMT+03:00 Andrew Beekhof <andrew@beekhof.net>:
>
> On 9 Jun 2014, at 11:01 pm, Vitaliy Turovets <corebug@corebug.net> wrote:
>
> > Hello there again, people!
> >
> > After upgrading both nodes to these SW versions:
> >
> > pacemaker.x86_64       1.1.10-14.el6_5.3
> > pacemaker-cli.x86_64   1.1.10-14.el6_5.3
> > pacemaker-cluster-libs.x86_64  1.1.10-14.el6_5.3
> > pacemaker-libs.x86_64  1.1.10-14.el6_5.3
> > corosync.x86_64        1.4.1-17.el6_5.1 @updates
> > corosynclib.x86_64     1.4.1-17.el6_5.1 @updates
> >
> > I am still facing the same problem: the slave in the MySQL master/slave set won't start.
> > The master actually works correctly.
> > Output of cibadmin -Q on both nodes is identical.
> >
> > And here's the log of what happens when I try to do "cleanup MySQL_MasterSlave": http://pastebin.com/J90NuyEX
> > By now I have the MySQL slave running in manual mode, but this is definitely not what I'm trying to achieve with Pacemaker.
> > Can anyone help with this?

Um, I see:

 location cli-standby-MySQL_MasterSlave MySQL_MasterSlave \
         rule $id="cli-standby-rule-MySQL_MasterSlave" -inf: #uname eq wb-db1

which tells Pacemaker that the MySQL_MasterSlave resource isn't allowed on wb-db1.
That's why only one instance is being started and promoted to master.
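
Constraints with a "cli-" prefix like this one are normally left behind by the crm shell's migrate/standby commands rather than written by hand. As a sketch (untested here, and assuming the crmsh that ships with your packages), either of the following should clear it:

  # delete the leftover ban directly
  crm configure delete cli-standby-MySQL_MasterSlave

  # or let crmsh remove its own migration constraints
  crm resource unmigrate MySQL_MasterSlave

After that, a "crm resource cleanup MySQL_MasterSlave" should let the second instance start as a slave.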
<div class="HOEnZb"><div class="h5"><br>
<br>
> > Again, my pacemaker configuration:
>
> Can you provide the 'cibadmin -Ql' output instead?
> We need the status section in order to comment.
>
> >
> > node wb-db1 \
> >         attributes standby=off
> > node wb-db2 \
> >         attributes standby=off
> > primitive ClusterIP IPaddr2 \
> >         params ip=10.0.1.68 cidr_netmask=32 nic=bond0.100 \
> >         op monitor interval=30s \
> >         meta target-role=Started
> > primitive MySQL mysql \
> >         params binary="/usr/bin/mysqld_safe" enable_creation=1 replication_user=slave_user replication_passwd=here_goes_the_password datadir="/var/lib/mysql/db" socket="/var/run/mysqld/mysqld.sock" config="/etc/my.cnf" reader_attribute=readerOK evict_outdated_slaves=false max_slave_lag=600 \
> >         op monitor interval=30s \
> >         op monitor interval=35s role=Master OCF_CHECK_LEVEL=1 \
> >         op monitor interval=60s role=Slave timeout=60s OCF_CHECK_LEVEL=1 \
> >         op notify interval=0 timeout=90 \
> >         op start interval=0 timeout=120 \
> >         op stop interval=0 timeout=120
> > primitive MySQL_Reader_VIP IPaddr2 \
> >         params ip=10.0.1.66 cidr_netmask=32 nic=bond0.100 \
> >         meta target-role=Started
> > primitive ping-gateway ocf:pacemaker:ping \
> >         params host_list=10.0.1.1 multiplier=100 timeout=1 \
> >         op monitor interval=10s timeout=20s
> > primitive resMON ocf:pacemaker:ClusterMon \
> >         op start interval=0 timeout=90s \
> >         op stop interval=0 timeout=100s \
> >         op monitor interval=10s timeout=30s \
> >         params extra_options="--mail-prefix MainDB_Cluster_Notification --mail-from cluster-alarm@gmsu.ua --mail-to cluster-alarm@gmsu.ua --mail-host mx.gmsu.ua"
> > ms MySQL_MasterSlave MySQL \
> >         meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true globally-unique=false target-role=Started is-managed=true
> > clone pingclone ping-gateway \
> >         meta target-role=Started
> > location No-MySQL_Reader_VIP MySQL_Reader_VIP \
> >         rule $id="No-MySQL_Reader_VIP-rule" -inf: readerOK eq 0 or not_defined readerOK
> > location cli-prefer-ClusterIP ClusterIP \
> >         rule $id="cli-prefer-rule-ClusterIP" inf: #uname eq wb-db1
> > location cli-standby-MySQL_MasterSlave MySQL_MasterSlave \
> >         rule $id="cli-standby-rule-MySQL_MasterSlave" -inf: #uname eq wb-db1
> > location resourceClusterIPwithping ClusterIP \
> >         rule $id="resourceClusterIPwithping-rule" -inf: not_defined pingd or pingd lte 0
> > colocation MySQL_Reader_VIP_dislike_ClusterIP -200: MySQL_Reader_VIP ClusterIP
> > colocation MysqlMaster-with-ClusterIP inf: MySQL_MasterSlave:Master ClusterIP
> > order MysqlMaster-after-ClusterIP inf: ClusterIP MySQL_MasterSlave:promote
> > property cib-bootstrap-options: \
> >         dc-version=1.1.10-14.el6_5.3-368c726 \
> >         cluster-infrastructure="classic openais (with plugin)" \
> >         expected-quorum-votes=2 \
> >         no-quorum-policy=ignore \
> >         stonith-enabled=false \
> >         last-lrm-refresh=1402318675
> > property mysql_replication: \
> >         MySQL_REPL_INFO="wb-db2|mysql-bin.000126|107"
> > rsc_defaults rsc-options: \
> >         resource-stickiness=200
> >
> > Thank you!
> >
> >
> > 2014-06-05 3:17 GMT+03:00 Andrew Beekhof <andrew@beekhof.net>:
> >
> > On 30 May 2014, at 6:32 pm, Vitaliy Turovets <corebug@corebug.net> wrote:
> >
> > > Hello there, people!
> > > I am new to this list, so please excuse me if I'm posting to the wrong place.
> > >
> > > I've got a pacemaker cluster with this configuration: http://pastebin.com/1SbWWh4n
> > >
> > > Output of "crm status":
> > > ============
> > > Last updated: Fri May 30 11:22:59 2014
> > > Last change: Thu May 29 03:22:38 2014 via crmd on wb-db2
> > > Stack: openais
> > > Current DC: wb-db2 - partition with quorum
> > > Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
> > > 2 Nodes configured, 2 expected votes
> > > 7 Resources configured.
> > > ============
> > >
> > > Online: [ wb-db2 wb-db1 ]
> > >
> > >  ClusterIP      (ocf::heartbeat:IPaddr2):       Started wb-db2
> > >  MySQL_Reader_VIP       (ocf::heartbeat:IPaddr2):       Started wb-db2
> > >  resMON (ocf::pacemaker:ClusterMon):    Started wb-db2
> > >  Master/Slave Set: MySQL_MasterSlave [MySQL]
> > >      Masters: [ wb-db2 ]
> > >      Stopped: [ MySQL:1 ]
> > >  Clone Set: pingclone [ping-gateway]
> > >      Started: [ wb-db1 wb-db2 ]
> > >
> > > There was an unclean shutdown of the cluster, and after that I've been facing a problem where the slave of the MySQL_MasterSlave resource does not come up.
> > > When I try to do a "cleanup MySQL_MasterSlave" I see this in the logs:
> >
> > Most of those errors are cosmetic and fixed in later versions.
> >
> > > Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
> >
> > If you can get to RHEL 6.5 you'll have access to 1.1.10, where these are fixed.
> >
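> > A rough sketch of that upgrade path, assuming the stock RHEL/CentOS 6.5 repositories (adjust package names and repos to your environment):
> >
> >   yum update pacemaker pacemaker-cli pacemaker-libs pacemaker-cluster-libs corosync corosynclib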
> > >
> > > May 29 03:22:22 [4423] wb-db1       crmd:  warning: decode_transition_key:      Bad UUID (crm-resource-4819) in sscanf result (3) for 0:0:crm-resource-4819
> > > May 29 03:22:22 [4423] wb-db1       crmd:  warning: decode_transition_key:      Bad UUID (crm-resource-4819) in sscanf result (3) for 0:0:crm-resource-4819
> > > May 29 03:22:22 [4423] wb-db1       crmd:     info: ais_dispatch_message:       Membership 408: quorum retained
> > > May 29 03:22:22 [4418] wb-db1        cib:     info: set_crm_log_level:  New log level: 3 0
> > > May 29 03:22:38 [4421] wb-db1      attrd:   notice: attrd_ais_dispatch:         Update relayed from wb-db2
> > > May 29 03:22:38 [4421] wb-db1      attrd:   notice: attrd_ais_dispatch:         Update relayed from wb-db2
> > > May 29 03:22:38 [4418] wb-db1        cib:     info: apply_xml_diff:     Digest mis-match: expected 2f5bc3d7f673df3cf37f774211976d69, calculated b8a7adf0e34966242551556aab605286
> > > May 29 03:22:38 [4418] wb-db1        cib:   notice: cib_process_diff:   Diff 0.243.4 -> 0.243.5 not applied to 0.243.4: Failed application of an update diff
> > > May 29 03:22:38 [4418] wb-db1        cib:     info: cib_server_process_diff:    Requesting re-sync from peer
> > > May 29 03:22:38 [4418] wb-db1        cib:   notice: cib_server_process_diff:    Not applying diff 0.243.4 -> 0.243.5 (sync in progress)
> > > May 29 03:22:38 [4418] wb-db1        cib:     info: cib_replace_notify:         Replaced: -1.-1.-1 -> 0.243.5 from wb-db2
> > > May 29 03:22:38 [4421] wb-db1      attrd:   notice: attrd_trigger_update:       Sending flush op to all hosts for: pingd (100)
> > > May 29 03:22:38 [4421] wb-db1      attrd:   notice: attrd_trigger_update:       Sending flush op to all hosts for: probe_complete (true)
> > > May 29 03:22:38 [4418] wb-db1        cib:     info: set_crm_log_level:  New log level: 3 0
> > > May 29 03:22:38 [4418] wb-db1        cib:     info: apply_xml_diff:     Digest mis-match: expected 754ed3b1d999e34d93e0835b310fd98a, calculated c322686deb255936ab54e064c696b6b8
> > > May 29 03:22:38 [4418] wb-db1        cib:   notice: cib_process_diff:   Diff 0.244.5 -> 0.244.6 not applied to 0.244.5: Failed application of an update diff
> > > May 29 03:22:38 [4418] wb-db1        cib:     info: cib_server_process_diff:    Requesting re-sync from peer
> > > May 29 03:22:38 [4423] wb-db1       crmd:     info: delete_resource:    Removing resource MySQL:0 for 4996_crm_resource (internal) on wb-db2
> > > May 29 03:22:38 [4423] wb-db1       crmd:     info: notify_deleted:     Notifying 4996_crm_resource on wb-db2 that MySQL:0 was deleted
> > > May 29 03:22:38 [4418] wb-db1        cib:   notice: cib_server_process_diff:    Not applying diff 0.244.5 -> 0.244.6 (sync in progress)
> > > May 29 03:22:38 [4423] wb-db1       crmd:  warning: decode_transition_key:      Bad UUID (crm-resource-4996) in sscanf result (3) for 0:0:crm-resource-4996
> > > May 29 03:22:38 [4418] wb-db1        cib:   notice: cib_server_process_diff:    Not applying diff 0.244.6 -> 0.244.7 (sync in progress)
> > > May 29 03:22:38 [4418] wb-db1        cib:   notice: cib_server_process_diff:    Not applying diff 0.244.7 -> 0.244.8 (sync in progress)
> > > May 29 03:22:38 [4418] wb-db1        cib:     info: cib_replace_notify:         Replaced: -1.-1.-1 -> 0.244.8 from wb-db2
> > > May 29 03:22:38 [4421] wb-db1      attrd:   notice: attrd_trigger_update:       Sending flush op to all hosts for: pingd (100)
> > > May 29 03:22:38 [4421] wb-db1      attrd:   notice: attrd_trigger_update:       Sending flush op to all hosts for: probe_complete (true)
> > > May 29 03:22:38 [4423] wb-db1       crmd:   notice: do_lrm_invoke:      Not creating resource for a delete event: (null)
> > > May 29 03:22:38 [4423] wb-db1       crmd:     info: notify_deleted:     Notifying 4996_crm_resource on wb-db2 that MySQL:1 was deleted
> > > May 29 03:22:38 [4423] wb-db1       crmd:  warning: decode_transition_key:      Bad UUID (crm-resource-4996) in sscanf result (3) for 0:0:crm-resource-4996
> > > May 29 03:22:38 [4423] wb-db1       crmd:  warning: decode_transition_key:      Bad UUID (crm-resource-4996) in sscanf result (3) for 0:0:crm-resource-4996
> > > May 29 03:22:38 [4418] wb-db1        cib:     info: set_crm_log_level:  New log level: 3 0
> > > May 29 03:22:38 [4423] wb-db1       crmd:     info: ais_dispatch_message:       Membership 408: quorum retained
> > >
> > > Here's the cibadmin -Q output from the node that is alive: http://pastebin.com/aeqfTaCe
> > > And here's the one from the failed node: http://pastebin.com/ME2U5vjK
> > > The question is: how do I clean things up so that the master/slave resource MySQL_MasterSlave starts working properly?
> > >
> > > Thank you!

_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org

--

~~~
WBR,
Vitaliy Turovets
Lead Operations Engineer
Global Message Services
+38(093)265-70-55
VITU-RIPE