[Pacemaker] prevent the resource's start if it has "stop NG" history on the other node
Junko IKEDA
tsukishima.ha at gmail.com
Fri Mar 2 06:07:38 UTC 2012
Hi,
OK, we have to set up STONITH to handle this.
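A minimal sketch of what enabling fencing could look like in crm shell
syntax (external/ssh is a test-only plugin, and the hostlist and resource
names below are placeholders for illustration, not our actual devices):

# test-only fencing via external/ssh; a real cluster needs a real STONITH device
property stonith-enabled="true"
primitive st-ssh stonith:external/ssh \
        params hostlist="bl460g6a bl460g6b" \
        op monitor interval="60s" timeout="60s"
clone fencing st-ssh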
By the way, I also tried running the resources as a group and ran the same test.
crm configuration:
property \
no-quorum-policy="ignore" \
stonith-enabled="false" \
crmd-transition-delay="2s" \
cluster-recheck-interval="60s"
rsc_defaults \
resource-stickiness="INFINITY" \
migration-threshold="1"
primitive dummy01 ocf:heartbeat:Dummy \
op start timeout="60s" interval="0s" on-fail="restart" \
op monitor timeout="60s" interval="7s" on-fail="restart" \
op stop timeout="60s" interval="0s" on-fail="block"
primitive dummy02 ocf:heartbeat:Dummy-stop-NG \
op start timeout="60s" interval="0s" on-fail="restart" \
op monitor timeout="60s" interval="7s" on-fail="restart" \
op stop timeout="60s" interval="0s" on-fail="block"
group dummy-g dummy01 dummy02
In this case, dummy02 fails its stop ("stop NG").
dummy02 goes into unmanaged status, and after that the Pacemaker shutdown
freezes; it seems that Pacemaker is waiting for some cleanup operation on
the unmanaged resource.
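If that cleanup is indeed what it is waiting for, something like the
following on the stuck node might release the shutdown; a minimal sketch,
assuming the admin has first confirmed by hand that dummy02 is really
stopped there:

# forget dummy02's failed stop so Pacemaker no longer waits for it
crm resource cleanup dummy02

# or, with the low-level tool:
crm_resource --cleanup --resource dummy02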
If dummy01 fails its stop instead, the Pacemaker shutdown works fine.
see attached hb_report.
Thanks,
Junko
2012/3/1 Andrew Beekhof <andrew at beekhof.net>:
> On Wed, Feb 29, 2012 at 6:32 PM, Junko IKEDA <tsukishima.ha at gmail.com> wrote:
>> Hi,
>>
>> I'm running the following simple configuration with Pacemaker 1.1.6,
>> and trying the test case "resource stop NG, then shut down Pacemaker".
>>
>> property \
>> no-quorum-policy="ignore" \
>> stonith-enabled="false" \
>> crmd-transition-delay="2s"
>>
>> rsc_defaults \
>> resource-stickiness="INFINITY" \
>> migration-threshold="1"
>>
>> primitive dummy01 ocf:heartbeat:Dummy-stop-NG \
>> op start timeout="60s" interval="0s" on-fail="restart" \
>> op monitor timeout="60s" interval="7s" on-fail="restart" \
>> op stop timeout="60s" interval="0s" on-fail="block"
>>
>>
>> "Dummy-stop-NG" RA just sends "stop NG" to Pacemaker.
>>
>> # diff -urNp Dummy Dummy-stop-NG
>> --- Dummy 2011-06-30 17:43:37.000000000 +0900
>> +++ Dummy-stop-NG 2012-02-28 19:11:12.850207767 +0900
>> @@ -108,6 +108,8 @@ dummy_start() {
>> }
>>
>> dummy_stop() {
>> + exit $OCF_ERR_GENERIC
>> +
>> dummy_monitor
>> if [ $? = $OCF_SUCCESS ]; then
>> rm ${OCF_RESKEY_state}
>>
>>
>>
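(With the diff above applied, the patched dummy_stop() ends up roughly like
this; everything after the injected exit is the stock Dummy agent's logic
and is never reached:)

dummy_stop() {
    # injected fault: always report a stop failure to the cluster
    exit $OCF_ERR_GENERIC

    # original Dummy logic, now unreachable
    dummy_monitor
    if [ $? = $OCF_SUCCESS ]; then
        rm ${OCF_RESKEY_state}
    fi
    return $OCF_SUCCESS
}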
>> Before the test, the resource is running on "bl460g6a".
>>
>> # crm_simulate -S -x pe-input-1.bz2
>>
>> Current cluster status:
>> Online: [ bl460g6a bl460g6b ]
>>
>> dummy01 (ocf::heartbeat:Dummy-stop-NG): Stopped
>>
>> Transition Summary:
>> crm_simulate[14195]: 2012/02/29_15:46:57 notice: LogActions: Start dummy01 (bl460g6a)
>>
>> Executing cluster transition:
>> * Executing action 6: dummy01_monitor_0 on bl460g6b
>> * Executing action 4: dummy01_monitor_0 on bl460g6a
>> * Executing action 7: dummy01_start_0 on bl460g6a
>> * Executing action 8: dummy01_monitor_7000 on bl460g6a
>>
>> Revised cluster status:
>> Online: [ bl460g6a bl460g6b ]
>>
>> dummy01 (ocf::heartbeat:Dummy-stop-NG): Started bl460g6a
>>
>>
>>
>> Stop Pacemaker on "bl460g6a".
>> # service heartbeat stop
>>
>> Pacemaker first tries to stop the resource and move it to "bl460g6b":
>> # crm_simulate -S -x pe-input-2.bz2
>>
>> Current cluster status:
>> Online: [ bl460g6a bl460g6b ]
>>
>> dummy01 (ocf::heartbeat:Dummy-stop-NG): Started bl460g6a
>>
>> Transition Summary:
>> crm_simulate[12195]: 2012/02/29_15:35:02 notice: LogActions: Move dummy01 (Started bl460g6a -> bl460g6b)
>>
>> Executing cluster transition:
>> * Executing action 6: dummy01_stop_0 on bl460g6a
>> * Executing action 7: dummy01_start_0 on bl460g6b
>> * Executing action 8: dummy01_monitor_7000 on bl460g6b
>>
>> Revised cluster status:
>> Online: [ bl460g6a bl460g6b ]
>>
>> dummy01 (ocf::heartbeat:Dummy-stop-NG): Started bl460g6b
>>
>>
>>
>> but this action will fail, which means the resource goes into an unmanaged state:
>> # crm_simulate -S -x pe-input-3.bz2
>>
>> Current cluster status:
>> Online: [ bl460g6a bl460g6b ]
>>
>> dummy01 (ocf::heartbeat:Dummy-stop-NG): Started bl460g6a (unmanaged) FAILED
>>
>> Transition Summary:
>>
>> Executing cluster transition:
>>
>> Revised cluster status:
>> Online: [ bl460g6a bl460g6b ]
>>
>> dummy01 (ocf::heartbeat:Dummy-stop-NG): Started bl460g6a (unmanaged) FAILED
>>
>>
>>
>> The Pacemaker shutdown on "bl460g6a" then completes successfully;
>> it seems that the following patch works well.
>> https://github.com/ClusterLabs/pacemaker/commit/07976fe5eb04c432f1d1c9aebb1b1587ba7f0bcf#pengine/graph.c
>>
>> At this point, the resource on "bl460g6a" (where Pacemaker has already
>> shut down) might still be running, because it failed to stop.
>
> This is because we ignore the status section of any offline nodes when
> stonith-enabled=false.
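The "status section" here is the <status> part of the CIB, which records
each node's resource operation history. It can be inspected with, for
example:

# query only the <status> section of the CIB
cibadmin -Q -o status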
>
>> In fact, the resource didn't start on "bl460g6b" after its stop failure
>> and "bl460g6a"'s shutdown, which is the expected behavior,
>> but I could still start it on "bl460g6b" with the crm command.
>> This leaves the potential for an unexpected active/active situation.
>> Is it possible to prevent the resource from starting in this case?
>
> Only by disabling the logic in
> https://github.com/ClusterLabs/pacemaker/commit/07976fe5eb04c432f1d1c9aebb1b1587ba7f0bcf#pengine/graph.c
> when stonith is disabled.
>
>> For example:
>> (1) Dummy runs on node-a
>> (2) Pacemaker is shut down on node-a, and Dummy fails to stop ("stop NG")
>> (3) Dummy cannot run on the other nodes
>> (4) * the admin checks Dummy manually on node-a and then cleans up its
>> unmanaged/failed state
>> (5) * Dummy is started on the other nodes
>> This would be the safe way (a command-line sketch of (4) and (5) follows
>> below).
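A rough sketch of what (4) and (5) could look like on the command line,
assuming the admin has already verified on node-a that the resource
(dummy01 in the configuration above) really is stopped:

# (4) clear dummy01's failure history so the cluster forgets the stop NG
crm resource cleanup dummy01

# (5) let the cluster start it on another node
crm resource start dummy01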
>>
>> See attached hb_report.
>>
>> Thanks,
>> Junko IKEDA
>>
>> NTT DATA INTELLILINK CORPORATION
>>
>
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
-------------- next part --------------
# crm_simulate -VV -S -x pe-input-3.bz2
crm_simulate[1677]: 2012/03/02_14:55:38 notice: unpack_config: On loss of CCM Quorum: Ignore
crm_simulate[1677]: 2012/03/02_14:55:38 WARN: unpack_rsc_op: Processing failed op dummy02_last_failure_0 on bl460g6a: unknown error (1)
Current cluster status:
Online: [ bl460g6a bl460g6b ]
Resource Group: dummy-g
dummy01 (ocf::heartbeat:Dummy): Started bl460g6a
dummy02 (ocf::heartbeat:Dummy-stop-NG): Started bl460g6a (unmanaged) FAILED
crm_simulate[1677]: 2012/03/02_14:55:38 WARN: common_apply_stickiness: Forcing dummy02 away from bl460g6a after 1000000 failures (max=1)
crm_simulate[1677]: 2012/03/02_14:55:38 notice: stage6: Scheduling Node bl460g6a for shutdown
crm_simulate[1677]: 2012/03/02_14:55:38 notice: LogActions: Move dummy01 (Started bl460g6a -> bl460g6b)
Executing cluster transition:
* Pseudo action: stop
crm_simulate[1677]: 2012/03/02_14:55:38 WARN: run_graph: ==== Transition 0 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=7, Source=crm_simulate): Terminated
Transition failed: terminated
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_graph: Graph 0 (8 actions in 8 synapses): batch-limit=30 jobs, network-delay=60000ms
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_graph: Synapse 0 is pending (priority: 0)
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_elem: [Action 15]: Pending (id: dummy-g_stopped_0, type: pseduo, priority: 0)
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_elem: * [Input 6]: Pending (id: dummy01_stop_0, loc: bl460g6a, priority: 0)
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_elem: * [Input 9]: Pending (id: dummy02_stop_0, loc: bl460g6a, priority: 0)
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_elem: * [Input 14]: Completed (id: dummy-g_stop_0, type: pseduo, priority: 0)
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_graph: Synapse 1 was confirmed (priority: 0)
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_graph: Synapse 2 is pending (priority: 0)
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_elem: [Action 12]: Pending (id: dummy-g_start_0, type: pseduo, priority: 0)
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_elem: * [Input 15]: Pending (id: dummy-g_stopped_0, type: pseduo, priority: 0)
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_graph: Synapse 3 is pending (priority: 0)
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_elem: [Action 8]: Pending (id: dummy01_monitor_7000, loc: bl460g6b, priority: 0)
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_elem: * [Input 7]: Pending (id: dummy01_start_0, loc: bl460g6b, priority: 0)
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_graph: Synapse 4 is pending (priority: 0)
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_elem: [Action 7]: Pending (id: dummy01_start_0, loc: bl460g6b, priority: 0)
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_elem: * [Input 6]: Pending (id: dummy01_stop_0, loc: bl460g6a, priority: 0)
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_elem: * [Input 12]: Pending (id: dummy-g_start_0, type: pseduo, priority: 0)
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_graph: Synapse 5 is pending (priority: 0)
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_elem: [Action 6]: Pending (id: dummy01_stop_0, loc: bl460g6a, priority: 0)
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_elem: * [Input 9]: Pending (id: dummy02_stop_0, loc: bl460g6a, priority: 0)
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_elem: * [Input 14]: Completed (id: dummy-g_stop_0, type: pseduo, priority: 0)
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_graph: Synapse 6 is pending (priority: 0)
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_elem: [Action 18]: Pending (id: do_shutdown, loc: bl460g6a, type: crm, priority: 0)
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_elem: * [Input 6]: Pending (id: dummy01_stop_0, loc: bl460g6a, priority: 0)
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_graph: Synapse 7 is pending (priority: 0)
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_elem: [Action 2]: Pending (id: all_stopped, type: pseduo, priority: 0)
crm_simulate[1677]: 2012/03/02_14:55:38 ERROR: print_elem: * [Input 6]: Pending (id: dummy01_stop_0, loc: bl460g6a, priority: 0)
An invalid transition was produced
Revised cluster status:
crm_simulate[1677]: 2012/03/02_14:55:38 notice: unpack_config: On loss of CCM Quorum: Ignore
crm_simulate[1677]: 2012/03/02_14:55:38 WARN: unpack_rsc_op: Processing failed op dummy02_last_failure_0 on bl460g6a: unknown error (1)
Online: [ bl460g6a bl460g6b ]
Resource Group: dummy-g
dummy01 (ocf::heartbeat:Dummy): Started bl460g6a
dummy02 (ocf::heartbeat:Dummy-stop-NG): Started bl460g6a (unmanaged) FAILED
-------------- next part --------------
A non-text attachment was scrubbed...
Name: hb_report.tar.bz2
Type: application/x-bzip2
Size: 69839 bytes
Desc: not available
URL: <https://lists.clusterlabs.org/pipermail/pacemaker/attachments/20120302/ef836a8f/attachment-0004.bz2>