[Pacemaker] Problem with pingd.

Jayakrishnan jayakrishnanlll at gmail.com
Tue Feb 23 03:21:53 EST 2010


Sir,
Thank you for your advice, but my resources still can't run anywhere according to
crm_verify -LV.
My slony resources depend on vir-ip, and vir-ip can't start on any node
with this configuration. Please check it:
------------------------
node $id="3952b93e-786c-47d4-8c2f-a882e3d3d105" node2 \
        attributes standby="off"
node $id="ac87f697-5b44-4720-a8af-12a6f2295930" node1 \
        attributes standby="off"
primitive pgsql lsb:postgresql-8.4 \
        meta target-role="Started" resource-stickness="inherited" \
        op monitor interval="15s" timeout="25s" on-fail="standby"
primitive pingd ocf:heartbeat:pingd \
        params name="pingd" hostlist="192.168.10.1 192.168.10.69" multiplier="100" \
        op monitor interval="15s" timeout="5s"
primitive slony-fail lsb:slony_failover \
        meta target-role="Started"
primitive slony-fail2 lsb:slony_failover2 \
        meta target-role="Started"
primitive vir-ip ocf:heartbeat:IPaddr2 \
        params ip="192.168.10.10" nic="eth0" cidr_netmask="24" broadcast="192.168.10.255" \
        op monitor interval="15s" timeout="25s" on-fail="standby" \
        meta target-role="Started"
clone pgclone pgsql \
        meta notify="true" globally-unique="false" interleave="true" target-role="Started"
clone pingclone pingd \
        meta globally-unique="false" clone-max="2" clone-node-max="1"
location vir-ip-with-pingd vir-ip \
        rule $id="vir-ip-with-pingd-rule" -inf: not_defined pingd or pingd number:lte 0
colocation ip-with-slony inf: slony-fail vir-ip
colocation ip-with-slony2 inf: slony-fail2 vir-ip
order ip-b4-slony2 inf: vir-ip slony-fail2
order slony-b4-ip inf: vir-ip slony-fail
property $id="cib-bootstrap-options" \
        dc-version="1.0.5-3840e6b5a305ccb803d29b468556739e75532d56" \
        cluster-infrastructure="Heartbeat" \
        no-quorum-policy="ignore" \
        stonith-enabled="false" \
        last-lrm-refresh="1266912039"
rsc_defaults $id="rsc-options" \
        resource-stickiness="INFINITY"


--------------------------------------------
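As a first check, the transient "pingd" attribute that the location rule
tests can be queried from the status section of the CIB. A sketch using the
1.0-era tools (option spellings may differ; see crm_attribute --help):

        # query the pingd attribute on each node (status section,
        # which is where pingd/attrd writes it)
        crm_attribute -t status -U node1 -n pingd -G
        crm_attribute -t status -U node2 -n pingd -G

If the attribute is undefined or 0 on both nodes, the -inf rule keeps
vir-ip, and through the colocations both slony resources, off every node.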
The log files are also attached.

In crm_mon:

============
Last updated: Tue Feb 23 13:49:41 2010
Stack: Heartbeat
Current DC: node1 (ac87f697-5b44-4720-a8af-12a6f2295930) - partition with quorum
Version: 1.0.5-3840e6b5a305ccb803d29b468556739e75532d56
2 Nodes configured, unknown expected votes
5 Resources configured.
============

Online: [ node2 node1 ]

Clone Set: pgclone
        Started: [ node2 node1 ]
Clone Set: pingclone
        Started: [ node2 node1 ]
---------------------------------------------------------------
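The stopped resources do not appear in the output above. The allocation
scores behind the "cannot run anywhere" warnings can be dumped from the
live CIB; a sketch with ptest, which ships with Pacemaker 1.0:

        # -L reads the live CIB, -s prints the allocation scores
        ptest -L -s | grep vir-ip

A -INFINITY score for vir-ip on both nodes points back at the
vir-ip-with-pingd rule rather than at the colocation or order constraints.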

Sir,
What could be the issue?

With thanks,
Jayakrishnan. L


On Tue, Feb 23, 2010 at 1:32 PM, Andrew Beekhof <andrew at beekhof.net> wrote:

> On Tue, Feb 23, 2010 at 8:51 AM, Jayakrishnan <jayakrishnanlll at gmail.com>
> wrote:
> > Sir,
> > Could you explain that a bit more? I have been reading the same document for
> > 2 days and can't specify the type as integer. It shows:
> >
> > "Parsing error, do you want to edit it again"
>
> [08:18 AM] root at f12 ~ # crm configure help location
> Signon to CIB failed: connection failed
> Init failed, could not perform requested operations
> ERROR: cannot parse xml: no element found: line 1, column 0
>
> `location` defines the preference of nodes for the given
> resource. The location constraints consist of one or more rules
> which specify a score to be awarded if the rule matches.
>
> Usage:
> ...............
>        location <id> <rsc> {node_pref|rules}
>
>        node_pref :: <score>: <node>
>
>        rules ::
>          rule [id_spec] [$role=<role>] <score>: <expression>
>          [rule [id_spec] [$role=<role>] <score>: <expression> ...]
>
>        id_spec :: $id=<id> | $id-ref=<id>
>        score :: <number> | <attribute> | [-]inf
>        expression :: <simple_exp> [bool_op <simple_exp> ...]
>        bool_op :: or | and
>        simple_exp :: <attribute> [type:]<binary_op> <value>
>                      | <unary_op> <attribute>
>                      | date <date_expr>
>        type :: string | version | number
>        binary_op :: lt | gt | lte | gte | eq | ne
>        unary_op :: defined | not_defined
>
>        date_expr :: lt <end>
>                     | gt <start>
>                     | in_range start=<start> end=<end>
>                     | in_range start=<start> <duration>
>                     | date_spec <date_spec>
>        duration|date_spec ::
>                     hours=<value>
>                     | monthdays=<value>
>                     | weekdays=<value>
>                     | yeardays=<value>
>                     | months=<value>
>                     | weeks=<value>
>                     | years=<value>
>                     | weekyears=<value>
>                     | moon=<value>
> ...............
> Examples:
> ...............
>        location conn_1 internal_www 100: node1
>
>        location conn_1 internal_www \
>          rule 50: #uname eq node1 \
>          rule pingd: defined pingd
>
>        location conn_2 dummy_float \
>           rule -inf: not_defined pingd or pingd lte 0
> ...............
>
>
> Though this last example in the help is wrong; it should read:
>
>        location conn_2 dummy_float \
>          rule -inf: not_defined pingd or pingd number:lte 0
>
> _______________________________________________
> Pacemaker mailing list
> Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>



-- 
Regards,

Jayakrishnan. L

Visit: www.jayakrishnan.bravehost.com
-------------- next part --------------
Feb 23 13:42:49 node1 crmd: [13691]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
Feb 23 13:42:49 node1 crmd: [13691]: info: need_abort: Aborting on change to admin_epoch
Feb 23 13:42:49 node1 cib: [13687]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="334" num_updates="4" >
Feb 23 13:42:49 node1 crmd: [13691]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Feb 23 13:42:49 node1 cib: [13687]: info: log_data_element: cib:diff: -   <configuration >
Feb 23 13:42:49 node1 crmd: [13691]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Feb 23 13:42:49 node1 cib: [13687]: info: log_data_element: cib:diff: -     <resources >
Feb 23 13:42:49 node1 crmd: [13691]: info: do_pe_invoke: Query 389: Requesting the current CIB: S_POLICY_ENGINE
Feb 23 13:42:49 node1 cib: [13687]: info: log_data_element: cib:diff: -       <clone id="pingclone" >
Feb 23 13:42:50 node1 cib: [13687]: info: log_data_element: cib:diff: -         <primitive provider="pacemaker" id="pingd" />
Feb 23 13:42:50 node1 cib: [13687]: info: log_data_element: cib:diff: -       </clone>
Feb 23 13:42:50 node1 cib: [13687]: info: log_data_element: cib:diff: -     </resources>
Feb 23 13:42:50 node1 cib: [13687]: info: log_data_element: cib:diff: -   </configuration>
Feb 23 13:42:50 node1 cib: [13687]: info: log_data_element: cib:diff: - </cib>
Feb 23 13:42:50 node1 cib: [13687]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="335" num_updates="1" >
Feb 23 13:42:50 node1 cib: [13687]: info: log_data_element: cib:diff: +   <configuration >
Feb 23 13:42:50 node1 cib: [13687]: info: log_data_element: cib:diff: +     <resources >
Feb 23 13:42:50 node1 cib: [13687]: info: log_data_element: cib:diff: +       <clone id="pingclone" >
Feb 23 13:42:50 node1 cib: [13687]: info: log_data_element: cib:diff: +         <primitive provider="heartbeat" id="pingd" />
Feb 23 13:42:50 node1 cib: [13687]: info: log_data_element: cib:diff: +       </clone>
Feb 23 13:42:50 node1 cib: [13687]: info: log_data_element: cib:diff: +     </resources>
Feb 23 13:42:50 node1 cib: [13687]: info: log_data_element: cib:diff: +   </configuration>
Feb 23 13:42:50 node1 cib: [13687]: info: log_data_element: cib:diff: + </cib>
Feb 23 13:42:50 node1 cib: [13687]: info: cib_process_request: Operation complete: op cib_replace for section resources (origin=local/cibadmin/2, version=0.335.1): ok (rc=0)
Feb 23 13:42:50 node1 crmd: [13691]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1266912770-433, seq=28, quorate=1
Feb 23 13:42:50 node1 pengine: [13959]: notice: unpack_config: On loss of CCM Quorum: Ignore
Feb 23 13:42:50 node1 pengine: [13959]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Feb 23 13:42:50 node1 pengine: [13959]: info: determine_online_status: Node node2 is online
Feb 23 13:42:50 node1 pengine: [13959]: info: determine_online_status: Node node1 is online
Feb 23 13:42:50 node1 pengine: [13959]: info: unpack_rsc_op: slony-fail2_monitor_0 on node1 returned 0 (ok) instead of the expected value: 7 (not running)
Feb 23 13:42:50 node1 pengine: [13959]: notice: unpack_rsc_op: Operation slony-fail2_monitor_0 found resource slony-fail2 active on node1
Feb 23 13:42:50 node1 pengine: [13959]: info: unpack_rsc_op: pingd:1_monitor_0 on node1 returned 0 (ok) instead of the expected value: 7 (not running)
Feb 23 13:42:50 node1 pengine: [13959]: notice: unpack_rsc_op: Operation pingd:1_monitor_0 found resource pingd:1 active on node1
Feb 23 13:42:50 node1 pengine: [13959]: info: unpack_rsc_op: slony-fail_monitor_0 on node1 returned 0 (ok) instead of the expected value: 7 (not running)
Feb 23 13:42:50 node1 pengine: [13959]: notice: unpack_rsc_op: Operation slony-fail_monitor_0 found resource slony-fail active on node1
Feb 23 13:42:50 node1 pengine: [13959]: notice: native_print: vir-ip#011(ocf::heartbeat:IPaddr2):#011Stopped 
Feb 23 13:42:50 node1 pengine: [13959]: notice: native_print: slony-fail#011(lsb:slony_failover):#011Stopped 
Feb 23 13:42:50 node1 pengine: [13959]: notice: clone_print: Clone Set: pgclone
Feb 23 13:42:50 node1 pengine: [13959]: notice: print_list: #011Started: [ node2 node1 ]
Feb 23 13:42:50 node1 pengine: [13959]: notice: native_print: slony-fail2#011(lsb:slony_failover2):#011Stopped 
Feb 23 13:42:50 node1 pengine: [13959]: notice: clone_print: Clone Set: pingclone
Feb 23 13:42:50 node1 pengine: [13959]: notice: print_list: #011Started: [ node2 node1 ]
Feb 23 13:42:50 node1 pengine: [13959]: notice: check_rsc_parameters: Forcing restart of pingd:0 on node2, provider changed: pacemaker -> heartbeat
Feb 23 13:42:50 node1 pengine: [13959]: notice: DeleteRsc: Removing pingd:0 from node2
Feb 23 13:42:50 node1 pengine: [13959]: notice: check_rsc_parameters: Forcing restart of pingd:1 on node1, provider changed: pacemaker -> heartbeat
Feb 23 13:42:50 node1 pengine: [13959]: notice: DeleteRsc: Removing pingd:1 from node1
Feb 23 13:42:50 node1 pengine: [13959]: info: native_merge_weights: vir-ip: Rolling back scores from slony-fail
Feb 23 13:42:50 node1 pengine: [13959]: info: native_merge_weights: vir-ip: Rolling back scores from slony-fail2
Feb 23 13:42:50 node1 pengine: [13959]: WARN: native_color: Resource vir-ip cannot run anywhere
Feb 23 13:42:50 node1 pengine: [13959]: WARN: native_color: Resource slony-fail cannot run anywhere
Feb 23 13:42:50 node1 pengine: [13959]: WARN: native_color: Resource slony-fail2 cannot run anywhere
Feb 23 13:42:50 node1 pengine: [13959]: notice: RecurringOp:  Start recurring monitor (15s) for pingd:0 on node2
Feb 23 13:42:50 node1 pengine: [13959]: notice: RecurringOp:  Start recurring monitor (15s) for pingd:1 on node1
Feb 23 13:42:50 node1 pengine: [13959]: notice: LogActions: Leave resource vir-ip#011(Stopped)
Feb 23 13:42:50 node1 pengine: [13959]: notice: LogActions: Leave resource slony-fail#011(Stopped)
Feb 23 13:42:50 node1 pengine: [13959]: notice: LogActions: Leave resource pgsql:0#011(Started node2)
Feb 23 13:42:50 node1 pengine: [13959]: notice: LogActions: Leave resource pgsql:1#011(Started node1)
Feb 23 13:42:50 node1 pengine: [13959]: notice: LogActions: Leave resource slony-fail2#011(Stopped)
Feb 23 13:42:50 node1 pengine: [13959]: notice: LogActions: Restart resource pingd:0#011(Started node2)
Feb 23 13:42:50 node1 pengine: [13959]: notice: LogActions: Restart resource pingd:1#011(Started node1)
Feb 23 13:42:50 node1 cib: [29805]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-56.raw
Feb 23 13:42:50 node1 crmd: [13691]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Feb 23 13:42:50 node1 crmd: [13691]: info: unpack_graph: Unpacked transition 74: 15 actions in 15 synapses
Feb 23 13:42:50 node1 crmd: [13691]: info: do_te_invoke: Processing graph 74 (ref=pe_calc-dc-1266912770-433) derived from /var/lib/pengine/pe-warn-146.bz2
Feb 23 13:42:50 node1 crmd: [13691]: info: te_pseudo_action: Pseudo action 35 fired and confirmed
Feb 23 13:42:50 node1 crmd: [13691]: info: te_rsc_command: Initiating action 6: stop pingd:0_stop_0 on node2
Feb 23 13:42:50 node1 crmd: [13691]: info: te_rsc_command: Initiating action 9: stop pingd:1_stop_0 on node1 (local)
Feb 23 13:42:50 node1 crmd: [13691]: info: do_lrm_rsc_op: Performing key=9:74:0:7504afd9-1ce0-4005-a4ba-678033d67a33 op=pingd:1_stop_0 )
Feb 23 13:42:50 node1 lrmd: [13688]: info: rsc:pingd:1:99: stop
Feb 23 13:42:50 node1 crmd: [13691]: info: process_lrm_event: LRM operation pingd:1_monitor_15000 (call=81, rc=-2, cib-update=0, confirmed=true) Cancelled unknown exec error
Feb 23 13:42:50 node1 cib: [29805]: info: write_cib_contents: Wrote version 0.335.0 of the CIB to disk (digest: b198ee2658262b9449e44aa34f97bca3)
Feb 23 13:42:50 node1 pingd: [23637]: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
Feb 23 13:42:50 node1 crmd: [13691]: info: process_lrm_event: LRM operation pingd:1_stop_0 (call=99, rc=0, cib-update=390, confirmed=true) complete ok
Feb 23 13:42:50 node1 crmd: [13691]: info: match_graph_event: Action pingd:1_stop_0 (9) confirmed on node1 (rc=0)
Feb 23 13:42:50 node1 crmd: [13691]: info: te_rsc_command: Initiating action 10: delete pingd:1_delete_0 on node1 (local)
Feb 23 13:42:50 node1 crmd: [13691]: info: do_lrm_invoke: Removing resource pingd:1 from the LRM
Feb 23 13:42:50 node1 crmd: [13691]: info: send_direct_ack: ACK'ing resource op pingd:1_delete_0 from 10:74:0:7504afd9-1ce0-4005-a4ba-678033d67a33: lrm_invoke-lrmd-1266912770-437
Feb 23 13:42:50 node1 crmd: [13691]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1266912770-437 from node1
Feb 23 13:42:50 node1 crmd: [13691]: info: match_graph_event: Action pingd:1_delete_0 (10) confirmed on node1 (rc=0)
Feb 23 13:42:50 node1 crmd: [13691]: info: te_crm_command: Executing crm-event (11): lrm_refresh on node1
Feb 23 13:42:50 node1 cib: [13687]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']//lrm_resource[@id='pingd:1'] (origin=local/crmd/391, version=0.335.3): ok (rc=0)
Feb 23 13:42:50 node1 crmd: [13691]: info: abort_transition_graph: te_update_diff:267 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=pingd:1_monitor_0, magic=0:0;7:52:7:7504afd9-1ce0-4005-a4ba-678033d67a33, cib=0.335.3) : Resource op removal
Feb 23 13:42:50 node1 pengine: [13959]: WARN: process_pe_message: Transition 74: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-146.bz2
Feb 23 13:42:50 node1 crmd: [13691]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000
Feb 23 13:42:50 node1 pengine: [13959]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Feb 23 13:42:50 node1 crmd: [13691]: info: update_abort_priority: Abort action done superceeded by restart
Feb 23 13:42:50 node1 cib: [29805]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.93dtL0 (digest: /var/lib/heartbeat/crm/cib.MXPkuM)
Feb 23 13:42:50 node1 crmd: [13691]: info: do_lrm_invoke: Forcing a local LRM refresh
Feb 23 13:42:50 node1 crmd: [13691]: info: match_graph_event: Action pingd:0_stop_0 (6) confirmed on node2 (rc=0)
Feb 23 13:42:50 node1 crmd: [13691]: info: te_pseudo_action: Pseudo action 36 fired and confirmed
Feb 23 13:42:50 node1 crmd: [13691]: info: run_graph: ====================================================
Feb 23 13:42:50 node1 crmd: [13691]: notice: run_graph: Transition 74 (Complete=6, Pending=0, Fired=0, Skipped=8, Incomplete=1, Source=/var/lib/pengine/pe-warn-146.bz2): Stopped
Feb 23 13:42:50 node1 crmd: [13691]: info: te_graph_trigger: Transition 74 is now complete
Feb 23 13:42:50 node1 crmd: [13691]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Feb 23 13:42:50 node1 crmd: [13691]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Feb 23 13:42:50 node1 crmd: [13691]: info: do_pe_invoke: Query 393: Requesting the current CIB: S_POLICY_ENGINE
Feb 23 13:42:50 node1 crmd: [13691]: info: do_pe_invoke_callback: Invoking the PE: ref=pe_calc-dc-1266912770-439, seq=28, quorate=1
Feb 23 13:42:50 node1 pengine: [13959]: notice: unpack_config: On loss of CCM Quorum: Ignore
Feb 23 13:42:50 node1 pengine: [13959]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Feb 23 13:42:50 node1 pengine: [13959]: info: determine_online_status: Node node2 is online
Feb 23 13:42:50 node1 pengine: [13959]: info: determine_online_status: Node node1 is online
Feb 23 13:42:50 node1 pengine: [13959]: info: unpack_rsc_op: slony-fail2_monitor_0 on node1 returned 0 (ok) instead of the expected value: 7 (not running)
Feb 23 13:42:50 node1 pengine: [13959]: notice: unpack_rsc_op: Operation slony-fail2_monitor_0 found resource slony-fail2 active on node1
Feb 23 13:42:50 node1 pengine: [13959]: info: unpack_rsc_op: slony-fail_monitor_0 on node1 returned 0 (ok) instead of the expected value: 7 (not running)
Feb 23 13:42:50 node1 pengine: [13959]: notice: unpack_rsc_op: Operation slony-fail_monitor_0 found resource slony-fail active on node1
Feb 23 13:42:50 node1 pengine: [13959]: notice: native_print: vir-ip#011(ocf::heartbeat:IPaddr2):#011Stopped 
Feb 23 13:42:50 node1 pengine: [13959]: notice: native_print: slony-fail#011(lsb:slony_failover):#011Stopped 
Feb 23 13:42:50 node1 pengine: [13959]: notice: clone_print: Clone Set: pgclone
Feb 23 13:42:50 node1 pengine: [13959]: notice: print_list: #011Started: [ node2 node1 ]
Feb 23 13:42:50 node1 pengine: [13959]: notice: native_print: slony-fail2#011(lsb:slony_failover2):#011Stopped 
Feb 23 13:42:50 node1 pengine: [13959]: notice: clone_print: Clone Set: pingclone
Feb 23 13:42:50 node1 pengine: [13959]: notice: print_list: #011Stopped: [ pingd:0 pingd:1 ]
Feb 23 13:42:50 node1 pengine: [13959]: info: native_merge_weights: vir-ip: Rolling back scores from slony-fail
Feb 23 13:42:50 node1 pengine: [13959]: info: native_merge_weights: vir-ip: Rolling back scores from slony-fail2
Feb 23 13:42:50 node1 pengine: [13959]: WARN: native_color: Resource vir-ip cannot run anywhere
Feb 23 13:42:50 node1 pengine: [13959]: WARN: native_color: Resource slony-fail cannot run anywhere
Feb 23 13:42:50 node1 pengine: [13959]: WARN: native_color: Resource slony-fail2 cannot run anywhere
Feb 23 13:42:50 node1 pengine: [13959]: notice: RecurringOp:  Start recurring monitor (15s) for pingd:0 on node2
Feb 23 13:42:50 node1 pengine: [13959]: notice: RecurringOp:  Start recurring monitor (15s) for pingd:1 on node1
Feb 23 13:42:50 node1 pengine: [13959]: notice: LogActions: Leave resource vir-ip#011(Stopped)
Feb 23 13:42:50 node1 pengine: [13959]: notice: LogActions: Leave resource slony-fail#011(Stopped)
Feb 23 13:42:50 node1 pengine: [13959]: notice: LogActions: Leave resource pgsql:0#011(Started node2)
Feb 23 13:42:50 node1 pengine: [13959]: notice: LogActions: Leave resource pgsql:1#011(Started node1)
Feb 23 13:42:50 node1 pengine: [13959]: notice: LogActions: Leave resource slony-fail2#011(Stopped)
Feb 23 13:42:50 node1 pengine: [13959]: notice: LogActions: Start pingd:0#011(node2)
Feb 23 13:42:50 node1 pengine: [13959]: notice: LogActions: Start pingd:1#011(node1)
Feb 23 13:42:50 node1 lrmd: [13688]: info: rsc:pingd:1:100: monitor
Feb 23 13:42:50 node1 crmd: [13691]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Feb 23 13:42:50 node1 crmd: [13691]: WARN: destroy_action: Cancelling timer for action 10 (src=637)
Feb 23 13:42:50 node1 crmd: [13691]: info: unpack_graph: Unpacked transition 75: 10 actions in 10 synapses
Feb 23 13:42:50 node1 crmd: [13691]: info: do_te_invoke: Processing graph 75 (ref=pe_calc-dc-1266912770-439) derived from /var/lib/pengine/pe-warn-147.bz2
Feb 23 13:42:50 node1 crmd: [13691]: info: te_rsc_command: Initiating action 7: monitor pingd:1_monitor_0 on node1 (local)
Feb 23 13:42:50 node1 crmd: [13691]: info: do_lrm_rsc_op: Performing key=7:75:7:7504afd9-1ce0-4005-a4ba-678033d67a33 op=pingd:1_monitor_0 )
Feb 23 13:42:50 node1 crmd: [13691]: info: process_lrm_event: LRM operation pingd:1_monitor_0 (call=100, rc=7, cib-update=394, confirmed=true) complete not running
Feb 23 13:42:50 node1 crmd: [13691]: info: match_graph_event: Action pingd:1_monitor_0 (7) confirmed on node1 (rc=0)
Feb 23 13:42:50 node1 pengine: [13959]: WARN: process_pe_message: Transition 75: WARNINGs found during PE processing. PEngine Input stored in: /var/lib/pengine/pe-warn-147.bz2
Feb 23 13:42:50 node1 crmd: [13691]: info: te_rsc_command: Initiating action 6: probe_complete probe_complete on node1 (local) - no waiting
Feb 23 13:42:50 node1 pengine: [13959]: info: process_pe_message: Configuration WARNINGs found during PE processing.  Please run "crm_verify -L" to identify issues.
Feb 23 13:42:50 node1 lrmd: [13688]: info: rsc:pingd:1:101: start
Feb 23 13:42:50 node1 crmd: [13691]: info: te_pseudo_action: Pseudo action 4 fired and confirmed
Feb 23 13:42:50 node1 crmd: [13691]: info: te_pseudo_action: Pseudo action 30 fired and confirmed
Feb 23 13:42:50 node1 crmd: [13691]: info: te_pseudo_action: Pseudo action 28 fired and confirmed
Feb 23 13:42:50 node1 crmd: [13691]: info: te_rsc_command: Initiating action 24: start pingd:0_start_0 on node2
Feb 23 13:42:50 node1 crmd: [13691]: info: te_rsc_command: Initiating action 26: start pingd:1_start_0 on node1 (local)
Feb 23 13:42:50 node1 crmd: [13691]: info: do_lrm_rsc_op: Performing key=26:75:0:7504afd9-1ce0-4005-a4ba-678033d67a33 op=pingd:1_start_0 )
Feb 23 13:42:50 node1 pingd: [29832]: info: Invoked: /usr/lib/heartbeat/pingd -D -p /var/run/heartbeat/rsctmp/pingd-pingd:1 -a pingd -d 1s -m 100 
Feb 23 13:42:50 node1 crmd: [13691]: info: process_lrm_event: LRM operation pingd:1_start_0 (call=101, rc=0, cib-update=395, confirmed=true) complete ok
Feb 23 13:42:50 node1 crmd: [13691]: info: match_graph_event: Action pingd:1_start_0 (26) confirmed on node1 (rc=0)
Feb 23 13:42:50 node1 crmd: [13691]: info: te_rsc_command: Initiating action 27: monitor pingd:1_monitor_15000 on node1 (local)
Feb 23 13:42:50 node1 crmd: [13691]: info: do_lrm_rsc_op: Performing key=27:75:0:7504afd9-1ce0-4005-a4ba-678033d67a33 op=pingd:1_monitor_15000 )
Feb 23 13:42:50 node1 crmd: [13691]: info: process_lrm_event: LRM operation pingd:1_monitor_15000 (call=102, rc=0, cib-update=396, confirmed=false) complete ok
Feb 23 13:42:50 node1 crmd: [13691]: info: match_graph_event: Action pingd:1_monitor_15000 (27) confirmed on node1 (rc=0)
Feb 23 13:42:51 node1 pingd: [29833]: info: do_node_walk: Requesting the list of configured nodes
Feb 23 13:42:52 node1 pingd: [29833]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Feb 23 13:42:52 node1 pingd: [29833]: info: main: Starting pingd
Feb 23 13:42:52 node1 crmd: [13691]: info: match_graph_event: Action pingd:0_start_0 (24) confirmed on node2 (rc=0)
Feb 23 13:42:52 node1 crmd: [13691]: info: te_rsc_command: Initiating action 25: monitor pingd:0_monitor_15000 on node2
Feb 23 13:42:52 node1 crmd: [13691]: info: te_pseudo_action: Pseudo action 29 fired and confirmed
Feb 23 13:42:52 node1 heartbeat: [13678]: WARN: 1 lost packet(s) for [node2] [212:214]
Feb 23 13:42:52 node1 heartbeat: [13678]: info: No pkts missing from node2!
Feb 23 13:42:54 node1 crmd: [13691]: info: match_graph_event: Action pingd:0_monitor_15000 (25) confirmed on node2 (rc=0)
Feb 23 13:42:54 node1 crmd: [13691]: info: run_graph: ====================================================
Feb 23 13:42:54 node1 crmd: [13691]: notice: run_graph: Transition 75 (Complete=10, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-warn-147.bz2): Complete
Feb 23 13:42:54 node1 crmd: [13691]: info: te_graph_trigger: Transition 75 is now complete
Feb 23 13:42:54 node1 crmd: [13691]: info: notify_crmd: Transition 75 status: done - <null>
Feb 23 13:42:54 node1 crmd: [13691]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Feb 23 13:42:54 node1 crmd: [13691]: info: do_state_transition: Starting PEngine Recheck Timer
Feb 23 13:44:17 node1 cib: [13687]: info: cib_stats: Processed 120 operations (8833.00us average, 0% utilization) in the last 10min
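
One detail stands out in the log above: pingd is invoked as
"/usr/lib/heartbeat/pingd -D -p ... -a pingd -d 1s -m 100", with no ping
targets on the command line. That is consistent with the hostlist parameter
being ignored; the pingd resource agent names this parameter host_list in
its metadata. A sketch of the primitive with that spelling, everything else
unchanged (verify the parameter name against the RA metadata of the
installed version):

primitive pingd ocf:heartbeat:pingd \
        params name="pingd" host_list="192.168.10.1 192.168.10.69" multiplier="100" \
        op monitor interval="15s" timeout="5s"

With hosts actually being pinged, pingd can publish a non-zero pingd
attribute and the -inf rule stops matching on connected nodes.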

