[Pacemaker] Failed to connect to cluster

龙龙 longvslong at gmail.com
Mon Sep 24 08:35:20 EDT 2012


Here are the "Attempting connection to the
cluster............................................" node's logs.
/var/log/corosync/corosync.log
Sep 24 20:29:57 corosync [MAIN  ] Corosync Cluster Engine ('1.4.2'):
started and ready to provide service.
Sep 24 20:29:57 corosync [MAIN  ] Corosync built-in features: nss
Sep 24 20:29:57 corosync [MAIN  ] Successfully read main configuration file
'/etc/corosync/corosync.conf'.
Sep 24 20:29:57 corosync [TOTEM ] Token Timeout (5000 ms) retransmit
timeout (247 ms)
Sep 24 20:29:57 corosync [TOTEM ] token hold (187 ms) retransmits before
loss (20 retrans)
Sep 24 20:29:57 corosync [TOTEM ] join (1000 ms) send_join (0 ms) consensus
(7500 ms) merge (200 ms)
Sep 24 20:29:57 corosync [TOTEM ] downcheck (1000 ms) fail to recv const
(2500 msgs)
Sep 24 20:29:57 corosync [TOTEM ] seqno unchanged const (30 rotations)
Maximum network MTU 1402
Sep 24 20:29:57 corosync [TOTEM ] window size per rotation (50 messages)
maximum messages per rotation (20 messages)
Sep 24 20:29:57 corosync [TOTEM ] missed count const (5 messages)
Sep 24 20:29:57 corosync [TOTEM ] RRP token problem counter (2000 ms)
Sep 24 20:29:57 corosync [TOTEM ] RRP threshold (10 problem count)
Sep 24 20:29:57 corosync [TOTEM ] RRP multicast threshold (100 problem
count)
Sep 24 20:29:57 corosync [TOTEM ] RRP automatic recovery check timeout
(1000 ms)
Sep 24 20:29:57 corosync [TOTEM ] RRP mode set to none.
Sep 24 20:29:57 corosync [TOTEM ] heartbeat_failures_allowed (0)
Sep 24 20:29:57 corosync [TOTEM ] max_network_delay (50 ms)
Sep 24 20:29:57 corosync [TOTEM ] HeartBeat is Disabled. To enable set
heartbeat_failures_allowed > 0
Sep 24 20:29:57 corosync [TOTEM ] Initializing transport (UDP/IP Multicast).
Sep 24 20:29:57 corosync [TOTEM ] Initializing transmit/receive security:
libtomcrypt SOBER128/SHA1HMAC (mode 0).
Sep 24 20:29:57 corosync [IPC   ] you are using ipc api v2
Set r/w permissions for uid=0, gid=0 on /var/log/corosync/corosync.log
Sep 24 20:29:57 corosync [TOTEM ] Receive multicast socket recv buffer size
(320000 bytes).
Sep 24 20:29:57 corosync [TOTEM ] Transmit multicast socket send buffer
size (320000 bytes).
Sep 24 20:29:57 corosync [TOTEM ] The network interface [192.168.1.14] is
now up.
Sep 24 20:29:57 corosync [TOTEM ] Created or loaded sequence id
108ac.192.168.1.14 for this ring.
Sep 24 20:29:57 corosync [pcmk  ] debug: pcmk_user_lookup: Cluster user
root has uid=0 gid=0
Sep 24 20:29:57 corosync [pcmk  ] info: process_ais_conf: Reading configure
Sep 24 20:29:57 corosync [pcmk  ] info: config_find_init: Local handle:
4730966301143465987 for logging
Sep 24 20:29:57 corosync [pcmk  ] info: config_find_next: Processing
additional logging options...
Sep 24 20:29:57 corosync [pcmk  ] info: get_config_opt: Found 'on' for
option: debug
Sep 24 20:29:57 corosync [pcmk  ] info: get_config_opt: Found 'yes' for
option: to_logfile
Sep 24 20:29:57 corosync [pcmk  ] info: get_config_opt: Found
'/var/log/corosync/corosync.log' for option: logfile
Sep 24 20:29:58 corosync [pcmk  ] info: get_config_opt: Found 'yes' for
option: to_syslog
Sep 24 20:29:58 corosync [pcmk  ] info: get_config_opt: Found 'daemon' for
option: syslog_facility
Sep 24 20:29:58 corosync [pcmk  ] info: config_find_init: Local handle:
7739444317642555396 for quorum
Sep 24 20:29:58 corosync [pcmk  ] info: config_find_next: No additional
configuration supplied for: quorum
Sep 24 20:29:58 corosync [pcmk  ] info: get_config_opt: No default for
option: provider
Sep 24 20:29:58 corosync [pcmk  ] info: config_find_init: Local handle:
5650605097994944517 for service
Sep 24 20:29:58 corosync [pcmk  ] info: config_find_next: Processing
additional service options...
Sep 24 20:29:58 corosync [pcmk  ] info: get_config_opt: Found '1' for
option: ver
Sep 24 20:29:58 corosync [pcmk  ] info: get_config_opt: Defaulting to
'pcmk' for option: clustername
Sep 24 20:29:58 corosync [pcmk  ] info: get_config_opt: Defaulting to 'no'
for option: use_logd
Sep 24 20:29:58 corosync [pcmk  ] info: get_config_opt: Defaulting to 'no'
for option: use_mgmtd
Sep 24 20:29:58 corosync [pcmk  ] info: pcmk_startup: CRM: Initialized
Sep 24 20:29:58 corosync [pcmk  ] Logging: Initialized pcmk_startup
Sep 24 20:29:58 corosync [pcmk  ] info: pcmk_startup: Maximum core file
size is: 18446744073709551615
Sep 24 20:29:58 corosync [pcmk  ] debug: pcmk_user_lookup: Cluster user
hacluster has uid=109 gid=116
Sep 24 20:29:58 corosync [pcmk  ] info: pcmk_startup: Service: 9
Sep 24 20:29:58 corosync [pcmk  ] info: pcmk_startup: Local hostname: node4
Sep 24 20:29:58 corosync [pcmk  ] info: pcmk_update_nodeid: Local node id:
234989760
Sep 24 20:29:58 corosync [pcmk  ] info: update_member: Creating entry for
node 234989760 born on 0
Sep 24 20:29:58 corosync [pcmk  ] info: update_member: 0x1adb700 Node
234989760 now known as node4 (was: (null))
Sep 24 20:29:58 corosync [pcmk  ] info: update_member: Node node4 now has 1
quorum votes (was 0)
Sep 24 20:29:58 corosync [pcmk  ] info: update_member: Node 234989760/node4
is now: member
Sep 24 20:29:58 corosync [SERV  ] Service engine loaded: Pacemaker Cluster
Manager 1.1.6
Sep 24 20:29:58 corosync [SERV  ] Service engine loaded: corosync extended
virtual synchrony service
Sep 24 20:29:58 corosync [SERV  ] Service engine loaded: corosync
configuration service
Sep 24 20:29:58 corosync [SERV  ] Service engine loaded: corosync cluster
closed process group service v1.01
Sep 24 20:29:58 corosync [SERV  ] Service engine loaded: corosync cluster
config database access v1.01
Sep 24 20:29:58 corosync [SERV  ] Service engine loaded: corosync profile
loading service
Sep 24 20:29:58 corosync [SERV  ] Service engine loaded: corosync cluster
quorum service v0.1
Sep 24 20:29:58 corosync [MAIN  ] Compatibility mode set to whitetank.
 Using V1 and V2 of the synchronization engine.
Sep 24 20:29:58 corosync [TOTEM ] entering GATHER state from 15.
Sep 24 20:29:58 corosync [TOTEM ] Creating commit token because I am the
rep.
Sep 24 20:29:58 corosync [TOTEM ] Saving state aru 0 high seq received 0
Sep 24 20:29:58 corosync [TOTEM ] Storing new sequence id for ring 108b0
Sep 24 20:29:58 corosync [TOTEM ] entering COMMIT state.
Sep 24 20:29:58 corosync [TOTEM ] got commit token
Sep 24 20:29:58 corosync [TOTEM ] entering RECOVERY state.
Sep 24 20:29:58 corosync [TOTEM ] position [0] member 192.168.1.14:
Sep 24 20:29:58 corosync [TOTEM ] previous ring seq 108ac rep 192.168.1.14
Sep 24 20:29:58 corosync [TOTEM ] aru 0 high delivered 0 received flag 1
Sep 24 20:29:58 corosync [TOTEM ] Did not need to originate any messages in
recovery.
Sep 24 20:29:58 corosync [TOTEM ] got commit token
Sep 24 20:29:58 corosync [TOTEM ] Sending initial ORF token
Sep 24 20:29:58 corosync [TOTEM ] token retrans flag is 0 my set retrans
flag0 retrans queue empty 1 count 0, aru 0
Sep 24 20:29:58 corosync [TOTEM ] install seq 0 aru 0 high seq received 0
Sep 24 20:29:58 corosync [TOTEM ] token retrans flag is 0 my set retrans
flag0 retrans queue empty 1 count 1, aru 0
Sep 24 20:29:58 corosync [TOTEM ] install seq 0 aru 0 high seq received 0
Sep 24 20:29:58 corosync [TOTEM ] token retrans flag is 0 my set retrans
flag0 retrans queue empty 1 count 2, aru 0
Sep 24 20:29:58 corosync [TOTEM ] install seq 0 aru 0 high seq received 0
Sep 24 20:29:58 corosync [TOTEM ] token retrans flag is 0 my set retrans
flag0 retrans queue empty 1 count 3, aru 0
Sep 24 20:29:58 corosync [TOTEM ] install seq 0 aru 0 high seq received 0
Sep 24 20:29:58 corosync [TOTEM ] retrans flag count 4 token aru 0 install
seq 0 aru 0 0
Sep 24 20:29:58 corosync [TOTEM ] Resetting old ring state
Sep 24 20:29:58 corosync [TOTEM ] recovery to regular 1-0
Sep 24 20:29:58 corosync [TOTEM ] Delivering to app 1 to 0
Sep 24 20:29:58 corosync [pcmk  ] notice: pcmk_peer_update: Transitional
membership event on ring 67760: memb=0, new=0, lost=0
Sep 24 20:29:58 corosync [pcmk  ] notice: pcmk_peer_update: Stable
membership event on ring 67760: memb=1, new=1, lost=0
Sep 24 20:29:58 corosync [pcmk  ] info: pcmk_peer_update: NEW:  node4
234989760
Sep 24 20:29:58 corosync [pcmk  ] debug: pcmk_peer_update: Node 234989760
has address r(0) ip(192.168.1.14)
Sep 24 20:29:58 corosync [pcmk  ] info: pcmk_peer_update: MEMB: node4
234989760
Sep 24 20:29:58 corosync [pcmk  ] debug: send_cluster_id: Leaving born-on
unset: 67760
Sep 24 20:29:58 corosync [pcmk  ] debug: send_cluster_id: Local update:
id=234989760, born=0, seq=67760
Sep 24 20:29:58 corosync [SYNC  ] This node is within the primary component
and will provide service.
Sep 24 20:29:58 corosync [TOTEM ] entering OPERATIONAL state.
Sep 24 20:29:58 corosync [TOTEM ] A processor joined or left the membership
and a new membership was formed.
Sep 24 20:29:58 corosync [TOTEM ] mcasted message added to pending queue
Sep 24 20:29:58 corosync [TOTEM ] Delivering 0 to 1
Sep 24 20:29:58 corosync [TOTEM ] Delivering MCAST message with seq 1 to
pending delivery queue
Sep 24 20:29:58 corosync [pcmk  ] debug: pcmk_cluster_id_callback: Node
update: node4 (1.1.6)
Sep 24 20:29:58 corosync [SYNC  ] confchg entries 1
Sep 24 20:29:58 corosync [SYNC  ] Barrier Start Received From 234989760
Sep 24 20:29:58 corosync [SYNC  ] Barrier completion status for nodeid
234989760 = 1.
Sep 24 20:29:58 corosync [SYNC  ] Synchronization barrier completed
Sep 24 20:29:58 corosync [SYNC  ] Synchronization actions starting for
(dummy CLM service)
Sep 24 20:29:58 corosync [TOTEM ] mcasted message added to pending queue
Sep 24 20:29:58 corosync [TOTEM ] Delivering 1 to 2
Sep 24 20:29:58 corosync [TOTEM ] Delivering MCAST message with seq 2 to
pending delivery queue
Sep 24 20:29:58 corosync [TOTEM ] mcasted message added to pending queue
Sep 24 20:29:58 corosync [TOTEM ] releasing messages up to and including 1
Sep 24 20:29:58 corosync [TOTEM ] Delivering 2 to 3
Sep 24 20:29:58 corosync [TOTEM ] Delivering MCAST message with seq 3 to
pending delivery queue
Sep 24 20:29:58 corosync [SYNC  ] confchg entries 1
Sep 24 20:29:58 corosync [SYNC  ] Barrier Start Received From 234989760
Sep 24 20:29:58 corosync [SYNC  ] Barrier completion status for nodeid
234989760 = 1.
Sep 24 20:29:58 corosync [SYNC  ] Synchronization barrier completed
Sep 24 20:29:58 corosync [SYNC  ] Committing synchronization for (dummy CLM
service)
Sep 24 20:29:58 corosync [SYNC  ] Synchronization actions starting for
(dummy AMF service)
Sep 24 20:29:58 corosync [TOTEM ] releasing messages up to and including 2
Sep 24 20:29:58 corosync [TOTEM ] mcasted message added to pending queue
Sep 24 20:29:58 corosync [TOTEM ] releasing messages up to and including 3
Sep 24 20:29:58 corosync [TOTEM ] Delivering 3 to 4
Sep 24 20:29:58 corosync [TOTEM ] Delivering MCAST message with seq 4 to
pending delivery queue
Sep 24 20:29:58 corosync [SYNC  ] confchg entries 1
Sep 24 20:29:58 corosync [SYNC  ] Barrier Start Received From 234989760
Sep 24 20:29:58 corosync [SYNC  ] Barrier completion status for nodeid
234989760 = 1.
Sep 24 20:29:58 corosync [SYNC  ] Synchronization barrier completed
Sep 24 20:29:58 corosync [SYNC  ] Committing synchronization for (dummy AMF
service)
Sep 24 20:29:58 corosync [SYNC  ] Synchronization actions starting for
(dummy CKPT service)
Sep 24 20:29:58 corosync [TOTEM ] mcasted message added to pending queue
Sep 24 20:29:58 corosync [TOTEM ] releasing messages up to and including 4
Sep 24 20:29:58 corosync [TOTEM ] Delivering 4 to 5
Sep 24 20:29:58 corosync [TOTEM ] Delivering MCAST message with seq 5 to
pending delivery queue
Sep 24 20:29:58 corosync [SYNC  ] confchg entries 1
Sep 24 20:29:58 corosync [SYNC  ] Barrier Start Received From 234989760
Sep 24 20:29:58 corosync [SYNC  ] Barrier completion status for nodeid
234989760 = 1.
Sep 24 20:29:58 corosync [SYNC  ] Synchronization barrier completed
Sep 24 20:29:58 corosync [SYNC  ] Committing synchronization for (dummy
CKPT service)
Sep 24 20:29:58 corosync [SYNC  ] Synchronization actions starting for
(dummy EVT service)
Sep 24 20:29:58 corosync [TOTEM ] mcasted message added to pending queue
Sep 24 20:29:58 corosync [TOTEM ] Delivering 5 to 6
Sep 24 20:29:58 corosync [TOTEM ] Delivering MCAST message with seq 6 to
pending delivery queue
Sep 24 20:29:58 corosync [TOTEM ] mcasted message added to pending queue
Sep 24 20:29:58 corosync [TOTEM ] releasing messages up to and including 5
Sep 24 20:29:58 corosync [TOTEM ] Delivering 6 to 7
Sep 24 20:29:58 corosync [TOTEM ] Delivering MCAST message with seq 7 to
pending delivery queue
Sep 24 20:29:58 corosync [TOTEM ] releasing messages up to and including 6
Sep 24 20:29:58 corosync [TOTEM ] mcasted message added to pending queue
Sep 24 20:29:58 corosync [TOTEM ] releasing messages up to and including 7
Sep 24 20:29:58 corosync [TOTEM ] Delivering 7 to 8
Sep 24 20:29:58 corosync [TOTEM ] Delivering MCAST message with seq 8 to
pending delivery queue
Sep 24 20:29:58 corosync [SYNC  ] confchg entries 1
Sep 24 20:29:58 corosync [SYNC  ] Barrier Start Received From 234989760
Sep 24 20:29:58 corosync [SYNC  ] Barrier completion status for nodeid
234989760 = 1.
Sep 24 20:29:58 corosync [SYNC  ] Synchronization barrier completed
Sep 24 20:29:58 corosync [SYNC  ] Committing synchronization for (dummy EVT
service)
Sep 24 20:29:58 corosync [SYNC  ] Synchronization actions starting for
(corosync cluster closed process group service v1.01)
Sep 24 20:29:58 corosync [TOTEM ] mcasted message added to pending queue
Sep 24 20:29:58 corosync [TOTEM ] mcasted message added to pending queue
Sep 24 20:29:58 corosync [TOTEM ] Delivering 8 to a
Sep 24 20:29:58 corosync [TOTEM ] Delivering MCAST message with seq 9 to
pending delivery queue
Sep 24 20:29:58 corosync [TOTEM ] Delivering MCAST message with seq a to
pending delivery queue
Sep 24 20:29:58 corosync [CPG   ] comparing: sender r(0) ip(192.168.1.14) ;
members(old:0 left:0)
Sep 24 20:29:58 corosync [CPG   ] chosen downlist: sender r(0)
ip(192.168.1.14) ; members(old:0 left:0)
Sep 24 20:29:58 corosync [TOTEM ] mcasted message added to pending queue
Sep 24 20:29:58 corosync [TOTEM ] releasing messages up to and including 8
Sep 24 20:29:58 corosync [TOTEM ] Delivering a to b
Sep 24 20:29:58 corosync [TOTEM ] Delivering MCAST message with seq b to
pending delivery queue
Sep 24 20:29:58 corosync [SYNC  ] confchg entries 1
Sep 24 20:29:58 corosync [SYNC  ] Barrier Start Received From 234989760
Sep 24 20:29:58 corosync [SYNC  ] Barrier completion status for nodeid
234989760 = 1.
Sep 24 20:29:58 corosync [SYNC  ] Synchronization barrier completed
Sep 24 20:29:58 corosync [SYNC  ] Committing synchronization for (corosync
cluster closed process group service v1.01)
Sep 24 20:29:58 corosync [TOTEM ] mcasted message added to pending queue
Sep 24 20:29:58 corosync [TOTEM ] releasing messages up to and including a
Sep 24 20:29:58 corosync [TOTEM ] Delivering b to c
Sep 24 20:29:58 corosync [TOTEM ] Delivering MCAST message with seq c to
pending delivery queue
Sep 24 20:29:58 corosync [MAIN  ] Completed service synchronization, ready
to provide service.
Sep 24 20:29:58 corosync [TOTEM ] releasing messages up to and including b
Sep 24 20:29:58 corosync [TOTEM ] releasing messages up to and including c
Sep 24 20:29:59 corosync [TOTEM ] entering GATHER state from 11.
Sep 24 20:30:00 corosync [TOTEM ] Creating commit token because I am the
rep.
Sep 24 20:30:00 corosync [TOTEM ] Saving state aru c high seq received c
Sep 24 20:30:00 corosync [TOTEM ] Storing new sequence id for ring 108b4
Sep 24 20:30:00 corosync [TOTEM ] entering COMMIT state.
Sep 24 20:30:00 corosync [TOTEM ] got commit token
Sep 24 20:30:00 corosync [TOTEM ] entering RECOVERY state.
Sep 24 20:30:00 corosync [TOTEM ] TRANS [0] member 192.168.1.14:
Sep 24 20:30:00 corosync [TOTEM ] position [0] member 192.168.1.14:
Sep 24 20:30:00 corosync [TOTEM ] previous ring seq 108b0 rep 192.168.1.14
Sep 24 20:30:00 corosync [TOTEM ] aru c high delivered c received flag 1
Sep 24 20:30:00 corosync [TOTEM ] position [1] member 192.168.1.17:
Sep 24 20:30:00 corosync [TOTEM ] previous ring seq 108b0 rep 192.168.1.17
Sep 24 20:30:00 corosync [TOTEM ] aru c high delivered c received flag 1
Sep 24 20:30:00 corosync [TOTEM ] Did not need to originate any messages in
recovery.
Sep 24 20:30:00 corosync [TOTEM ] got commit token
Sep 24 20:30:00 corosync [TOTEM ] Sending initial ORF token
Sep 24 20:30:00 corosync [TOTEM ] token retrans flag is 0 my set retrans
flag0 retrans queue empty 1 count 0, aru 0
Sep 24 20:30:00 corosync [TOTEM ] install seq 0 aru 0 high seq received 0
Sep 24 20:30:00 corosync [TOTEM ] token retrans flag is 0 my set retrans
flag0 retrans queue empty 1 count 1, aru 0
Sep 24 20:30:00 corosync [TOTEM ] install seq 0 aru 0 high seq received 0
Sep 24 20:30:00 corosync [TOTEM ] token retrans flag is 0 my set retrans
flag0 retrans queue empty 1 count 2, aru 0
Sep 24 20:30:00 corosync [TOTEM ] install seq 0 aru 0 high seq received 0
Sep 24 20:30:00 corosync [TOTEM ] token retrans flag is 0 my set retrans
flag0 retrans queue empty 1 count 3, aru 0
Sep 24 20:30:00 corosync [TOTEM ] install seq 0 aru 0 high seq received 0
Sep 24 20:30:00 corosync [TOTEM ] retrans flag count 4 token aru 0 install
seq 0 aru 0 0
Sep 24 20:30:00 corosync [TOTEM ] Resetting old ring state
Sep 24 20:30:00 corosync [TOTEM ] recovery to regular 1-0
Sep 24 20:30:00 corosync [TOTEM ] Delivering to app d to c
Sep 24 20:30:00 corosync [pcmk  ] notice: pcmk_peer_update: Transitional
membership event on ring 67764: memb=1, new=0, lost=0
Sep 24 20:30:00 corosync [pcmk  ] info: pcmk_peer_update: memb: node4
234989760
Sep 24 20:30:00 corosync [pcmk  ] notice: pcmk_peer_update: Stable
membership event on ring 67764: memb=2, new=1, lost=0
Sep 24 20:30:00 corosync [pcmk  ] info: update_member: Creating entry for
node 285321408 born on 67764
Sep 24 20:30:00 corosync [pcmk  ] info: update_member: Node
285321408/unknown is now: member
Sep 24 20:30:00 corosync [pcmk  ] info: pcmk_peer_update: NEW:  .pending.
285321408
Sep 24 20:30:00 corosync [pcmk  ] debug: pcmk_peer_update: Node 285321408
has address r(0) ip(192.168.1.17)
Sep 24 20:30:00 corosync [pcmk  ] info: pcmk_peer_update: MEMB: node4
234989760
Sep 24 20:30:00 corosync [pcmk  ] info: pcmk_peer_update: MEMB: .pending.
285321408
Sep 24 20:30:00 corosync [pcmk  ] debug: pcmk_peer_update: 1 nodes changed
Sep 24 20:30:00 corosync [pcmk  ] info: send_member_notification: Sending
membership update 67764 to 0 children
Sep 24 20:30:00 corosync [pcmk  ] debug: send_cluster_id: Born-on set to:
67764 (peer)
Sep 24 20:30:00 corosync [pcmk  ] debug: send_cluster_id: Local update:
id=234989760, born=67764, seq=67764
Sep 24 20:30:00 corosync [pcmk  ] info: update_member: 0x1adb700 Node
234989760 ((null)) born on: 67764
Sep 24 20:30:00 corosync [SYNC  ] This node is within the primary component
and will provide service.
Sep 24 20:30:00 corosync [TOTEM ] entering OPERATIONAL state.
Sep 24 20:30:00 corosync [TOTEM ] A processor joined or left the membership
and a new membership was formed.
Sep 24 20:30:00 corosync [TOTEM ] mcasted message added to pending queue
Sep 24 20:30:00 corosync [TOTEM ] Delivering 0 to 2
Sep 24 20:30:00 corosync [TOTEM ] Received ringid(192.168.1.14:67764) seq 2
Sep 24 20:30:00 corosync [TOTEM ] Delivering 0 to 2
Sep 24 20:30:00 corosync [TOTEM ] Delivering 0 to 2
Sep 24 20:30:00 corosync [TOTEM ] Received ringid(192.168.1.14:67764) seq 3
Sep 24 20:30:00 corosync [TOTEM ] Delivering 0 to 3
Sep 24 20:30:00 corosync [TOTEM ] Delivering 0 to 3
Sep 24 20:30:00 corosync [TOTEM ] Received ringid(192.168.1.14:67764) seq 4
Sep 24 20:30:00 corosync [TOTEM ] Delivering 0 to 4
Sep 24 20:30:00 corosync [TOTEM ] Delivering 0 to 4
Sep 24 20:30:00 corosync [TOTEM ] Delivering 0 to 4
Sep 24 20:30:00 corosync [TOTEM ] Delivering 0 to 4
Sep 24 20:30:00 corosync [TOTEM ] Delivering 0 to 4
Sep 24 20:30:00 corosync [TOTEM ] Delivering 0 to 4
Sep 24 20:30:00 corosync [TOTEM ] Delivering 0 to 4
Sep 24 20:30:00 corosync [TOTEM ] Delivering 0 to 4
Sep 24 20:30:00 corosync [TOTEM ] Delivering 0 to 4
Sep 24 20:30:00 corosync [TOTEM ] Delivering 0 to 4
Sep 24 20:30:00 corosync [TOTEM ] Delivering 0 to 4
Sep 24 20:30:00 corosync [TOTEM ] Delivering 0 to 4
Sep 24 20:30:00 corosync [TOTEM ] Delivering 0 to 4
Sep 24 20:30:00 corosync [TOTEM ] Delivering 0 to 4
Sep 24 20:30:00 corosync [TOTEM ] Delivering 0 to 4
Sep 24 20:30:00 corosync [TOTEM ] Delivering 0 to 4
Sep 24 20:30:00 corosync [TOTEM ] Delivering 0 to 4
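
For reference, here is a corosync.conf sketch consistent with the options the
get_config_opt lines above report. The logging and Pacemaker service values are
taken from the log; the totem interface values (bindnetaddr, mcastaddr,
mcastport) are placeholders I am assuming from the "network interface
[192.168.1.14]" line, so compare against the real file, which should be the
same on all three nodes:

# /etc/corosync/corosync.conf -- sketch only; interface values are assumptions
totem {
        version: 2
        token: 5000                      # matches "Token Timeout (5000 ms)" above
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.1.0 # assumed from interface 192.168.1.14
                mcastaddr: 226.94.1.1    # placeholder -- use the value in your file
                mcastport: 5405          # placeholder
        }
}

logging {
        debug: on                        # "Found 'on' for option: debug"
        to_logfile: yes
        logfile: /var/log/corosync/corosync.log
        to_syslog: yes
        syslog_facility: daemon
}

service {
        # Load the Pacemaker plugin. With ver: 1, corosync does NOT spawn the
        # Pacemaker daemons itself; pacemakerd must be started separately.
        name: pacemaker
        ver: 1                           # "Found '1' for option: ver"
}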

2012/9/24 Andrew Beekhof <andrew at beekhof.net>

> Descriptions or summaries of logs are no substitute for the real thing.
> Not much we can do to help without them, sorry.
>
> On Fri, Sep 21, 2012 at 1:23 PM, 龙龙 <longvslong at gmail.com> wrote:
> > Hi,
> >    I have two nodes (3 and 7) with Pacemaker installed, and both nodes
> > are online. But when I try to add a third node (4), it cannot join the
> > current cluster.
> > The problem is as follows:
> > 1. If I start 3 and 7 first, they work. But when I then start 4, it
> > shows "Failed to connect to cluster".
> > 2. If I start 4 first, it works. But when I then start 3 and 7, those
> > two fail to connect to the cluster. When I use crm_mon to check the
> > status, it shows "Attempting connection to the cluster...".
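
For what it is worth, the log above shows the Pacemaker plugin loaded with
ver: 1 and a membership update being sent "to 0 children", which suggests no
Pacemaker daemons were connected to corosync on node4 at that point; with
ver: 1, pacemakerd is not started by corosync and has to be started on its
own. A rough check on the node that sits at "Attempting connection to the
cluster..." might look like this (assuming your distribution ships init
scripts named corosync and pacemaker):

service corosync start                 # or: /etc/init.d/corosync start
service pacemaker start                # needed with ver: 1 -- corosync will not spawn pacemakerd

ps axf | egrep 'pacemakerd|crmd|cib'   # are the Pacemaker daemons actually running?
corosync-cfgtool -s                    # is the ring active on the expected interface?
crm_mon -1                             # one-shot status; should no longer say "Attempting connection..."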

