[ClusterLabs] Problem in starting cman
Vijay Partha
vijaysarathy94 at gmail.com
Mon Aug 3 08:37:05 EDT 2015
service cman start
Starting cluster:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... [ OK ]
Waiting for quorum... [ OK ]
Starting fenced... [ OK ]
Starting dlm_controld... [ OK ]
Tuning DLM kernel config... [ OK ]
Starting gfs_controld... [ OK ]
Unfencing self... [ OK ]
Joining fence domain...
It doesn't go beyond this point.
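If it helps, these are the checks I can run from a second terminal while the start script sits at "Joining fence domain..." (just the standard cman-stack tools; a sketch to be read against my own cluster.conf, not a recipe):

cman_tool status      # is the node quorate and does it see its peer?
cman_tool nodes       # cluster membership as cman sees it
fence_tool ls         # state of the fence domain join that is hanging
group_tool ls         # fenced/dlm_controld/gfs_controld groups and their join state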
This is my Pacemaker log:
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: crm_ipc_connect: Could not establish pacemakerd connection: Connection refused (111)
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: config_find_next: Processing additional service options...
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Found 'corosync_quorum' for option: name
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: config_find_next: Processing additional service options...
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Found 'corosync_cman' for option: name
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: config_find_next: Processing additional service options...
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Found 'openais_ckpt' for option: name
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: config_find_next: No additional configuration supplied for: service
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: config_find_next: Processing additional quorum options...
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Found 'quorum_cman' for option: provider
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_cluster_type: Detected an active 'cman' cluster
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: mcp_read_config: Reading configure for stack: cman
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: config_find_next: Processing additional logging options...
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Defaulting to 'off' for option: debug
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Found 'yes' for option: to_logfile
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Found '/var/log/cluster/corosync.log' for option: logfile
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Found 'yes' for option: to_syslog
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: get_config_opt: Found 'local4' for option: syslog_facility
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: notice: main: Starting Pacemaker 1.1.11 (Build: 97629de): generated-manpages agent-manpages ascii-docs ncurses libqb-logging libqb-ipc nagios corosync-plugin cman acls
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: main: Maximum core file size is: 18446744073709551615
Jul 29 13:38:32 [22563] vmx-occ-004 pacemakerd: info: qb_ipcs_us_publish: server name: pacemakerd
Jul 29 13:46:17 [22563] vmx-occ-004 pacemakerd: error: cluster_connect_cpg: Could not join the CPG group 'pacemakerd': 6
Jul 29 13:46:17 [22563] vmx-occ-004 pacemakerd: error: main: Couldn't connect to Corosync's CPG service
Jul 29 13:46:17 [22563] vmx-occ-004 pacemakerd: info: crm_xml_cleanup: Cleaning up memory from libxml2
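If I read the corosync headers correctly, the return code 6 on cluster_connect_cpg corresponds to CS_ERR_TRY_AGAIN, i.e. the CPG join keeps being deferred rather than rejected outright. A couple of checks I can run against corosync itself (assuming the corosync 1.x/cman tool set on these nodes):

corosync-cfgtool -s               # totem ring status and the local node address
corosync-objctl | grep -i member  # dump the runtime object database and look for the member list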
This is my message log:
Aug 3 14:35:31 vmx-occ-004 dlm_controld[2383]: daemon cpg_join error retrying
Aug 3 14:35:33 vmx-occ-004 /usr/sbin/gmetad[4295]: data_thread() for [my cluster] failed to contact node 10.61.40.194
Aug 3 14:35:33 vmx-occ-004 /usr/sbin/gmetad[4295]: data_thread() got no answer from any [my cluster] datasource
Aug 3 14:35:34 vmx-occ-004 fenced[2359]: daemon cpg_join error retrying
Aug 3 14:35:37 vmx-occ-004 /usr/sbin/gmond[4590]: Error creating multicast server mcast_join=10.61.40.194 port=8649 mcast_if=NULL family='inet4'. Will try again...#012
Aug 3 14:35:38 vmx-occ-004 gfs_controld[2458]: daemon cpg_join error retrying
Aug 3 14:35:41 vmx-occ-004 dlm_controld[2383]: daemon cpg_join error retrying
Aug 3 14:35:44 vmx-occ-004 fenced[2359]: daemon cpg_join error retrying
Aug 3 14:35:48 vmx-occ-004 /usr/sbin/gmetad[4295]: data_thread() for [my cluster] failed to contact node 10.61.40.194
Aug 3 14:35:48 vmx-occ-004 /usr/sbin/gmetad[4295]: data_thread() got no answer from any [my cluster] datasource
Aug 3 14:35:48 vmx-occ-004 gfs_controld[2458]: daemon cpg_join error retrying
Aug 3 14:35:51 vmx-occ-004 dlm_controld[2383]: daemon cpg_join error retrying
Aug 3 14:35:54 vmx-occ-004 fenced[2359]: daemon cpg_join error retrying
Aug 3 14:35:58 vmx-occ-004 gfs_controld[2458]: daemon cpg_join error retrying
Aug 3 14:36:01 vmx-occ-004 dlm_controld[2383]: daemon cpg_join error retrying
Aug 3 14:36:03 vmx-occ-004 /usr/sbin/gmetad[4295]: data_thread() for [my cluster] failed to contact node 10.61.40.194
Aug 3 14:36:03 vmx-occ-004 /usr/sbin/gmetad[4295]: data_thread() got no answer from any [my cluster] datasource
Aug 3 14:36:04 vmx-occ-004 fenced[2359]: daemon cpg_join error retrying
Aug 3 14:36:08 vmx-occ-004 gfs_controld[2458]: daemon cpg_join error retrying
Aug 3 14:36:11 vmx-occ-004 dlm_controld[2383]: daemon cpg_join error retrying
Aug 3 14:36:14 vmx-occ-004 fenced[2359]: daemon cpg_join error retrying
Aug 3 14:36:18 vmx-occ-004 gfs_controld[2458]: daemon cpg_join error retrying
Aug 3 14:36:18 vmx-occ-004 /usr/sbin/gmetad[4295]: data_thread() for [my cluster] failed to contact node 10.61.40.194
Aug 3 14:36:18 vmx-occ-004 /usr/sbin/gmetad[4295]: data_thread() got no answer from any [my cluster] datasource
Aug 3 14:36:21 vmx-occ-004 dlm_controld[2383]: daemon cpg_join error retrying
Aug 3 14:36:24 vmx-occ-004 fenced[2359]: daemon cpg_join error retrying
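The repeated "daemon cpg_join error retrying" from fenced, dlm_controld and gfs_controld looks like one and the same symptom: the daemons cannot complete their corosync CPG joins. This is a minimal sketch of the network-side checks I am running, assuming the default corosync UDP ports (5404/5405) and that omping is installed on both nodes:

iptables -L -n | grep -E '5404|5405'   # is there a firewall rule covering the corosync ports?
getenforce                             # SELinux mode on this node
omping vmx-occ-004 vmx-occ-005         # run on both nodes at once; exercises unicast and multicast between them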
On Mon, Aug 3, 2015 at 6:02 PM, emmanuel segura <emi2fast at gmail.com> wrote:
> Sorry, but I think it would be easier to help you if you provided more
> information about your problem.
>
> 2015-08-03 14:14 GMT+02:00 Vijay Partha <vijaysarathy94 at gmail.com>:
> > Hi
> >
> > When I start cman, it hangs at "Joining fence domain".
> >
> > This is my message log:
> >
> >
> > Aug 3 14:12:16 vmx-occ-005 dlm_controld[2112]: daemon cpg_join error retrying
> > Aug 3 14:12:19 vmx-occ-005 gfs_controld[2191]: daemon cpg_join error retrying
> > Aug 3 14:12:24 vmx-occ-005 fenced[2098]: daemon cpg_join error retrying
> > Aug 3 14:12:27 vmx-occ-005 dlm_controld[2112]: daemon cpg_join error retrying
> > Aug 3 14:12:29 vmx-occ-005 gfs_controld[2191]: daemon cpg_join error retrying
> > Aug 3 14:12:34 vmx-occ-005 fenced[2098]: daemon cpg_join error retrying
> > Aug 3 14:12:37 vmx-occ-005 dlm_controld[2112]: daemon cpg_join error retrying
> > Aug 3 14:12:39 vmx-occ-005 gfs_controld[2191]: daemon cpg_join error retrying
> > Aug 3 14:12:44 vmx-occ-005 fenced[2098]: daemon cpg_join error retrying
> > Aug 3 14:12:47 vmx-occ-005 dlm_controld[2112]: daemon cpg_join error retrying
> > Aug 3 14:12:49 vmx-occ-005 gfs_controld[2191]: daemon cpg_join error retrying
> > Aug 3 14:12:54 vmx-occ-005 fenced[2098]: daemon cpg_join error retrying
> >
> > How do I solve this issue?
> > --
> > With Regards
> > P.Vijay
> >
>
>
>
> --
> .~.
> /V\
> // \\
> /( )\
> ^`~'^
>
--
With Regards
P.Vijay