
I really can't believe myself...

The error from before showed up again today:

hadoop@hapmaster:~/hadoop-2.3.0/sbin$ hdfs dfsadmin -report
Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: NaN%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 0 (0 total, 0 dead)

hadoop@hapmaster:~/hadoop-2.3.0/sbin$ 
Yesterday this same error was caused by running hdfs namenode -format multiple times, which left the namespaceIDs out of sync; deleting the dfs.data.dir directory configured on each datanode fixed it.
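Before wiping anything, the namespaceID mismatch can be confirmed by comparing the VERSION files under the namenode's dfs.name.dir/current and each datanode's dfs.data.dir/current. A minimal sketch of that check (the key=value format of VERSION is real; the sample file contents below are made up for illustration):

```python
# Compare the namespaceID recorded in the namenode's and a datanode's
# VERSION files. A mismatch means the datanode's data directory is left
# over from before a re-format and must be cleared.

def parse_version(text):
    """Parse a Hadoop VERSION file (key=value lines, '#' comments) into a dict."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            props[key] = value
    return props

def same_namespace(namenode_version, datanode_version):
    """True if both VERSION files carry the same namespaceID."""
    nn = parse_version(namenode_version)
    dn = parse_version(datanode_version)
    return nn.get("namespaceID") == dn.get("namespaceID")

# Illustrative contents: a datanode formatted before the namenode was re-formatted.
nn_version = "#Thu Mar 06 10:00:00 CST 2014\nnamespaceID=1499527324\n"
dn_version = "#Wed Mar 05 09:00:00 CST 2014\nnamespaceID=913205445\n"

print(same_namespace(nn_version, dn_version))  # False: this datanode will not register
```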

But the difference this time is that the datanodes were already running:

hadoop@hapmaster:~/hadoop-2.3.0/sbin$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [hapmaster]
hapmaster: namenode running as process 15828. Stop it first.
hapslave4: datanode running as process 8675. Stop it first.
hapslave3: datanode running as process 8854. Stop it first.
hapslave1: datanode running as process 9104. Stop it first.
hapslave2: datanode running as process 8986. Stop it first.
Starting secondary namenodes [hapmaster]
hapmaster: secondarynamenode running as process 16151. Stop it first.
starting yarn daemons
resourcemanager running as process 16309. Stop it first.
hapslave4: nodemanager running as process 8895. Stop it first.
hapslave1: nodemanager running as process 9324. Stop it first.
hapslave3: nodemanager running as process 9079. Stop it first.
hapslave2: nodemanager running as process 9202. Stop it first.
Could it be a network problem?

A ping test showed the namenode could ping the datanodes, but not the other way around. It turned out the IPs were not on the same subnet; after correcting the addresses and restarting, everything worked.
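This kind of subnet mismatch can be caught programmatically before resorting to ping. A small sketch using Python's standard ipaddress module (the addresses and the /24 netmask are hypothetical examples, not the cluster's real configuration):

```python
import ipaddress

def same_subnet(ip_a, ip_b, netmask="255.255.255.0"):
    """Check whether two IPv4 hosts fall in the same subnet for a given mask."""
    net_a = ipaddress.ip_network(f"{ip_a}/{netmask}", strict=False)
    net_b = ipaddress.ip_network(f"{ip_b}/{netmask}", strict=False)
    return net_a == net_b

# Hypothetical addresses: namenode on 192.168.1.x, one datanode
# misconfigured onto 192.168.2.x -- they cannot reach each other directly.
print(same_subnet("192.168.1.10", "192.168.1.11"))  # True
print(same_subnet("192.168.1.10", "192.168.2.11"))  # False
```

Running the check against every host listed in the slaves file would flag the misconfigured datanode immediately instead of leaving it to show up as "0 datanodes available".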
