I set up a Hadoop 2.6 cluster using two nodes with 8 cores each on Ubuntu 12.04. Both sbin/start-dfs.sh and sbin/start-yarn.sh succeed, and jps on the master node shows the following:
```
22437 DataNode
22988 ResourceManager
24668 Jps
22748 SecondaryNameNode
23244 NodeManager
```

The jps output on the slave node is:
```
19693 DataNode
19966 NodeManager
```

I then run the PI example:
```
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 30 100
```
which gives me this error log:
```
java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had invalid wire type.; Host Details : local host is: "Master-R5-Node/xxx.ww.y.zz"; destination host is: "Master-R5-Node":54310;
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
    at org.apache.hadoop.ipc.Client.call(Client.java:1472)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)
```

The problem seems to be with the HDFS file system, since the command bin/hdfs dfs -mkdir /user fails with a similar exception:
```
java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had invalid wire type.; Host Details : local host is: "Master-R5-Node/xxx.ww.y.zz"; destination host is: "Master-R5-Node":54310;
```

where xxx.ww.y.zz is the IP address of Master-R5-Node.
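For reference, a quick way to check whether a NameNode process is actually listening on the configured RPC port (a diagnostic sketch only; jps and netstat are standard tools, and 54310 is the port from the error above):

```sh
# Check whether a NameNode is running at all on the master
# (note that the jps listing above shows no NameNode process).
jps | grep -i namenode

# Check what, if anything, is listening on the configured RPC port 54310.
netstat -tlnp 2>/dev/null | grep 54310
```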
I have checked and followed all the ConnectionRefused recommendations on the Apache wiki and on this site. Despite a week-long effort, I cannot get it fixed.
Thanks.
Best Answer
There are many possible causes for the problem I faced, but I finally fixed it with a combination of the following steps.
1. Make sure that you have the needed permissions on the /hadoop and HDFS temporary directories (you have to figure out where those are for your particular case; see the sketch after this list).
2. Remove the port number from fs.defaultFS in $HADOOP_CONF_DIR/core-site.xml, so the client falls back to the default NameNode RPC port (8020) instead of 54310. It should look like this:

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://my.master.ip.address/</value>
    <description>NameNode URI</description>
  </property>
</configuration>
```
3. Add the following two properties to $HADOOP_CONF_DIR/hdfs-site.xml:

```xml
<property>
  <name>dfs.datanode.use.datanode.hostname</name>
  <value>false</value>
</property>
<property>
  <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
  <value>false</value>
</property>
```
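For step 1, a minimal sketch of what fixing the permissions might look like, assuming hadoop.tmp.dir points at /app/hadoop/tmp and Hadoop runs as user hduser in group hadoop (the path, user, and group are all hypothetical; substitute your own):

```sh
# Hypothetical directory, user, and group; adjust to your installation.
sudo mkdir -p /app/hadoop/tmp
sudo chown -R hduser:hadoop /app/hadoop/tmp
sudo chmod -R 750 /app/hadoop/tmp
```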
Voila! You should now be up and running!
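As a sanity check, restart HDFS so the new configuration takes effect, then retry the operations that failed (these are the same commands as in the question, run from the Hadoop installation directory):

```sh
# Restart HDFS, then re-test the failing HDFS and MapReduce commands.
sbin/stop-dfs.sh
sbin/start-dfs.sh
bin/hdfs dfs -mkdir -p /user
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 30 100
```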