hadoop Protocol message tag had invalid wire type



I set up a Hadoop 2.6 cluster using two nodes with 8 cores each on Ubuntu 12.04. sbin/start-dfs.sh and sbin/start-yarn.sh both succeed, and I can see the following after running jps on the master node.

22437 DataNode
22988 ResourceManager
24668 Jps
22748 SecondaryNameNode
23244 NodeManager

The jps output on the slave node is

19693 DataNode
19966 NodeManager

I then run the PI example.

bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 30 100

This gives me the following error log:

java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had invalid wire type.; Host Details : local host is: "Master-R5-Node/xxx.ww.y.zz"; destination host is: "Master-R5-Node":54310;
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
	at org.apache.hadoop.ipc.Client.call(Client.java:1472)
	at org.apache.hadoop.ipc.Client.call(Client.java:1399)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
	at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)

The problem seems to be with the HDFS file system, since trying the command bin/hdfs dfs -mkdir /user fails with a similar exception.

java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message tag had invalid wire type.; Host Details : local host is: "Master-R5-Node/xxx.ww.y.zz"; destination host is: "Master-R5-Node":54310;

where xxx.ww.y.zz is the IP address of Master-R5-Node.

I have checked and followed all of the ConnectionRefused recommendations on the Apache wiki and on this site.
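For reference, the basic checks that guidance boils down to can be run from the master node; this is a minimal sketch, assuming the host name Master-R5-Node and the NameNode RPC port 54310 from the error above:

```sh
# Confirm how Master-R5-Node resolves; a mismatch between /etc/hosts and the
# interface the NameNode actually binds to is a common source of RPC failures.
getent hosts Master-R5-Node

# Check which process, if any, is listening on the NameNode RPC port.
netstat -tlnp | grep 54310

# Raw TCP test against the exact host:port the client uses.
telnet Master-R5-Node 54310
```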

Despite a week-long effort, I cannot get it fixed.

Thanks.

Accepted Answer


There are many possible causes of the problem I faced, but I finally fixed it using some of the following steps.

Make sure that you have the needed permissions on /hadoop and the HDFS temporary files (you have to figure out where those are for your particular case).

Remove the port number from fs.defaultFS in $HADOOP_CONF_DIR/core-site.xml. It should look like this:
```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://my.master.ip.address/</value>
    <description>NameNode URI</description>
  </property>
</configuration>
```
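To confirm which value the client actually resolves, Hadoop ships a getconf tool; a quick check (assuming the same working directory as the commands above) is:

```sh
# Print the fs.defaultFS value as resolved from the loaded configuration files.
bin/hdfs getconf -confKey fs.defaultFS
```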
Add the following two properties to $HADOOP_CONF_DIR/hdfs-site.xml:
```xml
<property>
  <name>dfs.datanode.use.datanode.hostname</name>
  <value>false</value>
</property>
<property>
  <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
  <value>false</value>
</property>
```
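After changing either file, HDFS must be restarted for the new settings to take effect; re-running the command that failed earlier is a quick sanity check:

```sh
# Restart HDFS so the updated core-site.xml and hdfs-site.xml are loaded.
sbin/stop-dfs.sh
sbin/start-dfs.sh

# Retry the operation that previously failed with the wire-type exception.
bin/hdfs dfs -mkdir /user
```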

Voila! You should now be up and running!
