org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException): failed to create file /hisdata/20150121/20150121 for DFSClient_attempt_1471504264423_0029_r_000003_0_-472905518_1 for client 192.168.186.151 because current leaseholder is trying to recreate file.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:3075)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInternal(FSNamesystem.java:2905)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFileInt(FSNamesystem.java:3186)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:3149)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:611)
    at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.append(AuthorizationProviderProxyClientProtocol.java:124)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:416)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
I recently needed custom output directories in a Hadoop job: each record should be written to a file named after its key, under /hisdata/<key>/<key>. At first I put the following code in the map phase:
FSDataOutputStream os = null;
Configuration conf = context.getConfiguration();
// FileSystem.get() returns a shared, cached instance for this JVM
FileSystem fs = FileSystem.get(conf);
Path outputDir = new Path("/hisdata/" + key.toString() + "/" + key.toString());
try {
    // Create the file on first write, append on subsequent writes
    if (!fs.exists(outputDir)) {
        os = fs.create(outputDir);
    } else {
        os = fs.append(outputDir);
    }
    for (Text text : values) {
        String content = text.toString() + "\n";
        byte[] buff = content.getBytes();
        os.write(buff, 0, buff.length);
    }
} catch (Exception e) {
    e.printStackTrace();
} finally {
    // Close the stream, not the FileSystem: fs is a shared cached
    // instance, and closing it breaks other users in the same JVM.
    if (os != null) {
        os.close();
    }
}
I had not considered map-side concurrency. Many map tasks run in parallel, and when several of them try to create or append to the same HDFS file at once, the NameNode rejects the later requests because the first client still holds the file's lease, producing exactly this error. Later I moved the code into the reduce phase, using the map output key as the reduce key, so that each key (and therefore each output file) is handled by exactly one reducer, and the problem disappeared. The root cause was that I did not understand the MapReduce model thoroughly enough; some problems are simple once you notice them or understand them properly.
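The reduce-side version can be sketched roughly as follows. This is a minimal sketch, not the original author's exact code: the class name DateOutputReducer is illustrative, and it assumes the map output key is the date string (e.g. 20150121) and both key and value are Text. Because the partitioner sends all values for one key to a single reduce call, only one client ever touches /hisdata/<key>/<key>, so no lease conflict can occur.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical reducer: all records for one key arrive at exactly one
// reduce() call, so a single client creates or appends to the per-key
// file and the HDFS lease is never contested.
public class DateOutputReducer extends Reducer<Text, Text, Text, Text> {

    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        Configuration conf = context.getConfiguration();
        FileSystem fs = FileSystem.get(conf);
        Path outputFile = new Path("/hisdata/" + key.toString() + "/" + key.toString());

        FSDataOutputStream os = null;
        try {
            // Safe here: this key is processed by only this reducer,
            // so create/append cannot race with another task.
            os = fs.exists(outputFile) ? fs.append(outputFile) : fs.create(outputFile);
            for (Text value : values) {
                os.write((value.toString() + "\n").getBytes("UTF-8"));
            }
        } finally {
            if (os != null) {
                os.close(); // close the stream only, not the shared FileSystem
            }
        }
    }
}
```

For production jobs, Hadoop's built-in MultipleOutputs class (org.apache.hadoop.mapreduce.lib.output.MultipleOutputs) is the more idiomatic way to write per-key output files, since it handles task-attempt isolation and output committing for you.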