What if I decommission one of the datanodes in a cluster which only has two datanodes?


I set up an HDFS cluster which has one master (namenode) and two slaves (datanodes),

and dfs.replication is set to "2",

so every block is replicated on both slaves, and the files on the two slaves are identical.

My question is: when I try to decommission one of the two slaves, it always shows "Decommission In Progress", but no files are being copied (I monitored the network with sar).

So I think that if the cluster has only two datanodes and the replication factor is set to "2", I cannot decommission either datanode, because if I decommission one of them, only one node is left, so the blocks can no longer be replicated twice.

Is that right?
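
For context, a decommission is normally started by listing the node in an exclude file and asking the namenode to re-read it. A minimal sketch, assuming the standard dfs.hosts.exclude mechanism (the exclude-file path here is only an example; older releases use "hadoop dfsadmin" instead of "hdfs dfsadmin"):

    <!-- hdfs-site.xml: point the namenode at an exclude file -->
    <property>
      <name>dfs.hosts.exclude</name>
      <value>/etc/hadoop/conf/dfs.exclude</value>  <!-- example path -->
    </property>

    # add the slave's hostname to the exclude file, then:
    hdfs dfsadmin -refreshNodes   # starts the decommission
    hdfs dfsadmin -report         # shows each datanode's decommission state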

Best answer


I believe that with a replication factor of 2 in the cluster, if you decommission one datanode, Hadoop will treat it as the crash of a datanode and will continue working with the remaining datanode. However, if you ever put that node back into the cluster in the future, Hadoop will start replicating files to that node again.

So you can have a replication factor of 2 with only one node in the cluster; it will not hamper the working of Hadoop in any way.
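
One detail worth adding: the decommission itself can only complete once every block on the retiring node has been re-replicated onto other datanodes. With dfs.replication set to 2 and only one other datanode available, that target can never be met, which is why the status stays at "Decommission In Progress". A hedged workaround sketch, assuming you can accept keeping only a single copy of the data:

    # reduce the replication factor of all existing files to 1, so the
    # blocks no longer need a second copy; -w waits for it to finish
    hdfs dfs -setrep -w 1 /

    # fsck reports how many blocks are still under-replicated; the
    # decommission can only finish once the retiring node's blocks
    # all have enough replicas elsewhere
    hdfs fsck / | grep -i 'under-replicated'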
