Where are the HDFS configuration files in Hadoop 2.2.0?

Question


I'm studying Hadoop and currently I'm trying to set up a Hadoop 2.2.0 single node. I downloaded the latest distribution, uncompressed it, and now I'm trying to set up the Hadoop Distributed File System (HDFS).

Now, I'm trying to follow the Hadoop instructions available here, but I'm quite lost.

In the left sidebar there are references to the following files:

  • core-default.xml
  • hdfs-default.xml
  • mapred-default.xml
  • yarn-default.xml

But what are those files, and where do I find them?

I found /etc/hadoop/hdfs-site.xml, but it is empty!

I found /share/doc/hadoop/hadoop-project-dist/hadoop-common/core-default.xml but it is just a piece of doc!

So, which files do I have to modify to configure HDFS? And where are the default values read from?

Thanks in advance for your help.

Solution

These files are all found in the Hadoop configuration directory: etc/hadoop under the installation root in Hadoop 2.x (it was hadoop/conf in the 1.x releases).

To set up HDFS you have to configure core-site.xml and hdfs-site.xml.

HDFS works in two modes: distributed (multi-node cluster) and pseudo-distributed (cluster of one single machine).

For the pseudo-distributed mode you have to configure:

In core-site.xml:

<!-- namenode -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:8020</value>
</property>

In hdfs-site.xml:

<!-- storage directory for HDFS: the hadoop.tmp.dir property,
     whose default is /tmp/hadoop-${user.name} -->
<property>
  <name>hadoop.tmp.dir</name>
  <value>/your-dir/</value>
</property>
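Both snippets use the same plain `<property>` XML format. As a quick sanity check, here is a small Python sketch (the helper names write_site_xml and read_site_xml are illustrative, not part of Hadoop) that writes a minimal site file in this format and reads the values back:

```python
import os
import tempfile
import xml.etree.ElementTree as ET

def write_site_xml(path, props):
    """Write a Hadoop-style *-site.xml file from a dict of properties."""
    conf = ET.Element("configuration")
    for name, value in props.items():
        prop = ET.SubElement(conf, "property")
        ET.SubElement(prop, "name").text = name
        ET.SubElement(prop, "value").text = value
    ET.ElementTree(conf).write(path, encoding="unicode")

def read_site_xml(path):
    """Parse a *-site.xml file back into a {name: value} dict."""
    root = ET.parse(path).getroot()
    return {p.findtext("name"): p.findtext("value")
            for p in root.findall("property")}

tmp = tempfile.mkdtemp()
core = os.path.join(tmp, "core-site.xml")
write_site_xml(core, {"fs.default.name": "hdfs://localhost:8020"})
print(read_site_xml(core))  # {'fs.default.name': 'hdfs://localhost:8020'}
```

If Hadoop refuses to start after you edit these files, a malformed tag (like a comment written as `<--` instead of `<!--`) is a common cause, and round-tripping the file through a parser like this catches it.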

Each property has its hardcoded default value.
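This also answers where the default values come from: the *-default.xml files you found under share/doc are only documentation of defaults that are baked into the Hadoop jars, and any value in a *-site.xml file overrides them. A rough Python sketch of that resolution order (the names below are illustrative, not Hadoop's actual API):

```python
# Defaults as shipped inside the Hadoop jars (documented in *-default.xml).
DEFAULTS = {
    "fs.default.name": "file:///",
    "hadoop.tmp.dir": "/tmp/hadoop-${user.name}",
}

def resolve(key, site_config):
    """A *-site.xml value wins; otherwise fall back to the hardcoded default."""
    return site_config.get(key, DEFAULTS[key])

site = {"fs.default.name": "hdfs://localhost:8020"}  # from core-site.xml
print(resolve("fs.default.name", site))  # hdfs://localhost:8020 (overridden)
print(resolve("hadoop.tmp.dir", site))   # /tmp/hadoop-${user.name} (default)
```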

Remember to set up passwordless SSH login for the hadoop user before starting HDFS.
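The usual recipe matches the Hadoop single-node setup guide (the default key path is assumed; skip the keygen step if ~/.ssh/id_rsa already exists):

```shell
# Run as the user that will start HDFS.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
# Generate a passphrase-less RSA key pair.
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
# Authorize the public key for SSH logins to this same machine.
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# After this, `ssh localhost` should not prompt for a password.
```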

P.S.

If you downloaded Hadoop from Apache, you might consider switching to a Hadoop distribution instead: Cloudera's CDH, Hortonworks HDP, or MapR.

If you install Cloudera CDH or Hortonworks HDP you will find the files in /etc/hadoop/conf/.

Published: 2023-10-15 11:56:07
Link: https://www.elefans.com/category/jswz/34/1494269.html