HA (High Availability) Configuration

Updated: 2024-10-09 04:24:08



HA (High Availability) configuration

Refer to the .html page for the reference configuration.
This guide gives an overview of the HDFS High Availability (HA) feature and shows how to configure and manage an HA HDFS cluster using the Quorum Journal Manager (QJM).
(1) Cluster planning

(2) Creating the virtual machines

  1. Configure the network
    Check the subnet IP and edit the interface configuration:
vi /etc/sysconfig/network-scripts/ifcfg-ens33

  2. Set the hostname
hostnamectl set-hostname ha-01

The other two nodes must be configured in their own sessions as well:

hostnamectl set-hostname ha-02
hostnamectl set-hostname ha-03
  3. Passwordless SSH login
    In SecureCRT, enter the following in the send-to-all-sessions window:
ssh-keygen -t rsa
ssh-copy-id ha-01
ssh-copy-id ha-02
ssh-copy-id ha-03


  4. Create the directories
    The downloaded archives go into software; extract hadoop and zookeeper into servers.

mkdir export
cd export
mkdir data
mkdir servers
mkdir software
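The four commands above can be combined into a single step; a minimal sketch using bash brace expansion, run from the same starting directory (on the real machines the tutorial implies these directories end up under /, so adjust the path as needed):

```shell
# create export/ with its data, servers and software subdirectories;
# -p creates missing parents and is a no-op if a directory already exists
mkdir -p export/{data,servers,software}
ls export
```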


  5. Clone two more virtual machines: ha-02 and ha-03

Map each VM's IP address to its hostname:

vi /etc/hosts

Add the following:
192.168.88.151 ha-01
192.168.88.152 ha-02
192.168.88.153 ha-03
Or:

echo '192.168.88.151 ha-01
192.168.88.152 ha-02
192.168.88.153 ha-03' >> /etc/hosts
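Since the addresses follow a fixed pattern, the three entries can also be generated with a loop; a sketch that prints them to stdout (append the output to /etc/hosts yourself):

```shell
# print one hosts entry per node: 192.168.88.151 ha-01 ... 192.168.88.153 ha-03
for i in 1 2 3; do
  printf '192.168.88.15%d ha-0%d\n' "$i" "$i"
done
```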

Distribute /etc/hosts to the other nodes:

for i in `seq 2 3`
do
scp /etc/hosts ha-0$i:/etc/
done

On Windows, edit C:\Windows\System32\drivers\etc\hosts instead and add:
192.168.88.151 ha-01
192.168.88.152 ha-02
192.168.88.153 ha-03

Install Hadoop, the JDK, and ZooKeeper, then configure their environment variables (e.g. appended to /etc/profile, followed by source /etc/profile):

#Hadoop
export HADOOP_HOME=/export/servers/hadoop-3.3.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
#jdk
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.302.b08-0.el8_4.x86_64/jre
export PATH=$PATH:$JAVA_HOME/bin
#zookeeper
export ZOOKEEPER_HOME=/export/servers/apache-zookeeper-3.6.3-bin
export PATH=$PATH:$ZOOKEEPER_HOME/bin

The system's bundled JRE does not include jps, so the JDK development package must be installed:

yum -y  install java-1.8.0-openjdk-devel.x86_64

hadoop

hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <!-- Number of replicas -->
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <!-- NameNode metadata directory -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/export/data/hadoop/name</value>
  </property>
  <!-- DataNode data directory -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/export/data/hadoop/data</value>
  </property>
  <!-- Enable WebHDFS -->
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <!-- Name the nameservice ns1 -->
  <property>
    <name>dfs.nameservices</name>
    <value>ns1</value>
  </property>
  <!-- ns1 has two NameNodes: nn1 and nn2 -->
  <property>
    <name>dfs.ha.namenodes.ns1</name>
    <value>nn1,nn2</value>
  </property>
  <!-- RPC address of nn1 -->
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn1</name>
    <value>ha-01:9000</value>
  </property>
  <!-- HTTP address of nn1 -->
  <property>
    <name>dfs.namenode.http-address.ns1.nn1</name>
    <value>ha-01:50070</value>
  </property>
  <!-- RPC address of nn2 -->
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn2</name>
    <value>ha-02:9000</value>
  </property>
  <!-- HTTP address of nn2 -->
  <property>
    <name>dfs.namenode.http-address.ns1.nn2</name>
    <value>ha-02:50070</value>
  </property>
  <!-- Where the NameNode metadata (edit log) is stored on the JournalNodes -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://ha-01:8485;ha-02:8485;ha-03:8485/ns1</value>
  </property>
  <!-- Local directory for JournalNode edits -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/export/data/hadoop/journaldata</value>
  </property>
  <!-- Enable automatic failover when a NameNode fails -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <!-- Failover proxy provider implementing automatic failover -->
  <property>
    <name>dfs.client.failover.proxy.provider.ns1</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- Fencing methods, one per line -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence
shell(/bin/true)</value>
  </property>
  <!-- Private key for passwordless sshfence login -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <!-- sshfence connection timeout -->
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>
</configuration>

core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <!-- Default filesystem: the HDFS nameservice -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ns1</value>
  </property>
  <!-- Hadoop temporary directory -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/export/data/hadoop/tmp</value>
  </property>
  <!-- ZooKeeper quorum addresses -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>ha-01:2181,ha-02:2181,ha-03:2181</value>
  </property>
  <!-- Proxy-user settings for the root user, so beeline can connect -->
  <property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
  </property>
  <!-- Static web user, granting permission to manage files from the HDFS web UI -->
  <property>
    <name>hadoop.http.staticuser.user</name>
    <value>root</value>
  </property>
</configuration>

hadoop-env.sh

export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root

mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <!-- Run MapReduce on YARN -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

yarn-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <!-- Site specific YARN configuration properties -->
  <!-- Shuffle service for MapReduce -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <!-- Enable ResourceManager HA (disabled by default) -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <!-- Declare the two ResourceManagers -->
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yrc</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>ha-01</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>ha-02</value>
  </property>
  <!-- ZooKeeper cluster addresses -->
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>ha-01:2181,ha-02:2181,ha-03:2181</value>
  </property>
  <!-- Enable automatic recovery so jobs survive a ResourceManager failure (default is false) -->
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <!-- Store ResourceManager state in the ZooKeeper cluster (the default stores it in the FileSystem) -->
  <property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>
  <!-- Minimum memory per container request, in MB -->
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>512</value>
  </property>
  <!-- Maximum memory per container request, in MB -->
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>
  </property>
  <!-- Ratio of container virtual memory to physical memory -->
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>ha-01:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm2</name>
    <value>ha-02:8088</value>
  </property>
</configuration>

workers

ha-01
ha-02
ha-03

ZooKeeper configuration

cd /export/servers/apache-zookeeper-3.6.3-bin/conf
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg
dataDir=/export/data/zookeeper
server.1=ha-01:2888:3888
server.2=ha-02:2888:3888
server.3=ha-03:2888:3888
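Put together, a full zoo.cfg for this cluster might look like the following (tickTime, initLimit, syncLimit, and clientPort are the stock values shipped in zoo_sample.cfg):

```properties
# basic time unit in milliseconds
tickTime=2000
# ticks a follower may take to connect and sync with the leader
initLimit=10
# ticks a follower may fall behind the leader
syncLimit=5
# where snapshots and the myid file live
dataDir=/export/data/zookeeper
# port for client connections
clientPort=2181
# quorum members: server.N=host:peerPort:electionPort
server.1=ha-01:2888:3888
server.2=ha-02:2888:3888
server.3=ha-03:2888:3888
```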


Create a myid file under dataDir: write 1 on ha-01, 2 on ha-02, and so on.
On ha-01:

cd /export/data/zookeeper 
touch myid 
echo '1' >> myid
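Rather than writing each myid by hand, the id can be derived from the hostname; a small sketch assuming the ha-0N naming scheme used here:

```shell
# map a ha-0N hostname to its ZooKeeper id by stripping the "ha-0" prefix
myid_for() { printf '%s\n' "${1##ha-0}"; }

myid_for ha-02   # prints 2
```

On each node this could then be run as `myid_for "$(hostname)" > /export/data/zookeeper/myid` (hypothetical usage; check that the hostname matches the scheme first).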

Distribute ZooKeeper's bin and conf directories to the other nodes:

cd /export/servers/apache-zookeeper-3.6.3-bin
scp -r bin/ ha-02:$PWD
scp -r bin/ ha-03:$PWD
scp -r conf/ ha-02:$PWD
scp -r conf/ ha-03:$PWD

Format the NameNode on ha-01. The ZooKeeper servers and the JournalNodes must already be running on all three nodes:

zkServer.sh start
hdfs --daemon start journalnode

Then, on ha-01:

hdfs namenode -format

Following the cluster plan, copy the generated name directory to ha-02, then format the failover-controller state in ZooKeeper:

scp -r /export/data/hadoop/name ha-02:/export/data/hadoop/
hdfs zkfc -formatZK

If the processes on ha-01 (shown by jps) match the figure, the setup succeeded.


Published: 2024-02-07 00:01:49
Original link: https://www.elefans.com/category/jswz/34/1751944.html