Hadoop HA Cluster Setup
Hadoop HA Cluster Overview
This tutorial walks through setting up a Hadoop HA cluster. A few notes on HA clusters:
- In Hadoop 2.0, HDFS HA typically consists of two NameNodes, one in the active state and one in standby. The Active NameNode serves client requests; the Standby NameNode serves none and only synchronizes the active NameNode's state, so that it can take over quickly if the active fails. Hadoop 2.0 officially offers two HDFS HA solutions, NFS and QJM; here we use the simpler QJM. In that scheme, the active and standby NameNodes share edit-log metadata through a group of JournalNodes, and a write is considered successful once it reaches a majority of the JournalNodes, which is why an odd number of JournalNodes is usually configured (3 JournalNodes tolerate the loss of 1; 5 tolerate 2).
- hadoop-2.2.0 still had a problem: there was only one ResourceManager, a single point of failure. This was fixed starting with hadoop-2.4.1, which allows two ResourceManagers, one Active and one Standby, with state coordinated through ZooKeeper.
- A ZooKeeper cluster is also configured here for ZKFC (DFSZKFailoverController) failover: when the Active NameNode/ResourceManager goes down, the Standby NameNode/ResourceManager is automatically switched to the Active state. Once the cluster is running, the HA state of both layers can be inspected with the commands sketched right after this list.
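These checks use the stock `hdfs haadmin` / `yarn rmadmin` admin commands; the nn1/nn2 and rm1/rm2 service IDs match the ones configured in hdfs-site.xml and yarn-site.xml later in this tutorial:

```bash
# Which NameNode is active? (IDs come from dfs.ha.namenodes.ns1)
bin/hdfs haadmin -getServiceState nn1
bin/hdfs haadmin -getServiceState nn2

# Which ResourceManager is active? (IDs come from yarn.resourcemanager.ha.rm-ids)
bin/yarn rmadmin -getServiceState rm1
bin/yarn rmadmin -getServiceState rm2
```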
Versions
software | version |
---|---|
OS | CentOS-7-x86_64-DVD-1810.iso |
Hadoop | hadoop-2.8.4 |
Zookeeper | zookeeper-3.4.10 |
System Setup
Cluster Role Assignment
node | actor |
---|---|
master1 | NameNode、DFSZKFailoverController(zkfc)、ResourceManager |
master2 | NameNode、DFSZKFailoverController(zkfc)、ResourceManager |
node1 | DataNode、NodeManager、JournalNode、QuorumPeerMain |
node2 | DataNode、NodeManager、JournalNode、QuorumPeerMain |
node3 | DataNode、NodeManager、JournalNode、QuorumPeerMain |
Configure hosts [all]
```
192.168.56.101 node1
192.168.56.102 node2
192.168.56.103 node3
192.168.56.201 master1
192.168.56.202 master2
```
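A quick sanity check that the mappings work on every machine is a ping loop (not part of the setup itself, just verification):

```bash
# Each hostname should resolve and answer; a failure means /etc/hosts is wrong on this machine
for h in master1 master2 node1 node2 node3; do
  ping -c 1 "$h" > /dev/null && echo "$h OK" || echo "$h FAILED"
done
```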
Create the hadoop user [all]
```
useradd hadoop
passwd hadoop

chmod -v u+w /etc/sudoers
vi /etc/sudoers
    hadoop  ALL=(ALL)  ALL
chmod -v u-w /etc/sudoers
```
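`visudo` is the safer way to make this edit, since it validates the syntax before saving. Either way, the grant can be verified as root:

```bash
# List the sudo privileges of the hadoop user
sudo -l -U hadoop
```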
Set the hostname [all]
```
hostnamectl set-hostname $hostname   # one of: master1|master2|node1|node2|node3
systemctl reboot -i
```
Passwordless SSH login [all]
Generate a key on each node, append it to authorized_keys, and pass the file along to the next node; after the last node has appended its key, the complete authorized_keys must end up on every machine.
```
ssh-keygen -t rsa
cat .ssh/id_rsa.pub >> .ssh/authorized_keys
scp .ssh/authorized_keys $next_node:~/.ssh/

sudo vi /etc/ssh/sshd_config
    RSAAuthentication yes
    StrictModes no
    PubkeyAuthentication yes
    AuthorizedKeysFile .ssh/authorized_keys

sudo systemctl restart sshd.service
```
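Once the keys are in place, this loop should print each remote hostname without prompting for a password:

```bash
for h in master1 master2 node1 node2 node3; do
  ssh "$h" hostname
done
```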
Stop and disable the firewall [all]
```
sudo systemctl stop firewalld
sudo firewall-cmd --state
sudo systemctl disable firewalld.service
```
SELinux policy [all]
```
vi /etc/selinux/config
    SELINUX=disabled
```
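Editing /etc/selinux/config only takes effect after a reboot. To drop enforcement for the current session as well, and to verify:

```bash
sudo setenforce 0   # permissive immediately, until the next boot
getenforce          # Permissive now; Disabled after a reboot with the config above
```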
Software Installation
Install the JDK [all]
```
sudo mkdir -p /opt/env
sudo chown -R hadoop:hadoop /opt/env
tar -xvf jdk-8u121-linux-i586.tar.gz

sudo vi /etc/profile
    export JAVA_HOME=/opt/env/jdk1.8.0_121
    export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
    export PATH=$JAVA_HOME/bin:$PATH

source /etc/profile
```
Note that this is the 32-bit (i586) JDK on a 64-bit OS. It works, but it is what produces the "Client VM ... stack guard" warnings in the startup logs later on; a 64-bit (x64) JDK build avoids them.
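A quick check that the environment is set for the hadoop user:

```bash
java -version    # should report java version "1.8.0_121"
echo $JAVA_HOME  # should print /opt/env/jdk1.8.0_121
```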
Install ZooKeeper [node1 node2 node3]
```
sudo mkdir -p /opt/zookeeper
sudo chown -R hadoop:hadoop /opt/zookeeper
tar -zxvf /tmp/zookeeper-3.4.10.tar.gz -C /opt/zookeeper/

vi conf/zoo.cfg
    dataDir=/opt/zookeeper/zookeeper-3.4.10/data
    ......
    server.1=node1:2888:3888
    server.2=node2:2888:3888
    server.3=node3:2888:3888

mkdir data
echo $zk_id > data/myid   # 1, 2, or 3; must match this node's server.N line
```
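Since myid differs per node, here is a small sketch (assuming the node1/node2/node3 hostnames above) that derives it from the hostname instead of setting it by hand:

```bash
# node1 -> 1, node2 -> 2, node3 -> 3
cd /opt/zookeeper/zookeeper-3.4.10
mkdir -p data
echo "${HOSTNAME#node}" > data/myid
cat data/myid
```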
Install Hadoop [all]
```
sudo mkdir -p /opt/hadoop/data
sudo chown -R hadoop:hadoop /opt/hadoop/
tar -zxvf hadoop-2.8.4.tar.gz -C /opt/hadoop/
mkdir journaldata   # under /opt/hadoop, matching dfs.journalnode.edits.dir below
```
Configure Hadoop [all]
The files below all live in $HADOOP_HOME/etc/hadoop (here /opt/hadoop/hadoop-2.8.4/etc/hadoop).
```
vi core-site.xml
```
```xml
<configuration>
  <!-- Set the HDFS nameservice to ns1 -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ns1/</value>
  </property>
  <!-- Hadoop temporary directory -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/data</value>
  </property>
  <!-- ZooKeeper quorum address -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>node1:2181,node2:2181,node3:2181</value>
  </property>
  <property>
    <name>ha.zookeeper.session-timeout.ms</name>
    <value>3000</value>
  </property>
</configuration>
```
```
vi hdfs-site.xml
```
```xml
<configuration>
  <!-- The HDFS nameservice is ns1; must match core-site.xml -->
  <property>
    <name>dfs.nameservices</name>
    <value>ns1</value>
  </property>
  <!-- ns1 has two NameNodes: nn1 and nn2 -->
  <property>
    <name>dfs.ha.namenodes.ns1</name>
    <value>nn1,nn2</value>
  </property>
  <!-- RPC address of nn1 -->
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn1</name>
    <value>master1:9000</value>
  </property>
  <!-- HTTP address of nn1 -->
  <property>
    <name>dfs.namenode.http-address.ns1.nn1</name>
    <value>master1:50070</value>
  </property>
  <!-- RPC address of nn2 -->
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn2</name>
    <value>master2:9000</value>
  </property>
  <!-- HTTP address of nn2 -->
  <property>
    <name>dfs.namenode.http-address.ns1.nn2</name>
    <value>master2:50070</value>
  </property>
  <!-- Where the NameNode metadata is kept on the JournalNodes -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://node1:8485;node2:8485;node3:8485/ns1</value>
  </property>
  <!-- Where each JournalNode stores its data on local disk -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/opt/hadoop/journaldata</value>
  </property>
  <!-- Enable automatic NameNode failover -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <!-- Failover proxy provider used by clients -->
  <property>
    <name>dfs.client.failover.proxy.provider.ns1</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- Fencing methods; multiple methods are separated by newlines, one per line -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>
      sshfence
      shell(/bin/true)
    </value>
  </property>
  <!-- sshfence needs passwordless SSH; this is the id_rsa of the install user (hadoop here) -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
  </property>
  <!-- sshfence connect timeout -->
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>
  <!-- NameNode namespace storage location -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///opt/hadoop/hdfs/name</value>
  </property>
  <!-- DataNode data storage location -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///opt/hadoop/hdfs/data</value>
  </property>
  <!-- Replication factor -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
```
```
vi mapred-site.xml
```
```xml
<configuration>
  <!-- Run MapReduce on YARN -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <!-- MapReduce JobHistory Server address; default port 10020 -->
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>0.0.0.0:10020</value>
  </property>
  <!-- MapReduce JobHistory Server web UI address; default port 19888 -->
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>0.0.0.0:19888</value>
  </property>
</configuration>
```
```
vi yarn-site.xml
```
```xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <!-- Enable ResourceManager HA -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <!-- Enable automatic recovery -->
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <!-- RM cluster id -->
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yrc</value>
  </property>
  <!-- RM ids -->
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <!-- Host of each RM -->
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>master1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>master2</value>
  </property>
  <!-- Per-node RM id: rm1 on master1, rm2 on master2 -->
  <property>
    <name>yarn.resourcemanager.ha.id</name>
    <value>$ResourceManager_Id</value> <!-- rm1 or rm2 -->
    <description>If we want to launch more than one RM in a single node, we need this configuration</description>
  </property>
  <!-- ZooKeeper quorum address -->
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>node1:2181,node2:2181,node3:2181</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>node1:2181,node2:2181,node3:2181</value>
  </property>
  <!-- ZooKeeper address for the RM state store -->
  <property>
    <name>yarn.resourcemanager.zk-state-store.address</name>
    <value>node1:2181,node2:2181,node3:2181</value>
  </property>
  <property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.automatic-failover.zk-base-path</name>
    <value>/yarn-leader-election</value>
    <description>Optional setting. The default value is /yarn-leader-election</description>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```
```
vi hadoop-env.sh
vi mapred-env.sh
vi yarn-env.sh
```
```bash
export JAVA_HOME=/opt/env/jdk1.8.0_121
export CLASS_PATH=$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export HADOOP_HOME=/opt/hadoop/hadoop-2.8.4
export HADOOP_PID_DIR=/opt/hadoop/hadoop-2.8.4/pids
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"
export HADOOP_PREFIX=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HDFS_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```
```
vi masters
    master2

vi slaves
    node1
    node2
    node3
```
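Every node needs the same configuration, with two per-node exceptions: yarn.resourcemanager.ha.id (rm1 on master1, rm2 on master2) and ZooKeeper's myid. One way to distribute the files after editing them on master1 is a simple scp loop; a sketch, assuming the paths used above:

```bash
# Push the Hadoop config directory from master1 to the other nodes
for h in master2 node1 node2 node3; do
  scp -r /opt/hadoop/hadoop-2.8.4/etc/hadoop "$h":/opt/hadoop/hadoop-2.8.4/etc/
done
# Then fix yarn.resourcemanager.ha.id on master2 (rm2)
```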
Starting the Cluster
Start ZooKeeper [node1 node2 node3]
```
./zkServer.sh start
```
```
[hadoop@node1 bin]$ ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: leader
[hadoop@node2 bin]$ ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
[hadoop@node3 bin]$ ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
```
Format the NameNode [master1]
```
bin/hdfs namenode -format
```
```
[hadoop@master2 ~]$ ll /opt/hadoop/data/dfs/name/current/
total 16
-rw-rw-r--. 1 hadoop hadoop 323 Jul 11 01:17 fsimage_0000000000000000000
-rw-rw-r--. 1 hadoop hadoop  62 Jul 11 01:17 fsimage_0000000000000000000.md5
-rw-rw-r--. 1 hadoop hadoop   2 Jul 11 01:17 seen_txid
-rw-rw-r--. 1 hadoop hadoop 219 Jul 11 01:17 VERSION
```
Format ZKFC [master1 master2]
```
bin/hdfs zkfc -formatZK
```
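`hdfs zkfc -formatZK` creates the failover znode in ZooKeeper. As a sanity check, it can be inspected from any ZooKeeper node (the /hadoop-ha/ns1 path follows from the nameservice configured above):

```
./zkCli.sh -server node1:2181
[zk: node1:2181(CONNECTED) 0] ls /hadoop-ha
[ns1]
```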
Start NameNode/ResourceManager [master1]
```
sbin/start-dfs.sh
```
```
[hadoop@master1 sbin]$ sh start-dfs.sh
which: no start-dfs.sh in (/opt/env/jdk1.8.0_121/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/hadoop/.local/bin:/home/hadoop/bin)
Java HotSpot(TM) Client VM warning: You have loaded library /opt/hadoop/hadoop-2.8.4/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
19/07/25 01:00:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master1 master2]
master2: starting namenode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-namenode-master2.out
master1: starting namenode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-namenode-master1.out
node2: starting datanode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-datanode-node2.out
node1: starting datanode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-datanode-node1.out
node3: starting datanode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-datanode-node3.out
Starting journal nodes [node1 node2 node3]
node2: starting journalnode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-journalnode-node2.out
node3: starting journalnode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-journalnode-node3.out
node1: starting journalnode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-journalnode-node1.out
Java HotSpot(TM) Client VM warning: You have loaded library /opt/hadoop/hadoop-2.8.4/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
19/07/25 01:01:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting ZK Failover Controllers on NN hosts [master1 master2]
master2: starting zkfc, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-zkfc-master2.out
master1: starting zkfc, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-zkfc-master1.out
[hadoop@master1 sbin]$ jps
5552 NameNode
5940 Jps
5869 DFSZKFailoverController
```
```
sbin/start-yarn.sh
```
```
[hadoop@master1 sbin]$ sh start-yarn.sh
starting yarn daemons
which: no start-yarn.sh in (/opt/env/jdk1.8.0_121/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/hadoop/.local/bin:/home/hadoop/bin)
starting resourcemanager, logging to /opt/hadoop/hadoop-2.8.4/logs/yarn-hadoop-resourcemanager-master1.out
node2: starting nodemanager, logging to /opt/hadoop/hadoop-2.8.4/logs/yarn-hadoop-nodemanager-node2.out
node3: starting nodemanager, logging to /opt/hadoop/hadoop-2.8.4/logs/yarn-hadoop-nodemanager-node3.out
node1: starting nodemanager, logging to /opt/hadoop/hadoop-2.8.4/logs/yarn-hadoop-nodemanager-node1.out
[hadoop@master1 sbin]$ jps
5552 NameNode
5994 ResourceManager
6092 Jps
5869 DFSZKFailoverController
```
At this point, on the DataNodes:
```
[hadoop@node1 hadoop-2.8.4]$ jps
3808 QuorumPeerMain
5062 Jps
4506 DataNode
4620 JournalNode
4732 NodeManager
```
At this point, on master2:
```
[hadoop@master2 sbin]$ jps
6092 Jps
5869 DFSZKFailoverController
```
Bootstrap the Standby NameNode [master2]
```
bin/hdfs namenode -bootstrapStandby
```
Start the Standby NameNode [master2]
```
sbin/hadoop-daemon.sh start namenode
```
Start the Standby ResourceManager [master2]
```
sbin/yarn-daemon.sh start resourcemanager
```
```
[hadoop@master2 hadoop-2.8.4]$ jps
4233 Jps
3885 DFSZKFailoverController
4189 ResourceManager
4030 NameNode
```
ResourceManager state
```
[hadoop@master2 hadoop-2.8.4]$ bin/yarn rmadmin -getServiceState rm2
Java HotSpot(TM) Client VM warning: You have loaded library /opt/hadoop/hadoop-2.8.4/lib/native/libhadoop.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
19/07/25 01:48:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
active
[hadoop@master2 hadoop-2.8.4]$ bin/yarn rmadmin -getServiceState rm1
Java HotSpot(TM) Client VM warning: You have loaded library /opt/hadoop/hadoop-2.8.4/lib/native/libhadoop.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
19/07/25 01:48:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
standby
```
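To exercise the failover itself, kill the Active daemon and watch the Standby take over; a sketch (in the output above rm2/master2 is active, but this can be reversed in your run, and <pid> stands for the pid reported by jps):

```bash
# On the host running the Active ResourceManager
jps              # note the ResourceManager pid
kill -9 <pid>    # simulate a crash

# From either master: the surviving RM should now report active
bin/yarn rmadmin -getServiceState rm1

# The same test works for HDFS: kill the Active NameNode, then check
bin/hdfs haadmin -getServiceState nn1
bin/hdfs haadmin -getServiceState nn2
```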
enjoy