Setting Up a Fully Distributed Hadoop 2.7.3 + Spark 2.1.0 Cluster
I. Edit the hosts file
On the master node (the first host), run the following at the command line:
vim /etc/hosts
My setup uses three cloud hosts.
Append the following to the existing file:
ip1 master worker0 namenode
ip2 worker1 datanode1
ip3 worker2 datanode2
Here each ipN stands for an available cluster IP: ip1 is the master node, while ip2 and ip3 are the worker (slave) nodes.
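Once the file is saved, a quick check from the master confirms that the names resolve as intended (worker1 and worker2 are the names defined above):
ping -c 1 worker1    #should resolve to ip2
ping -c 1 worker2    #should resolve to ip3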
II. SSH mutual trust (passwordless login)
Note that I am configuring the root user here, so the home directory below is /root.
If you are configuring a user named xxxx instead, the home directory would be /home/xxxx/.
#Run the following commands on the master node:
ssh-keygen -t rsa -P ''    #press Enter through the prompts until the key pair is generated
scp /root/.ssh/id_rsa.pub root@worker1:/root/.ssh/id_rsa.pub.master    #copy id_rsa.pub from master to the worker host and rename it id_rsa.pub.master
scp /root/.ssh/id_rsa.pub root@worker2:/root/.ssh/id_rsa.pub.master    #same as above; from here on workerN stands for worker1 and worker2
scp /etc/hosts root@workerN:/etc/hosts    #unify the hosts file so the hosts can recognize each other by name
#Run the following commands on the corresponding hosts:
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys    #on the master host
cat /root/.ssh/id_rsa.pub.master >> /root/.ssh/authorized_keys    #on the workerN hosts
After this, the master host can log in to the other hosts without a password, so running the startup scripts on master and using scp no longer requires typing one.
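To confirm the trust relationship actually works, try logging in to each worker from master; if no password prompt appears, the setup is correct:
ssh worker1 hostname    #should print worker1 without asking for a password
ssh worker2 hostname    #should print worker2 without asking for a password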
III. Install the base environment (Java and Scala)
1. Java 1.8 setup:
Configure the Java environment on master:
#Download the JDK 1.8 rpm package
wget --no-check-certificate --no-cookies --header "Cookie: Oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u112-b15/jdk-8u112-linux-x64.rpm
rpm -ivh jdk-8u112-linux-x64.rpm
#Add JAVA_HOME
vim /etc/profile
#Add the following lines:
#Java home
export JAVA_HOME=/usr/java/jdk1.8.0_112/
#Reload the configuration:
source /etc/profile    #a reboot works too
Configure the Java environment on the workerN hosts:
#Copy the package with scp
scp jdk-8u112-linux-x64.rpm root@workerN:/root
#The remaining steps are the same as on the master node
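After the steps above have been repeated on every node, a quick check confirms the JDK and JAVA_HOME are in place:
java -version      #should report java version "1.8.0_112"
echo $JAVA_HOME    #should print /usr/java/jdk1.8.0_112/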
2. Scala 2.12.2 setup:
Master node:
#Download the Scala package:
wget -O "scala-2.12.2.rpm" "https://downloads.lightbend.com/scala/2.12.2/scala-2.12.2.rpm"
#Install the rpm package:
rpm -ivh scala-2.12.2.rpm
#Add SCALA_HOME
vim /etc/profile
#Add the following:
#Scala Home
export SCALA_HOME=/usr/share/scala
#Reload the configuration
source /etc/profile
WorkerN nodes:
#Copy the package with scp
scp scala-2.12.2.rpm root@workerN:/root
#The remaining steps are the same as on the master node
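Likewise, verify Scala on each node:
scala -version    #should report version 2.12.2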
IV. Hadoop 2.7.3 fully distributed setup
MASTER node:
1. Download the binary package:
wget http://www-eu.apache.org/dist/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
2. Extract it and move it to the target directory
My habit is to keep software under /opt:
tar -xvf hadoop-2.7.3.tar.gz
mv hadoop-2.7.3 /opt
3. Edit the configuration files:
(1)/etc/profile:
Add the following:
#hadoop environment
export HADOOP_HOME=/opt/hadoop-2.7.3/
export PATH="$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH"
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
(2)$HADOOP_HOME/etc/hadoop/hadoop-env.sh
Set JAVA_HOME as follows:
export JAVA_HOME=/usr/java/jdk1.8.0_112/
(3)$HADOOP_HOME/etc/hadoop/slaves
worker1
worker2
(4)$HADOOP_HOME/etc/hadoop/core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop-2.7.3/tmp</value>
    </property>
</configuration>
(5)$HADOOP_HOME/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:50090</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/opt/hadoop-2.7.3/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/opt/hadoop-2.7.3/hdfs/data</value>
    </property>
</configuration>
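The tmp, name, and data directories referenced in core-site.xml and hdfs-site.xml do not exist in a fresh extract. Hadoop will normally create them on its own, but creating them up front (paths taken from the configs above) makes permission problems easier to spot:
mkdir -p /opt/hadoop-2.7.3/tmp
mkdir -p /opt/hadoop-2.7.3/hdfs/name    #used by the NameNode on master
mkdir -p /opt/hadoop-2.7.3/hdfs/data    #used by the DataNodes on the workers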
(6)$HADOOP_HOME/etc/hadoop/mapred-site.xml
Copy the template to create the xml file:
cp mapred-site.xml.template mapred-site.xml
Contents:
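A minimal configuration for a cluster like this one, which runs MapReduce on YARN, would be the following (the single framework setting below is an assumption based on the YARN setup in the next step; add job history addresses if you need them):
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>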
(7)$HADOOP_HOME/etc/hadoop/yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
</configuration>
At this point the Hadoop setup on the master node is complete.
Before starting it, we need to format the NameNode:
hadoop namenode -format
WorkerN nodes:
(1) Copy the hadoop directory from the master node to the workers:
scp -r /opt/hadoop-2.7.3 root@workerN:/opt    #note: replace N with 1 or 2 here
(2) Edit /etc/profile:
Same procedure as on the master node.
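With the workers in place, the cluster can be started from master. The start scripts live in $HADOOP_HOME/sbin, which is already on the PATH from the profile settings above; a typical start-and-check sequence (ports are the Hadoop 2.7.x defaults) looks like:
start-dfs.sh     #starts the NameNode/SecondaryNameNode on master and the DataNodes on the workers
start-yarn.sh    #starts the ResourceManager on master and the NodeManagers on the workers
jps              #master should show NameNode, SecondaryNameNode, ResourceManager; workers show DataNode, NodeManager
#Web UIs: HDFS at http://master:50070 , YARN at http://master:8088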
V. Spark 2.1.0 fully distributed setup:
MASTER node:
1. Download the file:
wget -O "spark-2.1.0-bin-hadoop2.7.tgz" "http://d3kbcqa49mib13.cloudfront.net/spark-2.1.0-bin-hadoop2.7.tgz"
2. Extract it and move it to the target directory:
tar -xvf spark-2.1.0-bin-hadoop2.7.tgz
mv spark-2.1.0-bin-hadoop2.7 /opt
3. Edit the configuration files:
(1) /etc/profile
#Spark environment
export SPARK_HOME=/opt/spark-2.1.0-bin-hadoop2.7/
export PATH="$SPARK_HOME/bin:$PATH"
(2)$SPARK_HOME/conf/spark-env.sh
cp spark-env.sh.template spark-env.sh
#Add the following configuration:
export SCALA_HOME=/usr/share/scala
export JAVA_HOME=/usr/java/jdk1.8.0_112/
export SPARK_MASTER_IP=master
export SPARK_WORKER_MEMORY=1g
export HADOOP_CONF_DIR=/opt/hadoop-2.7.3/etc/hadoop
(3)$SPARK_HOME/conf/slaves
cp slaves.template slaves
Add the following:
master
worker1
worker2
WorkerN nodes:
Copy the configured spark directory to the workerN nodes:
scp -r /opt/spark-2.1.0-bin-hadoop2.7 root@workerN:/opt
Edit /etc/profile and add the Spark settings, just as on the MASTER node.
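Finally, start the standalone Spark cluster from master and check it the same way; the ports below are the Spark defaults (7077 for the master, 8080 for its web UI):
$SPARK_HOME/sbin/start-all.sh    #use the full path so it is not confused with Hadoop's start-all.sh
jps                              #master should now also show Master (plus a Worker, since master is listed in slaves); workers show Worker
#Spark master web UI: http://master:8080 ; the master URL for submitting jobs is spark://master:7077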