Spark Cluster Installation and Deployment
1. Prerequisites
1.1 Create three virtual machines, configure their networking, and set up mutual trust between them (passwordless SSH; see the sketch at the end of these prerequisites).
1.2 The Java 1.8 environment is already configured.
1.3 The Hadoop cluster has already been set up.
1.4 Download the Scala and Spark packages:
https://www.scala-lang.org/download/
http://spark.apache.org/downloads.html
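For step 1.1, a minimal sketch of setting up passwordless SSH, assuming the root account and the hdp01/hdp02/hdp03 hostnames used later in this guide (run on every node):
ssh-keygen -t rsa                     # generate a key pair, accepting the defaults
ssh-copy-id root@hdp01                # push the public key to each node, including the local one
ssh-copy-id root@hdp02
ssh-copy-id root@hdp03
ssh hdp02                             # verify: should log in without prompting for a password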
2. Install Scala
2.1 Extract the package: tar -zxvf scala-2.13.1.tgz
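The SCALA_HOME set in the next step points at /usr/local/scala, so move the extracted directory there first (a sketch; adjust the path if you keep Scala somewhere else):
mkdir -p /usr/local/scala
mv scala-2.13.1 /usr/local/scala/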
2.2 Configure environment variables
vim /etc/profile                      # edit the environment variables
export SCALA_HOME=/usr/local/scala/scala-2.13.1
export PATH=$PATH:$SCALA_HOME/bin
source /etc/profile                   # make the changes take effect immediately
3. Verify the installation
scala                                 # enter the Scala interactive shell
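You can also verify the installation without entering the interactive shell; the scala launcher can print its version or evaluate a one-line expression (the exact version string depends on the package you installed):
scala -version                        # prints the installed Scala version
scala -e 'println(1 + 1)'             # evaluates the expression and prints 2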
4. Install Spark
4.1 Extract the package: tar -zxvf spark-2.4.3-bin-hadoop2.7.tgz
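As with Scala, the SPARK_HOME used in the following steps assumes the directory sits under /usr/hadoop/spark, so move it there (a sketch; adjust if you prefer a different layout):
mkdir -p /usr/hadoop/spark
mv spark-2.4.3-bin-hadoop2.7 /usr/hadoop/spark/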
4.2 Configure environment variables
vim /etc/profile                      # edit the environment variables
export SPARK_HOME=/usr/hadoop/spark/spark-2.4.3-bin-hadoop2.7
export PATH=$PATH:$SPARK_HOME/bin
source /etc/profile                   # make the changes take effect immediately
4.3 Configure spark-env.sh
cd /usr/hadoop/spark/spark-2.4.3-bin-hadoop2.7/conf
vi spark-env.sh
# Add the following content:
export JAVA_HOME=/usr/local/java/jdk1.8.0_161
export SCALA_HOME=/usr/local/scala/scala-2.13.1
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SPARK_MASTER_HOST=node01
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_CORES=2
export SPARK_HOME=/usr/hadoop/spark/spark-2.4.3-bin-hadoop2.7
export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/bin/hadoop classpath)   # Hadoop classpath, resolved via the hadoop binary in Hadoop's bin directory
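Note that the standard Spark binary distribution ships only templates in the conf directory; spark-env.sh, as well as the slaves file edited in the next step, is usually created by copying the corresponding template first (a sketch, run inside the conf directory):
cp spark-env.sh.template spark-env.sh
cp slaves.template slaves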
4.4 Configure slaves
hdp01
hdp02
hdp03
# hostnames of the Spark worker nodes, one per line
4.5 Copy to the other nodes
After Spark has been installed and configured on the master node, copy the entire Spark directory to the other nodes and update the environment variables in /etc/profile on each of them; a sketch with scp follows below.
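A sketch of distributing the installation with scp, assuming the directory layout above and that the other two nodes are reachable as hdp02 and hdp03 (substitute your own hostnames):
scp -r /usr/hadoop/spark root@hdp02:/usr/hadoop/
scp -r /usr/hadoop/spark root@hdp03:/usr/hadoop/
scp -r /usr/local/scala root@hdp02:/usr/local/
scp -r /usr/local/scala root@hdp03:/usr/local/
# then append the same export lines to /etc/profile on each node and run: source /etc/profile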
5. Test Spark
- Start the Hadoop cluster on the master node
- Start Spark on the master node
cd /usr/hadoop/spark/spark-2.4.3-bin-hadoop2.7
sbin/start-all.sh
Open a browser and go to 192.168.xx.xx:8080 (the master node's IP address). If the page lists the active Workers, the installation, configuration, and startup were successful.
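Beyond the web UI, two quick command-line checks (a sketch; the examples jar name below assumes the default Spark 2.4.3 build for Scala 2.11, so adjust it to whatever sits in examples/jars of your distribution, and note that 7077 is the default standalone master port):
jps                                   # the master should list a Master process, each worker a Worker process
cd /usr/hadoop/spark/spark-2.4.3-bin-hadoop2.7
bin/spark-submit --master spark://node01:7077 \
  --class org.apache.spark.examples.SparkPi \
  examples/jars/spark-examples_2.11-2.4.3.jar 100    # should end with a line like "Pi is roughly 3.14..."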