PySpark demo
Spark can be used for data analysis in Python without relying on MLlib. The setup is as follows; the code below has been verified to run.
In Eclipse, register the Python interpreter under Window > Preferences > PyDev > Interpreters > Python Interpreter.
1. Environment variables
SPARK_HOME=C:\spark-2.3.1-bin-hadoop2.6
SPARK_LOCAL_IP=<local machine IP or localhost>
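If the variables cannot be set at the OS level or in the Eclipse run configuration, they can also be exported from the script itself before findspark is initialized. This is only a sketch under that assumption, reusing the values listed above:

import os

# Assumption: setting the variables in-process (before findspark.init()) is
# sufficient for a local run; the values mirror the environment variables above.
os.environ["SPARK_HOME"] = r"C:\spark-2.3.1-bin-hadoop2.6"
os.environ["SPARK_LOCAL_IP"] = "localhost"  # or the machine's own IP

import findspark
findspark.init()  # picks up SPARK_HOME from the environment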
2. Libraries
Add the following PySpark paths under the interpreter's Libraries:
C:\spark-2.3.1-bin-hadoop2.6\python
C:\spark-2.3.1-bin-hadoop2.6\python\lib\*
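For reference, the same entries can also be put on sys.path programmatically (findspark.init() in the code below does roughly this as well). A rough sketch, assuming the zip archives under python\lib (pyspark.zip and the py4j zip, whose exact name depends on the Spark release) are all that is needed:

import glob
import os
import sys

spark_home = r"C:\spark-2.3.1-bin-hadoop2.6"

# Add the PySpark Python sources and the bundled zip archives to sys.path
sys.path.append(os.path.join(spark_home, "python"))
for archive in glob.glob(os.path.join(spark_home, "python", "lib", "*.zip")):
    sys.path.append(archive)  # pyspark.zip, py4j-*-src.zip, ...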
3. Code
# coding=UTF-8
import findspark
findspark.init()

from pyspark import SparkContext


def show(x):
    print(x)


sc = SparkContext("local", "First App")

# Read the input file and cache it, so the file is not re-read from disk
# when the second action below runs
lines = sc.textFile("../../../words").cache()

# Split each line into words, then map every word to a (word, 1) pair
words = lines.flatMap(lambda line: line.split(" "), True)
pairWords = words.map(lambda word: (word, 1), True)

# Sum the counts per word, producing 3 partitions
result = pairWords.reduceByKey(lambda v1, v2: v1 + v2, 3)

# Print every (word, count) pair and write the result to disk
result.foreach(lambda x: show(x))
result.saveAsTextFile("../../../wc-result2")
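For a quick check in the driver, the pairs can also be collected and printed there. This is a sketch appended to the end of the script above; it assumes the input file is small enough for the result to fit in driver memory:

# Appended after saveAsTextFile(...) above. collect() pulls the (word, count)
# pairs back to the driver -- assumption: the result is small enough for that.
for word, count in result.collect():
    print(word, count)

sc.stop()  # release the local SparkContext when finished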
Running the code: it can be run directly from Eclipse; to submit it to a cluster, use:
$SPARK_HOME/bin/spark-submit firstapp.py
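To run against an actual cluster rather than local mode, pass a master URL to spark-submit; the URL below is a placeholder for a standalone master (on YARN you would pass --master yarn instead):

# "spark://master-host:7077" is a hypothetical standalone master URL --
# replace it with your own cluster's address.
$SPARK_HOME/bin/spark-submit --master spark://master-host:7077 firstapp.py

Note that the script above hard-codes SparkContext("local", "First App"); a master set in code typically takes precedence over the --master flag, so for a real cluster submit you would drop the "local" argument from the script and let spark-submit supply the master.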