Reference:
The Spark docs highly recommend switching to Dataset, which has better performance than RDD.
The most important task: create a SparkContext.
It connects to the Spark "cluster": local, standalone, YARN, or Mesos.
RDDs and broadcast variables are created on the cluster through the SparkContext.
We need to create a SparkConf object before creating the SparkContext.
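A minimal sketch of this setup (the app name and master URL below are placeholders, not values from these notes):

from pyspark import SparkConf, SparkContext

# Build the configuration: appName shows up in the cluster UI,
# "local[2]" runs Spark locally with 2 worker threads
conf = SparkConf().setAppName("my-app").setMaster("local[2]")
sc = SparkContext(conf=conf)

# Broadcast variables are also created through the SparkContext
broadcast_var = sc.broadcast([1, 2, 3])
print(broadcast_var.value)  # [1, 2, 3]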
In the bin directory of the Spark installation, start the shell:
./pyspark
In the PySpark shell, a special interpreter-aware SparkContext is already created for you, in a variable called sc.
appName: the name of your application, shown in the cluster UI.
./pyspark --help    # view the available options
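A quick sanity check once the shell is up (the master string depends on how the shell was launched; local[*] is just the common default):

>>> sc.appName
'PySparkShell'
>>> sc.master
'local[*]'
>>> sc.parallelize(range(10)).count()
10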
Ways to create an RDD
Parallelized Collections
data = [1, 2, 3, 4, 5]
distData = sc.parallelize(data)
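Once parallelized, the collection can be operated on in parallel; for example, summing the elements:

# Sum the elements of the distributed collection
distData.reduce(lambda a, b: a + b)  # 15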
External Datasets
distFile = sc.textFile("file:///root/app/test/hello.txt")
If using a path on the local filesystem, the file must also be accessible at the same path on worker nodes: either copy the file to all workers or use a network-mounted shared file system.
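The resulting distFile is an RDD of lines, so the usual transformations apply; for example, computing the total length of all lines:

# Sum the lengths of all lines in the file
distFile.map(lambda s: len(s)).reduce(lambda a, b: a + b)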