Study Notes: Spark Installation and Configuration

Today I installed and configured Spark, mainly following teacher Lin Ziyu's installation tutorial. Downloading sbt and using sbt to package programs took a lot of time (possibly because of a poor network connection).

I. Installation

1. Download your chosen version from the official Spark website. If Hadoop is already installed, choose the second package option, as shown in the figure below.

[Figure: the Spark download page, showing the package type options]

2. Extract and install the archive; you also need to modify Spark's configuration file spark-env.sh:

cd /usr/local/spark
cp ./conf/spark-env.sh.template ./conf/spark-env.sh

Edit the spark-env.sh file (vim ./conf/spark-env.sh) and add the following line at the top of the file; it puts Hadoop's classes on Spark's classpath, which is what lets Spark read from and write to HDFS:

export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/bin/hadoop classpath)

3. Run one of the example programs that ships with Spark to verify whether the installation was successful:

cd /usr/local/spark
bin/run-example SparkPi

Executing this prints a large amount of log output, which makes the result hard to find; you can filter it with the grep command (the 2>&1 in the command redirects all of the output to stdout; otherwise, because of how the logs are emitted, part of it would still be printed straight to the screen):

bin/run-example SparkPi 2>&1 | grep "Pi is"

Validation results:

[Figure: terminal output showing a line of the form "Pi is roughly 3.14..."]
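For reference, the SparkPi example estimates π by Monte Carlo sampling: it scatters random points over a square and counts how many fall inside the inscribed circle. The sketch below shows the same idea in a few lines of Scala (this is not Spark's bundled code; the sample count is arbitrary, and it can be pasted into spark-shell, introduced in the next section, where the SparkContext is predefined as sc):

// Estimate Pi: the fraction of random points in the unit square
// that land inside the unit circle approaches Pi/4.
val n = 100000  // arbitrary number of samples
val inside = sc.parallelize(1 to n).filter { _ =>
  val x = scala.util.Random.nextDouble() * 2 - 1
  val y = scala.util.Random.nextDouble() * 2 - 1
  x * x + y * y <= 1  // true if the point falls inside the circle
}.count()
println(s"Pi is roughly ${4.0 * inside / n}")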

II. Running Code in the Spark Shell

You can start the spark-shell environment with the following commands:

cd /usr/local/spark

bin/spark-shell

After starting spark-shell, you will be taken to the "scala>" command prompt, as shown in the figure.

[Figure: spark-shell startup output ending at the scala> prompt]

You can use the ":quit" command to exit the Spark Shell, or simply press "Ctrl + D" to exit.
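Before quitting, you can run a quick sanity check at the prompt: the SparkContext is predefined as sc. A minimal sketch, assuming the default /usr/local/spark install used above (README.md ships with the Spark distribution):

val textFile = sc.textFile("file:///usr/local/spark/README.md")  // load a local file as an RDD of lines
textFile.count()  // number of lines in the file
textFile.first()  // the first line of the file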
