Small problems encountered in Spark development

The Scala version does not match the Spark version

Error output:

Exception in thread "main" java.lang.NoSuchMethodError: scala.Product.$init$(Lscala/Product;)V
    at org.apache.spark.SparkConf$DeprecatedConfig.<init>(SparkConf.scala:799)
    at org.apache.spark.SparkConf$.<init>(SparkConf.scala:596)
    at org.apache.spark.SparkConf$.<clinit>(SparkConf.scala)
    at org.apache.spark.SparkConf.set(SparkConf.scala:94)
    at org.apache.spark.SparkConf.set(SparkConf.scala:83)
    at org.apache.spark.SparkConf.setAppName(SparkConf.scala:120)
    at WordCount$.main(WordCount.scala:6)
    at WordCount.main(WordCount.scala)

Process finished with exit code 1

Failure reason

The program failed because the Scala version and the Spark version are incompatible.

I was using Scala 2.11.8 with Spark 3.2.0, and the two do not match. Spark 3.2.0 requires Scala 2.12 or 2.13, so the project must be compiled with one of those versions to use Spark 3.2.0. Conversely, to stay on Scala 2.11.x you would have to drop back to Spark 2.4.x or earlier, since Spark 3.x no longer supports Scala 2.11.

Version compatibility:

| Spark | Scala |
| --- | --- |
| 2.4.x | 2.11 / 2.12 |
| 3.0.x – 3.1.x | 2.12 |
| 3.2.x | 2.12 / 2.13 |

Solution:

Upgrade the Scala version to one that matches your Spark version (Scala 2.12 or 2.13 for Spark 3.2.0).
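As a sketch, assuming an sbt project, the fix is to align `scalaVersion` with the Scala suffix of the Spark artifact in `build.sbt` (the exact version numbers below are illustrative):

```scala
// build.sbt -- illustrative versions; the key point is that scalaVersion
// must match the Scala binary version Spark was built for.
scalaVersion := "2.12.15"

// %% appends the Scala binary version automatically,
// so this resolves to the spark-core_2.12 artifact.
libraryDependencies += "org.apache.spark" %% "spark-core" % "3.2.0"
```

Using `%%` instead of hard-coding `spark-core_2.11` is what keeps the dependency in sync when the Scala version changes.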

Missing Hadoop executable winutils.exe

 java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.

  1. Download the Hadoop binaries and extract them to a local directory (for example, C:\Hadoop\).

  2. Obtain a winutils.exe build that matches your Hadoop version and copy it into Hadoop's bin directory. (Apache does not ship Windows binaries; the commonly used builds are community-maintained and published on GitHub.)

  3. Set the HADOOP_HOME environment variable to the Hadoop directory, add %HADOOP_HOME%\bin to PATH, and restart the terminal (and your IDE) so the new variables take effect.
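If changing system environment variables is inconvenient, a common workaround is to set `hadoop.home.dir` programmatically before any Spark or Hadoop classes initialize. This is only a sketch: `C:/Hadoop` is an assumed install path and must contain `bin\winutils.exe`.

```scala
object WinutilsFix {
  def main(args: Array[String]): Unit = {
    // Must run before SparkConf/SparkContext are created, because Hadoop
    // reads hadoop.home.dir during class initialization.
    // "C:/Hadoop" is an assumption -- point it at your actual Hadoop directory.
    System.setProperty("hadoop.home.dir", "C:/Hadoop")

    // ... create SparkConf / SparkContext here as usual ...
  }
}
```

Note this only affects the current JVM, so it is handy for local runs from the IDE; for anything shared, setting HADOOP_HOME properly is cleaner.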

The input file path does not exist (or is written incorrectly)

Exception in thread "main" org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/D:/workspace/spark/src/main/Data/data1.txt
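One way to fail fast with a clearer message is to check that the file exists before handing the path to `sc.textFile`. This is a sketch: `requireInput` is a hypothetical helper, and the path in the commented usage is the one from the error above.

```scala
import java.nio.file.{Files, Paths}

// Hypothetical helper: fail early with a readable message if the input is missing,
// instead of letting Spark throw InvalidInputException at job time.
def requireInput(path: String): String = {
  require(Files.exists(Paths.get(path)), s"Input path does not exist: $path")
  path
}

// Usage (path taken from the error message above):
// val lines = sc.textFile(requireInput("D:/workspace/spark/src/main/Data/data1.txt"))
```

Also double-check whether the path is relative: when run from the IDE, relative paths resolve against the working directory, which may not be the module root you expect.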


Origin blog.csdn.net/qq_52128187/article/details/131167469