Errors encountered while compiling the Spark source code on Windows

1. First, the cause may be corrupted jar files of various kinds. Deleting the offending jars from the local Maven repository (so they are re-downloaded) usually resolves this.

2. Other errors:

Error:(21, 8) Symbol 'term org.apache.spark.annotation' is missing from the classpath.
This symbol is required by ' <none>'.
Make sure that term annotation is in your classpath and check for conflicting dependencies with `-Ylog-classpath`.
A full rebuild may help if 'package.class' was compiled against an incompatible version of org.apache.spark.
import org.apache.spark.sql.SparkSession

Fix: File -> Project Structure -> Modules -> spark-examples_2.11 -> click Dependencies -> click the green + button on the right -> Add JARs or Directories -> navigate to spark/common/tags/target and select the corresponding jar.

Reference: https://stackoverflow.com/questions/42327777/spark-sql-errors
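As an alternative to adding the jar through the IDE dialog, the spark-tags module (which provides `org.apache.spark.annotation`) can be declared as a dependency in the example module's pom.xml. This is only a sketch: the `${spark.version}` property is a placeholder assumption and should match the version of the checkout you are building.

```xml
<!-- Hypothetical pom.xml fragment: spark-tags contains org.apache.spark.annotation -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-tags_2.11</artifactId>
  <!-- placeholder: set this to the version your source tree builds -->
  <version>${spark.version}</version>
</dependency>
```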

Error:(111, 5) Symbol 'term org.apache.hadoop' is missing from the classpath.
This symbol is required by 'type org.apache.spark.rdd.RDD._$12'.
Make sure that term hadoop is in your classpath and check for conflicting dependencies with `-Ylog-classpath`.
A full rebuild may help if 'RDD.class' was compiled against an incompatible version of org.apache.
    fileRDD.saveAsTextFile(dfsFilename)
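As with the annotation error above, the missing `org.apache.hadoop` symbol means the Hadoop client jars are not on the module's classpath; the same jar-adding steps apply, or, as a hedged alternative, a pom.xml dependency can be declared. The `${hadoop.version}` property here is an assumption; match it to the Hadoop version your Spark checkout builds against.

```xml
<!-- Hypothetical pom.xml fragment: hadoop-client provides the org.apache.hadoop classes -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <!-- placeholder: use the Hadoop version from your Spark build profile -->
  <version>${hadoop.version}</version>
</dependency>
```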

Error:
IntelliJ compile failures: "is already defined as" -- this happened because I had marked the scala directory as a source directory a second time.
See the second answer at https://stackoverflow.com/questions/16885489/intellij-compile-failures-is-already-defined-as/16888290

Reposted from blog.csdn.net/innersense/article/details/82185110