1. Download and install Scala

Environment variables:

    %SCALA_HOME%    D:\ENV\scala-2.10.7
    %PATH%           %SCALA_HOME%\bin

After configuration, running `scala` on the command line should print the version information.

2. Download IDEA and install the Scala plugin

The Eclipse plugin's Scala support is not very good, so IDEA is recommended.
Download link: https://www.jetbrains.com/idea/download/#section=windows

Plugin installation steps:

File –> Settings –> Plugins –> search for "scala" –> Install –> Restart

3. Create a new project

Scala's sbt is similar to Maven; Maven alone is enough to manage a Scala project.

File –> New –> Project –> maven
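A Maven-managed Scala project also needs the Scala compiler wired into the build. A minimal sketch using the widely used `scala-maven-plugin` (the coordinates and version here are my assumption, not from the original post):

```xml
<build>
    <plugins>
        <plugin>
            <groupId>net.alchim31.maven</groupId>
            <artifactId>scala-maven-plugin</artifactId>
            <version>3.4.6</version>
            <executions>
                <execution>
                    <goals>
                        <goal>compile</goal>
                        <goal>testCompile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
```

With this in place, `mvn compile` picks up `.scala` sources alongside any Java sources.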

4. Configure project settings

File –> Project Structure –> Global Libraries

4.1 Set Eclipse-style keyboard shortcuts

In the Settings dialog, choose Keymap in the left navigation, then select Eclipse from the Keymaps dropdown.

5. Hello world

New –> Scala Class –> Object

   Rough Scala-to-Java correspondence:
    object --> static class
    class  --> class
    trait  --> interface
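The mapping above can be made concrete in a few lines of Scala; `Greeter`, `Person`, and `HelloWorld` are illustrative names of my own, not from the post:

```scala
// trait ≈ Java interface (but it may also carry default implementations)
trait Greeter {
  def name: String
  def greeting: String = s"Hello, $name" // like a Java default method
}

// class ≈ Java class
class Person(val name: String) extends Greeter

// object ≈ a singleton holding "static"-like members; main lives here
object HelloWorld {
  def main(args: Array[String]): Unit =
    println(new Person("world").greeting) // prints "Hello, world"
}
```

Unlike Java, Scala has no `static` keyword; a companion `object` plays that role.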

6. Set HADOOP_HOME

Download link: https://github.com/srccodes/hadoop-common-2.2.0-bin

    %HADOOP_HOME%   D:\ENV\hadoop-common-2.2.0-bin
    %PATH%           %HADOOP_HOME%\bin
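On Windows, Spark shells out to `%HADOOP_HOME%\bin\winutils.exe` from this download. A small sketch to verify the file sits where the variables above point (`CheckWinutils` is my own name, not part of Spark or Hadoop):

```scala
import java.nio.file.{Files, Path, Paths}

object CheckWinutils {
  // Expected location of winutils.exe under a given HADOOP_HOME
  def winutilsPath(hadoopHome: String): Path =
    Paths.get(hadoopHome, "bin", "winutils.exe")

  def main(args: Array[String]): Unit = {
    val exe = winutilsPath(sys.env.getOrElse("HADOOP_HOME", ""))
    if (Files.exists(exe)) println(s"winutils found at $exe")
    else println(s"winutils.exe missing at $exe; check HADOOP_HOME")
  }
}
```

If the file is missing, local Spark jobs typically fail at startup with a "winutils" error.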

7. Run a Spark word count locally

    <dependencies>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
    </dependencies>
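The `${scala.version}` and `${spark.version}` placeholders assume a `<properties>` block elsewhere in the pom. The versions below are illustrative only; the Scala minor version must match the `_2.11` artifact suffix:

```xml
<properties>
    <scala.version>2.11.8</scala.version>
    <spark.version>2.2.0</spark.version>
</properties>
```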
    import org.apache.spark.{SparkConf, SparkContext}

    object SparkWordCount {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("SparkWordCount").setMaster("local[4]")
        val sc = new SparkContext(conf)
        val line = sc.textFile("doc/md/环境搭建.md")
        line.flatMap(_.split(" "))   // split each line into words
          .map((_, 1))               // pair each word with a count of 1
          .reduceByKey(_ + _)        // sum the counts per word
          .collect()
          .foreach(println)
        sc.stop()
      }
    }
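The same `flatMap`/`map`/`reduceByKey` pipeline can be sketched with plain Scala collections, no Spark required, to see what each step does; `LocalWordCount` and the sample input are my own illustration:

```scala
object LocalWordCount {
  // The Spark job above applies the same steps, only distributed.
  def count(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.split(" "))                          // split lines into words
      .map((_, 1))                                    // pair each word with 1
      .groupBy(_._1)                                  // group by word (≈ reduceByKey's shuffle)
      .map { case (w, ps) => (w, ps.map(_._2).sum) }  // sum the 1s per word

  def main(args: Array[String]): Unit =
    count(Seq("a b a", "b c")).foreach(println)
}
```

Running it prints `(a,2)`, `(b,2)`, `(c,1)` in some order.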


Reposted from www.cnblogs.com/qwangxiao/p/9114292.html