Advanced spark-shell Operations

1. System environment

Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.2.0
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_71)
hadoop-2.6.0

2. Dataset: Taobao transaction records crawled by the author. A sample is shown below.

[hadoop@localhost conf]$ $HADOOP_INSTALL/bin/hadoop dfs -tail /user/spark/quikstart/DealRecord.csv
8 09:05:07,口味:香辣,http://item.taobao.com/item.htm?id=36296603821
四**a (匿名),¥1.8,10,2014-12-08 08:54:27,口味:香辣,http://item.taobao.com/item.htm?id=36296603821
t**2 (匿名),¥1.79,10,2014-12-08 08:52:26,口味:香辣,http://item.taobao.com/item.htm?id=36296603821
浅**8 (匿名),¥1.74,6,2014-12-08 08:19:46,口味:香辣,http://item.taobao.com/item.htm?id=36296603821
t**3 (匿名),¥1.8,11,2014-12-08 08:16:15,口味:香辣,http://item.taobao.com/item.htm?id=36296603821
z**5 (匿名),¥1.71,5,2014-12-08 07:59:42,口味:香辣,http://item.taobao.com/item.htm?id=36296603821
t**d (匿名),¥1.84,22,2014-12-08 04:12:43,口味:香辣,http://item.taobao.com/item.htm?id=36296603821
z**u (匿名),¥1.77,8,2014-12-08 02:41:12,口味:香辣,http://item.taobao.com/item.htm?id=36296603821
z**j (匿名),¥1.59,3,2014-12-08 01:45:56,口味:香辣,http://item.taobao.com/item.htm?id=36296603821
w**y (匿名),¥1.8,10,2014-12-08 01:42:36,口味:香辣,http://item.taobao.com/item.htm?id=36296603821


3. Some advanced tasks on this dataset, written in plain Scala rather than Shark.

3.1 Task one: for each item, compute the total quantity sold on the crawl date 2014-12-08. Note that each ID corresponds to one item. The desired result form is (ID, total quantity).

scala> val dealRecord = sc.textFile("/user/spark/quikstart/DealRecord.csv")
15/02/03 10:59:38 INFO storage.MemoryStore: ensureFreeSpace(81443) called with curMem=195552, maxMem=280248975
15/02/03 10:59:38 INFO storage.MemoryStore: Block broadcast_4 stored as values in memory (estimated size 79.5 KB, free 267.0 MB)
15/02/03 10:59:39 INFO storage.MemoryStore: ensureFreeSpace(31329) called with curMem=276995, maxMem=280248975
15/02/03 10:59:39 INFO storage.MemoryStore: Block broadcast_4_piece0 stored as bytes in memory (estimated size 30.6 KB, free 267.0 MB)
15/02/03 10:59:39 INFO storage.BlockManagerInfo: Added broadcast_4_piece0 in memory on localhost:59773 (size: 30.6 KB, free: 267.2 MB)
15/02/03 10:59:39 INFO storage.BlockManagerMaster: Updated info of block broadcast_4_piece0
15/02/03 10:59:39 INFO spark.SparkContext: Created broadcast 4 from textFile at <console>:12
dealRecord: org.apache.spark.rdd.RDD[String] = /user/spark/quikstart/DealRecord.csv MappedRDD[4] at textFile at <console>:12

scala> val dealRecord20141208 = dealRecord.filter(line => line.contains("2014-12-08") && line.contains("http"))
dealRecord20141208: org.apache.spark.rdd.RDD[String] = FilteredRDD[5] at filter at <console>:14
scala> val dealQuantityPairs = dealRecord20141208.map(line => (line.split(",")(5).split("id=")(1), line.split(",")(2).toInt))

scala> val dealQuantitySum = dealQuantityPairs.reduceByKey(_ + _)
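As a sanity check, the per-line parsing in the map step and the per-key summation done by `reduceByKey(_+_)` can be reproduced with plain Scala collections, no Spark required. The object and method names below are invented for illustration; `sumByKey` only mimics what `reduceByKey` computes per key, not its distributed execution.

```scala
// Standalone sketch of the map + reduceByKey logic, run on two sample records.
object DealSumSketch {
  // Extract (itemId, quantity) from one CSV line -- the map step.
  // Column 2 is the quantity; column 5 is the item URL, whose "id=" query
  // parameter identifies the item.
  def parse(line: String): (String, Int) = {
    val fields = line.split(",")
    (fields(5).split("id=")(1), fields(2).toInt)
  }

  // Sum quantities per item id -- what reduceByKey(_ + _) does for each key.
  def sumByKey(pairs: Seq[(String, Int)]): Map[String, Int] =
    pairs.groupBy(_._1).map { case (id, ps) => (id, ps.map(_._2).sum) }

  def main(args: Array[String]): Unit = {
    val lines = Seq(
      "w**y (匿名),¥1.8,10,2014-12-08 01:42:36,口味:香辣,http://item.taobao.com/item.htm?id=36296603821",
      "z**j (匿名),¥1.59,3,2014-12-08 01:45:56,口味:香辣,http://item.taobao.com/item.htm?id=36296603821"
    )
    println(sumByKey(lines.map(parse)))  // Map(36296603821 -> 13)
  }
}
```

Note that parsing would fail on a truncated record such as the first line of the `-tail` output above, which is why the filter step also requires the line to contain "http".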



4. Tip: Scala and Java interoperation

scala> import java.util.Date
import java.util.Date

scala> val now = new Date
now: java.util.Date = Tue Feb 03 15:40:28 CST 2015

scala> import java.text.SimpleDateFormat
import java.text.SimpleDateFormat

scala> val pattern = "yyyy-mm-dd HH:MM:ss"
pattern: String = yyyy-mm-dd HH:MM:ss

scala> val sformat = new SimpleDateFormat(pattern)
sformat: java.text.SimpleDateFormat = java.text.SimpleDateFormat@59bf79a0

scala> sformat.format(now)
res28: String = 2015-40-03 15:02:28

scala> val fnow = sformat.format(now)
fnow: String = 2015-40-03 15:02:28
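The formatted output above is not what one might expect, because SimpleDateFormat pattern letters are case-sensitive: `MM` means month and `mm` means minute. The session's pattern `"yyyy-mm-dd HH:MM:ss"` swaps them, which is why the month slot shows 40 (the minutes) and the minute slot shows 02 (February). A corrected sketch is below; the fixed UTC zone and epoch date are chosen only to make the output deterministic for this example.

```scala
import java.text.SimpleDateFormat
import java.util.{Date, TimeZone}

object FormatDemo {
  def main(args: Array[String]): Unit = {
    // MM = month, mm = minute (case-sensitive pattern letters).
    val correct = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss")
    correct.setTimeZone(TimeZone.getTimeZone("UTC")) // fix the zone so output is reproducible
    println(correct.format(new Date(0L)))            // → 1970-01-01 00:00:00
  }
}
```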




Reposted from blog.csdn.net/cleverbegin/article/details/43446765