Spark classic cases: computing averages, finding max and min values, finding top values, processing unstructured data, and counting daily new users

1. Requirements analysis
Compute each student's average score from the input files. Each line of an input file contains a student's name and the corresponding score; when there are multiple subjects, each subject is stored in its own file.
The output should contain one line per student with two separated fields: the student's name followed by the average score.

2. Original data
1)math:
张三,88
李四,99
王五,66
赵六,77

2)china:
张三,78
李四,89
王五,96
赵六,67

3)english:
张三,80
李四,82
王五,84
赵六,86

Sample output:
张三,82
李四,90
王五,82
赵六,76.67

package ClassicCase

import org.apache.spark.{SparkConf, SparkContext}

/**
  * Business scenario: computing averages
  * Created by YJ on 2017/2/8.
  */


object case4 {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local").setAppName("reduce")
    val sc = new SparkContext(conf)
    sc.setLogLevel("ERROR")
    val four = sc.textFile("hdfs://192.168.109.130:8020//user/flume/ClassicCase/case4/*", 3)

    val a = four.filter(_.trim.length > 0) // drop blank lines
      .map(line => // split each line into a (name, score) pair
        (line.trim().split(",")(0), line.trim.split(",")(1).toInt)
      )
      .groupByKey() // group scores by name, e.g. (张三, CompactBuffer(78, 80, 88))
      .map(x => {
      var num = 0.0
      var sum = 0
      for (i <- x._2) {
        // iterate over this student's scores
        sum = sum + i
        num = num + 1
      }
      val avg = sum / num
      val fm = f"$avg%1.2f" // format the average to two decimal places
      println("fm:"+fm)
      (x._1, fm)
    }
    ).collect.foreach(x => println(x._1+"\t"+x._2))

    // Formatting aside: printf returns Unit, so use .format to get the formatted String back
    val floatVar = 12.456
    val intVar = 2000
    val stringVar = "Learning resources!"
    val fs = "The float variable is %1.2f, the int variable is %d, the string is %s"
      .format(floatVar, intVar, stringVar)
    println(fs)


  }

}

Output
fm:90.00
fm:82.00
fm:82.00
fm:76.67
李四 90.00
王五 82.00
张三 82.00
赵六 76.67
The float variable is 12.46, the int variable is 2000, the string is Learning resources!
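
For comparison, the same averages can be computed without groupByKey by carrying a (sum, count) accumulator through aggregateByKey, so individual scores are combined per partition before the shuffle. The sketch below is illustrative only: it assumes the same input path and (name, score) line format as case4, and the object name Case4Aggregate is made up for this example.

package ClassicCase

import org.apache.spark.{SparkConf, SparkContext}

object Case4Aggregate {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setMaster("local").setAppName("avgAggregate"))
    sc.setLogLevel("ERROR")
    val pairs = sc.textFile("hdfs://192.168.109.130:8020//user/flume/ClassicCase/case4/*", 3)
      .filter(_.trim.nonEmpty)
      .map(_.trim.split(","))
      .map(a => (a(0), a(1).toInt))
    pairs
      .aggregateByKey((0, 0))(
        (acc, score) => (acc._1 + score, acc._2 + 1), // fold one score into (sum, count)
        (a, b) => (a._1 + b._1, a._2 + b._2))         // merge partial (sum, count) pairs
      .mapValues { case (sum, cnt) => f"${sum.toDouble / cnt}%1.2f" }
      .collect()
      .foreach(x => println(x._1 + "\t" + x._2))
    sc.stop()
  }
}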

Spark classic case: finding the maximum and minimum values

Data preparation
eightteen_a.txt
102
10
39
109
200
11
3
90
28

eightteen_b.txt
5
2
30
838
10005

package ClassicCase

import org.apache.spark.{SparkConf, SparkContext}

/**
  * Business scenario: finding max and min values
  * Created by YJ on 2017/2/8.
  */


object case5 {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local").setAppName("reduce")
    val sc = new SparkContext(conf)
    sc.setLogLevel("ERROR")
    val fifth = sc.textFile("hdfs://192.168.109.130:8020//user/flume/ClassicCase/case5/*", 2)
    val res = fifth.filter(_.trim().length > 0)
      .map(line => ("key", line.trim.toInt)) // a single shared key puts every number into one group
      .groupByKey()
      .map(x => {
        var min = Integer.MAX_VALUE
        var max = Integer.MIN_VALUE
        for (num <- x._2) { // one pass over the group, tracking both extremes
          if (num > max) max = num
          if (num < min) min = num
        }
        (max, min)
      })
    res.collect.foreach(x => {
      println("max\t" + x._1)
      println("min\t" + x._2)
    })
  }

}

Method 2

package com.neusoft

import org.apache.spark.{SparkConf, SparkContext}

/**
  * Created by Administrator on 2019/3/4.
  */
object FileMaxMin {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("FileOrder").setMaster("local")

    val sc = new SparkContext(sparkConf)

    val rdd = sc.textFile("demo4/*")
    // everything goes under one key -> ("key", List(102, 10, 39, ...))
    rdd.filter(_.trim.length > 0).map(x => ("key", x.trim.toInt)).groupByKey().map(x => {
      println("max:" + x._2.max)
      println("min:" + x._2.min)
    }).collect()

  }
}

Output
max 10005
min 2
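
Since there is only one group, groupByKey is not strictly needed here: the extremes can be taken directly on an RDD of integers with the built-in max() and min() actions. A minimal sketch, assuming the same local demo4/* input as method 2; the object name FileMaxMinDirect is made up for this example.

package com.neusoft

import org.apache.spark.{SparkConf, SparkContext}

object FileMaxMinDirect {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("FileMaxMinDirect").setMaster("local"))
    // parse each non-empty line as an Int and use the built-in max/min actions
    val nums = sc.textFile("demo4/*").filter(_.trim.nonEmpty).map(_.trim.toInt)
    println("max\t" + nums.max())
    println("min\t" + nums.min())
    sc.stop()
  }
}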

Finding top values

Requirements analysis
orderid,userid,payment,productid
Find the top N values of payment.
a.txt
1,9819,100,121
2,8918,2000,111
3,2813,1234,22
4,9100,10,1101
5,3210,490,111
6,1298,28,1211
7,1010,281,90
8,1818,9000,20

b.txt
100,3333,10,100
101,9321,1000,293
102,3881,701,20
103,6791,910,30
104,8888,11,39

Scala code

package ClassicCase

import org.apache.spark.{SparkConf, SparkContext}

/**
  * Business scenario: finding top N values
  * Created by YJ on 2017/2/8.
  */


object case6 {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local").setAppName("reduce")
    val sc = new SparkContext(conf)
    sc.setLogLevel("ERROR")
    val six = sc.textFile("hdfs://192.168.109.130:8020//user/flume/ClassicCase/case6/*", 2)
    var idx = 0;
    val res = six.filter(x => (x.trim().length > 0) && (x.split(",").length == 4))
      .map(_.split(",")(2))
      .map(x => (x.toInt, ""))
      .sortByKey(false) // false -> sort in descending order
      .map(x => x._1).take(5)
      .foreach(x => {
        idx = idx + 1
        println(idx + "\t" + x)
      })
  }

}

Method 2

package com.neusoft

import org.apache.spark.{SparkConf, SparkContext}

/**
  * Created by Administrator on 2019/3/4.
  */
object FileTopN {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("FileOrder").setMaster("local")

    val sc = new SparkContext(sparkConf)

    val rdd = sc.textFile("demo5/*")

    var idx = 0
    rdd.map(x => x.split(",")(2)).map(x => (x.toInt, "")).sortByKey(false).take(5).map(x => x._1).foreach(x => {
      idx+=1
      println(idx + " " + x)
    })
  }
}

Output:
1 9000
2 2000
3 1234
4 1000
5 910
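
Both versions sort the entire dataset just to keep five values. RDD.top(n) collects only the n largest elements, which is usually enough here. A minimal sketch, assuming the same demo5/* input and four-column CSV layout; the object name FileTopNSketch is made up for this example.

package com.neusoft

import org.apache.spark.{SparkConf, SparkContext}

object FileTopNSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("FileTopNSketch").setMaster("local"))
    val top5 = sc.textFile("demo5/*")
      .filter(line => line.trim.nonEmpty && line.split(",").length == 4)
      .map(_.split(",")(2).toInt) // the payment column
      .top(5)                     // the 5 largest values, already in descending order
    top5.zipWithIndex.foreach { case (payment, i) => println((i + 1) + "\t" + payment) }
    sc.stop()
  }
}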

Spark classic case: processing unstructured data

Requirement: compute URL access statistics from Tomcat logs; the specific URLs are shown below.
Requirement: count GET and POST URL accesses separately.
Result format: access method, URL, access count.
Test data set:
196.168.2.1 - - [03/Jul/2014:23:36:38 +0800] "GET /course/detail/3.htm HTTP/1.0" 200 38435 0.038
182.131.89.195 - - [03/Jul/2014:23:37:43 +0800] "GET /html/notes/20140617/888.html HTTP/1.0" 301 - 0.000
196.168.2.1 - - [03/Jul/2014:23:38:27 +0800] "POST /service/notes/addViewTimes_23.htm HTTP/1.0" 200 2 0.003
196.168.2.1 - - [03/Jul/2014:23:39:03 +0800] "GET /html/notes/20140617/779.html HTTP/1.0" 200 69539 0.046
196.168.2.1 - - [03/Jul/2014:23:43:00 +0800] "GET /html/notes/20140318/24.html HTTP/1.0" 200 67171 0.049
196.168.2.1 - - [03/Jul/2014:23:43:59 +0800] "POST /service/notes/addViewTimes_779.htm HTTP/1.0" 200 1 0.003
196.168.2.1 - - [03/Jul/2014:23:45:51 +0800] "GET /html/notes/20140617/888.html HTTP/1.0" 200 70044 0.060
196.168.2.1 - - [03/Jul/2014:23:46:17 +0800] "GET /course/list/73.htm HTTP/1.0" 200 12125 0.010
196.168.2.1 - - [03/Jul/2014:23:46:58 +0800] "GET /html/notes/20140609/542.html HTTP/1.0" 200 94971 0.077
196.168.2.1 - - [03/Jul/2014:23:48:31 +0800] "POST /service/notes/addViewTimes_24.htm HTTP/1.0" 200 2 0.003
196.168.2.1 - - [03/Jul/2014:23:48:34 +0800] "POST /service/notes/addViewTimes_542.htm HTTP/1.0" 200 2 0.003
196.168.2.1 - - [03/Jul/2014:23:49:31 +0800] "GET /notes/index-top-3.htm HTTP/1.0" 200 53494 0.041
196.168.2.1 - - [03/Jul/2014:23:50:55 +0800] "GET /html/notes/20140609/544.html HTTP/1.0" 200 183694 0.076
196.168.2.1 - - [03/Jul/2014:23:53:32 +0800] "POST /service/notes/addViewTimes_544.htm HTTP/1.0" 200 2 0.004
196.168.2.1 - - [03/Jul/2014:23:54:53 +0800] "GET /service/notes/addViewTimes_900.htm HTTP/1.0" 200 151770 0.054
196.168.2.1 - - [03/Jul/2014:23:57:42 +0800] "GET /html/notes/20140620/872.html HTTP/1.0" 200 52373 0.034
196.168.2.1 - - [03/Jul/2014:23:58:17 +0800] "POST /service/notes/addViewTimes_900.htm HTTP/1.0" 200 2 0.003
196.168.2.1 - - [03/Jul/2014:23:58:51 +0800] "GET /html/notes/20140617/888.html HTTP/1.0" 200 70044 0.057
186.76.76.76 - - [03/Jul/2014:23:48:34 +0800] "POST /service/notes/addViewTimes_542.htm HTTP/1.0" 200 2 0.003
186.76.76.76 - - [03/Jul/2014:23:46:17 +0800] "GET /course/list/73.htm HTTP/1.0" 200 12125 0.010
8.8.8.8 - - [03/Jul/2014:23:46:58 +0800] "GET /html/notes/20140609/542.html HTTP/1.0" 200 94971 0.077

Because Tomcat logs are irregular, the data must be filtered and cleaned first.

package ClassicCase

import org.apache.spark.{SparkConf, SparkContext}

/**
  * Business scenario: processing unstructured data
  * Created by YJ on 2017/2/8.
  */


object case7 {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local").setAppName("reduce")
    val sc = new SparkContext(conf)
    sc.setLogLevel("ERROR")
    val data = sc.textFile("hdfs://192.168.109.130:8020//user/flume/ClassicCase/case7/*")

    // filter out empty lines and lines that contain neither GET nor POST
    val filtered = data.filter(_.length() > 0).filter(line => (line.indexOf("GET") > 0 || line.indexOf("POST") > 0))

    // map each request to a (method + URL, 1) pair
    val res = filtered.map(line => {
      if (line.indexOf("GET") > 0) {
        // extract the substring from GET up to HTTP/1.0
        (line.substring(line.indexOf("GET"), line.indexOf("HTTP/1.0")).trim, 1)
      } else {
        // extract the substring from POST up to HTTP/1.0
        (line.substring(line.indexOf("POST"), line.indexOf("HTTP/1.0")).trim, 1)
      } // then sum the counts per key with reduceByKey
    }).reduceByKey(_ + _)

    // trigger the action and print the results
    res.collect().foreach(println)
  }
}

Output
(POST /service/notes/addViewTimes_779.htm,1)
(GET /service/notes/addViewTimes_900.htm,1)
(POST /service/notes/addViewTimes_900.htm,1)
(GET /notes/index-top-3.htm,1)
(GET /html/notes/20140318/24.html,1)
(GET /html/notes/20140609/544.html,1)
(POST /service/notes/addViewTimes_542.htm,2)
(POST /service/notes/addViewTimes_544.htm,1)
(GET /html/notes/20140609/542.html,2)
(POST /service/notes/addViewTimes_23.htm,1)
(GET /html/notes/20140617/888.html,3)
(POST /service/notes/addViewTimes_24.htm,1)
(GET /course/detail/3.htm,1)
(GET /course/list/73.htm,2)
(GET /html/notes/20140617/779.html,1)
(GET /html/notes/20140620/872.html,1)
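
The substring/indexOf approach depends on GET, POST and HTTP/1.0 appearing at the expected positions in every line. A regular expression makes the extraction somewhat more robust. The sketch below is a variant of case7, not the original code: it assumes the same input path and that the request is wrapped in straight double quotes; the object name Case7Regex is made up for this example.

package ClassicCase

import org.apache.spark.{SparkConf, SparkContext}

object Case7Regex {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setMaster("local").setAppName("logRegex"))
    sc.setLogLevel("ERROR")
    val data = sc.textFile("hdfs://192.168.109.130:8020//user/flume/ClassicCase/case7/*")
    // capture the method (GET or POST) and the requested URL from the quoted request
    val request = """"(GET|POST)\s+(\S+)\s+HTTP""".r
    val counts = data.flatMap { line =>
      request.findFirstMatchIn(line).map(m => (m.group(1) + " " + m.group(2), 1))
    }.reduceByKey(_ + _)
    counts.collect().foreach(println)
    sc.stop()
  }
}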

Counting daily new users

1. Original data

2017-01-01  a
2017-01-01  b
2017-01-01  c
2017-01-02  a
2017-01-02  b
2017-01-02  d
2017-01-03  b
2017-01-03  e
2017-01-03  f

From this data, the expected result is:
2017-01-01: three new users (a, b, c)
2017-01-02: one new user (d)
2017-01-03: two new users (e, f)

2. Approach

2.1 Build an inverted index of the original data

The result looks like this:

User  Column 1     Column 2     Column 3
a     2017-01-01   2017-01-02
b     2017-01-01   2017-01-02   2017-01-03
c     2017-01-01
d     2017-01-02
e     2017-01-03
f     2017-01-03

2.2 Count how many times each date appears in column 1

Now we only look at column 1, which holds each user's earliest date: the number of times a date appears in column 1 is exactly the number of new users on that date.

3. Code

package com.dkl.leanring.spark.test

import org.apache.spark.sql.SparkSession

object NewUVDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("NewUVDemo").master("local").getOrCreate()
    val rdd1 = spark.sparkContext.parallelize(
      Array(
        ("2017-01-01", "a"), ("2017-01-01", "b"), ("2017-01-01", "c"),
        ("2017-01-02", "a"), ("2017-01-02", "b"), ("2017-01-02", "d"),
        ("2017-01-03", "b"), ("2017-01-03", "e"), ("2017-01-03", "f")))
    // invert each record to (user, date)
    val rdd2 = rdd1.map(kv => (kv._2, kv._1))
    // group the dates by user
    val rdd3 = rdd2.groupByKey()
    // keep each user's earliest date
    val rdd4 = rdd3.map(kv => (kv._2.min, 1))
    rdd4.countByKey().foreach(println)
  }
}

Result:

(2017-01-03,2)
(2017-01-02,1)
(2017-01-01,3)
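
Since a SparkSession is already available, the same logic can also be expressed with the DataFrame API: take the minimum date per user, then count users per first date. A minimal sketch, assuming the same spark session and rdd1 as in NewUVDemo above; the names newUsers, first_date and new_users are made up for this example.

    // sketch: lines that could be appended inside NewUVDemo.main after rdd1 is defined
    import org.apache.spark.sql.functions.{count, min}
    import spark.implicits._

    val newUsers = rdd1.toDF("date", "user")
      .groupBy("user").agg(min("date").as("first_date"))        // each user's first appearance
      .groupBy("first_date").agg(count("user").as("new_users")) // users whose first date is that day
      .orderBy("first_date")
    newUsers.show()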

Reposted from blog.csdn.net/zhang__rong/article/details/88355394