Spark (11) - Top N (course 20)

Note: whenever you need to change the number of fields in the data, a map operation is generally the tool. A map can turn each original line into a new key-value pair, e.g. file.map(line => (line.toInt, line)) // the original line becomes the value, and a transformed version of the line becomes the key

1. Basic top N

Principle:

For each line read from the file, add an extra field to serve as the key; this is a map operation (mapToPair in Java). Then sort with sortByKey (covered in detail in the previous section), and finally use take to grab the first N elements.
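To make the map -> sortByKey -> take shape concrete before the Spark code, here is a minimal plain-Java sketch with streams (no Spark; the hard-coded list is a stand-in for the RDD of lines, and the class name is hypothetical):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class TopNSketch {
    public static void main(String[] args) {
        // Stand-in for the lines read from topNBasic.txt
        List<String> lines = Arrays.asList("36", "2", "10", "9", "8", "9", "6", "7", "11", "20");

        List<Integer> top5 = lines.stream()
                .map(Integer::valueOf)             // add a sortable key, like mapToPair
                .sorted(Comparator.reverseOrder()) // like sortByKey(false)
                .limit(5)                          // like take(5)
                .collect(Collectors.toList());

        System.out.println(top5); // [36, 20, 11, 10, 9]
    }
}
```

The stream pipeline mirrors the three RDD operations one-to-one; only the distributed execution is missing.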

Hands-on: sort --> take the first N items

Data to sort, topNBasic.txt:

36
2
10
9
8
9
6
7
11
20
package cn.whbing.spark.SparkApps.cores;

import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.PairFunction;

import scala.Tuple2;

public class TopNBasic {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf();
        conf.setAppName("TopNBasic").setMaster("local");

        JavaSparkContext sc = new JavaSparkContext(conf);
        sc.setLogLevel("OFF");
        JavaRDD<String> lines = sc.textFile("D://javaTools//EclipseWork2//SparkApps//topNBasic.txt");

        //Convert each line into a pair (Integer, line) so it can be sorted by key
        JavaPairRDD<Integer, String> pairs = lines.mapToPair(new PairFunction<String, Integer, String>() {

            @Override
            public Tuple2<Integer, String> call(String line) throws Exception {
                return new Tuple2<Integer, String>(Integer.valueOf(line), line);
            }
        });
        JavaPairRDD<Integer, String>  sortedPairs = pairs.sortByKey(false);//Integer keys already have a natural ordering; false means descending
        List<Tuple2<Integer, String>> topN = sortedPairs.take(5);
        for(Tuple2<Integer, String> perTopN : topN){
            System.out.println(perTopN._2);
        }       
    }
}

Result:

36
20
11
10
9

2. Grouped top N

Grouped sorting: given data of several different types, find the top N within each type.

First group by type, then sort within each group.
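Before the Spark version, the same two steps can be sketched with plain Java streams (the hard-coded list stands in for the first few lines of topNGroup.txt; this is just the idea, not the Spark API):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupTopNSketch {
    public static void main(String[] args) {
        // Stand-in for a few lines of topNGroup.txt
        List<String> lines = Arrays.asList(
                "java 12", "spark 30", "hadoop 10", "java 16", "spark 302");

        // Step 1: group by the first field; Step 2: sort each group's values descending, keep 2
        Map<String, List<Integer>> top2 = lines.stream()
                .map(line -> line.split(" "))
                .collect(Collectors.groupingBy(
                        parts -> parts[0],
                        Collectors.mapping(
                                parts -> Integer.valueOf(parts[1]),
                                Collectors.collectingAndThen(
                                        Collectors.toList(),
                                        vals -> vals.stream()
                                                .sorted(Comparator.reverseOrder())
                                                .limit(2)
                                                .collect(Collectors.toList())))));

        System.out.println(top2);
    }
}
```

groupingBy plays the role of groupByKey, and the inner sorted/limit pipeline plays the role of the per-group top-N selection in the Spark code below.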

Hands-on: group --> obtain the top N by comparison

Data to sort, topNGroup.txt:

java 12
spark 30
hadoop 10
java 16
spark 302
hadoop 101
java 19
spark 210
hadoop 108
hadoop 88
java 123
spark 85
hadoop 95
java 76
java 800
spark 456
hadoop 45
package cn.whbing.spark.SparkApps.cores;

import java.util.Arrays;
import java.util.Iterator;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.api.java.function.VoidFunction;

import scala.Tuple2;

/*
 * Grouped top N:
 * Step 1: group the data by key
 * Step 2: take the top N within each group
 */
public class TopNGroup {

    public static void main(String[] args) {
        SparkConf conf = new SparkConf();
        conf.setAppName("TopNGroup").setMaster("local");

        JavaSparkContext sc = new JavaSparkContext(conf);
        sc.setLogLevel("WARN");
        JavaRDD<String> lines = sc.textFile("D://javaTools//EclipseWork2//SparkApps//topNGroup.txt");
        JavaPairRDD<String, Integer> pairs = lines.mapToPair(new PairFunction<String, String,Integer>() {
            //String: the content of each line read in

            @Override
            public Tuple2<String, Integer> call(String line) throws Exception {
                String[] splited = line.split(" ");
                return new Tuple2<String, Integer>(splited[0], Integer.valueOf(splited[1]));
            }           
        });
        JavaPairRDD<String,Iterable<Integer>> grouped = pairs.groupByKey();
        //groupByKey keeps the original key; the value becomes the collection of all original values for that key

        //Select the top N from each group's values
        JavaPairRDD<String,Iterable<Integer>> top5 = grouped.mapToPair(
                new PairFunction<Tuple2<String,Iterable<Integer>>, String, Iterable<Integer>>() {
                    //The input is a Tuple2<String,Iterable<Integer>>
                    //The return type is the same; only the iterable has been reduced to the top 5
                    @Override
                    public Tuple2<String, Iterable<Integer>> call(Tuple2<String, Iterable<Integer>> t)
                            throws Exception {
                        String key = t._1;
                        Integer[] value = new Integer[5];//holds the top 5; slots stay null if a group has fewer than 5 values
                        Iterator<Integer> it = t._2.iterator();
                        while(it.hasNext()){
                            Integer top = it.next();
                            for(int i=0;i<5;i++){
                                if(value[i]==null){
                                    value[i]=top;
                                    break;
                                }else if( top > value[i]){
                                    //shift position i and everything after it one slot to the right
                                    for(int j=4;j>i;j--){
                                        value[j]=value[j-1];
                                    }
                                    value[i] = top;
                                    break;
                                }
                            }
                        }
                        return new Tuple2<String, Iterable<Integer>>(key, Arrays.asList(value));
                    }
        });
        //Print the results
        top5.foreach(new VoidFunction<Tuple2<String,Iterable<Integer>>>() {

            @Override
            public void call(Tuple2<String, Iterable<Integer>> t) throws Exception {
                System.out.println("group by key and the top5,key:"+t._1);
                System.out.println(t._2);
            }
        });

    }
}

Result:

group by key and the top5,key:spark
[456, 302, 210, 85, 30]
group by key and the top5,key:hadoop
[108, 101, 95, 88, 45]
group by key and the top5,key:java
[800, 123, 76, 19, 16]
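The fixed-size insertion used inside call above is worth isolating so it can be tested on its own. This is a sketch of the same logic as a standalone method (the class and method names are mine, not from the original):

```java
import java.util.Arrays;

public class TopNInsert {
    // Keeps the n largest values seen so far, in descending order.
    // Unfilled slots remain null when fewer than n values arrive.
    static Integer[] topN(Iterable<Integer> values, int n) {
        Integer[] top = new Integer[n];
        for (Integer v : values) {
            for (int i = 0; i < n; i++) {
                if (top[i] == null) {
                    top[i] = v;              // empty slot: just place the value
                    break;
                } else if (v > top[i]) {
                    for (int j = n - 1; j > i; j--) {
                        top[j] = top[j - 1]; // shift smaller values one slot right
                    }
                    top[i] = v;
                    break;
                }
            }
        }
        return top;
    }

    public static void main(String[] args) {
        Integer[] result = topN(Arrays.asList(30, 302, 210, 85, 456, 12), 5);
        System.out.println(Arrays.toString(result)); // [456, 302, 210, 85, 30]
    }
}
```

Each new value is compared from the largest slot down; values that don't beat any of the n slots are dropped without touching the array, so the work per element is O(n) with no extra allocation.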

Reposted from blog.csdn.net/answer100answer/article/details/78776132