Hive in Practice: TopN Metrics Analysis for a Video Site (Part 1)

Copyright notice: please credit the source when reposting: https://blog.csdn.net/qq_15973399/article/details/89192753

1. Project Analysis

This is Part 1 of the 谷粒影音 (Guli Video) Hive hands-on project.

This part prepares the environment and the data; Part 2 works through the analysis for each of the requirements.

1.1 Data Structure

(1) Video table

Field         Note                Description
video id      unique video id     an 11-character string
uploader      video uploader      username (String) of the user who uploaded the video
age           video age           integer number of days the video has been on the platform
category      video category      category assigned when the video was uploaded
length        video length        video length as an integer
views         view count          number of times the video has been viewed
rate          video rating        rating out of 5
ratings       traffic             the video's traffic, an integer
comments      comment count       integer number of comments on the video
related ids   related video ids   ids of related videos, at most 20

(2) User table

Field      Note                        Type
uploader   uploader username           string
videos     number of videos uploaded   int
friends    number of friends           int

1.2 Environment Setup

Operating system: a CentOS 7 Linux VM with the CDH 5.14.x Hadoop components installed

IDE: IntelliJ IDEA 2018

Data download: https://pan.baidu.com/s/1in4xxogEocxA9A2N04oQVA (extraction code: 50n0)

2. Data Cleaning

A look at the raw data shows that the category field can contain stray spaces, which skews category statistics: " People" and "People " are really the same category, but the extra spaces make them count as different ones.

The related video ids are also tab-separated, which collides with the tab delimiter between the main fields, and the field may be missing entirely.

In summary, the cleaning step has to:

(1) discard records with fewer than 9 fields;

(2) strip the spaces from the category field;

(3) change the separator between the related video ids to "&" (a before/after illustration follows this list).
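For illustration only, here is roughly what one record looks like before and after cleaning. The values below are made up, and tabs are written literally as \t:

Raw:     AbCdEfGhIjK\tuploaderA\t653\tEntertainment \t424\t13021\t4.34\t1305\t744\tQqQqQqQqQqQ\tWwWwWwWwWwW
Cleaned: AbCdEfGhIjK\tuploaderA\t653\tEntertainment\t424\t13021\t4.34\t1305\t744\tQqQqQqQqQqQ&WwWwWwWwWwW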

2.1 Cleaning the Data with MapReduce

(1) pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.fanling</groupId>
    <artifactId>myvideo</artifactId>
    <packaging>jar</packaging>
    <version>1.0.0</version>
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <hadoop_version>2.7.6</hadoop_version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>${hadoop_version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${hadoop_version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>${hadoop_version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-core</artifactId>
            <version>${hadoop_version}</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-jar-plugin</artifactId>
                <version>2.4</version>
                <configuration>
                    <archive>
                        <manifest>
                            <mainClass>com.fanling.myvideo.MyDriver</mainClass>
                        </manifest>
                    </archive>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

(2) Mapper class

package com.fanling.myvideo;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class MyMapper extends Mapper<LongWritable, Text, NullWritable, Text> {
    private final Text v = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // Read the line and split it on tabs
        String line = value.toString();
        String[] fields = line.split("\t");
        // Discard records with fewer than 9 fields
        if (fields.length < 9) {
            return;
        }
        // Strip the spaces from the category field
        fields[3] = fields[3].replaceAll(" ", "");
        // Fields 0-8 stay tab-separated
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 9; i++) {
            if (i > 0) {
                sb.append("\t");
            }
            sb.append(fields[i]);
        }
        // Fields 9 and beyond are the related video ids: join them with "&"
        for (int i = 9; i < fields.length; i++) {
            sb.append(i == 9 ? "\t" : "&").append(fields[i]);
        }
        // Emit the cleaned record
        v.set(sb.toString());
        context.write(NullWritable.get(), v);
    }
}

(3) Reducer class

No Reducer class is needed; the job is map-only. A Driver class is still required to submit it (see the sketch below).
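The original post does not show the Driver, but the POM's mainClass points at com.fanling.myvideo.MyDriver and the job is later launched with an input path and an output path as arguments. A minimal sketch under those assumptions:

package com.fanling.myvideo;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MyDriver {
    public static void main(String[] args) throws Exception {
        // args[0] = input path, args[1] = output path
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "gulivideo data cleaning");
        job.setJarByClass(MyDriver.class);
        job.setMapperClass(MyMapper.class);
        // Map-only job: no Reducer
        job.setNumReduceTasks(0);
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}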

Once these classes are written, package the jar and copy it to the Linux host.
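The post does not show the packaging step; with the POM above, a standard Maven build produces target/myvideo-1.0.0.jar, matching the jar name used on the Linux host below:

mvn clean package   # produces target/myvideo-1.0.0.jar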

(4) Running the data cleanup on Linux

First upload video.txt and user.txt to HDFS, then run the cleaning job with hadoop jar (or yarn jar) to produce the cleaned data.

[fanl@centos7 hadoop-cdh5.14.12]$ bin/hdfs dfs -ls /user/fanl/gulivideo
Found 2 items
-rw-r--r--   1 fanl supergroup   36498078 2019-04-12 21:22 /user/fanl/gulivideo/user.txt
-rw-r--r--   1 fanl supergroup    6794173 2019-04-12 21:22 /user/fanl/gulivideo/video.txt

Run the data cleaning job:

[fanl@centos7 hadoop-cdh5.14.12]$ bin/hadoop jar \
> /home/fanl/gulivideo/myvideo-1.0.0.jar \
> /user/fanl/gulivideo/video.txt /user/fanl/gulivideo/result

The data cleaning succeeded.
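The post does not show the check, but listing the output directory should now reveal the part-r-00000 file that gets loaded into Hive later:

[fanl@centos7 hadoop-cdh5.14.12]$ bin/hdfs dfs -ls /user/fanl/gulivideo/result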

2.2 Preparing the Hive Tables

(1) Create the database

hive (default)> create database videos;
OK
Time taken: 4.918 seconds
hive (default)> 
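The post does not show it, but the later prompts read hive (videos)>, so the session presumably switches to the new database before creating tables:

hive (default)> use videos;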

(2) Create the two source tables and the working temp tables

video_src and video_temp are both created as follows:

hive (videos)> create table video_src(
             > video_id string,
             > uploader string,
             > age int,
             > category array<string>,
             > length int,
             > views int,
             > rate float,
             > ratings int,
             > comments int,
             > related_id array<string>)
             > row format delimited fields terminated by '\t'
             > collection items terminated by '&';

user_src and user_temp are created with the columns below (a shortcut for creating the temp tables is sketched after this DDL):

hive (videos)> create table user_src(
             > uploader string,
             > videos int,
             > friends int)
             > row format delimited fields terminated by '\t';
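The post only says the temp tables are built "the same way" as the source tables. One way to do that without repeating the column list (a sketch, not necessarily what the original author ran) is CREATE TABLE ... LIKE, which copies the schema and row format:

hive (videos)> create table video_temp like video_src;
hive (videos)> create table user_temp like user_src;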

2.3 Importing Data into Hive

(1) Load user.txt from the local filesystem

hive (videos)> load data local inpath '/home/fanl/gulivideo/user.txt' into table user_src;
Loading data to table videos.user_src
Table videos.user_src stats: [numFiles=1, totalSize=36498078]
OK
Time taken: 0.933 seconds
hive (videos)> 

(2) Load the cleaned video data, /user/fanl/gulivideo/result/part-r-00000, from HDFS into video_src

hive (videos)> load data inpath '/user/fanl/gulivideo/result/part-r-00000' into table video_src;
Loading data to table videos.video_src
OK
Time taken: 0.384 seconds
hive (videos)> 

(3) Copy the data from the source tables into the temp tables

hive (videos)> insert into table video_temp select * from video_src;
hive (videos)> insert into table user_temp select * from user_src;

Querying both tables confirms that the data was imported successfully; a typical spot-check is shown below.
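The post does not include the verification queries; something like the following (illustrative, not from the original transcript) would confirm both tables hold data:

hive (videos)> select count(*) from video_temp;
hive (videos)> select * from user_temp limit 5;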
