Flume + Hadoop + Hive: large-scale data collection and processing

Copyright notice: © 2018-2019 the original author (CSDN user qq_21082615). Source: https://blog.csdn.net/qq_21082615/article/details/91489493

Description: The overall flow of this offline analysis is to use Flume to collect log files from the FTP server and store them on the Hadoop HDFS file system, then clean the log files with Hadoop MapReduce, and finally build a data warehouse in Hive for offline analysis.

I drew a diagram of the deployment (below). I used four servers in total; if you do not have that many you can simplify: Flume and the data processing can both be deployed on hadoop-master, so two servers are enough.

(Deployment architecture diagram)

I. Hadoop deployment preparation

Prepare three new servers with the following IPs:
192.168.0.130 (hadoop-master, MySQL and Hive are deployed on this server)
192.168.0.131 (hadoop-slave; nothing needs to be installed on the slave separately, just have the server ready)
192.168.0.132 (Flume is deployed on this server)
These IP addresses can be adapted to your own environment.
All installation files go under /usr/local.

1. Download the JDK
wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.tar.gz 

2. Download Hadoop
wget http://apache.fayea.com/hadoop/common/hadoop-3.1.1/hadoop-3.1.1.tar.gz

3. Download Hive
wget http://apache.fayea.com/hive/hive-3.1.1/apache-hive-3.1.1-bin.tar.gz

4. Change the hostname of 192.168.0.130 to hadoop-master (do the same for the other two servers), then map all three hosts in /etc/hosts:
vi /etc/hosts
192.168.0.130   hadoop-master 
192.168.0.131   hadoop-slave 
192.168.0.132   flume 

5. Configure passwordless SSH between the three servers (authorized_keys)
Run the following command on each of the three servers, then exchange the public keys among them (put 130's and 131's keys on 132, 131's and 132's keys on 130, and 132's and 130's keys on 131):
ssh-keygen -t rsa

For example, the public keys generated on the three hosts:
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDyRmzmOolxy9jx8IA7GJl9SHxVXAC3HdEiAECZ+AxpDS5e+iISa53Q/EsW1MEc6H+QEnstJ7Vd7NvXC+H4W59JzJETJnmeOMsmjlYoqDarXYCCqZmlHg+3VprcWEdHI4zw2pJUpc5pwyiyutGFDeJ1Iu2wRgneVRiBAoTJJRuGj+OcXAuhV1+lEpdEzArVPwpdUXil4Umc+hK0RV+BvPMx2y253fXcMzowLz9iyRX75Prc/4r65YDMe8BRxQeYcUV/ioPpbkMhNDwlhiHhd5tKMYU3I9zQvL+fD7HAJtyjlkosRiVYxL+kyPyX6KbyXU+cLAaqJaJhTsqPggMAR3pl root@hadoop-master

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCch3bZdcvMmvkm9XGbZqsnmPuAeqAA+yqpPmB2ocehfRK23gY8hz8dxZ347qjbTp8v/b3uzmSWV7imZcm4+3LQftEeVXna+2JjbgeI41BfPg5k5kpKjjl3+xLKClx4EbVV2tlS3PtdnKzFMKuopyjVNnNA6uFqa1J68XO8USF7nkTr/AosicA5x1k6jR7CH+l7pP8OXJIffcPqgqE3aQzextlxZ69eUiBqqx8hahY5A0sM1Lyg4XI7pbWKGWR3Syxxb7jx005UfxwxiTs/zKiBfXxFn7IYWKC9X0QD1+FMIxOkdX1Kd4ddzw5L7sDl+GTz7bA3L7Q12tCEPUqPIZgh root@hadoop-slave

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDmhwkMaLCF6vNIFxsmBKAv87StNlPsOzlwERfCl1RL+caRlaJVmvw/Y1YXk/LYLwxPl5jxA4Gr9SVbYLaszTpq+uhZBqtYVlMYjhoDC9TYTZB/7jZsgbBaf/dGx0IXydzngMiGWMnIBl4Rorz57bHw1vc7S7F4nk0Bk8K6/O6Gm6PpUG4atvNSO6LNFWPBraPuBjbmaDRtmAVGB0AQJRmr+IFW5sMrZhJH7ZCf402PHQZDrXmYARtpXlXs/7i+m3IACSf/79bEXYrCmSXsQ2bsb+vyZLft9GJccayJVcu3nWuOzm9wgrv5tq7YnhwNcDm8tZDeas1+d2ff2ndJbB11 root@flume
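
One way to do the exchange is ssh-copy-id, which appends the local public key to the remote authorized_keys (a minimal sketch, assuming the root account and the hostnames mapped in /etc/hosts above):
# on hadoop-master
ssh-copy-id root@hadoop-slave
ssh-copy-id root@flume
# repeat the pattern on hadoop-slave and on flume, targeting the other two hosts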

6. Install the JDK on each of the three servers
Unpack the JDK:
tar -zxvf jdk-8u131-linux-x64.tar.gz

Create a symlink (the tarball unpacks to jdk1.8.0_131):
ln -s jdk1.8.0_131 jdk8

Configure the environment variables:
vi /etc/profile

export JAVA_HOME=/usr/local/jdk8
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH

source /etc/profile
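
A quick check that the new JDK is picked up (it should report version 1.8.0_131):
java -version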

II. Install Hadoop (server 192.168.0.130)

1. Unpack the Hadoop tarball
tar -zxvf hadoop-3.1.1.tar.gz

2. Create a symlink
ln -s hadoop-3.1.1 hadoop

The configuration files are all under hadoop/etc/hadoop.
3. Edit core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.0.130:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
    </property> 
    <property>
        <name>io.file.buffer.size</name>
        <value>1024</value>
    </property> 
</configuration> 

 
Notes:
fs.defaultFS: the name of the default file system, given as a URI. The URI scheme selects the FileSystem implementation class (fs.SCHEME.impl), and the authority part carries the host, port, etc.; the default is the local file system. Here the service address is set, e.g. hdfs://192.168.0.130:9000 (in an HA setup a logical service name is used instead). HDFS clients need this parameter to reach HDFS.
hadoop.tmp.dir: Hadoop's temporary directory (a local path); other directories are derived from it. Only one value can be set; it is best placed somewhere with enough space rather than the default /tmp. It is a server-side parameter, so changing it requires a restart.
io.file.buffer.size: the buffer size used when reading and writing files. It should be a multiple of the memory page size; about 1 MB is recommended.

4. Edit hdfs-site.xml
<configuration>
    <property> 
         <name>dfs.namenode.name.dir</name>
         <value>file:/usr/local/hadoop/dfs/name</value> 
    </property>
    <property> 
         <name>dfs.datanode.data.dir</name>
         <value>file:/usr/local/hadoop/dfs/data</value> 
    </property> 
    <property> 
         <name>dfs.replication</name>
         <value>2</value> 
    </property> 
    <property> 
         <name>dfs.namenode.secondary.http-address</name>
         <value>192.168.0.130:9001</value> 
    </property> 
    <property> 
         <name>dfs.webhdfs.enabled</name>
         <value>true</value> 
    </property>  
</configuration>  
 
Notes:
dfs.namenode.name.dir: local directories where the NameNode stores the fsimage. A comma-separated list can be given, in which case the fsimage is written to every directory for redundancy (ideally each on a different disk; if one disk fails it is simply skipped rather than taking the system down). With HA one directory is usually enough; set two if you are particularly concerned about safety.
dfs.datanode.data.dir: local directories where HDFS stores its blocks. A comma-separated list can be given (typically one directory per disk); the directories are used in round-robin fashion, one block to a directory, the next block to the next directory, and so on. Each block is stored only once on a given machine. Directories that do not exist are ignored, so they must be created in advance.
dfs.replication: the number of block replicas. It can be set when a file is created or changed later from the client or the command line; different files can have different replication factors, and this value is only the default used when none is specified.
dfs.namenode.secondary.http-address: the HTTP address of the SecondaryNameNode; if the port is 0 a random free port is chosen. With HA the SecondaryNameNode is no longer used.
dfs.webhdfs.enabled: enables WebHDFS (the REST API) on the NameNode and DataNodes.

5. Edit mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>192.168.0.130:10020</value>
    </property> 
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>192.168.0.130:19888</value>
    </property> 
</configuration>  

6. Edit yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property> 
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>192.168.0.130:8032</value>
    </property> 
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>192.168.0.130:8030</value>
    </property> 
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>192.168.0.130:8031</value>
    </property> 
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>192.168.0.130:8033</value>
    </property> 
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>192.168.0.130:8088</value>
    </property> 
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>8192</value>
    </property>  
</configuration>

Notes:
mapreduce.framework.name: depending on job size and configuration, MapReduce offers two execution modes. (1) Local mode (LocalJobRunner): when mapreduce.framework.name is set to local, no YARN resources are allocated and the job runs on the local node; it cannot take advantage of the cluster and does not appear in the web UI. (2) YARN mode (YARNRunner): when mapreduce.framework.name is set to yarn, the client talks to the server through YARNRunner, whose real work is done via ClientRMProtocol against the ResourceManager, e.g. submitting an application and querying its status.
mapreduce.jobhistory.address / mapreduce.jobhistory.webapp.address: Hadoop ships with a history server through which finished MapReduce jobs can be inspected: how many map and reduce tasks they used, when the job was submitted, started, and completed, and so on.

7. Configure the Hadoop environment variables
vi /etc/profile

export HADOOP_HOME=/usr/local/hadoop
export PATH=$HADOOP_HOME/bin:$PATH

8. Edit hadoop-env.sh
export JAVA_HOME=/usr/local/jdk8

9. Edit the worker-node file (named workers in Hadoop 3.x, slaves in Hadoop 2.x)
vi workers
192.168.0.131

10. Format the NameNode on hadoop-master
hadoop namenode -format

11. Start or stop the services
./sbin/start-all.sh
./sbin/stop-all.sh

If startup fails with errors like the following:
ERROR: Attempting to operate on hdfs namenode as root
 ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
 Starting datanodes
 ERROR: Attempting to operate on hdfs datanode as root
 ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
 Starting secondary namenodes [hserver1]
 ERROR: Attempting to operate on hdfs secondarynamenode as root
 ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.
 Starting resourcemanager
 ERROR: Attempting to operate on yarn resourcemanager as root
 ERROR: but there is no YARN_RESOURCEMANAGER_USER defined. Aborting operation.
 Starting nodemanagers
 ERROR: Attempting to operate on yarn nodemanager as root
 ERROR: but there is no YARN_NODEMANAGER_USER defined. Aborting operation.

The errors are caused by missing user definitions. Edit:
vim sbin/start-dfs.sh
vim sbin/stop-dfs.sh

and add at the top of both scripts:
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root

vim sbin/start-yarn.sh
vim sbin/stop-yarn.sh
and add at the top of both scripts:
YARN_RESOURCEMANAGER_USER=root  
HADOOP_SECURE_DN_USER=yarn 
YARN_NODEMANAGER_USER=root 

12. Check the running processes
jps
19906 Jps
19588 NodeManager
19450 ResourceManager
19196 SecondaryNameNode
18845 NameNode
18975 DataNode

13. Access the Hadoop web UIs
http://192.168.0.130:8088/ (YARN ResourceManager)
http://192.168.0.130:9870/ (HDFS NameNode)
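
A quick smoke test of the running cluster (a minimal sketch; all of these are standard Hadoop commands):
hdfs dfsadmin -report          # the DataNode on hadoop-slave should be listed
hadoop fs -mkdir -p /tmp/smoketest
hadoop fs -ls /tmp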

III. Deploy Hive (server 192.168.0.130)

1. Unpack the tarball
tar -zxvf apache-hive-3.1.1-bin.tar.gz

2. Create a symlink
ln -s apache-hive-3.1.1-bin hive

3. Edit hive-env.sh
cd conf
cp hive-env.sh.template hive-env.sh

HADOOP_HOME=/usr/local/hadoop
export HIVE_CONF_DIR=/usr/local/hive/conf
export HIVE_AUX_JARS_PATH=/usr/local/hive/lib
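
Optionally (an assumption on my part, not one of the original steps), add HIVE_HOME to /etc/profile as well so that the hive command is on the PATH:
export HIVE_HOME=/usr/local/hive
export PATH=$HIVE_HOME/bin:$PATH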

IV. Install and deploy MySQL

https://blog.csdn.net/qq_21082615/article/details/91489275

V. Modify the Hive configuration files

1. Edit hive-site.xml: replace every occurrence of ${system:java.io.tmpdir} with /data/hive/tmp and every occurrence of ${system:user.name} with ${user.name}.
        
Update the database connection properties (MYSQL_HOST is the address of your MySQL server; note that & must be escaped as &amp; inside the XML value):
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://MYSQL_HOST:3306/hive?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8&amp;useSSL=false</value>

<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>

<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>

<name>javax.jdo.option.ConnectionPassword</name>
<value>123456</value> 

2. Download the MySQL JDBC driver into hive/lib
wget https://www.mysql.com//Downloads/Connector-J/mysql-connector-java-5.1.45.tar.gz

tar -zxvf mysql-connector-java-5.1.45.tar.gz

cp mysql-connector-java-5.1.45/mysql-connector-java-5.1.45-bin.jar /usr/local/hive/lib/

3. Initialize the metastore schema in MySQL (run from hive/bin)
./schematool -initSchema -dbType mysql

4. Start the Hive metastore
hive --service metastore > metastore.log 2>&1 &

5. Check that it started
ps -ef | grep hive
Also check whether the hive database in MySQL now contains the metastore tables.
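
A minimal check from the MySQL side (a sketch, assuming the root/123456 credentials configured in hive-site.xml):
mysql -uroot -p123456 -e "USE hive; SHOW TABLES;"    # metastore tables such as DBS, TBLS and COLUMNS_V2 should appear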

6. Start the HiveServer2 service
hive --service hiveserver2 > hiveserver2.log 2>&1 &

Edit hive-site.xml:
<name>hive.server2.thrift.bind.host</name>
<value>127.0.0.1</value>

<name>hive.server2.thrift.port</name>
<value>10000</value>
 
Username and password:
<name>hive.server2.thrift.client.user</name>
<value>root</value>
 
<name>hive.server2.thrift.client.password</name>
<value>123456</value>
  
Configure remote metastore access:
<property>
    <name>hive.metastore.local</name>
    <value>false</value>
</property>
<property>
    <name>hive.metastore.uris</name>
    <value>thrift://192.168.0.130:9083</value>
</property> 

7. Start Beeline
Run the command: beeline
Then connect with: !connect jdbc:hive2://127.0.0.1:10000
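
Equivalently, the connection can be made in one step (a sketch, assuming the user and password set in hive-site.xml above):
beeline -u jdbc:hive2://127.0.0.1:10000 -n root -p 123456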

If the connection fails with the following error:
Could not open client transport with JDBC Uri: jdbc:hive2://127.0.0.1:10000: Failed to open new session: java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: root is not allowed to impersonate root (state=08S01,code=0)

edit the Hadoop core-site.xml, where xxx is the connecting user name (root in this setup):
<property>
   <name>hadoop.proxyuser.xxx.hosts</name>
   <value>*</value>
</property>
<property>
   <name>hadoop.proxyuser.xxx.groups</name>
   <value>*</value>
</property> 
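
After changing core-site.xml, restart Hadoop, or refresh the proxy-user settings without a restart (both are standard Hadoop admin commands):
hdfs dfsadmin -refreshSuperUserGroupsConfiguration
yarn rmadmin -refreshSuperUserGroupsConfiguration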

8. Create a table
CREATE TABLE t2(id int, name string) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
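
Because the table is comma-delimited text, the lines that Flume will later deliver into /user/hive/warehouse/t2/ should look like these hypothetical sample rows:
1,zhangsan
2,lisi
Once files in that format land in the table directory, select * from t2; in Beeline returns the rows.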

VI. Deploy Flume (server 192.168.0.132)

  1. Flume is a distributed log-collection system: it gathers data and transfers it to a destination.
  2. Flume has a core concept called the agent. An agent is a Java process that runs on a log-collection node.
  3. An agent contains three core components: source, channel, and sink. The source component collects logs and can handle many data formats, including avro, thrift, exec, jms, spooling directory, netcat, sequence generator, syslog, http, legacy, and custom. Data collected by the source is stored temporarily in the channel. The channel buffers data inside the agent and can be backed by memory, jdbc, file, or a custom store; data is removed from the channel only after the sink has delivered it successfully. The sink sends data on to the destination, which can be hdfs, logger, avro, thrift, ipc, file, null, hbase, solr, or custom.
  4. Throughout the transfer the unit of flow is the event, and transactions are guaranteed at the event level.
  5. Flume agents can be chained in multiple levels, and both fan-in and fan-out are supported.
1. Download and unpack
wget http://mirror.bit.edu.cn/apache/flume/1.9.0/apache-flume-1.9.0-bin.tar.gz 

tar -zxvf apache-flume-1.9.0-bin.tar.gz 

Create a symlink:
ln -s apache-flume-1.9.0-bin flume

2. Configure environment variables (in /etc/profile)
export FLUME_HOME=/usr/local/flume
export PATH=$PATH:$FLUME_HOME/bin

3. Edit flume-env.sh
cp flume-env.sh.template flume-env.sh
export JAVA_HOME=/usr/local/jdk8

4. Edit flume-conf.properties
cp flume-conf.properties.template flume-conf.properties

# Configure the agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Configure the source
a1.sources.r1.type = exec
a1.sources.r1.channels = c1
a1.sources.r1.deserializer.outputCharset = UTF-8
# Log file to monitor
a1.sources.r1.command = tail -f /opt/logs/test.log
# Configure the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.useLocalTimeStamp = true
# Hive table directory on HDFS
a1.sinks.k1.hdfs.path = hdfs://192.168.0.130:9000/user/hive/warehouse/t2/
a1.sinks.k1.hdfs.filePrefix = %Y-%m-%d-%H
a1.sinks.k1.hdfs.fileSuffix = .log
a1.sinks.k1.hdfs.minBlockReplicas = 1
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.writeFormat = Text
# Roll a new file after this many seconds
a1.sinks.k1.hdfs.rollInterval = 86400
# ... or after this many bytes
a1.sinks.k1.hdfs.rollSize = 1000000
# ... or after this many events
a1.sinks.k1.hdfs.rollCount = 10000
a1.sinks.k1.hdfs.idleTimeout = 0
# Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

5. Start the agent (the agent name must match the one in the properties file, a1 here)
flume-ng agent -n a1 -c /usr/local/flume/conf -f /usr/local/flume/conf/flume-conf.properties -Dflume.root.logger=INFO,console &
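
A quick end-to-end check (a sketch, assuming the log path and sink path configured above; the HDFS file keeps a temporary suffix until one of the roll conditions above closes it):
echo "3,wangwu" >> /opt/logs/test.log          # on the flume server
hadoop fs -ls /user/hive/warehouse/t2/         # on hadoop-master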

6. If the agent fails to start, it is usually because Hadoop jars are missing; copy them into flume/lib (one way to do the copy is sketched after the list).
The jars are under /usr/local/hadoop/share/hadoop:
hadoop-auth-3.1.1.jar
hadoop-common-3.1.1.jar
hadoop-hdfs-3.1.1.jar
hadoop-hdfs-client-3.1.1.jar
hadoop-mapreduce-client-core-3.1.1.jar
commons-configuration2-2.1.1.jar
woodstox-core-5.0.3.jar
stax2-api-3.1.4.jar
htrace-core4-4.1.0-incubating.jar
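
One way to copy them (a sketch; the jars sit in different subdirectories, so a find over the share/hadoop tree is the simplest approach):
for jar in hadoop-auth hadoop-common hadoop-hdfs hadoop-hdfs-client hadoop-mapreduce-client-core commons-configuration2 woodstox-core stax2-api htrace-core4; do
  find /usr/local/hadoop/share/hadoop -name "${jar}-*.jar" ! -name "*test*" -exec cp {} /usr/local/flume/lib/ \;
done
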
VII. Hive commands

 show databases;                # list databases
 use db_name;                   # switch to a database
 show tables;                   # list all tables
 desc table_name;               # show a table's structure
 show partitions table_name;    # show a table's partitions
 show create table table_name;  # show the statement that created a table
 Create-table statements
 Managed (internal) table
 use xxdb; create table xxx;
 Create a table with the same structure as another table
 create table xxx like xxx;
 External table
 use xxdb; create external table xxx;
 Partitioned table
 use xxdb; create external table xxx (l int) partitioned by (d string);
 Converting between managed and external tables
 alter table table_name set TBLPROPERTIES ('EXTERNAL'='TRUE');  # managed -> external
 alter table table_name set TBLPROPERTIES ('EXTERNAL'='FALSE'); # external -> managed
 Modifying a table's structure
 Rename a table
 use xxxdb; alter table table_name rename to new_table_name;
 Add a column
 alter table table_name add columns (newcol1 int comment 'newly added');
 Change a column
 alter table table_name change col_name new_col_name new_type;
 Drop columns (list only the columns to keep)
 alter table table_name replace columns (col1 int, col2 string, col3 string);
 Drop a table
 use xxxdb; drop table table_name;
 Drop a partition
 Note: for an external table, the underlying files must also be removed (hadoop fs -rm -r -f hdfspath)
 alter table table_name drop if exists partition (d='2016-07-01');
 Column types
 tinyint, smallint, int, bigint, float, decimal, boolean, string
 Complex types
 # struct, array, map
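
Tying a few of these together, a partitioned external table over the same kind of comma-delimited data as t2 could be declared like this (a hypothetical sketch, not part of the original setup):
create external table t2_ext (id int, name string)
partitioned by (d string)
row format delimited fields terminated by ','
location '/user/hive/warehouse/t2_ext';
alter table t2_ext add partition (d='2019-06-01');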

VIII. Hadoop commands

To delete a file from HDFS:
hadoop fs -rm -r -skipTrash /path_to_file/file_name
To delete a directory from HDFS:
hadoop fs -rm -r -skipTrash /folder_name
hadoop fs -ls /       # list a directory
hadoop fs -ls -R /    # list files recursively (the older form hadoop fs -lsr / is deprecated)

IX. Next step: a program reads from Hive for data processing and analysis

https://blog.csdn.net/qq_21082615/article/details/91374550
