Flume Interceptors --- Generating Data Directories by Time

A Flume interceptor sits between the Source and the Sink: as the Source reads events and sends them on toward the Sink, the interceptor can add useful information to the event headers or filter the event body, performing a first pass of data cleansing. This is very useful in real business scenarios. Flume-ng 1.7 currently provides the following interceptors:

Timestamp Interceptor;
Host Interceptor;
Static Interceptor;
UUID Interceptor;
Morphline Interceptor;
Search and Replace Interceptor;
Regex Filtering Interceptor;
Regex Extractor Interceptor;

Multiple interceptors can be attached to a single source; they are applied in the order listed. In the example below, i1 first keeps only events whose body matches a {...} pattern, then i2 adds a timestamp header:

a1.sources.r1.interceptors=i1 i2  
a1.sources.r1.interceptors.i1.type=regex_filter  
a1.sources.r1.interceptors.i1.regex=\\{.*\\}  
a1.sources.r1.interceptors.i2.type=timestamp

Timestamp Interceptor
The timestamp interceptor adds the current timestamp (in milliseconds) to the event headers under the key timestamp. On its own it is not used all that often. For example, when using the HDFS Sink, the event timestamp can determine which output file the results land in:

hdfs.path = hdfs://cdh5/tmp/dap/%Y%m%d
hdfs.filePrefix = log_%Y%m%d_%H

The sink then writes each event into the file that corresponds to its timestamp.
This interceptor can also be replaced by another mechanism (setting useLocalTimeStamp = true on the HDFS Sink).
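A minimal sketch of that alternative, reusing the path from above: with useLocalTimeStamp the sink stamps each event with its own local time as it writes, so no timestamp interceptor is needed:

a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.hdfs.path = hdfs://cdh5/tmp/dap/%Y%m%d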

[hadoop@h71 conf]$ vi timestamp.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1
 
a1.sources.r1.type = exec
a1.sources.r1.channels = c1
a1.sources.r1.command = tail -F /home/hadoop/hui/hehe.txt
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = timestamp
 
a1.sinks.k1.type=hdfs
a1.sinks.k1.channel=c1
a1.sinks.k1.hdfs.path=hdfs://h71:9000/hui/%y-%m-%d/%H
a1.sinks.k1.hdfs.filePrefix = log_%Y%m%d_%H
#fileType and writeFormat must be set as below so the data lands in HDFS as plain text
a1.sinks.k1.hdfs.fileType=DataStream
a1.sinks.k1.hdfs.writeFormat=Text
#while data is being written, roll the file every 10 seconds (removing the .tmp suffix from the file name); the default is 30 seconds
a1.sinks.k1.hdfs.rollInterval=10
#the setting above rolls by time; the commented-out one below rolls by file size
#a1.sinks.k1.hdfs.rollSize=1024
 
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
 
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Start the Flume agent:

[hadoop@h71 apache-flume-1.6.0-cdh5.5.2-bin]$ bin/flume-ng agent -c conf/ -f conf/timestamp.conf -n a1 -Dflume.root.logger=INFO,console

Generate some data:

[hadoop@h71 hui]$ echo "hello world" >> hehe.txt

Check the result:

[hadoop@h71 hui]$ hadoop fs -lsr /hui
drwxr-xr-x   - hadoop supergroup          0 2017-03-18 02:41 /hui/17-03-18
drwxr-xr-x   - hadoop supergroup          0 2017-03-18 02:41 /hui/17-03-18/02
-rw-r--r--   2 hadoop supergroup         12 2017-03-18 02:41 /hui/17-03-18/02/log_20170318_02.1489776083025.tmp

Ten seconds later (the suffix 1489776083025 in the file name is the epoch timestamp in milliseconds):

[hadoop@h71 hui]$ hadoop fs -lsr /hui
drwxr-xr-x   - hadoop supergroup          0 2017-03-18 02:41 /hui/17-03-18
drwxr-xr-x   - hadoop supergroup          0 2017-03-18 02:41 /hui/17-03-18/02
-rw-r--r--   2 hadoop supergroup         12 2017-03-18 02:41 /hui/17-03-18/02/log_20170318_02.1489776083025

Host Interceptor
The host interceptor adds the hostname or IP address of the machine running the Flume agent to the event headers, under the key host (the key name can be customized).

[hadoop@h71 conf]$ vi host.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1
 
a1.sources.r1.type = exec
a1.sources.r1.channels = c1
a1.sources.r1.command = tail -F /home/hadoop/hui/hehe.txt
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = host
#true uses the IP address (192.168.8.71); false uses the hostname (h71); the default is true
a1.sources.r1.interceptors.i1.useIP = false
a1.sources.r1.interceptors.i1.hostHeader = agentHost
 
a1.sinks.k1.type=hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = hdfs://h71:9000/hui/%y%m%d
a1.sinks.k1.hdfs.filePrefix = qiang_%{agentHost}
#append the .log suffix to generated files
a1.sinks.k1.hdfs.fileSuffix = .log
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.hdfs.rollInterval = 10
 
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
 
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Start the Flume agent:

[hadoop@h71 apache-flume-1.6.0-cdh5.5.2-bin]$ bin/flume-ng agent -c conf/ -f conf/host.conf -n a1 -Dflume.root.logger=INFO,console

Error: Caused by: java.lang.NullPointerException: Expected timestamp in the Flume event headers, but it was null
Fix: add the line a1.sinks.k1.hdfs.useLocalTimeStamp = true to host.conf. The sink path uses time escapes (%y%m%d), but the host interceptor does not put a timestamp header on events, so the sink has to fall back to the local time.

Generate some data:

[hadoop@h71 hui]$ echo "hello world" >> hehe.txt

Check the result:

[hadoop@h71 apache-flume-1.6.0-cdh5.5.2-bin]$ hadoop fs -lsr /hui
drwxr-xr-x   - hadoop supergroup          0 2017-03-18 03:36 /hui/170318
-rw-r--r--   2 hadoop supergroup          2 2017-03-18 03:36 /hui/170318/qiang_h71.1489779401946.log

Note: the Timestamp Interceptor and Host Interceptor experiments turned out to be flaky for me. The first run went fine, but when I redid them, the agent printed SINK, name: k1 started and then simply hung there, with no error and no output, no matter what I tried.

Static Interceptor
The static interceptor adds a fixed key/value pair to every event's headers.

[hadoop@h71 conf]$ vi static.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1
 
a1.sources.r1.type = exec
a1.sources.r1.channels = c1
a1.sources.r1.command = tail -F /home/hadoop/hui/hehe.txt
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = static
a1.sources.r1.interceptors.i1.key = static_key
a1.sources.r1.interceptors.i1.value = static_value
 
a1.sinks.k1.type=hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = hdfs://h71:9000/hui/
a1.sinks.k1.hdfs.filePrefix = qiang_%{static_key}
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.hdfs.rollInterval = 10
a1.sinks.k1.hdfs.useLocalTimeStamp = true
 
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
 
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Check the result (the %{static_key} escape in filePrefix resolves to that header's value, static_value):

[hadoop@h71 apache-flume-1.6.0-cdh5.5.2-bin]$ hadoop fs -lsr /hui
drwxr-xr-x   - hadoop supergroup          0 2017-03-18 03:36 /hui/
-rw-r--r--   2 hadoop supergroup          2 2017-03-18 03:36 /hui/qiang_static_value.1489779401946

UUID Interceptor
The UUID interceptor generates a UUID string in each event's headers, e.g. b5755073-77a9-43c1-8fad-b7a586fc1b97. The generated UUID can be read and used in the sink.

[hadoop@h71 conf]$ vi uuid.conf
a1.sources = r1  
a1.sinks = k1  
a1.channels = c1  
 
a1.sources.r1.type = exec  
a1.sources.r1.channels = c1  
a1.sources.r1.command = tail -F /home/hadoop/hui/hehe.txt
a1.sources.r1.interceptors = i1
#the type cannot be abbreviated to uuid; the fully qualified class name must be given, or the class will not be found
a1.sources.r1.interceptors.i1.type = org.apache.flume.sink.solr.morphline.UUIDInterceptor$Builder
#if a UUID header already exists, preserve it
a1.sources.r1.interceptors.i1.preserveExisting = true
a1.sources.r1.interceptors.i1.prefix = UUID_
 
a1.sinks.k1.type = logger  
 
a1.channels.c1.type = memory  
a1.channels.c1.capacity = 1000  
a1.channels.c1.transactionCapacity = 100  
 
a1.sources.r1.channels = c1  
a1.sinks.k1.channel = c1  

After running the Flume agent you can see:

Event: { headers:{id=UUID_1cb50ac7-fef0-4385-99da-45530cb50271} body: 68 65 6C 6C 6F 20 77 6F 72 6C 64                hello world }
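The header key defaults to id, as seen above; according to the Flume documentation it can be renamed with the headerName property, e.g. (eventId is just an illustrative name):

a1.sources.r1.interceptors.i1.headerName = eventId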

Morphline Interceptor
The Morphline interceptor runs each event through a Morphline transformation. For details on using Morphlines, see
http://kitesdk.org/docs/current/morphlines/morphlines-reference-guide.html
I will look into this one in more depth later.
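For reference, a minimal configuration sketch based on the Flume user guide (untested here; the morphline file path and id are placeholders):

a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = org.apache.flume.sink.solr.morphline.MorphlineInterceptor$Builder
a1.sources.r1.interceptors.i1.morphlineFile = /etc/flume-ng/conf/morphline.conf
a1.sources.r1.interceptors.i1.morphlineId = morphline1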

Search and Replace Interceptor
This interceptor replaces content in the event body that matches a regular expression.

[hadoop@h71 conf]$ vi search.conf
a1.sources = r1  
a1.sinks = k1  
a1.channels = c1  
 
a1.sources.r1.type = exec  
a1.sources.r1.channels = c1  
a1.sources.r1.command = tail -F /home/hadoop/hui/hehe.txt
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = search_replace
a1.sources.r1.interceptors.i1.searchPattern = [0-9]+
a1.sources.r1.interceptors.i1.replaceString = xiaoqiang
a1.sources.r1.interceptors.i1.charset = UTF-8
 
a1.sinks.k1.type = logger  
 
a1.channels.c1.type = memory  
a1.channels.c1.capacity = 1000  
a1.channels.c1.transactionCapacity = 100  
 
a1.sources.r1.channels = c1  
a1.sinks.k1.channel = c1

Start the Flume agent:

[hadoop@h71 apache-flume-1.6.0-cdh5.5.2-bin]$ bin/flume-ng agent -c conf/ -f conf/search.conf -n a1 -Dflume.root.logger=INFO,console

Generate some data:

[hadoop@h71 hui]$ echo "message 1" >> hehe.txt
[hadoop@h71 hui]$ echo "message 23" >> hehe.txt

On the console you can see (the logger sink prints only the first 16 bytes of the body, which is why xiaoqiang is cut off at xiaoqian):

Event: { headers:{} body: 6D 65 73 73 61 67 65 20 78 69 61 6F 71 69 61 6E message xiaoqian }
Event: { headers:{} body: 6D 65 73 73 61 67 65 20 78 69 61 6F 71 69 61 6E message xiaoqian }

Regex Filtering Interceptor
This interceptor filters events by matching the body against a regular expression.

[hadoop@h71 conf]$ vi filter.conf
a1.sources = r1  
a1.sinks = k1  
a1.channels = c1  
 
a1.sources.r1.type = exec  
a1.sources.r1.channels = c1  
a1.sources.r1.command = tail -F /home/hadoop/hui/hehe.txt
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = regex_filter
a1.sources.r1.interceptors.i1.regex = ^lxw1234.*
#this configuration drops events that do NOT start with lxw1234; if excludeEvents were set to true, it would instead drop events that DO start with lxw1234
a1.sources.r1.interceptors.i1.excludeEvents = false
 
a1.sinks.k1.type = logger  
 
a1.channels.c1.type = memory  
a1.channels.c1.capacity = 1000  
a1.channels.c1.transactionCapacity = 100  
 
a1.sources.r1.channels = c1  
a1.sinks.k1.channel = c1

The raw events are:

[hadoop@h71 hui]$ echo "message 1" >> hehe.txt 
[hadoop@h71 hui]$ echo "lxw1234 message 3" >> hehe.txt 
[hadoop@h71 hui]$ echo "message 2" >> hehe.txt 
[hadoop@h71 hui]$ echo "lxw1234 message 4" >> hehe.txt 

After interception, only the lxw1234 events remain (again truncated to 16 body bytes by the logger sink):

Event: { headers:{} body: 6C 78 77 31 32 33 34 20 6D 65 73 73 61 67 65 20 lxw1234 message  }
Event: { headers:{} body: 6C 78 77 31 32 33 34 20 6D 65 73 73 61 67 65 20 lxw1234 message  }

Regex Extractor Interceptor
This interceptor uses a regular expression to extract content from the event body and adds the captured groups to the event headers.

[hadoop@h71 conf]$ vi extractor.conf
a1.sources = r1  
a1.sinks = k1  
a1.channels = c1  
 
a1.sources.r1.type = exec  
a1.sources.r1.channels = c1  
a1.sources.r1.command = tail -F /home/hadoop/hui/hehe.txt
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = regex_extractor
a1.sources.r1.interceptors.i1.regex = cookieid is (.*?) and ip is (.*)
a1.sources.r1.interceptors.i1.serializers = s1 s2
a1.sources.r1.interceptors.i1.serializers.s1.name = cookieid
a1.sources.r1.interceptors.i1.serializers.s2.name = ip
 
a1.sinks.k1.type = logger  
 
a1.channels.c1.type = memory  
a1.channels.c1.capacity = 1000  
a1.channels.c1.transactionCapacity = 100  
 
a1.sources.r1.channels = c1  
a1.sinks.k1.channel = c1

Note: 1. Delete the two serializer type lines from the original blog's config (a1.sources.r1.interceptors.i1.serializers.s1.type = default and its s2 counterpart); otherwise you will get:

Caused by: java.lang.ClassNotFoundException: default

2. Change the regex from cookieid is (.*?) and ip is (.*?) to cookieid is (.*?) and ip is (.*), as above; otherwise the trailing lazy group matches the empty string, the IP is never captured, and ip in the event headers comes out empty.

This configuration extracts cookieid and ip from the raw events and adds them to the event headers.

The raw events are:

[hadoop@h71 hui]$ echo "cookieid is c_1 and ip is 127.0.0.1" >> hehe.txt 
[hadoop@h71 hui]$ echo "cookieid is c_2 and ip is 127.0.0.2" >> hehe.txt 
[hadoop@h71 hui]$ echo "cookieid is c_3 and ip is 127.0.0.3" >> hehe.txt

The event headers contain:

Event: { headers:{cookieid=c_1, ip=127.0.0.1} body: 63 6F 6F 6B 69 65 69 64 20 69 73 20 63 5F 31 20 cookieid is c_1  }
Event: { headers:{cookieid=c_2, ip=127.0.0.2} body: 63 6F 6F 6B 69 65 69 64 20 69 73 20 63 5F 32 20 cookieid is c_2  }
Event: { headers:{cookieid=c_3, ip=127.0.0.3} body: 63 6F 6F 6B 69 65 69 64 20 69 73 20 63 5F 33 20 cookieid is c_3  }
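Tying this back to the theme of time-based directories: the Flume user guide also describes a serializer that converts an extracted date string into a millisecond timestamp header, which the HDFS Sink's time escapes can then use directly. A sketch (untested, assuming log lines that begin with a date like 2017-03-18 02:41):

a1.sources.r1.interceptors.i1.regex = ^(\\d\\d\\d\\d-\\d\\d-\\d\\d\\s\\d\\d:\\d\\d)
a1.sources.r1.interceptors.i1.serializers = s1
a1.sources.r1.interceptors.i1.serializers.s1.type = org.apache.flume.interceptor.RegexExtractorInterceptorMillisSerializer
a1.sources.r1.interceptors.i1.serializers.s1.name = timestamp
a1.sources.r1.interceptors.i1.serializers.s1.pattern = yyyy-MM-dd HH:mm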

Flume's interceptors, combined with a sink, can implement many functions that real business scenarios call for,
such as generating target directories and file names by time and host (a sketch follows below),
or working with the Kafka Sink to write to multiple partitions.
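A closing sketch (untested; it reuses the interceptor and header names from this post) that keys the HDFS directory on both the event date and the agent host:

a1.sources.r1.interceptors = i1 i2
a1.sources.r1.interceptors.i1.type = timestamp
a1.sources.r1.interceptors.i2.type = host
a1.sources.r1.interceptors.i2.useIP = false
a1.sources.r1.interceptors.i2.hostHeader = agentHost
a1.sinks.k1.hdfs.path = hdfs://h71:9000/hui/%Y%m%d/%{agentHost}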


Original post: https://blog.csdn.net/m0_37739193/article/details/77584909
