Flume sink to HBase consumes data slowly


1. Situation: Flume is consuming data into HBase.

The source and the channel both work fine and move data quickly; only the sink into HBase is extremely slow, writing roughly one record per second.

top shows the Flume process's CPU usage spiking to 99%.

The original configuration file:

a1.sources = r1
a1.sinks = k1
a1.channels = c1
 
# Describe/configure the source
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /urldata/
# Suffix appended to a file once it has been fully consumed
a1.sources.r1.fileSuffix = .COMPLETED
a1.sources.r1.checkperiodic = 50

# Describe the sink
# Input format: DELIMITED or json
 
a1.sinks.k1.type = hbase
a1.sinks.k1.table = t_url
a1.sinks.k1.columnFamily = info
a1.sinks.k1.serializer = org.apache.flume.sink.hbase.RegexHbaseEventSerializer
## Specify the column names; the first column of the data is used as the ROW_KEY
a1.sinks.k1.serializer.colNames = rid,dir,username,sip,sport,dip,dport,bytes,starttime,action,url,descid,domain,type,subtype,words,line,platform,browser,grpids,referer,termtype
a1.sinks.k1.serializer.regex = (.*)\\|(.*)\\|(.*)\\|(.*)\\|(.*)\\|(.*)\\|(.*)\\|(.*)\\|(.*)\\|(.*)\\|(.*)\\|(.*)\\|(.*)\\|(.*)\\|(.*)\\|(.*)\\|(.*)\\|(.*)\\|(.*)\\|(.*)\\|(.*)\\|(.*)
 
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
# The maximum number of events stored in the channel
a1.channels.c1.capacity = 100000
# The maximum number of events the channel will take from a source or give to a sink per transaction
a1.channels.c1.transactionCapacity = 10000
 
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
a1.sinks.k1.batchSize = 200
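For reference, each input line is expected to carry 22 pipe-delimited fields, one per name in serializer.colNames. A hypothetical example line (all field values made up for illustration):

1001|in|alice|10.0.0.1|52344|10.0.0.2|80|1024|20181115 10:00:00|allow|http://example.com/|0|example.com|web|http|0|0|pc|chrome|0|-|browser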

Solution:

The slowness comes from the overly expensive regular expression: with 22 greedy (.*) groups separated by literal | characters, the matcher has to backtrack heavily on every event to find a consistent split, so Flume spends nearly all of its CPU inside regex matching. The fix is to anchor each group to the | delimiter:

a1.sinks.k1.serializer.regex = ([^\\|]*)\\|([^\\|]*)\\|([^\\|]*)\\|([^\\|]*)\\|([^\\|]*)\\|([^\\|]*)\\|([^\\|]*)\\|([^\\|]*)\\|([^\\|]*)\\|([^\\|]*)\\|([^\\|]*)\\|([^\\|]*)\\|([^\\|]*)\\|([^\\|]*)\\|([^\\|]*)\\|([^\\|]*)\\|([^\\|]*)\\|([^\\|]*)\\|([^\\|]*)\\|([^\\|]*)\\|([^\\|]*)\\|([^\\|]*)

Each group now matches only characters up to the next | delimiter ([^\\|]*), so the line is split on | in a single pass with no backtracking.
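A minimal standalone sketch (plain Java, outside Flume; the sample line and loop count are made up for illustration) that contrasts the two patterns:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexBench {
    // Build a 22-group pattern like the serializer config above
    static String build(String group) {
        StringBuilder sb = new StringBuilder(group);
        for (int i = 1; i < 22; i++) sb.append("\\|").append(group);
        return sb.toString();
    }

    public static void main(String[] args) {
        // Hypothetical 22-field event line
        String line = "1001|in|alice|10.0.0.1|52344|10.0.0.2|80|1024|"
                + "20181115 10:00:00|allow|http://example.com/|0|example.com|"
                + "web|http|0|0|pc|chrome|0|-|browser";
        Pattern greedy = Pattern.compile(build("(.*)"));      // original config: backtracks
        Pattern anchored = Pattern.compile(build("([^|]*)")); // fixed config: single pass
        for (Pattern p : new Pattern[] { greedy, anchored }) {
            long t0 = System.nanoTime();
            for (int i = 0; i < 10_000; i++) {
                Matcher m = p.matcher(line);
                if (!m.matches()) throw new IllegalStateException("no match");
            }
            System.out.println(p.pattern().startsWith("(.*)") ? "greedy:" : "anchored:");
            System.out.println((System.nanoTime() - t0) / 1_000_000 + " ms for 10,000 matches");
        }
    }
}

The greedy version should be markedly slower per match, which is consistent with the CPU-bound, one-record-per-second behaviour observed at the sink.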

After the change, follow-up testing showed data being written at normal speed.
