Migrating data from MongoDB to ClickHouse and consuming data with ClickHouse's Kafka engine, using pykafka, pymongo, and clickhouse_driver

A while back I ran into a requirement at work: when the product team queried the relevant data in MongoDB through our platform, they found that as the overall data volume kept growing, the wait for query results was getting fairly long. On top of that, our MongoDB is managed by AWS and carries a monthly cost, so the long-term experience was clearly not going to be good. After some research we settled on a few candidate solutions, and my task was to try ClickHouse first and see how well it works, while MongoDB keeps running as usual in the meantime.
Here is the plan:
1. Analyze the existing data in MongoDB and distill the fields we need as the table columns in ClickHouse.
2. Create the database and tables in ClickHouse, then parse the MongoDB data with a script and insert it into ClickHouse.
3. Hook ClickHouse up to Kafka.
4. Back the data up to S3 to guard against accidental data loss.

With the plan sorted out, let's get to work and deal with problems as they come up.

1. Analyze the existing data in MongoDB and distill the fields we need as the table columns in ClickHouse.
After analysis, the keys of the MongoDB documents are used directly as the ClickHouse column names, so the DDL is simple. This task needs 3 tables in total; here is the CREATE TABLE statement for one of them:
CREATE TABLE event (column1 Type, column2 Type, ……, event_part String, event_date Date) ENGINE = MergeTree() PARTITION BY toYYYYMM(event_date) ORDER BY (event_part)
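For a more concrete picture, here is a minimal sketch of that DDL with only the four columns the import script below actually writes; the column types (e.g. UInt32 for cdate) are assumptions, and the real table has more columns:

CREATE TABLE event (app_key String, cdate UInt32, event_part String, event_date Date) ENGINE = MergeTree() PARTITION BY toYYYYMM(event_date) ORDER BY (event_part)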
2. Migrate the data from MongoDB into ClickHouse with a Python script:

# imports
import time
from pymongo import MongoClient
from datetime import datetime
import argparse
from clickhouse_driver import Client

class MongodbUtil:
    
    def __init__(self,mongo_url):
        self.mongo_url = mongo_url
        self.myclient = MongoClient(host = mongo_url)
        # select the database
        self.db = self.myclient.data_analysis
        # this call is required, otherwise queries fail with an auth error; pass the username and password
        self.db.authenticate('username','password')
        # select the collections (tables)
        self.mycol_event = self.db["event"]
        self.mycol_ios_dau = self.db["ios_dau"]
        self.mycol_ios_dnu = self.db["ios_dnu"]
    def select_event(self,sum,date):
        print(time.strftime('%Y-%m-%d %H:%M:%S', time.localtime()),"about to import event data for:",date,"----")
        start = time.perf_counter()
        insert_ck_sql = "insert into event(app_key,cdate,event_part,event_date) VALUES"
        cars = self.mycol_event.find({"app_key": {"$in": ["key1", "key2"]}, "cdate": date})
        event = []
        temp = []
        count = 0
        for car in cars:
            app_key = self.str_is_none(car.get('app_key'))
            cdate = self.number_is_none(car.get('cdate'))
            event_part = self.str_is_none(car.get('event_part'))
            # millisecond timestamp from the mongo document; the field name 'event_time' is assumed here
            event_time = self.number_is_none(car.get('event_time'))
            event_date_fro = datetime.fromtimestamp(event_time / 1000.0)
            event_date = datetime.strptime(event_date_fro.strftime('%Y%m%d'),"%Y%m%d").date()
            temp = [app_key,cdate,event_part,event_date]
            event.append(temp)
            count = count + 1
            if count == 10000:
                # insert into ClickHouse every 10,000 rows to avoid frequent I/O
                clickhouse_util.client.execute(insert_ck_sql,event,types_check=True)
                sum+=10000
                print(time.strftime('%Y-%m-%d %H:%M:%S', time.localtime()),"progress, date:",date,", exported so far:",sum,"rows")
                event = []
                count = 0
        # finally, insert the remaining rows (fewer than 10,000)
        clickhouse_util.client.execute(insert_ck_sql,event,types_check=True)
        end = time.perf_counter()
        print(time.strftime('%Y-%m-%d %H:%M:%S', time.localtime()),"event export finished, date:",date,", total exported:",sum+count,"rows,","elapsed:",end-start,"seconds")

    # two null-check helpers: some documents are missing certain keys, so return a default value instead of raising
    def str_is_none(self,column):
        if type(column).__name__ == 'NoneType':
            return ""
        else:
            return str(column)
            
    def number_is_none(self,column):
        if type(column).__name__ == 'NoneType':
            return 0
        else:
            return column
# connect to MongoDB
mongodb_util = MongodbUtil("your mongo url here")

class ClickHouseUtil:

    def __init__(self, host, port, user, password, database):
        self.host = host
        self.port = port
        self.user = user
        self.password = password
        self.database = database
        self.client = Client(host=self.host, port=self.port, user=self.user, password=self.password, database=self.database)

# connect to ClickHouse
clickhouse_util = ClickHouseUtil("host", 9000, "user", "password", "database")

def main():
    print("main---begin")
    start = time.perf_counter()
    # pick the date to export by passing it with argparse when running the script
    ap = argparse.ArgumentParser()
    ap.add_argument('-d', '--date', dest="date", type=int, required=True)
    args = ap.parse_args()
    # export event data from MongoDB and insert it into the event table in ClickHouse
    mongodb_util.select_event(0,args.date)
    end = time.perf_counter()
    print("Exporting from MongoDB to ClickHouse finished, total elapsed:",(end-start),"seconds, export date:",args.date)
    print("main---over")
main()
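One caveat about the db.authenticate() call in the script above: Database.authenticate() was removed in PyMongo 4.x, so on a newer driver you would pass the credentials to MongoClient instead. A minimal sketch, with placeholder connection values:

# PyMongo 4.x style: credentials go directly to MongoClient (host/username/password are placeholders)
from pymongo import MongoClient

myclient = MongoClient(host="your mongo url here", username="username", password="password")
db = myclient.data_analysis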

Run the script and pass the date argument. Since each day's data volume is large, run it with nohup in the background to prevent it from being interrupted unexpectedly:

for var in 20210601 20210602 20210603; do nohup python -u MongoClickhouseTest.py --date $var >> MongoClickhouseTest.out 2>&1 & done;
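Note that this loop launches all three dates in parallel. If that puts too much load on MongoDB or ClickHouse, one alternative (a sketch using the same script and arguments) is to run the dates sequentially inside a single background job:

nohup bash -c 'for var in 20210601 20210602 20210603; do python -u MongoClickhouseTest.py --date $var; done' >> MongoClickhouseTest.out 2>&1 &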

3. Hook ClickHouse up to Kafka and drop the data dependency on MongoDB
Use ClickHouse's Kafka engine to consume the data. A table backed by the Kafka engine has one drawback: it cannot write the data directly into the table we actually want to store it in, because a Kafka-engine table does not persist data. We need to create a materialized view to move the data into the target table.

-- Create a Kafka-engine table in the ClickHouse database. This table cannot persist data,
-- so a materialized view is needed to move the rows into a persistent table.
create table source(temp String)ENGINE = Kafka() SETTINGS kafka_broker_list = 'ip:port', kafka_topic_list = 'kafka topic',kafka_group_name = 'consumer group', kafka_format = 'JSONAsString', kafka_num_consumers = 1
-- Two things to note here:
-- 1. Use the same topic as the existing MongoDB consumer but a different consumer group, otherwise it will conflict with the data MongoDB is already consuming.
-- 2. Set kafka_format to 'JSONAsString' rather than JSON, because ClickHouse cannot directly parse deeply nested JSON.
-- Create the materialized view; it starts consuming as soon as it is executed.
CREATE MATERIALIZED VIEW source_mv TO event 
AS select 
JSONExtractString(JSONExtractRaw(temp,'msg_header'),'app_key') as app_key,
if(startsWith(JSONExtractString(JSONExtractRaw(temp,'msg'),'event_name'),'event1') = 1,'event2','event3') as event_part,
toDate(toDateTime(FROM_UNIXTIME(toInt64(intDiv(JSONExtract(JSONExtractRaw(temp,'msg'),'server_time','UInt64'),1000)), '%Y-%m-%d'))) AS event_date  
from source
-- ClickHouse has very powerful built-in functions that can handle quite complex logic.
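Once the materialized view is created, it is worth verifying that rows are actually flowing into the target table. A quick sanity check with clickhouse_driver, reusing the same placeholder connection settings as the migration script (the query itself is just an illustration):

from clickhouse_driver import Client

# same ClickHouse connection parameters as the migration script above (placeholders)
client = Client(host="host", port=9000, user="user", password="password", database="database")
# count today's rows in the target table to confirm the Kafka -> materialized view path is working
rows = client.execute("SELECT count() FROM event WHERE event_date = today()")
print("rows ingested today:", rows[0][0])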

4. At this point the data migration itself is done. To guard against accidental data loss in ClickHouse, back up the data from the same Kafka topic (under yet another consumer group) to S3.
First write each day's data to a file on the current server:

from pykafka import KafkaClient
import kafka
from boto3.session import Session
import time
import json

client = KafkaClient(hosts="ip1:port,ip2:port")
consumer = kafka.KafkaConsumer('topic name',
                         group_id='consumer group name',
                         bootstrap_servers=['ip1:port'],
                         value_deserializer=lambda m: json.loads(m.decode('ascii'))
                         )
res = []
count = 0 
now_time= int(round(time.time()*1000))
now_date = int(time.strftime("%Y%m%d", time.localtime(now_time/1000)))
temp_date = now_date
filename = 'data_'+str(now_date)+'.txt'
tomorrow_filename = 'data_'+str(now_date+1)+'.txt'

def write(cdate, row_data, filename):
    global res
    global count
    global now_date
    global temp_date
    # the message belongs to a later day than the buffer: flush whatever is buffered
    # (fewer than 10,000 rows) to the current file before starting fresh
    if cdate > temp_date:
        s3_file = open(filename, 'a')
        s3_file.writelines(res)
        s3_file.close()
        count = 0
        res = []
    res.append(row_data)
    res.append("\r\n")
    count += 1
    # flush to disk every 10,000 rows to avoid frequent small writes
    if count == 10000:
        s3_file = open(filename, 'a')
        s3_file.writelines(res)
        print("write------------------complete--------------")
        s3_file.close()
        count = 0
        res = []

def main(): 
    global now_date
    global filename
    global temp_date
    global now_time
    for message in consumer:       
        timestamp = message.value.get('msg').get('server_time')
        time_local = time.localtime(timestamp/1000)
        cdate = int(time.strftime("%Y%m%d", time_local))
        # the message belongs to a new day
        if cdate > now_date:
            # first flush the previous day's leftover rows (fewer than 10,000)
            write(cdate,str(message.value),filename)
            filename = 'data_'+str(cdate)+'.txt'
            # reset now_date
            now_time= int(round(time.time()*1000))
            now_date = int(time.strftime("%Y%m%d", time.localtime(now_time/1000)))
            temp_date = now_date
        # the message belongs to the current day
        elif cdate == now_date:
            now_time= int(round(time.time()*1000))
            now_date = int(time.strftime("%Y%m%d", time.localtime(now_time/1000)))
            if now_date>temp_date:
                # just past midnight: messages from the previous day may still arrive
                now_date = now_date - 1
                write(cdate,str(message.value),filename)
                continue
            # write to file
            write(cdate,str(message.value),filename)
main()  
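This consumer needs to keep running, so just like the migration script I start it in the background with nohup (the script file name here is just an assumed example):

nohup python -u kafka_backup.py >> kafka_backup.out 2>&1 &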

A scheduled script uploads the previous day's backup file to S3 every day:

from boto3.session import Session
import time
import datetime

aws_key = 'aws key'
aws_secret = 'aws secret'
session = Session(aws_access_key_id=aws_key,
                  aws_secret_access_key=aws_secret,
                  region_name='us-west-2')

s3 = session.resource('s3')
client = session.client('s3')
# bucket name
bucket = 'event-bucket'

now_time= int(round(time.time()*1000))
now_date = int(time.strftime("%Y%m%d", time.localtime(now_time/1000)))
oneDayAgo = (datetime.datetime.now() - datetime.timedelta(days = 1))
temp_date = oneDayAgo.strftime("%Y%m%d")
filename = "data_"+str(temp_date)+".txt"
obj_name =  "s3_data_"+str(temp_date)+".txt"
def upload():
    filename = 'data_'+str(temp_date)+'.txt'
    objkey = filename
    # upload yesterday's backup file to the S3 bucket
    with open(objkey, 'rb') as data:
        s3.Bucket(bucket).put_object(Key=obj_name, Body=data)

upload()
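To run the upload every day, a crontab entry along these lines works (path and script name are placeholders); scheduling it shortly after midnight means yesterday's file is already complete:

10 0 * * * python /path/to/upload_to_s3.py >> /path/to/upload_to_s3.log 2>&1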

That wraps up the migration from MongoDB to ClickHouse and the backup work. If you have questions or better approaches, feel free to leave a comment; I will reply to everything I see.

Reposted from blog.csdn.net/weixin_44123540/article/details/118303490