A Full-Stack Implementation of Real-Time Data Visualization Monitoring (Kafka + Spark + TimescaleDB + Flask + Node.js)

A project requirement called for a real-time monitoring system that presents the business data reported by our platform at a one-minute granularity. I built the following architecture to implement it.

The platform publishes its business data to Kafka in real time. For example, every time the platform performs an OTA upgrade for a vehicle, it emits an "OTA request" event followed by an "OTA complete" or "OTA failure" event. These events are sent to Kafka, where Spark Streaming processes them in real time and writes the results to a time-series database. The web reporting front end calls a REST API exposed by the back end once a minute to read the database and refresh the charts.
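
Throughout this article the events are plain CSV strings: an ISO-8601 timestamp followed by the event type. The tiny helper below only illustrates that format (it is not part of the project code); the producer in the next section emits exactly these strings, and Spark later splits them on the comma.

import datetime

def make_event(event_type):
    # Build an OTA event message, e.g. "2020-03-13T10:15:00.123456,OTA request".
    return datetime.datetime.now().isoformat() + ',' + event_type

print(make_event('OTA request'))
print(make_event('OTA complete'))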

Setting up the Kafka messaging system

The first step is to set up Kafka; I downloaded the kafka_2.12 binary release. Following the official quick start, start ZooKeeper first with bin/zookeeper-server-start.sh config/zookeeper.properties, then start the Kafka broker with bin/kafka-server-start.sh config/server.properties.
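
If topic auto-creation is disabled on the broker, the OTA topic has to be created before producing to it. A minimal sketch using kafka-python's admin client (this step is my own addition; the kafka-topics.sh script that ships with Kafka works just as well):

from kafka.admin import KafkaAdminClient, NewTopic

# Create the 'OTA' topic with a single partition, matching the Spark job below,
# which reads from partition 0 only.
admin = KafkaAdminClient(bootstrap_servers='localhost:9092')
admin.create_topics([NewTopic(name='OTA', num_partitions=1, replication_factor=1)])
admin.close()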

Write a producer program that simulates sending an OTA request event followed by an OTA success or failure event:

from kafka import KafkaProducer
import datetime
import time
import random

# Simulate 100 OTA transactions: each one is an 'OTA request' event followed,
# after a short delay, by an 'OTA complete' or 'OTA failure' event. Every message
# is a CSV string of "<ISO timestamp>,<event type>".
producer = KafkaProducer(bootstrap_servers='localhost:9092')
for i in range(100):
    ts = datetime.datetime.now().isoformat()
    msg = ts + ',' + 'OTA request'
    producer.send('OTA', msg.encode('utf8'))
    time.sleep(0.5)
    # Roughly 80% of the upgrades complete successfully, the rest fail.
    flag = random.randint(1, 10)
    ts = datetime.datetime.now().isoformat()
    if flag > 2:
        msg = ts + ',' + 'OTA complete'
    else:
        msg = ts + ',' + 'OTA failure'
    producer.send('OTA', msg.encode('utf8'))
# Make sure all buffered messages are delivered before the script exits.
producer.flush()

Then write a consumer program that subscribes to the OTA topic and receives the events:

from kafka import KafkaConsumer

# Subscribe to the OTA topic and print every event as it arrives.
consumer = KafkaConsumer('OTA', bootstrap_servers='localhost:9092')
for msg in consumer:
    print(msg.value.decode('utf8'))

Run the producer and the consumer, and you can see that events are sent and received correctly.

Creating the time-series database

Because the data is reported in time order, a time-series database is the most efficient way to store it and query it later. TimescaleDB is an open-source time-series database that runs as a PostgreSQL extension. The official documentation at https://docs.timescale.com/ explains how to use it and includes some good tutorials, such as an analysis of New York City taxi trips. Once TimescaleDB is installed, we can create a database in PostgreSQL. The following creates an ota database containing an ota table with two columns, the event timestamp and the service type, and converts the table into a hypertable:

create database ota;
\c ota
CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;
CREATE TABLE "ota" (time TIMESTAMPTZ NOT NULL, servicetype TEXT);
-- Convert the plain table into a TimescaleDB hypertable partitioned on the time column.
SELECT create_hypertable('ota', 'time');

Processing real-time data with Spark Streaming

The next step is to use a stream-processing engine to subscribe to the Kafka data and write it into the time-series database in real time; here I use Spark Streaming. First, define a small module, connection_pool.py, that provides a database connection pool:

from psycopg2.pool import SimpleConnectionPool

# A process-wide pool that keeps between 1 and 10 connections to the ota database.
# Replace the password placeholder with your own credentials.
conn_pool = SimpleConnectionPool(1, 10, "dbname=ota user=postgres password=XXXXXX")

def getConnection():
    # Borrow a connection from the pool.
    return conn_pool.getconn()

def putConnection(conn):
    # Return a connection to the pool.
    conn_pool.putconn(conn)

def closeConnection():
    # Close all connections held by the pool.
    conn_pool.closeall()
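
A quick way to sanity-check the pool before wiring it into Spark (a throwaway snippet, not part of the project code):

import connection_pool

# Borrow a connection, run a trivial query, and hand the connection back.
conn = connection_pool.getConnection()
cursor = conn.cursor()
cursor.execute("SELECT 1;")
print(cursor.fetchone())
connection_pool.putConnection(conn)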

Then sparkstream.py consumes the Kafka stream and writes the events into the database:

from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils, TopicAndPartition
from pyspark import SparkConf, SparkContext
import connection_pool

offsetRanges = []

def start():
    sconf = SparkConf()
    sconf.set('spark.cores.max', 3)
    sc = SparkContext(appName='OTAStream', conf=sconf)
    # Use a 5-second micro-batch interval.
    ssc = StreamingContext(sc, 5)
    brokers = "localhost:9092"
    topic = 'OTA'
    start_offset = 0
    partition = 0
    # Read the OTA topic directly from Kafka, starting at offset 0 of partition 0.
    ota_data = KafkaUtils.createDirectStream(
        ssc,
        [topic],
        kafkaParams={"metadata.broker.list": brokers},
        fromOffsets={TopicAndPartition(topic, partition): start_offset}
    )
    # Record the offset ranges of each batch (not committed anywhere in this demo).
    ota_data.foreachRDD(offset)
    # Each Kafka record is a (key, value) pair; split the CSV value into [timestamp, event type].
    ota_fields = ota_data.map(lambda x: x[1].split(','))
    ota_fields.foreachRDD(lambda rdd: rdd.foreachPartition(echo))
    ssc.start()
    # For this demo, let the stream run for 15 seconds before the driver exits.
    ssc.awaitTermination(15)
    connection_pool.closeConnection()

def offset(rdd):
    global offsetRanges
    offsetRanges = rdd.offsetRanges()

def echo(recordOfPartition):
    # Runs on the executors: insert every record of the partition into the ota table.
    conn = connection_pool.getConnection()
    cursor = conn.cursor()
    for record in recordOfPartition:
        # Parameterized insert instead of building the SQL with string formatting.
        cursor.execute("insert into ota values (%s, %s)", (record[0], record[1]))
    conn.commit()
    connection_pool.putConnection(conn)

if __name__ == '__main__':
    start()

Submit the job from the Spark installation directory with: bin/spark-submit --jars jars/spark-streaming-kafka-0-8-assembly_2.11-2.4.5.jar --py-files ~/projects/monitor/connection_pool.py ~/projects/monitor/sparkstream.py. Start the Kafka producer program at the same time, then query the ota database and you will see the corresponding rows appear.
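
That check can also be scripted; the snippet below simply counts the rows Spark has written, grouped by event type (adjust the connection string to your own credentials):

import psycopg2

# Count the events that the Spark job has inserted into the ota table.
with psycopg2.connect("dbname=ota user=postgres password=XXXXXX") as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT servicetype, count(*) FROM ota GROUP BY servicetype;")
        for servicetype, cnt in cur.fetchall():
            print(servicetype, cnt)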

Defining the back-end REST API to query the database

The back end needs to serve data to the front end; here I use Flask to build a REST API, saved as flaskapi.py:

from flask import make_response, Flask
from flask_cors import CORS
import psycopg2
import json

# A single shared connection and cursor are reused for every request (fine for this demo).
conn = psycopg2.connect("dbname=ota user=postgres password=123456")
cursor = conn.cursor()

# Bucket the events into 5-second intervals and, for each bucket, count the completed
# and failed OTA operations and compute the completion rate. COALESCE keeps a valid
# timestamp even for buckets that contain only completions or only failures.
sql = """
    SELECT COALESCE(a.five_sec, b.five_sec) AS five_sec,
           a.cnt AS complete, b.cnt AS failure,
           CAST(a.cnt AS float) / (a.cnt + b.cnt) AS percent
    FROM (SELECT time_bucket('5 second', time) AS five_sec, count(*) AS cnt
          FROM ota WHERE servicetype = 'OTA complete' GROUP BY five_sec) a
    FULL JOIN
         (SELECT time_bucket('5 second', time) AS five_sec, count(*) AS cnt
          FROM ota WHERE servicetype = 'OTA failure' GROUP BY five_sec) b
    ON a.five_sec = b.five_sec
    ORDER BY 1 DESC LIMIT 10;
"""

app = Flask(__name__)
CORS(app, supports_credentials=True)

@app.route('/ota')
def get_ota():
    cursor.execute(sql)
    timebucket = []
    complete_cnt = []
    failure_cnt = []
    complete_rate = []
    records = cursor.fetchall()
    for record in records:
        timebucket.append(record[0].strftime('%H:%M:%S'))
        complete_cnt.append(0 if record[1] is None else record[1])
        failure_cnt.append(0 if record[2] is None else record[2])
        if record[1] is None:       # no completions in this bucket
            rate = 0.
        elif record[2] is None:     # no failures in this bucket
            rate = 100.
        else:
            rate = round(record[3], 3) * 100
        complete_rate.append(rate)
    # The query returns the newest buckets first; reverse so the chart reads left to right.
    timebucket = list(reversed(timebucket))
    complete_cnt = list(reversed(complete_cnt))
    failure_cnt = list(reversed(failure_cnt))
    complete_rate = list(reversed(complete_rate))
    result = {'category': timebucket, 'complete': complete_cnt, 'failure': failure_cnt, 'rate': complete_rate}
    response = make_response(json.dumps(result))
    return response
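
The endpoint can be tested quickly from Python once the Flask app is running (this assumes the default port 5000 and that the requests package is installed):

import requests

# Fetch the aggregated OTA statistics from the Flask API and print the JSON payload.
resp = requests.get('http://localhost:5000/ota')
print(resp.json())   # {'category': [...], 'complete': [...], 'failure': [...], 'rate': [...]}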

With the time-series database it is very convenient to bucket and query the data by time: the query above aggregates the events into 5-second intervals, counts the completed and failed OTA operations in each interval, and computes the completion rate. The result is returned to the caller as JSON.
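
The same pattern scales directly to the one-minute granularity mentioned at the beginning of the article: only the bucket width passed to time_bucket changes. Below is a hedged sketch of a reusable helper (query_ota_stats and its parameters are my own names, not part of the original code) that parameterizes the bucket width and the number of buckets returned:

import psycopg2

def query_ota_stats(bucket='1 minute', limit=10,
                    dsn="dbname=ota user=postgres password=XXXXXX"):
    # Same aggregation as the Flask endpoint, but with a configurable bucket width.
    sql = """
        SELECT COALESCE(c.bucket, f.bucket) AS bucket,
               c.cnt AS complete, f.cnt AS failure,
               CAST(c.cnt AS float) / (c.cnt + f.cnt) AS percent
        FROM (SELECT time_bucket(%s::interval, time) AS bucket, count(*) AS cnt
              FROM ota WHERE servicetype = 'OTA complete' GROUP BY bucket) c
        FULL JOIN
             (SELECT time_bucket(%s::interval, time) AS bucket, count(*) AS cnt
              FROM ota WHERE servicetype = 'OTA failure' GROUP BY bucket) f
        ON c.bucket = f.bucket
        ORDER BY 1 DESC LIMIT %s;
    """
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(sql, (bucket, bucket, limit))
        return cur.fetchall()

# Example: the ten most recent one-minute buckets.
for row in query_ota_stats('1 minute'):
    print(row)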

Front-end monitoring dashboard

The last part is rendering the charts on the front end. I use ECharts, an open-source JavaScript charting library from Baidu, and build the front-end project with Node.js.

Run the following commands to set up the project:

mkdir react-monitor
cd react-monitor
npm init -y
npm i webpack webpack-cli webpack-dev-server -D
npm i -D babel-core babel-loader@7 babel-preset-env babel-preset-react babel-plugin-transform-runtime babel-runtime

Edit the webpack.config.js file:

var webpack = require('webpack');
var path = require('path');
const {CleanWebpackPlugin} = require("clean-webpack-plugin");
var APP_DIR = path.resolve(__dirname, 'src');
var BUILD_DIR = path.resolve(__dirname, 'dist');
const HtmlWebpackPlugin = require("html-webpack-plugin");
var config = {
    entry:APP_DIR+'/index.jsx',
    output:{
        path:BUILD_DIR,
        filename:'bundle.js'
    },
    module:{
        rules:[
            {
                test:/\.(js|jsx)$/,
                exclude:/node_modules/,
                use:{
                    loader:"babel-loader"
                }
            },
            {
                test:/\.css$/,
                use:['style-loader','css-loader']
            }
        ]
    },
    devServer:{
        port:3000,
        contentBase:"./dist"
    },
    plugins:[
        new HtmlWebpackPlugin({
            template: "index.html",
            inject: true,
            sourceMap: true,
            chunksSortMode: "dependency"
        }),
        new CleanWebpackPlugin()
    ]
};
module.exports = config;

Create a .babelrc file with the following configuration:

{
    "presets": ["env","react"],
    "plugins": [[
        "transform-runtime",
        {
          "helpers": false,
          "polyfill": false,
          "regenerator": true,
          "moduleName": "babel-runtime"
        }
    ]]
}

Install the remaining npm packages from the command line:

npm install react react-dom -S
npm install html-webpack-plugin clean-webpack-plugin -D
npm install axios --save
npm install echarts --save

Create an index.html page to hold the chart container:

<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width,initial-scale=1.0">
        <meta http-equiv="X-UA-Compatible" content="ie=edge">
        <title>hello world</title>
    </head>
    <body>
        <div id="chart" style="height:500px;width:1000px"></div>
    </body>
</html>

In the src directory, create an index.jsx file that defines a React component wrapping ECharts:

import React from 'react';
import {render} from 'react-dom';
import echarts from 'echarts';
import axios from 'axios';

let option = {
    tooltip: {
        trigger: 'axis',
        axisPointer: {
            type: 'cross',
            crossStyle: {
                color: '#999'
            }
        }
    },
    toolbox: {
        feature: {
            dataView: {show: true, readOnly: false},
            magicType: {show: true, type: ['line', 'bar']},
            restore: {show: true},
            saveAsImage: {show: true}
        }
    },
    legend: {
        data: ['OTA Complete', 'OTA Failure', 'OTA Complete Rate']
    },
    xAxis: [
        {
            type: 'category',
            data: [],
            axisPointer: {
                type: 'shadow'
            },
            axisLabel: {
                interval: 0,
                rotate: 60
            }
        }
    ],
    yAxis: [
        {
            type: 'value',
            name: 'Count',
            min: 0,
            max: 20,
            interval: 5,
            axisLabel: {
                formatter: '{value}'
            }
        },
        {
            type: 'value',
            name: 'Percent',
            min: 0,
            max: 100,
            interval: 10,
            axisLabel: {
                formatter: '{value} %'
            }
        }
    ],
    series: [
        {
            name: 'OTA Complete',
            type: 'bar',
            data: []
        },
        {
            name: 'OTA Failure',
            type: 'bar',
            data: []
        },
        {
            name: 'OTA Complete Rate',
            type: 'line',
            yAxisIndex: 1,
            data: []
        }
    ]
};

class App extends React.Component{
    constructor(props){
        super(props);
        this.state={
            category:[],
            series_data_1:[],
            series_data_2:[],
            series_data_3:[]
        };
    }
    async componentDidMount () {
        let myChart = echarts.init(document.getElementById('chart'));
        await axios.get('http://localhost:5000/ota')
        .then(function (response){
            option.xAxis[0].data = response.data.category;
            option.series[0].data = response.data.complete;
            option.series[1].data = response.data.failure;
            option.series[2].data = response.data.rate;
        })
        .catch(function (error) {
            console.log(error);
        });
        myChart.setOption(option,true)
        // Refresh the chart every 2 seconds; keep the timer id on the instance
        // so that it can be cleared when the component unmounts.
        this.timer = setInterval(async ()=>{
            await axios.get('http://localhost:5000/ota')
            .then(function (response){
                option.xAxis[0].data = response.data.category;
                option.series[0].data = response.data.complete;
                option.series[1].data = response.data.failure;
                option.series[2].data = response.data.rate;
            })
            .catch(function (error) {
                console.log(error);
            });
            myChart.setOption(option,true);
        }, 1000*2)
    }
    async getData(){
        return await axios.get('http://localhost:5000/ota');
    }
    render(){
        // echarts renders directly into the #chart div, so the component itself renders nothing.
        return null
    }
    componentWillUnmount() {
        clearInterval(this.timer);
    }
}
render(<App/>, document.getElementById('chart'));

Running the system

Now everything can be put to the test. Follow these steps:

  1. Start Kafka and publish test data to the OTA topic
  2. Submit the Spark job to process the data in real time and write it into TimescaleDB
  3. Run Flask: export FLASK_APP=flaskapi.py, then flask run
  4. Run npm run start in the React project (assuming package.json defines a start script that launches webpack-dev-server)
  5. Open a browser at localhost:3000 and you will see the chart refresh every 2 seconds

Reposted from blog.csdn.net/gzroy/article/details/104849966