Databend vs. ClickHouse: a performance comparison of object-storage-based data warehouses | Guess who wins

Introduction: The full article takes about 30 minutes to read. If you only care about the results, skip straight to the end. The benchmark methodology and scripts are included.

MySQL has been very stable in recent years, and the architectures built around it are mature. A new business reality, however, is that data keeps growing, and MySQL struggles to support analytical workloads, which is why so many HTAP architectures have appeared. If running a new HTAP system would exceed your budget, you can use ClickHouse or Databend to support the business instead. This article tests the performance difference between ClickHouse and Databend on object storage; both products currently support S3 as their storage layer and allocate storage on demand.

ClickHouse is known as the fastest wide-table query database on the planet: clickhouse.com/. After taking investment last year, ClickHouse has also been moving in the cloud-native direction, and one important piece of that work is supporting S3 as a new storage type.

Databend is a new cloud-native data warehouse built on object storage, aiming for low cost, high performance, and elastic scaling.

Documentation: databend.rs/doc (includes the Databend architecture diagram)

Databend repo: github.com/datafuselab… (stars and follows are welcome)

First of all, what is cloud native? The term has been used so much that it is almost worn out. Many people think MySQL running on the cloud is cloud-native MySQL, but it is not. A truly cloud-native service should meet the following requirements:

• No need to do hardware management and configuration

• No software installation and management required

• No need to care about fault management, upgrades and optimizations

• Supports rapid elastic scale-out and scale-in

• Pay only for the storage and compute you actually use; if there are no business requests, you pay nothing

• No need to worry much about how many resources you are using

If we measure MySQL on the cloud against the criteria above, we find that users still have to manage plenty of upgrade, configuration, optimization, and failure issues, and MySQL support staff at the cloud platform likewise still have a pile of failure-handling, upgrade, and optimization work. Most importantly, MySQL on the cloud cannot scale within seconds or charge only for the resources actually used. Cloud native is the effort to move in that direction and make users' lives easier.

To be more concrete, are there cloud-native applications today? Of course there are, and these products have set a benchmark for us: for example, CockroachDB Cloud and PlanetScale in the database world, and SnowflakeDB in the data warehouse world, already meet the requirements above. Databend is also being developed toward the same goal.

Why does Databend use S3 object storage?

For database developers, building a dedicated storage engine is an ambition many engineers share. At the very start of its design, Databend put the following requirements to its storage layer:

• High availability

• No need to care about the number of replicas

• Available across multiple IDCs, with multi-cloud switching

• Supports global data sharing and distribution

• No need to plan reserved capacity; pay only for the space actually used

• Supports concurrent reads and writes from multiple clusters over the same copy of data, with snapshot-level isolation

• Full transaction support

• No backup management; supports flashback to any point within a retention window (at table or database granularity)

After evaluating against the requirements above, we found that cloud object storage is exactly what we need. With object storage there is no capacity reservation to plan for, every write is incremental, and delete/drop can be implemented as lazy operations with snapshot isolation, so we decided to build a cloud-native data warehouse on object storage. Databend currently supports AWS S3, QCloud COS, Alibaba Cloud OSS, MinIO, and other products that speak the S3 protocol. For more deployment options, see: databend.rs/doc/categor…

Test steps

An AWS c5n.9xlarge instance was used: 36 cores, 72 GB RAM, 200 GB disk (used only to hold the ontime source data).

- OS: Ubuntu 20

- ClickHouse: 22.2.3 (installed following the official documentation)

- Databend: the binary release fetched from GitHub on the day of testing

github.com/datafuselab…

Downloading the data

wget --no-check-certificate --continue https://transtats.bts.gov/PREZIP/On_Time_Reporting_Carrier_On_Time_Performance_1987_present_{1987..2021}_{1..12}.zip

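The download fetches one archive per month from 1987 through 2021. As an optional sanity check (not part of the original procedure, just standard shell tools), you can count how many archives actually arrived and how much space they take:

# Roughly 35 years x 12 months of zip archives should be present
ls On_Time_Reporting_Carrier_On_Time_Performance_1987_present_*.zip | wc -l
# Total size of the downloaded archives
du -ch On_Time_Reporting_Carrier_On_Time_Performance_1987_present_*.zip | tail -n 1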

ClickHouse configuration and table schema

Add a storage.xml file under /etc/clickhouse-server/config.d with the following content:

<yandex>
    <storage_configuration>
        <disks>
            <s3>
                <type>s3</type>
                <endpoint>https://databend-shared.s3.us-east-2.amazonaws.com/ch-data-s3/</endpoint>
                <access_key_id>your-key-id</access_key_id>
                <secret_access_key>your-key</secret_access_key>
                <cache_enabled>true</cache_enabled>
            </s3>
        </disks>
        <policies>
            <s3>
                <volumes>
                    <main>
                        <disk>s3</disk>
                    </main>
                </volumes>
            </s3>
        </policies>
    </storage_configuration>
</yandex>


Replace your-key-id and your-key above with the values from your own environment. For the ClickHouse data import and table schema, see: clickhouse.com/docs/en/get…

The ClickHouse ontime table definition needs one small change: just append storage_policy='s3' at the end of the SETTINGS clause:

CREATE TABLE `ontime`
(
    ...
) ENGINE = MergeTree
      PARTITION BY Year
      ORDER BY (IATA_CODE_Reporting_Airline, FlightDate)
      SETTINGS index_granularity = 8192, storage_policy='s3' ;


With this policy the data is stored on S3, but the metadata still lives locally in ClickHouse.
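
As an optional check (not part of the original steps), you can confirm that the s3 policy from storage.xml was picked up and that the active parts of ontime really land on the s3 disk, by querying ClickHouse's system tables:

# Check that the s3 policy defined in storage.xml is visible to the server
echo "SELECT policy_name, volume_name, disks FROM system.storage_policies" | clickhouse-client --host 127.0.0.1 --port 9000
# After the import, confirm the active parts of ontime sit on the s3 disk
echo "SELECT disk_name, count() AS parts, sum(rows) AS total_rows FROM system.parts WHERE table = 'ontime' AND active GROUP BY disk_name" | clickhouse-client --host 127.0.0.1 --port 9000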

Databend configuration and table schema

For the Databend configuration, see: databend.rs/doc/deploy/… Just make sure that Databend and ClickHouse point at the same bucket. Databend table schema: create_ontime.sql

CREATE TABLE ontime
(
    Year                            UInt16 NOT NULL,
    Quarter                         UInt8 NOT NULL,
    Month                           UInt8 NOT NULL,
    DayofMonth                      UInt8 NOT NULL,
    DayOfWeek                       UInt8 NOT NULL,
    FlightDate                      Date NOT NULL,
    Reporting_Airline               String NOT NULL,
    DOT_ID_Reporting_Airline        Int32 NOT NULL,
    IATA_CODE_Reporting_Airline     String NOT NULL,
    Tail_Number                     String NOT NULL,
    Flight_Number_Reporting_Airline String NOT NULL,
    OriginAirportID                 Int32 NOT NULL,
    OriginAirportSeqID              Int32 NOT NULL,
    OriginCityMarketID              Int32 NOT NULL,
    Origin                          String NOT NULL,
    OriginCityName                  String NOT NULL,
    OriginState                     String NOT NULL,
    OriginStateFips                 String NOT NULL,
    OriginStateName                 String NOT NULL,
    OriginWac                       Int32 NOT NULL,
    DestAirportID                   Int32 NOT NULL,
    DestAirportSeqID                Int32 NOT NULL,
    DestCityMarketID                Int32 NOT NULL,
    Dest                            String NOT NULL,
    DestCityName                    String NOT NULL,
    DestState                       String NOT NULL,
    DestStateFips                   String NOT NULL,
    DestStateName                   String NOT NULL,
    DestWac                         Int32 NOT NULL,
    CRSDepTime                      Int32 NOT NULL,
    DepTime                         Int32 NOT NULL,
    DepDelay                        Int32 NOT NULL,
    DepDelayMinutes                 Int32 NOT NULL,
    DepDel15                        Int32 NOT NULL,
    DepartureDelayGroups            String NOT NULL,
    DepTimeBlk                      String NOT NULL,
    TaxiOut                         Int32 NOT NULL,
    WheelsOff                       Int32 NOT NULL,
    WheelsOn                        Int32 NOT NULL,
    TaxiIn                          Int32 NOT NULL,
    CRSArrTime                      Int32 NOT NULL,
    ArrTime                         Int32 NOT NULL,
    ArrDelay                        Int32 NOT NULL,
    ArrDelayMinutes                 Int32 NOT NULL,
    ArrDel15                        Int32 NOT NULL,
    ArrivalDelayGroups              Int32 NOT NULL,
    ArrTimeBlk                      String NOT NULL,
    Cancelled                       UInt8 NOT NULL,
    CancellationCode                String NOT NULL,
    Diverted                        UInt8 NOT NULL,
    CRSElapsedTime                  Int32 NOT NULL,
    ActualElapsedTime               Int32 NOT NULL,
    AirTime                         Int32 NOT NULL,
    Flights                         Int32 NOT NULL,
    Distance                        Int32 NOT NULL,
    DistanceGroup                   UInt8 NOT NULL,
    CarrierDelay                    Int32 NOT NULL,
    WeatherDelay                    Int32 NOT NULL,
    NASDelay                        Int32 NOT NULL,
    SecurityDelay                   Int32 NOT NULL,
    LateAircraftDelay               Int32 NOT NULL,
    FirstDepTime                    String NOT NULL,
    TotalAddGTime                   String NOT NULL,
    LongestAddGTime                 String NOT NULL,
    DivAirportLandings              String NOT NULL,
    DivReachedDest                  String NOT NULL,
    DivActualElapsedTime            String NOT NULL,
    DivArrDelay                     String NOT NULL,
    DivDistance                     String NOT NULL,
    Div1Airport                     String NOT NULL,
    Div1AirportID                   Int32 NOT NULL,
    Div1AirportSeqID                Int32 NOT NULL,
    Div1WheelsOn                    String NOT NULL,
    Div1TotalGTime                  String NOT NULL,
    Div1LongestGTime                String NOT NULL,
    Div1WheelsOff                   String NOT NULL,
    Div1TailNum                     String NOT NULL,
    Div2Airport                     String NOT NULL,
    Div2AirportID                   Int32 NOT NULL,
    Div2AirportSeqID                Int32 NOT NULL,
    Div2WheelsOn                    String NOT NULL,
    Div2TotalGTime                  String NOT NULL,
    Div2LongestGTime                String NOT NULL,
    Div2WheelsOff                   String NOT NULL,
    Div2TailNum                     String NOT NULL,
    Div3Airport                     String NOT NULL,
    Div3AirportID                   Int32 NOT NULL,
    Div3AirportSeqID                Int32 NOT NULL,
    Div3WheelsOn                    String NOT NULL,
    Div3TotalGTime                  String NOT NULL,
    Div3LongestGTime                String NOT NULL,
    Div3WheelsOff                   String NOT NULL,
    Div3TailNum                     String NOT NULL,
    Div4Airport                     String NOT NULL,
    Div4AirportID                   Int32 NOT NULL,
    Div4AirportSeqID                Int32 NOT NULL,
    Div4WheelsOn                    String NOT NULL,
    Div4TotalGTime                  String NOT NULL,
    Div4LongestGTime                String NOT NULL,
    Div4WheelsOff                   String NOT NULL,
    Div4TailNum                     String NOT NULL,
    Div5Airport                     String NOT NULL,
    Div5AirportID                   Int32 NOT NULL,
    Div5AirportSeqID                Int32 NOT NULL,
    Div5WheelsOn                    String NOT NULL,
    Div5TotalGTime                  String NOT NULL,
    Div5LongestGTime                String NOT NULL,
    Div5WheelsOff                   String NOT NULL,
    Div5TailNum                     String NOT NULL
);

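Before loading, an optional check (not part of the original steps): confirm the Databend server answers on its MySQL-compatible endpoint (127.0.0.1:3307, the port used throughout this post) and apply the schema above. The load script below repeats the create step, so this is only a sanity check:

# Confirm databend-query is reachable over the MySQL protocol on port 3307
echo "select version()" | mysql -h127.0.0.1 -P3307 -uroot
# Apply the ontime schema saved above as create_ontime.sql (optional; load_ontime.sh also does this)
cat create_ontime.sql | mysql -h127.0.0.1 -P3307 -uroot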

How to load the data:

cat load_ontime.sh

echo "unzip ontime ,input your ontime zip dir: ./load_ontime.sh zip_dir"

ls $1/*.zip |xargs -I{} -P 4 bash -c "echo {}; unzip -q {} '*.csv' -d ./dataset"

if [ $? -eq  0 ];
then
    echo "unzip success"
else
    echo "unzip was wrong!!!"
    exit 1
fi

cat create_ontime.sql |mysql -h127.0.0.1 -P3307 -uroot
if [ $? -eq  0 ];
then
    echo "Ontime table create success"
else
    echo "Ontime table create was wrong!!!"
    exit 1
fi


time ls ./dataset/*.csv|xargs -P 8 -I{} curl -H "insert_sql:insert into ontime format CSV" -H "skip_header:1" -F "upload=@{}" -XPUT http://localhost:8081/v1/streaming_load


# Make load_ontime.sh executable
chmod +x load_ontime.sh
# Install the MySQL client
sudo apt-get install mysql-client


Run load_ontime.sh with the directory holding the downloaded ontime zip files as its argument, and the data will be loaded.
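
For example, assuming the zip files were downloaded into a directory named ./ontime_zips (a placeholder path, not from the original post):

./load_ontime.sh ./ontime_zips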

Something interesting showed up here: because ClickHouse has no transaction support, the amount of data that ends up loaded may be inconsistent under different load concurrency.
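
Since both systems are loaded from the same CSV files, a quick row-count comparison makes such a discrepancy easy to spot (an optional check, using the same endpoints as the scripts in this post):

# Row count in Databend via its MySQL-compatible port
echo "select count(*) from ontime" | mysql -h127.0.0.1 -P3307 -uroot -s
# Row count in ClickHouse via the native client
echo "SELECT count() FROM ontime" | clickhouse-client --host 127.0.0.1 --port 9000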

Test scripts

hyperfine is used for the benchmark and needs to be installed separately:

wget https://github.com/sharkdp/hyperfine/releases/download/v1.13.0/hyperfine_1.13.0_amd64.deb
sudo dpkg -i hyperfine_1.13.0_amd64.deb

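For reference, hyperfine takes a warm-up count (-w), a run count (-r), and one or more command strings, and can export results as a markdown table. A minimal single-query sketch against the Databend endpoint assumed throughout this post (the query is just the first statement of bench.sql below) would look like:

# 3 warm-up runs, 10 measured runs of one query, results exported to q1.md
hyperfine -w 3 -r 10 \
  "echo 'SELECT DayOfWeek, count(*) AS c FROM ontime WHERE Year >= 2000 AND Year <= 2008 GROUP BY DayOfWeek ORDER BY c DESC;' | mysql -h127.0.0.1 -P3307 -uroot -s" \
  --export-markdown q1.md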

The benchmark script:

cat run_ontime.sh

#!/bin/bash

cat << EOF > bench.sql
SELECT DayOfWeek, count(*) AS c FROM ontime WHERE Year >= 2000 AND Year <= 2008 GROUP BY DayOfWeek ORDER BY c DESC;
SELECT DayOfWeek, count(*) AS c FROM ontime WHERE DepDelay>10 AND Year >= 2000 AND Year <= 2008 GROUP BY DayOfWeek ORDER BY c DESC;
SELECT Origin, count(*) AS c FROM ontime WHERE DepDelay>10 AND Year >= 2000 AND Year <= 2008 GROUP BY Origin ORDER BY c DESC LIMIT 10;
SELECT IATA_CODE_Reporting_Airline AS Carrier, count(*) FROM ontime WHERE DepDelay>10 AND Year = 2007 GROUP BY Carrier ORDER BY count(*) DESC;
SELECT IATA_CODE_Reporting_Airline AS Carrier, avg(DepDelay>10)*1000 AS c3 FROM ontime WHERE Year=2007 GROUP BY Carrier ORDER BY c3 DESC;
SELECT IATA_CODE_Reporting_Airline AS Carrier, avg(DepDelay>10)*1000 AS c3 FROM ontime WHERE Year>=2000 AND Year <=2008 GROUP BY Carrier ORDER BY c3 DESC;
SELECT IATA_CODE_Reporting_Airline AS Carrier, avg(DepDelay) * 1000 AS c3 FROM ontime WHERE Year >= 2000 AND Year <= 2008 GROUP BY Carrier;
SELECT Year, avg(DepDelay) FROM ontime GROUP BY Year;
select Year, count(*) as c1 from ontime group by Year;
SELECT avg(cnt) FROM (SELECT Year,Month,count(*) AS cnt FROM ontime WHERE DepDel15=1 GROUP BY Year,Month) a;
select avg(c1) from (select Year,Month,count(*) as c1 from ontime group by Year,Month) a;
SELECT OriginCityName, DestCityName, count(*) AS c FROM ontime GROUP BY OriginCityName, DestCityName ORDER BY c DESC LIMIT 10;
SELECT OriginCityName, count(*) AS c FROM ontime GROUP BY OriginCityName ORDER BY c DESC LIMIT 10;
EOF

WARMUP=3
RUN=10

export script="hyperfine -w $WARMUP -r $RUN"

script=""

function run() {
        port=$1
        result=$2
        script="hyperfine -w $WARMUP -r $RUN"
        i=0
        while read SQL; do
                f=/tmp/bench_${i}.sql
        echo "$before_sql" > $f
                echo "$SQL" >> $f
                #s="cat $f | clickhouse-client --host 127.0.0.1 --port $port"
                s="cat $f | mysql -h127.0.0.1 -P$port -uroot -s"
                script="$script '$s'"
                i=$[i+1]
        done <./bench.sql

        script="$script  --export-markdown $result"
        echo $script | bash -x
}


run "3307"  "$1"

echo "select version() as version" |mysql  -h127.0.0.1 -P3307 -uroot >> $result


For the ClickHouse benchmark, the run function needs a corresponding change: copy run_ontime.sh to ch_run.sh and modify the run part as follows:

script=""
function run() {
        port=$1
        result=$2
        script="hyperfine -w $WARMUP -r $RUN"
        i=0
        while read SQL; do
                f=/tmp/bench_${i}.sql
                echo "$SQL" > $f
                s="cat $f | clickhouse-client --host 127.0.0.1 --port $port"
                script="$script '$s'"
                i=$[i+1]
        done <<< $(cat bench.sql)

        script="$script  --export-markdown $result"
        echo $script | bash -x
}


run "9000"  "$1"


Usage:

./run_ontime.sh D20220322.md
./ch_run.sh C20220322.md


Finally, compare the two resulting markdown files.

Results

Query   ClickHouse on S3 (ms)   Databend on S3 (ms)
Q1      498.2                   186.6
Q2      682.1                   247.2
Q3      620.7                   354.7
Q4      269.6                   125.1
Q5      160                     146.6
Q6      694.3                   371.3
Q7      699.9                   389.2
Q8      994.9                   524.9
Q9      35.9                    372.1
Q10     1484.6                  521.2
Q11     741.2                   439.5
Q12     1945                    2898.1
Q13     1129                    1183.1

Graphic comparison

[Chart: per-query latency comparison of the results in the table above]

From the results above, Q9 stands out as the query where ClickHouse beats Databend by a wide margin (ClickHouse is also somewhat faster on Q12 and Q13). Analysis showed that ClickHouse answers Q9 directly through dictionary queries, which also gives Databend a clear direction for optimizing this case.

Summary

For now, it looks like Databend outperforms ClickHouse on wide-table analytics when the data sits on object storage. In fact, Databend now also beats Snowflake on performance. If you are interested in these topics, you can also keep an eye on our upcoming Meetup.

If you are interested in trying Databend, you can learn more through the following links:

Databend on MinIO: databend.rs/doc/deploy/…

Databend on COS: databend.rs/doc/deploy/…

Databend on AWS S3: databend.rs/doc/deploy/…

Databend vectorized computing capabilities: databend.rs/doc/perform…

If you run into problems during testing, you can also add WeChat 82565387 for support; use "Databend" as the verification message.
