ClickHouse data corrupted by a forced server shutdown or power loss: "Suspiciously many broken parts to remove"

Problem

Symptoms

  • After a power outage in the machine room, the server rebooted; once the service was back up, data inserts started failing, so we checked the ClickHouse error log
  • The key lines are the TOO_MANY_PARTS error and DB::Exception: Suspiciously many (12 parts, 0.00 B in total) broken parts to remove while maximum allowed broken parts count is 10. In other words, there are too many broken data parts: 12 would have to be removed, but at most 10 are allowed
  • The log also suggests a fix: You can change the maximum value with merge tree setting 'max_suspicious_broken_parts' in <merge_tree> configuration section or in table settings in .sql file (don't forget to return setting back to default value). That is, raise the table's max_suspicious_broken_parts setting, and remember to restore it once the data is recovered
  • It also identifies the failing table and its metadata file: Cannot attach table radar.signal_status from metadata file /var/lib/clickhouse/store/422/4222e684-3a04-4de6-bacc-879c855ef94c/signal_status.sql from query ATTACH TABLE radar.signal_status
0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xaebed1a in /usr/bin/clickhouse
1. DB::Exception::Exception<unsigned long&>(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned long&) @ 0xaf096b2 in /usr/bin/clickhouse
2. DB::MergeTreeData::delayInsertOrThrowIfNeeded(Poco::Event*) const @ 0x15488280 in /usr/bin/clickhouse
3. ? @ 0x15c0a4e9 in /usr/bin/clickhouse
4. DB::ExceptionKeepingTransform::work() @ 0x15c09c94 in /usr/bin/clickhouse
5. DB::ExecutionThreadContext::executeTask() @ 0x15a60ca3 in /usr/bin/clickhouse
6. DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x15a54b7e in /usr/bin/clickhouse
7. DB::PipelineExecutor::executeStep(std::__1::atomic<bool>*) @ 0x15a54280 in /usr/bin/clickhouse
8. DB::PushingPipelineExecutor::start() @ 0x15a68d14 in /usr/bin/clickhouse
9. DB::SystemLog<DB::MetricLogElement>::flushImpl(std::__1::vector<DB::MetricLogElement, std::__1::allocator<DB::MetricLogElement> > const&, unsigned long) @ 0x14d9b618 in /usr/bin/clickhouse
10. DB::SystemLog<DB::MetricLogElement>::savingThreadFunction() @ 0x14d99ab5 in /usr/bin/clickhouse
11. ? @ 0xaf80cb1 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xaf62837 in /usr/bin/clickhouse
13. ? @ 0xaf662fd in /usr/bin/clickhouse
14. start_thread @ 0x7ea5 in /usr/lib64/libpthread-2.17.so
15. __clone @ 0xfe96d in /usr/lib64/libc-2.17.so
 (version 22.2.2.1)
2022.12.08 15:07:08.647866 [ 18188 ] {} <Error> void DB::SystemLog<DB::MetricLogElement>::flushImpl(const std::vector<LogElement> &, uint64_t) [LogElement = DB::MetricLogElement]: Code: 252. DB::Exception: Too many parts (300). Merges are processing significantly slower than inserts. (TOO_MANY_PARTS), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0xaebed1a in /usr/bin/clickhouse
1. DB::Exception::Exception<unsigned long&>(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned long&) @ 0xaf096b2 in /usr/bin/clickhouse
2. DB::MergeTreeData::delayInsertOrThrowIfNeeded(Poco::Event*) const @ 0x15488280 in /usr/bin/clickhouse
3. ? @ 0x15c0a4e9 in /usr/bin/clickhouse
4. DB::ExceptionKeepingTransform::work() @ 0x15c09c94 in /usr/bin/clickhouse
5. DB::ExecutionThreadContext::executeTask() @ 0x15a60ca3 in /usr/bin/clickhouse
6. DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) @ 0x15a54b7e in /usr/bin/clickhouse
7. DB::PipelineExecutor::executeStep(std::__1::atomic<bool>*) @ 0x15a54280 in /usr/bin/clickhouse
8. DB::PushingPipelineExecutor::start() @ 0x15a68d14 in /usr/bin/clickhouse
9. DB::SystemLog<DB::MetricLogElement>::flushImpl(std::__1::vector<DB::MetricLogElement, std::__1::allocator<DB::MetricLogElement> > const&, unsigned long) @ 0x14d9b618 in /usr/bin/clickhouse
10. DB::SystemLog<DB::MetricLogElement>::savingThreadFunction() @ 0x14d99ab5 in /usr/bin/clickhouse
11. ? @ 0xaf80cb1 in /usr/bin/clickhouse
12. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0xaf62837 in /usr/bin/clickhouse
13. ? @ 0xaf662fd in /usr/bin/clickhouse
14. start_thread @ 0x7ea5 in /usr/lib64/libpthread-2.17.so
15. __clone @ 0xfe96d in /usr/lib64/libc-2.17.so
 (version 22.2.2.1)
2022.12.08 15:07:09.448149 [ 18117 ] {} <Error> Application: DB::Exception: Suspiciously many (12 parts, 0.00 B in total) broken parts to remove while maximum allowed broken parts count is 10. You can change the maximum value with merge tree setting 'max_suspicious_broken_parts' in <merge_tree> configuration section or in table settings in .sql file (don't forget to return setting back to default value): Cannot attach table `radar`.`signal_status` from metadata file /var/lib/clickhouse/store/422/4222e684-3a04-4de6-bacc-879c855ef94c/signal_status.sql from query ATTACH TABLE radar.signal_status UUID '85937e46-5e81-4d48-bcfc-dbd2a738c086' (`time_stamp` DateTime COMMENT '信控上报时间(当前灯色开始时间)', `intersection_number` Int32 COMMENT '交叉口编号', `pattern_number` Int16 COMMENT '方案编号', `working_mode` String COMMENT '信号机工作模式', `stage_index` Int8 COMMENT '当前阶段序号', `phase_number` Int8 COMMENT '当前执行相位', `stage_status` String COMMENT '当前阶段灯色', `len_lamp_light` Int16 COMMENT '当前灯色的持续时间', `cycle` Int16 COMMENT '方案周期(秒)', `green_movement` Array(String) COMMENT '当前相位绿灯可通行进口流向', `green_lane` Array(String) COMMENT '当前相位绿灯可通行进口车道编号', `stage_end_time` DateTime COMMENT '当前灯色结束时间', `cycle_start_time` DateTime COMMENT '本周期开始时间') ENGINE = MergeTree PARTITION BY toYYYYMM(time_stamp) PRIMARY KEY time_stamp ORDER BY (time_stamp, intersection_number) TTL time_stamp + toIntervalMonth(6) SETTINGS index_granularity = 8192, old_parts_lifetime = 300

Cause

  • This error appeared after the machine lost power; the root cause is that in-flight writes left the table's metadata and data inconsistent
  • When the service restarts, ClickHouse reloads the data of every MergeTree table, and some of those data parts may be corrupted
  • ClickHouse configuration: the merge tree settings include a parameter max_suspicious_broken_parts (default 10; any positive integer is valid). If the number of broken parts in a single partition exceeds this value, ClickHouse refuses to auto-repair or delete the broken parts, and the server errors out and exits on startup
  • To keep this error from blocking startup in the future, setting this parameter to 1000 or higher is recommended

Solution

  • Following the hint in the log, I located the metadata file /var/lib/clickhouse/store/422/4222e684-3a04-4de6-bacc-879c855ef94c/signal_status.sql
  • edited the table settings in it, setting max_suspicious_broken_parts = 20 (larger than the number of broken parts)
  • and restarted the database; the service came back up normally
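For reference, the edit amounts to appending one entry to the SETTINGS list at the end of the ATTACH TABLE statement in that metadata file. An abridged sketch of the tail of the statement after the edit (the existing settings come from the error log above; only the last one is new):

```sql
-- Tail of signal_status.sql after the edit; only the last setting is added
SETTINGS index_granularity = 8192, old_parts_lifetime = 300,
         max_suspicious_broken_parts = 20
```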

Other approaches

Per-table setting

Set the max_suspicious_broken_parts parameter explicitly when creating a MergeTree table:

CREATE TABLE foo
(
    `A` Int64
)
ENGINE = MergeTree
ORDER BY tuple()
SETTINGS max_suspicious_broken_parts = 1000;

ALTER TABLE approach

Change the setting on an existing table with ALTER TABLE … MODIFY SETTING:

ALTER TABLE foo
    MODIFY SETTING max_suspicious_broken_parts = 1000;

-- reset to default (use value from system.merge_tree_settings)
ALTER TABLE foo
    RESET SETTING max_suspicious_broken_parts;
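To confirm a per-table override is in place, inspect the table's DDL; the SETTINGS clause in the output should list the modified value:

```sql
-- The SETTINGS clause should contain max_suspicious_broken_parts = 1000
SHOW CREATE TABLE foo;
```

After RESET SETTING, the entry disappears from the DDL and the server-wide default applies again.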

Config-file approach

If the service will not start at all, this is the approach to use

  • Create a file named max_suspicious_broken_parts.xml with the following content:
<?xml version="1.0"?>
<yandex>
     <merge_tree>
         <max_suspicious_broken_parts>1000</max_suspicious_broken_parts>
     </merge_tree>
</yandex>
  • ClickHouse configuration overrides should be placed under /etc/clickhouse-server/config.d/ to take effect
  • If ClickHouse was installed from DEB or RPM packages on Ubuntu or CentOS, put the file in /etc/clickhouse-server/config.d/ and restart ClickHouse
  • If ClickHouse runs under Docker Compose, adjust compose.yaml as follows; the key point is mounting the file to the corresponding path inside the container
services:
  clickhouse:
    image: clickhouse/clickhouse-server
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
    restart: always
    container_name: demo-clickhouse
    environment:
    - CLICKHOUSE_USER=demo
    - CLICKHOUSE_PASSWORD=demo-pass
    - CLICKHOUSE_DB=demo
    ports:
      - "8123:8123"
      - "9000:9000"
    volumes:
      - ./max_suspicious_broken_parts.xml:/etc/clickhouse-server/config.d/max_suspicious_broken_parts.xml
      - demo-clickhouse:/var/lib/clickhouse
    healthcheck:
      test: 'wget -O - http://127.0.0.1:8123 || exit 1'
     
volumes:
  demo-clickhouse: {}
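With either layout in place, applying the change might look like this (paths and service names are the package defaults; adjust them for your install):

```shell
# DEB/RPM install: copy the override into config.d and restart the server
sudo cp max_suspicious_broken_parts.xml /etc/clickhouse-server/config.d/
sudo systemctl restart clickhouse-server

# Docker Compose: recreate the container so the mounted file is picked up
docker compose up -d clickhouse
```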

Verify the setting took effect

  • Connect to ClickHouse and run the following query; the returned row should show the configured value with changed = 1
SELECT *
FROM system.merge_tree_settings
WHERE name LIKE '%max_suspicious_broken_parts%'
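The same check can be scripted from the shell via clickhouse-client (connection flags omitted; add host, user, and password as needed for your setup):

```shell
# Expect value = 1000 (or whatever you configured) and changed = 1
clickhouse-client --query "SELECT name, value, changed FROM system.merge_tree_settings WHERE name = 'max_suspicious_broken_parts'"
```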


Reposted from blog.csdn.net/u010882234/article/details/128553785