ClickHouse (12, the road of pitfalls)

  • Q1
    DB::Exception: Cannot create table from metadata file /data/clickhouse/metadata/default/dwd_test.sql, error: DB::Exception: The local set of parts of table default.dwd_test doesn’t look like the set of parts in ZooKeeper: 65.88 million rows of 85.04 million total rows in filesystem are suspicious. There are 545 unexpected parts with 65883643 rows (191 of them is not just-written with 65883643 rows), 0 missing parts (with 0 blocks).
  • A1
    This happens because ON CLUSTER is used in DDL statements such as TRUNCATE and ALTER, which occasionally leaves ZooKeeper out of sync with the local parts. Solution 1: on the problematic node, delete the local table data (rm -r /data/clickhouse/data/default/dwd_test) and restart ClickHouse; the replica will resynchronize the table data automatically. (Do not use this method if the table has no replica.)
    Solution 2: run sudo -u clickhouse touch /data/clickhouse/flags/force_restore_data from the command line, then manually restore the affected partitions.
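    A minimal shell sketch of both fixes, assuming the data path from the error above; the systemctl restart command is an assumption about how the service is managed:
      # Solution 1 (replicated tables only): drop this node's local data and let the replica resync
      rm -r /data/clickhouse/data/default/dwd_test
      systemctl restart clickhouse-server
      # Solution 2: set the force-restore flag, restart, then restore the affected partitions by hand
      sudo -u clickhouse touch /data/clickhouse/flags/force_restore_data
      systemctl restart clickhouse-server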
  • Q2
    Connected to ClickHouse server version 20.3.8 revision 54433.
    Poco::Exception. Code: 1000, e.code() = 13, e.displayText() = Access to file denied: /home/qspace/.clickhouse-client-history (version 20.3.8.53 (official build))
  • A2
    Create the history file if it does not exist, and give the clickhouse user ownership of it:
    chown clickhouse:clickhouse /home/qspace/.clickhouse-client-history (create the file first if it is missing)
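    A short sketch; the chmod line is an extra hardening step, not from the original:
      touch /home/qspace/.clickhouse-client-history
      chown clickhouse:clickhouse /home/qspace/.clickhouse-client-history
      chmod 600 /home/qspace/.clickhouse-client-history  # optional: owner-only access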
  • Q3
    Application: DB::Exception: Listen [::]:8124 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 20.5.2.7 (official build))
  • A3
    IPv6 is not enabled on this machine, so listening on :: fails and only IPv4 can work. In /etc/clickhouse-server/config.xml, change <listen_host> to 0.0.0.0 (or enable IPv6 on the host if you need ::).
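    The relevant line in config.xml, as a sketch:
      <!-- /etc/clickhouse-server/config.xml: listen on all IPv4 interfaces -->
      <listen_host>0.0.0.0</listen_host>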
  • Q4
    Code: 32, e.displayText() = DB::Exception: Received from hadoop8:9900. DB::Exception: Attempt to read after eof: Cannot parse Int32 from String, because value is too short. (version 20.3.8.53 (official build))
    A string-to-number conversion failed: empty strings or non-numeric characters make the parse throw.
  • A4
    Use the toUInt64OrZero family of functions (toInt32OrZero, toUInt64OrNull, etc.); if the conversion fails, the result is 0 (or NULL for the OrNull variants) instead of an exception.
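    A quick illustration in clickhouse-client (input values are made up):
      SELECT
          toUInt64OrZero('123') AS ok,       -- 123
          toUInt64OrZero('')    AS empty_s,  -- 0
          toUInt64OrZero('abc') AS bad_s     -- 0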
  • Q5
    Application: DB::Exception: Suspiciously many (125) broken parts to remove.: Cannot attach table default.test
    Code: 231. DB::Exception: Received from ck10:9000. DB::Exception: Suspiciously many (125) broken parts to remove…
  • A5
    Writes left the metadata and on-disk data inconsistent. First delete the data on disk, then restart the node and drop the local table; if it is a replicated table, also delete the replica's entry in ZooKeeper, then recreate the table. A replicated table will then resynchronize its data from the other replicas.
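    A sketch of the ZooKeeper cleanup in zkCli, assuming the common /clickhouse/tables/<shard>/<db>/<table> path layout (adjust to the zookeeper path in your table DDL; on ZooKeeper older than 3.5 use rmr instead of deleteall):
      deleteall /clickhouse/tables/01/default/test/replicas/ck10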
  • Q6
    Cannot execute replicated DDL query on leader.
  • A6
    Distributed DDL statements can be slow, so the response times out. Either run the statement locally on each node, or shrink its scope: for example, change a whole-table ALTER or OPTIMIZE to a single partition.
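    For example, narrowing an OPTIMIZE to one partition (table, cluster, and partition value are hypothetical):
      -- whole-table, distributed: slow, likely to time out
      -- OPTIMIZE TABLE default.test ON CLUSTER my_cluster FINAL;
      -- one partition, executed locally on the node
      OPTIMIZE TABLE default.test PARTITION '20200523' FINAL;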
  • Q7
    Code: 76. DB::Exception: Received from 0.0.0.0:9900. DB::Exception: Cannot open file /data/clickhouse/data/default/test/tmp_insert_20200523_55575_55575_0/f0560_deep_conversion_optimization_goal.mrk2, errno: 24, strerror: Too many open files.
  • A7
    Modify /etc/security/limits.conf and add:
    clickhouse soft nofile 262144
    clickhouse hard nofile 262144
    Note that ulimit -n reports the limit of your current shell, not the limit of the clickhouse user, so the default system value it shows can be misleading. After restarting ClickHouse, find the server's PID and run cat /proc/${pid}/limits | grep open to confirm the new limit took effect.
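    A verification sketch (pidof assumes the server binary is named clickhouse-server):
      pid=$(pidof clickhouse-server)
      cat /proc/${pid}/limits | grep -i 'open files'  # should show 262144 after the restart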

Origin blog.csdn.net/yyoc97/article/details/108576891