Note|Kafka|kafka-dump-log.sh: Analyzing Kafka log files

This tool parses a log file and dumps its contents to the console, which is useful for debugging a seemingly corrupt log segment.

Kafka's various log files (such as .log, .index, and .timeindex) can be parsed using the kafka-dump-log.sh script.
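For example, assuming a broker whose data directory is /tmp/kafka-logs and a topic partition folder test-topic-0 (both paths are hypothetical placeholders), each file type can be dumped like this:

  > bin/kafka-dump-log.sh --files /tmp/kafka-logs/test-topic-0/00000000000000000000.log --print-data-log
  > bin/kafka-dump-log.sh --files /tmp/kafka-logs/test-topic-0/00000000000000000000.index
  > bin/kafka-dump-log.sh --files /tmp/kafka-logs/test-topic-0/00000000000000000000.timeindex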

In the official Kafka documentation, kafka-dump-log is described as follows:

Dump Log Tool

The kafka-dump-log tool can be used to debug the log segments and snapshots for the cluster metadata directory. The tool will scan the provided files and decode the metadata records. For example, this command decodes and prints the records in the first log segment:

  > bin/kafka-dump-log.sh --cluster-metadata-decoder --files metadata_log_dir/__cluster_metadata-0/00000000000000000000.log

This command decodes and prints the records in a cluster metadata snapshot:

  > bin/kafka-dump-log.sh --cluster-metadata-decoder --files metadata_log_dir/__cluster_metadata-0/00000000000000000100-0000000001.checkpoint

The parameters of kafka-dump-log.sh are as follows (an example follows the list):

--deep-iteration
    if set, uses deep instead of shallow iteration. Automatically set if print-data-log is enabled.
--files <String: file1, file2, ...>
    REQUIRED: The comma-separated list of data and index log files to be dumped.
--help
    Print usage information.
--index-sanity-check
    if set, just checks the index sanity without printing its content. This is the same check that is executed on broker startup to determine if an index needs rebuilding or not.
--key-decoder-class [String]
    if set, used to deserialize the keys. This class should implement the kafka.serializer.Decoder trait. The custom jar should be available in the kafka/libs directory. (default: kafka.serializer.StringDecoder)
--max-message-size <Integer: size>
    Size of the largest message. (default: 5242880)
--offsets-decoder
    if set, log data will be parsed as offset data from the __consumer_offsets topic.
--print-data-log
    if set, prints the message content when dumping data logs. Automatically set if any decoder option is specified.
--transaction-log-decoder
    if set, log data will be parsed as transaction metadata from the __transaction_state topic.
--value-decoder-class [String]
    if set, used to deserialize the messages. This class should implement the kafka.serializer.Decoder trait. The custom jar should be available in the kafka/libs directory. (default: kafka.serializer.StringDecoder)
--verify-index-only
    if set, just verify the index log without printing its content.
--version
    Display Kafka version.
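To illustrate the options above (the segment paths are again hypothetical): the first command runs the same index sanity check a broker performs at startup, and the second decodes the internal __consumer_offsets topic:

  > bin/kafka-dump-log.sh --files /tmp/kafka-logs/test-topic-0/00000000000000000000.index --index-sanity-check
  > bin/kafka-dump-log.sh --files /tmp/kafka-logs/__consumer_offsets-0/00000000000000000000.log --offsets-decoder --print-data-log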

Shell script to dump Kafka logs in batches (where /home/myself/src-logs is the log directory and /home/myself/dump-logs is the result directory):

#!/bin/bash

# Directory containing the Kafka log segments to dump
dir="/home/myself/src-logs"
# Directory where the dump results will be written
out_dir="/home/myself/dump-logs"

# Print an error and exit if the source path does not exist or is not a directory
if [ ! -d "$dir" ]; then
  echo "Error: $dir is not a directory"
  exit 1
fi

# Make sure the output directory exists
mkdir -p "$out_dir"

# Dump every .log file in the directory to a result file of the same name
for file in "$dir"/*.log; do
  if [ -f "$file" ]; then
    /opt/kafka/bin/kafka-dump-log.sh --files "$file" --print-data-log > "$out_dir/${file##*/}"
  fi
done
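Assuming the script is saved as dump-logs.sh (a file name chosen here for illustration), it can be run like this, after which each segment's dump appears in the result directory under the same name as its source segment:

  > chmod +x dump-logs.sh
  > ./dump-logs.sh
  > ls /home/myself/dump-logs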

Origin blog.csdn.net/Changxing_J/article/details/130330236