A collection of columns worth bookmarking for when you need them
Spring Cloud practical column: https://blog.csdn.net/superdangbo/category_9270827.html
Python practical column: https://blog.csdn.net/superdangbo/category_9271194.html
Logback detailed explanation column: https://blog.csdn.net/superdangbo/category_9271502.html
TensorFlow column: https://blog.csdn.net/superdangbo/category_8691332.html
Redis column: https://blog.csdn.net/superdangbo/category_9950790.html
Spring Cloud practical series:
1024 Programmers Day special article:
1024 Programmer's Day Special | OKR VS KPI, who is more suitable?
1024 Programmers Day Special | Spring Boot Practical MongoDB Sharding or Replica Set Operation
Spring practical series of articles:
Spring Practical | Spring AOP Core Tips - Sunflower Collection
Spring Practical | The secrets Spring IoC won't tell
National Day and Mid-Autumn Festival special series of articles:
National Day and Mid-Autumn Festival Special (8) How to use JPA in Spring Boot projects
National Day and Mid-Autumn Festival Special (5) How to performance-tune MySQL? (Part 2)
National Day and Mid-Autumn Festival Special (4) How to performance-tune MySQL? (Part 1)
1. Logging library in Python
Python's standard logging library (logging) provides a flexible logging system. Here is a simple example of how to use it:
- First, import the logging library:
import logging
- Next, configure basic settings for logging, such as log level, log format, and log file name:
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s - %(levelname)s - %(message)s',
                    filename='app.log',
                    filemode='w')
Here, we set the log level to DEBUG, the log format to time - level - message, and save the log records to a file named app.log.
- Then, use the logging library's level-specific methods to write log records:
logging.debug('This is a debug log.')
logging.info('This is an info log.')
logging.warning('This is a warning log.')
logging.error('This is an error log.')
logging.critical('This is a critical log.')
These log records will correspond to different log levels, from low to high: DEBUG, INFO, WARNING, ERROR, CRITICAL.
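The configured level acts as a threshold: calls below it produce no output at all. A minimal sketch of this behavior, using an arbitrary logger name ('level_demo') and an in-memory buffer purely for demonstration:

```python
import io
import logging

# A separate named logger, so the root logger's configuration is untouched
logger = logging.getLogger('level_demo')
logger.setLevel(logging.WARNING)  # DEBUG and INFO will be filtered out

buffer = io.StringIO()            # capture the output in memory
handler = logging.StreamHandler(buffer)
handler.setFormatter(logging.Formatter('%(levelname)s - %(message)s'))
logger.addHandler(handler)

logger.debug('below the threshold')       # suppressed
logger.info('also below the threshold')   # suppressed
logger.warning('at the threshold')        # emitted
logger.error('above the threshold')       # emitted

print(buffer.getvalue())
```

With level=logging.DEBUG, as in the basicConfig example above, all five calls would produce output.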
- After running the above code, you will see a file named app.log in the current directory containing the records you just wrote. Its contents look like this:
2021-01-01 12:34:56,789 - DEBUG - This is a debug log.
2021-01-01 12:34:56,789 - INFO - This is an info log.
2021-01-01 12:34:56,789 - WARNING - This is a warning log.
2021-01-01 12:34:56,789 - ERROR - This is an error log.
2021-01-01 12:34:56,789 - CRITICAL - This is a critical log.
The above example shows how to use Python's logging library for basic logging. You can adjust the log level, format, and other settings according to your actual needs. Note that basicConfig writes either to a file (when filename is given) or to the console, not both. To send logs to the console and a file at the same time, pass a StreamHandler and a FileHandler together via the handlers
parameter, which also lets you configure log handlers for more complex logging requirements.
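As a minimal sketch of routing the same records to both destinations via basicConfig's handlers parameter: the file name both.log is an arbitrary choice for this example, and force=True (Python 3.8+) resets any prior root-logger configuration so the call takes effect.

```python
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(levelname)s - %(message)s',
    handlers=[
        logging.StreamHandler(),                                  # console (stderr)
        logging.FileHandler('both.log', mode='w', encoding='utf-8'),  # file
    ],
    force=True,  # Python 3.8+: replace any handlers already on the root logger
)

logging.info('This record goes to the console and to both.log.')
```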
2. Use Python’s logging library (logging) and pandas library to analyze log data
In Python, there are many ways to implement log collection and analysis. Here is a simple example that uses Python's logging library (logging) and the pandas library to analyze log data.
First, make sure the pandas library is installed. If it is not installed yet, use the following command to install it:
pip install pandas
The following is a simple Python log collection and analysis example:
- Import the required libraries:
import logging
import pandas as pd
- Set log format:
logging.basicConfig(level=logging.DEBUG,
                    format='%(asctime)s - %(levelname)s - %(message)s')
- Simulate some log data:
log_data = [
    {'timestamp': '2021-01-01 00:00:00', 'level': 'DEBUG', 'message': 'This is a debug log.'},
    {'timestamp': '2021-01-01 00:01:00', 'level': 'INFO', 'message': 'This is an info log.'},
    {'timestamp': '2021-01-01 00:02:00', 'level': 'WARNING', 'message': 'This is a warning log.'},
    {'timestamp': '2021-01-01 00:03:00', 'level': 'ERROR', 'message': 'This is an error log.'},
]
- Save log data to a CSV file:
import csv
import os

if not os.path.exists('logs'):
    os.makedirs('logs')
with open('logs/log_data.csv', 'w', newline='', encoding='utf-8') as csvfile:
    fieldnames = ['timestamp', 'level', 'message']
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)  # pandas has no pd.writer; use csv.DictWriter
    writer.writeheader()
    for log in log_data:
        writer.writerow(log)
- Use pandas to read the CSV file and parse it:
import pandas as pd

log_df = pd.read_csv('logs/log_data.csv')
log_df['timestamp'] = pd.to_datetime(log_df['timestamp'])  # parse timestamps for time-based analysis

# Count log records by level
level_counts = log_df['level'].value_counts()
print("Log level counts:")
print(level_counts)

# Count log records by hour
hour_counts = log_df['timestamp'].dt.hour.value_counts()
print("\nCounts by hour:")
print(hour_counts)

# Group by level and hour, counting the records in each group
grouped_logs = log_df.groupby(['level', log_df['timestamp'].dt.hour]).size().unstack(fill_value=0)
print("\nLog counts grouped by level and hour:")
print(grouped_logs)
The above code saves the simulated log data to a CSV file and uses pandas to perform simple statistics and analysis on it. In actual applications, you can modify the logic of log collection and analysis as needed.
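Beyond counting, the same DataFrame can also be filtered. A minimal standalone sketch, rebuilding the simulated records inline, keeps only the WARNING and ERROR entries, which are usually the ones worth investigating:

```python
import pandas as pd

# The same simulated records, built directly as a DataFrame so this runs standalone
log_df = pd.DataFrame([
    {'timestamp': '2021-01-01 00:00:00', 'level': 'DEBUG', 'message': 'This is a debug log.'},
    {'timestamp': '2021-01-01 00:01:00', 'level': 'INFO', 'message': 'This is an info log.'},
    {'timestamp': '2021-01-01 00:02:00', 'level': 'WARNING', 'message': 'This is a warning log.'},
    {'timestamp': '2021-01-01 00:03:00', 'level': 'ERROR', 'message': 'This is an error log.'},
])

# Keep only the records at WARNING level or above
problems = log_df[log_df['level'].isin(['WARNING', 'ERROR'])]
print(problems[['timestamp', 'message']])
```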