Overview of Linux inodes and blocks, simulated inode number exhaustion | ext and xfs file recovery | log types | analysis tools

Overview of inode and block

Overview:

➤Files and sectors
Files are stored on the hard disk. The smallest storage unit of the hard disk is called a "sector", and each sector stores 512 bytes.
➤Block
Eight consecutive sectors generally make up a "block". A block is 4 KB in size and is the smallest unit of file access. File data is stored in blocks.
➤File data
1. A file consists of actual data and metadata (similar to file attributes).
2. The actual data is stored in blocks, and the file's metadata is stored in an inode.
➤inode (index node)
1. The inode does not contain the file name. File names are stored in directories. Everything in a Linux system is a file, so a directory is also a kind of file.
2. The inode is the area that stores a file's metadata, such as the file's creator, creation date, size, and permissions.
➤Conclusion
When a user accesses a file on a Linux system, the system first looks up the inode number that corresponds to the file name, then obtains the inode information through that number, then checks the inode to see whether the user has permission to access the file; if so, it follows the inode's pointers to the corresponding data blocks and reads the data.
➤Two ways to view a file's inode number:
ls -i file
stat file

➤Three main time attributes of Linux system files

➤ctime
the last time the file's or directory's attributes (metadata) were changed
➤atime
the last time the file or directory was accessed
➤mtime
the last time the file's or directory's content was modified
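
A quick way to watch all three timestamps is the stat command. A minimal sketch, assuming a throwaway file named demo.txt (the exact stat output layout varies slightly between distributions):

touch demo.txt              #create a test file
stat demo.txt               #the Access / Modify / Change lines are atime / mtime / ctime
echo hello >> demo.txt      #changing the content updates mtime (and ctime, since the size changes)
chmod 600 demo.txt          #changing only the permissions updates only ctime
cat demo.txt                #reading the file can update atime (subject to mount options such as relatime)
stat demo.txt               #compare with the first output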

➤ Content of inode

1. The inode contains the file's metadata, including:
the size of the file in bytes
the user ID of the file's owner
the group ID of the file
the read, write, and execute permissions of the file
the timestamps of the file
2. Use the stat command to view a file's inode information:
stat file
3. The structure of directory files
➤ A directory is also a kind of file
➤ A directory file is a list of directory entries, and each entry records a file name and its corresponding inode number
4. Each inode has a number, and the operating system uses inode numbers to identify different files.
5. Internally, the Linux system uses the inode number rather than the file name to identify a file.
6. For users, the file name is just an easily recognizable alias for the inode number.

➤inode number

➤When a user opens a file by its file name, the system internally:
1. finds the inode number corresponding to the file name
2. obtains the inode information through the inode number
3. locates the blocks where the file data is stored according to the inode information and reads out the data

➤Inode size

1. Inodes also consume hard disk space. Each inode is generally 128 bytes or 256 bytes in size.
2. The total number of inodes is fixed when the file system is formatted.
3. Use the df -i command to view the total number of inodes and the number used on each partition.
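An illustrative run of df -i with made-up numbers (the real totals depend on the partition size and the options used when the file system was created):

df -i /dev/sdb1
#Filesystem     Inodes  IUsed   IFree IUse% Mounted on
#/dev/sdb1       10240    400    9840    4% /test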

➤Special role of inode

➤Because the inode number is separated from the file name, Unix/Linux systems show the following behavior:
1. When a file name contains special characters and the file cannot be deleted normally, you can delete its inode directly (see the example after this list), which also removes the file
2. Moving or renaming a file only changes the file name and does not affect the inode number
3. Once a file has been opened, the system identifies it by its inode number and no longer considers the file name
4. With some programs (vi/vim, for example), saving a modified file can result in a new inode number, because the editor writes a new file and swaps it in for the old one
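
A common way to carry out point 1 is to look up the inode number and delete through find. A minimal sketch, assuming the problem file is in the current directory and 1234567 is only a placeholder inode number:

ls -i                                    #note the inode number of the file that cannot be deleted, e.g. 1234567
find . -inum 1234567 -exec rm -i {} \;   #remove the file that owns that inode number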

➤Inode exhaustion fault handling


➤The steps are as follows:
1. fdisk /dev/sdb          #partition the disk and create a new partition
2. mkfs.xfs /dev/sdb1      #format the partition
3. mkdir /test             #create the /test directory under / to use as the mount point
4. mount /dev/sdb1 /test   #mount the partition
5. df -i                   #check the number of available inodes
6. Simulate inode exhaustion:
for ((i=1;i<=7700;i++)); do touch /test/$i; done
or touch /test/{1..xxx}.txt
7. df -i                   #check whether the inodes are used up
8. rm -rf /test/*          #delete the test files
9. df -i                   #check whether the inode count is back to normal
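
Once the inodes are exhausted, any attempt to create a file fails with "No space left on device" even though block space is still free. A quick check, assuming the partition is mounted on /test:

touch /test/onemore     #fails with "No space left on device" when no inodes are left
df -h /test             #block usage may still look normal
df -i /test             #IUse% shows 100%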

➤EXT type file recovery

➤extundelete is an open-source Linux data-recovery tool that supports the ext3 and ext4 file systems. (ext4 recovery works only on CentOS 6.)
➤Download the source package: extundelete-0.2.4.tar.bz2

➤Step (1) is as follows:
fdisk /dev/sdb                    #create a new partition on the disk
mkfs.ext3 /dev/sdb2               #format the new partition as ext3
mkdir /sdb2                       #create an empty directory
mount /dev/sdb2 /sdb2             #mount the partition
yum -y install e2fsprogs-devel e2fsprogs-libs    #install the dependency packages
Compile and install extundelete:
cd /sdb2
copy the extundelete-0.2.4.tar.bz2 package into the directory
tar jxvf extundelete-0.2.4.tar.bz2               #decompress
cd extundelete-0.2.4/
./configure --prefix=/usr/local/extundelete && make && make install
ln -s /usr/local/extundelete/bin/* /usr/bin/     #create soft links
➤Step (2) is as follows:
cd /sdb2                          #switch to the mount point and create some test files
echo a > a
echo a > b
echo a > c
echo a > d
ls
➤Step (3) is as follows:
extundelete /dev/sdb2 --inode 2        #query the files under the file system's root directory (inode 2 is the root directory of an ext file system)
rm -rf a b                             #delete files a and b
cd ~
umount /sdb2                           #unmount before recovering
extundelete /dev/sdb2 --restore-all    #restore all recoverable contents of the /dev/sdb2 file system
A RECOVERED_FILES/ directory will appear in the current directory and holds the restored files
ls RECOVERED_FILES/                    #view the restored files
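
extundelete can also recover individual items instead of the whole file system. A sketch against the same unmounted /dev/sdb2; the directory name somedir is only a hypothetical example:

extundelete /dev/sdb2 --restore-file a               #recover one file by its path relative to the file system root
extundelete /dev/sdb2 --restore-directory somedir    #recover an entire directory
ls RECOVERED_FILES/                                  #the recovered items are placed here as well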

➤XFS file recovery

➤CentOS 7 uses the xfs file system by default. xfs file systems can be backed up and restored with the xfsdump and xfsrestore tools.

➤Commonly used options of the xfsdump command:
-f: specify the location of the backup file
-L: specify the session label
-M: specify the media (device) label
-s: back up a single file or directory; the path given to -s is relative to the mount point being dumped, not an absolute path

➤xfsdump usage restrictions:
1. Only mounted file systems can be backed up
2. root privileges are required
3. Only XFS file systems can be backed up
4. The backed-up data can only be parsed by xfsrestore
5. Two file systems with the same UUID cannot be backed up (check UUIDs with the blkid command)
➤Step (1) is as follows:
fdisk /dev/sdb             #create a new partition
partprobe /dev/sdb         #re-read the partition table
mkfs.xfs /dev/sdb1         #format the partition as xfs
mkdir /sdb1                #create an empty directory
mount /dev/sdb1 /sdb1      #mount the partition
cd /sdb1                   #switch to the /sdb1 directory
cp /etc/passwd ./          #copy passwd into the /sdb1 directory
➤Step (2) is as follows:
yum install -y xfsdump     #install xfsdump
xfsdump -f /opt/dump_sdb1 /dev/sdb1 -L dump_sdb1 -M sdb1    #back up the file system
➤Step (3) is as follows:
cd /sdb1
rm -rf ./*                              #delete the data
xfsrestore -f /opt/dump_sdb1 /sdb1      #restore from the backup
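
Two useful follow-ups, shown as a sketch since the exact output depends on the dumps recorded on the system: xfsdump keeps an inventory of previous dumps, and xfsrestore can bring back a single item instead of the whole tree:

xfsdump -I                                     #list the inventory of previous xfsdump sessions
xfsrestore -f /opt/dump_sdb1 -s passwd /sdb1   #restore only the passwd file from the backup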

Log file

➤Log function

➤Logs record the various events that occur while the system and its programs run.
Log files record all kinds of operating messages in a Linux system; they are, in effect, the "diary" of the Linux host.
Different log files record different types of information, such as Linux kernel messages, user login events, and program errors.
➤Reading logs helps diagnose and resolve system faults.
Programs running on a Linux system usually write system messages and error messages to their corresponding log files.
When the host is attacked, log files can also help find the traces left by the attacker.

➤Classification of logs


➤Kernel and system logs
Managed uniformly by the system service rsyslog; the log formats are basically similar

➤User logs
Record information about users logging in to and out of the system, including the user name, login terminal, login time, source host, and the processes in use

➤Program logs
Log files managed independently by each application; the record formats are not uniform.
A log file is not created when the program is installed but only once the program starts, and it stays empty until the service is actually accessed.

➤Log save location

➤Default location:
The log files of the Linux system itself and of most server programs are stored under the /var/log directory by default.
Some programs share one log file, some use a single log file of their own, and some large server programs have so many log files that they create their own sub-directories under /var/log to hold them.
Quite a few log files can be read only by the root user, which keeps the related log information secure.

➤Introduction of log files

Kernel and public message log: /var/log/messages
Scheduled task log: /var/log/cron
System boot log: /var/log/dmesg
Mail system log: /var/log/maillog
User login logs: /var/log/lastlog, /var/log/secure, /var/log/wtmp, /var/log/btmp

➤/var/log/messages: records Linux kernel messages and public log information from various applications, including startup messages, I/O errors, network errors, and program failures
➤For applications or services that do not use an independent log file, the relevant event records can generally be found in this file
➤/var/log/cron: records events generated by crond scheduled tasks
➤/var/log/dmesg: records the events of the Linux system during the boot process
➤/var/log/maillog: records mail activity entering or leaving the system
➤/var/log/lastlog: records the most recent login time of each user
➤/var/log/secure: records security events related to user authentication
➤/var/log/wtmp: records each user's logins and logouts as well as system startup and shutdown events
➤/var/log/btmp: records failed and erroneous login and authentication attempts
➤Software installed with yum keeps its logs under /var/log
➤Software compiled and installed manually logs to the directory you specify yourself
vim /etc/rsyslog.conf              #view the rsyslog.conf configuration file

➤Log file analysis

➤The purpose of analyzing log files is to find key information by browsing the logs, to debug system services, and to determine the causes of failures.
➤Most log files are plain text (for example, the kernel and system logs and most program logs) and can be viewed with text-processing tools such as tail, more, less, and cat.
➤Some log files are in binary format (for example, the user logs) and require specific query commands.

Kernel and system logs

➤Unified management by system service rsyslogd

➤Software package: rsyslog-7.4.7-16.el7.x86_64
➤Main program: /sbin/rsyslogd
➤Configuration file: /etc/rsyslog.conf

➤The level of system log messages

➤The log files managed by the rsyslogd service are the most important log files in a Linux system. They record the most basic system messages, such as those from the kernel, user authentication, mail, and scheduled tasks.
➤In the Linux kernel, log messages are divided into different priority levels according to their importance (the smaller the number, the higher the priority and the more important the message):

Level  Description
0  EMERG (emergency): the host system is unusable
1  ALERT (alert): a problem that must be resolved immediately
2  CRIT (critical): a serious condition (some functions are unavailable)
3  ERR (error): a runtime error
4  WARNING (warning): an event that may affect system functions
5  NOTICE (notice): does not affect the system but is worth noting
6  INFO (information): general information
7  DEBUG (debugging): program or system debugging information (may be used during maintenance)

*.info means that all messages at the info level and above are written to the corresponding log file.
mail.none means that messages from a particular facility (here, mail) are not written to this log file.
➤The kernel and most system messages are recorded in the public log file /var/log/messages, while some other program messages are recorded in their own independent log files.
➤Log messages can also be recorded to a specific storage device or sent directly to designated users.
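
To illustrate the facility.priority syntax, here is a rules excerpt in the style of the default CentOS 7 /etc/rsyslog.conf (a sketch; the file shipped on a given system may differ slightly):

#everything at info level and above, except mail/authpriv/cron, goes to the public log
*.info;mail.none;authpriv.none;cron.none                /var/log/messages
#authentication-related messages
authpriv.*                                              /var/log/secure
#mail messages (the leading - means asynchronous writes)
mail.*                                                  -/var/log/maillog
#scheduled task messages
cron.*                                                  /var/log/cron
#emergency messages are sent to every logged-in user
*.emerg                                                 :omusrmsg:*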

➤General format of public log records

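A typical record in the public log /var/log/messages has four parts: the timestamp, the host name, the name of the program that produced the message (with its PID when available), and the message body. An illustrative, made-up line:

Feb  3 14:52:01 localhost systemd[1]: Started Session 2 of user root.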

➤Where user login information is saved

These log files record user logins, logouts, and other related events
➤/var/log/lastlog: the most recent login event of each user

➤/var/log/wtmp: user login, logout, and system startup and shutdown events
➤/var/run/utmp: details of each user currently logged in
➤/var/log/secure: security events related to user authentication

➤System log analysis tool

➤Analysis tools
users, who, w, last, lastb
➤Query the users currently logged in: the users, who, and w commands
1. The users command simply prints the names of the users currently logged in; each displayed name corresponds to one login session, so a user with multiple sessions appears that many times.
2. The who command reports information about every user currently logged in to the system. With it, the system administrator can see who is using the system at the moment and audit the legitimate users. The default output of who includes the user name, terminal type, login date, and remote host.
3. The w command shows each current user of the system and the processes they are running; its output is richer than that of users and who.
4. Query the history of user logins: the last and lastb commands
*The last command queries the records of users who successfully logged in to the system, with the most recent logins shown at the top. With last, the login activity of the Linux host can be tracked in time; if an unauthorized user is found to have logged in, the host may have been compromised.
*The lastb command queries the records of failed logins, such as a wrong user name or a wrong password; all such attempts are recorded. Failed logins are security events, because they may mean that someone is trying to guess your password. Besides lastb, the related information can also be read directly from the security log file /var/log/secure.
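
A quick tour of these commands; the output will of course differ from system to system:

users             #one name per current login session
who               #user name, terminal, login time, and remote host
w                 #adds the load average and what each user is currently running
last -n 5         #the five most recent successful logins
lastb | head      #recent failed login attempts (requires root)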

Program log

➤Independently managed by the corresponding program

➤Web service: /var/log/httpd/
access_log, error_log (the httpd web service program uses these two log files to record client access events and error events respectively)

➤Proxy service: /var/log/squid/
access.log, cache.log

➤FTP service: /var/log/xferlog

➤Program log analysis tool

Text viewing, grep filtering and retrieval, the webmin management suite
awk, sed, and other text filtering and formatting tools
webalizer, awstats, and other dedicated log analysis tools
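
As a small example of this kind of filtering, the pipeline below counts failed SSH password attempts per source address in /var/log/secure; the field position is an assumption, since the exact message layout can vary:

grep "Failed password" /var/log/secure | awk '{print $(NF-3)}' | sort | uniq -c | sort -nr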

➤Log management strategy

➤Back up and archive logs in a timely manner
➤Extend the log retention period
➤Control access permissions on logs
Logs may contain all kinds of sensitive information, such as accounts and passwords
➤Manage logs centrally (see the forwarding sketch after this list)
Send the servers' log files to a unified log server
This makes it easier to collect, collate, and analyze log information
and guards against accidental loss, tampering, or malicious deletion of log records
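
Centralized collection is usually done by having rsyslog on each client forward its messages to the log server; a minimal sketch, where logserver.example.com is a placeholder host name:

#append to /etc/rsyslog.conf on each client
*.*   @logserver.example.com:514     #forward everything over UDP
#or use @@ instead of @ to forward over TCP
systemctl restart rsyslog            #reload the configuration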

Origin blog.csdn.net/Dark_Tk/article/details/113574175