Linux disk partitioning and management

du command: its format is "du [options] [file]". This command reports how much hard disk space one or more files or directories occupy.
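A minimal sketch (the file name demo.img is illustrative):

```shell
# create a 2 MiB file, then measure its disk usage
dd if=/dev/zero of=demo.img bs=1M count=2 status=none
du -h demo.img        # human-readable size of a single file
du -sh .              # summarized total for the current directory
```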

Configure the local yum source

To prevent a mount from being lost at boot, add the mounted device and its mount point to the configuration file /etc/fstab.
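A typical /etc/fstab entry looks like the following (the device path and mount point are illustrative; adjust them to your own system):

```
# device      mount point   fs type   options    dump  fsck
/dev/sdb1     /newFS        xfs       defaults   0     0
```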

Add swap partition

A SWAP (swap) partition is a region of the hard disk set aside in advance so that data not currently needed in memory can be moved out to disk, relieving pressure when real physical memory runs short. Because swap reads and writes ultimately go through the hard disk, it is necessarily slower than physical memory, so swap space is only used once physical memory is nearly exhausted.

Check whether the newly added disk has been partitioned (for example with lsblk or fdisk -l).

Create and enable the swap space

[root@my-server ~]# mkswap /dev/sdb1
Setting up swapspace version 1, size = 4 GiB (4294963200 bytes)
no label, UUID=b2e6e5a8-c4f5-467c-919a-c29a0483d3da
[root@my-server ~]# free -g
              total        used        free      shared  buff/cache   available
Mem:              1           1           0           0           0           0
Swap:             2           0           2
[root@my-server ~]# swapon /dev/sdb1
[root@my-server ~]# free -g
              total        used        free      shared  buff/cache   available
Mem:              1           1           0           0           0           0
Swap:             6           0           6
[root@my-server ~]# 

To avoid losing the swap setting at boot, write it into /etc/fstab as well:
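A swap entry in /etc/fstab might look like this (device name as in the transcript above):

```
/dev/sdb1   swap   swap   defaults   0 0
```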

Disk capacity quota

Disk quotas come in two forms: soft limits and hard limits.

Soft limit: when the soft limit is reached, the user is warned but may continue to use space, up to the hard limit.

Hard limit: when the hard limit is reached, the user is warned and the operation is forcibly refused.

The quota service package is already installed on RHEL 8, but storage devices do not enable quota support by default. You need to manually edit the configuration file, add the quota mount option to the /boot entry, and reboot, so that the /boot directory in the system supports disk quota technology.
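On an XFS /boot partition this means adding the uquota (or usrquota) option to its /etc/fstab entry, for example (the UUID below is a placeholder):

```
UUID=xxxxxxxx-...   /boot   xfs   defaults,uquota   0 0
```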

After saving, check whether the setting took effect:

Add a test user for testing

xfs_quota is a command designed specifically for managing disk quotas on the XFS file system. The -c parameter passes the command to execute as an argument; the -x parameter enables expert mode, allowing operations staff to perform more complex quota configuration. Next, use the xfs_quota command to set user redhat's disk quota on the /boot directory. The specific limits are: soft and hard limits of 3 MB and 6 MB on hard disk usage, and soft and hard limits of 3 and 6 on the number of files created.
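A command matching those limits might look like the following sketch (requires root and an XFS /boot mounted with the uquota option):

```
xfs_quota -x -c 'limit bsoft=3m bhard=6m isoft=3 ihard=6 redhat' /boot
xfs_quota -x -c report /boot    # verify the limits
```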

The parameters used above fall into two groups, isoft/ihard and bsoft/bhard; let's explain them in depth. As mentioned in Section 6.3, every file in a Linux system is described by an independent inode information block, one inode per file. So isoft and ihard limit the maximum number of inodes a user may consume, i.e. the number of files; bsoft and bhard limit the number of blocks a user's files may occupy, i.e. the total disk capacity used.

Soft is a soft limit: if it is exceeded, the event is only written to the log and the user's behavior is not restricted. Hard is a hard limit: once exceeded, the operation is immediately prohibited, and no more files can be created nor any additional disk capacity occupied.

After configuring the soft and hard limits above, switch to this ordinary user and try to create files of 5 MB and 8 MB respectively. You will find that the system blocks the 8 MB file:
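The test files can be created with dd. Here the files are only created in an ordinary directory to show the sizes; enforcement of the quota requires the configured /boot directory:

```shell
dd if=/dev/zero of=file5m bs=1M count=5 status=none   # within the 6 MB hard limit
dd if=/dev/zero of=file8m bs=1M count=8 status=none   # would exceed the hard limit under quota
ls -lh file5m file8m
```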

The edquota command is used to edit a user's disk quota. Its full English name is "edit quota", and its syntax format is "edquota [parameters] username".

After setting a user's disk quota, you can use the edquota command to modify the quota values as needed. The -u parameter specifies which user to set; the -g parameter specifies which user group, as shown in Table 6-6.

Table 6-6 Parameters and functions available in the edquota command

Parameter  Effect
-u         Set quota for the specified user
-g         Set quota for the specified user group
-p         Copy the quota rules of an existing user/group to a new one
-t         Set the grace period for soft limits

 

The edquota command invokes the Vi or Vim editor to let the root administrator modify the specific limits. Remember to save and exit with :wq. Let's raise user redhat's hard limit on hard disk usage to 8 MB:
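Running `edquota -u redhat` opens an editor with a table like the following (the values and device name are illustrative); edit the blocks "hard" column to raise the limit:

```
Disk quotas for user redhat (uid 1001):
  Filesystem     blocks    soft    hard   inodes   soft   hard
  /dev/sda1           0    3072    8192        0      3      6
```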

The test verification is as follows

 Soft and hard links

Soft link (symbolic link): also called a symbolic link, it contains only the name and path of the target file, like a label that records an address. When the original file is deleted or moved, the link becomes invalid and can no longer be accessed. Soft links can be made to files and directories, and crossing file systems is not a problem. In this respect it behaves like a "shortcut" in Windows. The effect from the user's point of view is shown in Figure 6-15.

Chapter 6 Storage Structure and Management of Hard Disks

If the original file is deleted, the symbolic link becomes broken:

If the original file of the symbolic link is deleted and then re-created, the symbolic link works again and shows the newly added content.

Hard link: it can be understood as another pointer to the original file's data. The system creates an additional directory entry that points to the same inode as the original file, so the hard-linked file is identical to the original, just with a different name. Each new hard link increases the file's link count by 1, and only when the link count drops to 0 is the file considered truly deleted. In other words, because a hard link points directly at the file's data, the data remains accessible through the hard link even after the original name is deleted. Note that, due to technical limitations, hard links cannot be made to directories or across partitions.
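The difference can be demonstrated with a few commands in an empty directory (file names are illustrative):

```shell
echo "hello" > original.txt
ln original.txt hardlink.txt      # hard link: a second name for the same inode
ln -s original.txt softlink.txt   # soft link: stores only the path
ls -li original.txt hardlink.txt  # same inode number, link count is 2

rm original.txt
cat hardlink.txt                  # data still accessible through the hard link
cat softlink.txt 2>/dev/null || echo "softlink is broken"
```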


Common parameters of the ln command:

-s Create a "symbolic link" (without -s, a hard link is created by default)
-f Force creation of the link, overwriting an existing file or directory
-i Prompt before overwriting
-v Show the process of creating the link (verbose)

 RAID (Redundant Array of Independent Disks)

The characteristics of RAID 0, RAID 1, RAID 5 and RAID 10 are as follows:

RAID level  Min. disks  Usable capacity  Read/write performance  Safety  Features
0           2           n                n                       low     Pursues maximum capacity and speed; if any disk fails, all data is lost.
1           2           n/2              n                       high    Pursues maximum safety; as long as one disk in the group survives, data is unaffected.
5           3           n-1              n-1                     medium  Balances capacity, speed and safety under cost control; one disk may fail without the data being affected.
10          4           n/2              n/2                     high    Combines the advantages of RAID 1 and RAID 0; up to half the disks may fail (as long as they are not in the same mirror pair) without the data being affected.

Common parameters and functions of the mdadm command

Parameter  Effect
-a         Add a device to the array
-n         Specify the number of devices
-l         Specify the RAID level
-C         Create an array
-v         Show the process (verbose)
-f         Mark a device as faulty (simulate damage)
-r         Remove a device
-Q         View summary information
-D         View detailed information
-S         Stop the RAID array

 

 Deploy disk array

First, add four hard disk devices to the virtual machine to build a RAID 10 array; remember to use the SCSI or SATA interface type. For a quick demonstration, 20 GB per disk is enough to verify the effect.

 The -C parameter creates a RAID array; the -v parameter shows the creation process. A device name, /dev/md0, is given at the same time, so /dev/md0 is the name of the RAID array after creation. The -n 4 parameter means four hard disks are used to build the array, and -l 10 selects the RAID 10 scheme. Finally, append the names of the four disk devices and you are done.
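Putting those parameters together, the creation command looks like this (the device names /dev/sdb through /dev/sde are assumptions; match them to your own disks, and root is required):

```
mdadm -Cv /dev/md0 -n 4 -l 10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
```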

View the detailed progress of RAID generation

 

Format the prepared RAID disk array into ext4 format

Create a mount point and mount the device. After the mount succeeds you can see about 40 GB of available space (because RAID 10 keeps only 50% of the raw capacity usable).
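The format-and-mount steps might look like this sketch (the /RAID mount point name is an assumption; run as root):

```
mkfs.ext4 /dev/md0
mkdir /RAID
mount /dev/md0 /RAID
df -h | grep md0      # should show roughly 40G available
```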

And view the detailed information of the /dev/md0 disk array

Write the mount information to the configuration file to make it permanent.
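For example, append a line like this to /etc/fstab (the /RAID mount point name is illustrative):

```
/dev/md0   /RAID   ext4   defaults   0 0
```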

Damaged disk array and repair

RAID 10 arrays are deployed in production to improve both the I/O read/write speed of the storage devices and the safety of the data. Because this exercise uses disks simulated on a local machine, the speed improvement will not be obvious, but the recovery procedure is the same. Once you have confirmed that a physical disk has failed and can no longer be used normally, use the mdadm command to remove it, then check that the state of the RAID array has changed.

 Simulate damage to one of the disks:

 Because we are simulating the hard disks in a virtual machine, first restart the system, then add the replacement disk back into the RAID array.
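After the reboot, the disk can be added back with mdadm's -a parameter (the device name /dev/sdb is an assumption):

```
mdadm /dev/md0 -a /dev/sdb
mdadm -D /dev/md0     # watch the rebuild progress
```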

 Disk array + backup disk

A RAID 10 array tolerates the failure of at most 50% of its disks, but there is an extreme case: if both disks in the same RAID 1 mirror pair fail, data is still lost. When deploying a RAID 5 array with a hot spare, at least 3 hard disks are required for the array plus one backup disk, so a total of 4 hard disk devices need to be simulated in the virtual machine. The setup is as follows:

-C create; -v show the process; -n number of devices; -l RAID level; -x number of backup (spare) disks
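So a RAID 5 array with one hot spare can be created like this (the device names /dev/sdc through /dev/sdf match the transcript that follows; root is required):

```
mdadm -Cv /dev/md0 -n 3 -l 5 -x 1 /dev/sdc /dev/sdd /dev/sde /dev/sdf
```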

Once synchronization is complete, the spare disk can be seen in the array details:

[root@my-server ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sun Jan 24 17:04:26 2021
        Raid Level : raid5
        Array Size : 10475520 (9.99 GiB 10.73 GB)
     Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sun Jan 24 17:04:53 2021
             State : clean 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : my-server:0  (local to host my-server)
              UUID : 3d867578:36690051:d95c1b74:0b36b19f
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/sdc
       1       8       48        1      active sync   /dev/sdd
       4       8       64        2      active sync   /dev/sde

       3       8       80        -      spare   /dev/sdf

Format the deployed RAID 5 array with the ext4 file system, then mount it to a directory.

Now remove the hard disk device /dev/sde from the array and quickly check the status of the /dev/md0 array: you will find that the spare disk has automatically taken its place and data synchronization has begun. This spare-disk technique is very practical in RAID, further improving data reliability on top of the redundancy the array already provides. So if the company is not short of money, it is worth buying an extra spare disk just in case.

[root@my-server ~]# mdadm /dev/md0 -f /dev/sde
mdadm: set /dev/sde faulty in /dev/md0
[root@my-server ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sun Jan 24 17:04:26 2021
        Raid Level : raid5
        Array Size : 10475520 (9.99 GiB 10.73 GB)
     Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sun Jan 24 17:15:20 2021
             State : clean, degraded, recovering 
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 29% complete

              Name : my-server:0  (local to host my-server)
              UUID : 3d867578:36690051:d95c1b74:0b36b19f
            Events : 24

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/sdc
       1       8       48        1      active sync   /dev/sdd
       3       8       80        2      spare rebuilding   /dev/sdf

       4       8       64        -      faulty   /dev/sde
[root@my-server ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sun Jan 24 17:04:26 2021
        Raid Level : raid5
        Array Size : 10475520 (9.99 GiB 10.73 GB)
     Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Sun Jan 24 17:15:39 2021
             State : clean 
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : my-server:0  (local to host my-server)
              UUID : 3d867578:36690051:d95c1b74:0b36b19f
            Events : 37

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/sdc
       1       8       48        1      active sync   /dev/sdd
       3       8       80        2      active sync   /dev/sdf

       4       8       64        -      faulty   /dev/sde
[root@my-server ~]# 

 Restart the virtual machine and remount it again

 


Origin blog.csdn.net/yanghuadong_1992/article/details/113088880