Linux LVM Logical Volumes and RAID Basics

▼RAID

RAID: Redundant Arrays of Inexpensive (or Independent) Disks
Proposed in 1988 in the paper "A Case for Redundant Arrays of Inexpensive Disks" from the University of California, Berkeley
Combines multiple disks into one "array" to provide better performance, redundancy, or both

RAID

Improves I/O capability:
  disks read and write in parallel
Improves durability:
  achieved through disk redundancy
Level: the different ways multiple disks are grouped together to work

RAID implementation modes

  1. External disk array: RAID capability provided by an expansion adapter card
  2. Internal hardware RAID: RAID controller integrated on the motherboard, configured in the BIOS before the OS is installed
  3. Software RAID: implemented by the OS

RAID levels

  • RAID-0 (striped volume, stripe):
    improved read and write performance
    available space: N * min(S1, S2, ...)
    no fault tolerance
    minimum number of disks: 2, 2+
  • RAID-1 (mirrored volume, mirror):
    improved read performance; write performance slightly reduced
    available space: 1 * min(S1, S2, ...)
    redundant
    minimum number of disks: 2, 2N
  • RAID-4:
    the data disks are XORed together and the result is stored on a dedicated parity disk
  • RAID-5:
    improved read and write performance
    available space: (N-1) * min(S1, S2, ...)
    fault tolerant: at most one failed disk
    minimum number of disks: 3, 3+
  • RAID-6:
    improved read and write performance
    available space: (N-2) * min(S1, S2, ...)
    fault tolerant: at most two failed disks
    minimum number of disks: 4, 4+
  • RAID-7:
    can be understood as an independent storage computer with its own operating system and management tools; can run on its own; theoretically the highest-performance RAID mode
  • RAID-10:
    improved read and write performance
    available space: N * min(S1, S2, ...) / 2
    fault tolerant: each mirror pair can lose at most one disk
    minimum number of disks: 4, 4+
  • RAID-01: build RAID-0 arrays on the disks first, then combine them into a RAID-1
  • RAID-50: build RAID-5 arrays first, then combine them into a RAID-0
  • JBOD (Just a Bunch Of Disks):
    multiple disks are combined and used as one large contiguous space
    available space: sum(S1, S2, ...)
  • Common levels:
    RAID-0, RAID-1, RAID-5, RAID-10, RAID-50, JBOD
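As a quick check of the space formulas above, take four 1TB disks (N = 4, min(S1, S2, ...) = 1TB):

    RAID-0:  4 * 1TB = 4TB usable, no failure tolerated
    RAID-5:  (4-1) * 1TB = 3TB usable, one failure tolerated
    RAID-6:  (4-2) * 1TB = 2TB usable, two failures tolerated
    RAID-10: 4 * 1TB / 2 = 2TB usable, one failure per mirror pair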

▼ Software RAID

  • mdadm: provides a management interface for software RAID
  • redundancy can be added in the form of spare disks
  • works together with the kernel's md (multi devices) module
  • RAID devices are named /dev/md0, /dev/md1, /dev/md2, /dev/md3, etc.

Implementing software RAID

  • mdadm: a modal tool (its behavior depends on the selected mode)

  • Command syntax:
    mdadm [MODE] <raiddevice> [options] <component-devices>

  • Supported RAID levels: LINEAR, RAID0, RAID1, RAID4, RAID5, RAID6, RAID10

  • [MODE]:
    Create: -C
    Assemble: -A
    Monitor: -F
    Manage: -f, -r, -a

  • <raiddevice>: /dev/md#

  • <component-devices>: any block device

  • [MODE] details:

    • -C: create mode
      -n #: number of member devices used to create the RAID
      -l #: the RAID level to create
      -a {yes|no}: automatically create device files for the target RAID device
      -c CHUNK_SIZE: specify the chunk size, in KB
      -x #: number of spare disks
    • -D: show details of the RAID
      mdadm -D /dev/md#
    • -f: mark the specified disk as faulty
    • -a: add a disk
    • -r: remove a disk
  • Observe md state: cat /proc/mdstat
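For reference, on a healthy two-member RAID1 array, cat /proc/mdstat prints something roughly like the following (device names and block counts are illustrative):

    Personalities : [raid1]
    md0 : active raid1 sdc1[1] sdb1[0]
          1046528 blocks super 1.2 [2/2] [UU]

    unused devices: <none>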

Software RAID configuration example

  1. Use mdadm to create and define the RAID device
    mdadm -C /dev/md0 -a yes -l 5 -n 3 -x 1 /dev/sd{b,c,d,e}1
  2. Create a file system on the RAID device
    mkfs.xfs /dev/md0
  3. Test the RAID device: check its status with mdadm
    mdadm -D /dev/md0    # -D is the short form of --detail
  4. Add a new member
    mdadm -G /dev/md0 -n 4 -a /dev/sdf1
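Note that growing the array does not grow the file system on it; for the xfs file system created in step 2, something like the following would still be needed afterwards (the mount point is illustrative):

    xfs_growfs /mnt/raid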

Software RAID testing and repair

  1. Simulate a disk failure
    mdadm /dev/md0 -f /dev/sda1
  2. Remove the failed disk
    mdadm /dev/md0 -r /dev/sda1
  3. Repair the software RAID after the disk failure
    • replace the failed disk and boot
    • rebuild the partition on the replacement drive
    mdadm /dev/md0 -a /dev/sda1
  4. Check mdadm, /proc/mdstat and the system log for status information

Software RAID management

  1. Generate the configuration file: mdadm -D -s >> /etc/mdadm.conf
  2. Stop a device: mdadm -S /dev/md0
  3. Activate a device: mdadm -A -s /dev/md0
  4. Force-start a device: mdadm -R /dev/md0
  5. Delete RAID metadata from a member device: mdadm --zero-superblock /dev/sdb1
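Putting these together, completely removing an array might look like this sketch (device names and the mount point are illustrative):

    umount /backup
    mdadm -S /dev/md0
    mdadm --zero-superblock /dev/sd{b,c,d}1
    # finally remove the array's entries from /etc/fstab and /etc/mdadm.conf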

▼ Logical Volume Manager (LVM)

An abstraction layer that makes volumes convenient to operate on, including resizing file systems,
and allows a file system to be reorganized across multiple physical devices
  • devices are designated as physical volumes
  • a volume group is created from one or more physical volumes
  • a physical volume is defined in terms of fixed-size physical extents (Physical Extent, PE)
  • logical volumes are created on top of physical volumes and are composed of physical extents (PE)
  • file systems can be created on logical volumes

LVM Introduction

  • LVM: Logical Volume Manager, version 2
    • PE (Physical Extent): physical extent, the smallest allocatable storage unit in a PV; its size can be specified when the VG is created (4MB by default).
    • PV (Physical Volume): physical volume; can be a whole physical disk or a partition.
    • LV (Logical Volume): logical volume; its size can be changed dynamically.
    • VG (Volume Group): volume group, composed of one or more PVs.
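For instance, the PE size is fixed per volume group when it is created, and can be inspected afterwards (device and VG names are illustrative):

    vgcreate -s 16M testvg /dev/sdb1   # set the PE size to 16MB at VG creation
    vgdisplay testvg | grep "PE Size"  # confirm the PE size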

  • dm: device mapper, a kernel module that organizes one or more underlying block devices into one logical device
  • Device names: /dev/dm-#
  • Soft links:
    /dev/mapper/VG_NAME-LV_NAME
    /dev/mapper/vol0-root
    /dev/VG_NAME/LV_NAME
    /dev/vol0/root
  • How LVM changes file system capacity:
    LVM resizes volumes elastically by exchanging PEs:
    PEs are transferred from the LV to other devices to reduce the LV's capacity, or PEs on other devices are added to the LV to increase its capacity
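A minimal sketch of this PE movement, assuming a VG that contains /dev/sdb1 and /dev/sdc1:

    pvdisplay -m                 # show how the PEs on each PV map to LVs
    pvmove /dev/sdb1 /dev/sdc1   # migrate the PEs on sdb1 over to sdc1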

pv management tools

  • Display pv information (pvs: brief pv information)
    pvdisplay
  • Create a pv
    pvcreate /dev/DEVICE
  • Delete a pv
    pvremove /dev/DEVICE

vg management tools

  • Display volume group
    vgs
    vgdisplay
  • Create a volume group
    vgcreate [-s #[kKmMgGtTpPeE]] VolumeGroupName PhysicalDevicePath...
  • Volume Group Management
    vgextend VolumeGroupName PhysicalDevicePath...
    vgreduce VolumeGroupName PhysicalDevicePath...
  • Delete volume group
    run pvmove first, then vgremove
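A sketch of that order of operations, assuming a VG named testvg whose logical volumes are already unmounted and removed:

    pvmove /dev/sdb1   # migrate any PEs still in use off the PV first
    vgremove testvg    # then delete the volume group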

lv management tools

  • Displaying the Logical Volume
    lvs
    lvdisplay
  • Creating a Logical Volume
    lvcreate -L #[mMgGtT] -n NAME VolumeGroup
    lvcreate -l 60%VG -n mylv testvg
    lvcreate -l 100%FREE -n yourlv testvg
  • Removing a Logical Volume
    lvremove /dev/VG_NAME/LV_NAME
  • Resize the file system
    fsadm [options] resize device [new_size[BKMGTEP]]
    resize2fs [-f] [-F] [-M] [-P] [-p] device [new_size]
    xfs_growfs /mountpoint

Expansion and reduction of logical volumes

  • Extending a logical volume:
lvextend -L [+]#[mMgGtT] /dev/VG_NAME/LV_NAME
or: lvextend -l +1000 /dev/vg0/lv0   # add space equal to 1000 PEs
or: lvresize -r -l +100%FREE /dev/VG_NAME/LV_NAME   # add space and resize the file system in one step
resize2fs /dev/VG_NAME/LV_NAME   # grow an ext4 file system to match
xfs_growfs /mount_point          # grow an xfs file system to match
  • Reducing a logical volume (ext4 only; xfs file systems cannot be shrunk):
umount /dev/VG_NAME/LV_NAME
e2fsck -f /dev/VG_NAME/LV_NAME
resize2fs /dev/VG_NAME/LV_NAME #[mMgGtT]
lvreduce -L [-]#[mMgGtT] /dev/VG_NAME/LV_NAME
mount /dev/VG_NAME/LV_NAME /mount_point

Migrating a volume group across hosts

On the source computer

  1. On the old system, umount all logical volumes on the volume group
  2. Deactivate the volume group
vgchange -a n vg0
lvdisplay
  3. Export the volume group
vgexport vg0
pvscan
vgdisplay
  4. Remove the old hard disk
  5. Install the old disk in the new system and import the volume group: vgimport vg0
  6. Enable the volume group: vgchange -ay vg0
  7. mount all logical volumes on the volume group

partx -d --nr 1 /dev/sdb   # drop the kernel's in-memory record of partition 1 (e.g. after deleting it)
fuser -v /mount-point      # show processes using a mount point (check before umount)

Logical volume creation example

  1. Create a physical volume
    pvcreate /dev/sda3
  2. Assign the physical volume to a volume group
    vgcreate vg0 /dev/sda3
  3. Create a logical volume from the volume group
    lvcreate -L 256M -n data vg0
  4. Create a file system
    mkfs.xfs /dev/vg0/data
  5. Mount it
    mount /dev/vg0/data /mnt/data
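To keep this mount across reboots, a line can be added to /etc/fstab, in the same style as the fstab entries used in the exercises below:

    echo "/dev/vg0/data /mnt/data xfs defaults 0 0" >> /etc/fstab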

Moving data off a PV with pvmove (to remove a hard disk)

pvmove /dev/sdd         # migrate all PEs on /dev/sdd to other PVs in the VG
vgreduce vg0 /dev/sdd   # remove the PV from the volume group
pvremove /dev/sdd       # delete the PV label from the disk

Logical Volume Manager snapshots

  • A snapshot is a special logical volume; it is an exact copy of the original logical volume as it existed when the snapshot was taken
  • For operations that need a temporary copy of existing data, such as backups, a snapshot is the most appropriate choice
  • A snapshot begins to consume space only when data in the original logical volume changes
  • When a snapshot is created, some space is allocated to it, but that space is used only when the original logical volume or the snapshot itself changes
  • When the original logical volume changes, the old data is copied into the snapshot
  • A snapshot contains only data that has changed in the original logical volume, or that was changed in the snapshot itself, since the snapshot was taken
  • A snapshot logical volume can be smaller than or equal to the original volume, and a snapshot can be grown with lvextend
  • A snapshot records the state of the system at the moment it is taken, like a photograph; if data changes afterwards, the original data is moved into the snapshot area, while unchanged regions are shared between the snapshot and the original file system
  • Because a snapshot shares many PE blocks with the original LV, the snapshot must be in the same VG as the original LV; when restoring, the amount of changed data cannot exceed the actual capacity of the snapshot
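Because of that last constraint, it is worth watching how full a snapshot is; the Data% column that lvs reports for snapshot volumes is one way to do so (the VG name is illustrative):

    lvs vg0   # the Data% column shows how full each snapshot is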

Using LVM snapshots

  1. Take a snapshot of an existing logical volume
# an xfs snapshot can be mounted read-write
# an ext4 snapshot must be read-only (-p r)
lvcreate -l 64 -s -n data-snapshot -p r /dev/vg0/data
  2. Mount the snapshot
mkdir -p /mnt/snap
# mount read-only
mount -o ro /dev/vg0/data-snapshot /mnt/snap
  3. Restore from the snapshot
umount /dev/vg0/data-snapshot
umount /dev/vg0/data
lvconvert --merge /dev/vg0/data-snapshot
  4. Delete the snapshot
umount /mnt/snap
lvremove /dev/vg0/data-snapshot

Exercises

  1. Create a RAID1 device with 1G of available space and an ext4 file system, with one spare disk, automatically mounted at boot on the /backup directory
  2. Create a RAID5 device with 2G of available space from three disks, with a chunk size of 256K and an ext4 file system, automatically mounted at boot on the /mydata directory
  3. Create a 20G VG named testvg composed of at least two PVs, with a PE size of 16MB; then create a 5G logical volume testlv in the volume group and mount it on the /users directory
  4. Create a new user archlinux whose home directory is /users/archlinux, then su to the archlinux user and copy the /etc/pam.d directory into the home directory
  5. Extend testlv to 7G without losing the archlinux user's files
  6. Shrink testlv to 3G without losing the archlinux user's files
  7. Create a snapshot of testlv, and try backing up data from the snapshot to verify the snapshot function

Solutions to the exercises

  1. Create a RAID1 device with 1G of available space and an ext4 file system, with one spare disk, automatically mounted at boot on the /backup directory
# first add hard disks to the virtual machine

# create a 1G partition on each of the three disks
fdisk /dev/sdb
fdisk /dev/sdc
fdisk /dev/sdd
...
lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda       8:0    0  20G  0 disk
├─sda1    8:1    0   1G  0 part /boot
├─sda2    8:2    0  10G  0 part /
├─sda3    8:3    0   2G  0 part [SWAP]
├─sda4    8:4    0   1K  0 part
└─sda5    8:5    0   5G  0 part /data
sdb       8:16   0  20G  0 disk
└─sdb1    8:17   0   1G  0 part
sdc       8:32   0  20G  0 disk
└─sdc1    8:33   0   1G  0 part
sdd       8:48   0  20G  0 disk
└─sdd1    8:49   0   1G  0 part

# create the RAID1 array (one spare disk)
mdadm -C /dev/md0 -a yes -l 1 -n 2 -x 1 /dev/sd{b,c,d}1

# create the ext4 file system
mkfs.ext4 /dev/md0

# inspect the new RAID1 array
mdadm -D /dev/md0

# create the /backup mount point
mkdir /backup

# set up mounting at boot (look up the UUID first; the UUID below will differ per machine)
blkid /dev/md0
echo "UUID=f448ac3d-40bd-4293-a016-a4285fd890e5 /backup ext4 defaults 0 0" >> /etc/fstab

# mount everything in fstab
mount -a
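A quick sanity check that the array is mounted (output will vary):

df -h /backup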
  2. Create a RAID5 device with 2G of available space from three disks, with a chunk size of 256K and an ext4 file system, automatically mounted at boot on the /mydata directory
fdisk /dev/sdb
fdisk /dev/sdc
fdisk /dev/sdd
...
lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda       8:0    0  20G  0 disk
├─sda1    8:1    0   1G  0 part /boot
├─sda2    8:2    0  10G  0 part /
├─sda3    8:3    0   2G  0 part [SWAP]
├─sda4    8:4    0   1K  0 part
└─sda5    8:5    0   5G  0 part /data
sdb       8:16   0  20G  0 disk
└─sdb1    8:17   0   1G  0 part
sdc       8:32   0  20G  0 disk
└─sdc1    8:33   0   1G  0 part
sdd       8:48   0  20G  0 disk
└─sdd1    8:49   0   1G  0 part

# create the RAID5 array
mdadm -C /dev/md0 -a yes -l 5 -n 3 -c 256 /dev/sd{b,c,d}1

# create the ext4 file system
mkfs.ext4 /dev/md0

# set up mounting at boot (look up the UUID first; the UUID below will differ per machine)
mkdir /mydata
blkid /dev/md0
echo "UUID=5815c922-df18-46f4-975a-53172cff5aa9 /mydata ext4 defaults 0 0" >> /etc/fstab

# mount everything in fstab
mount -a
  3. Create a 20G VG named testvg composed of at least two PVs, with a PE size of 16MB; then create a 5G logical volume testlv in the volume group and mount it on the /users directory
# create two 10G partitions with fdisk
[root@centos7 ~]$lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda       8:0    0  20G  0 disk
├─sda1    8:1    0   1G  0 part /boot
├─sda2    8:2    0  10G  0 part /
├─sda3    8:3    0   2G  0 part [SWAP]
├─sda4    8:4    0   1K  0 part
└─sda5    8:5    0   5G  0 part /data
sdb       8:16   0  20G  0 disk
└─sdb1    8:17   0  10G  0 part
sdc       8:32   0  20G  0 disk
└─sdc1    8:33   0  10G  0 part

# create the physical volumes
pvcreate /dev/sd{b,c}1
# inspect the new physical volumes
pvs

# create the volume group
vgcreate -s 16M testvg /dev/sd{b,c}1
# inspect
vgs   # or vgdisplay

# create the logical volume
lvcreate -n testlv -L 5G testvg
# inspect
lvs

# create a file system and mount it (ext4, so that exercises 5 and 6 can use resize2fs)
mkfs.ext4 /dev/testvg/testlv
mkdir /users
mount /dev/testvg/testlv /users
  4. Create the new user archlinux with home directory /users/archlinux, then su to the archlinux user and copy the /etc/pam.d directory into the home directory
useradd -d /users/archlinux archlinux
su - archlinux
cp -r /etc/pam.d /users/archlinux
  5. Extend testlv to 7G without losing the archlinux user's files
lvextend -L 7G /dev/testvg/testlv
resize2fs /dev/testvg/testlv   # grow the ext4 file system to match
  6. Shrink testlv to 3G without losing the archlinux user's files
# unmount
umount /users
# check the file system
e2fsck -f /dev/testvg/testlv
# shrink the file system to 3G
resize2fs /dev/testvg/testlv 3G
# shrink the logical volume to 3G
lvreduce -L 3G /dev/testvg/testlv
# remount
mount /dev/testvg/testlv /users

  7. Create a snapshot of testlv, and try backing up data from the snapshot to verify the snapshot function

# create the snapshot (read-only, since testlv is ext4)
lvcreate -L 1G -s -n testlv-snap -p r /dev/testvg/testlv

# mount the snapshot read-only
mkdir -p /mnt/snap
mount -o ro /dev/testvg/testlv-snap /mnt/snap

# simulate data loss on the original volume
rm -f /users/test1.txt
umount /dev/testvg/testlv-snap
umount /dev/testvg/testlv

# restore from the snapshot
lvconvert --merge /dev/testvg/testlv-snap
