RAID Disk Arrays and LVM

One, RAID (Redundant Array of Independent Disks)

RAID technology combines multiple physical hard disks into one larger, safer disk array. Data is split into segments stored on the different physical disks, and striping is used to improve the array's overall performance; copies of important data can also be synchronized to different physical disks, providing very good data redundancy and backup.

RAID does provide very good data redundancy, but at a correspondingly higher cost. Because RAID not only reduces the chance of losing data when a disk is damaged but also improves read/write speed, it is widely deployed by most carriers and by small and medium-sized enterprises.

For reasons of cost and technology, the read/write performance and data reliability requirements of each use case must be weighed against one another, and a different solution chosen to match each set of needs.

1、RAID 0

RAID 0 combines multiple physical hard disks (at least two), via hardware or software, into one large volume group, and writes data to the member disks in turn. In the ideal case, read and write performance therefore increases several-fold; but if any one disk fails, all of the data in the array is destroyed. In plain terms, RAID 0 effectively improves disk throughput but offers no backup or repair capability.

2、RAID 1

Although RAID 0 improves disk access speed, its data is written sequentially across the physical disks, i.e. stored in separate pieces, so the failure of any one disk destroys the whole array's data. If the production environment does not demand high read/write speed but does require better data security, use RAID 1 instead. RAID 1 emphasizes data security: the same data is written to multiple disks, so disk utilization drops, and because the data must be written simultaneously to two or more disks, the system's computational load also rises to some extent.

3、RAID 5

RAID 5 stores parity information for the data across the member disks. The parity for a given piece of data is not kept on a single dedicated disk, but on the disks other than the one holding that data, so the failure of any single device is not fatal. RAID 5 does not actually keep a backup copy of the data; instead, when a disk fails, it attempts to reconstruct the damaged data from the parity information. This "compromise" balances read/write speed, data security, and storage cost.

4、RAID 10

RAID 10 is a "combination" of RAID 1 and RAID 0. It requires at least four disks: the disks are first paired into RAID 1 mirrors to ensure data security, and the RAID 1 arrays are then striped with RAID 0 to further improve read/write speed. In theory, as long as both disks of the same mirror pair do not fail together, up to 50% of the disks can be lost without losing data. Because RAID 10 inherits the high speed of RAID 0 and the data security of RAID 1, and its performance exceeds RAID 5 when cost is not the primary concern, it is now in widespread use.
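The capacity trade-offs of the four levels above can be summarized with a little shell arithmetic. This is a simplified sketch: N and S are example values chosen for illustration, and real arrays lose a little extra space to metadata.

```shell
#!/bin/sh
# Usable capacity of an array built from N disks of S GB each
# (simplified: ignores RAID metadata overhead).
N=4      # number of member disks (example value)
S=500    # size of each disk in GB (example value)

echo "RAID 0:  $((N * S)) GB"          # striping only, no redundancy
echo "RAID 1:  $S GB"                  # every disk holds a full copy
echo "RAID 5:  $(((N - 1) * S)) GB"    # one disk's worth of parity
echo "RAID 10: $((N / 2 * S)) GB"      # mirrored pairs, then striped
```

With four 500 GB disks this prints 2000, 500, 1500, and 1000 GB respectively, which makes the cost of each level's redundancy concrete.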

5、The mdadm command

Purpose: manage software RAID disk arrays on Linux.

Format: mdadm [mode] <RAID device name> [options] [member device names]

Options:

Parameter   Effect
-a          Detect device names
-n          Specify the number of devices
-l          Specify the RAID level
-C          Create an array
-v          Show the process verbosely
-f          Simulate device failure
-r          Remove a device
-Q          View summary information
-D          View detailed information
-S          Stop a RAID array
-x          Specify the number of spare (backup) disks
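Putting these options together, a typical software RAID 10 session might look like the sketch below. The array name /dev/md0 and the member disks /dev/sdb through /dev/sde are illustrative assumptions, and every command requires root privileges and real (or virtual) spare disks.

```shell
# Create a RAID 10 array named /dev/md0 from four member disks
# (-C create, -v verbose, -n device count, -l RAID level).
mdadm -Cv /dev/md0 -n 4 -l 10 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# View detailed information about the array (-D).
mdadm -D /dev/md0

# Simulate the failure of one member (-f), then remove it (-r).
mdadm /dev/md0 -f /dev/sdb
mdadm /dev/md0 -r /dev/sdb

# Add a replacement disk (-a); the array rebuilds onto it.
mdadm /dev/md0 -a /dev/sdb

# Stop the array (-S) when it is no longer needed.
mdadm -S /dev/md0
```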

Two, LVM (Logical Volume Manager)

LVM lets users adjust hard disk resources dynamically. The Logical Volume Manager is a mechanism for managing disk partitions on Linux, originally created to solve the problem that a partition's size is hard to change once the partition exists: forcibly expanding or shrinking a traditional partition is theoretically possible, but risks data loss. LVM adds a logical layer between the disk partitions and the file system, providing a volume group abstraction in which multiple disks can be combined, so users can adjust partitions dynamically without worrying about the underlying architecture and layout of the machine's physical disks.

At the bottom of LVM are physical volumes (PV), which can be ordinary disk partitions, whole disks, or RAID arrays. A volume group (VG) is built on top of physical volumes; one volume group can contain several physical volumes, and new physical volumes can still be added after the group is created. Logical volumes (LV) are created from the volume group's free space, and a logical volume can be dynamically extended or reduced after it is created. These are the core concepts of LVM.

1、Creating a logical volume

Common LVM deployment commands:

Function    Physical volume   Volume group   Logical volume
Scan        pvscan            vgscan         lvscan
Create      pvcreate          vgcreate       lvcreate
Display     pvdisplay         vgdisplay      lvdisplay
Delete      pvremove          vgremove       lvremove
Extend      —                 vgextend       lvextend
Reduce      —                 vgreduce       lvreduce

① Make the two newly added hard disk devices support LVM.

[root@yxf]# pvcreate /dev/sdb /dev/sdc

Physical volume "/dev/sdb" successfully created

Physical volume "/dev/sdc" successfully created

② Add the two disks to a volume group, then check the volume group's status.

[root@yxf]# vgcreate storage /dev/sdb /dev/sdc

Volume group "storage" successfully created

[root@yxf]# vgdisplay

③ Carve out a logical volume device of about 150 MB.

There are two units of measurement when carving out a logical volume. The first is capacity, using the -L parameter; for example, -L 150M creates a 150 MB logical volume. The other is the number of physical extents, using the -l parameter; each extent is 4 MB by default, so -l 37 creates a logical volume of 37 × 4 MB = 148 MB.
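The extent arithmetic in the paragraph above can be verified directly in the shell (assuming the default 4 MB physical extent size; the extent size can be changed when the volume group is created):

```shell
#!/bin/sh
# Size of a logical volume created with -l 37,
# assuming the default 4 MB physical extent size.
EXTENTS=37
EXTENT_MB=4
echo "$((EXTENTS * EXTENT_MB)) MB"   # prints "148 MB"
```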

[root@yxf]# lvcreate -n vo -l 37 storage

Logical volume "vo" created

[root@yxf]# lvdisplay
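The later steps run e2fsck against /dev/storage/vo and write into /linuxprobe, which implies the new volume was formatted with an ext file system and mounted there. A sketch of that intermediate step follows; the choice of ext4 and the /linuxprobe mount point are inferred from those later commands, not shown in the original transcript.

```shell
# Format the new logical volume with ext4 and mount it.
# (ext4 and /linuxprobe are inferred from the later
#  e2fsck/resize2fs and umount commands in this tutorial.)
mkfs.ext4 /dev/storage/vo
mkdir -p /linuxprobe
mount /dev/storage/vo /linuxprobe
df -h /linuxprobe   # confirm the mount and its size
```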

2、Extending a logical volume

Extend the logical volume vo to 290 MB.

[root@yxf]# lvextend -L 290M /dev/storage/vo

Rounding size to boundary between physical extents: 292.00 MiB

Extending logical volume vo to 292.00 MiB

Logical volume vo successfully resized

Check the file system's integrity, then resize it to match the new capacity.

[root@yxf]# e2fsck -f /dev/storage/vo

e2fsck 1.42.9 (28-Dec-2013)

Pass 1: Checking inodes, blocks, and sizes

Pass 2: Checking directory structure

Pass 3: Checking directory connectivity

Pass 4: Checking reference counts

Pass 5: Checking group summary information

/dev/storage/vo: 11/38000 files (0.0% non-contiguous), 10453/151552 blocks

[root@yxf]# resize2fs /dev/storage/vo

resize2fs 1.42.9 (28-Dec-2013)

Resizing the filesystem on /dev/storage/vo to 299008 (1k) blocks.

The filesystem on /dev/storage/vo is now 299008 blocks long.

3、Shrinking a logical volume

Compared with extending a logical volume, shrinking one carries a greater risk of data loss, so always back up your data before performing this operation in production. Linux also requires that the file system's integrity be checked before an LVM logical volume is shrunk (again, to keep the data safe). Remember to unmount the file system before shrinking.

① Check the file system's integrity.

[root@yxf]# e2fsck -f /dev/storage/vo

e2fsck 1.42.9 (28-Dec-2013)

Pass 1: Checking inodes, blocks, and sizes

Pass 2: Checking directory structure

Pass 3: Checking directory connectivity

Pass 4: Checking reference counts

Pass 5: Checking group summary information

/dev/storage/vo: 11/74000 files (0.0% non-contiguous), 15507/299008 blocks

② Reduce the capacity of the logical volume vo to 120 MB.

[root@yxf]# resize2fs /dev/storage/vo 120M

resize2fs 1.42.9 (28-Dec-2013)

Resizing the filesystem on /dev/storage/vo to 122880 (1k) blocks.

The filesystem on /dev/storage/vo is now 122880 blocks long.

[root@yxf]# lvreduce -L 120M /dev/storage/vo

WARNING: Reducing active logical volume to 120.00 MiB

THIS MAY DESTROY YOUR DATA (filesystem etc.)

Do you really want to reduce vo? [y/n]: y

Reducing logical volume vo to 120.00 MiB

Logical volume vo successfully resized

4、Logical volume snapshots

LVM also has a "snapshot volume" feature, similar to the restore-point feature of virtual machine software. For example, you can take a snapshot of a logical volume device; if you later find that the data has been changed incorrectly, you can restore it from the previously made snapshot volume. LVM snapshots have two characteristics:

the snapshot volume's capacity must equal the logical volume's capacity;

a snapshot volume is valid only once: as soon as the restore operation is performed, it is automatically deleted.

① Use the -s parameter to create a snapshot volume and the -L parameter to specify its size. The command must also name the logical volume the snapshot is taken from.

[root@yxf]# lvcreate -L 120M -s -n SNAP /dev/storage/vo

Logical volume "SNAP" created

② Create a 100 MB junk file in the directory where the logical volume is mounted, then check the snapshot volume's status; its used space will have risen.

[root@yxf]# dd if=/dev/zero of=/linuxprobe/files count=1 bs=100M

1+0 records in

1+0 records out

104857600 bytes (105 MB) copied, 3.35432 s, 31.3 MB/s
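The transcript above omits the command for actually checking the snapshot's usage; one way to see the increase (a sketch) is:

```shell
# Inspect the snapshot volume; the "Allocated to snapshot"
# percentage rises as the origin volume is written to.
lvdisplay /dev/storage/SNAP
```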

③ To verify the effect of the SNAP snapshot volume, restore the logical volume from the snapshot. Remember to unmount the logical volume from its directory first.

[root@yxf]# umount /linuxprobe

[root@yxf]# lvconvert --merge /dev/storage/SNAP

Merging of volume SNAP started.

vo: Merged: 21.4%

vo: Merged: 100.0%

Merge of snapshot into logical volume vo has finished.

Logical volume "SNAP" successfully removed

④ The snapshot volume is deleted automatically, and the 100 MB junk file created after the snapshot was taken has been removed as well.

5、Deleting a logical volume

① Unmount the logical volume from its directory, and delete the device's permanent entry from the configuration file.

② Delete the logical volume device; you must enter y to confirm the operation.

[root@yxf]# lvremove /dev/storage/vo

Do you really want to remove active logical volume vo? [y/n]: y

Logical volume "vo" successfully removed

③ Delete the volume group; only the volume group name is needed here, not the device's absolute path.

[root@yxf]# vgremove storage

Volume group "storage" successfully removed

④ Delete the physical volume devices.

[root@yxf]# pvremove /dev/sdb /dev/sdc

Labels on physical volume "/dev/sdb" successfully wiped

Labels on physical volume "/dev/sdc" successfully wiped


Origin www.cnblogs.com/yxf-/p/11409878.html