12. Using RAID disk array technology (Part 1)

7.1 RAID (Redundant Array of Independent Disks)

In recent years, CPU processing performance has maintained rapid growth; the i9-7980XE processor that Intel released in 2017 reaches 18 cores and 36 threads. Hard disk performance, by contrast, has not improved nearly as much, so disks have gradually become the bottleneck of overall performance in modern computers. Moreover, because a hard disk must sustain frequent, heavy I/O operations, it is considerably more likely to be damaged than other components, which in turn raises the risk of losing important data.

In 1988, the University of California, Berkeley first defined the concept of RAID. RAID technology combines multiple hard disks into a disk array with larger capacity and better reliability: data is cut into segments that are stored on different physical disks, and striping is used to improve the overall performance of the array; in addition, multiple copies of important data are kept in sync on different physical disks, which provides very good redundancy and backup for the data.

Everything has two sides. RAID does provide very good data redundancy, but it also raises costs accordingly. Suppose we had only one phone book but, to avoid losing the contacts, wrote all the numbers into a second book as well; we would naturally have to buy that extra book, which raises the cost. Although RAID increases the expense of buying hard disks, that cost is small compared with the value of the data itself, and what modern enterprises value more is the redundancy mechanism RAID includes and the increase in disk throughput it brings. In other words, RAID not only reduces the chance of losing data when a disk is damaged, it also improves the read/write speed of the disks, so it is widely deployed and used by the vast majority of carriers and large and medium-sized enterprises.

For reasons of cost and technology, different RAID schemes strike different balances between read/write performance and data reliability to meet different needs. There are currently at least a dozen RAID schemes, but Mr. Liu Chuan will explain in detail below the four most common ones: RAID 0, RAID 1, RAID 5, and RAID 10.

1. RAID 0

RAID 0 combines multiple physical hard disks (at least two) in series, through hardware or software, into one large volume group, and writes data to each disk in turn. In the ideal case, the read/write performance of the disks is thereby multiplied, but if any one disk fails, the data of the entire system is damaged. Put simply, RAID 0 effectively improves the throughput of disk data, but it has no backup or error-repair capability. As shown in Figure 7-1, data is written separately to the different disks; that is, disk1 and disk2 each store part of the data, which ultimately achieves the effect of improved read and write speed.


Figure 7-1 Schematic of RAID 0 technology
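
As a preview of the mdadm tool covered later in this chapter, here is a minimal sketch of how such a two-disk RAID 0 array could be created (assuming /dev/sdb and /dev/sdc are spare disks on a scratch machine; this is not part of the exercise below):

[root@linuxprobe ~]# mdadm -Cv /dev/md0 -a yes -n 2 -l 0 /dev/sdb /dev/sdc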

2. RAID 1

Although RAID 0 improves the read/write speed of hard disks, it writes data to the disks segment by segment; in other words, the data is stored in separate parts, so the failure of any one disk damages the data of the entire system. Therefore, if your production environment does not demand high read/write speed but does require greater data security, you need RAID 1 technology.

As the schematic in Figure 7-2 shows, RAID 1 binds two or more hard disks together; when data is written, it is written simultaneously to all of the disks (which can therefore be regarded as mirrors or backups of the data). When one of the disks fails, a hot swap is generally performed automatically and immediately to restore normal use of the data.


Figure 7-2 Schematic of RAID 1 technology

Although RAID 1 places great emphasis on data security, the same data is written to multiple hard disks, so device utilization drops: in theory, only 50% of the disk space in Figure 7-2 is actually usable, a RAID 1 array composed of three disks has a utilization of only about 33%, and so on. Furthermore, because the data must be written simultaneously to two or more disks, this also increases the load on the system's computing resources to a certain extent.
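
Creating a mirror with mdadm differs from the RAID 0 sketch above only in the level flag; a minimal sketch under the same assumptions (two spare disks, a scratch machine):

[root@linuxprobe ~]# mdadm -Cv /dev/md0 -a yes -n 2 -l 1 /dev/sdb /dev/sdc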

So, is there a RAID scheme that takes both disk read/write speed and data security into account while also limiting cost? Actually, as far as data security and cost are concerned, it is impossible to keep the utilization of existing disks unchanged, add no new hardware, and still significantly improve data security. Mr. Liu Chuan has no need to mislead his readers: the RAID 5 technology explained next does, in theory, take all three factors into account (read/write speed, data security, cost), but in practice it is more of a mutual compromise among the three.

3. RAID 5

As shown in Figure 7-3, RAID 5 stores the parity information for the data on each of the other hard disks in the array. The parity information of a RAID 5 array is not stored separately on one dedicated disk, but on every disk other than the one holding the corresponding data, and the benefit of this is that the damage of any single device is not a fatal defect. The parity part in Figure 7-3 stores the parity information of the data; in other words, RAID 5 does not actually keep a real backup of the disk data, but when a disk fails it attempts to reconstruct the damaged data from the parity information. This kind of "compromise" takes into account the read/write speed of the disks, data security, and storage cost at the same time.


Figure 7-3 Schematic of RAID 5 technology
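
To see why parity is enough to rebuild a lost disk, here is a toy illustration of the XOR principle that RAID 5 relies on (the byte values are made up for the example, and this is not mdadm's actual on-disk layout). XOR-ing the data bytes yields the parity; XOR-ing the parity with the surviving bytes recovers the missing one:

[root@linuxprobe ~]# printf 'parity: 0x%02X\n' $(( 0xA5 ^ 0x3C ^ 0x0F ))
parity: 0x96
[root@linuxprobe ~]# printf 'rebuilt: 0x%02X\n' $(( 0x96 ^ 0xA5 ^ 0x0F ))
rebuilt: 0x3C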

4.  RAID 10

Because RAID 5, for reasons of hard disk cost, makes some compromise between read/write performance and data security, while what most enterprises care about is the value of the data itself rather than the price of the disks, production environments mainly use RAID 10 technology.

As the name suggests, RAID 10 is a "combination" of RAID 1 and RAID 0. As shown in Figure 7-4, RAID 10 requires at least four hard disks: first the disks are paired up into RAID 1 arrays to guarantee data security, and then RAID 0 is applied on top of the two RAID 1 arrays to further improve the read/write speed of the disks. In theory, as long as the failed disks are not all in the same group, up to 50% of the disks can be damaged without losing data. Because RAID 10 inherits the high read/write speed of RAID 0 and the data security of RAID 1 and, cost aside, its performance exceeds that of RAID 5, it has become a widely used storage technology today.


Figure 7-4 Schematic of RAID 10 technology

7.1.1 Deploying a disk array

With the hard disk management basics from the previous chapter in place, deploying RAID and LVM becomes very easy. First, add four hard disks to the virtual machine in order to build a RAID 10 array, as shown in Figure 7-5.


Figure 7-5 Adding four simulated hard disks to the virtual machine system

These disks are simulated; there is no need to go out and buy real physical disks to plug into your computer. Note, however, that you must shut down the system first and only then add the disks in the virtual machine; otherwise, differences in computer architecture may prevent the virtual machine system from recognizing the newly added disks.

The mdadm command is used to manage software RAID arrays in a Linux system, in the format "mdadm [mode] <RAID device name> [options] [member device names]".

Servers used in production environments today are generally equipped with RAID controller cards. Even though servers are getting cheaper and cheaper, there is no need to buy one just to run an experiment; instead, you can learn to create and manage software RAID arrays in a Linux system with the mdadm command, and the procedure involved is exactly the same as in a production environment. The common parameters of the mdadm command and their functions are shown in Table 7-1.

Table 7-1    Common mdadm parameters and their functions

Parameter  Function
-a         Detect device names
-n         Specify the number of devices
-l         Specify the RAID level
-C         Create an array
-v         Show the process verbosely
-f         Simulate a device failure
-r         Remove a device
-Q         Query summary information
-D         Query detailed information
-S         Stop a RAID array
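
The -f, -r and -S parameters are not exercised in this part, but as a rough preview (assuming an already-running array /dev/md0 with member disk /dev/sdb), a failure drill would combine them like this:

[root@linuxprobe ~]# mdadm /dev/md0 -f /dev/sdb    # mark the member disk as failed
[root@linuxprobe ~]# mdadm /dev/md0 -r /dev/sdb    # remove it from the array
[root@linuxprobe ~]# mdadm -D /dev/md0             # confirm the array's state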

 

Next, use the mdadm command to create a RAID 10 array named "/dev/md0".

As explained in Chapter 6, udev is the service in the Linux kernel that names hardware, and its naming rules are quite simple. From those rules we can infer that the second SCSI storage device will be named /dev/sdb, and so on. Building a RAID array from disks is rather like forming a class out of several students, but we can hardly name the class /dev/sdbcde. Although such a name would show at a glance which members make it up, it would be hard to remember and to read, and even more so if the array were built from 10, 50, or 100 disks.

This is where the mdadm parameters come in. The -C parameter means create a RAID array, and -v shows the creation process; they are followed by the device name /dev/md0, which will be the name of the RAID array after it is created. The -a yes parameter means automatically create the device file, -n 4 means use four disks to build the array, and -l 10 selects the RAID 10 scheme; finally, append the names of the four disks, and the command is complete.

[root@linuxprobe ~]# mdadm -Cv /dev/md0 -a yes -n 4 -l 10 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: size set to 20954624K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
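
While the newly created array synchronizes its mirrors in the background, the progress can be watched in /proc/mdstat; the output below is only illustrative of its shape, and the exact numbers will differ:

[root@linuxprobe ~]# cat /proc/mdstat
Personalities : [raid10]
md0 : active raid10 sde[3] sdd[2] sdc[1] sdb[0]
      41909248 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]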

Next, format the finished RAID array with the ext4 file system.

[root@linuxprobe ~]# mkfs.ext4 /dev/md0
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
2621440 inodes, 10477312 blocks
523865 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2157969408
320 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

Then, create a mount point and mount the device. After the mount succeeds, the available space can be seen to be 40GB.

[root@linuxprobe ~]# mkdir /RAID
[root@linuxprobe ~]# mount /dev/md0 /RAID
[root@linuxprobe ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-root 18G 3.0G 15G 17% /
devtmpfs 905M 0 905M 0% /dev
tmpfs 914M 84K 914M 1% /dev/shm
tmpfs 914M 8.9M 905M 1% /run
tmpfs 914M 0 914M 0% /sys/fs/cgroup
/dev/sr0 3.5G 3.5G 0 100% /media/cdrom
/dev/sda1 497M 119M 379M 24% /boot
/dev/md0 40G 49M 38G 1% /RAID

Finally, view the detailed information of the /dev/md0 array, and write the mount information into the configuration file so that it takes effect permanently.

[root@linuxprobe ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Tue May 5 07:43:26 2017
Raid Level : raid10
Array Size : 41909248 (39.97 GiB 42.92 GB)
Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Tue May 5 07:46:59 2017
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Name : localhost.localdomain:0 (local to host localhost.localdomain)
UUID : cc9a87d4:1e89e175:5383e1e8:a78ec62c
Events : 17
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
2 8 48 2 active sync /dev/sdd
3 8 64 3 active sync /dev/sde
[root@linuxprobe ~]# echo "/dev/md0 /RAID ext4 defaults 0 0" >> /etc/fstab
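
As an optional sanity check not in the original procedure, you can have mount process the new fstab entry and record the array in mdadm's configuration file so that it is assembled under the same name at boot (on RHEL-family systems this file is /etc/mdadm.conf):

[root@linuxprobe ~]# mount -a
[root@linuxprobe ~]# mdadm --detail --scan >> /etc/mdadm.conf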

 

 


