Marco Education week 37 blog - RAID and LVM

RAID

1. Removing hard disk partitions (two methods)

Step 1: delete the partitions.
Method one: wipe the whole partition table at once (clears every partition on disk b):
# dd if=/dev/zero of=/dev/sdb bs=1 count=512
Method two: delete a single partition with the fdisk command:
# fdisk /dev/sdc    (then d, 6, w: delete partition 6 and write the change)

Step 2: check with lsblk. lsblk shows the partitions the kernel holds in memory, while fdisk -l reads the partition table on disk, so the latter is more accurate. After deleting you may find the kernel still shows the old partitions and needs to be synchronized.

Step 3: synchronize the kernel after deleting partitions:
# partx -d --nr 1-4 /dev/sdb    (drop partitions 1-4 of disk b)
# partx -d --nr 1 /dev/sdc      (drop partition 1 of disk c)

2. Creating RAID

(1) The mdadm command creates a raid device. (2) ll /dev/md* shows the raid devices that have been created.

Create raid0:
Step 1: mdadm -C /dev/md0 -a yes -l 0 -n 2 /dev/sdb /dev/sde
-C creates and initializes the device, -a yes auto-creates the device file, -l 0 is the raid level, -n 2 means two member disks.
Step 2: mdadm -D /dev/md0    (-D shows details of the raid just created)
Step 3: format: mkfs.ext4 /dev/md0    (the array can now be formatted like any partition)
Step 4: mount:
# mkdir /mnt/raid0
# mount /dev/md0 /mnt/raid0
Step 5: test performance.
Write test: dd if=/dev/zero of=file bs=1M count=1024
Read test: dd if=file of=/dev/null

Create raid5:
Step 1: mdadm -C /dev/md5 -a yes -l 5 -n 3 -x 1 /dev/sdd /dev/sdf /dev/sdg /dev/sdh
-x 1 means one spare disk; the spare is normally the last device listed (/dev/sdh here). When a member disk fails while in use, the spare automatically replaces the damaged disk.
Step 2: format.
Step 3: mount and use it as a high-performance disk.
Step 4: disk failure. There is simulated failure and real failure.
Simulate a failure of sdb1: mdadm /dev/md5 -f /dev/sdb1    (-f marks it faulty)
Real failure: remove the disk in the virtual machine settings.
Remove a disk from the array: mdadm /dev/md5 -r /dev/sdb1    (-r remove)
Step 5: /etc/fstab automatically mounts file systems of all kinds: hard disk partitions, removable devices, remote devices and so on. To make raid5 mount permanently, write a line into /etc/fstab in this format:
/dev/md5 /mnt/raid5 ext4 defaults 0 0

3. Growing a raid
(1) Add a spare disk: mdadm /dev/md5 -a /dev/sde1
(2) Add a real raid member (raid5 was created with -n 3; growing the group from 3 to 4 members is different from adding a spare, since it expands capacity):
mdadm -G /dev/md5 -n4 -a /dev/sdf1    (-G grow)
Question: after adding the new member, the capacity does not go up. Why?
Reason: the filesystem was created before the new member was added, so it does not cover the new space, and the capacity does not increase.
Solution: grow the filesystem to match the raid:
# resize2fs /dev/md5
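As a sanity check on the two arrays created above, usable capacity can be worked out from the raid level. A minimal sketch, assuming (hypothetically) 10 GiB member disks; only the formulas follow from how raid0 and raid5 work:

```shell
#!/bin/sh
# Usable capacity of the arrays above, assuming every member disk is
# 10 GiB (a made-up figure for illustration).
disk_gib=10

# raid0 (-n 2): stripes across both members with no redundancy,
# so capacity is the sum of the members.
raid0_gib=$((2 * disk_gib))

# raid5 (-n 3 -x 1): one member's worth of space holds parity, so usable
# capacity is (n - 1) * member size; the spare contributes nothing until
# it replaces a failed disk.
raid5_gib=$(((3 - 1) * disk_gib))

echo "raid0: ${raid0_gib}G raid5: ${raid5_gib}G"
```

This also shows why growing md5 from 3 to 4 members raises raw capacity by one disk: the parity cost stays at one member's worth.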
4. Deleting a raid
Step 1: unmount: # umount /mnt/raid5
Step 2: stop the array: # mdadm -S /dev/md5    (-S stop)
Step 3: delete the member partitions.
Method one: delete them one by one with fdisk (d).
Method two: wipe the whole partition table: dd if=/dev/zero of=/dev/sdb bs=1 count=512
Step 4: delete the raid entry from /etc/fstab so the removal is permanent.
Problem: the members are not cleaned up completely; how to solve it?
Use mdadm --zero-superblock /dev/sdb1 to erase the superblock, that is, to delete the raid metadata on the member.

LVM

1. Advantages of lvm over raid partitions: disk space can be extended at any time, and hard drives can simply be plugged in. For example, if the root partition is on raid and runs out of space, fixing it is troublesome; with lvm the volume can be extended directly online, with no downtime, and users are not affected.

2. lvm concepts, simply put:
(1) Physical volume, pv (physical volume): corresponds to a single disk or disk partition in a raid configuration; physical volumes are named the same way as raid members.
(2) Volume group, vg (volume group): built by combining pvs. A vg can be understood as one big disk whose size is the sum of its pvs; the volume group name is chosen by you, for example vg0, and the member pvs of a vg may differ in size.
(3) Logical volume, lv (logical volume): carved out of a volume group, like a partition split off from the vg; the logical volume's space comes from the volume group, and ultimately from the physical volumes. Once created, a logical volume (partition) is formatted and can then be mounted and used.

3. Viewing commands:
(1) Physical volumes: pvs (# pvs), and pvdisplay for more detail (# pvdisplay)
(2) Volume groups: vgs, vgdisplay
(3) Logical volumes: lvs, lvdisplay
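The order of the raid deletion steps above matters: unmount before stopping, stop before wiping metadata. A dry-run sketch that only echoes the commands, since the real ones are destructive and need root; the device names are the examples from the text:

```shell
#!/bin/sh
# Dry run of the raid removal order described above. run() just echoes
# the command; drop the echo to execute for real (as root, on the
# correct devices).
run() { echo "would run: $*"; }

run umount /mnt/raid5                             # step 1: unmount first
run mdadm -S /dev/md5                             # step 2: -S stops the array
run mdadm --zero-superblock /dev/sdb1             # erase raid metadata on a member
run dd if=/dev/zero of=/dev/sdb bs=1 count=512    # step 3: wipe the partition table
```

If the superblock is not zeroed, the kernel can re-detect the old array from leftover metadata the next time it scans the disks.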
4. Creating a logical volume
Step 1: create physical volumes; here the two separate disks f and g are made into physical volumes:
# pvcreate /dev/sdf /dev/sdg
Step 2: create a volume group, that is, join the physical volumes into a group. PE (physical extent) is the allocation unit of the volume group, like the block of a file system; here the PE size is set to 16M and the volume group is named vg0:
# vgcreate -s 16M vg0 /dev/sdf /dev/sdg
-s 16M sets the PE size; later extensions must be an integer multiple of the PE.
Step 3: create logical volumes, that is, allocate space from the big warehouse that is the volume group:
# lvcreate -n lv0 -L 5G vg0
-n lv0: the name of the logical volume to create (your choice). vg0: there may be several volume groups, so you must specify which one to allocate from. -L 5G: how much to allocate. There are two allocation units: -l counts in PEs (a number of small units), -L gives the size directly in G. Percentages also work:
lvcreate -l 60%VG -n mylv testvg
lvcreate -l 100%FREE -n yourlv testvg
Step 4: format and mount.

5. Extending a logical volume
Step 1: the precondition for extending is that the vg has space left; check with vgdisplay.
Step 2: # lvextend -L +5g /dev/vg0/lv0
lvextend is the extension command. -L +5g means extend by 5g; -L 5g means extend to 5g. LE (logical extents) are the same size as PEs; the LE is the unit of a logical volume.

6. When the vg is not big enough, add a hard disk directly:
# vgextend vg0 /dev/sdd

7. Shrinking a logical volume:
# lvreduce -L 50G /dev/vg0/lv0

9. Migrating a volume group to a new machine (by moving the hard disks)
Step 1: rename the volume group: # vgrename vg0 newvg0
Step 2: rename the logical volume: # lvrename /dev/newvg0/lv0 newlv0
Step 3: unmount.
Step 4: deactivate the volume group (users cannot use it from this point; the volume group is unavailable and the logical volumes on it cannot be used):
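Because lvm allocates in whole PEs, the numbers in the vg0/lv0 example above can be checked with a little arithmetic. A sketch using the values from the text (16M PE size, 5G volume):

```shell
#!/bin/sh
# Extent arithmetic for the vg0/lv0 example above: PE size 16M, LV size 5G.
pe_mib=16
lv_mib=$((5 * 1024))    # -L 5G expressed in MiB

# 5G happens to be an exact multiple of 16M, so lvcreate needs no
# rounding here; otherwise it rounds the request up to whole extents.
extents=$((lv_mib / pe_mib))
remainder=$((lv_mib % pe_mib))

echo "lv0 occupies ${extents} extents (remainder ${remainder}M)"
```

The same arithmetic explains why "-L +5g" adds another 320 extents, and why every size you request must end up as an integer multiple of the PE.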
# vgchange -an newvg0    (-a: active/available, n: no)
Step 5: export the volume group: # vgexport newvg0
pvscan command: view the physical volumes; run it before exporting.
Step 6: remove the hard drives.
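The whole migration can be put together as one sequence. An echo-only sketch, since the real commands need root and live disks; the vg/lv names are the examples from the text and the mount point is hypothetical:

```shell
#!/bin/sh
# Dry run of the volume-group migration described above; echoes the
# commands in order instead of executing them.
run() { echo "would run: $*"; }

run vgrename vg0 newvg0                # step 1: rename the vg
run lvrename /dev/newvg0/lv0 newlv0    # step 2: rename the lv
run umount /mnt/lv0                    # step 3: unmount (hypothetical mount point)
run vgchange -an newvg0                # step 4: deactivate (-a n)
run pvscan                             # check the pvs before exporting
run vgexport newvg0                    # step 5: export, then pull the disks
```

Deactivating before exporting matters: an active volume group still has its logical volumes in use, and exporting it out from under users would lose data.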


Origin www.cnblogs.com/ishaping/p/10965575.html