Cause of failure: after an abnormal server restart, one PV can no longer be identified. The symptoms look like this:
[root@app-03 ~]# pvs
WARNING: Device for PV 13uLNW-NTlU-vut8-rbjU-u3FX-CGR0-GghK6n not found or rejected by a filter.
Couldn't find device with uuid 13uLNW-NTlU-vut8-rbjU-u3FX-CGR0-GghK6n.
  PV         VG        Fmt  Attr PSize    PFree
  /dev/sda   backupvg  lvm2 a--  2.18t    187.61g
  /dev/sdf   oradatavg lvm2 a--  894.25g  252.50g
  /dev/sdn3  rootvg    lvm2 a--  64.00g   0
  /dev/sdn4  oraclevg  lvm2 a--  <300.00g <100.00g
  /dev/sdn5  rootvg    lvm2 a--  <54.00g  0
  /dev/sdn6  rootvg    lvm2 a--  <98.00g  0
  [unknown]  oradatavg lvm2 a-m  894.25g  0
[root@app-03 ~]# vgs
WARNING: Device for PV 13uLNW-NTlU-vut8-rbjU-u3FX-CGR0-GghK6n not found or rejected by a filter.
Couldn't find device with uuid 13uLNW-NTlU-vut8-rbjU-u3FX-CGR0-GghK6n.
  VG        #PV #LV #SN Attr   VSize    VFree
  backupvg    1   1   0 wz--n- 2.18t    187.61g
  oraclevg    1   1   0 wz--n- <300.00g <100.00g
  oradatavg   2   1   0 wz-pn- <1.75t   252.50g
  rootvg      3   5   0 wz--n- 215.99g  0
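Before rebuilding anything, it helps to confirm which physical device actually disappeared. A minimal sketch of that check (device names here follow this case, so treat them as assumptions):
lsblk                            # is /dev/sde visible to the kernel at all?
dmesg | grep -i -w sde           # any disk errors logged against it?
blkid | grep LVM2_member         # which block devices still carry a PV signature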
By default, Linux LVM records every PV/VG/LV operation and automatically backs up the current VG metadata into a text file at /etc/lvm/backup/<VG name>. Its contents roughly match what vgdisplay/pvdisplay/lvdisplay report, and crucially they include each PV's UUID, which is essential for restoring the VG. This file amounts to the metadata of the entire VG, and it is what we will use to restore it.
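LVM also archives older metadata versions under /etc/lvm/archive, and vgcfgrestore can enumerate every available restore point for a VG. A quick sketch:
ls -l /etc/lvm/backup/ /etc/lvm/archive/
vgcfgrestore --list oradatavg    # list all archived metadata versions for this VG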
In the backup file below you can see that pv0 (hinted as /dev/sde) carries the UUID of the missing disk:
[root@app-03 ~]# cat /etc/lvm/backup/oradatavg
# Generated by LVM2 version 2.02.180(2)-RHEL7 (2018-07-20): Wed Aug 28 14:42:20 2019
contents = "Text Format Volume Group"
version = 1
description = "Created *after* executing 'lvcreate -L 1572864M -n lv_oracle_data oradatavg'"
creation_host = "app-03" # Linux app-03 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64
creation_time = 1566974540 # Wed Aug 28 14:42:20 2019
oradatavg {
    id = "efT5Up-yfH0-qVus-MV1G-csWl-fhMW-B3jvba"
    seqno = 2
    format = "lvm2"                 # informational
    status = ["RESIZEABLE", "READ", "WRITE"]
    flags = []
    extent_size = 8192              # 4 Megabytes
    max_lv = 0
    max_pv = 0
    metadata_copies = 0
    physical_volumes {
        pv0 {
            id = "13uLNW-NTlU-vut8-rbjU-u3FX-CGR0-GghK6n"
            device = "/dev/sde"     # Hint only
            status = ["ALLOCATABLE"]
            flags = []
            dev_size = 1875385008   # 894.253 Gigabytes
            pe_start = 2048
            pe_count = 228928       # 894.25 Gigabytes
        }
        pv1 {
            id = "REBnPF-3f0t-a3w5-XHZ2-H0ML-T5og-edthrO"
            device = "/dev/sdf"     # Hint only
            status = ["ALLOCATABLE"]
            flags = []
            dev_size = 1875385008   # 894.253 Gigabytes
            pe_start = 2048
            pe_count = 228928       # 894.25 Gigabytes
        }
    }
    logical_volumes {
        lv_oracle_data {
            id = "tzR0gr-BKGI-yBts-48xW-B9ya-Flrv-gco616"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            creation_time = 1566974540  # 2019-08-28 14:42:20 +0800
            creation_host = "app-03"
            segment_count = 2
            segment1 {
                start_extent = 0
                extent_count = 228928   # 894.25 Gigabytes
                type = "striped"
                stripe_count = 1        # linear
                stripes = [
                    "pv0", 0
                ]
            }
            segment2 {
                start_extent = 228928
                extent_count = 164288   # 641.75 Gigabytes
                type = "striped"
                stripe_count = 1        # linear
                stripes = [
                    "pv1", 0
                ]
            }
        }
    }
}
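A note on the units above: dev_size and pe_start are counted in 512-byte sectors, and extent_size = 8192 sectors = 4 MiB per PE. The size comments in the file can be sanity-checked with shell arithmetic:
echo $(( 1875385008 * 512 / 1024 / 1024 / 1024 ))   # dev_size in GiB -> 894
echo $(( 228928 * 4 / 1024 ))                       # one PV's extents in GiB -> 894
echo $(( (228928 + 164288) * 4 / 1024 ))            # both LV segments in GiB -> 1536 (= 1.5 TiB)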
Recovery steps:
[root@app-03 ~]# pvcreate /dev/sde -u 13uLNW-NTlU-vut8-rbjU-u3FX-CGR0-GghK6n --restorefile /etc/lvm/backup/oradatavg // re-create the PV with its original UUID, restoring the metadata from the automatic backup file
Couldn't find device with uuid 13uLNW-NTlU-vut8-rbjU-u3FX-CGR0-GghK6n.
WARNING: Device for PV 13uLNW-NTlU-vut8-rbjU-u3FX-CGR0-GghK6n not found or rejected by a filter.
Physical volume "/dev/sde" successfully created.
[root@app-03 ~]# vgcfgrestore oradatavg // restore the VG metadata of oradatavg
Restored volume group oradatavg
Scan of VG oradatavg from /dev/sde found metadata seqno 3 vs previous 2.
Scan of VG oradatavg from /dev/sdf found metadata seqno 3 vs previous 2.
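Before going further, it is worth confirming that the PV really is back and that the VG is no longer partial (the 'p' in the wz-pn- attributes above); for example:
pvs -o pv_name,pv_uuid,vg_name   # /dev/sde should now report the restored UUID
vgs oradatavg                    # Attr should read wz--n- again instead of wz-pn-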
After the restore completes, mounting fails because the LV's device node does not exist:
[root@app-03 ~]# mount /dev/oradatavg/lv_oracle_data /oradata
mount: special device /dev/oradatavg/lv_oracle_data does not exist
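This is expected: the /dev/<vg>/<lv> symlink and the device-mapper node are created only when the LV is activated. A quick way to confirm nothing is mapped yet:
ls /dev/oradatavg/ 2>/dev/null   # symlinks appear only for active LVs
dmsetup ls | grep oradatavg      # no device-mapper entry until activation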
Yet lvs shows the LV is still there; note that the Attr column for lv_oracle_data lacks the 'a' (active) and 'o' (open) flags:
[root@app-03 ~]# lvs
  LV             VG        Attr       LSize    Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  lv_ora_backup  backupvg  -wi-ao---- 2.00t
  lv_oracle_home oraclevg  -wi-ao---- 200.00g
  lv_oracle_data oradatavg -wi------- 1.50t
  lvhome         rootvg    -wi-ao---- 1.00g
  lvopt          rootvg    -wi-ao---- 10.00g
  lvroot         rootvg    -wi-ao---- <73.00g
  lvswap         rootvg    -wi-ao---- 32.00g
  lvvar          rootvg    -wi-ao---- <100.00g
lvdisplay shows the LV status is still "NOT available" (inactive):
[root@app-03 ~]# lvdisplay
--- Logical volume ---
LV Path /dev/oradatavg/lv_oracle_data
LV Name lv_oracle_data
VG Name oradatavg
LV UUID tzR0gr-BKGI-yBts-48xW-B9ya-Flrv-gco616
LV Write Access read/write
LV Creation host, time app-03, 2019-08-28 14:42:20 +0800
LV Status NOT available
LV Size 1.50 TiB
Current LE 393216
Segments 2
Allocation inherit
Read ahead sectors auto
Activate the LV by activating its volume group:
[root@app-03 ~]# vgchange -ay oradatavg
1 logical volume(s) in volume group "oradatavg" now active
Running lvdisplay again now shows the LV as available:
--- Logical volume ---
LV Path /dev/oradatavg/lv_oracle_data
LV Name lv_oracle_data
VG Name oradatavg
LV UUID tzR0gr-BKGI-yBts-48xW-B9ya-Flrv-gco616
LV Write Access read/write
LV Creation host, time app-03, 2019-08-28 14:42:20 +0800
LV Status available
# open 0
LV Size 1.50 TiB
Current LE 393216
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:
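As an aside, a single LV can also be activated on its own instead of activating the whole VG:
lvchange -ay /dev/oradatavg/lv_oracle_data   # activate just this one LV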
After activation, the mount succeeds.
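Given the abnormal restart, a read-only filesystem check before mounting is a sensible precaution. A minimal sketch, assuming the LV holds an ext4 filesystem (for XFS, use xfs_repair -n instead):
fsck -n /dev/oradatavg/lv_oracle_data        # read-only check, changes nothing
mount /dev/oradatavg/lv_oracle_data /oradata
df -h /oradata                               # confirm the 1.5T volume is mounted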
Note: if the failed disk is instead replaced with a brand-new, empty disk, this procedure only restores the LVM metadata; the data that lived on the lost PV is gone, so the LVs in this VG must be reformatted with a new filesystem.
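In that replacement-disk scenario, pvcreate/vgcfgrestore bring back the LV structure but not its contents, so a new filesystem has to be created. A sketch, assuming XFS is wanted on the LV (this wipes whatever is left on it):
mkfs.xfs -f /dev/oradatavg/lv_oracle_data    # new empty filesystem; the old data is not recoverable
mount /dev/oradatavg/lv_oracle_data /oradata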