(7) Ceph "2 pgs inconsistent" failure

[root@node141 ~]# ceph health detail
HEALTH_ERR 2 scrub errors; Possible data damage: 2 pgs inconsistent
OSD_SCRUB_ERRORS 2 scrub errors
PG_DAMAGED Possible data damage: 2 pgs inconsistent
pg 3.3e is active+clean+inconsistent, acting [11,17,4]
pg 3.42 is active+clean+inconsistent, acting [17,6,0]
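
Before repairing anything, it helps to see which objects inside each PG actually failed scrub. A quick check (the name of pool id 3, the "3" in pg 3.3e, is not shown above, so look it up first; <pool-name> below is a placeholder):

[root@node141 ~]# ceph osd lspools
## Find the name of pool id 3, then:
[root@node141 ~]# rados list-inconsistent-pg <pool-name>
[root@node141 ~]# rados list-inconsistent-obj 3.3e --format=json-pretty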

Official troubleshooting guide:
https://ceph.com/geen-categorie/ceph-manually-repair-object/

The repair steps are as follows:
(1) Identify the abnormal PGs, find the OSDs in their acting sets, and perform the repair on the hosts that own those OSDs.
[root@node140 /]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF
-1       8.71826 root default
-2       3.26935     host node140
 0   hdd 0.54489         osd.0        up  1.00000 1.00000
 1   hdd 0.54489         osd.1        up  1.00000 1.00000
 2   hdd 0.54489         osd.2        up  1.00000 1.00000
 3   hdd 0.54489         osd.3        up  1.00000 1.00000
 4   hdd 0.54489         osd.4        up  1.00000 1.00000
 5   hdd 0.54489         osd.5        up  1.00000 1.00000
-3       3.26935     host node141
12   hdd 0.54489         osd.12       up  1.00000 1.00000
13   hdd 0.54489         osd.13       up  1.00000 1.00000
14   hdd 0.54489         osd.14       up  1.00000 1.00000
15   hdd 0.54489         osd.15     down  1.00000 1.00000
16   hdd 0.54489         osd.16       up  1.00000 1.00000
17   hdd 0.54489         osd.17       up  1.00000 1.00000
-4       2.17957     host node142
 6   hdd 0.54489         osd.6        up  1.00000 1.00000
 9   hdd 0.54489         osd.9        up  1.00000 1.00000
10   hdd 0.54489         osd.10       up  1.00000 1.00000
11   hdd 0.54489         osd.11       up  1.00000 1.00000
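
To confirm which OSDs an inconsistent PG maps to without reading the tree by hand, ceph pg map prints the up and acting sets (for pg 3.3e these are [11,17,4], matching the health output above):

[root@node140 /]# ceph pg map 3.3e
## Prints the osdmap epoch plus the up and acting OSD sets for the PG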

## This command also works (it shows which host owns the OSD)
[root@node140 /]# ceph osd find 11
{
    "osd": 11,
    "addrs": {
        "addrvec": [
            {
                "type": "v2",
                "addr": "10.10.202.142:6820",
                "nonce": 24423
            },
            {
                "type": "v1",
                "addr": "10.10.202.142:6821",
                "nonce": 24423
            }
        ]
    },
    "osd_fsid": "1e977e5f-f514-4eef-bd88-c3632d03b2c3",
    "host": "node142",
    "crush_location": {
        "host": "node142",
        "root": "default"
    }
}
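
If several PGs are damaged, the PG-to-OSD lookup can be scripted instead of done one at a time. A rough sketch, assuming the ceph health detail line format shown at the top of this post:

## Pull the inconsistent PG ids out of ceph health detail
[root@node140 /]# ceph health detail | awk '/inconsistent, acting/ {print $2}'
3.3e
3.42
## Then print the acting set of each one
[root@node140 /]# for pg in 3.3e 3.42; do ceph pg map $pg; done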

(2) The problem PGs map to osd.11 and osd.17. Log in to the host that owns each OSD and stop the OSD there, starting with osd.11 on node142:

[root@node142 ~]# systemctl stop ceph-osd@11
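
Stopping an OSD will eventually mark it out and trigger data movement. If you want to keep the cluster from rebalancing during the short repair window, an optional precaution is to set the noout flag before stopping the OSD and clear it once the OSD is back up:

[root@node142 ~]# ceph osd set noout
## ... stop / flush / start the OSD ...
[root@node142 ~]# ceph osd unset noout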

(3) Flush the journal to disk
[root@node142 ~]# ceph-osd -i 11 --flush-journal
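
Note that --flush-journal only applies to FileStore OSDs; BlueStore OSDs have no separate journal to flush, so this step can be skipped for them. You can check which backend the OSD uses, for example:

[root@node142 ~]# ceph osd metadata 11 | grep osd_objectstore
## The value will be "filestore" or "bluestore" depending on how the OSD was deployed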

(4) Start the OSD
[root@node142 ~]# systemctl start ceph-osd@11
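
Before issuing the repair, confirm the OSD has rejoined the cluster, for example:

[root@node142 ~]# systemctl status ceph-osd@11
[root@node142 ~]# ceph osd tree | grep osd.11
## osd.11 should report "up" again before you run ceph pg repair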

(5) Repair the PG
[root@node142 ~]# ceph pg repair 3.3e
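
The repair runs asynchronously; the PG goes through repair/deep-scrub states before returning to active+clean. You can watch it with, for example:

[root@node142 ~]# ceph -w | grep 3.3e
## or query the PG state directly
[root@node142 ~]# ceph pg 3.3e query | grep state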

### Repeat the same steps for osd.17 (pg 3.42), as shown below ###
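
For the second damaged PG (3.42, acting set [17,6,0]), the same stop/flush/start/repair sequence is run on node141, which owns osd.17 according to the tree above:

[root@node141 ~]# systemctl stop ceph-osd@17
[root@node141 ~]# ceph-osd -i 17 --flush-journal
[root@node141 ~]# systemctl start ceph-osd@17
[root@node141 ~]# ceph pg repair 3.42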
(6) Check the cluster status
[root@node141 ~]# ceph health detail
HEALTH_OK
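
As a final check, you can trigger a deep scrub on the two repaired PGs and confirm they come back clean:

[root@node141 ~]# ceph pg deep-scrub 3.3e
[root@node141 ~]# ceph pg deep-scrub 3.42
[root@node141 ~]# ceph health detail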

Origin: blog.51cto.com/7603402/2434815