start_udev does not write the disk header

This is a record of a case we encountered.

The environment is an Oracle 11.2.0.4 RAC on RHEL 6.5, with the shared disks bound as raw devices through udev, as follows:

[root@rac1 opt]# ll /dev/sd*
brw-rw----. 1 root disk 8,  0 Dec  2 20:26 /dev/sda
brw-rw----. 1 root disk 8,  1 Dec  2 20:26 /dev/sda1
brw-rw----. 1 root disk 8,  2 Dec  2 20:26 /dev/sda2
brw-rw----. 1 root disk 8, 16 Dec  2 20:26 /dev/sdb
brw-rw----. 1 root disk 8, 17 Dec  2 20:26 /dev/sdb1
brw-rw----. 1 root disk 8, 32 Dec  2 20:26 /dev/sdc
brw-rw----. 1 root disk 8, 33 Dec  2 20:26 /dev/sdc1
brw-rw----. 1 root disk 8, 48 Dec  2 20:26 /dev/sdd
brw-rw----. 1 root disk 8, 49 Dec  2 20:26 /dev/sdd1
brw-rw----. 1 root disk 8, 64 Dec  2 20:26 /dev/sde
brw-rw----. 1 root disk 8, 65 Dec  2 20:26 /dev/sde1  
[root@rac1 opt]# cat /etc/udev/rules.d/60-raw.rules
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
ACTION=="add",KERNEL=="/dev/sdb1",RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="17",RUN+="/bin/raw /dev/raw/raw1 %M %m"
ACTION=="add",KERNEL=="/dev/sdc1",RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="33",RUN+="/bin/raw /dev/raw/raw2 %M %m"
ACTION=="add",KERNEL=="/dev/sdd1",RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="49",RUN+="/bin/raw /dev/raw/raw3 %M %m"
ACTION=="add",KERNEL=="/dev/sde1",RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="65",RUN+="/bin/raw /dev/raw/raw4 %M %m"
KERNEL=="raw[1-4]",OWNER="grid",GROUP="asmadmin",MODE="660"
[root@rac1 opt]# ll /dev/raw
total 0
crw-rw----. 1 grid asmadmin 162, 1 Dec  2 21:09 raw1
crw-rw----. 1 grid asmadmin 162, 2 Dec  2 21:09 raw2
crw-rw----. 1 grid asmadmin 162, 3 Dec  2 21:09 raw3
crw-rw----. 1 grid asmadmin 162, 4 Dec  2 20:26 raw4
crw-rw----. 1 root disk     162, 0 Dec  2 20:26 rawctl

One day, disk I/O problems caused the RAC to restart frequently. While the storage was being repaired, the storage links were switched, which changed the device names seen by the OS and broke the udev bindings.

We therefore changed the udev rules to bind the disks by UUID (scsi_id) instead of by device name, as follows:

[root@rac1 ~]# for i in `cat /proc/partitions | awk '{print $4}' |grep sd | grep [a-z]$`; do echo "### $i: `scsi_id -g -u  -d  /dev/$i`"; done
### sda: 1ATA_VBOX_HARDDISK_VB85d0d4ba-e17b3dda
### sdb: 1ATA_VBOX_HARDDISK_VBc325912a-0addf096
### sdc: 1ATA_VBOX_HARDDISK_VB03a27735-42af5dea
### sdd: 1ATA_VBOX_HARDDISK_VB207dd3a2-a2f610c4
### sde: 1ATA_VBOX_HARDDISK_VBbc4b578d-a8e62d78

vi /etc/udev/rules.d/99-asm-oracle.rules
ACTION=="add",BUS=="scsi", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBc325912a-0addf096", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add",BUS=="scsi", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB03a27735-42af5dea", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add",BUS=="scsi", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VB207dd3a2-a2f610c4", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add",BUS=="scsi", KERNEL=="sd*", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="1ATA_VBOX_HARDDISK_VBbc4b578d-a8e62d78", RUN+="/bin/raw /dev/raw/raw4 %N"
KERNEL=="raw[1-4]",OWNER="grid",GROUP="asmadmin",MODE="0660"
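Note for context: udev KERNEL matches use shell-style glob patterns, so a pattern like sd* in the rules above matches whole-disk names (sdb) as well as partition names (sdb1). A minimal sketch of that glob behavior (illustrative only, run outside udev):

```shell
# udev's KERNEL=="sd*" match is a shell-style glob; it matches both
# whole disks (sdb) and partitions (sdb1).
for dev in sdb sdb1 sdc sdc1; do
  case "$dev" in
    sd*) echo "$dev matches sd*" ;;
  esac
done
```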

We deleted the old binding file /etc/udev/rules.d/60-raw.rules, stopped CRS on all nodes, and restarted udev:

[root@rac1 ~]# /u01/app/11.2.0/grid/bin/crsctl stop crs
[root@rac2 ~]# /u01/app/11.2.0/grid/bin/crsctl stop crs
[root@rac1 ~]# start_udev

When CRS was started again, it could not recognize the voting disks; the CRS alert log reported the following errors:

[root@rac1 rac1]# tail -f alertrac1.log 

[ohasd(5337)]CRS-2767:Resource state recovery not attempted for 'ora.diskmon' as its target state is OFFLINE
2019-12-02 21:20:40.230: 
[ohasd(5337)]CRS-2769:Unable to failover resource 'ora.diskmon'.
2019-12-02 21:20:42.580: 
[cssd(6272)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/app/11.2.0/grid/log/rac1/cssd/ocssd.log
2019-12-02 21:20:57.605: 
[cssd(6272)]CRS-1714:Unable to discover any voting files, retrying discovery in 15 seconds; Details at (:CSSNM00070:) in /u01/app/11.2.0/grid/log/rac1/cssd/ocssd.log

ocssd.log showed the following:

[root@rac1 ~]# tail -f /u01/app/11.2.0/grid/log/rac1/cssd/ocssd.log

2019-12-02 21:21:27.636: [   SKGFD][411584256]Execute glob on the string /dev/raw/*
2019-12-02 21:21:27.636: [   SKGFD][411584256]running stat on disk:/dev/raw/raw1
2019-12-02 21:21:27.636: [   SKGFD][411584256]running stat on disk:/dev/raw/raw3
2019-12-02 21:21:27.636: [   SKGFD][411584256]running stat on disk:/dev/raw/raw2
2019-12-02 21:21:27.636: [   SKGFD][411584256]running stat on disk:/dev/raw/raw4
2019-12-02 21:21:27.636: [   SKGFD][411584256]running stat on disk:/dev/raw/rawctl
2019-12-02 21:21:27.636: [   SKGFD][411584256]Fetching UFS disk :/dev/raw/rawctl:

2019-12-02 21:21:27.637: [   SKGFD][411584256]Fetching UFS disk :/dev/raw/raw4:

2019-12-02 21:21:27.637: [   SKGFD][411584256]Fetching UFS disk :/dev/raw/raw2:

2019-12-02 21:21:27.637: [   SKGFD][411584256]Fetching UFS disk :/dev/raw/raw3:

2019-12-02 21:21:27.637: [   SKGFD][411584256]Fetching UFS disk :/dev/raw/raw1:

2019-12-02 21:21:27.637: [   SKGFD][411584256]OSS discovery with ::

2019-12-02 21:21:27.638: [   SKGFD][411584256]Handle 0x7f8704140a50 from lib :UFS:: for disk :/dev/raw/raw4:

2019-12-02 21:21:27.638: [   SKGFD][411584256]Handle 0x7f8704138d60 from lib :UFS:: for disk :/dev/raw/raw2:

2019-12-02 21:21:27.638: [   SKGFD][411584256]Handle 0x7f8704139590 from lib :UFS:: for disk :/dev/raw/raw3:

2019-12-02 21:21:27.638: [   SKGFD][411584256]Handle 0x7f8704141e40 from lib :UFS:: for disk :/dev/raw/raw1:

2019-12-02 21:21:27.639: [   SKGFD][411584256]Lib :UFS:: closing handle 0x7f8704140a50 for disk :/dev/raw/raw4:

2019-12-02 21:21:27.639: [   SKGFD][411584256]Lib :UFS:: closing handle 0x7f8704138d60 for disk :/dev/raw/raw2:

2019-12-02 21:21:27.639: [   SKGFD][411584256]Lib :UFS:: closing handle 0x7f8704139590 for disk :/dev/raw/raw3:

2019-12-02 21:21:27.639: [   SKGFD][411584256]Lib :UFS:: closing handle 0x7f8704141e40 for disk :/dev/raw/raw1:

2019-12-02 21:21:27.639: [    CSSD][411584256]clssnmvDiskVerify: Successful discovery of 0 disks
2019-12-02 21:21:27.639: [    CSSD][411584256]clssnmCompleteInitVFDiscovery: Completing initial voting file discovery
2019-12-02 21:21:27.639: [    CSSD][411584256]clssnmvFindInitialConfigs: No voting files found
2019-12-02 21:21:27.639: [    CSSD][411584256](:CSSNM00070:)clssnmCompleteInitVFDiscovery: Voting file not found. Retrying discovery in 15 seconds
2019-12-02 21:21:27.766: [    CSSD][414275328]clssscSelect: cookie accept request 0x7f8710083a90
2019-12-02 21:21:27.766: [    CSSD][414275328]clssscevtypSHRCON: getting client with cmproc 0x7f8710083a90
2019-12-02 21:21:27.766: [    CSSD][414275328]clssgmRegisterClient: proc(5/0x7f8710083a90), client(40/0x7f8710063d30)
2019-12-02 21:21:27.766: [    CSSD][414275328]clssgmExecuteClientRequest(): type(6) size(684) only connect and exit messages are allowed before lease acquisition proc(0x7f8710083a90) client(0x7f8710063d30)
2019-12-02 21:21:27.766: [    CSSD][414275328]clssgmDiscEndpcl: gipcDestroy 0xdcc
2019-12-02 21:21:28.521: [    CSSD][414275328]clssscSelect: cookie accept request 0x7f871006f0b0
2019-12-02 21:21:28.521: [    CSSD][414275328]clssscevtypSHRCON: getting client with cmproc 0x7f871006f0b0
2019-12-02 21:21:28.521: [    CSSD][414275328]clssgmRegisterClient: proc(3/0x7f871006f0b0), client(42/0x7f8710063d30)
2019-12-02 21:21:28.521: [    CSSD][414275328]clssgmExecuteClientRequest(): type(6) size(684) only connect and exit messages are allowed before lease acquisition proc(0x7f871006f0b0) client(0x7f8710063d30)
2019-12-02 21:21:28.521: [    CSSD][414275328]clssgmDiscEndpcl: gipcDestroy 0xde2
2019-12-02 21:21:28.767: [    CSSD][414275328]clssscSelect: cookie accept request 0x7f8710083a90
2019-12-02 21:21:28.767: [    CSSD][414275328]clssscevtypSHRCON: getting client with cmproc 0x7f8710083a90
2019-12-02 21:21:28.767: [    CSSD][414275328]clssgmRegisterClient: proc(5/0x7f8710083a90), client(41/0x7f8710096260)
2019-12-02 21:21:28.767: [    CSSD][414275328]clssgmExecuteClientRequest(): type(6) size(684) only connect and exit messages are allowed before lease acquisition proc(0x7f8710083a90) client(0x7f8710096260)
2019-12-02 21:21:28.767: [    CSSD][414275328]clssgmDiscEndpcl: gipcDestroy 0xdf8

 

rac1:/home/grid$ kfod asm_diskstring='/dev/raw/*' disks=all
--------------------------------------------------------------------------------
 Disk          Size Path                                     User     Group   
================================================================================
   1:       1019 Mb /dev/raw/raw1                            grid     asmadmin
   2:       1019 Mb /dev/raw/raw2                            grid     asmadmin
   3:       1019 Mb /dev/raw/raw3                            grid     asmadmin
   4:       8189 Mb /dev/raw/raw4                            grid     asmadmin
KFOD-00301: Unable to contact Cluster Synchronization Services (CSS). Return code 2 from kgxgncin.
KFOD-00311: Error scanning device /dev/raw/rawctl
ORA-15025: could not open disk "/dev/raw/rawctl"
Linux-x86_64 Error: 13: Permission denied
Additional information: 42

 

rac1:/home/grid$ kfed read /dev/raw/raw1
kfbh.endian:                          0 ; 0x000: 0x00
kfbh.hard:                            0 ; 0x001: 0x00
kfbh.type:                            0 ; 0x002: KFBTYP_INVALID
kfbh.datfmt:                          0 ; 0x003: 0x00
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       0 ; 0x008: file=0
kfbh.check:                           0 ; 0x00c: 0x00000000
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
7FCB36A16400 00000000 00000000 00000000 00000000  [................]
        Repeat 26 times
7FCB36A165B0 00000000 00000000 BB43868B 01000000  [..........C.....]
7FCB36A165C0 FE830001 003F813F DDC30000 0000001F  [....?.?.........]
7FCB36A165D0 00000000 00000000 00000000 00000000  [................]
        Repeat 1 times
7FCB36A165F0 00000000 00000000 00000000 AA550000  [..............U.]
7FCB36A16600 00000000 00000000 00000000 00000000  [................]
  Repeat 223 times
KFED-00322: Invalid content encountered during block traversal: [kfbtTraverseBlock][Invalid OSM block type][][0]

 

Reverting udev from the UUID-based binding back to the original device-name binding and running start_udev brought the disks back to normal, and the cluster started successfully. (The kfed dump above, with the 0xAA55 signature near the end of the first 512 bytes, looks like an MBR partition table rather than a wiped ASM header; this suggests the UUID rules had bound the raw devices to the whole disks instead of the partitions, and that the ASM headers themselves were never overwritten.)

[root@rac1 opt]# cat /etc/udev/rules.d/60-raw.rules 
# Enter raw device bindings here.
#
# An example would be:
#   ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
#   ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.

ACTION=="add",KERNEL=="/dev/sdb1",RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="17",RUN+="/bin/raw /dev/raw/raw1 %M %m"
ACTION=="add",KERNEL=="/dev/sdc1",RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="33",RUN+="/bin/raw /dev/raw/raw2 %M %m"
ACTION=="add",KERNEL=="/dev/sdd1",RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="49",RUN+="/bin/raw /dev/raw/raw3 %M %m"
ACTION=="add",KERNEL=="/dev/sde1",RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="65",RUN+="/bin/raw /dev/raw/raw4 %M %m"
KERNEL=="raw[1-4]",OWNER="grid",GROUP="asmadmin",MODE="660"

 

rac1:/home/grid$ kfed read /dev/raw/raw1
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  1914149503 ; 0x00c: 0x72179a7f
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr:         ORCLDISK ; 0x000: length=8
kfdhdb.driver.reserved[0]:            0 ; 0x008: 0x00000000
kfdhdb.driver.reserved[1]:            0 ; 0x00c: 0x00000000
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                186646528 ; 0x020: 0x0b200000
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        2 ; 0x026: KFDGTP_NORMAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:                CRS_0000 ; 0x028: length=8
kfdhdb.grpname:                     CRS ; 0x048: length=3
kfdhdb.fgname:                 CRS_0000 ; 0x068: length=8
kfdhdb.capname:                         ; 0x088: length=0
kfdhdb.crestmp.hi:             33091668 ; 0x0a8: HOUR=0x14 DAYS=0x2 MNTH=0xc YEAR=0x7e3
kfdhdb.crestmp.lo:           3665632256 ; 0x0ac: USEC=0x0 MSEC=0x347 SECS=0x27 MINS=0x36
kfdhdb.mntstmp.hi:             33091668 ; 0x0b0: HOUR=0x14 DAYS=0x2 MNTH=0xc YEAR=0x7e3
kfdhdb.mntstmp.lo:           3884250112 ; 0x0b4: USEC=0x0 MSEC=0x13d SECS=0x38 MINS=0x39
kfdhdb.secsize:                     512 ; 0x0b8: 0x0200
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.mfact:                    113792 ; 0x0c0: 0x0001bc80
kfdhdb.dsksize:                    1019 ; 0x0c4: 0x000003fb
kfdhdb.pmcnt:                         2 ; 0x0c8: 0x00000002
kfdhdb.fstlocn:                       1 ; 0x0cc: 0x00000001
kfdhdb.altlocn:                       2 ; 0x0d0: 0x00000002
kfdhdb.f1b1locn:                      2 ; 0x0d4: 0x00000002
kfdhdb.redomirrors[0]:                0 ; 0x0d8: 0x0000
kfdhdb.redomirrors[1]:                0 ; 0x0da: 0x0000
kfdhdb.redomirrors[2]:                0 ; 0x0dc: 0x0000
kfdhdb.redomirrors[3]:                0 ; 0x0de: 0x0000
kfdhdb.dbcompat:              168820736 ; 0x0e0: 0x0a100000
kfdhdb.grpstmp.hi:             33091668 ; 0x0e4: HOUR=0x14 DAYS=0x2 MNTH=0xc YEAR=0x7e3
kfdhdb.grpstmp.lo:           3665294336 ; 0x0e8: USEC=0x0 MSEC=0x1fd SECS=0x27 MINS=0x36
kfdhdb.vfstart:                     256 ; 0x0ec: 0x00000100
kfdhdb.vfend:                       288 ; 0x0f0: 0x00000120
kfdhdb.spfile:                       59 ; 0x0f4: 0x0000003b
kfdhdb.spfflg:                        1 ; 0x0f8: 0x00000001
kfdhdb.ub4spare[0]:                   0 ; 0x0fc: 0x00000000
kfdhdb.ub4spare[1]:                   0 ; 0x100: 0x00000000
kfdhdb.ub4spare[2]:                   0 ; 0x104: 0x00000000
kfdhdb.ub4spare[3]:                   0 ; 0x108: 0x00000000
kfdhdb.ub4spare[4]:                   0 ; 0x10c: 0x00000000
kfdhdb.ub4spare[5]:                   0 ; 0x110: 0x00000000
kfdhdb.ub4spare[6]:                   0 ; 0x114: 0x00000000
kfdhdb.ub4spare[7]:                   0 ; 0x118: 0x00000000
kfdhdb.ub4spare[8]:                   0 ; 0x11c: 0x00000000
kfdhdb.ub4spare[9]:                   0 ; 0x120: 0x00000000
kfdhdb.ub4spare[10]:                  0 ; 0x124: 0x00000000
kfdhdb.ub4spare[11]:                  0 ; 0x128: 0x00000000
kfdhdb.ub4spare[12]:                  0 ; 0x12c: 0x00000000
kfdhdb.ub4spare[13]:                  0 ; 0x130: 0x00000000
kfdhdb.ub4spare[14]:                  0 ; 0x134: 0x00000000
kfdhdb.ub4spare[15]:                  0 ; 0x138: 0x00000000
kfdhdb.ub4spare[16]:                  0 ; 0x13c: 0x00000000
kfdhdb.ub4spare[17]:                  0 ; 0x140: 0x00000000
kfdhdb.ub4spare[18]:                  0 ; 0x144: 0x00000000
kfdhdb.ub4spare[19]:                  0 ; 0x148: 0x00000000
kfdhdb.ub4spare[20]:                  0 ; 0x14c: 0x00000000
kfdhdb.ub4spare[21]:                  0 ; 0x150: 0x00000000
kfdhdb.ub4spare[22]:                  0 ; 0x154: 0x00000000
kfdhdb.ub4spare[23]:                  0 ; 0x158: 0x00000000
kfdhdb.ub4spare[24]:                  0 ; 0x15c: 0x00000000
kfdhdb.ub4spare[25]:                  0 ; 0x160: 0x00000000
kfdhdb.ub4spare[26]:                  0 ; 0x164: 0x00000000
kfdhdb.ub4spare[27]:                  0 ; 0x168: 0x00000000
kfdhdb.ub4spare[28]:                  0 ; 0x16c: 0x00000000
kfdhdb.ub4spare[29]:                  0 ; 0x170: 0x00000000
kfdhdb.ub4spare[30]:                  0 ; 0x174: 0x00000000
kfdhdb.ub4spare[31]:                  0 ; 0x178: 0x00000000
kfdhdb.ub4spare[32]:                  0 ; 0x17c: 0x00000000
kfdhdb.ub4spare[33]:                  0 ; 0x180: 0x00000000
kfdhdb.ub4spare[34]:                  0 ; 0x184: 0x00000000
kfdhdb.ub4spare[35]:                  0 ; 0x188: 0x00000000
kfdhdb.ub4spare[36]:                  0 ; 0x18c: 0x00000000
kfdhdb.ub4spare[37]:                  0 ; 0x190: 0x00000000
kfdhdb.ub4spare[38]:                  0 ; 0x194: 0x00000000
kfdhdb.ub4spare[39]:                  0 ; 0x198: 0x00000000
kfdhdb.ub4spare[40]:                  0 ; 0x19c: 0x00000000
kfdhdb.ub4spare[41]:                  0 ; 0x1a0: 0x00000000
kfdhdb.ub4spare[42]:                  0 ; 0x1a4: 0x00000000
kfdhdb.ub4spare[43]:                  0 ; 0x1a8: 0x00000000
kfdhdb.ub4spare[44]:                  0 ; 0x1ac: 0x00000000
kfdhdb.ub4spare[45]:                  0 ; 0x1b0: 0x00000000
kfdhdb.ub4spare[46]:                  0 ; 0x1b4: 0x00000000
kfdhdb.ub4spare[47]:                  0 ; 0x1b8: 0x00000000
kfdhdb.ub4spare[48]:                  0 ; 0x1bc: 0x00000000
kfdhdb.ub4spare[49]:                  0 ; 0x1c0: 0x00000000
kfdhdb.ub4spare[50]:                  0 ; 0x1c4: 0x00000000
kfdhdb.ub4spare[51]:                  0 ; 0x1c8: 0x00000000
kfdhdb.ub4spare[52]:                  0 ; 0x1cc: 0x00000000
kfdhdb.ub4spare[53]:                  0 ; 0x1d0: 0x00000000
kfdhdb.acdb.aba.seq:                  0 ; 0x1d4: 0x00000000
kfdhdb.acdb.aba.blk:                  0 ; 0x1d8: 0x00000000
kfdhdb.acdb.ents:                     0 ; 0x1dc: 0x0000
kfdhdb.acdb.ub2spare:                 0 ; 0x1de: 0x0000

 

This raises a question: after the udev binding policy was changed (from device-name binding to UUID binding), did start_udev overwrite the disk header?

This environment has four shared disks: raw1 through raw3 belong to the cluster's CRS disk group, while raw4 is not yet used by the cluster.

1. With the dd command below, we dumped the first 4 KB of raw1 under each binding mode (device-name binding and UUID binding):

rac1:/home/grid$ dd if=/dev/raw/raw1 of=/tmp/raw1.txt bs=1k count=4

With device-name binding, the first 4 KB of raw1 clearly contains the CRS disk group information.

With UUID binding, the first 4 KB of raw1 shows the disk group information is gone.
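One quick way to tell whether a header dump still carries the disk group information is to look for the ASM provision string "ORCLDISK" (kfdhdb.driver.provstr in the kfed output above). A minimal sketch, using a stand-in file rather than a real dd dump:

```shell
# Hypothetical check on a 4 KB header dump; /tmp/hdr_demo.bin is a stand-in
# file, not a real device dump. An ASM disk header carries the provision
# string "ORCLDISK", so its presence distinguishes the two cases above.
printf 'ORCLDISK' > /tmp/hdr_demo.bin
if grep -q ORCLDISK /tmp/hdr_demo.bin; then
  echo "ASM header marker present"
else
  echo "no ASM header marker"
fi
```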

2. Since raw4 is not yet used by the RAC, we first zeroed its disk header with dd, dumped its first 4 KB, then ran start_udev and dumped the first 4 KB again for comparison:

# dd if=/dev/zero of=/dev/raw/raw4 bs=1M count=30
$ dd if=/dev/raw/raw4 of=/tmp/raw4.txt bs=1k count=4

The first 4 KB of raw4 before start_udev: all zeros, as expected.

The first 4 KB of raw4 after start_udev: still all zeros; start_udev did not write the disk header.
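The before/after comparison in step 2 can be sketched as follows, using a stand-in file instead of /dev/raw/raw4 (on the real system, start_udev runs between the two dumps):

```shell
# Sketch of the step-2 comparison; /tmp/raw4_demo.img stands in for the device.
dd if=/dev/zero of=/tmp/raw4_demo.img bs=1k count=4 2>/dev/null    # zeroed "disk header"
dd if=/tmp/raw4_demo.img of=/tmp/raw4_before.bin bs=1k count=4 2>/dev/null
# ... on the real system, start_udev would run here ...
dd if=/tmp/raw4_demo.img of=/tmp/raw4_after.bin bs=1k count=4 2>/dev/null
# Byte-for-byte comparison of the two dumps:
cmp -s /tmp/raw4_before.bin /tmp/raw4_after.bin && echo "identical" || echo "differs"
```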

 


Origin www.cnblogs.com/zylong-sys/p/11986277.html