Replacing the CRS Disk Group in Oracle 11gR2 RAC

  1. Disk (PV) preparation

    In the production environment, LUNs are carved from storage in advance and presented to both nodes of the RAC system (node1 and node2).

    The newly added disks are hdisk14 through hdisk24.

1.1 Disk usage plan

| Disk    | Size | Storage array | Planned use     | Failure group |
| ------- | ---- | ------------- | --------------- | ------------- |
| hdisk14 | 500G | NEW           | DATA disk group | DATA_0000     |
| hdisk15 | 500G | NEW           | DATA disk group | DATA_0001     |
| hdisk16 | 500G | NEW           | DATA disk group | DATA_0002     |
| hdisk17 | 500G | NEW           | DATA disk group | DATA_0003     |
| hdisk18 | 500G | NEW           | DATA disk group | DATA_0004     |
| hdisk19 | 50G  | NEW           | TOCR disk group |               |
| hdisk20 | 50G  | NEW           | NCRS disk group |               |
| hdisk21 | 50G  | NEW           | NCRS disk group |               |
| hdisk22 | 200G |               | archive logs    |               |
| hdisk23 | 200G |               | archive logs    |               |
| hdisk24 | 50G  | OLD           | NCRS disk group |               |

1.2 Check the disk reserve attributes (on both nodes)

    lsattr -El hdisk14 | grep reserve

    lsattr -El hdisk15 | grep reserve

    lsattr -El hdisk16 | grep reserve

    lsattr -El hdisk17 | grep reserve

    lsattr -El hdisk18 | grep reserve

    lsattr -El hdisk19 | grep reserve

    lsattr -El hdisk20 | grep reserve

    lsattr -El hdisk21 | grep reserve

    lsattr -El hdisk24 | grep reserve

1.3 Change the disk attributes to allow concurrent access (on both nodes)

    chdev -l hdisk14 -a reserve_policy=no_reserve

    chdev -l hdisk15 -a reserve_policy=no_reserve

    chdev -l hdisk16 -a reserve_policy=no_reserve

    chdev -l hdisk17 -a reserve_policy=no_reserve

    chdev -l hdisk18 -a reserve_policy=no_reserve

    chdev -l hdisk19 -a reserve_policy=no_reserve

    chdev -l hdisk20 -a reserve_policy=no_reserve

    chdev -l hdisk21 -a reserve_policy=no_reserve

    chdev -l hdisk24 -a reserve_policy=no_reserve

    chdev -l hdisk14 -a reserve_lock=no

    chdev -l hdisk15 -a reserve_lock=no

    chdev -l hdisk16 -a reserve_lock=no

    chdev -l hdisk17 -a reserve_lock=no

    chdev -l hdisk18 -a reserve_lock=no

    chdev -l hdisk19 -a reserve_lock=no

    chdev -l hdisk20 -a reserve_lock=no

    chdev -l hdisk21 -a reserve_lock=no
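The repeated chdev calls above can be generated with a simple loop. This is a dry-run sketch that only prints the commands (remove the `echo` to execute on the real AIX hosts); the disk numbers come from the plan in section 1.1, and the document issues `reserve_policy` for hdisk14-21 and hdisk24 but `reserve_lock` only for hdisk14-21:

```shell
# Dry run: print the chdev commands instead of executing them.
for d in 14 15 16 17 18 19 20 21 24; do
  echo chdev -l "hdisk$d" -a reserve_policy=no_reserve
done
for d in 14 15 16 17 18 19 20 21; do
  echo chdev -l "hdisk$d" -a reserve_lock=no
done
```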

1.4 Change the owner and permissions of the character devices (on both nodes)

    chown grid:dba /dev/rhdisk13

    chown grid:dba /dev/rhdisk14

    chown grid:dba /dev/rhdisk15

    chown grid:dba /dev/rhdisk16

    chown grid:dba /dev/rhdisk17

    chown grid:dba /dev/rhdisk18

    chown grid:dba /dev/rhdisk19

    chown grid:dba /dev/rhdisk20

    chown grid:dba /dev/rhdisk21

    chown grid:dba /dev/rhdisk24

    chmod 660 /dev/rhdisk13

    chmod 660 /dev/rhdisk14

    chmod 660 /dev/rhdisk15

    chmod 660 /dev/rhdisk16

    chmod 660 /dev/rhdisk17

    chmod 660 /dev/rhdisk18

    chmod 660 /dev/rhdisk19

    chmod 660 /dev/rhdisk20

    chmod 660 /dev/rhdisk21

    chmod 660 /dev/rhdisk24
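The same disk list drives both the chown and chmod runs, so they can also be generated in one dry-run loop (a sketch; again, drop the `echo` to apply for real):

```shell
# Dry run: print the ownership and permission commands for each raw device.
for d in 13 14 15 16 17 18 19 20 21 24; do
  echo chown grid:dba "/dev/rhdisk$d"
  echo chmod 660 "/dev/rhdisk$d"
done
```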

1.5 Verify the disk settings

    ls -l /dev/rhdisk*
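Each line of the `ls -l` output should show a character device owned by grid:dba with mode 660. A pattern check like the following can be applied to the output (a sketch; the sample line is illustrative, not captured from the real system -- on the cluster nodes, feed the actual `ls -l` lines through the same `case`):

```shell
# Check one ls -l line for the expected mode and ownership.
line='crw-rw----    1 grid     dba          18,  5 Jul 01 10:00 /dev/rhdisk14'
case "$line" in
  crw-rw----*" grid "*" dba "*) echo OK ;;
  *) echo "CHECK: $line" ;;
esac
```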

  2. Create the ASM disk groups (run on node 1 only)

    Create the ASM disk groups (as the grid user):
    [grid]$ asmca

    Enter the disk group name, choose external redundancy, then select the member disks.

  3. Enable archive logging for the database

3.1 Enable archiving from one node (as the oracle user)

    sqlplus / as sysdba

    create pfile='/home/oracle/racdbinit.ora' from spfile;

    alter system set log_archive_dest_1='location=/arch1' sid='racdb1';

    alter system set log_archive_dest_1='location=/arch2' sid='racdb2';
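The two per-instance ALTER SYSTEM statements follow one pattern, so they can be generated rather than typed (a dry-run sketch; the instance names racdb1/racdb2 and paths /arch1, /arch2 are taken from the statements above):

```shell
# Print one archive-destination statement per instance.
for i in 1 2; do
  printf "alter system set log_archive_dest_1='location=/arch%d' sid='racdb%d';\n" "$i" "$i"
done
```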

3.2 Stop all database instances

    oracle:

    srvctl stop database -d racdb

3.3 Start one instance in mount state

    sqlplus / as sysdba

    startup mount;

3.4 Enable archive logging:

    alter database archivelog;

3.5 Stop all database instances

    oracle:

    srvctl stop database -d racdb

3.6 Start all database instances

    oracle:

    srvctl start database -d racdb

  4. Back up the database

    The host engineer exports /arch1 on node 1 over NFS and mounts it at /arch1 on node 2, using the following mount command on node 2:

    mount -v nfs -o rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,proto=tcp 192.1.2.51:/arch1 /arch1

    Log on to node 2 as the oracle user and back up the database with RMAN, as follows:

    rman target /

    backup database format '/arch2/rman/racdb_%U';

  5. Stop the database

    Stop all database instances as the oracle user:

    srvctl stop database -d racdb

  6. Replace the CRS disk group

6.1 Check the current cluster status

[grid@node1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH1.dg
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.CRS1.dg
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.DATA1.dg
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.NCRS.dg
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.asm
               ONLINE  ONLINE       node1                    Started
               ONLINE  ONLINE       node2                    Started
ora.gsd
               OFFLINE OFFLINE      node1
               OFFLINE OFFLINE      node2
ora.net1.network
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.ons
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.registry.acfs
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node1
ora.cvu
      1        ONLINE  ONLINE       node1
ora.node1.vip
      1        ONLINE  ONLINE       node1
ora.node2.vip
      1        ONLINE  ONLINE       node2
ora.oc4j
      1        OFFLINE OFFLINE
ora.racdb.db
      1        ONLINE  ONLINE       node2                    Open
      2        ONLINE  ONLINE       node1                    Open
ora.scan1.vip
      1        ONLINE  ONLINE       node1

6.2 Add a mirror OCR disk group

    Run as the root user on node 1:

    cd /app/grid/11.2.0/grid/bin

    (/app/grid/11.2.0/grid/bin is the ORACLE_HOME of the grid user.)

    ./ocrconfig -add +TOCR

    ./ocrcheck        # check the OCR status

    The session is recorded below:

# ./ocrconfig -add +TOCR
# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3044
         Available space (kbytes) :     259076
         ID                       : 1255075770
         Device/File Name         :       +ORC
                                    Device/File integrity check succeeded
         Device/File Name         :      +TOCR
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded
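11gR2 allows up to five OCR locations, and `ocrcheck` prints one "Device/File Name" line per configured location. A quick way to count them from saved ocrcheck output (a sketch; the sample text is trimmed from the session above):

```shell
# Count configured OCR locations in captured ocrcheck output.
ocr_out='Device/File Name         :       +ORC
Device/File integrity check succeeded
Device/File Name         :      +TOCR
Device/File integrity check succeeded
Device/File not configured'
configured=$(printf '%s\n' "$ocr_out" | grep -c 'Device/File Name')
echo "$configured configured OCR location(s)"
```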

6.3 Replace the original OCR disk group

    # ./ocrconfig -replace +ORC -replacement +NCRS
# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3044
         Available space (kbytes) :     259076
         ID                       : 1255075770
         Device/File Name         :      +NCRS
                                    Device/File integrity check succeeded
         Device/File Name         :      +TOCR
                                    Device/File integrity check succeeded
                                    Device/File not configured
                                    Device/File not configured
                                    Device/File not configured
         Cluster registry integrity check succeeded
         Logical corruption check succeeded

  7. Migrate the voting disks

    Log on to one node as the grid user.

7.1 Check where the voting disks are stored

    [grid@node1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name  Disk group
--  -----    -----------------                ---------  ----------
 1. ONLINE   a2d3e9c8b0094fcabfeee701fe3594a5 (ORC)      [ORC]
Located 3 voting disk(s)

7.2 Move the voting disks to the new disk group

    [grid@node1 ~]$ crsctl replace votedisk +NCRS

    [grid@node1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name   Disk group
--  -----    -----------------                ---------   ----------
 1. ONLINE   a2d3e9c8b0094fcabfeee701fe3594a5 (ORCL:CRS1) [CRS1]
 2. ONLINE   973e54e8c5c94f0fbf4b746820c14005 (ORCL:CRS2) [CRS1]
 3. ONLINE   197c715135a94f4abf545095b9c8a186 (ORCL:CRS3) [CRS1]
Located 3 voting disk(s)

  8. Migrate the ASM instance spfile

Log on to node 1 as the grid user.

8.1 Log in to the ASM instance

    [grid@node1 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.3.0 Production on Tue Jul 1 11:07:49 2014
Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options

SQL>

8.2 Check where the spfile is stored

    show parameter spfile

SQL> show parameter spfile

NAME      TYPE      VALUE
--------- --------- --------------------------------------------------------
spfile    string    +ORC/rac-cluster/asmparameterfile/registry.253.801158513

8.3 Create a pfile from the current spfile

    SQL>create pfile='/home/grid/asminit.ora' from spfile='+ORC/rac-cluster/asmparameterfile/registry.253.801158513';

8.4 Create a new spfile in the new disk group from the pfile

    SQL>create spfile='+NCRS' from pfile='/home/grid/asminit.ora';

  9. Restart the cluster

Restart CRS as the root user, on both nodes:

    /u01/app/11.2.0/grid/bin/crsctl stop crs

    /u01/app/11.2.0/grid/bin/crsctl start crs

[root@node2 ~]# /u01/app/11.2.0/grid/bin/crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node2'
CRS-2673: Attempting to stop 'ora.crsd' on 'node2'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'node2'
CRS-2673: Attempting to stop 'ora.CRS1.dg' on 'node2'
CRS-2673: Attempting to stop 'ora.NCRS.dg' on 'node2'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'node2'
CRS-2673: Attempting to stop 'ora.racdb.db' on 'node2'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'node2'
CRS-2677: Stop of 'ora.racdb.db' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.ARCH1.dg' on 'node2'
CRS-2673: Attempting to stop 'ora.DATA1.dg' on 'node2'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.node2.vip' on 'node2'
CRS-2677: Stop of 'ora.node2.vip' on 'node2' succeeded
CRS-2672: Attempting to start 'ora.node2.vip' on 'node1'
CRS-2676: Start of 'ora.node2.vip' on 'node1' succeeded
CRS-2677: Stop of 'ora.registry.acfs' on 'node2' succeeded
CRS-2677: Stop of 'ora.ARCH1.dg' on 'node2' succeeded
CRS-2677: Stop of 'ora.DATA1.dg' on 'node2' succeeded
CRS-2677: Stop of 'ora.CRS1.dg' on 'node2' succeeded
CRS-2677: Stop of 'ora.NCRS.dg' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'node2'
CRS-2677: Stop of 'ora.asm' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'node2'
CRS-2677: Stop of 'ora.ons' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'node2'
CRS-2677: Stop of 'ora.net1.network' on 'node2' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'node2' has completed
CRS-2677: Stop of 'ora.crsd' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'node2'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'node2'
CRS-2673: Attempting to stop 'ora.ctssd' on 'node2'
CRS-2673: Attempting to stop 'ora.evmd' on 'node2'
CRS-2673: Attempting to stop 'ora.asm' on 'node2'
CRS-2677: Stop of 'ora.evmd' on 'node2' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'node2' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'node2' succeeded
CRS-2677: Stop of 'ora.asm' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'node2'
CRS-2677: Stop of 'ora.drivers.acfs' on 'node2' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'node2'
CRS-2677: Stop of 'ora.cssd' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'node2'
CRS-2677: Stop of 'ora.crf' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'node2'
CRS-2677: Stop of 'ora.gipcd' on 'node2' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'node2'
CRS-2677: Stop of 'ora.gpnpd' on 'node2' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@node2 ~]# /u01/app/11.2.0/grid/bin/crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

[grid@node2 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH1.dg
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.CRS1.dg
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.DATA1.dg
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.LISTENER.lsnr
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.NCRS.dg
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.asm
               ONLINE  ONLINE       node1                    Started
               ONLINE  ONLINE       node2                    Started
ora.gsd
               OFFLINE OFFLINE      node1
               OFFLINE OFFLINE      node2
ora.net1.network
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.ons
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
ora.registry.acfs
               ONLINE  ONLINE       node1
               ONLINE  ONLINE       node2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       node1
ora.cvu
      1        ONLINE  ONLINE       node1
ora.node1.vip
      1        ONLINE  ONLINE       node1
ora.node2.vip
      1        ONLINE  ONLINE       node2
ora.oc4j
      1        OFFLINE OFFLINE
ora.racdb.db
      1        ONLINE  OFFLINE                               Instance Shutdown,STARTING
      2        ONLINE  ONLINE       node1                    Open
ora.scan1.vip
      1        ONLINE  ONLINE       node1

  10. Start the database

Start all database instances as the oracle user, from one node:

    srvctl start database -d racdb

  11. Add disks to the DATA disk group

11.1 Check the failure groups of the DATA disk group

select name,group_number,disk_number,state,failgroup,path from v$asm_disk;

11.2 Add one 500G disk to each failure group

alter diskgroup DATA add failgroup DATA_0000 disk '/dev/rhdisk14';

alter diskgroup DATA add failgroup DATA_0001 disk '/dev/rhdisk15';

alter diskgroup DATA add failgroup DATA_0002 disk '/dev/rhdisk16';

alter diskgroup DATA add failgroup DATA_0003 disk '/dev/rhdisk17';

alter diskgroup DATA add failgroup DATA_0004 disk '/dev/rhdisk18';
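The five ALTER DISKGROUP statements map failure group DATA_000N to /dev/rhdisk(14+N), so they can be generated from the plan (a dry-run sketch that only prints the SQL; paste the output into sqlplus after checking it):

```shell
# Print one ADD FAILGROUP statement per disk in the plan.
for n in 0 1 2 3 4; do
  printf "alter diskgroup DATA add failgroup DATA_000%d disk '/dev/rhdisk%d';\n" "$n" $((14 + n))
done
```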

11.3 Check the progress of the ASM rebalance

select * from v$asm_operation;

The rebalance is complete when the query returns no rows:

SQL> select * from v$asm_operation;

no rows selected

This completes the migration.


Reposted from www.cnblogs.com/yss669/p/9968038.html