ORACLE 11gR2 RAC: Adding and Removing Nodes, Normal and Forced (Removal)

This article is largely a repost of http://www.cnxdug.org/?p=2511 , with some details added from my own experiments. Thanks to the original author.

Removing a RAC Node

Here we walk through two scenarios: removing a node normally while it can still start, and forcibly removing a node from the cluster when a hardware fault or other problem prevents it from starting in the short term.

Normal removal is the reverse of adding a node: first delete the database instance, then remove the database software (ORACLE_HOME), then remove the node at the clusterware layer, and finally remove the Grid software (GRID_HOME).

Forced removal follows broadly the same logic; only the details differ.

The plan is to remove racdb1 normally and racdb2 forcibly. (Note: the transcripts below mix two environments. The source article works on a four-node cluster; the experiment logs added by this post's author come from a three-node cluster, racdb1-racdb3, in which racdb3 is the node actually removed. Node names therefore vary between steps.)

About downtime:

Removing a RAC node does not require an outage; it can be done online with no downtime request. That said, if circumstances allow, it is prudent to request a maintenance window of a few hours, in case something goes wrong mid-operation and destabilizes the whole cluster.

Normal RAC Node Removal
Removing the Oracle instance
As with adding an instance, removing an instance from a RAC database can be done with the dbca tool.

11gR2 RAC databases come in two flavors, policy-managed and administrator-managed, and the instance-removal procedure differs between them. Policy-managed databases are the more involved case; see the official documentation:

https://docs.oracle.com/cd/E18283_01/rac.112/e16795/adddelunix.htm#CEGIEDFF

Our database is administrator-managed, so we proceed as follows:

1. Detach the instance from any services
If the instance is associated with a service, first relocate the service with srvctl or EM, then modify the service so that it runs only on the nodes you are keeping. Make sure the instance being removed is neither a preferred nor an available instance of any service.

Our case is simple: no services were created, so no service handling is needed here; a sketch of what it would look like follows.
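If a service did exist, the handling would look roughly like the sketch below. The service name racdbsvc and the instance names are hypothetical; the srvctl syntax is the 11.2 administrator-managed form:

# See which services exist and where they run
srvctl status service -d racdb

# Move the hypothetical service off the instance being deleted (racdb3 -> racdb1)
srvctl relocate service -d racdb -s racdbsvc -i racdb3 -t racdb1

# Make the surviving instances the only preferred instances
srvctl modify service -d racdb -s racdbsvc -n -i racdb1,racdb2

# Confirm the new placement
srvctl config service -d racdb -s racdbsvc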

2. Back up the OCR
First check whether automatic OCR backups exist with the ocrconfig -showbackup command. If there are none, take a manual backup as root:

ocrconfig -export ocr_racdb3.bak

ocrconfig -manualbackup
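A quick way to confirm the backup landed (the paths shown in the output will vary by environment):

# As root: list automatic and manual OCR backups
ocrconfig -showbackup

# While we are at it, sanity-check OCR integrity
ocrcheck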

3. Use dbca to remove the instance from the RAC database.
There are two ways, GUI and command line; to reinforce the steps, we demonstrate both.

Method 1: GUI

Invoke dbca as the oracle user. (The original post documents this with screenshots, which are not reproduced here.) The wizard flow is: click Next through the first three screens, enter the user name "sys" and the sys password, click Next twice more, click "Finish", confirm with "OK" twice, wait while the deletion progresses (it lingers around 33%), and finally click "No".


Method 2: command line
The command takes the form:

dbca -silent -deleteInstance [-nodeList node_name] -gdbName gdb_name -instanceName instance_name -sysDBAUserName sysdba -sysDBAPassword password

Substituting our values:

dbca -silent -deleteInstance -gdbName racdb -instanceName racdb1 -nodelist racdb1 -sysDBAUserName sys -sysDBAPassword password

The command must be run as the oracle user on a node that is not being deleted; we again run it on node 3. The session is recorded below:


[oracle@racdb3 ~]$ dbca -silent -deleteInstance -gdbName racdb -instanceName racdb1 -nodelist racdb1 -sysDBAUserName sys -sysDBAPassword PASSWORD

Deleting instance

20% complete

……

66% complete

Completing instance management.

100% complete

Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/racdb.log" for further details
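Once dbca reports 100%, it is worth confirming the instance really is gone. A minimal check, assuming the naming used in this walkthrough:

# Cluster-level view: the deleted instance should no longer be listed
srvctl config database -d racdb
srvctl status database -d racdb

# Inside the database: the redo thread of the removed instance should be gone
sqlplus -S / as sysdba <<'EOF'
select inst_id, instance_name from gv$instance;
select thread#, status from v$thread;
EOF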


=================================================================================================
=================================================================================================


Removing the Oracle software
1. Handle the listener on the node being deleted

If a listener is configured in the Oracle RAC home and registered in the clusterware, it must first be disabled and stopped. As the grid user, on any node, run:

$ srvctl disable listener -l listener_name -n NAME_OF_NODE_TO_BE_DELETED

$ srvctl stop listener -l listener_name -n NAME_OF_NODE_TO_BE_DELETED

We have no separately configured listener, only the default one; we disable and stop it as the grid user on node 1:

[grid@racdb1 ~]$ srvctl disable listener -l listener -n racdb3

[grid@racdb1 ~]$ srvctl stop listener -l listener -n racdb3

2. Update the NodeList on the node being deleted

On the node being deleted, as the oracle user, run:

$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=Oracle_home_location "CLUSTER_NODES={name_of_node_to_delete}" -local

Substituting our values, the actual command, run as the oracle user, is:

$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 "CLUSTER_NODES=racdb3" -local

The output is recorded below:


[oracle@racdb3 bin]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 "CLUSTER_NODES=racdb3" -local

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 2881 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@racdb3 bin]$
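To see the effect, one can inspect the local central inventory; a hypothetical check (the inventory path comes from the output above):

# On racdb3, the node list for this home should now contain only racdb3
grep -A 3 'dbhome_1' /u01/app/oraInventory/ContentsXML/inventory.xml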


3. Delete the ORACLE_HOME directory

If the ORACLE_HOME is shared, run the following on the node being deleted:
cd $ORACLE_HOME/oui/bin
./runInstaller -detachHome ORACLE_HOME=Oracle_home_location

If it is not shared, run:
${ORACLE_HOME}/deinstall/deinstall -local


Ours is not shared, so we use the latter method, running as the oracle user on the node being deleted:

[oracle@racdb3 bin]$ cd $ORACLE_HOME/deinstall
[oracle@racdb3 deinstall]$ ls
bootstrap.pl deinstall deinstall.pl deinstall.xml jlib readme.txt response sshUserSetup.sh
[oracle@racdb3 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################### CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /u01/app/oracle/product/11.2.0/dbhome_1
Oracle Home type selected for deinstall is: Oracle Real Application Cluster Database
Oracle Base selected for deinstall is: /u01/app/oracle
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/11.2.0/grid
The following nodes are part of this cluster: racdb3
Checking for sufficient temp space availability on node(s) : 'racdb3'

## [END] Install check configuration ##


Network Configuration check config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2018-12-30_09-09-04-AM.log

Network Configuration check config END

Database Check Configuration START

Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_check2018-12-30_09-09-08-AM.log

Database Check Configuration END

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_check2018-12-30_09-09-12-AM.log

Enterprise Manager Configuration Assistant END
Oracle Configuration Manager check START
OCM check log file location : /u01/app/oraInventory/logs//ocm_check313.log
Oracle Configuration Manager check END

######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/11.2.0/grid
The cluster node(s) on which the Oracle home deinstallation will be performed are:racdb3
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'racdb3', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/oracle/product/11.2.0/dbhome_1
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
The option -local will not modify any database configuration for this Oracle home.

No Enterprise Manager configuration to be updated for any database(s)
No Enterprise Manager ASM targets to update
No Enterprise Manager listener targets to migrate
Checking the config status for CCR
Oracle Home exists with CCR directory, but CCR is not configured
CCR check is finished
Do you want to continue (y - yes, n - no)? [n]: y      <-- manual input required here: enter y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-12-30_09-08-56-AM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2018-12-30_09-08-56-AM.err'

######################## CLEAN OPERATION START ########################

Enterprise Manager Configuration Assistant START

EMCA de-configuration trace file location: /u01/app/oraInventory/logs/emcadc_clean2018-12-30_09-09-12-AM.log

Updating Enterprise Manager ASM targets (if any)
Updating Enterprise Manager listener targets (if any)
Enterprise Manager Configuration Assistant END
Database de-configuration trace file location: /u01/app/oraInventory/logs/databasedc_clean2018-12-30_09-09-31-AM.log

Network Configuration clean config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2018-12-30_09-09-31-AM.log

De-configuring Local Net Service Names configuration file...
Local Net Service Names configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END

Oracle Configuration Manager clean START
OCM clean log file location : /u01/app/oraInventory/logs//ocm_clean313.log
Oracle Configuration Manager clean END
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/oracle/product/11.2.0/dbhome_1' from the central inventory on the local node : Done

Delete directory '/u01/app/oracle/product/11.2.0/dbhome_1' on the local node : Done

Failed to delete the directory '/u01/app/oracle'. The directory is in use.
Delete directory '/u01/app/oracle' on the local node : Failed <<<<

Oracle Universal Installer cleanup completed with errors.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2018-12-30_09-08-37AM' on node 'racdb3'

## [END] Oracle install clean ##


######################### CLEAN OPERATION END #########################


####################### CLEAN OPERATION SUMMARY #######################
Cleaning the config for CCR
As CCR is not configured, so skipping the cleaning of CCR configuration
CCR clean is finished
Successfully detached Oracle home '/u01/app/oracle/product/11.2.0/dbhome_1' from the central inventory on the local node.
Successfully deleted directory '/u01/app/oracle/product/11.2.0/dbhome_1' on the local node.
Failed to delete directory '/u01/app/oracle' on the local node.
Oracle Universal Installer cleanup completed with errors.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END #############

[oracle@racdb3 deinstall]$


4. Update the inventory on the remaining nodes

On any node that is being kept, as the oracle user, run:

cd $ORACLE_HOME/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=Oracle_home_location "CLUSTER_NODES={remaining_node_list}"


Substituting our values:

./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 "CLUSTER_NODES=racdb1,racdb2"

We run it on racdb1; the session is recorded below:

[root@racdb1 ~]# su - oracle
[oracle@racdb1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@racdb1 bin]$
[oracle@racdb1 bin]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/dbhome_1
[oracle@racdb1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1 "CLUSTER_NODES= racdb1,racdb2"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 2104 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

=================================================================================================================
Next we deal with the Grid side.

Removing the node from the cluster
Remove the node from the cluster so that it is no longer a cluster member.

1. Check whether the node's state is Unpinned

As the grid user, run: olsnodes -s -t. If the node is not Unpinned, unpin it as root with: crsctl unpin css.

"olsnodes -s -t" may be run either on the node being deleted or on a surviving node:

Node 1:
[root@racdb1 ~]# su - grid
[grid@racdb1 ~]$ olsnodes -s -t
racdb1 Active Unpinned
racdb2 Active Unpinned
racdb3 Active Unpinned

Node 3:
[root@racdb3 ~]# su - grid
[grid@racdb3 ~]$ olsnodes -s -t
racdb1 Active Unpinned
racdb2 Active Unpinned
racdb3 Active Unpinned

-----------------------------
[grid@racdb1 ~]$ olsnodes -s -t|grep racdb1

racdb1 Active Unpinned

The state is Unpinned, but since this is a lab environment we can try the unpin command anyway:

[root@racdb1 ~]# crsctl unpin css -n racdb3

CRS-4667: Node racdb3 successfully unpinned.

The unpin command may be run either on the node being deleted or on a surviving node.
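The inverse command exists as well; a node is pinned, for example, when it hosts a pre-11.2 database. A brief sketch, run as root, with the node name purely illustrative:

# Pin a node (only needed for pre-11.2 databases on that node)
crsctl pin css -n racdb3

# Show node numbers and pin state
olsnodes -t -n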


2. Deconfigure the node

On the node being deleted, as root, run:

cd $GRID_HOME/crs/install

./rootcrs.pl -deconfig -force
The session is recorded below:

[root@racdb3 ~]# cd /u01/app/11.2.0/grid
[root@racdb3 grid]#
[root@racdb3 grid]#
[root@racdb3 grid]# cd crs/install/
[root@racdb3 install]# ls
cmdllroot.sh crsdelete.pm installRemove.excl paramfile.crs rootofs.sh
crsconfig_addparams crspatch.pm onsconfig ParentDirPerm_racdb1.txt s_crsconfig_defs
crsconfig_addparams.sbs hasdconfig.pl oraacfs.pm ParentDirPerm_racdb3.txt s_crsconfig_lib.pm
crsconfig_lib.pm inittab oracle-ohasd.conf preupdate.sh s_crsconfig_racdb1_env.txt
crsconfig_params install.excl oracle-ohasd.service rootcrs.pl s_crsconfig_racdb3_env.txt
crsconfig_params.sbs install.incl oracss.pm roothas.pl tfa_setup.sh

[root@racdb3 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network exists: 1/192.168.16.0/255.255.255.0/eth0, type static
VIP exists: /192.168.16.55/192.168.16.55/192.168.16.0/255.255.255.0/eth0, hosting node racdb1
VIP exists: /racdb2-vip/192.168.16.56/192.168.16.0/255.255.255.0/eth0, hosting node racdb2
VIP exists: /racdb3-vip/192.168.16.57/192.168.16.0/255.255.255.0/eth0, hosting node racdb3
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'racdb3'
CRS-2673: Attempting to stop 'ora.crsd' on 'racdb3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'racdb3'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'racdb3'
CRS-2673: Attempting to stop 'ora.OCR.dg' on 'racdb3'
CRS-2677: Stop of 'ora.DATA.dg' on 'racdb3' succeeded
CRS-2677: Stop of 'ora.OCR.dg' on 'racdb3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'racdb3'
CRS-2677: Stop of 'ora.asm' on 'racdb3' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'racdb3' has completed
CRS-2677: Stop of 'ora.crsd' on 'racdb3' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'racdb3'
CRS-2673: Attempting to stop 'ora.ctssd' on 'racdb3'
CRS-2673: Attempting to stop 'ora.evmd' on 'racdb3'
CRS-2673: Attempting to stop 'ora.asm' on 'racdb3'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'racdb3'
CRS-2677: Stop of 'ora.ctssd' on 'racdb3' succeeded
CRS-2677: Stop of 'ora.crf' on 'racdb3' succeeded
CRS-2677: Stop of 'ora.evmd' on 'racdb3' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'racdb3' succeeded
CRS-2677: Stop of 'ora.asm' on 'racdb3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'racdb3'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'racdb3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'racdb3'
CRS-2677: Stop of 'ora.cssd' on 'racdb3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'racdb3'
CRS-2677: Stop of 'ora.gipcd' on 'racdb3' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'racdb3'
CRS-2677: Stop of 'ora.gpnpd' on 'racdb3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'racdb3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Removing Trace File Analyzer
error: package cvuqdisk is not installed
Successfully deconfigured Oracle clusterware stack on this node

Note: if the node being deleted is the last node of the cluster, i.e. you are tearing down the whole cluster, run instead:


./rootcrs.pl -deconfig -force -lastnode

3. Delete the node from the other nodes (the actual deletion)

On a surviving node, as root, run:

crsctl delete node -n NAME_OF_NODE_TO_BE_DELETED

We run it on node 1:

[root@racdb1 bin]# ./crsctl delete node -n racdb3
CRS-4661: Node racdb3 successfully deleted.
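The membership change can be confirmed immediately from any surviving node; the expected output is sketched here with our node names:

# racdb3 should no longer appear in the member list
olsnodes -s
# racdb1  Active
# racdb2  Active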


4. Update the Nodelist on the node being deleted

On the node being deleted (node 3), as the grid user, run:

./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={node_to_be_deleted}" CRS=TRUE -silent -local

Substituting our values:

[root@racdb3 bin]# su - grid
[grid@racdb3 ~]$ cd /u01/app/11.2.0/grid/oui/bin
[grid@racdb3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES=racdb3" CRS=TRUE -silent -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 3066 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.

Next we deal with the GRID_HOME.

Removing the Grid software
1. Delete the GRID_HOME

If the GRID_HOME is shared, run the following on the node being deleted, as the grid user:

cd $GRID_HOME/oui/bin

./runInstaller -detachHome ORACLE_HOME=Grid_home -silent -local

If it is not shared, run as the grid user:

$ Grid_home/deinstall/deinstall -local

Note: the -local flag is essential here; without it, the command deletes the GRID_HOME on every node.
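Because a missing -local is so destructive, a tiny guard before launching deinstall costs nothing; a hypothetical sketch:

# Confirm which host we are on and which home is about to be removed
hostname
echo "Deinstalling $ORACLE_HOME on the local node only"

# Only then run the real thing
$ORACLE_HOME/deinstall/deinstall -local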

Ours is the latter case. The transcript below comes from the source article, which deletes racdb1 (hence the racdb1 prompts); my own racdb3 session is recorded in full further down. Apart from the annotated prompts, simply press Enter through the defaults:


[grid@racdb1 bin]$ $ORACLE_HOME/deinstall/deinstall -local

....... output elided; only the important or interactive parts are shown .......

Enter an address or the name of the virtual IP used on node "racdb1"[racdb1-vip]

> -- just press Enter

Enter the IP netmask of Virtual IP "xxx.xxx.3.53" on node "racdb1"[255.255.255.0]

> -- just press Enter

Enter the network interface name on which the virtual IP address "xxx.xxx.3.53" is active

> -- just press Enter

Enter an address or the name of the virtual IP[]

> -- just press Enter

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER,LISTENER_SCAN1]: LISTENER

At least one listener from the discovered listener list available after deinstall. If you want to remove a specific listener, please use Oracle Net Configuration Assistant instead. Do you want to continue? (y|n) [n]: y

……

Do you want to continue (y - yes, n - no)? [n]: y

……

Run the following command as the root user or the administrator on node "racdb1".

……

/tmp/deinstall2018-04-26_00-15-04PM/perl/bin/perl -I/tmp/deinstall2018-04-26_00-15-04PM/perl/lib -I/tmp/deinstall2018-04-26_00-15-04PM/crs/install /tmp/deinstall2018-04-26_00-15-04PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2018-04-26_00-15-04PM/response/deinstall_Ora11g_gridinfrahome1.rsp" "

Press Enter after you finish running the above commands

……

As prompted, open another session on node 1 as root and run the command above:


[root@racdb1 ~]# /tmp/deinstall2018-04-26_00-15-04PM/perl/bin/perl -I/tmp/deinstall2018-04-26_00-15-04PM/perl/lib -I/tmp/deinstall2018-04-26_00-15-04PM/crs/install /tmp/deinstall2018-04-26_00-15-04PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2018-04-26_00-15-04PM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Using configuration parameter file: /tmp/deinstall2018-04-26_00-15-04PM/response/deinstall_Ora11g_gridinfrahome1.rsp

****Unable to retrieve Oracle Clusterware home.

Start Oracle Clusterware stack and try again.

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Stop failed, or completed with errors.

################################################################

# You must kill processes or reboot the system to properly #

# cleanup the processes started by Oracle clusterware #

################################################################

Either /etc/oracle/olr.loc does not exist or is not readable

Make sure the file exists and it has read and execute access

Either /etc/oracle/olr.loc does not exist or is not readable

Make sure the file exists and it has read and execute access

Failure in execution (rc=-1, 256, No such file or directory) for command /etc/init.d/ohasd deinstall

error: package cvuqdisk is not installed

Successfully deconfigured Oracle clusterware stack on this node


Then return to the original session window and press Enter to continue:


Run the following command as the root user or the administrator on node "racdb1".

/tmp/deinstall2018-04-26_00-15-04PM/perl/bin/perl -I/tmp/deinstall2018-04-26_00-15-04PM/perl/lib -I/tmp/deinstall2018-04-26_00-15-04PM/crs/install /tmp/deinstall2018-04-26_00-15-04PM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2018-04-26_00-15-04PM/response/deinstall_Ora11g_gridinfrahome1.rsp" "

Press Enter after you finish running the above commands -- press Enter here

<----------------------------------------

##### ORACLE DEINSTALL & DECONFIG TOOL END ######

The GRID_HOME deletion is complete.

=====================================================================================================================================
Below is the full record of the steps above from my own environment (deleting racdb3):
[root@racdb3 11.2.0]# find . -name runInstaller
./grid/oui/bin/runInstaller
[root@racdb3 11.2.0]# cd ./grid/oui/bin/runInstaller
-bash: cd: ./grid/oui/bin/runInstaller: Not a directory
[root@racdb3 11.2.0]# cd ./grid/oui/bin/
[root@racdb3 bin]# ls
addLangs.sh attachHome.sh filesList.bat filesList.sh resource runInstaller runSSHSetup.sh
addNode.sh detachHome.sh filesList.properties lsnodes runConfig.sh runInstaller.sh
[root@racdb3 bin]# pwd
/u01/app/11.2.0/grid/oui/bin
[root@racdb3 bin]# ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES=racdb3" CRS=TRUE -silent -local

The user is root. Oracle Universal Installer cannot continue installation if the user is root.
: No such file or directory
[root@racdb3 bin]# su - grid
[grid@racdb3 ~]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES=racdb3" CRS=TRUE -silent -local
-bash: ./runInstaller: No such file or directory
[grid@racdb3 ~]$ cd /u01/app/11.2.0/grid/oui/bin
[grid@racdb3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES=racdb3" CRS=TRUE -silent -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 3066 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[grid@racdb3 bin]$ cd
[grid@racdb3 ~]$ $ORACLE_HOME/deinstall/deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2018-12-30_10-00-57AM/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################### CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /u01/app/11.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: racdb3
Checking for sufficient temp space availability on node(s) : 'racdb3'

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2018-12-30_10-00-57AM/logs//crsdc.log
Enter an address or the name of the virtual IP used on node "racdb3"[racdb3-vip]
>

The following information can be collected by running "/sbin/ifconfig -a" on node "racdb3"
Enter the IP netmask of Virtual IP "192.168.16.57" on node "racdb3"[255.255.255.0]
>

Enter the network interface name on which the virtual IP address "192.168.16.57" is active
>

Enter an address or the name of the virtual IP[]
>


Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2018-12-30_10-00-57AM/logs/netdc_check2018-12-30_10-01-44-AM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured [LISTENER]:

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2018-12-30_10-00-57AM/logs/asmcadc_check2018-12-30_10-01-55-AM.log


######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The cluster node(s) on which the Oracle home deinstallation will be performed are:racdb3
Since -local option has been specified, the Oracle home will be deinstalled only on the local node, 'racdb3', and the global configuration will be removed.
Oracle Home selected for deinstall is: /u01/app/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following RAC listener(s) will be de-configured: LISTENER
Option -local will not modify any ASM configuration.
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2018-12-30_10-00-57AM/logs/deinstall_deconfig2018-12-30_10-01-11-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2018-12-30_10-00-57AM/logs/deinstall_deconfig2018-12-30_10-01-11-AM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /tmp/deinstall2018-12-30_10-00-57AM/logs/asmcadc_clean2018-12-30_10-01-59-AM.log
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2018-12-30_10-00-57AM/logs/netdc_clean2018-12-30_10-01-59-AM.log

De-configuring RAC listener(s): LISTENER

De-configuring listener: LISTENER
Stopping listener on node "racdb3": LISTENER
Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END


---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "racdb3".

/tmp/deinstall2018-12-30_10-00-57AM/perl/bin/perl -I/tmp/deinstall2018-12-30_10-00-57AM/perl/lib -I/tmp/deinstall2018-12-30_10-00-57AM/crs/install /tmp/deinstall2018-12-30_10-00-57AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2018-12-30_10-00-57AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Press Enter after you finish running the above commands

<----------------------------------------

Here, open another window and run the following command as root:
/tmp/deinstall2018-12-30_10-00-57AM/perl/bin/perl -I/tmp/deinstall2018-12-30_10-00-57AM/perl/lib -I/tmp/deinstall2018-12-30_10-00-57AM/crs/install /tmp/deinstall2018-12-30_10-00-57AM/crs/install/rootcrs.pl -force -deconfig -paramfile "/tmp/deinstall2018-12-30_10-00-57AM/response/deinstall_Ora11g_gridinfrahome1.rsp"


Remove the directory: /tmp/deinstall2018-12-30_10-00-57AM on node:
Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done

Delete directory '/u01/app/11.2.0/grid' on the local node : Done

Delete directory '/u01/app/oraInventory' on the local node : Done

Failed to delete the directory '/u01/app/grid'. The directory is in use.
Delete directory '/u01/app/grid' on the local node : Failed <<<<

Oracle Universal Installer cleanup completed with errors.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2018-12-30_10-00-57AM' on node 'racdb3'

## [END] Oracle install clean ##


######################### CLEAN OPERATION END #########################


####################### CLEAN OPERATION SUMMARY #######################
Following RAC listener(s) were de-configured successfully: LISTENER
Oracle Clusterware is stopped and successfully de-configured on node "racdb3"
Oracle Clusterware is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.
Successfully deleted directory '/u01/app/oraInventory' on the local node.
Failed to delete directory '/u01/app/grid' on the local node.
Oracle Universal Installer cleanup completed with errors.


Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'racdb3' at the end of the session.

Run 'rm -rf /opt/ORCLfmap' as root on node(s) 'racdb3' at the end of the session.
Run 'rm -rf /etc/oratab' as root on node(s) 'racdb3' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END #############

[grid@racdb3 ~]$
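Per the deinstall summary above, a few root-owned files still have to be removed by hand on racdb3; assuming nothing else on the host uses them:

# As root on racdb3, exactly as the tool's closing instructions requested
rm -rf /etc/oraInst.loc
rm -rf /opt/ORCLfmap
rm -rf /etc/oratab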

=========================================================================================================================================


2. Update the NodeList on the remaining nodes

On any remaining node, run:

cd $GRID_HOME/oui/bin

./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={remaining_nodes_list}" CRS=TRUE -silent

Substituting our values:


cd $ORACLE_HOME/oui/bin

./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES=racdb1,racdb2" CRS=TRUE -silent

We run this on node 1; the session is recorded below:

[root@racdb1 bin]# su - grid
[grid@racdb1 ~]$ cd $ORACLE_HOME/oui/bin
[grid@racdb1 bin]$
[grid@racdb1 bin]$
[grid@racdb1 bin]$
[grid@racdb1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0/grid "CLUSTER_NODES=racdb1,racdb2" CRS=TRUE -silent
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 2076 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.


3. Run the post-deletion check

On any remaining node, as the grid user, run:

cluvfy stage -post nodedel -n deleted_node_list [-verbose]

Substituting our values:

cluvfy stage -post nodedel -n racdb3 -verbose

We run it on node 1; the session is recorded below:


[grid@racdb1 bin]$ cluvfy stage -post nodedel -n racdb3 -verbose

Performing post-checks for node removal

Checking CRS integrity...

Clusterware version consistency passed

……

CRS integrity check passed

Result:

Node removal check passed

Post-check for node removal was successful.

This completes the normal removal of the node.

Next we remove racdb2. We assume node 2 cannot start because of a hardware fault, to simulate forcibly deleting a node from the cluster.

=============================================================================================================================================

Forced RAC Node Removal
Removing the Oracle instance
1. If the database has services configured, handle them first:

If services are configured, first modify them so that they can only run on the nodes remaining in the cluster.

Here we assume the database has a service named racdbsvc; in a real environment you can check with "crsctl stat res -t|grep svc".

On any remaining node in the cluster, as the oracle user, run:

srvctl modify service -d racdb -s racdbsvc -n -i racdb3,racdb4 -f
The command returns nothing on success. Afterwards, verify the service:

srvctl status service -d racdb -s racdbsvc

Service racdbsvc is running on instance(s) racdb3,racdb4

2. Delete the instance from the racdb database

For an administrator-managed database this is again done with dbca, via either the GUI or the command line; the steps match the normal-removal procedure above, so the session log is omitted here. A sketch of the silent form follows.
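Following the pattern of the normal-removal section, the silent invocation for the failed node would look roughly like this, run as oracle from a surviving node (the password placeholder is illustrative):

# Remove the racdb2 instance definition even though node 2 is down
dbca -silent -deleteInstance -gdbName racdb -instanceName racdb2 \
     -nodelist racdb2 -sysDBAUserName sys -sysDBAPassword PASSWORD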

Handling the DB software
On a remaining node in the cluster, as the oracle user, run:

cd $ORACLE_HOME/oui/bin

./runInstaller -updateNodeList ORACLE_HOME=${ORACLE_HOME} "CLUSTER_NODES={remainnode1,remainnode2,....}"

Substituting our values:

./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1 "CLUSTER_NODES={racdb3,racdb4}"

The session is recorded below:

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 3007 MB Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.
Removing the node from the cluster
1. Stop the node's VIP:

On any remaining node, as root, run:

/u01/app/11.2.0.4/grid/bin/srvctl stop vip -i racdb2
2. Remove the node's VIP:

On any remaining node, as root, run:

/u01/app/11.2.0.4/grid/bin/srvctl remove vip -i racdb2 -f
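A quick check that the VIP resource really is gone; once removal has succeeded, srvctl should report that no VIP exists for the node:

# Query the VIP configuration for the deleted node (expected to fail now)
/u01/app/11.2.0.4/grid/bin/srvctl config vip -n racdb2

# Or scan the resource list
/u01/app/11.2.0.4/grid/bin/crsctl stat res -t | grep -i vip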
3. Check whether the node's state is Unpinned

As the grid user, run: olsnodes -s -t. If the node is not Unpinned, unpin it as root with: crsctl unpin css.

4. Delete the RAC node:

On any remaining node, as root, run:

[root@racdb3 ~]# /u01/app/11.2.0.4/grid/bin/crsctl delete node -n racdb2

CRS-4661: Node racdb2 successfully deleted.
Handling the GI inventory
On any remaining node, as the grid user, run:

cd $ORACLE_HOME/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={remaining_node_list}" CRS=TRUE -silent
Substituting our values:

cd $ORACLE_HOME/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=/u01/app/11.2.0.4/grid "CLUSTER_NODES={racdb3,racdb4}" CRS=TRUE -silent
The session is recorded below:

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 3007 MB Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.
Finally, check the cluster membership and status:

olsnodes -s

racdb3 Active

racdb4 Active
This completes the node removal.

Summary
As the above shows, adding and removing RAC nodes does not demand deep technical skill, but it is advisable to follow the official Oracle documentation and MOS notes strictly to avoid unnecessary trouble.

References

1. How to Add Node/Instance or Remove Node/Instance in 10gr2, 11gr1, 11gr2 and 12c Oracle Clusterware and RAC (Doc ID 1332451.1)

2. How to Remove/Delete a Node From Grid Infrastructure Clusterware When the Node Has Failed (Doc ID 1262925.1)

3. https://docs.oracle.com/cd/E14795_01/doc/rac.112/e10717/adddelclusterware.htm#CHDFIAIE

4. https://docs.oracle.com/cd/E18283_01/rac.112/e16795/adddelunix.htm#BEICADHD

5. https://docs.oracle.com/cd/E18283_01/rac.112/e16794/adddelclusterware.htm#CWADD90992

6. http://docs.oracle.com/cd/E11882_01/rac.112/e41959/adddelclusterware.htm#CWADD90992


Reposted from www.cnblogs.com/chendian0/p/10221817.html