Oracle 11gR2 RAC Network Configuration: Changing the Public IP, VIP, and SCAN IP


Oracle RAC network configuration: modifying the public IP and VIP addresses

Oracle Clusterware Network Management

# Public IP and private IP

An Oracle Clusterware configuration requires at least two interfaces:
A public network interface, on which users and application servers connect to access data on the database server.
–The public interface is the NIC the VIP is bound to; clients connect through it, and it serves external traffic.

A private network interface for internode communication.
–The private network interface carries the interconnect traffic used to keep the RAC nodes synchronized.
–Each node in an Oracle RAC system therefore has at least two interfaces: a public one serving client connections and a private one for the interconnect.

The SCAN IP is a virtual IP exposed to clients. Oracle recommends connecting through the SCAN: with SCAN IPs configured, incoming client requests are load-balanced across the cluster nodes.
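
For illustration, a client-side connect alias through the SCAN might look like the following tnsnames.ora sketch; the SCAN name rac.wtest.com comes from the test environment below, and the service name rac is an assumption:

RAC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac.wtest.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = rac)
    )
  )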

Configuration requirements:
The OS interface names behind the public and private IPs must be the same on every node in the cluster (for example, eth0 and eth1 on all nodes); otherwise the RAC software cannot be installed successfully.
The OS hosts file must be correct, with entries for the public IPs, VIPs, private IPs, and SCAN IP.
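
Interface-name consistency across nodes can be checked with the Cluster Verification Utility before installing or before a network change; a sketch, assuming the node names rac1 and rac2 used below:

cluvfy comp nodecon -n rac1,rac2 -verbose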

The cases below walk through the different kinds of network-configuration changes.

Test environment (64-bit Linux + RAC 11.2.0.4):
–public ip
192.168.56.101 rac1.wtest.com rac1
192.168.56.103 rac2.wtest.com rac2
192.168.56.102 rac1-vip.wtest.com rac1-vip
192.168.56.104 rac2-vip.wtest.com rac2-vip
192.168.56.105 rac.wtest.com rac
–priv
192.168.57.11 rac1-priv
192.168.57.13 rac2-priv

–interface information
node1:
eth0 inet addr:192.168.56.101 Bcast:192.168.56.255 Mask:255.255.255.0
eth1 inet addr:192.168.57.11 Bcast:192.168.57.255 Mask:255.255.255.0
node2:
eth0 inet addr:192.168.56.103 Bcast:192.168.56.255 Mask:255.255.255.0
eth1 inet addr:192.168.57.13 Bcast:192.168.57.255 Mask:255.255.255.0

Case 1: Changing the hostname
The public hostname is recorded in the OCR automatically during software installation and cannot simply be edited.
The only way to change a hostname is to remove the node from the cluster and then add it back under the new name.
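
A rough outline of that remove/re-add flow, as a sketch only (paths assume GRID_HOME=/app/grid/11.2.0, and rac2new is a hypothetical new hostname; the full procedure is in the Clusterware node add/delete documentation):

On the node being renamed, as root, deconfigure Clusterware:
/app/grid/11.2.0/crs/install/rootcrs.pl -deconfig -force
On a surviving node, as root, remove the node from the cluster:
/app/grid/11.2.0/bin/crsctl delete node -n rac2
From an existing node, as the grid user, add it back under the new hostname:
/app/grid/11.2.0/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={rac2new}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac2new-vip}"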

Case 2: Changing the public IP
If the interface name and netmask stay the same and the new address remains within the original subnet, the public IP can be changed directly.

The change is handled entirely at the OS level; nothing extra is required at the Oracle Clusterware layer.

node1:
eth0 192.168.56.101 –> 192.168.56.111
node2:
eth0 192.168.56.103 –> 192.168.56.113
The interface names stay unchanged.
1. Shut down the Oracle Clusterware stack
node1:
./crsctl stop crs
node2:
./crsctl stop crs
2. Modify the IP address at the network layer, and update DNS and the /etc/hosts file to reflect the change
Edit /etc/hosts
Edit /etc/sysconfig/network-scripts/ifcfg-eth0
Restart the network service:
service network restart
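
For reference, ifcfg-eth0 on node1 after the change might look like this (a sketch; the BOOTPROTO/ONBOOT values are assumptions):

# /etc/sysconfig/network-scripts/ifcfg-eth0 on node1
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.56.111
NETMASK=255.255.255.0
ONBOOT=yes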
3. Restart the Oracle Clusterware stack
Log back in to the machine over the new address
cd /app/grid/11.2.0/bin
./crsctl start crs
Check the status:
crsctl stat res -t

[grid@rac2 ~]$ crsctl stat res -t 
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATADG.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.DGSYS.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.EXTDG.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.SYSDG.dg
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.asm
               ONLINE  ONLINE       rac1                     Started             
               ONLINE  ONLINE       rac2                     Started             
ora.gsd
               OFFLINE OFFLINE      rac1                                         
               OFFLINE OFFLINE      rac2                                         
ora.net1.network
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.ons
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
ora.registry.acfs
               ONLINE  ONLINE       rac1                                         
               ONLINE  ONLINE       rac2                                         
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                                         
ora.cvu
      1        ONLINE  ONLINE       rac1                                         
ora.mar.db
      1        OFFLINE OFFLINE                               Instance Shutdown   
      2        OFFLINE OFFLINE                               Instance Shutdown   
ora.oc4j
      1        ONLINE  ONLINE       rac1                                         
ora.rac.db
      1        OFFLINE OFFLINE                               Instance Shutdown   
      2        OFFLINE OFFLINE                               Instance Shutdown   
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                                         
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                                         
ora.scan1.vip
      1        ONLINE  ONLINE       rac1 

crs_stat -t -v
[grid@rac2 ~]$ crs_stat -t -v 
Name           Type           R/RA   F/FT   Target    State     Host        
----------------------------------------------------------------------
ora.DATADG.dg  ora....up.type 0/5    0/     ONLINE    ONLINE    rac1        
ora.DGSYS.dg   ora....up.type 0/5    0/     ONLINE    ONLINE    rac1        
ora.EXTDG.dg   ora....up.type 0/5    0/     ONLINE    ONLINE    rac1        
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    rac1        
ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    rac1        
ora.SYSDG.dg   ora....up.type 0/5    0/     ONLINE    ONLINE    rac1        
ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    rac1        
ora.cvu        ora.cvu.type   0/5    0/0    ONLINE    ONLINE    rac1        
ora.gsd        ora.gsd.type   0/5    0/     OFFLINE   OFFLINE               
ora.mar.db     ora....se.type 0/2    0/1    OFFLINE   OFFLINE               
ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    rac1        
ora.oc4j       ora.oc4j.type  0/1    0/2    ONLINE    ONLINE    rac1        
ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    rac1        
ora.rac.db     ora....se.type 0/2    0/1    OFFLINE   OFFLINE               
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    rac1        
ora....C1.lsnr application    0/5    0/0    ONLINE    ONLINE    rac1        
ora.rac1.gsd   application    0/5    0/0    OFFLINE   OFFLINE               
ora.rac1.ons   application    0/3    0/0    ONLINE    ONLINE    rac1        
ora.rac1.vip   ora....t1.type 0/0    0/0    ONLINE    ONLINE    rac1        
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    rac2        
ora....C2.lsnr application    0/5    0/0    ONLINE    ONLINE    rac2        
ora.rac2.gsd   application    0/5    0/0    OFFLINE   OFFLINE               
ora.rac2.ons   application    0/3    0/0    ONLINE    ONLINE    rac2        
ora.rac2.vip   ora....t1.type 0/0    0/0    ONLINE    ONLINE    rac2        
ora....ry.acfs ora....fs.type 0/5    0/     ONLINE    ONLINE    rac1        
ora.scan1.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    rac1  

Case 3: Changing the public network interface, subnet, or netmask
Changing the interface name, subnet mask, or similar attributes must be done with the oifcfg command.

If the change involves a different subnet (netmask) or interface, the existing interface information must be deleted from the OCR and added back with the correct values.

The public network previously used eth0, with 192.168.56.111 and 192.168.56.113.
It now moves to eth2, using 192.168.56.121 and 192.168.56.123.

node1:
eth2 192.168.56.121
node2:
eth2 192.168.56.123

Overview of the stop sequence:
srvctl stop database -d RAC -o immediate
srvctl stop asm -n rac1
srvctl stop asm -n rac2
srvctl stop nodeapps -n rac1
srvctl stop nodeapps -n rac2
./crsctl stop crs   (run on both nodes)

1. Stop the database
srvctl stop database -d RAC -o immediate
2. Stop nodeapps
From 11gR2 onwards, check the current configuration first:
srvctl config nodeapps -a

[grid@rac1 ~]$ srvctl config nodeapps -a
Network exists: 1/192.168.56.0/255.255.255.0/eth0, type static
VIP exists: /rac1-vip/192.168.56.102/192.168.56.0/255.255.255.0/eth0, hosting node rac1
VIP exists: /rac2-vip/192.168.56.104/192.168.56.0/255.255.255.0/eth0, hosting node rac2

[grid@rac2 ~]$ srvctl stop asm -n rac2
PRCR-1014 : Failed to stop resource ora.asm
PRCR-1065 : Failed to stop resource ora.asm
CRS-2529: Unable to act on 'ora.asm' because that would require stopping or relocating 'ora.DATADG.dg', but the force option was not specified

oerr crs 2529
[grid@rac2 ~]$ oerr crs 2529
2529, 1, "Unable to act on '%s' because that would require stopping or relocating '%s', but the force option was not specified"
// *Cause:  Acting on the resource requires stopping or relocating other resources,
//          which requires that force option be specified, and it is not.
// *Action: Re-evaluate the request and if it makes sense, set the force option and
//          re-submit

Force the stop (add the -f option):
srvctl stop asm -n rac2 -f
srvctl stop asm -n rac1 -f

Check with oifcfg getif:
[root@rac1 bin]# ./oifcfg getif
PRIF-10: failed to initialize the cluster registry

With the ASM instances down, this command fails with PRIF-10: in 11.2 the OCR is typically stored in an ASM disk group, so the cluster registry cannot be read while ASM is offline.
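
The OCR location can be confirmed with ocrcheck; a sketch of the relevant output, assuming the OCR lives in one of the ASM disk groups listed earlier (for example +SYSDG):

./ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         :      +SYSDG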

3. So bring the stack back up first: crsctl start crs
Then proceed:

cd /app/grid/11.2.0/bin
./oifcfg getif
[root@rac1 bin]# ./oifcfg getif
eth0  192.168.56.0  global  public
eth1  192.168.57.0  global  cluster_interconnect
[root@rac1 bin]# ./oifcfg iflist
eth0  192.168.56.0
eth1  192.168.57.0
eth1  169.254.0.0
eth2  192.168.56.0

(Note that eth1 also reports 169.254.0.0: from 11.2.0.2 onward the redundant-interconnect HAIP uses a link-local 169.254.x.x network on the private interface. eth2 is already up on the 192.168.56.0 subnet.)

Check /etc/hosts:

#public ip
192.168.56.111  rac1.wtest.com rac1
192.168.56.113  rac2.wtest.com rac2
192.168.56.102  rac1-vip.wtest.com rac1-vip
192.168.56.104  rac2-vip.wtest.com rac2-vip

192.168.56.105  rac.wtest.com rac

#priv
192.168.57.11  rac1-priv
192.168.57.13  rac2-priv

# the new public IPs to be used
192.168.56.121 rac1.wtest.com rac1
192.168.56.123 rac2.wtest.com rac2

4. Switch the interface
Use oifcfg to change the interface registered for the public network:

[root@rac1 bin]# ./oifcfg delif -global eth0/192.168.56.0
[root@rac1 bin]# 
[root@rac1 bin]# ./oifcfg getif
eth1  192.168.57.0  global  cluster_interconnect

Now register eth2 as the public interface:
./oifcfg setif -global eth2/192.168.56.0:public
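
After the setif, getif should report eth2 as the public interface (expected output, a sketch):

[root@rac1 bin]# ./oifcfg getif
eth1  192.168.57.0  global  cluster_interconnect
eth2  192.168.56.0  global  public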

5. Disable eth0 and bring up eth2, simulating the replacement of the old interface with a new one:

ifdown eth0   (take the old interface down)
ifup eth2     (bring up eth2)
./crsctl stop crs    
./crsctl start crs

6. Check the resource status
crsctl stat res -t
The VIPs on both nodes did not come up.
Try to start them:

[grid@rac2 ~]$ srctl start vip -n rac2
-bash: srctl: command not found
[grid@rac2 ~]$ srvctl start vip -n rac2
PRCR-1079 : Failed to start resource ora.rac2.vip
CRS-2674: Start of 'ora.net1.network' on 'rac2' failed
CRS-2632: There are no more servers to try to place resource 'ora.rac2.vip' on that would satisfy its placement policy

[grid@rac2 ~]$ oerr crs 2632
2632, 1, "There are no more servers to try to place resource '%s' on that would satisfy its placement policy"
// *Cause: After one or more attempts, the system ran out of servers
// that can be used to place the resource and satisfy its placement
// policy.
// *Action: None.
[grid@rac2 ~]$ oerr crs 2674
2674, 1, "Start of '%s' on '%s' failed"
// *Cause: This is a status message.
// *Action: None.

7. Check the VIP configuration; the VIPs are still bound to eth0

[grid@rac2 ~]$ srvctl config nodeapps -a
Network exists: 1/192.168.56.0/255.255.255.0/eth0, type static
VIP exists: /rac1-vip/192.168.56.102/192.168.56.0/255.255.255.0/eth0, hosting node rac1
VIP exists: /rac2-vip/192.168.56.104/192.168.56.0/255.255.255.0/eth0, hosting node rac2

Conclusion: after the interface carrying the public IP is renamed, the VIP configuration must be updated as well, because every VIP is bound to the public interface.
Change the interface the VIPs are bound to, using the classic syntax:

As the root user:
cd /app/grid/11.2.0/bin
srvctl modify nodeapps -n rac1 -A 192.168.56.102/255.255.255.0/eth2
srvctl modify nodeapps -n rac2 -A 192.168.56.104/255.255.255.0/eth2

Check the configuration after the change:

[grid@rac2 ~]$ srvctl config nodeapps -a
Network exists: 1/192.168.56.0/255.255.255.0/eth2, type static
VIP exists: /rac1-vip/192.168.56.102/192.168.56.0/255.255.255.0/eth2, hosting node rac1
VIP exists: /rac2-vip/192.168.56.104/192.168.56.0/255.255.255.0/eth2, hosting node rac2

From 11.2.0.2 onwards, the same change can also be made by modifying the network resource directly:

srvctl modify network -k 1 -S 192.168.56.0/255.255.255.0/eth2
(run once as root; -S takes the subnet, not the individual VIP addresses)

Note:

How to Modify Public Network Information including VIP in Oracle Clusterware (Doc ID 276434.1)
Note 1: Starting with 11.2, the VIPs depend on the network resource (ora.net1.network); the OCR only records the VIP
hostname or the IP address associated with the VIP resource. The network attributes (subnet/netmask/interface) are
recorded with the network resource. When the nodeapps resource is modified, the network resource (ora.net1.network)
attributes are also modified implicitly.
From 11.2.0.2 onwards, if only subnet/netmask/interface change is required, network resource can be modified directly via
srvctl modify network command.
as root user:
# srvctl modify network [-k <network_number>] [-S <subnet>/<netmask>[/if1[|if2...]]]
eg:
# srvctl modify network -k 1 -S 110.11.70.0/255.255.255.0/eth2

8. Check the resource status

crsctl stat res -t
crs_stat -t

9. If you want to be completely sure and have a downtime window, restart the stack on both nodes and verify that everything comes back cleanly:

crsctl stop crs
crsctl start crs
crsctl stat res -t 
srvctl start database -d RAC -o open

Case 4: Changing VIPs along with a public network change
If the change involves the VIP addresses as well as the public interface name, netmask, and so on, follow the same steps as in Case 3; the test log is not repeated here. A sketch of the extra VIP step is shown below.
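
For reference, a sketch of the additional VIP step layered on top of Case 3, assuming the VIPs move to the hypothetical new addresses 192.168.56.122 and 192.168.56.124 on eth2 (run as root, with /etc/hosts and DNS updated to match):

srvctl modify network -k 1 -S 192.168.56.0/255.255.255.0/eth2
srvctl modify nodeapps -n rac1 -A 192.168.56.122/255.255.255.0/eth2
srvctl modify nodeapps -n rac2 -A 192.168.56.124/255.255.255.0/eth2
srvctl start vip -n rac1
srvctl start vip -n rac2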
