Hands-on: Setting Up Oracle RAC on Linux

Oracle 11g RAC Setup (VMware Environment)

Installation environment and network planning
  Installation environment
  Network planning
Environment configuration
  1. Connect via SecureCRT
  2. Disable the firewall
  3. Create the required users, groups, and directories, and grant permissions
  4. Node configuration checks
  5. System file settings
  6. Configure IPs, hosts, and hostname
  7. Configure environment variables for the grid and oracle users
  8. Configure SSH equivalence for the oracle user
  9. Configure raw devices
  10. Configure SSH equivalence for the grid user
  11. Mount the folder containing the installation media
  12. Install cvuqdisk for Linux
  13. Manually run the CVU to verify Oracle Clusterware requirements (on all nodes)
Install Grid Infrastructure
  1. Installation walkthrough
  2. Resource checks after installing grid
  3. Create ASM disk groups for data and fast recovery
Install Oracle Database software (RAC)
  1. Installation walkthrough
  2. Create the cluster database
RAC maintenance
  1. Check service status
  2. Check CRS status
  3. View node configuration for the cluster
  4. View the clusterware voting disk information
  5. View the cluster SCAN VIP information
  6. Start and stop the cluster database
EM management
Local sqlplus connection
Updated 2018-07-15: installed on RHEL 7.1; notes added

Installation Environment and Network Planning
Installation environment
Host OS: Windows 10
VMware Workstation 12: two Oracle Linux R6 U5 x86_64 virtual machines
Oracle Database software: Oracle 11gR2
Cluster software: Oracle Grid Infrastructure 11gR2 (11.2.0.4.0)
Shared storage: ASM

Note: version 11.2.0.1.0 has a bug when installing grid.
Running the script /u01/app/11.2.0/grid/root.sh fails with:
Adding daemon to inittab
CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.
ohasd failed to start: Inappropriate ioctl for device
ohasd failed to start at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 443.

[root@rac1 ~]# lsb_release -a
LSB Version: :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
Distributor ID: OracleServer
Description: Oracle Linux Server release 6.5
Release: 6.5
Codename: n/a
[root@rac1 ~]# uname -r
3.8.13-16.2.1.el6uek.x86_64
Details:

  1. When installing Oracle Linux, assign two NICs: one Host-Only adapter for communication between the two VM nodes, and one NAT adapter for external access; static IPs are assigned manually later. Plan at least 2.5 GB of RAM and swap per host. Disk layout: 500 MB for /boot, with the remaining space managed by LVM (2.5 GB as swap, the rest as /).
    The two Oracle Linux hostnames are rac1 and rac2.
    Ideally place the two guest operating systems on different physical disks; otherwise I/O will struggle.
  2. Shared ASM storage is used, and the cluster needs shared space for the registry disk (OCR) and the voting disk. To create shared storage in VMware:
    From the VMware installation directory, in a cmd window:

C:\Program Files (x86)\VMware\VMware Workstation>
vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 F:\VMware\RAC\Sharedisk\ocr.vmdk
vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 F:\VMware\RAC\Sharedisk\ocr2.vmdk
vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 F:\VMware\RAC\Sharedisk\votingdisk.vmdk
vmware-vdiskmanager.exe -c -s 20000Mb -a lsilogic -t 2 F:\VMware\RAC\Sharedisk\data.vmdk
vmware-vdiskmanager.exe -c -s 10000Mb -a lsilogic -t 2 F:\VMware\RAC\Sharedisk\backup.vmdk
This creates two 1 GB OCR disks, one 1 GB voting disk, one 20 GB data disk, and one 10 GB backup disk.

Edit the vmx configuration file in the RAC1 VM directory:

scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"

scsi1:1.present = "TRUE"
scsi1:1.mode = "independent-persistent"
scsi1:1.filename = "F:\VMware\RAC\Sharedisk\ocr.vmdk"
scsi1:1.deviceType = "plainDisk"

scsi1:2.present = "TRUE"
scsi1:2.mode = "independent-persistent"
scsi1:2.filename = "F:\VMware\RAC\Sharedisk\votingdisk.vmdk"
scsi1:2.deviceType = "plainDisk"

scsi1:3.present = "TRUE"
scsi1:3.mode = "independent-persistent"
scsi1:3.filename = "F:\VMware\RAC\Sharedisk\data.vmdk"
scsi1:3.deviceType = "plainDisk"

scsi1:4.present = "TRUE"
scsi1:4.mode = "independent-persistent"
scsi1:4.filename = "F:\VMware\RAC\Sharedisk\backup.vmdk"
scsi1:4.deviceType = "plainDisk"

scsi1:5.present = "TRUE"
scsi1:5.mode = "independent-persistent"
scsi1:5.filename = "F:\VMware\RAC\Sharedisk\ocr2.vmdk"
scsi1:5.deviceType = "plainDisk"

disk.locking = "false"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.DataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"
Edit the vmx configuration file of RAC2:

scsi1.sharedBus = "virtual"
disk.locking = "false"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.DataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"
gui.lastPoweredViewMode = "fullscreen"
checkpoint.vmState = ""
usb:0.present = "TRUE"
usb:0.deviceType = "hid"
usb:0.port = "0"
usb:0.parent = "-1"
Then, in RAC2's VM settings, manually add the five virtual disks created above, marking them independent-persistent.

Network planning
Hardware requirements:

  • Each server node needs at least two NICs: one public network interface and one private network interface (interconnect/heartbeat).
  • If you install the Oracle clusterware through the OUI, the interface name used for the public and private networks must be the same on every node. For example, if node1 uses eth0 as its public interface, node2 cannot use eth1 as its public interface.

IP requirements:
DHCP is not used here; a static SCAN IP is assigned (the SCAN IP enables cluster load balancing: the clusterware assigns it to a node as conditions require).
Each node gets one public IP, one virtual IP, and one private IP.
The public IP, VIP, and SCAN IP must be on the same subnet.

Example of manual IP configuration without GNS:

Identity      Home Node  Host Node                       Given Name  Type     Address
RAC1 Public   RAC1       rac1                            rac1        Public   192.168.248.101
RAC1 VIP      RAC1       rac1                            rac1-vip    Public   192.168.248.201
RAC1 Private  RAC1       rac1                            rac1-priv   Private  192.168.109.101
RAC2 Public   RAC2       rac2                            rac2        Public   192.168.248.102
RAC2 VIP      RAC2       rac2                            rac2-vip    Public   192.168.248.202
RAC2 Private  RAC2       rac2                            rac2-priv   Private  192.168.109.102
SCAN IP       none       Selected by Oracle Clusterware  scan-ip     Virtual  192.168.248.110
Environment configuration
Unless noted otherwise, perform the following steps on every node; all passwords are set to oracle.

  1. Connect via SecureCRT
    If Backspace prints ^H in sqlplus:
    Options -> Session Options -> Terminal -> Emulation -> Mapped Keys -> Other mappings
    Check "Backspace sends delete".

If Delete and Home do not work in vi:
Options -> Session Options -> Terminal -> Emulation
Set Terminal to Linux
Check "Select an alternate keyboard emulation" and choose Linux

  2. Disable the firewall
    [root@rac1 ~]# setenforce 0
    setenforce: SELinux is disabled
    [root@rac1 ~]# vi /etc/sysconfig/selinux
    SELINUX=disabled
    [root@rac1 ~]# service iptables stop
    [root@rac1 ~]# chkconfig iptables off

  3. Create the required users, groups, and directories, and grant permissions
    /usr/sbin/groupadd -g 1000 oinstall
    /usr/sbin/groupadd -g 1020 asmadmin
    /usr/sbin/groupadd -g 1021 asmdba
    /usr/sbin/groupadd -g 1022 asmoper
    /usr/sbin/groupadd -g 1031 dba
    /usr/sbin/groupadd -g 1032 oper
    useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
    useradd -u 1101 -g oinstall -G dba,asmdba,oper oracle
    mkdir -p /u01/app/11.2.0/grid
    mkdir -p /u01/app/grid
    mkdir /u01/app/oracle
    chown -R grid:oinstall /u01
    chown oracle:oinstall /u01/app/oracle
    chmod -R 775 /u01/
    Following the official documentation, GI and DB are installed under separate owners with separate permissions, which helps when managing multiple instances. A quick sanity check of the result follows below.
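
To confirm the users, groups, and ownership landed as intended, a minimal check on each node (the expected values in the comments come from the commands above; output formatting may differ):

id grid      # expect uid=1100(grid) gid=1000(oinstall), groups incl. asmadmin,asmdba,asmoper,oper,dba
id oracle    # expect uid=1101(oracle) gid=1000(oinstall), groups incl. dba,asmdba,oper
ls -ld /u01/app/11.2.0/grid /u01/app/grid /u01/app/oracle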

  4. Node configuration checks
    Memory: at least 2.5 GB
    Swap:
    With 2.5-16 GB of RAM, swap should be at least equal to RAM.
    With more than 16 GB of RAM, 16 GB of swap is enough.
    Check memory and swap sizes:

[root@rac1 ~]# grep MemTotal /proc/meminfo
MemTotal: 2552560 kB
[root@rac1 ~]# grep SwapTotal /proc/meminfo
SwapTotal: 2621436 kB
If swap is too small, extend it as follows:

To extend swap this way, first compute the block count from the desired swap file size in MB: blocks = size_in_MB * 1024. For example, to add a 64 MB swap file: blocks = 64 * 1024 = 65536.

Then run:

#dd if=/dev/zero of=/swapfile bs=1024 count=65536
#mkswap /swapfile
#swapon /swapfile
#vi /etc/fstab
add the line: /swapfile swap swap defaults 0 0

cat /proc/swaps  or  # free -m    //check the swap size

swapoff /swapfile   //disable the added swap area
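
For this guide's requirement of roughly 2.5 GB of swap, the same steps scale up; a sketch that adds a 2 GB swap file (blocks = 2048 * 1024 = 2097152; /swapfile2 is a hypothetical name):

dd if=/dev/zero of=/swapfile2 bs=1024 count=2097152    # 2 GB of 1 KB blocks
mkswap /swapfile2
swapon /swapfile2
echo "/swapfile2 swap swap defaults 0 0" >> /etc/fstab
free -m                                                # verify the new swap total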

  5. System file settings
    (1) Kernel parameters:
    [root@rac1 ~]# vi /etc/sysctl.conf
    kernel.msgmnb = 65536
    kernel.msgmax = 65536
    kernel.shmmax = 68719476736
    kernel.shmall = 4294967296
    fs.aio-max-nr = 1048576
    fs.file-max = 6815744
    kernel.shmall = 2097152
    kernel.shmmax = 1306910720
    kernel.shmmni = 4096
    kernel.sem = 250 32000 100 128
    net.ipv4.ip_local_port_range = 9000 65500
    net.core.rmem_default = 262144
    net.core.rmem_max = 4194304
    net.core.wmem_default = 262144
    net.core.wmem_max = 1048586
    net.ipv4.tcp_wmem = 262144 262144 262144
    net.ipv4.tcp_rmem = 4194304 4194304 4194304

The prerequisite check later flags this entry, so it will need adjusting:
kernel.shmmax = 68719476736

Apply the kernel changes:
[root@rac1 ~]# sysctl -p

The tuning can also be done with the preinstall package from the Oracle Linux DVD (a sketch of using it follows below):
[root@rac1 Packages]# pwd
/mnt/cdrom/Packages
[root@rac1 Packages]# ll | grep preinstall
-rw-r--r-- 1 root root 15524 Dec 25 2012 oracle-rdbms-server-11gR2-preinstall-1.0-7.el6.x86_64.rpm
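
If the preinstall package route is taken instead of the manual tuning above, a sketch (assumes the local DVD yum repository configured in a later step; the package applies Oracle's own kernel, limits, and user defaults, so review them against the values used here):

[root@rac1 ~]# yum install oracle-rdbms-server-11gR2-preinstall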

(2) Configure shell limits for the oracle and grid users
[root@rac1 ~]# vi /etc/security/limits.conf
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536

(3) Configure login
[root@rac1 ~]# vi /etc/pam.d/login
session required pam_limits.so
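
The Oracle installation guide pairs these limits with a matching stanza in /etc/profile so that login shells pick them up; a sketch along those lines:

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384        # max user processes (ksh option)
        ulimit -n 65536        # max open files
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi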

Required packages:
binutils-2.20.51.0.2-5.11.el6 (x86_64)
compat-libcap1-1.10-1 (x86_64)
compat-libstdc++-33-3.2.3-69.el6 (x86_64)
compat-libstdc++-33-3.2.3-69.el6.i686
gcc-4.4.4-13.el6 (x86_64)
gcc-c++-4.4.4-13.el6 (x86_64)
glibc-2.12-1.7.el6 (i686)
glibc-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6 (x86_64)
glibc-devel-2.12-1.7.el6.i686
ksh
libgcc-4.4.4-13.el6 (i686)
libgcc-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6 (x86_64)
libstdc++-4.4.4-13.el6.i686
libstdc++-devel-4.4.4-13.el6 (x86_64)
libstdc++-devel-4.4.4-13.el6.i686
libaio-0.3.107-10.el6 (x86_64)
libaio-0.3.107-10.el6.i686
libaio-devel-0.3.107-10.el6 (x86_64)
libaio-devel-0.3.107-10.el6.i686
make-3.81-19.el6
sysstat-9.0.4-11.el6 (x86_64)

A local yum repository built from the install DVD is used here; set it up first:
[root@rac1 ~]# mount /dev/cdrom /mnt/cdrom/
[root@rac1 ~]# vi /etc/yum.repos.d/dvd.repo
[dvd]
name=dvd
baseurl=file:///mnt/cdrom
gpgcheck=0
enabled=1
[root@rac1 ~]# yum clean all
[root@rac1 ~]# yum makecache
[root@rac1 ~]# yum install gcc gcc-c++ glibc* glibc-devel* ksh libgcc* libstdc++* libstdc++-devel* make sysstat

6. Configure IPs, hosts, and hostname
(1) Configure the IPs
//The gateway is determined by the VMware network settings; eth0 connects to the external network, eth1 carries the private interconnect (heartbeat)
//on host rac1:
[root@rac1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
IPADDR=192.168.248.101
PREFIX=24
GATEWAY=192.168.248.2
DNS1=114.114.114.114

[root@rac1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
IPADDR=192.168.109.101
PREFIX=24

//on host rac2:
[root@rac2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
IPADDR=192.168.248.102
PREFIX=24
GATEWAY=192.168.248.2
DNS1=114.114.114.114

[root@rac2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1
IPADDR=192.168.109.102
PREFIX=24

(2) Configure hostname
//on host rac1:
[root@rac1 ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=rac1
GATEWAY=192.168.248.2
NOZEROCONF=yes

//on host rac2:
[root@rac2 ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=rac2
GATEWAY=192.168.248.2
NOZEROCONF=yes

(3) Configure hosts
Add on both rac1 and rac2:
[root@rac1 ~]# vi /etc/hosts
192.168.248.101 rac1
192.168.248.201 rac1-vip
192.168.109.101 rac1-priv

192.168.248.102 rac2
192.168.248.202 rac2-vip
192.168.109.102 rac2-priv

192.168.248.110 scan-ip

7. Configure environment variables for the grid and oracle users
ORACLE_SID must be adjusted per node.
[root@rac1 ~]# su - grid
[grid@rac1 ~]$ vi .bash_profile

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM1      # +ASM2 on RAC2
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
umask 022
Note: ORACLE_UNQNAME is the database unique name; when the database is created across multiple nodes, one instance is created per node, and ORACLE_SID is the instance name on that node.

[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ vi .bash_profile

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=orcl1      # orcl2 on RAC2
export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
Run $ source .bash_profile to make the profile take effect.

8. Configure SSH equivalence for the oracle user
This is a critical step. Although the official documentation says the OUI can configure SSH automatically when installing GI and RAC, configuring the equivalence manually is better, so that the CVU checks can be run before installation.

Generate keys as oracle on both nodes (with empty passphrases):
ssh-keygen -t rsa
ssh-keygen -t dsa

[oracle@RAC1 ~]$ cd ~/.ssh
ssh rac1 cat ~/.ssh/id_rsa.pub >> authorized_keys
ssh rac2 cat ~/.ssh/id_rsa.pub >> authorized_keys
ssh rac1 cat ~/.ssh/id_dsa.pub >> authorized_keys
ssh rac2 cat ~/.ssh/id_dsa.pub >> authorized_keys

[oracle@RAC1 .ssh]$ scp authorized_keys rac2:~/.ssh/
[oracle@RAC1 .ssh]$ chmod 600 authorized_keys

Verify from each node:
ssh rac1 date
ssh rac2 date
ssh rac1-priv date
ssh rac2-priv date
Note: do not set a passphrase when generating the keys, the authorized_keys file must be mode 600, and each node must ssh to the other once so the host keys are accepted.

9. Configure raw devices
ASM-managed storage needs the shared disks bound as devices; the shared disks were attached to both hosts earlier. There are three ways to do the binding: (1) oracleasm; (2) the /etc/udev/rules.d/60-raw.rules config file (udev binding as character raw devices); (3) a udev rules script (udev binding as block devices, faster than the character approach and the most current method; recommended).

Partition the disks before configuring the raw devices:

fdisk /dev/sdb
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
Finally, save the changes with the w command.
Repeat these steps for the other disks to obtain the following partitions:
[root@rac1 ~]# ls /dev/sd*
/dev/sda /dev/sda2 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sda1 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
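
Repeating the interactive fdisk dialogue for each disk is tedious; a sketch that feeds the same answers (new, primary, partition 1, default bounds, write) to every shared disk, assuming the disks are blank:

for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
    echo -e "n\np\n1\n\n\nw" | fdisk $d    # one primary partition spanning the disk
done
partprobe                                  # re-read the partition tables
ls /dev/sd?1                               # expect sdb1 through sdf1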

Bind the raw devices:

[root@rac1 ~]# vi /etc/udev/rules.d/60-raw.rules
ACTION=="add", KERNEL=="sdb1", RUN+="/bin/raw /dev/raw/raw1 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="17", RUN+="/bin/raw /dev/raw/raw1 %M %m"
ACTION=="add", KERNEL=="sdc1", RUN+="/bin/raw /dev/raw/raw2 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="33", RUN+="/bin/raw /dev/raw/raw2 %M %m"
ACTION=="add", KERNEL=="sdd1", RUN+="/bin/raw /dev/raw/raw3 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="49", RUN+="/bin/raw /dev/raw/raw3 %M %m"
ACTION=="add", KERNEL=="sde1", RUN+="/bin/raw /dev/raw/raw4 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="65", RUN+="/bin/raw /dev/raw/raw4 %M %m"
ACTION=="add", KERNEL=="sdf1", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="81", RUN+="/bin/raw /dev/raw/raw5 %M %m"

KERNEL=="raw[1-5]", OWNER="grid", GROUP="asmadmin", MODE="660"

[root@rac1 ~]# start_udev
Starting udev: [ OK ]
[root@rac1 ~]# ll /dev/raw/
total 0
crw-rw---- 1 grid asmadmin 162, 1 Apr 13 13:51 raw1
crw-rw---- 1 grid asmadmin 162, 2 Apr 13 13:51 raw2
crw-rw---- 1 grid asmadmin 162, 3 Apr 13 13:51 raw3
crw-rw---- 1 grid asmadmin 162, 4 Apr 13 13:51 raw4
crw-rw---- 1 grid asmadmin 162, 5 Apr 13 13:51 raw5
crw-rw---- 1 root disk 162, 0 Apr 13 13:51 rawctl
Note that the rule entries must not contain stray spaces around the separators, or udev reports errors. The resulting raw devices must end up owned by grid:asmadmin, as shown above.
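
An optional cross-check is to query the raw bindings and compare the major/minor numbers against ls -l /dev/sd?1 (a sketch; the output below is what this layout should produce):

[root@rac1 ~]# raw -qa
/dev/raw/raw1:  bound to major 8, minor 17    <- sdb1
/dev/raw/raw2:  bound to major 8, minor 33    <- sdc1
/dev/raw/raw3:  bound to major 8, minor 49    <- sdd1
/dev/raw/raw4:  bound to major 8, minor 65    <- sde1
/dev/raw/raw5:  bound to major 8, minor 81    <- sdf1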

Method (3):

[root@rac1 ~]# for i in b c d e f ;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" >> /etc/udev/rules.d/99-oracle-asmdevices.rules
done

Possible problem: under VMware, RHEL's scsi_id does not report a WWID for the virtual disks.
Add the following to the vmx file:
disk.EnableUUID = "TRUE"

[root@rac1 ~]# start_udev
Starting udev: [ OK ]
[root@rac1 ~]# ll /dev/asm*
brw-rw---- 1 grid asmadmin 8, 16 Apr 27 18:52 /dev/asm-diskb
brw-rw---- 1 grid asmadmin 8, 32 Apr 27 18:52 /dev/asm-diskc
brw-rw---- 1 grid asmadmin 8, 48 Apr 27 18:52 /dev/asm-diskd
brw-rw---- 1 grid asmadmin 8, 64 Apr 27 18:52 /dev/asm-diske
brw-rw---- 1 grid asmadmin 8, 80 Apr 27 18:52 /dev/asm-diskf

When the disks are added this way, set the Disk Discovery Path to /dev/asm* when creating the ASM disk groups later.

Note the following changes on RHEL 7 and later:

  1. Create the rules file

touch /etc/udev/rules.d/99-oracle-asmdevices.rules
or
touch /usr/lib/udev/rules.d/99-oracle-asmdevices.rules

  2. Generate the rules
    If sdb is not partitioned, run the following shell script:
    for i in b ;
    do
    echo "KERNEL==\"sd*\", SUBSYSTEM==\"block\", PROGRAM==\"/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", SYMLINK+=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
    done

If sdb is partitioned, run this shell script instead:
for i in b1 b2
do
echo "KERNEL==\"sd$i\", SUBSYSTEM==\"block\", PROGRAM==\"/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/\$parent\", RESULT==\"`/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sd${i:0:1}`\", SYMLINK+=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
done

Note: use $name for an unpartitioned disk and $parent for a partition.

  3. Copy the output into 99-oracle-asmdevices.rules

Paste the output from step 2 into the 99-oracle-asmdevices.rules file, for example:

KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c2948ef9d9e4a7937bfc65888bc8", SYMLINK+="asm-diskb", OWNER="grid", GROUP="asmadmin", MODE="0660"

Load updated block device partition tables.

/sbin/partprobe /dev/sdb

Note:
To obtain the RESULT value:
On Linux 7, use:

/usr/lib/udev/scsi_id -g -u /dev/sdb

On Linux 6, use:

/sbin/scsi_id -g -u /dev/sdb

On Linux 5, use:

/sbin/scsi_id -g -u -s /block/sdb/sdb

  4. Test with udevadm. Note that udevadm does not accept a device node name such as /dev/sdc; it requires the raw sysfs path, e.g. /sys/block/sdb.

udevadm test /sys/block/sdb
udevadm info --query=all --path=/sys/block/sdb
udevadm info --query=all --name=asm-diskb

  5. Start udev
    /usr/sbin/udevadm control --reload-rules
    systemctl status systemd-udevd.service
    systemctl enable systemd-udevd.service

start_udev has also been folded into:
systemctl restart systemd-udev-trigger.service

  6. Verify that the devices are bound correctly

ls -l /dev/asm* /dev/sdb

lrwxrwxrwx 1 root root 3 Nov 29 18:17 /dev/asm-diskb -> sdb
brw-rw---- 1 grid asmadmin 8, 16 Nov 29 18:17 /dev/sdb
10. Configure SSH equivalence for the grid user
Generate the keys as the grid user on both nodes (again with empty passphrases):

[grid@rac1 ~]$ ssh-keygen -t rsa
[grid@rac1 ~]$ ssh-keygen -t dsa
[grid@rac2 ~]$ ssh-keygen -t rsa
[grid@rac2 ~]$ ssh-keygen -t dsa

Then on rac1:

[grid@RAC1 ~]$ cd ~/.ssh
ssh rac1 cat ~/.ssh/id_rsa.pub >> authorized_keys
ssh rac2 cat ~/.ssh/id_rsa.pub >> authorized_keys
ssh rac1 cat ~/.ssh/id_dsa.pub >> authorized_keys
ssh rac2 cat ~/.ssh/id_dsa.pub >> authorized_keys

[grid@RAC1 .ssh]$ scp authorized_keys rac2:~/.ssh/
[grid@RAC1 .ssh]$ chmod 600 authorized_keys
11. Mount the folder containing the installation media
Share a folder from the Windows host, then mount it inside the VMs:
mkdir -p /home/grid/db
mount -t cifs -o username=share,password=123456 //192.168.248.1/DB /home/grid/db

mkdir -p /home/oracle/db
mount -t cifs -o username=share,password=123456 //192.168.248.1/DB /home/oracle/db

12. Install cvuqdisk for Linux
Install cvuqdisk on both RAC nodes; otherwise the Cluster Verification Utility cannot discover the shared disks, and running it (manually, or automatically at the end of the Oracle Grid Infrastructure installation) reports "Package cvuqdisk not installed".
Use the cvuqdisk RPM matching your hardware architecture (x86_64 or i386).
The cvuqdisk RPM is in the rpm directory on the grid installation media.
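
A sketch of the install on each node (run as root; CVUQDISK_GRP names the group that should own the utility, and the exact RPM version on your media may differ):

[root@rac1 ~]# export CVUQDISK_GRP=oinstall
[root@rac1 ~]# cd /home/grid/db/grid/rpm
[root@rac1 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm    # version as shipped with the 11.2.0.4 media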

13. Manually run the CVU to verify Oracle Clusterware requirements (on all nodes)
On rac1, run the runcluvfy.sh command from the grid software directory:

A possible problem here: the check can abort with a Java exception such as

wait ...[grid@rac1 grid]$ Exception in thread "main" java.lang.NoClassDefFoundError
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:164)
at java.awt.Toolkit...

[root@rac1 ~]# su - grid
[grid@rac1 ~]$ cd db/grid/
[grid@rac1 grid]$ ls
doc readme.html rpm runInstaller stage
install response runcluvfy.sh sshsetup welcome.html
[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose
Review the CVU report and fix any errors.
Here, every other CVU check result was "passed"; only the following error appeared:
Checking DNS response time for an unreachable node
Node Name Status

rac2 failed
rac1 failed
PRVF-5637 : DNS response time could not be checked on following nodes: rac2,rac1

File "/etc/resolv.conf" is not consistent across nodes

This error occurs because no DNS is configured, but it does not affect the installation; a resolv.conf complaint appears again later. Since a static scan-ip is used, it can be ignored.

Install Grid Infrastructure
1. Installation walkthrough
Install on one node only; the installer copies the software to the other nodes automatically. It is installed from rac1 here.
Start the graphical installer as the grid user:

[root@rac1 ~]# su - grid
[grid@rac1 ~]$ cd db/grid/
doc/ readme.html rpm/ runInstaller stage/
install/ response/ runcluvfy.sh sshsetup/ welcome.html
[grid@rac1 ~]$ cd db/grid/
[grid@rac1 grid]$ ./runInstaller
Skip the software updates.

Select "Install and Configure Grid Infrastructure for a Cluster".

Select the custom (Advanced) installation.

Select English as the language.

Define the cluster name; the SCAN Name is the scan-ip entry defined in /etc/hosts; uncheck GNS.

The node list initially shows only rac1; click "Add" to add the second node, rac2.

Assign the network interfaces.

Configure ASM: select the raw devices raw1, raw2, and raw3 configured earlier, with External redundancy (i.e., none). Since there is no redundancy, a single device would also suffice. These devices hold the OCR registry and the voting disk.

Set passwords for the ASM instance: the sys user (sysasm privilege) and the asmsnmp user (sysdba privilege). A single password, oracle, is used here; the installer warns that it does not meet the standard, click OK.

Do not select Intelligent Platform Management Interface (IPMI).

Review the ASM instance OS privilege groups.

Choose the grid software installation path and base directory.

Choose the grid inventory directory.

The prerequisite check reports a resolv.conf error; it is caused by the missing DNS configuration and can be ignored.

Review the grid installation summary.

Start the installation.

The installation is copied to the other nodes.

When the grid installation finishes, it prompts to run the scripts orainstRoot.sh and root.sh as root, in that order (they must complete on rac1 before being run on the other nodes).

Run the scripts on rac1:

[root@rac1 rpm]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac1 rpm]# /u01/app/
11.2.0/ grid/ oracle/ oraInventory/
[root@rac1 rpm]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to upstart
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded

ASM created and started successfully.

Disk Group OCR created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'...
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 496abcfc4e214fc9bf85cf755e0cc8e2.
Successfully replaced voting disk group with +OCR.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced

##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ----------
 1. ONLINE   496abcfc4e214fc9bf85cf755e0cc8e2 (/dev/raw/raw1) [OCR]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.OCR.dg' on 'rac1'
CRS-2676: Start of 'ora.OCR.dg' on 'rac1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Run the scripts on rac2:

[root@rac2 grid]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac2 grid]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to upstart
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster … succeeded
After the scripts complete, click OK, then Next to continue.

An error appears here.

Check the log as prompted:

[grid@rac1 grid]$ vi /u01/app/oraInventory/logs/installActions2016-04-10_04-57-29PM.log
Search for the error in command mode: /ERROR
WARNING:
INFO: Completed Plugin named: Oracle Cluster Verification Utility
INFO: Checking name resolution setup for "scan-ip"...
INFO: ERROR:
INFO: PRVG-1101 : SCAN name "scan-ip" failed to resolve
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "scan-ip" (IP address: 192.168.248.110) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "scan-ip"
INFO: Verification of SCAN VIP and Listener setup failed
The error log shows this is again the missing resolv.conf/DNS configuration; it can be ignored.

Installation complete.

The grid inventory location is shown.

At this point the grid clusterware installation is finished.

Note: on RHEL 7, running the root script fails:
peer user cert
pa user cert
Adding Clusterware entries to inittab
ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2018-07-15 11:52:25.836:
[client(27869)]CRS-2101:The OLR was formatted using version 3.
2018-07-15 11:56:27.912:
[client(28821)]CRS-2101:The OLR was formatted using version 3.

/paic/app/11.2.0/grid/perl/bin/perl -I/paic/app/11.2.0/grid/perl/lib -I/paic/app/11.2.0/grid/crs/install /paic/app/11.2.0/grid/crs/install/rootcrs.pl execution failed

Solution:

On RHEL 7, ohasd must be set up as a systemd service before root.sh is run.

Steps:

  1. As root, create the service file

touch /usr/lib/systemd/system/ohas.service
chmod 777 /usr/lib/systemd/system/ohas.service

  2. Add the following content to the newly created ohas.service file

cat /usr/lib/systemd/system/ohas.service
[Unit]
Description=Oracle High Availability Services
After=syslog.target

[Service]
ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1
Type=simple
Restart=always

[Install]
WantedBy=multi-user.target

  3. As root, run the following commands

systemctl daemon-reload
systemctl enable ohas.service
systemctl start ohas.service

  4. Check the service status

[root@rac1 init.d]# systemctl status ohas.service
ohas.service - Oracle High Availability Services
Loaded: loaded (/usr/lib/systemd/system/ohas.service; enabled)
Active: failed (Result: start-limit) since Sun 2018-07-15 13:03:43 CST; 8s ago
Process: 38891 ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1 Type=simple (code=exited, status=203/EXEC)
Main PID: 38891 (code=exited, status=203/EXEC)

Jul 15 13:03:43 cnsz002 systemd[1]: ohas.service: main process exited, code=exited, status=203/EXEC
Jul 15 13:03:43 cnsz002 systemd[1]: Unit ohas.service entered failed state.
Jul 15 13:03:43 cnsz002 systemd[1]: ohas.service holdoff time over, scheduling restart.
Jul 15 13:03:43 cnsz002 systemd[1]: Stopping Oracle High Availability Services…
Jul 15 13:03:43 cnsz002 systemd[1]: Starting Oracle High Availability Services…
Jul 15 13:03:43 cnsz002 systemd[1]: ohas.service start request repeated too quickly, refusing to start.
Jul 15 13:03:43 cnsz002 systemd[1]: Failed to start Oracle High Availability Services.
Jul 15 13:03:43 cnsz002 systemd[1]: Unit ohas.service entered failed state.
Hint: Some lines were ellipsized, use -l to show in full.

The status is failed at this point because the /etc/init.d/init.ohasd file does not exist yet.

root.sh can now be run without hitting the "ohasd failed to start" error.

If the error still occurs, the likely cause is that ohas.service did not start right after root.sh created init.ohasd. Workaround (scripted below):

While root.sh runs, keep watching /etc/init.d until the init.ohasd file appears, then immediately start the service manually with: systemctl start ohas.service

[root@rac1 init.d]# systemctl status ohas.service
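
That workaround can be scripted as a small watch loop, run as root in a second terminal before launching root.sh (a sketch):

while [ ! -e /etc/init.d/init.ohasd ]; do
    sleep 1                        # wait for root.sh to create the init script
done
systemctl start ohas.service
systemctl status ohas.service      # should now report active (running)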
2. Resource checks after installing grid
Run the following commands as the grid user.
[root@rac1 ~]# su - grid

Check the CRS status:

[grid@rac1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
Check the Clusterware resources:

[grid@rac1 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host
----------------------------------------------------------------------
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    rac1
ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    rac1
ora.OCR.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    rac1
ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    rac1
ora.cvu        ora.cvu.type   0/5    0/0    ONLINE    ONLINE    rac1
ora.gsd        ora.gsd.type   0/5    0/     OFFLINE   OFFLINE
ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    rac1
ora.oc4j       ora.oc4j.type  0/1    0/2    ONLINE    ONLINE    rac1
ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    rac1
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    rac1
ora....C1.lsnr application    0/5    0/0    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    0/5    0/0    OFFLINE   OFFLINE
ora.rac1.ons   application    0/3    0/0    ONLINE    ONLINE    rac1
ora.rac1.vip   ora....t1.type 0/0    0/0    ONLINE    ONLINE    rac1
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    rac2
ora....C2.lsnr application    0/5    0/0    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    0/5    0/0    OFFLINE   OFFLINE
ora.rac2.ons   application    0/3    0/0    ONLINE    ONLINE    rac2
ora.rac2.vip   ora....t1.type 0/0    0/0    ONLINE    ONLINE    rac2
ora.scan1.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    rac1
Check the cluster nodes:

[grid@rac1 ~]$ olsnodes -n
rac1 1
rac2 2
Check the Oracle TNS listener processes on both nodes:

[grid@rac1 ~]$ ps -ef|grep lsnr|grep -v 'grep'|grep -v 'ocfs'|awk '{print$9}'
LISTENER_SCAN1
LISTENER
Confirm Oracle ASM is functioning for the Oracle Clusterware files:
If the OCR and voting disk files are stored on Oracle ASM, then as the Grid Infrastructure installation owner, use the following command to confirm the installed Oracle ASM is running:

[grid@rac1 ~]$ srvctl status asm -a
ASM is running on rac2,rac1
ASM is enabled.
3. Create ASM disk groups for data and fast recovery
The official documentation specifies the sizes required for OCR, voting disk, database, and recovery areas under each redundancy policy.

Run on node rac1 only.
Switch to the grid user:
[root@rac1 ~]# su - grid
Launch asmca:
[grid@rac1 ~]$ asmca

The OCR disk group configured during the grid installation is already listed.

Add the DATA disk group: click Create and use raw device raw4.

Create the FRA disk group the same way, using raw device raw5.

ASM disk group overview.

ASM instances.
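
The same information is visible from the command line with asmcmd as the grid user (a quick check; your sizes and free space will differ):

[grid@rac1 ~]$ asmcmd lsdg             # DATA, FRA and OCR should show State MOUNTED
[grid@rac1 ~]$ asmcmd lsdsk -G DATA    # raw devices backing the DATA group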

Install Oracle Database software (RAC)
1. Installation walkthrough
Run on node rac1 only:
[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ cd db/database
[oracle@rac1 database]$ ./runInstaller

The graphical installer starts; skip the updates.

Select "Install database software only".

Select the "Oracle Real Application Clusters database installation" option (the default) and make sure all nodes are checked.

The SSH Connectivity step configures oracle-user equivalence between the nodes; since it was configured manually earlier, it can be skipped.

Select English as the language.

Select the Enterprise Edition.

Choose the Oracle software paths; ORACLE_BASE and ORACLE_HOME are the ones configured earlier.

Assign the OS groups that receive the oracle privileges.

The pre-installation checks run.

The two errors discussed earlier appear again; ignore them.

Possible error messages:

  1. Node Application Existence
    PRVF-4557: Node application "ora.rac1.vip" is offline on node "rac1"
    This means rac1 had a problem and its VIP failed over to rac2; when rac1 recovers, the VIP normally fails back automatically, or it can be relocated manually:
    crsctl relocate resource ora.cnsz001.vip

  2. Node Connectivity
    Check with:
    [grid@cnsz002 grid]$ ./runcluvfy.sh stage -post hwos -n cnsz001,cnsz002 -verbose

  3. Installing 11.2.0.4 on RHEL 7 or OL7 fails with "undefined reference to symbol 'B_DestroyKeyObject'"
    Source:
    Installation of Oracle 11.2.0.4 on OL7 fails with "undefined reference to symbol 'B_DestroyKeyObject'" error (Doc ID 1965691.1)

Applies to:
Oracle Database - Enterprise Edition - Version 11.2.0.4 to 11.2.0.4 [Release 11.2]
Oracle Database - Standard Edition - Version 11.2.0.4 to 11.2.0.4 [Release 11.2]
Linux x86-64

usr/bin/ld: /u01/app/oracle/product/11.2.0/dbhome_1/sysman/lib//libnmectl.a(nmectlt.o): undefined reference to symbol 'B_DestroyKeyObject'
/usr/bin/ld: note: 'B_DestroyKeyObject' is defined in DSO /u01/app/oracle/product/11.2.0/dbhome_1/lib/libnnz11.so so try adding it to the linker command line
/u01/app/oracle/product/11.2.0/dbhome_1/lib/libnnz11.so: could not read symbols: Invalid operation
collect2: error: ld returned 1 exit status

Cause:
Unpublished bug 19692824

Solution:

  1. Ignore the undefined symbol error during the 11.2.0.4 installation and continue; the software installation completes successfully with no further errors

  2. Download and apply patch 19692824

  3. Set the environment variables (ORACLE_HOME, PATH, and so on) for the 11.2.0.4 home, then recompile the failed target with the following command:

    $ make -f $ORACLE_HOME/sysman/lib/ins_emagent.mk agent nmhs
Review the RAC installation summary.

Start the installation; the software is copied to the other nodes automatically.

After the installation, run the root script on every node as root:

[root@rac1 etc]# /u01/app/oracle/product/11.2.0/db_1/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
Installation finished; click Close.

At this point the Oracle software is installed on both RAC nodes; the final installer screen shows the installation log location.

2. Create the cluster database
On node rac1, run dbca as the oracle user to create the RAC database:

[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ dbca

Select "Create a Database".

Select Custom Database (General Purpose also works).

Choose the Admin-Managed configuration type, enter the global database name orcl with orcl as the SID prefix for each node's instance, and select both nodes.

Keep the defaults: configure OEM and enable the automatic database maintenance tasks.

Set a single password, oracle, for the sys, system, dbsnmp, and sysman users.

Use ASM storage with OMF (Oracle Managed Files), and pick the DATA disk group created earlier for the data area.

Set the ASM password to oracle.

Specify the flash recovery area on the FRA disk group created earlier; archiving is not enabled.

Select the components.

Select the AL32UTF8 character set.

Accept the default data storage settings.

Start creating the database, checking the option to generate the database creation scripts.

Review the database summary.

Component installation runs.

Database creation completes.

RAC maintenance
1. Check service status
Ignore the gsd entries (expected to be OFFLINE).

[root@rac1 ~]# su - grid
[grid@rac1 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.DATA.dg    ora....up.type ONLINE    ONLINE    rac1
ora.FRA.dg     ora....up.type ONLINE    ONLINE    rac1
ora....ER.lsnr ora....er.type ONLINE    ONLINE    rac1
ora....N1.lsnr ora....er.type ONLINE    ONLINE    rac1
ora.OCR.dg     ora....up.type ONLINE    ONLINE    rac1
ora.asm        ora.asm.type   ONLINE    ONLINE    rac1
ora.cvu        ora.cvu.type   ONLINE    ONLINE    rac1
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....network ora....rk.type ONLINE    ONLINE    rac1
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    rac1
ora.ons        ora.ons.type   ONLINE    ONLINE    rac1
ora.orcl.db    ora....se.type ONLINE    ONLINE    rac1
ora....SM1.asm application    ONLINE    ONLINE    rac1
ora....C1.lsnr application    ONLINE    ONLINE    rac1
ora.rac1.gsd   application    OFFLINE   OFFLINE
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   ora....t1.type ONLINE    ONLINE    rac1
ora....SM2.asm application    ONLINE    ONLINE    rac2
ora....C2.lsnr application    ONLINE    ONLINE    rac2
ora.rac2.gsd   application    OFFLINE   OFFLINE
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   ora....t1.type ONLINE    ONLINE    rac2
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    rac1
Check the cluster database status:
[grid@rac1 ~]$ srvctl status database -d orcl
Instance orcl1 is running on node rac1
Instance orcl2 is running on node rac2

2. Check CRS status
Check the CRS status of the local node:

[grid@rac1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
Check the CRS status cluster-wide:

[grid@rac1 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
3. View node configuration for the cluster
[grid@rac1 ~]$ olsnodes
rac1
rac2

[grid@rac1 ~]$ olsnodes -n
rac1 1
rac2 2

[grid@rac1 ~]$ olsnodes -n -i -s -t
rac1 1 rac1-vip Active Unpinned
rac2 2 rac2-vip Active Unpinned
4. View the clusterware voting disk information
[grid@rac1 ~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ----------
 1. ONLINE   496abcfc4e214fc9bf85cf755e0cc8e2 (/dev/raw/raw1) [OCR]
Located 1 voting disk(s).
5. View the cluster SCAN VIP information
[grid@rac1 ~]$ srvctl config scan
SCAN name: scan-ip, Network: 1/192.168.248.0/255.255.255.0/eth0
SCAN VIP name: scan1, IP: /scan-ip/192.168.248.110
View the cluster SCAN listener information:

[grid@rac1 ~]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
6. Start and stop the cluster database
Stop/start the database across the whole cluster:
As the grid user:
[grid@rac1 ~]$ srvctl stop database -d orcl
[grid@rac1 ~]$ srvctl start database -d orcl

Stop the clusterware stack:
As root:
[root@rac1 bin]# pwd
/u01/app/11.2.0/grid/bin
[root@rac1 bin]# ./crsctl stop crs
Note that this actually stops the stack on the current node only; see the cluster-wide variant below.
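
To stop the stack on every node from a single session, 11.2 also provides a cluster-wide variant (run as root from the grid home; a sketch):

[root@rac1 bin]# ./crsctl stop cluster -all     # stops the CRS stack on all nodes
[root@rac1 bin]# ./crsctl start cluster -all    # brings it back up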

EM management
Run as the oracle user:

[oracle@rac1 ~]$ emctl status dbconsole
[oracle@rac1 ~]$ emctl start dbconsole
[oracle@rac1 ~]$ emctl stop dbconsole
Local sqlplus connection
Install the Oracle client on Windows.
Edit tnsnames.ora:
D:\develop\app\orcl\product\11.2.0\client_1\network\admin\tnsnames.ora
Add:

RAC_ORCL =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.248.110)(PORT = 1521))
)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = orcl)
)
)
The HOST here is the scan-ip.

C:\Users\sxtcx>sqlplus sys/oracle@RAC_ORCL as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Thu Apr 14 14:37:30 2016

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select instance_name, status from v$instance;

INSTANCE_NAME    STATUS
---------------- ------------
orcl1            OPEN
When a second command window connects, its instance turns out to be orcl2, showing that the SCAN IP provides load balancing across the instances.
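
To see both instances from any single session, query the cluster-wide view gv$instance instead of v$instance:

SQL> select inst_id, instance_name, status from gv$instance;

   INST_ID INSTANCE_NAME    STATUS
---------- ---------------- ------------
         1 orcl1            OPEN
         2 orcl2            OPEN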

References:
Oracle 11g R2 + RAC + ASM + Oracle Linux 6.4 installation walkthrough (illustrated)

VMware vSphere + RedHat 6.3 + Oracle 11g RAC cluster database setup

Building an Oracle 11g RAC test environment with VMware for Linux

https://blog.csdn.net/u014595668/article/details/51160783
