Oracle RAC 11g R2 (11.2.0.4) Deployment Guide


Environment Preparation

Hosts: node1, node2
OS: Oracle Linux 6.7
Software: Oracle 11g RAC (11.2.0.4)
Per-node resources: 32 logical cores, 48 GB RAM
Login method: ssh

Network plan:
  public (eth0):  192.168.*.240 (node1), 192.168.*.239 (node2)
  VIP:            192.168.*.238 (node1-vip), 192.168.*.237 (node2-vip)
  private (eth1): 10.10.10.11 (node1-priv), 10.10.10.12 (node2-priv)
  SCAN (rac_scan): 192.168.*.236

System disk layout (per node):
  vda1: /boot 500 MB ext4
  vda2: swap 20 GB; /dev/shm 48 GB; / 90 GB ext4; /home 60 GB ext4
  vda3: /u01 100 GB ext4
  sdj1: /backup 1 TB ext4 (node1 only)

ASM disk groups:
  ASM_DATA  3 x 300 GB
  ASM_FRA   2 x 300 GB
  OCR_VOTE  4 x 1 GB

Accounts: root(****), oracle(****), grid(****), sys(****)

Directories:
  ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
  GRID_HOME=/u01/11.2.0/grid

Shared disk partition list:

  Purpose          Partition    Size
  OCR+VOTE         /dev/sda1    1 GB
                   /dev/sdb1    1 GB
                   /dev/sdc1    1 GB
                   /dev/sdd1    1 GB
  DATABASE         /dev/sde1    300 GB
                   /dev/sdf1    300 GB
                   /dev/sdg1    300 GB
  RECOVERY AREA    /dev/sdh1    300 GB
                   /dev/sdi1    300 GB

Hardware environment checks:

  Check item    How to check
  Memory        grep -i memtotal /proc/meminfo
  Swap space    /sbin/swapon -s
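The two checks above can be wrapped in one small script that also compares the values against a minimum. The 1.5 GB RAM threshold here is an assumption taken from the 11gR2 Grid Infrastructure install guide, not from this document:

```shell
#!/bin/sh
# Report memory and swap, and flag memory against an illustrative
# 11gR2 minimum (1536 MB, per the install guide -- an assumption here).

mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)

mem_mb=$((mem_kb / 1024))
swap_mb=$((swap_kb / 1024))

echo "MemTotal:  ${mem_mb} MB"
echo "SwapTotal: ${swap_mb} MB"

if [ "$mem_mb" -ge 1536 ]; then
    echo "memory: OK"
else
    echo "memory: below the 1536 MB minimum"
fi
```

Run it on both nodes before starting the install.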

Required software packages

Install the following with yum:

yum install -y binutils*
yum install -y compat-libstdc*
yum install -y elfutils-libelf*
yum install -y gcc*
yum install -y gcc-c*
yum install -y glibc*
yum install -y libaio*
yum install -y libgcc*
yum install -y libstdc*
yum install -y compat-libcap1*
yum install -y make*
yum install -y sysstat*
yum install -y unixODBC*
yum install -y ksh*
yum install -y vnc*

Install the following with rpm (download them beforehand):

cvuqdisk-1.0.10-1
oracleasmlib-2.0.12-1.el6.x86_64
oracleasm-support-2.1.8-1.el6.x86_64
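Rather than invoking yum once per pattern, the whole list can be driven from one variable. A sketch (the echo makes it a dry run; drop it on a real node to actually install):

```shell
# Install every required package group in a loop.
# The list is copied from the table above; echo keeps this a dry run.
pkgs="binutils* compat-libstdc* elfutils-libelf* gcc* gcc-c* glibc* \
libaio* libgcc* libstdc* compat-libcap1* make* sysstat* unixODBC* ksh* vnc*"

for p in $pkgs; do
    echo yum install -y "$p"   # drop the echo to actually install
done
```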

RAC Installation Steps

Network and hostname configuration

1  Edit /etc/sysconfig/network on node1

--No gateway needs to be set here

NETWORKING=yes

NETWORKING_IPV6=no

HOSTNAME=node1

2  Edit /etc/sysconfig/network-scripts/ifcfg-eth0 on node1

--No MAC address needs to be configured in this file

DEVICE=eth0

BOOTPROTO=static

IPADDR=192.168.*.240

NETMASK=255.255.255.0

GATEWAY=192.168.*.1

ONBOOT=yes

3  Edit /etc/sysconfig/network-scripts/ifcfg-eth1 on node1

--The private interconnect IP needs no gateway

--No MAC address needs to be configured in this file

DEVICE=eth1

BOOTPROTO=static

IPADDR=10.10.10.11

NETMASK=255.255.255.0

ONBOOT=yes

4  Restart node1's network service with service network restart, or reboot the system so the new hostname takes effect at the same time

5  Edit /etc/sysconfig/network on node2

--No gateway needs to be set here

NETWORKING=yes

NETWORKING_IPV6=no

HOSTNAME=node2

6  Edit /etc/sysconfig/network-scripts/ifcfg-eth0 on node2

--No MAC address needs to be configured in this file

DEVICE=eth0

BOOTPROTO=static

IPADDR=192.168.*.239

NETMASK=255.255.255.0

GATEWAY=192.168.*.254

ONBOOT=yes

7  Edit /etc/sysconfig/network-scripts/ifcfg-eth1 on node2

--The private interconnect IP needs no gateway

--No MAC address needs to be configured in this file

DEVICE=eth1

BOOTPROTO=static

IPADDR=10.10.10.12

NETMASK=255.255.255.0

ONBOOT=yes

8  Restart node2's network service with service network restart, or reboot the system so the new hostname takes effect at the same time

Partitioning the local disks (omitted)

 

Creating users and groups

1  Create the users on both node1 and node2

--User and group IDs must be identical on both nodes

groupadd  -g 200 oinstall

groupadd  -g 201 dba

groupadd  -g 202 oper

groupadd  -g 203 asmadmin

groupadd  -g 204 asmoper

groupadd  -g 205 asmdba

useradd -u 200 -g oinstall -G dba,asmdba,oper oracle

useradd -u 201 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid

--Set the user passwords

[root@node1 ~]# passwd oracle

Changing password for user oracle.

New UNIX password:

BAD PASSWORD: it is too simplistic/systematic

Retype new UNIX password:

passwd: all authentication tokens updated successfully.

[root@node1 ~]# passwd grid

Changing password for user grid.

New UNIX password:

BAD PASSWORD: it is too simplistic/systematic

Retype new UNIX password:

passwd: all authentication tokens updated successfully.

2  Create the required directories under /u01 on both node1 and node2

--After creating the directories, verify their owner and group

mkdir -p /u01/app/oraInventory

chown -R grid:oinstall /u01/app/oraInventory/

chmod -R 775 /u01/app/oraInventory/

mkdir -p /u01/11.2.0/grid

chown -R grid:oinstall /u01/11.2.0/grid/

chmod -R 775 /u01/11.2.0/grid/

mkdir -p /u01/app/oracle

mkdir -p /u01/app/oracle/cfgtoollogs

mkdir -p /u01/app/oracle/product/11.2.0/db_1

chown -R oracle:oinstall /u01/app/oracle

chmod -R 775 /u01/app/oracle

3  Set the oracle user's environment variables on node1

--Note the ORACLE_SID setting

[root@node1 ~]# su - oracle

[oracle@node1 ~]$ vi .bash_profile

 

PATH=$PATH:$HOME/bin

 

# .bash_profile

 

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

        . ~/.bashrc

fi

 

# User specific environment and startup programs

export EDITOR=vi

export ORACLE_SID=prod1

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin

umask 022


4  Set the oracle user's environment variables on node2

--Note the ORACLE_SID setting

[root@node2 ~]# su - oracle

[oracle@node2 ~]$ vi .bash_profile

 

# .bash_profile

PATH=$PATH:$HOME/bin

 

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

        . ~/.bashrc

fi

 

# User specific environment and startup programs

export EDITOR=vi

export ORACLE_SID=prod2

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin

umask 022

5  Set the grid user's environment variables on node1

--Note the ORACLE_SID setting

--For the grid user, setting either GRID_HOME or ORACLE_HOME would suffice (GRID_HOME is recommended); in this document both are set to the same value

[oracle@node1 ~]$ su - grid

Password:

[grid@node1 ~]$ vi .bash_profile

 

# .bash_profile

 

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

        . ~/.bashrc

fi

 

# User specific environment and startup programs

export EDITOR=vi

export ORACLE_SID=+ASM1

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=/u01/11.2.0/grid

export GRID_HOME=/u01/11.2.0/grid

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export THREADS_FLAG=native

export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin

umask 022

6  Set the grid user's environment variables on node2

--Note the ORACLE_SID setting

[oracle@node2 ~]$ su - grid

Password:

[grid@node2 ~]$ vi .bash_profile

 

 

PATH=$PATH:$HOME/bin

 

# .bash_profile

 

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

        . ~/.bashrc

fi

 

# User specific environment and startup programs

export EDITOR=vi

export ORACLE_SID=+ASM2

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=/u01/11.2.0/grid

export GRID_HOME=/u01/11.2.0/grid

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export THREADS_FLAG=native

export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin

umask 022

Editing the hosts file

1  Configure /etc/hosts on node1

--The VIP addresses are reachable only after CRS is installed and the cluster services are running

[root@node1 ~]# su - root

[root@node1 ~]# vi /etc/hosts

 

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1               localhost

 

192.168.*.240          node1

192.168.*.238           node1-vip

10.10.10.11             node1-priv

 

192.168.*.239         node2

192.168.*.237         node2-vip

10.10.10.12             node2-priv

 

192.168.*.236           rac_scan

2  Configure /etc/hosts on node2

--Copy node1's /etc/hosts to node2 with scp

[root@node1 ~]# scp /etc/hosts node2:/etc

The authenticity of host 'node2   (192.168.8.215)' can't be established.

RSA key fingerprint is 16:28:88:50:27:30:92:cb:49:be:55:61:f6:c2:a1:3f.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'node2,192.168.8.215' (RSA) to the list of known hosts.

root@node2's password:

Permission denied, please try again.

root@node2's password:

hosts                                                                                              100%  380       0.4KB/s   00:00

--On node2, verify that /etc/hosts was configured correctly

[oracle@node2 ~]$ cat /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1               localhost

 

192.168.*.240          node1

192.168.*.238           node1-vip

10.10.10.11             node1-priv

 

192.168.*.239         node2

192.168.*.237         node2-vip

10.10.10.12             node2-priv

 

192.168.*.236           rac_scan

Kernel parameters, resource limits, login file, profile file, and disabling NTP

1  Configure the kernel parameters on node1

[root@node1 ~]# vi /etc/sysctl.conf

--Append the following parameters to the end of the file; if a parameter is already present, keep the larger of the two values

 

fs.aio-max-nr = 1048576

fs.file-max = 6815744

kernel.shmall = 2097152

kernel.shmmax = 4294967295

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

--Apply the kernel parameters

[root@node1 ~]# sysctl -p
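kernel.shmmax and kernel.shmall are the values most often tuned per host. A common rule of thumb (an assumption here, not taken from this document) sets shmmax to about half of physical RAM and shmall to shmmax divided by the page size. A sketch that computes suggested values:

```shell
# Suggest shmmax/shmall from physical RAM.
# Rule of thumb (assumed): shmmax = RAM/2 in bytes,
# shmall = shmmax / page size (shmall is counted in pages).
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
page=$(getconf PAGE_SIZE)

shmmax=$((mem_kb * 1024 / 2))   # bytes
shmall=$((shmmax / page))       # pages

echo "kernel.shmmax = $shmmax"
echo "kernel.shmall = $shmall"
```

Compare the output against the fixed values above and keep the larger, as the comment in the sysctl step says.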

2  Configure the resource limits file on node1

[root@node1 ~]# vi /etc/security/limits.conf

--Append the following to the end of the file

oracle soft nproc 2047

oracle hard nproc 16384

oracle soft nofile 1024

oracle hard nofile 65536

oracle soft stack 10240

grid soft nproc 2047

grid hard nproc 16384

grid soft nofile 1024

grid hard nofile 65536

grid soft stack 10240



3  Configure the PAM login file on node1

[root@node1 ~]# vi /etc/pam.d/login

--Append the following line so that the resource limits take effect when a user logs in

session required /lib/security/pam_limits.so


4  Edit /etc/profile on node1

[root@node1 ~]# vi /etc/profile

--Append the following to the end of the file to apply the resource limits

if [ "$USER" = "oracle" ] || [ "$USER" = "grid" ]; then
    if [ "$SHELL" = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi

5  Configure the kernel parameter, resource limits, login, and profile files on node2 (copy all four files from node1)

--Transfer node1's kernel parameter, resource limits, login, and profile files to node2

[root@node1 ~]# scp /etc/sysctl.conf node2:/etc

root@node2's password:

sysctl.conf                                   100%   1303     1.3KB/s   00:00     

[root@node1 ~]# scp /etc/security/limits.conf node2:/etc/security

root@node2's password:

limits.conf                                     100% 2034     2.0KB/s   00:00   

[root@node1 ~]# scp /etc/pam.d/login node2:/etc/pam.d/

root@node2's password:

login                                           100%  688     0.7KB/s     00:00   

[root@node1 ~]# scp /etc/profile node2:/etc

root@node2's password:

profile                                       100% 1181     1.2KB/s   00:00 

--On node2, run the following command to apply the kernel parameters

[root@node2 etc]# sysctl -p

6  Disable the ntp and sendmail services on node1 and node2

[root@node1 ~]# chkconfig ntpd off

[root@node1 ~]# mv /etc/ntp.conf /etc/ntp.conf.bak

[root@node1 ~]# chkconfig sendmail off

[root@node2 ~]# chkconfig ntpd off

[root@node2 ~]# mv /etc/ntp.conf /etc/ntp.conf.bak

[root@node2 ~]# chkconfig sendmail off

Partitioning the shared disks

1  Partition the shared disks on node1

[root@node1 ~]# fdisk

(details omitted)

2  Check the shared disk partitions on node2

[root@node2 ~]# fdisk -l

Verify that the partition table matches what was created on node1
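The interactive fdisk dialogue is the same for every shared disk: one primary partition spanning the whole device. The answer sequence can be kept in a variable and piped in, which makes the partitioning repeatable across sda through sdi. A sketch (the device name is a placeholder, and printf only prints the answers here; piping them into fdisk is destructive):

```shell
# fdisk answers for one whole-disk primary partition:
# n (new), p (primary), 1 (partition number), two empty lines
# (accept default first/last sector), w (write).
disk=/dev/sda   # placeholder; repeat for sdb1..sdi1's parent disks
answers='n
p
1


w'
printf '%s\n' "$answers"
# real (destructive) run on node1: printf '%s\n' "$answers" | fdisk "$disk"
```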

 

Installing the ASMLib packages

1  Install the ASMLib packages on node1

--Check the kernel version; it must match the version in the oracleasm kernel module package name (here oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm)

[root@node1 asm]# uname -a

Linux node1 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:43 EDT 2010 i686 i686 i386 GNU/Linux

 

[root@node1 soft]# ls

asm  linux_11gR2_database_1of2.zip  linux_11gR2_database_2of2.zip  linux_11gR2_grid.zip

[root@node1 soft]# cd asm

[root@node1 asm]# ls

oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm  oracleasmlib-2.0.4-1.el5.i386.rpm  oracleasm-support-2.1.3-1.el5.i386.rpm

[root@node1 asm]# rpm -ivh *

warning: oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159

Preparing...                  ########################################### [100%]

   1:oracleasm-support      ###########################################   [ 33%]

   2:oracleasm-2.6.18-194.el###########################################   [ 67%]

   3:oracleasmlib             ########################################### [100%]

[root@node1 asm]# rpm -qa|grep oracleasm

oracleasmlib-2.0.4-1.el5

oracleasm-support-2.1.3-1.el5

oracleasm-2.6.18-194.el5-2.0.5-1.el5

2  Install the ASMLib packages on node2

--Copy the ASM packages to node2 with scp

[root@node1 soft]# scp -r asm node2:/home/oracle

root@node2's password:

oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm                                                      100%  127KB 127.0KB/s   00:00     

oracleasmlib-2.0.4-1.el5.i386.rpm                                                                  100%   14KB  13.6KB/s     00:00   

oracleasm-support-2.1.3-1.el5.i386.rpm                                                             100%   83KB  83.4KB/s     00:00

 

--Install the packages on node2

[root@node2 ~]# cd /home/oracle

[root@node2 oracle]# ls

asm

[root@node2 oracle]# cd asm

[root@node2 asm]# ls

oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm  oracleasmlib-2.0.4-1.el5.i386.rpm  oracleasm-support-2.1.3-1.el5.i386.rpm

[root@node2 asm]# rpm -ivh *

warning: oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm: Header V3 DSA   signature: NOKEY, key ID 1e5e0159

Preparing...                  ########################################### [100%]

   1:oracleasm-support      ###########################################   [ 33%]

     2:oracleasm-2.6.18-194.el########################################### [   67%]

   3:oracleasmlib             ########################################### [100%]

3  Configure ASMLib on node1

[root@node1 ~]# service   oracleasm configure

Configuring the Oracle ASM library driver.

 

This will configure the on-boot properties of the Oracle ASM   library

driver.  The following questions will   determine whether the driver is

loaded on boot and what permissions it will have.  The current values

will be shown in brackets ('[]').  Hitting <ENTER> without typing an

answer will keep that current value.  Ctrl-C will abort.

 

Default user to own the driver interface []: grid

Default group to own the driver interface []: asmadmin

Start Oracle ASM library driver on boot (y/n) [n]: y

Scan for Oracle ASM disks on boot (y/n) [y]:   

Writing Oracle ASM library driver configuration: done

Initializing the Oracle ASMLib driver: [    OK  ]

Scanning the system for Oracle ASMLib disks: [    OK  ]

4  Configure ASMLib on node2

[root@node2 ~]# service   oracleasm configure

Configuring the Oracle ASM library driver.

 

This will configure the on-boot properties of the Oracle ASM   library

driver.  The following questions will   determine whether the driver is

loaded on boot and what permissions it will have.  The current values

will be shown in brackets ('[]').  Hitting <ENTER> without typing an

answer will keep that current value.  Ctrl-C will abort.

 

Default user to own the driver interface []: grid

Default group to own the driver interface []: asmadmin

Start Oracle ASM library driver on boot (y/n) [n]: y

Scan for Oracle ASM disks on boot (y/n) [y]:

Writing Oracle ASM library driver configuration: done

Initializing the Oracle ASMLib driver: [    OK  ]

Scanning the system for Oracle ASMLib disks: [    OK  ]

5  Create the ASM disks on node1

[root@node1 ~]# service oracleasm createdisk OCR_VOTE1 /dev/sda1

Marking disk "OCR_VOTE1" as an ASM disk: [    OK  ]

[root@node1 ~]# service oracleasm createdisk OCR_VOTE2 /dev/sdb1

Marking disk "OCR_VOTE2" as an ASM disk: [    OK  ]

[root@node1 ~]# service oracleasm createdisk OCR_VOTE3 /dev/sdc1

Marking disk "OCR_VOTE3" as an ASM disk: [    OK  ]

[root@node1 ~]# service oracleasm createdisk OCR_VOTE4 /dev/sdd1

Marking disk "OCR_VOTE4" as an ASM disk: [    OK  ]

[root@node1 ~]# service oracleasm createdisk ASM_DATA1 /dev/sde1

Marking disk "ASM_DATA1" as an ASM disk: [    OK  ]

[root@node1 ~]# service oracleasm createdisk ASM_DATA2 /dev/sdf1

Marking disk "ASM_DATA2" as an ASM disk: [  OK  ]

[root@node1 ~]# service oracleasm createdisk ASM_DATA3 /dev/sdg1

Marking disk "ASM_DATA3" as an ASM disk: [    OK  ]

[root@node1 ~]# service oracleasm createdisk ASM_FRA1 /dev/sdh1

Marking disk "ASM_FRA1" as an ASM disk: [    OK  ]

[root@node1 ~]# service oracleasm createdisk ASM_FRA2 /dev/sdi1

Marking disk "ASM_FRA2" as an ASM disk: [    OK  ]

[root@node1 ~]# service oracleasm listdisks

ASM_DATA1

ASM_DATA2

ASM_DATA3

ASM_FRA1

ASM_FRA2

OCR_VOTE1

OCR_VOTE2

OCR_VOTE3

OCR_VOTE4
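The nine createdisk calls all follow one pattern, so they can be driven from a label-to-device map that mirrors the shared-disk table. A sketch (echo keeps it a dry run; drop it on node1 to actually label the disks):

```shell
# Map ASM disk labels to block devices (from the shared-disk table above)
# and emit the oracleasm createdisk commands. echo keeps this a dry run.
map="OCR_VOTE1:/dev/sda1 OCR_VOTE2:/dev/sdb1 OCR_VOTE3:/dev/sdc1 OCR_VOTE4:/dev/sdd1 \
ASM_DATA1:/dev/sde1 ASM_DATA2:/dev/sdf1 ASM_DATA3:/dev/sdg1 \
ASM_FRA1:/dev/sdh1 ASM_FRA2:/dev/sdi1"

for entry in $map; do
    label=${entry%%:*}   # text before the colon
    dev=${entry#*:}      # text after the colon
    echo service oracleasm createdisk "$label" "$dev"   # drop echo on node1
done
```

Keeping the map in one place also makes it easy to spot label typos like the ones a manual transcript can introduce.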

6  View the ASM disks on node2

--Scan for the disks on node2

[root@node2 ~]# service oracleasm scandisks

Scanning the system for Oracle ASMLib disks: [    OK  ]

--List them

[root@node2 ~]# service oracleasm listdisks

ASM_DATA1

ASM_DATA2

ASM_DATA3

ASM_FRA1

ASM_FRA2

OCR_VOTE1

OCR_VOTE2

OCR_VOTE3

OCR_VOTE4

Setting up SSH user equivalence for the grid user

(The Oracle installer can also establish equivalence, but if it is not set up manually beforehand, the pre-installation checks cannot pass, so problems cannot be found before the install.)

The procedure is as follows:

Generate keys on each node:

[root@node1 ~]# su - grid
[grid@node1 ~]$ mkdir ~/.ssh
[grid@node1 ~]$ chmod 700 ~/.ssh
[grid@node1 ~]$ ssh-keygen -t rsa
[grid@node1 ~]$ ssh-keygen -t dsa
[root@node2 ~]# su - grid
[grid@node2 ~]$ mkdir ~/.ssh
[grid@node2 ~]$ chmod 700 ~/.ssh
[grid@node2 ~]$ ssh-keygen -t rsa
[grid@node2 ~]$ ssh-keygen -t dsa

Aggregate the public keys on node1:

[grid@node1 ~]$ touch ~/.ssh/authorized_keys
[grid@node1 ~]$ cd ~/.ssh
[grid@node1 .ssh]$ ssh node1 cat ~/.ssh/id_rsa.pub >> authorized_keys
[grid@node1 .ssh]$ ssh node2 cat ~/.ssh/id_rsa.pub >> authorized_keys
[grid@node1 .ssh]$ ssh node1 cat ~/.ssh/id_dsa.pub >> authorized_keys
[grid@node1 .ssh]$ ssh node2 cat ~/.ssh/id_dsa.pub >> authorized_keys

Copy the aggregated key file from node1 to node2:

[grid@node1 .ssh]$ pwd
/home/grid/.ssh
[grid@node1 .ssh]$ scp authorized_keys node2:`pwd`
grid@node2's password:
authorized_keys 100% 1644 1.6KB/s 00:00

Set the permissions of the key file. On every node run:

$ chmod 600 ~/.ssh/authorized_keys

Enable user equivalence on the node where you will run the OUI (node1 here), as the grid user:

[grid@node1 .ssh]$ exec /usr/bin/ssh-agent $SHELL
[grid@node1 .ssh]$ ssh-add
Identity added: /home/grid/.ssh/id_rsa (/home/grid/.ssh/id_rsa)
Identity added: /home/grid/.ssh/id_dsa (/home/grid/.ssh/id_dsa)

Verify the ssh configuration. As the grid user, run on every node:

ssh node1 date
ssh node2 date
ssh node1-priv date
ssh node2-priv date

If each command prints the date without prompting for a password, the ssh setup is correct. Run all of these commands on both nodes; each command requires answering "yes" the first time it runs. If you skip this, the clusterware installation will fail with "The specified nodes are not clusterable" even though equivalence is configured, because the first connection to each host must still confirm its key before access is truly unattended.

Remember: the goal of SSH equivalence is simply password-free SSH access between all nodes.
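The full verification matrix (every node against every hostname, public and private) can be generated so that no pair is missed. A sketch (echo keeps it a dry run; drop it to actually open each connection and answer the first-time "yes" prompts):

```shell
# Print the ssh verification command for every hostname that must be
# reachable without a password. echo keeps this a dry run.
hosts="node1 node2 node1-priv node2-priv"

for h in $hosts; do
    echo ssh "$h" date   # drop the echo to actually test the connection
done
```

Run the resulting commands as the grid user on both nodes.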

Disabling the firewall

1) Takes effect after reboot:
   enable:  chkconfig iptables on
   disable: chkconfig iptables off
2) Takes effect immediately (lost after reboot):
   enable:  service iptables start
   disable: service iptables stop

 

Installing GRID

1  Pre-installation environment check

--Unpack the grid installation archive on node1

[grid@node1 ~]$ cd /soft/grid/

[grid@node1 grid]$ ls

doc  install  response    rpm  runcluvfy.sh  runInstaller  sshsetup    stage  welcome.html

[grid@node1 grid]$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -verbose -fixup

2  Install the required software on node1 and node2

Install the packages listed in the software preparation section at the beginning of this document


3  Add swap space on node1

[root@node1 yum.repos.d]# free -m

             total       used       free     shared      buffers     cached

Mem:          1562         1381        181            0         33         1216

-/+ buffers/cache:          131       1430

Swap:         2047            0       2047

[root@node1 yum.repos.d]# dd if=/dev/zero of=/u01/swpf1 bs=1024k count=2048

2048+0 records in

2048+0 records out

2147483648 bytes (2.1 GB) copied, 12.2324 seconds, 176 MB/s

[root@node1 yum.repos.d]# mkswap -c /u01/swpf1

Setting up swapspace version 1, size = 2147479 kB

[root@node1 yum.repos.d]# swapon -a /u01/swpf1

[root@node1 yum.repos.d]# free -m

             total       used       free     shared      buffers     cached

Mem:          1562         1523         39            0          7         1384

-/+ buffers/cache:          130       1431

Swap:         4095            0       4095

--Append the following to /etc/fstab

/u01/swpf1              swap                    swap     defaults       0 0

 

4  Add swap space on node2

[root@node2 yum.repos.d]# dd if=/dev/zero of=/u01/swpf1 bs=1024k count=2048

2048+0 records in

2048+0 records out

2147483648 bytes (2.1 GB) copied, 12.6712 seconds, 169 MB/s

[root@node2 yum.repos.d]# mkswap -c /u01/swpf1

Setting up swapspace version 1, size = 2147479 kB

[root@node2 yum.repos.d]# swapon -a /u01/swpf1

--Append the following to /etc/fstab

/u01/swpf1              swap                    swap     defaults       0 0
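The 2 GB added above is an ad-hoc amount. The 11gR2 install guide sizes swap from RAM (up to 2 GB of RAM: 1.5x RAM; 2 to 16 GB: equal to RAM; above 16 GB: 16 GB). A sketch of that rule, restated here from memory rather than from this document, so verify the thresholds against the docs:

```shell
# Recommended swap size (MB) for a given RAM size (MB), following the
# 11gR2 install guide's sizing table (assumed thresholds; verify).
swap_for_ram() {
    ram_mb=$1
    if [ "$ram_mb" -le 2048 ]; then
        echo $((ram_mb * 3 / 2))   # 1.5x RAM up to 2 GB
    elif [ "$ram_mb" -le 16384 ]; then
        echo "$ram_mb"             # equal to RAM between 2 and 16 GB
    else
        echo 16384                 # capped at 16 GB above that
    fi
}

swap_for_ram 1562    # the node shown in the free output above -> 2343
swap_for_ram 49152   # a 48 GB node -> 16384
```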


 

5  Install GRID

--VNC is recommended; node1 is used as the example
--Run vncserver on node1 and set the VNC connection password
--On node1's local console, open a terminal and run xhost + as root
--Then switch to the grid user: su - grid
--Run vncviewer node1:5901; as root in the VNC session, run xhost +
--In the VNC session, switch to the grid user: su - grid
--export the appropriate locale variable so that Chinese text is not displayed garbled
--Then, as the grid user, run the grid installer:
--cd /soft/grid
--./runInstaller


Choose the first installation option

Choose "Advanced Installation"

Accept the default language selection

Set the SCAN Name to rac_scan; do not select "Configure GNS"

Click Add, enter node2 as the Hostname and node2-vip as the Virtual IP Name

Keep the default "Network Interface Usage" and click Next

For "Storage Option" choose ASM

Set "Disk Group Name" to "OCR_VOTE" and "Redundancy" to "Normal", select the corresponding disks, and click Next

 

Set the sys password: ****

Accept the defaults on the IPMI screen

On the Groups screen, confirm the groups are correct and click Next

Confirm the ORACLE BASE and SOFTWARE LOCATION paths and click Next

Confirm the Inventory path and click Next

Review the summary; if everything is correct, click Finish

 

Run the following two scripts as root on node1 and on node2. Do not run them on both nodes at the same time; finish one node before starting the next.

Edit /etc/profile on node1 and node2 and append the following:

export PATH=$PATH:/u01/11.2.0/grid/bin

Then run source /etc/profile

Check whether the cluster services on node1 and node2 are online:

[root@node1 ~]# crsctl check crs

CRS-4638: Oracle High Availability Services is online

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

 

[root@node2 yum.repos.d]# crsctl check crs

CRS-4638: Oracle High Availability Services is online

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

Check on node2 whether the resources are online

[root@node2 yum.repos.d]# crs_stat -t

Name           Type           Target    State     Host       

------------------------------------------------------------

ora....N1.lsnr ora....er.type ONLINE    ONLINE    node1     

ora....VOTE.dg ora....up.type ONLINE    ONLINE    node1     

ora.asm        ora.asm.type   ONLINE    ONLINE    node1     

ora....SM1.asm application    ONLINE    ONLINE    node1     

ora....de1.gsd application    OFFLINE   OFFLINE              

ora....de1.ons application    ONLINE    ONLINE    node1     

ora....de1.vip ora....t1.type ONLINE    ONLINE    node1     

ora....SM2.asm application    ONLINE    ONLINE    node2     

ora....de2.gsd application    OFFLINE   OFFLINE              

ora....de2.ons application    ONLINE    ONLINE    node2     

ora....de2.vip ora....t1.type ONLINE    ONLINE    node2     

ora.eons       ora.eons.type  ONLINE    ONLINE    node1     

ora.gsd        ora.gsd.type   OFFLINE   OFFLINE              

ora....network ora....rk.type ONLINE    ONLINE    node1     

ora.oc4j       ora.oc4j.type  OFFLINE   OFFLINE              

ora.ons        ora.ons.type   ONLINE    ONLINE    node1     

ora....ry.acfs ora....fs.type ONLINE    ONLINE    node1     

ora.scan1.vip  ora....ip.type ONLINE    ONLINE    node1

      

Click OK to finish the installation

The IPMI-related error reported here can be ignored

Installing the Oracle Database software

1  Unpack the Oracle Database software

[root@node1 soft]# unzip linux_11gR2_database_1of2.zip && unzip linux_11gR2_database_2of2.zip

2  Install the Oracle Database software

--VNC is recommended; node1 is used as the example
--Run vncserver on node1 and set the VNC connection password (skip if already set)
--On node1's local console, open a terminal and run xhost + as root
--Then switch to the oracle user: su - oracle
--Run vncviewer node1:5901; as root in the VNC session, run xhost +
--In the VNC session, switch to the oracle user: su - oracle
--Then, as the oracle user, run the database installer:
--cd /soft/database
--./runInstaller

 

Choose "Install database software only" and click Next

Keep the default "Real Application Clusters database installation" and click Next

Accept the default language selection and click Next

Choose "Enterprise Edition" and click Next

Confirm the user groups are correct and click Next

If a problem appears at this point, use crsctl check crs and crs_stat -t to check whether the services and resources are online

Click "Finish" to start the installation

Run the following script as root on node1 and node2:

/u01/app/oracle/product/11.2.0/db_1/root.sh

Click OK to complete the installation

Creating disk groups with ASMCA

1  Run ASMCA through VNC

--VNC is recommended; node1 is used as the example
--Run vncserver on node1 and set the VNC connection password (skip if already set)
--On node1's local console, open a terminal and run xhost + as root
--Then switch to the grid user: su - grid
--Run vncviewer node1:5901; as root in the VNC session, run xhost +
--In the VNC session, switch to the grid user: su - grid
--Then, as the grid user, run:
--asmca

2  Create the disk groups in ASMCA



Reposted from blog.51cto.com/77jiayuan/2160520