RAC 11g Node Expansion

1 Experimental environment

The existing cluster consists of two nodes, rac1 and rac2. The plan is to add a third node, rac3, to the cluster.

2 Experimental steps

2.1 Configure a new node

2.1.1 Configure the network

2.1.1.1 Configure network interfaces

Two network cards need to be configured: one for the public network and one for the private interconnect between the nodes.

The IP configuration used here is shown in the /etc/hosts entries below.

2.1.1.2 Modify host name

vi /etc/sysconfig/network

HOSTNAME=rac3

This change takes effect only after the server is restarted.

To change the hostname temporarily (lost after a reboot):

hostname rac3

2.1.1.3 Modify /etc/hosts

Add entries for this node and for the other two nodes:

# rac1

192.168.144.213 rac1

192.168.144.215 rac1_vip

10.10.10.1 rac1_priv

 

#rac2

192.168.144.214 rac2

192.168.144.216 rac2_vip

10.10.10.2 rac2_priv

 

# rac3

192.168.144.218 rac3

192.168.144.219 rac3_vip

10.10.10.3 rac3_priv

 

#scan-ip

192.168.144.217 scan-vip

 

# Also add rac3's entries to /etc/hosts on the other two RAC nodes (steps omitted)
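For example, on rac1 and rac2 the same rac3 entries shown above can be appended (a sketch; the addresses come from the listing above):

```shell
# Append rac3's addresses to /etc/hosts on rac1 and rac2
# (same entries as the rac3 block above)
cat >> /etc/hosts <<'EOF'
# rac3
192.168.144.218 rac3
192.168.144.219 rac3_vip
10.10.10.3 rac3_priv
EOF
```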

2.1.2 Configure shared storage

If shared storage is not configured, the later GRID installation fails with: PRVG-1013: The path "/u01/app/11.2.0.4/grid" does not exist or cannot be created on the node to be added

2.1.2.1 Add a shared disk

The experimental environment here uses VirtualBox virtual machines, so the shared disk is added to rac3 as follows:

Note: the shared disk rac.vdi must be attached to the same virtual SATA port on each virtual machine, and that port must not conflict with a SATA port already in use. Here SATA port 1 is used on every machine:

/*

Note that a single shared disk should not exceed 2T, otherwise Oracle will not recognize it and the subsequent installation fails with: ORA-15018: diskgroup cannot be created
ORA-15099: disk 'ORCL:DATA' is larger than maximum size of 2097152 MBs

*/

2.1.2.2 Create users and user groups

groupadd -g 1000 oinstall #Members of this group can access the Oracle Inventory directory

groupadd -g 1001 dba #Members of this group have SYSDBA permissions

groupadd -g 1200 asmadmin

groupadd -g 1201 asmdba

groupadd -g 1202 asmoper

useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,dba grid #User that manages Oracle Grid Infrastructure and ASM

useradd -u 1101 -g oinstall -G dba,asmdba oracle

echo password | passwd --stdin grid

echo password | passwd --stdin oracle 

2.1.2.3 Configure udev shared storage

for i in b c d e f g ;

do

echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""      >> /etc/udev/rules.d/99-oracle-asmdevices.rules

done

 

#Start udev service

/sbin/start_udev

 

#View the generated asm disk

[root@rac1 ~]# ll /dev | grep asm

brw-rw----. 1 grid asmadmin   8,  16 Jan 22 16:12 asm-diskb

2.1.3 Ensure that the swap memory space is greater than 3G

Example of extending swap (adding 2G here):

dd if=/dev/zero of=/tmp/swap bs=1M count=2048

mkswap /tmp/swap

swapon /tmp/swap

free -m #Check to confirm

vi /etc/fstab

Add the line:

/tmp/swap swap swap defaults 0 0

2.1.4 Ensure that the disk space is at least 30G

(Steps omitted.)
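A quick way to verify (a sketch; the root filesystem is checked here, adjust the path if /u01 is a separate mount):

```shell
# Show free space on the filesystem that will hold /u01
df -h /

# Scripted check: available space in GB
avail_gb=$(df -Pk / | awk 'NR==2 {printf "%d", $4/1024/1024}')
echo "available_gb=$avail_gb"
```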

2.1.5 Create directory

mkdir -p /u01/app/oracle

mkdir -p /u01/app/grid

mkdir -p /u01/app/11.2.0.4/grid

mkdir -p /u01/app/oraInventory

chown -R oracle:oinstall /u01/app/oracle

chown -R grid /u01/app/grid

chown -R grid /u01/app/11.2.0.4/grid

chown -R grid /u01/app/oraInventory

chmod -R 775 /u01/app/oracle/

chmod -R 775 /u01/app/grid

2.1.6 Install dependent packages

#Check which packages are not installed

rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel expat gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers libaio libaio-devel libgcc libstdc++ libstdc++-devel make numactl pdksh sysstat unixODBC unixODBC-devel  smartmontools | grep "not installed"

 

#Install the missing packages, for example:

yum install compat-libstdc++-33 elfutils-libelf-devel libaio-devel pdksh unixODBC unixODBC-devel smartmontools -y

 

--If it prompts No package pdksh available, download the rpm package and install it:

wget http://vault.centos.org/5.11/os/x86_64/CentOS/pdksh-5.2.14-37.el5_8.1.x86_64.rpm

rpm -ivh pdksh-5.2.14-37.el5_8.1.x86_64.rpm

 

#Install compat-libcap

rpm -ivh compat-libcap1-1.10-1.x86_64.rpm

 

#Install cvuqdisk

rpm -ivh /home/grid/cvuqdisk-1.0.9-1.rpm

2.1.7 Turn off the firewall

service iptables stop

chkconfig iptables off

2.1.8 Configure Node Mutual Trust

Configure mutual trust (SSH user equivalence) between rac3 and rac1/rac2 for both the grid and oracle users. The specific steps are omitted; see section '2.3.4 Configure node mutual trust' at https://blog.csdn.net/yabingshi_tech/article/details/113034756.
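The usual sequence is sketched below (assumptions: the hostnames from /etc/hosts above, and that it is run once as grid and once as oracle on every node; ssh-copy-id prompts for the password each time):

```shell
# Generate an RSA key pair if one does not exist yet (no passphrase)
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Push the public key to every node, including the local one
for host in rac1 rac2 rac3; do
    ssh-copy-id "$host"
done

# Verify that passwordless SSH now works to every node
for host in rac1 rac2 rac3; do
    ssh "$host" date
done
```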

2.1.9 Configure environment variables

2.1.9.1 Modify the environment variables of the oracle user

su - oracle

vi .bash_profile

Add:

export EDITOR=vi

export ORACLE_SID=prod3

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin

umask 022

 

source .bash_profile #Make the modification effective

2.1.9.2 Modify the environment variables of the grid user

su - grid

vi .bash_profile

Add:

export ORACLE_BASE=/u01/app/grid

export ORACLE_SID=+ASM3

export ORACLE_HOME=/u01/app/11.2.0.4/grid

export PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH

export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK

 

source .bash_profile #Make the modification effective

2.1.10 Clock synchronization

Here the NTP service is disabled so that Oracle RAC's own Cluster Time Synchronization Service (CTSS) takes over clock synchronization:

service ntpd stop

chkconfig ntpd off

mv /etc/ntp.conf /etc/ntp.conf_bak
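Once Grid is running on the nodes, CTSS can be verified (a sketch; run as the grid user):

```shell
# CTSS runs in active mode only when no NTP configuration is present
crsctl check ctss

# Cluster-wide clock synchronization check
cluvfy comp clocksync -n all
```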

2.1.11 Configure kernel parameters

vi /etc/sysctl.conf

Modify the following parameters; these are minimum values (if a parameter is already set higher, leave it unchanged):

fs.aio-max-nr = 1048576

fs.file-max = 6815744

kernel.shmall = 2097152

kernel.shmmax = 1036870912

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

--Note that kernel.shmmax is set to only about 1G here; increase it according to your environment (the initialization parameter MEMORY_TARGET or MEMORY_MAX_TARGET cannot exceed the shared memory size, so shmmax must be larger than MEMORY_TARGET).

Run sysctl -p to apply the above parameters:

/sbin/sysctl -p

2.1.12 Configure resource limits

vi /etc/security/limits.conf

Add the following content:

oracle soft nofile 65536

oracle hard nofile 65536

oracle soft nproc 16384

oracle hard nproc 16384

oracle soft stack 16384

oracle hard stack 16384

grid soft nofile 65536

grid hard nofile 65536

grid soft nproc 16384

grid hard nproc 16384

grid soft stack 16384

grid hard stack 16384

2.1.13 Enable transparent huge pages

Make sure that /sys/kernel/mm/transparent_hugepage/enabled is set to always:

cat /sys/kernel/mm/transparent_hugepage/enabled

If not, set it like this:

sh -c 'echo always > /sys/kernel/mm/transparent_hugepage/enabled'

2.2 Install GRID for rac3

#Ensure that the grid user on every RAC node has ownership of /u01/app/11.2.0.4/grid

chown -R grid:oinstall /u01/app/11.2.0.4/grid

 

#On an existing node that already has the Grid software, run addNode.sh (as the grid user) to install Grid on rac3

/u01/app/11.2.0.4/grid/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={rac3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3_vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={rac3_priv}"

/*

If an error is reported:

PRVG-1013: The path "/u01/app/11.2.0.4/grid" does not exist or cannot be created on the node to be added. Shared resources check for node addition failed.

Check whether the shared storage is working and whether the shared disks are owned by the grid user.

If everything looks fine and the error still appears, run export IGNORE_PREADDNODE_CHECKS=Y to skip the pre-check.

*/

Execute the following scripts as the root user on the rac3 node:

/u01/app/oraInventory/orainstRoot.sh

/u01/app/11.2.0.4/grid/root.sh

Check the log for errors.

Fix the directory ownership on rac3:

chown -R grid:oinstall /u01/app/11.2.0.4/grid

#Check cluster status

#Check the listener
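A few checks that can be run at this point (as the grid user on any node; a sketch):

```shell
# rac3 should now appear in the node list
olsnodes -n

# All rac3 resources should be ONLINE
crsctl status resource -t

# Check the listener on rac3
lsnrctl status
```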

2.3 Install database software for rac3

#On an existing node that already has the database software, run addNode.sh (as the oracle user) to install it on rac3

su - oracle

/u01/app/oracle/product/11.2.0/db_1/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={rac3}"

Use the root user to execute the script on the rac3 node:

/u01/app/oracle/product/11.2.0/db_1/root.sh

2.4 Add an instance of rac3

dbca -silent -addInstance -nodeList rac3 -gdbName prod -instanceName prod3 -sysDBAUserName sys -sysDBAPassword "***"
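Afterwards the new instance can be verified (a sketch; run as the oracle user):

```shell
# The status output should now include prod3 running on rac3
srvctl status database -d prod

# Confirm rac3/prod3 is part of the database configuration
srvctl config database -d prod
```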

--This article mainly refers to http://blog.itpub.net/29249734/viewspace-1760232/

 


Origin blog.csdn.net/yabignshi/article/details/113190841