Install RAC 11g silently in a VirtualBox virtual machine


1 Planning

1.1 IP planning

RAC environment introduction: two nodes, rac1 and rac2, each with a public IP, a VIP and a private IP, plus one SCAN IP (see the /etc/hosts section below for the full IP plan).

1.2 Swap space planning


Swap space of at least 3 GB is required.

Check the size of swap space:
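For example, either of the following shows the current swap size:

free -m #The Swap row shows total/used/free swap in MB

swapon -s #Lists the active swap devices and their sizes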

Example of expanding swap (adding 2 GB here):

dd if=/dev/zero of=/tmp/swap bs=1MB count=2048

mkswap /tmp/swap

swapon /tmp/swap

free -m #Check to confirm

vi /etc/fstab

Append the following line:

/tmp/swap swap swap defaults 0 0


1.3 Disk space planning

This setup is only for experimental use, so relatively little disk space is allocated:

Root directory 40G, shared storage 20G

2 Implementation steps

2.1 Configure the network

2.1.1 Configure the network cards

Both nodes need to be configured with two network cards.

In VirtualBox this can be simulated as follows (shut down the virtual machine before changing the adapter settings):

The first network card uses bridged mode;

The second network card uses internal network mode.

#Example of setting the IP of the second network card (eth1 here):

Run ip addr to view the network card's MAC address (HWADDR):

cd /etc/sysconfig/network-scripts

cp ifcfg-eth0 ifcfg-eth1

vi ifcfg-eth1

DEVICE=eth1

HWADDR=<the MAC address of eth1 shown by ip addr>

IPADDR=10.10.10.1 #The private IP of node two is 10.10.10.2.

Remove the lines GATEWAY and UUID

#Restart the network card service

service network restart

#Test whether you can ping the other party
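For example, from node 1, using node 2's private and public IPs from the plan above:

ping -c 3 10.10.10.2

ping -c 3 192.168.144.214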

2.1.2 Configure the host name

vi /etc/sysconfig/network

Only the HOSTNAME value needs to be modified.

Change it to rac1 on node 1 and rac2 on node 2.
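On node 1 the file would then look roughly like this (standard RHEL/CentOS 6 format):

NETWORKING=yes

HOSTNAME=rac1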

This change takes effect only after the server is restarted.

Temporary configuration method (invalid after server restart):

hostname <new hostname>, for example:

hostname rac1

2.1.3 Modify the /etc/hosts file

vi /etc/hosts

Add entries like the following:

#rac1

192.168.144.213 rac1

192.168.144.215 rac1_vip

10.10.10.1 rac1_priv


#rac2

192.168.144.214 rac2

192.168.144.216 rac2_vip

10.10.10.2 rac2_priv


#scan-ip

192.168.144.217 scan-vip

2.2 Configure shared storage

Shut down both virtual machines.

2.2.1 Configure shared disk

This experiment creates only one shared disk.

2.2.1.1 Create a shared disk

cd C:\Program Files\Oracle\VirtualBox

/*

The VirtualBox installation path can be found by right-clicking the VirtualBox icon and choosing Properties.

*/

VBoxManage createhd --filename D:\VirtualBoxFile\rac.vdi --size 20480 --format vdi --variant Fixed

After the command is executed successfully, a rac.vdi file with a size of 20G will be generated under D:\VirtualBoxFile.

2.2.1.2 Add the disk to the rac1 virtual machine and set it as a shared disk

VBoxManage storageattach rac1 --storagectl "SATA" --port 1 --device 0 --type hdd --medium  D:\VirtualBoxFile\rac.vdi --mtype shareable

/*

Remarks:

rac1 is the name of the rac1 virtual machine;

"SATA" is the name of the controller and must match the controller type the existing disk is attached to;

if the existing disk is on an IDE controller, the --storagectl value is IDE.

If the controller type is wrong, the virtual machine will not start and reports:

FATAL: No bootable medium found! System halted.

*/

2.2.1.3 Add the shared disk to the rac2 virtual machine

Note: the shared disk rac.vdi must be attached to the same port number on both virtual machines, and that port must not conflict with a SATA port already in use. Here SATA port 1 is used on both:
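The attach command for rac2 is analogous; a sketch assuming the same controller name ("SATA") and port 1 as on rac1:

VBoxManage storageattach rac2 --storagectl "SATA" --port 1 --device 0 --type hdd --medium D:\VirtualBoxFile\rac.vdi --mtype shareable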

Start two virtual machines and make sure that they can see the same newly added shared disk. Example:
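For example, on each node (the new disk typically shows up as /dev/sdb in this setup):

fdisk -l /dev/sdb #Both rac1 and rac2 should report the same ~20 GB disk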

2.2.2 Create users and user groups

Execute on two nodes:

groupadd -g 1000 oinstall #Members of this group can access the Oracle Inventory directory

groupadd -g 1001 dba #Members of this group have SYSDBA permissions

groupadd -g 1200 asmadmin

groupadd -g 1201 asmdba

groupadd -g 1202 asmoper

useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,dba grid #The grid user manages Oracle Grid Infrastructure and ASM

useradd -u 1101 -g oinstall -G dba,asmdba oracle

echo password | passwd --stdin grid

echo password | passwd --stdin oracle  
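A quick check that the users and group memberships are as intended:

id grid #Expect groups oinstall, asmadmin, asmdba, asmoper, dba

id oracle #Expect groups oinstall, dba, asmdba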

2.2.3 Configure udev shared storage

Execute on two nodes:

for i in b c d e f g ;

do

echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""      >> /etc/udev/rules.d/99-oracle-asmdevices.rules

done

#Start udev service

/sbin/start_udev

#View the generated asm disk

[root@rac1 ~]# ll /dev | grep asm

brw-rw----. 1 grid asmadmin   8,  16 Jan 22 16:12 asm-diskb

--You can also use asmlib to configure shared disks. For details, please refer to https://blog.csdn.net/yabingshi_tech/article/details/113990123

2.3 Configure the server

Perform the following steps on both nodes: 

2.3.1 Install the software

#Check which packages are not installed

rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel expat gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers libaio libaio-devel libgcc libstdc++ libstdc++-devel make numactl pdksh sysstat unixODBC unixODBC-devel  smartmontools | grep "not installed"

 

#Install the uninstalled software package, example:

yum install compat-libstdc++-33 elfutils-libelf-devel libaio-devel pdksh unixODBC unixODBC-devel smartmontools -y

 

--If it prompts No package pdksh available, download the rpm package and install it:

wget http://vault.centos.org/5.11/os/x86_64/CentOS/pdksh-5.2.14-37.el5_8.1.x86_64.rpm

rpm -ivh pdksh-5.2.14-37.el5_8.1.x86_64.rpm

 

#Install compat-libcap

rpm -ivh compat-libcap1-1.10-1.x86_64.rpm

2.3.2 Create Directory

mkdir -p /u01/app/oracle

mkdir -p /u01/app/grid

mkdir -p /u01/app/11.2.0.4/grid

mkdir -p /u01/app/oraInventory

chown -R oracle:oinstall /u01/app/oracle

chown -R grid /u01/app/grid

chown -R grid /u01/app/11.2.0.4/grid

chown -R grid /u01/app/oraInventory

chmod -R 775 /u01/app/oracle/

chmod -R 775 /u01/app/grid

2.3.3 Turn off the firewall

service iptables stop

chkconfig iptables off

2.3.4 Configure Node Mutual Trust

During installation, the installer uses ssh and scp to run commands on and copy files to the other cluster nodes, so SSH must be configured so that these commands do not prompt for a password.

Need to configure mutual trust between node 1 grid users and node 2 grid users;

Need to configure mutual trust between node 1 oracle users and node 2 oracle users;

Also configure passwordless SSH from the grid and oracle users on each node to the node itself.

Only the steps for the grid user are listed below; repeat the same steps for the oracle user.

2.3.4.1 Configure node 1 to log in to node 2 without password

Execute on node 1:

su - grid

ssh-keygen #Generate the SSH key pair

ssh-copy-id rac2 #Copy the public key to rac2

 

#Test passwordless login (you will be asked to type yes the first time; run ssh a few more times to make sure you are no longer prompted)

ssh rac2 hostname

ssh rac2_priv hostname

2.3.4.2 Configure node 2 to log in to node 1 without password

Execute on node 2:

su - grid

ssh-keygen #Generate the SSH key pair

ssh-copy-id rac1 #Copy the public key to rac1

 

#Test password-free login

ssh rac1 date

ssh rac1_priv date

2.3.4.3 Configure node 1 to log in to node 1 without password

Execute on node 1:

ssh-copy-id rac1 #Copy the public key to rac1 itself

 

#Test password-free login

ssh rac1 date

ssh rac1_priv date

2.3.4.4 Configure node 2 to log in to node 2 without password

Execute on node 2:

ssh-copy-id rac2 #Copy the public key to rac2 itself

 

#Test password-free login

ssh rac2 date

ssh rac2_priv date

2.3.5 Configure environment variables

2.3.5.1 Modify the environment variables of the oracle user

su - oracle

vi .bash_profile

Append:

export EDITOR=vi

export ORACLE_SID=prod1

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin

umask 022

 

--Note that you need to change the value of ORACLE_SID on node 2 to prod2.

source .bash_profile #Make the modification effective

2.3.5.2 Modify the environment variables of the grid user

su - grid

vi .bash_profile

Append:

export ORACLE_BASE=/u01/app/grid

export ORACLE_SID=+ASM1

export ORACLE_HOME=/u01/app/11.2.0.4/grid

export PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH

export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK

--Note that the value of ORACLE_SID on node 2 is changed to +ASM2.

source .bash_profile #Make the modification effective

2.3.6 Clock synchronization

Before starting the installation, make sure that the date and time settings on all cluster nodes are set to the same date and time as much as possible. The cluster time synchronization mechanism ensures that the internal clocks of all cluster members are synchronized. For Oracle RAC on Linux, you can use Network Time Protocol (NTP) or Oracle Cluster Time Synchronization Service.

Here we disable the NTP service and let Oracle's Cluster Time Synchronization Service (CTSS) keep the clocks synchronized:

service ntpd stop

mv /etc/ntp.conf /etc/ntp.conf_bak

If the ntp service is only stopped but the configuration file is not removed, the following error is reported later during the Grid installation:

INFO: Error Message: PRVF-5507: The NTP daemon or service is not running on any node, but there are NTP configuration files on the following nodes
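It is also common (though not strictly required here) to keep ntpd from starting again after a reboot by disabling it at boot:

chkconfig ntpd off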

2.3.7 Configure kernel parameters

Execute on two node servers:

vi /etc/sysctl.conf

Set the following parameters to at least the values below (if an existing value is already larger, leave it unchanged):

fs.aio-max-nr = 1048576

fs.file-max = 6815744

kernel.shmall = 2097152

kernel.shmmax = 1036870912

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

--Note that kernel.shmmax is set to only about 1 GB here; increase it to suit your environment (the initialization parameters MEMORY_TARGET / MEMORY_MAX_TARGET cannot exceed the shared memory size, so shmmax must be larger than MEMORY_TARGET).

Run sysctl -p to apply the above parameters:

/sbin/sysctl -p

2.3.8 Configure resource limits

vi /etc/security/limits.conf

Add the following content:

oracle soft nofile 65536

oracle hard nofile 65536

oracle soft nproc 16384

oracle hard nproc 16384

oracle soft stack 16384

oracle hard stack 16384

grid soft nofile 65536

grid hard nofile 65536

grid soft nproc 16384

grid hard nproc 16384

grid soft stack 16384

grid hard stack 16384
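After a fresh login the limits can be spot-checked, for example:

su - oracle -c 'ulimit -n; ulimit -u; ulimit -s' #Should reflect the nofile, nproc and stack soft limits configured above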

2.3.9 Enable transparent huge pages

Make sure that the value of /sys/kernel/mm/transparent_hugepage/enabled is always.
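To check the current value:

cat /sys/kernel/mm/transparent_hugepage/enabled #The active setting is shown in brackets, e.g. [always] madvise never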

If not, set it like this:

sh -c 'echo "always" >  /sys/kernel/mm/transparent_hugepage/enabled'

2.4 Install RAC

2.4.1 Download the installation package

Download link:

https://updates.oracle.com/download/13390677.html

Select Platform, and then download the following three installation packages:

Among them, p13390677_112040_Linux-x86-64_3of7.zip is the Grid Infrastructure (GI) installation package (as described via View Readme):

The 1of7 and 2of7 zip files are the database installation packages.

Download pages for other versions:

https://www.oracle.com/database/technologies/oracle-database-software-downloads.html

2.4.2 Install GRID silently on node 1

Use the grid user to install.

unzip p13390677_112040_Linux-x86-64_3of7.zip

cd grid

2.4.2.1 Pre-installation check

Check on node one:

./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose > runcluvfy.log

Check the log file runcluvfy.log for errors (search for the keyword 'failed').
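For example:

grep -i failed runcluvfy.log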

/*

If an error is reported:

Result: The TCP connectivity check of the subnet "192.168.144.0" failed

Check whether the firewalls on the two servers are still running; they need to be turned off.

 

If an error is reported:

PRVF-5636: On the following nodes, the DNS response time of the unreachable node exceeds "15000" milliseconds: rac2,rac1

Then modify /etc/resolv.conf

Add DNS configuration, format:

nameserver <your DNS server address>

--The DNS server address can be found in the network settings of your host (e.g. ipconfig /all on Windows).

*/

2.4.2.2 Create response file

Click to download grid.rsp.

Upload the response file to /home/grid

Pay attention to the following entries in grid.rsp:

oracle.install.crs.config.clusterNodes=rac1:rac1_vip,rac2:rac2_vip

oracle.install.crs.config.networkInterfaceList=eth0:192.168.144.0:1,eth1:10.10.10.0:2 #Specifies the public and private network segments; change the interface names to the network card names of your own server.

oracle.install.asm.diskGroup.disks=/dev/asm-diskb

oracle.install.asm.diskGroup.diskDiscoveryString=/dev/

/*

If you use a shared disk configured via asmlib instead, change these to:

oracle.install.asm.diskGroup.disks=/dev/oracleasm/disks/DATA 

oracle.install.asm.diskGroup.diskDiscoveryString=/dev/oracleasm/disks/*

If the oracle.install.asm.diskGroup.diskDiscoveryString is configured incorrectly, when the /u01/app/11.2.0.4/grid/root.sh script is executed later, an error will be reported in the log:

ORA-15018: diskgroup cannot be created

ORA-15031: disk specification '/dev/oracleasm/disks/DATA' matches no disks

ORA-15014: path '/dev/oracleasm/disks/DATA' is not in the discovery set

Need to modify the value of oracle.install.asm.diskGroup.diskDiscoveryString, delete the relevant directories (/u01/app/oraInventory, /u01/app/grid, /u01/app/11.2.0.4/grid), and reinstall GRID

*/

2.4.2.3 Install cvuqdisk on two nodes

#Install cvuqdisk package with root user

#Node One

cd /home/grid/grid/rpm

rpm -ivh cvuqdisk-1.0.9-1.rpm

#Node two

Copy the installation package on node one to node two, and then install it.
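A sketch of doing this from node one (you will be prompted for rac2's root password unless root SSH trust is configured):

scp /home/grid/grid/rpm/cvuqdisk-1.0.9-1.rpm root@rac2:/tmp/

ssh root@rac2 "rpm -ivh /tmp/cvuqdisk-1.0.9-1.rpm"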

2.4.2.4 Install GRID

Install as grid user:

cd /home/grid/grid

./runInstaller -silent -responseFile /home/grid/grid.rsp

Follow the prompts to execute the scripts on the two nodes as the root user:

sh /u01/app/oraInventory/orainstRoot.sh

sh /u01/app/11.2.0.4/grid/root.sh

root.sh will create an ASM instance and disk group, and start the RAC cluster:

After executing the root.sh script, you need to check the log for errors.

 

Follow the prompts to execute the script on node one with the grid user:

cd /u01/app/11.2.0.4/grid/cfgtoollogs/

touch cfgrsp.properties

./configToolAllCommands RESPONSE_FILE=cfgrsp.properties

The script will start the listener.

Remember to check the log for errors after execution.

2.4.2.5 Checking the startup status of the cluster

[grid@rac1 cfgtoollogs]$ crsctl check cluster -all

**************************************************************

rac1:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

**************************************************************

rac2:

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

2.4.2.6 Check Disk Group

Log in to rac1, rac2 to see the disk group:

su - grid

sqlplus / as sysasm

set lines 200

col path for a40

select group_number,header_status,state,name,path,redundancy from v$asm_disk;

SQL> select group_number,name,state from v$asm_diskgroup;

2.4.2.7 Check the listener status
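For example, as the grid user on either node:

lsnrctl status #Local listener on this node

srvctl status listener #Listener resources across the cluster

srvctl status scan_listener #SCAN listener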

2.4.2.8 Modify the owner of the directory

Execute on two nodes:

chown -R grid:oinstall /u01/app/11.2.0.4/grid

2.4.3 Install Database Silently on Node One

2.4.3.1 Unzip

su - oracle

unzip p13390677_112040_Linux-x86-64_1of7.zip

unzip p13390677_112040_Linux-x86-64_2of7.zip

After decompression, the contents of both archives are merged into a single database folder.

2.4.3.2 Create a response file

Click to download db.rsp

Pay attention to modify oracle.install.db.CLUSTER_NODES:

oracle.install.db.CLUSTER_NODES=rac1,rac2

2.4.3.3 Install the database

cd database

./runInstaller -silent -responseFile /home/oracle/db.rsp -ignorePrereq -ignoreSysPreReqs -ignoreDiskWarning

This command will automatically install the database software on both nodes.

Follow the prompts to execute the script on the two nodes as the root user:

sh /u01/app/oracle/product/11.2.0/db_1/root.sh

Check the log for errors.

2.4.4 Silently create the database on node 1

2.4.4.1 Create response file

su - oracle

Click to download dbca.rsp

Pay attention to modify the following items:

GDBNAME = "prod"

SID = "prod"

NODELIST=rac1,rac2

STORAGETYPE=ASM

DISKGROUPNAME=DATA

CHARACTERSET = "ZHS16GBK"

NATIONALCHARACTERSET= "UTF8"

 

#Precautions

① The NODELIST to modify is the first one in the file (the cluster-level NODELIST), not the NODELIST under INSTANCE; otherwise the two database instances will not start.

② The value of NODELIST is rac1,rac2, not prod1,prod2.

2.4.4.2 Silent installation

dbca -silent -responseFile dbca.rsp

/*

If dbca exits without returning any output, check whether the dbca.rsp file is configured incorrectly.

*/

You can see the database instance on both nodes:
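For example (using the database name prod from dbca.rsp above):

srvctl status database -d prod #Run as oracle; shows instances prod1 and prod2 and the node each runs on

ps -ef | grep pmon #On each node, ora_pmon_prod1 or ora_pmon_prod2 should be running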

2.4.4.3 Checking the cluster status

SQL> SELECT value FROM v$parameter where name = 'cluster_database';

VALUE

--------------------------------------------------------------------------------

TRUE

A value of TRUE indicates that this is a RAC database.

2.4.4.4 Check the listener
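For example, as the grid user on each node, confirm that the database instance is registered with the local listener:

lsnrctl status | grep -i prod #Instance prod1 (or prod2) should be listed as registered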

2.5 Verify load balancing

Create a database connection:

Use this connection to connect to the SCAN VIP multiple times, query the server hostname and IP address each time, and check whether the hostnames and IPs of both nodes appear:

sql: SELECT UTL_INADDR.GET_HOST_NAME,UTL_INADDR.GET_HOST_ADDRESS FROM dual;

2.6 Verify that the database instance is highly available

After the RAC installation is complete, each node's public network card carries its public IP plus the node's VIP, and one of the nodes also carries the SCAN VIP.
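The addresses on each node can be checked with, for example (eth0 being the public network card in this setup):

ip addr show eth0 #The public IP, the node's VIP and possibly the SCAN VIP appear as secondary addresses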

2.6.1 Simulate a database instance crash on one node

SQL> shutdown abort;

ORACLE instance shut down.

Connecting through the SCAN VIP, read and write SQL can still be executed normally. After a few minutes of observation, the node's VIP did not drift.

2.6.2 Simulate a server crash on one node

Connecting through the SCAN VIP, read and write SQL can still be executed normally, and node one's VIP drifts to node two:

This article mainly refers to: https://blogs.oracle.com/database4cn/11gr2-rac

 

 

 


Origin: blog.csdn.net/yabignshi/article/details/113034756