[Repost] How to get a RAC Oracle database (from GitHub - oracle/docker-images) - local edition --- not yet tested by me.

How to get a RAC Oracle database (from GitHub - oracle/docker-images) - local edition

Copyright notice: this is an original post by the blogger, licensed under the CC 4.0 BY-SA license. Please include a link to the original source and this notice when reposting.
Original link: https://xiaoyu.blog.csdn.net/article/details/102988682

Environment

A laptop running Windows, with VirtualBox, Vagrant, and Git installed.

Goal

Oracle Linux 7 as the operating system, running a containerized database: Oracle Database Enterprise Edition RAC, version 19.3.0, instance name ORCLCDB, with one pluggable database, ORCLPDB. Both RAC nodes run on the same host.

Create the Linux operating system

Clone the project to get the Linux Vagrant box:

PS D:\DB> git clone https://github.com/oracle/vagrant-boxes.git

Install the disk-resize plugin:

vagrant plugin install vagrant-disksize
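
A quick way to confirm the plugin was installed (the exact version shown will vary):

vagrant plugin list
# the list should include vagrant-disksize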

In the Vagrantfile, change the memory from the default 2048 to 8192, and set the root disk size to 80 GB, as follows:

...
config.vm.box = "ol7-latest"
config.disksize.size = "80GB"
config.vm.box_url = "https://yum.oracle.com/boxes/oraclelinux/latest/ol7-latest.box"
config.vm.define NAME
config.vm.box_check_update = false
# change memory size
config.vm.provider "virtualbox" do |v|
  v.memory = 8192
  v.name = NAME
end
...

Then create the VM (Oracle Linux 7). This run took 7 minutes 42 seconds; in my environment it usually takes about 7 minutes.

PS E:\DB\vagrant-boxes\OracleLinux\7> vagrant up

After the VM starts, the disk is 80G, but the root filesystem is still 32G:

$ df -h
Filesystem                   Size  Used Avail Use% Mounted on
devtmpfs                     3.8G     0  3.8G   0% /dev
tmpfs                        3.8G     0  3.8G   0% /dev/shm
tmpfs                        3.8G  8.5M  3.8G   1% /run
tmpfs                        3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/mapper/vg_main-lv_root   32G  1.7G   31G   6% /
/dev/sda1                    497M  125M  373M  26% /boot
vagrant                      1.9T  1.1T  753G  60% /vagrant
tmpfs                        771M     0  771M   0% /run/user/1000

$ lsblk
NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdb                   8:16   0 15.6G  0 disk
sda                   8:0    0   80G  0 disk
├─sda2                8:2    0   36G  0 part
│ ├─vg_main-lv_swap 252:1    0    4G  0 lvm  [SWAP]
│ └─vg_main-lv_root 252:0    0   32G  0 lvm  /
└─sda1                8:1    0  500M  0 part /boot

So the partition needs to be extended. The rough procedure (create a new partition /dev/sda3 and grow the root LV into it) is:

fdisk /dev/sda                                 # n, p, <Enter>, <Enter>, w -> creates the new partition /dev/sda3
partprobe                                      # re-read the partition table without rebooting
pvcreate /dev/sda3                             # initialize the new partition as an LVM physical volume
vgextend vg_main /dev/sda3                     # add it to the volume group
lvextend -l +100%FREE /dev/vg_main/lv_root /dev/sda3   # grow the root LV into the new space
xfs_growfs /                                   # grow the XFS filesystem online

Space after the resize:

# df -h
Filesystem                   Size  Used Avail Use% Mounted on
devtmpfs                     3.8G     0  3.8G   0% /dev
tmpfs                        3.8G     0  3.8G   0% /dev/shm
tmpfs                        3.8G  8.5M  3.8G   1% /run
tmpfs                        3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/mapper/vg_main-lv_root   76G  1.7G   74G   3% /
/dev/sda1                    497M  125M  373M  26% /boot
vagrant                      1.9T  1.1T  753G  60% /vagrant
tmpfs                        771M     0  771M   0% /run/user/1000

All of the following operations are run logged in to the Linux VM.

Install Docker

Install Docker; this took 0m49.161s:

sudo yum install -y yum-utils
sudo yum-config-manager --enable ol7_addons
sudo yum install -y docker-engine
sudo systemctl start docker
sudo systemctl enable docker
sudo usermod -aG docker vagrant

Confirm that Docker installed successfully:

$ docker version
Client: Docker Engine - Community
 Version:           18.09.8-ol
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        76804b7
 Built:             Fri Sep 27 21:00:18 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.8-ol
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       76804b7
  Built:            Fri Sep 27 20:54:00 2019
  OS/Arch:          linux/amd64
  Experimental:     false
  Default Registry: docker.io

Clone the GitHub project

This took 0m31.599s:

sudo yum install -y git
git clone https://github.com/oracle/docker-images.git

Set kernel parameters

Docker containers inherit kernel parameters from the host OS, so set the following in /etc/sysctl.conf on the host:

fs.file-max = 6815744
net.core.rmem_max = 4194304
net.core.rmem_default = 262144
net.core.wmem_max = 1048576
net.core.wmem_default = 262144

Make the settings take effect:

sudo sysctl -a   # list all current kernel parameters
sudo sysctl -p   # load the new settings from /etc/sysctl.conf
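
To spot-check a single value afterwards (using one of the parameters set above):

sysctl fs.file-max
# expected: fs.file-max = 6815744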

Create the virtual networks

docker network create --driver=bridge --subnet=172.16.1.0/24 rac_pub1_nw
docker network create --driver=bridge --subnet=192.168.17.0/24 rac_priv1_nw

Check the result:

$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
aac5636fe8fc        bridge              bridge              local
051a1439f036        host                host                local
1a9007862a18        none                null                local
fe35e54e1aa0        rac_priv1_nw        bridge              local
0c6bdebeab78        rac_pub1_nw         bridge              local

Configure real-time mode

Some RAC processes need to run in real-time mode, so add the following line to /etc/sysconfig/docker:

OPTIONS='--selinux-enabled --cpu-rt-runtime=950000'

Make the change take effect:

sudo systemctl daemon-reload
sudo systemctl stop docker
sudo systemctl start docker
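
An informal way to confirm the daemon restarted with the new option (this assumes the OL7 docker service unit passes OPTIONS on the dockerd command line, which is its usual behavior):

ps -ef | grep [d]ockerd
# the dockerd command line should now include --cpu-rt-runtime=950000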

Configure SELinux in permissive mode in /etc/selinux/config (steps omitted; the relevant line is shown below).
Then reboot the VM for the SELinux change to take effect.
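
For reference, the line to set in /etc/selinux/config uses the standard SELinux configuration syntax:

SELINUX=permissive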

Disk usage at this point:

$ df -h
Filesystem                   Size  Used Avail Use% Mounted on
devtmpfs                     3.8G     0  3.8G   0% /dev
tmpfs                        3.8G     0  3.8G   0% /dev/shm
tmpfs                        3.8G  8.5M  3.8G   1% /run
tmpfs                        3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/mapper/vg_main-lv_root   76G  2.1G   74G   3% /
/dev/sda1                    497M  125M  373M  26% /boot
vagrant                      1.9T  1.1T  753G  60% /vagrant
tmpfs                        771M     0  771M   0% /run/user/1000

Copy the installation files into the build directory

How long this takes is really down to luck: sometimes 7 minutes, most recently 1 minute 30 seconds:

cd docker-images/OracleDatabase/RAC/OracleRealApplicationClusters/dockerfiles/19.3.0
cp /vagrant/LINUX.X64_193000_db_home.zip .
cp /vagrant/LINUX.X64_193000_grid_home.zip . 

Disk usage at this point:

$ df -h
Filesystem                   Size  Used Avail Use% Mounted on
devtmpfs                     3.8G     0  3.8G   0% /dev
tmpfs                        3.8G     0  3.8G   0% /dev/shm
tmpfs                        3.8G  8.5M  3.8G   1% /run
tmpfs                        3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/mapper/vg_main-lv_root   76G  7.6G   68G  11% /
/dev/sda1                    497M  125M  373M  26% /boot
vagrant                      1.9T  1.1T  745G  61% /vagrant
tmpfs                        771M     0  771M   0% /run/user/1000

Build the Docker install image

The main tasks in this step are copying the media and configuration scripts and downloading OS updates from the network, then installing GI and the database software.
Start the build with:

$ cd docker-images/OracleDatabase/RAC/OracleRealApplicationClusters/dockerfiles
$ ls
12.2.0.1  18.3.0  19.3.0  buildDockerImage.sh
$ time ./buildDockerImage.sh -v 19.3.0

If there is not enough space, the build fails with an error like this:

...
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
checkSpace.sh: ERROR - There is not enough space available in the docker container.
checkSpace.sh: The container needs at least 35 GB , but only 14 available.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
...
There was an error building the image.

Below is the complete log of a successful build; the whole process took 58 minutes:

$ time ./buildDockerImage.sh -v 19.3.0
Checking if required packages are present and valid...
LINUX.X64_193000_grid_home.zip: OK
LINUX.X64_193000_db_home.zip: OK
==========================
DOCKER info:
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 18.09.8-ol
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: c4446665cb9c30056f4998ed953e6d4ff22c7c39
runc version: 4bb1fe4ace1a32d3676bb98f5d3b6a4e32bf6c58
init version: fec3683
Security Options:
 seccomp
  Profile: default
 selinux
Kernel Version: 4.14.35-1902.6.6.el7uek.x86_64
Operating System: Oracle Linux Server 7.7
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 7.528GiB
Name: ol7-vagrant-rac
ID: MS7Y:32TG:TGTF:C3QP:DR4Q:IDG4:RHHS:SQVW:5QWY:U45Z:ZCXK:BDCP
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
Registries: docker.io (secure)
==========================
Building image 'oracle/database-rac:19.3.0' ...
Sending build context to Docker daemon 5.949GB
Step 1/11 : FROM oraclelinux:7-slim
Trying to pull repository docker.io/library/oraclelinux ...
7-slim: Pulling from docker.io/library/oraclelinux
a316717fc6ee: Pull complete
Digest: sha256:c5f3baff726ffd97c7e9574e803ad0e8a1e5c7de236325eed9e87f853a746e90
Status: Downloaded newer image for oraclelinux:7-slim
 ---> 874477adb545
Step 2/11 : MAINTAINER Paramdeep Saini <[email protected]>
 ---> Running in a1b3685f3111
Removing intermediate container a1b3685f3111
 ---> 8187c15c17ab
Step 3/11 : ENV SETUP_LINUX_FILE="setupLinuxEnv.sh" INSTALL_DIR=/opt/scripts GRID_BASE=/u01/app/grid GRID_HOME=/u01/app/19.3.0/grid INSTALL_FILE_1="LINUX.X64_193000_grid_home.zip" GRID_INSTALL_RSP="gridsetup_19c.rsp" GRID_SW_INSTALL_RSP="grid_sw_install_19c.rsp" GRID_SETUP_FILE="setupGrid.sh" FIXUP_PREQ_FILE="fixupPreq.sh" INSTALL_GRID_BINARIES_FILE="installGridBinaries.sh" INSTALL_GRID_PATCH="applyGridPatch.sh" INVENTORY=/u01/app/oraInventory CONFIGGRID="configGrid.sh" ADDNODE="AddNode.sh" DELNODE="DelNode.sh" ADDNODE_RSP="grid_addnode.rsp" SETUPSSH="setupSSH.expect" DOCKERORACLEINIT="dockeroracleinit" GRID_USER_HOME="/home/grid" SETUPGRIDENV="setupGridEnv.sh" ASM_DISCOVERY_DIR="/dev" RESET_OS_PASSWORD="resetOSPassword.sh" MULTI_NODE_INSTALL="MultiNodeInstall.py" DB_BASE=/u01/app/oracle DB_HOME=/u01/app/oracle/product/19.3.0/dbhome_1 INSTALL_FILE_2="LINUX.X64_193000_db_home.zip" DB_INSTALL_RSP="db_sw_install_19c.rsp" DBCA_RSP="dbca_19c.rsp" DB_SETUP_FILE="setupDB.sh" PWD_FILE="setPassword.sh" RUN_FILE="runOracle.sh" STOP_FILE="stopOracle.sh" ENABLE_RAC_FILE="enableRAC.sh" CHECK_DB_FILE="checkDBStatus.sh" USER_SCRIPTS_FILE="runUserScripts.sh" REMOTE_LISTENER_FILE="remoteListener.sh" INSTALL_DB_BINARIES_FILE="installDBBinaries.sh" GRID_HOME_CLEANUP="GridHomeCleanup.sh" ORACLE_HOME_CLEANUP="OracleHomeCleanup.sh" DB_USER="oracle" GRID_USER="grid" FUNCTIONS="functions.sh" COMMON_SCRIPTS="/common_scripts" CHECK_SPACE_FILE="checkSpace.sh" RESET_FAILED_UNITS="resetFailedUnits.sh" SET_CRONTAB="setCrontab.sh" CRONTAB_ENTRY="crontabEntry" EXPECT="/usr/bin/expect" BIN="/usr/sbin" container="true"
 ---> Running in 01dfaa3cc133
Removing intermediate container 01dfaa3cc133
 ---> be15b2094e53
Step 4/11 : ENV INSTALL_SCRIPTS=$INSTALL_DIR/install PATH=/bin:/usr/bin:/sbin:/usr/sbin:$PATH SCRIPT_DIR=$INSTALL_DIR/startup GRID_PATH=$GRID_HOME/bin:$GRID_HOME/OPatch/:/usr/sbin:$PATH DB_PATH=$DB_HOME/bin:$DB_HOME/OPatch/:/usr/sbin:$PATH GRID_LD_LIBRARY_PATH=$GRID_HOME/lib:/usr/lib:/lib DB_LD_LIBRARY_PATH=$DB_HOME/lib:/usr/lib:/lib
 ---> Running in 7c6a76bc6baf
Removing intermediate container 7c6a76bc6baf
 ---> 1666646716e1
Step 5/11 : COPY $GRID_SW_INSTALL_RSP $INSTALL_GRID_PATCH $SETUP_LINUX_FILE $GRID_SETUP_FILE $INSTALL_GRID_BINARIES_FILE $FIXUP_PREQ_FILE $DB_SETUP_FILE $CHECK_SPACE_FILE $DB_INSTALL_RSP $INSTALL_DB_BINARIES_FILE $ENABLE_RAC_FILE $GRID_HOME_CLEANUP $ORACLE_HOME_CLEANUP $INSTALL_FILE_1 $INSTALL_FILE_2 $INSTALL_SCRIPTS/
 ---> aeded06d0a00
Step 6/11 : COPY $RUN_FILE $ADDNODE $ADDNODE_RSP $SETUPSSH $FUNCTIONS $CONFIGGRID $GRID_INSTALL_RSP $DBCA_RSP $PWD_FILE $CHECK_DB_FILE $USER_SCRIPTS_FILE $STOP_FILE $CHECK_DB_FILE $REMOTE_LISTENER_FILE $SETUPGRIDENV $DELNODE $RESET_OS_PASSWORD $MULTI_NODE_INSTALL $SCRIPT_DIR/
 ---> b9b139ebda70
Step 7/11 : RUN chmod 755 $INSTALL_SCRIPTS/*.sh && sync && $INSTALL_DIR/install/$CHECK_SPACE_FILE && $INSTALL_DIR/install/$SETUP_LINUX_FILE && $INSTALL_DIR/install/$GRID_SETUP_FILE && $INSTALL_DIR/install/$DB_SETUP_FILE && sed -e '/hard *memlock/s/^/#/g' -i /etc/security/limits.d/oracle-database-preinstall-19c.conf && su $GRID_USER -c "$INSTALL_DIR/install/$INSTALL_GRID_BINARIES_FILE EE $PATCH_NUMBER" && $INVENTORY/orainstRoot.sh && $GRID_HOME/root.sh && su $DB_USER -c "$INSTALL_DIR/install/$INSTALL_DB_BINARIES_FILE EE" && su $DB_USER -c "$INSTALL_DIR/install/$ENABLE_RAC_FILE" && $INVENTORY/orainstRoot.sh && $DB_HOME/root.sh && su $GRID_USER -c "$INSTALL_SCRIPTS/$GRID_HOME_CLEANUP" && su $DB_USER -c "$INSTALL_SCRIPTS/$ORACLE_HOME_CLEANUP" && $INSTALL_DIR/install/$FIXUP_PREQ_FILE && rm -rf $INSTALL_DIR/install && rm -rf $INSTALL_DIR/install && sync && chmod 755 $SCRIPT_DIR/*.sh && chmod 755 $SCRIPT_DIR/*.expect && chmod 666 $SCRIPT_DIR/*.rsp && echo "nohup $SCRIPT_DIR/runOracle.sh &" >> /etc/rc.local && rm -f /etc/rc.d/init.d/oracle-database-preinstall-19c-firstboot && mkdir -p $GRID_HOME/dockerinit && cp $GRID_HOME/bin/$DOCKERORACLEINIT $GRID_HOME/dockerinit/ && chown $GRID_USER:oinstall $GRID_HOME/dockerinit && chown root:oinstall $GRID_HOME/dockerinit/$DOCKERORACLEINIT && chmod 4755 $GRID_HOME/dockerinit/$DOCKERORACLEINIT && ln -s $GRID_HOME/dockerinit/$DOCKERORACLEINIT /usr/sbin/oracleinit && chmod +x /etc/rc.d/rc.local && rm -f /etc/sysctl.d/99-oracle-database-preinstall-19c-sysctl.conf && rm -f /etc/sysctl.d/99-sysctl.conf && sync
 ---> Running in c40a0e8ea8ed
Loaded plugins: ovl
No package openssh-client available.
Resolving Dependencies
--> Running transaction check
---> Package e2fsprogs.x86_64 0:1.42.9-16.el7 will be installed
...
Transaction Summary
================================================================================
Install  14 Packages (+109 Dependent packages)
Upgrade  (  9 Dependent packages)
Total download size: 70 M
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
...
Complete!
Loaded plugins: ovl
Cleaning repos: ol7_UEKR5 ol7_developer_EPEL ol7_latest
...
/opt/scripts/install/installGridBinaries.sh: line 57: : command not found
Launching Oracle Grid Infrastructure Setup Wizard...

[WARNING] [INS-13014] Target environment does not meet some optional requirements.
   CAUSE: Some of the optional prerequisites are not met. See logs for details. gridSetupActions2019-11-11_03-39-25AM.log
   ACTION: Identify the list of failed prerequisite checks from the log: gridSetupActions2019-11-11_03-39-25AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
The response file for this session can be found at:
 /u01/app/19.3.0/grid/install/response/grid_2019-11-11_03-39-25AM.rsp
You can find the log of this install session at:
 /tmp/GridSetupActions2019-11-11_03-39-25AM/gridSetupActions2019-11-11_03-39-25AM.log
As a root user, execute the following script(s):
        1. /u01/app/oraInventory/orainstRoot.sh
        2. /u01/app/19.3.0/grid/root.sh
Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes: [c40a0e8ea8ed]
Execute /u01/app/19.3.0/grid/root.sh on the following nodes: [c40a0e8ea8ed]
Successfully Setup Software with warning(s).
Moved the install session logs to:
 /u01/app/oraInventory/logs/GridSetupActions2019-11-11_03-39-25AM
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
Check /u01/app/19.3.0/grid/install/root_c40a0e8ea8ed_2019-11-11_03-41-46-398462346.log for the output of root script
Launching Oracle Database Setup Wizard...

[WARNING] [INS-13014] Target environment does not meet some optional requirements.
   CAUSE: Some of the optional prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/InstallActions2019-11-11_03-46-31AM/installActions2019-11-11_03-46-31AM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/InstallActions2019-11-11_03-46-31AM/installActions2019-11-11_03-46-31AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
The response file for this session can be found at:
 /u01/app/oracle/product/19.3.0/dbhome_1/install/response/db_2019-11-11_03-46-31AM.rsp
You can find the log of this install session at:
 /u01/app/oraInventory/logs/InstallActions2019-11-11_03-46-31AM/installActions2019-11-11_03-46-31AM.log
As a root user, execute the following script(s):
        1. /u01/app/oracle/product/19.3.0/dbhome_1/root.sh
Execute /u01/app/oracle/product/19.3.0/dbhome_1/root.sh on the following nodes: [c40a0e8ea8ed]
Successfully Setup Software with warning(s).
(if /u01/app/oracle/product/19.3.0/dbhome_1/bin/skgxpinfo | grep rds;\
then \
    make -f /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/ins_rdbms.mk ipc_rds; \
else \
    make -f /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/ins_rdbms.mk ipc_g; \
fi)
make[1]: Entering directory `/'
rm -f /u01/app/oracle/product/19.3.0/dbhome_1/lib/libskgxp19.so
cp /u01/app/oracle/product/19.3.0/dbhome_1/lib//libskgxpg.so /u01/app/oracle/product/19.3.0/dbhome_1/lib/libskgxp19.so
make[1]: Leaving directory `/'
 - Use stub SKGXN library
cp /u01/app/oracle/product/19.3.0/dbhome_1/lib/libskgxns.so /u01/app/oracle/product/19.3.0/dbhome_1/lib/libskgxn2.so
/usr/bin/ar d /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/libknlopt.a ksnkcs.o
/usr/bin/ar cr /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/libknlopt.a /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/kcsm.o
chmod 755 /u01/app/oracle/product/19.3.0/dbhome_1/bin
 - Linking Oracle
rm -f /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/oracle
/u01/app/oracle/product/19.3.0/dbhome_1/bin/orald -o /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/oracle -m64 -z noexecstack -Wl,--disable-new-dtags -L/u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/ -L/u01/app/oracle/product/19.3.0/dbhome_1/lib/ -L/u01/app/oracle/product/19.3.0/dbhome_1/lib/stubs/ -Wl,-E /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/opimai.o /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/ssoraed.o /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/ttcsoi.o -Wl,--whole-archive -lperfsrv19 -Wl,--no-whole-archive /u01/app/oracle/product/19.3.0/dbhome_1/lib/nautab.o /u01/app/oracle/product/19.3.0/dbhome_1/lib/naeet.o /u01/app/oracle/product/19.3.0/dbhome_1/lib/naect.o /u01/app/oracle/product/19.3.0/dbhome_1/lib/naedhs.o /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/config.o -ldmext -lserver19 -lodm19 -lofs -lcell19 -lnnet19 -lskgxp19 -lsnls19 -lnls19 -lcore19 -lsnls19 -lnls19 -lcore19 -lsnls19 -lnls19 -lxml19 -lcore19 -lunls19 -lsnls19 -lnls19 -lcore19 -lnls19 -lclient19 -lvsnst19 -lcommon19 -lgeneric19 -lknlopt -loraolap19 -lskjcx19 -lslax19 -lpls19 -lrt -lplp19 -ldmext -lserver19 -lclient19 -lvsnst19 -lcommon19 -lgeneric19 `if [ -f /u01/app/oracle/product/19.3.0/dbhome_1/lib/libavserver19.a ] ; then echo "-lavserver19" ; else echo "-lavstub19"; fi` `if [ -f /u01/app/oracle/product/19.3.0/dbhome_1/lib/libavclient19.a ] ; then echo "-lavclient19" ; fi` -lknlopt -lslax19 -lpls19 -lrt -lplp19 -ljavavm19 -lserver19 -lwwg `cat /u01/app/oracle/product/19.3.0/dbhome_1/lib/ldflags` -lncrypt19 -lnsgr19 -lnzjs19 -ln19 -lnl19 -lngsmshd19 -lnro19 `cat /u01/app/oracle/product/19.3.0/dbhome_1/lib/ldflags` -lncrypt19 -lnsgr19 -lnzjs19 -ln19 -lnl19 -lngsmshd19 -lnnzst19 -lzt19 -lztkg19 -lmm -lsnls19 -lnls19 -lcore19 -lsnls19 -lnls19 -lcore19 -lsnls19 -lnls19 -lxml19 -lcore19 -lunls19 -lsnls19 -lnls19 -lcore19 -lnls19 -lztkg19 `cat /u01/app/oracle/product/19.3.0/dbhome_1/lib/ldflags` -lncrypt19 -lnsgr19 -lnzjs19 -ln19 -lnl19 -lngsmshd19 -lnro19 `cat /u01/app/oracle/product/19.3.0/dbhome_1/lib/ldflags` -lncrypt19 -lnsgr19 -lnzjs19 -ln19 -lnl19 -lngsmshd19 -lnnzst19 -lzt19 -lztkg19 -lsnls19 -lnls19 -lcore19 -lsnls19 -lnls19 -lcore19 -lsnls19 -lnls19 -lxml19 -lcore19 -lunls19 -lsnls19 -lnls19 -lcore19 -lnls19 `if /usr/bin/ar tv /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/libknlopt.a | grep "kxmnsd.o" > /dev/null 2>&1 ; then echo " " ; else echo "-lordsdo19 -lserver19"; fi` -L/u01/app/oracle/product/19.3.0/dbhome_1/ctx/lib/ -lctxc19 -lctx19 -lzx19 -lgx19 -lctx19 -lzx19 -lgx19 -lclscest19 -loevm -lclsra19 -ldbcfg19 -lhasgen19 -lskgxn2 -lnnzst19 -lzt19 -lxml19 -lgeneric19 -locr19 -locrb19 -locrutl19 -lhasgen19 -lskgxn2 -lnnzst19 -lzt19 -lxml19 -lgeneric19 -lgeneric19 -lorazip -loraz -llzopro5 -lorabz2 -lorazstd -loralz4 -lipp_z -lipp_bz2 -lippdc -lipps -lippcore -lippcp -lsnls19 -lnls19 -lcore19 -lsnls19 -lnls19 -lcore19 -lsnls19 -lnls19 -lxml19 -lcore19 -lunls19 -lsnls19 -lnls19 -lcore19 -lnls19 -lsnls19 -lunls19 -lsnls19 -lnls19 -lcore19 -lsnls19 -lnls19 -lcore19 -lsnls19 -lnls19 -lxml19 -lcore19 -lunls19 -lsnls19 -lnls19 -lcore19 -lnls19 -lasmclnt19 -lcommon19 -lcore19 -ledtn19 -laio -lons -lmql1 -lipc1 -lfthread19 `cat /u01/app/oracle/product/19.3.0/dbhome_1/lib/sysliblist` -Wl,-rpath,/u01/app/oracle/product/19.3.0/dbhome_1/lib -lm `cat /u01/app/oracle/product/19.3.0/dbhome_1/lib/sysliblist` -ldl -lm -L/u01/app/oracle/product/19.3.0/dbhome_1/lib `test -x /usr/bin/hugeedit -a -r /usr/lib64/libhugetlbfs.so && test -r /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/shugetlbfs.o && echo -Wl,-zcommon-page-size=2097152 -Wl,-zmax-page-size=2097152 -lhugetlbfs`
rm -f /u01/app/oracle/product/19.3.0/dbhome_1/bin/oracle
mv /u01/app/oracle/product/19.3.0/dbhome_1/rdbms/lib/oracle /u01/app/oracle/product/19.3.0/dbhome_1/bin/oracle
chmod 6751 /u01/app/oracle/product/19.3.0/dbhome_1/bin/oracle
(if [ ! -f /u01/app/oracle/product/19.3.0/dbhome_1/bin/crsd.bin ]; then \
    getcrshome="/u01/app/oracle/product/19.3.0/dbhome_1/srvm/admin/getcrshome" ; \
    if [ -f "$getcrshome" ]; then \
        crshome="`$getcrshome`"; \
        if [ -n "$crshome" ]; then \
            if [ $crshome != /u01/app/oracle/product/19.3.0/dbhome_1 ]; then \
                oracle="/u01/app/oracle/product/19.3.0/dbhome_1/bin/oracle"; \
                $crshome/bin/setasmgidwrap oracle_binary_path=$oracle; \
            fi \
        fi \
    fi \
fi\
);
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
Check /u01/app/oracle/product/19.3.0/dbhome_1/install/root_c40a0e8ea8ed_2019-11-11_03-51-20-097025139.log for the output of root script
Preparing...                          ########################################
Updating / installing...
cvuqdisk-1.0.10-1                     ########################################
Removing intermediate container c40a0e8ea8ed
 ---> 947c42f51105
Step 8/11 : USER grid
 ---> Running in f12659d9d383
Removing intermediate container f12659d9d383
 ---> 06b8b4dcd15e
Step 9/11 : WORKDIR /home/grid
 ---> Running in 0106cd633ef9
Removing intermediate container 0106cd633ef9
 ---> c2ad15635695
Step 10/11 : VOLUME ["/common_scripts"]
 ---> Running in d817b9de8b29
Removing intermediate container d817b9de8b29
 ---> c0465d5925d5
Step 11/11 : CMD ["/usr/sbin/oracleinit"]
 ---> Running in 499d3e1cf5d9
Removing intermediate container 499d3e1cf5d9
 ---> 049f87053beb
Successfully built 049f87053beb
Successfully tagged oracle/database-rac:19.3.0

Oracle Database Docker Image for Real Application Clusters (RAC) version 19.3.0 is ready to be extended:
 --> oracle/database-rac:19.3.0
Build completed in 3463 seconds.

real    57m55.524s
user    0m16.831s
sys     0m20.288s

At this point the RAC database Docker image is ready. The Linux base image used is the slim variant.

$ docker images
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
oracle/database-rac   19.3.0              049f87053beb        About an hour ago   20.6GB
oraclelinux           7-slim              874477adb545        3 months ago        118MB

Disk usage at this point:

$ df -h
Filesystem                   Size  Used Avail Use% Mounted on
devtmpfs                     3.8G     0  3.8G   0% /dev
tmpfs                        3.8G     0  3.8G   0% /dev/shm
tmpfs                        3.8G  8.5M  3.8G   1% /run
tmpfs                        3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/mapper/vg_main-lv_root   76G   27G   49G  36% /
/dev/sda1                    497M  125M  373M  26% /boot
vagrant                      1.9T  1.2T  698G  63% /vagrant
tmpfs                        771M     0  771M   0% /run/user/1000

In fact, the database and GI installation media can be deleted at this point.
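
For example, removing the two zip files copied earlier (same relative path as in the copy step above):

cd docker-images/OracleDatabase/RAC/OracleRealApplicationClusters/dockerfiles/19.3.0
rm -f LINUX.X64_193000_db_home.zip LINUX.X64_193000_grid_home.zip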

Create the shared host-resolution file

sudo mkdir /opt/containers
sudo touch /opt/containers/rac_host_file

Prepare the shared disk (block device)

Stop the VM: vagrant halt
Attach a 50G disk with VBoxManage createmedium disk and VBoxManage storageattach (see the sketch below).
Start the VM: vagrant up
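
On the Windows host, the two VBoxManage commands might look like the following. This is only a sketch: the disk file name, the VM name (ol7-vagrant-rac appears in the docker info output above), the SATA port number, and the controller name "SATA Controller" are assumptions; check yours with VBoxManage list vms and VBoxManage showvminfo <vmname>.

VBoxManage createmedium disk --filename asm_disk1.vdi --size 51200 --format VDI
VBoxManage storageattach ol7-vagrant-rac --storagectl "SATA Controller" --port 3 --device 0 --type hdd --medium asm_disk1.vdi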
Confirm the new disk is visible; in this example it is sdc:

$ lsblk
NAME                MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdb                   8:16   0 15.6G  0 disk
sdc                   8:32   0   50G  0 disk
sda                   8:0    0   80G  0 disk
├─sda2                8:2    0   36G  0 part
│ ├─vg_main-lv_swap 252:1    0    4G  0 lvm  [SWAP]
│ └─vg_main-lv_root 252:0    0 75.5G  0 lvm  /
├─sda3                8:3    0 43.5G  0 part
│ └─vg_main-lv_root 252:0    0 75.5G  0 lvm  /
└─sda1                8:1    0  500M  0 part /boot

Initialize the disk to make sure there is no filesystem on it:

$ sudo dd if=/dev/zero of=/dev/sdc  bs=8k count=10000
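
If you prefer to remove just the filesystem signatures instead of zeroing the first ~80 MB, wipefs (part of util-linux) is an alternative:

sudo wipefs -a /dev/sdc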

Password management

The password set below is shared by the oracle and grid OS users as well as the database.

mkdir /opt/.secrets/
openssl rand -hex 64 -out /opt/.secrets/pwd.key
-- write the cleartext password to a temporary file
echo Oracle.123# >/opt/.secrets/common_os_pwdfile
-- encrypt it for storage
openssl enc -aes-256-cbc -salt -in /opt/.secrets/common_os_pwdfile -out /opt/.secrets/common_os_pwdfile.enc -pass file:/opt/.secrets/pwd.key
-- remove the temporary file
rm -f /opt/.secrets/common_os_pwdfile
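
To sanity-check that the encrypted file decrypts back to the expected password, standard openssl decryption mirrors the command above:

openssl enc -d -aes-256-cbc -in /opt/.secrets/common_os_pwdfile.enc -pass file:/opt/.secrets/pwd.key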

Create the first RAC node: the racnode1 container

First, create the container:

docker create -t -i \
  --hostname racnode1 \
  --volume /boot:/boot:ro \
  --volume /dev/shm \
  --tmpfs /dev/shm:rw,exec,size=4G \
  --volume /opt/containers/rac_host_file:/etc/hosts  \
  --volume /opt/.secrets:/run/secrets \
  --dns-search=example.com \
  --device=/dev/sdc:/dev/asm_disk1  \
  --privileged=false  \
  --cap-add=SYS_NICE \
  --cap-add=SYS_RESOURCE \
  --cap-add=NET_ADMIN \
  -e NODE_VIP=172.16.1.160 \
  -e VIP_HOSTNAME=racnode1-vip \
  -e PRIV_IP=192.168.17.150 \
  -e PRIV_HOSTNAME=racnode1-priv \
  -e PUBLIC_IP=172.16.1.150 \
  -e PUBLIC_HOSTNAME=racnode1 \
  -e SCAN_NAME=racnode-scan \
  -e SCAN_IP=172.16.1.70 \
  -e OP_TYPE=INSTALL \
  -e DOMAIN=example.com \
  -e ASM_DEVICE_LIST=/dev/asm_disk1 \
  -e ASM_DISCOVERY_DIR=/dev \
  -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \
  -e PWD_KEY=pwd.key \
  --restart=always \
  --tmpfs=/run \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  --cpu-rt-runtime=95000 \
  --ulimit rtprio=99 \
  --name racnode1 \
  oracle/database-rac:19.3.0

Check the status:

$ docker ps -a
CONTAINER ID        IMAGE                        COMMAND                  CREATED             STATUS              PORTS               NAMES
aa88f55d68cd        oracle/database-rac:19.3.0   "/usr/sbin/oracleinit"   5 seconds ago       Created                                 racnode1

Configure racnode1's networking:

docker network disconnect bridge racnode1
docker network connect rac_pub1_nw --ip 172.16.1.150 racnode1
docker network connect rac_priv1_nw --ip 192.168.17.150  racnode1
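
You can verify the assigned addresses with the standard docker CLI, for example:

docker network inspect rac_pub1_nw    # racnode1 should appear under "Containers" with 172.16.1.150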

Start the first container:

docker start racnode1

Follow the log:

docker logs -f racnode1

Curiously, the dbca process can be seen both inside the container and on the host.
The following command gets you a shell inside the container:

docker exec -it racnode1 bash

The dbca logs can be found under /u01/app/oracle/cfgtoollogs/dbca/ORCLCDB.
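
While database creation is running, a couple of informal ways to watch progress from the host (the log path is the one given above):

ps -ef | grep [d]bca                                                    # dbca is visible on the host and in the container
docker exec racnode1 ls -ltr /u01/app/oracle/cfgtoollogs/dbca/ORCLCDB   # list the dbca logs, newest last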

Below is the complete log of a successful run. Two things stand out: the memory is still configured on the small side (note the failed physical-memory check), and the whole process took 1 hour 50 minutes:

$ docker logs -f racnode1
PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=racnode1
TERM=xterm
NODE_VIP=172.16.1.160
NODE_VIP=172.16.1.160
VIP_HOSTNAME=racnode1-vip
PRIV_IP=192.168.17.150
PRIV_HOSTNAME=racnode1-priv
PUBLIC_IP=172.16.1.150
PUBLIC_HOSTNAME=racnode1
SCAN_NAME=racnode-scan
SCAN_IP=172.16.1.70
OP_TYPE=INSTALL
DOMAIN=example.com
ASM_DEVICE_LIST=/dev/asm_disk1
ASM_DISCOVERY_DIR=/dev
COMMON_OS_PWD_FILE=common_os_pwdfile.enc
PWD_KEY=pwd.key
SETUP_LINUX_FILE=setupLinuxEnv.sh
INSTALL_DIR=/opt/scripts
GRID_BASE=/u01/app/grid
GRID_HOME=/u01/app/19.3.0/grid
INSTALL_FILE_1=LINUX.X64_193000_grid_home.zip
GRID_INSTALL_RSP=gridsetup_19c.rsp
GRID_SW_INSTALL_RSP=grid_sw_install_19c.rsp
GRID_SETUP_FILE=setupGrid.sh
FIXUP_PREQ_FILE=fixupPreq.sh
INSTALL_GRID_BINARIES_FILE=installGridBinaries.sh
INSTALL_GRID_PATCH=applyGridPatch.sh
INVENTORY=/u01/app/oraInventory
CONFIGGRID=configGrid.sh
ADDNODE=AddNode.sh
DELNODE=DelNode.sh
ADDNODE_RSP=grid_addnode.rsp
SETUPSSH=setupSSH.expect
DOCKERORACLEINIT=dockeroracleinit
GRID_USER_HOME=/home/grid
SETUPGRIDENV=setupGridEnv.sh
RESET_OS_PASSWORD=resetOSPassword.sh
MULTI_NODE_INSTALL=MultiNodeInstall.py
DB_BASE=/u01/app/oracle
DB_HOME=/u01/app/oracle/product/19.3.0/dbhome_1
INSTALL_FILE_2=LINUX.X64_193000_db_home.zip
DB_INSTALL_RSP=db_sw_install_19c.rsp
DBCA_RSP=dbca_19c.rsp
DB_SETUP_FILE=setupDB.sh
PWD_FILE=setPassword.sh
RUN_FILE=runOracle.sh
STOP_FILE=stopOracle.sh
ENABLE_RAC_FILE=enableRAC.sh
CHECK_DB_FILE=checkDBStatus.sh
USER_SCRIPTS_FILE=runUserScripts.sh
REMOTE_LISTENER_FILE=remoteListener.sh
INSTALL_DB_BINARIES_FILE=installDBBinaries.sh
GRID_HOME_CLEANUP=GridHomeCleanup.sh
ORACLE_HOME_CLEANUP=OracleHomeCleanup.sh
DB_USER=oracle
GRID_USER=grid
FUNCTIONS=functions.sh
COMMON_SCRIPTS=/common_scripts
CHECK_SPACE_FILE=checkSpace.sh
RESET_FAILED_UNITS=resetFailedUnits.sh
SET_CRONTAB=setCrontab.sh
CRONTAB_ENTRY=crontabEntry
EXPECT=/usr/bin/expect
BIN=/usr/sbin
container=true
INSTALL_SCRIPTS=/opt/scripts/install
SCRIPT_DIR=/opt/scripts/startup
GRID_PATH=/u01/app/19.3.0/grid/bin:/u01/app/19.3.0/grid/OPatch/:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
DB_PATH=/u01/app/oracle/product/19.3.0/dbhome_1/bin:/u01/app/oracle/product/19.3.0/dbhome_1/OPatch/:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
GRID_LD_LIBRARY_PATH=/u01/app/19.3.0/grid/lib:/usr/lib:/lib
DB_LD_LIBRARY_PATH=/u01/app/oracle/product/19.3.0/dbhome_1/lib:/usr/lib:/lib
HOME=/home/grid
Failed to parse kernel command line, ignoring: No such file or directory
systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
Detected virtualization other.
Detected architecture x86-64.

Welcome to Oracle Linux Server 7.6!

Set hostname to <racnode1>.
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
/usr/lib/systemd/system-generators/systemd-fstab-generator failed with error code 1.
Binding to IPv6 address not available since kernel does not support IPv6.
Binding to IPv6 address not available since kernel does not support IPv6.
Cannot add dependency job for unit display-manager.service, ignoring: Unit not found.
[  OK  ] Reached target Swap.
[  OK  ] Started Dispatch Password Requests to Console Directory Watch.
[  OK  ] Started Forward Password Requests to Wall Directory Watch.
[  OK  ] Created slice Root Slice.
[  OK  ] Listening on /dev/initctl Compatibility Named Pipe.
[  OK  ] Created slice System Slice.
[  OK  ] Created slice User and Session Slice.
[  OK  ] Reached target Slices.
[  OK  ] Listening on Journal Socket.
         Starting Read and set NIS domainname from /etc/sysconfig/network...
         Starting Configure read-only root support...
         Starting Journal Service...
         Starting Rebuild Hardware Database...
Couldn't determine result for ConditionKernelCommandLine=|rd.modules-load for systemd-modules-load.service, assuming failed: No such file or directory
Couldn't determine result for ConditionKernelCommandLine=|modules-load for systemd-modules-load.service, assuming failed: No such file or directory
[  OK  ] Created slice system-getty.slice.
[  OK  ] Listening on Delayed Shutdown Socket.
[  OK  ] Reached target Local Encrypted Volumes.
[  OK  ] Reached target Local File Systems (Pre).
[  OK  ] Reached target RPC Port Mapper.
[  OK  ] Started Journal Service.
[  OK  ] Started Read and set NIS domainname from /etc/sysconfig/network.
         Starting Flush Journal to Persistent Storage...
[  OK  ] Started Flush Journal to Persistent Storage.
[  OK  ] Started Configure read-only root support.
         Starting Load/Save Random Seed...
[  OK  ] Reached target Local File Systems.
         Starting Mark the need to relabel after reboot...
         Starting Preprocess NFS configuration...
         Starting Rebuild Journal Catalog...
         Starting Create Volatile Files and Directories...
[  OK  ] Started Load/Save Random Seed.
[  OK  ] Started Mark the need to relabel after reboot.
[  OK  ] Started Preprocess NFS configuration.
[  OK  ] Started Rebuild Journal Catalog.
[  OK  ] Started Create Volatile Files and Directories.
         Starting Update UTMP about System Boot/Shutdown...
         Mounting RPC Pipe File System...
[FAILED] Failed to mount RPC Pipe File System.
See 'systemctl status var-lib-nfs-rpc_pipefs.mount' for details.
[DEPEND] Dependency failed for rpc_pipefs.target.
[DEPEND] Dependency failed for RPC security service for NFS client and server.
[  OK  ] Started Update UTMP about System Boot/Shutdown.
[  OK  ] Started Rebuild Hardware Database.
         Starting Update is Completed...
[  OK  ] Started Update is Completed.
[  OK  ] Reached target System Initialization.
[  OK  ] Started Flexible branding.
[  OK  ] Reached target Paths.
[  OK  ] Started Daily Cleanup of Temporary Directories.
[  OK  ] Reached target Timers.
[  OK  ] Listening on D-Bus System Message Bus Socket.
[  OK  ] Listening on RPCbind Server Activation Socket.
         Starting RPC bind service...
[  OK  ] Reached target Sockets.
[  OK  ] Reached target Basic System.
         Starting Self Monitoring and Reporting Technology (SMART) Daemon...
         Starting OpenSSH Server Key Generation...
[  OK  ] Started D-Bus System Message Bus.
         Starting Login Service...
         Starting GSSAPI Proxy Daemon...
         Starting Resets System Activity Logs...
         Starting LSB: Bring up/down networking...
[  OK  ] Started RPC bind service.
         Starting Cleanup of Temporary Directories...
[  OK  ] Started Resets System Activity Logs.
[  OK  ] Started Login Service.
[  OK  ] Started GSSAPI Proxy Daemon.
[  OK  ] Reached target NFS client services.
[  OK  ] Reached target Remote File Systems (Pre).
[  OK  ] Reached target Remote File Systems.
         Starting Permit User Sessions...
[  OK  ] Started Cleanup of Temporary Directories.
[  OK  ] Started Permit User Sessions.
[  OK  ] Started Command Scheduler.
[  OK  ] Started OpenSSH Server Key Generation.
[  OK  ] Started LSB: Bring up/down networking.
[  OK  ] Reached target Network.
         Starting OpenSSH server daemon...
         Starting /etc/rc.d/rc.local Compatibility...
[  OK  ] Reached target Network is Online.
         Starting Notify NFS peers of a restart...
[  OK  ] Started Notify NFS peers of a restart.
[  OK  ] Started /etc/rc.d/rc.local Compatibility.
[  OK  ] Started Console Getty.
[  OK  ] Reached target Login Prompts.
[  OK  ] Started OpenSSH server daemon.
11-11-2019 06:55:07 UTC : : Process id of the program :
11-11-2019 06:55:07 UTC : : #################################################
11-11-2019 06:55:07 UTC : : Starting Grid Installation
11-11-2019 06:55:07 UTC : : #################################################
11-11-2019 06:55:07 UTC : : Pre-Grid Setup steps are in process
11-11-2019 06:55:07 UTC : : Process id of the program :
11-11-2019 06:55:07 UTC : : Disable failed service var-lib-nfs-rpc_pipefs.mount
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
11-11-2019 06:55:07 UTC : : Resetting Failed Services
11-11-2019 06:55:07 UTC : : Sleeping for 60 seconds
[  OK  ] Started Self Monitoring and Reporting Technology (SMART) Daemon.
[  OK  ] Reached target Multi-User System.
[  OK  ] Reached target Graphical Interface.
         Starting Update UTMP about System Runlevel Changes...
[  OK  ] Started Update UTMP about System Runlevel Changes.

Oracle Linux Server 7.6
Kernel 4.14.35-1902.6.6.el7uek.x86_64 on an x86_64

racnode1 login: 11-11-2019 06:56:07 UTC : : Systemctl state is running!
11-11-2019 06:56:07 UTC : : Setting correct permissions for /bin/ping
11-11-2019 06:56:08 UTC : : Public IP is set to 172.16.1.150
11-11-2019 06:56:08 UTC : : RAC Node PUBLIC Hostname is set to racnode1
11-11-2019 06:56:08 UTC : : racnode1 already exists : 172.16.1.150 racnode1.example.com racnode1 192.168.17.150 racnode1-priv.example.com racnode1-priv 172.16.1.160 racnode1-vip.example.com racnode1-vip, no update required
11-11-2019 06:56:08 UTC : : racnode1-priv already exists : 192.168.17.150 racnode1-priv.example.com racnode1-priv, no update required
11-11-2019 06:56:08 UTC : : racnode1-vip already exists : 172.16.1.160 racnode1-vip.example.com racnode1-vip, no update required
11-11-2019 06:56:08 UTC : : racnode-scan already exists : 172.16.1.70 racnode-scan.example.com racnode-scan, no update required
11-11-2019 06:56:08 UTC : : Preapring Device list
11-11-2019 06:56:08 UTC : : Changing Disk permission and ownership /dev/asm_disk1
11-11-2019 06:56:08 UTC : : DNS_SERVERS is set to empty. /etc/resolv.conf will use default dns docker embedded server.
11-11-2019 06:56:08 UTC : : #####################################################################
11-11-2019 06:56:08 UTC : : RAC setup will begin in 2 minutes
11-11-2019 06:56:08 UTC : : ####################################################################
11-11-2019 06:56:10 UTC : : ###################################################
11-11-2019 06:56:10 UTC : : Pre-Grid Setup steps completed
11-11-2019 06:56:10 UTC : : ###################################################
11-11-2019 06:56:10 UTC : : Checking if grid is already configured
11-11-2019 06:56:10 UTC : : Process id of the program :
11-11-2019 06:56:10 UTC : : Public IP is set to 172.16.1.150
11-11-2019 06:56:10 UTC : : RAC Node PUBLIC Hostname is set to racnode1
11-11-2019 06:56:10 UTC : : Domain is defined to example.com
11-11-2019 06:56:10 UTC : : Default setting of AUTO GNS VIP set to false. If you want to use AUTO GNS VIP, please pass DHCP_CONF as an env parameter set to true
11-11-2019 06:56:10 UTC : : RAC VIP set to 172.16.1.160
11-11-2019 06:56:10 UTC : : RAC Node VIP hostname is set to racnode1-vip
11-11-2019 06:56:10 UTC : : SCAN_NAME name is racnode-scan
11-11-2019 06:56:10 UTC : : SCAN PORT is set to empty string. Setting it to 1521 port.
11-11-2019 06:56:10 UTC : : 172.16.1.70
11-11-2019 06:56:10 UTC : : SCAN Name resolving to IP. Check Passed!
11-11-2019 06:56:11 UTC : : SCAN_IP name is 172.16.1.70
11-11-2019 06:56:11 UTC : : RAC Node PRIV IP is set to 192.168.17.150
11-11-2019 06:56:11 UTC : : RAC Node private hostname is set to racnode1-priv
11-11-2019 06:56:11 UTC : : CMAN_NAME set to the empty string
11-11-2019 06:56:11 UTC : : CMAN_IP set to the empty string
11-11-2019 06:56:11 UTC : : Cluster Name is not defined
11-11-2019 06:56:11 UTC : : Cluster name is set to 'racnode-c'
11-11-2019 06:56:11 UTC : : Password file generated
11-11-2019 06:56:11 UTC : : Common OS Password string is set for Grid user
11-11-2019 06:56:11 UTC : : Common OS Password string is set for Oracle user
11-11-2019 06:56:11 UTC : : Common OS Password string is set for Oracle Database
11-11-2019 06:56:11 UTC : : Setting CONFIGURE_GNS to false
11-11-2019 06:56:11 UTC : : GRID_RESPONSE_FILE env variable set to empty. configGrid.sh will use standard cluster responsefile
11-11-2019 06:56:11 UTC : : Location for User script SCRIPT_ROOT set to /common_scripts
11-11-2019 06:56:11 UTC : : IGNORE_CVU_CHECKS is set to true
11-11-2019 06:56:11 UTC : : Oracle SID is set to ORCLCDB
11-11-2019 06:56:11 UTC : : Oracle PDB name is set to ORCLPDB
11-11-2019 06:56:11 UTC : : Check passed for network card eth1 for public IP 172.16.1.150
11-11-2019 06:56:11 UTC : : Public Netmask : 255.255.255.0
11-11-2019 06:56:11 UTC : : Check passed for network card eth0 for private IP 192.168.17.150
11-11-2019 06:56:11 UTC : : Building NETWORK_STRING to set networkInterfaceList in Grid Response File
11-11-2019 06:56:11 UTC : : Network InterfaceList set to eth1:172.16.1.0:1,eth0:192.168.17.0:5
11-11-2019 06:56:11 UTC : : Setting random password for grid user
11-11-2019 06:56:12 UTC : : Setting random password for oracle user
11-11-2019 06:56:13 UTC : : Calling setupSSH function
11-11-2019 06:56:13 UTC : : SSh will be setup among racnode1 nodes
11-11-2019 06:56:13 UTC : : Running SSH setup for grid user between nodes racnode1
11-11-2019 06:56:52 UTC : : Running SSH setup for oracle user between nodes racnode1
11-11-2019 06:57:00 UTC : : SSH check fine for the racnode1
11-11-2019 06:57:01 UTC : : SSH check fine for the oracle@racnode1
11-11-2019 06:57:01 UTC : : Preapring Device list
11-11-2019 06:57:01 UTC : : Changing Disk permission and ownership
11-11-2019 06:57:01 UTC : : ASM Disk size : 0
11-11-2019 06:57:01 UTC : : ASM Device list will be with failure groups /dev/asm_disk1,
11-11-2019 06:57:01 UTC : : ASM Device list will be groups /dev/asm_disk1
11-11-2019 06:57:01 UTC : : CLUSTER_TYPE env variable is set to STANDALONE, will not process GIMR DEVICE list as default Diskgroup is set to DATA. GIMR DEVICE List will be processed when CLUSTER_TYPE is set to DOMAIN for DSC
11-11-2019 06:57:01 UTC : : Nodes in the cluster racnode1
11-11-2019 06:57:01 UTC : : Setting Device permissions for RAC Install on racnode1
11-11-2019 06:57:01 UTC : : Preapring ASM Device list
11-11-2019 06:57:01 UTC : : Changing Disk permission and ownership
11-11-2019 06:57:01 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode1
11-11-2019 06:57:02 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode1
11-11-2019 06:57:02 UTC : : Populate Rac Env Vars on Remote Hosts
11-11-2019 06:57:02 UTC : : Command : su - $GRID_USER -c "ssh $node sudo echo \"export ASM_DEVICE_LIST=${ASM_DEVICE_LIST}\" >> /etc/rac_env_vars" execute on racnode1
11-11-2019 06:57:02 UTC : : Generating Reponsefile
11-11-2019 06:57:02 UTC : : Running cluvfy Checks
11-11-2019 06:57:02 UTC : : Performing Cluvfy Checks
11-11-2019 06:58:26 UTC : : Checking /tmp/cluvfy_check.txt if there is any failed check.

ERROR: PRVG-10467 : The default Oracle Inventory group could not be determined.

Verifying Physical Memory ...FAILED (PRVF-7530)
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...FAILED (PRVF-7573)
Verifying Free Space: racnode1:/usr,racnode1:/var,racnode1:/etc,racnode1:/sbin,racnode1:/tmp ...PASSED
Verifying User Existence: grid ...
  Verifying Users With Same UID: 54332 ...PASSED
Verifying User Existence: grid ...PASSED
Verifying Group Existence: asmadmin ...PASSED
Verifying Group Existence: asmdba ...PASSED
Verifying Group Membership: asmdba ...PASSED
Verifying Group Membership: asmadmin ...PASSED
Verifying Run Level ...PASSED
Verifying Hard Limit: maximum open file descriptors ...PASSED
Verifying Soft Limit: maximum open file descriptors ...PASSED
Verifying Hard Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum stack size ...PASSED
Verifying Architecture ...PASSED
Verifying OS Kernel Version ...PASSED
Verifying OS Kernel Parameter: semmsl ...PASSED
Verifying OS Kernel Parameter: semmns ...PASSED
Verifying OS Kernel Parameter: semopm ...PASSED
Verifying OS Kernel Parameter: semmni ...PASSED
Verifying OS Kernel Parameter: shmmax ...PASSED
Verifying OS Kernel Parameter: shmmni ...PASSED
Verifying OS Kernel Parameter: shmall ...FAILED (PRVG-1201)
Verifying OS Kernel Parameter: file-max ...PASSED
Verifying OS Kernel Parameter: aio-max-nr ...FAILED (PRVG-1205)
Verifying OS Kernel Parameter: panic_on_oops ...PASSED
Verifying Package: kmod-20-21 (x86_64) ...PASSED
Verifying Package: kmod-libs-20-21 (x86_64) ...PASSED
Verifying Package: binutils-2.23.52.0.1 ...PASSED
Verifying Package: compat-libcap1-1.10 ...PASSED
Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED
Verifying Package: sysstat-10.1.5 ...PASSED
Verifying Package: ksh ...PASSED
Verifying Package: make-3.82 ...PASSED
Verifying Package: glibc-2.17 (x86_64) ...PASSED
Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED
Verifying Package: libaio-0.3.109 (x86_64) ...PASSED
Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED
Verifying Package: nfs-utils-1.2.3-15 ...PASSED
Verifying Package: smartmontools-6.2-4 ...PASSED
Verifying Package: net-tools-2.0-0.17 ...PASSED
Verifying Port Availability for component "Oracle Remote Method Invocation (ORMI)" ...PASSED
Verifying Port Availability for component "Oracle Notification Service (ONS)" ...PASSED
Verifying Port Availability for component "Oracle Cluster Synchronization Services (CSSD)" ...PASSED
Verifying Port Availability for component "Oracle Notification Service (ONS) Enterprise Manager support" ...PASSED
Verifying Port Availability for component "Oracle Database Listener" ...PASSED
Verifying Users With Same UID: 0 ...PASSED
Verifying Current Group ID ...PASSED
Verifying Root user consistency ...PASSED
Verifying Host name ...PASSED
Verifying Node Connectivity ...
  Verifying Hosts File ...PASSED
  Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Multicast or broadcast check ...PASSED
Verifying ASM Integrity ...PASSED
Verifying Device Checks for ASM ...
  Verifying Access Control List check ...PASSED
Verifying Device Checks for ASM ...PASSED
Verifying Network Time Protocol (NTP) ...
  Verifying '/etc/ntp.conf' ...PASSED
Verifying Network Time Protocol (NTP) ...FAILED (PRVG-1017)
Verifying Same core file name pattern ...PASSED
Verifying User Mask ...PASSED
Verifying User Not In Group "root": grid ...PASSED
Verifying Time zone consistency ...PASSED
Verifying VIP Subnet configuration check ...PASSED
Verifying resolv.conf Integrity ...FAILED (PRVG-10048)
Verifying DNS/NIS name service ...
  Verifying Name Service Switch Configuration File Integrity ...PASSED
Verifying DNS/NIS name service ...FAILED (PRVG-1101)
Verifying Single Client Access Name (SCAN) ...WARNING (PRVG-11368)
Verifying Domain Sockets ...PASSED
Verifying /boot mount ...PASSED
Verifying Daemon "avahi-daemon" not configured and running ...PASSED
Verifying Daemon "proxyt" not configured and running ...PASSED
Verifying loopback network interface address ...PASSED
Verifying Oracle base: /u01/app/grid ...
  Verifying '/u01/app/grid' ...PASSED
Verifying Oracle base: /u01/app/grid ...PASSED
Verifying User Equivalence ...PASSED
Verifying RPM Package Manager database ...INFORMATION (PRVG-11250)
Verifying Network interface bonding status of private interconnect network interfaces ...PASSED
Verifying /dev/shm mounted as temporary file system ...PASSED
Verifying File system mount options for path /var ...PASSED
Verifying DefaultTasksMax parameter ...PASSED
Verifying zeroconf check ...PASSED
Verifying ASM Filter Driver configuration ...PASSED
Verifying Systemd login manager IPC parameter ...PASSED
Verifying Access control attributes for cluster manifest file ...PASSED

Pre-check for cluster services setup was unsuccessful on all the nodes.

Failures were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying Physical Memory ...FAILED
racnode1: PRVF-7530 : Sufficient physical memory is not available on node "racnode1" [Required physical memory = 8GB (8388608.0KB)]

Verifying Swap Size ...FAILED
racnode1: PRVF-7573 : Sufficient swap size is not available on node "racnode1" [Required = 7.5283GB (7893968.0KB) ; Found = 4GB (4194300.0KB)]

Verifying OS Kernel Parameter: shmall ...FAILED
racnode1: PRVG-1201 : OS kernel parameter "shmall" does not have expected configured value on node "racnode1" [Expected = "2251799813685247" ; Current = "18446744073692774000"; Configured = "1073741824"].

Verifying OS Kernel Parameter: aio-max-nr ...FAILED
racnode1: PRVG-1205 : OS kernel parameter "aio-max-nr" does not have expected current value on node "racnode1" [Expected = "1048576" ; Current = "65536"; Configured = "1048576"].

Verifying Network Time Protocol (NTP) ...FAILED
racnode1: PRVG-1017 : NTP configuration file "/etc/ntp.conf" is present on nodes "racnode1" on which NTP daemon or service was not running

Verifying resolv.conf Integrity ...FAILED
racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers "127.0.0.11".

Verifying DNS/NIS name service ...FAILED
PRVG-1101 : SCAN name "racnode-scan" failed to resolve

Verifying Single Client Access Name (SCAN) ...WARNING
racnode1: PRVG-11368 : A SCAN is recommended to resolve to "3" or more IP addresses, but SCAN "racnode-scan" resolves to only "172.16.1.70"

Verifying RPM Package Manager database ...INFORMATION
PRVG-11250 : The check "RPM Package Manager database" was not performed because it needs 'root' user privileges.

CVU operation performed: stage -pre crsinst
Date: Nov 11, 2019 6:57:15 AM
CVU home: /u01/app/19.3.0/grid/
User: grid

11-11-2019 06:58:27 UTC : : CVU Checks are ignored as IGNORE_CVU_CHECKS set to true. It is recommended to set IGNORE_CVU_CHECKS to false and meet all the cvu checks requirement. RAC installation might fail, if there are failed cvu checks.
11-11-2019 06:58:27 UTC : : Running Grid Installation
11-11-2019 07:00:07 UTC : : Running root.sh
11-11-2019 07:00:07 UTC : : Nodes in the cluster racnode1
11-11-2019 07:00:07 UTC : : Running root.sh on racnode1
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
11-11-2019 07:27:11 UTC : : Running post root.sh steps
11-11-2019 07:27:12 UTC : : Running post root.sh steps to setup Grid env
11-11-2019 07:33:02 UTC : : Checking Cluster Status
11-11-2019 07:33:02 UTC : : Nodes in the cluster
11-11-2019 07:33:02 UTC : : Removing /tmp/cluvfy_check.txt as cluster check has passed
11-11-2019 07:33:02 UTC : : Running User Script for grid user
11-11-2019 07:33:04 UTC : : Generating DB Responsefile
Running DB creation
11-11-2019 07:33:04 UTC : : Running DB creation
11-11-2019 08:39:01 UTC : : Checking DB status
11-11-2019 08:39:10 UTC : : #################################################################
11-11-2019 08:39:10 UTC : : Oracle Database ORCLCDB is up and running on racnode1
11-11-2019 08:39:10 UTC : : #################################################################
11-11-2019 08:39:10 UTC : : Running User Script oracle user
11-11-2019 08:39:13 UTC : : Setting Remote Listener
11-11-2019 08:39:20 UTC : : ####################################
11-11-2019 08:39:20 UTC : : ORACLE RAC DATABASE IS READY TO USE!
11-11-2019 08:39:20 UTC : : ####################################

Note the last three lines, which indicate successful completion.
Log in to the container to confirm that GI and the database are both healthy:

[grid@racnode1 ~]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

[grid@racnode1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     51200    47000                0           47000              0             Y  DATA/

[grid@racnode1 ~]$ export ORACLE_HOME=/u01/app/19.3.0/grid
[grid@racnode1 ~]$ lsnrctl status

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 12-NOV-2019 12:10:31
Copyright (c) 1991, 2019, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                12-NOV-2019 12:04:52
Uptime                    0 days 0 hr. 5 min. 38 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/19.3.0/grid/network/admin/listener.ora
Listener Log File         /u01/app/grid/diag/tnslsnr/racnode1/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.1.150)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.16.1.160)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=racnode1.example.com)(PORT=5500))(Security=(my_wallet_directory=/u01/app/oracle/product/19.3.0/dbhome_1/admin/ORCLCDB/xdb_wallet))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "+ASM_DATA" has 1 instance(s).
  Instance "+ASM1", status READY, has 1 handler(s) for this service...
Service "970f09422a5234d2e053960110ac7965" has 1 instance(s).
  Instance "ORCLCDB1", status READY, has 1 handler(s) for this service...
Service "ORCLCDB" has 1 instance(s).
  Instance "ORCLCDB1", status READY, has 1 handler(s) for this service...
Service "ORCLCDBXDB" has 1 instance(s).
  Instance "ORCLCDB1", status READY, has 1 handler(s) for this service...
Service "orclpdb" has 1 instance(s).
  Instance "ORCLCDB1", status READY, has 1 handler(s) for this service...
The command completed successfully

[grid@racnode1 ~]$ sudo -s
bash-4.2# su - oracle
[oracle@racnode1 ~]$ export ORACLE_HOME=/u01/app/oracle/product/19.3.0/dbhome_1/
[oracle@racnode1 admin]$ cat $ORACLE_HOME/network/admin/tnsnames.ora
# tnsnames.ora Network Configuration File: /u01/app/oracle/product/19.3.0/dbhome_1/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

ORCLCDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = racnode-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORCLCDB)
    )
  )

[oracle@racnode1 admin]$ sqlplus sys/Oracle.123#@ORCLCDB as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Nov 12 12:13:30 2019
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> select instance_name from v$instance;

INSTANCE_NAME
----------------
ORCLCDB1

SQL> select name from v$database;

NAME
---------
ORCLCDB

SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

Add the second RAC node

First create the racnode2 container:

docker create -t -i \
  --hostname racnode2 \
  --volume /dev/shm \
  --tmpfs /dev/shm:rw,exec,size=4G  \
  --volume /boot:/boot:ro \
  --dns-search=example.com  \
  --volume /opt/containers/rac_host_file:/etc/hosts \
  --volume /opt/.secrets:/run/secrets \
  --device=/dev/sdc:/dev/asm_disk1 \
  --privileged=false \
  --cap-add=SYS_NICE \
  --cap-add=SYS_RESOURCE \
  --cap-add=NET_ADMIN \
  -e EXISTING_CLS_NODES=racnode1 \
  -e NODE_VIP=172.16.1.161 \
  -e VIP_HOSTNAME=racnode2-vip \
  -e PRIV_IP=192.168.17.151 \
  -e PRIV_HOSTNAME=racnode2-priv \
  -e PUBLIC_IP=172.16.1.151 \
  -e PUBLIC_HOSTNAME=racnode2 \
  -e DOMAIN=example.com \
  -e SCAN_NAME=racnode-scan \
  -e SCAN_IP=172.16.1.70 \
  -e ASM_DISCOVERY_DIR=/dev \
  -e ASM_DEVICE_LIST=/dev/asm_disk1 \
  -e ORACLE_SID=ORCLCDB \
  -e OP_TYPE=ADDNODE \
  -e COMMON_OS_PWD_FILE=common_os_pwdfile.enc \
  -e PWD_KEY=pwd.key \
  --tmpfs=/run \
  -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
  --cpu-rt-runtime=95000 \
  --ulimit rtprio=99 \
  --restart=always \
  --name racnode2 \
  oracle/database-rac:19.3.0

Attach the networks for the second container:

docker network disconnect bridge racnode2
docker network connect rac_pub1_nw --ip 172.16.1.151 racnode2
docker network connect rac_priv1_nw --ip 192.168.17.151 racnode2

Start the second container:

docker start racnode2

Follow the log:

docker logs -f racnode2

The log looks like this:

[vagrant@ol7-vagrant-rac ~]$ docker logs -f racnode2
PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=racnode2
TERM=xterm EXISTING_CLS_NODES=racnode1 NODE_VIP=172.16.1.161 VIP_HOSTNAME=racnode2-vip PRIV_IP=192.168.17.151 PRIV_HOSTNAME=racnode2-priv PUBLIC_IP=172.16.1.151 PUBLIC_HOSTNAME=racnode2 DOMAIN=example.com SCAN_NAME=racnode-scan SCAN_IP=172.16.1.70 ASM_DISCOVERY_DIR=/dev ASM_DEVICE_LIST=/dev/asm_disk1 ORACLE_SID=ORCLCDB OP_TYPE=ADDNODE COMMON_OS_PWD_FILE=common_os_pwdfile.enc PWD_KEY=pwd.key SETUP_LINUX_FILE=setupLinuxEnv.sh INSTALL_DIR=/opt/scripts GRID_BASE=/u01/app/grid GRID_HOME=/u01/app/19.3.0/grid INSTALL_FILE_1=LINUX.X64_193000_grid_home.zip GRID_INSTALL_RSP=gridsetup_19c.rsp GRID_SW_INSTALL_RSP=grid_sw_install_19c.rsp GRID_SETUP_FILE=setupGrid.sh FIXUP_PREQ_FILE=fixupPreq.sh INSTALL_GRID_BINARIES_FILE=installGridBinaries.sh INSTALL_GRID_PATCH=applyGridPatch.sh INVENTORY=/u01/app/oraInventory CONFIGGRID=configGrid.sh ADDNODE=AddNode.sh DELNODE=DelNode.sh ADDNODE_RSP=grid_addnode.rsp SETUPSSH=setupSSH.expect DOCKERORACLEINIT=dockeroracleinit GRID_USER_HOME=/home/grid SETUPGRIDENV=setupGridEnv.sh RESET_OS_PASSWORD=resetOSPassword.sh MULTI_NODE_INSTALL=MultiNodeInstall.py DB_BASE=/u01/app/oracle DB_HOME=/u01/app/oracle/product/19.3.0/dbhome_1 INSTALL_FILE_2=LINUX.X64_193000_db_home.zip DB_INSTALL_RSP=db_sw_install_19c.rsp DBCA_RSP=dbca_19c.rsp DB_SETUP_FILE=setupDB.sh PWD_FILE=setPassword.sh RUN_FILE=runOracle.sh STOP_FILE=stopOracle.sh ENABLE_RAC_FILE=enableRAC.sh CHECK_DB_FILE=checkDBStatus.sh USER_SCRIPTS_FILE=runUserScripts.sh REMOTE_LISTENER_FILE=remoteListener.sh INSTALL_DB_BINARIES_FILE=installDBBinaries.sh GRID_HOME_CLEANUP=GridHomeCleanup.sh ORACLE_HOME_CLEANUP=OracleHomeCleanup.sh DB_USER=oracle GRID_USER=grid FUNCTIONS=functions.sh COMMON_SCRIPTS=/common_scripts CHECK_SPACE_FILE=checkSpace.sh RESET_FAILED_UNITS=resetFailedUnits.sh SET_CRONTAB=setCrontab.sh CRONTAB_ENTRY=crontabEntry EXPECT=/usr/bin/expect BIN=/usr/sbin container=true INSTALL_SCRIPTS=/opt/scripts/install SCRIPT_DIR=/opt/scripts/startup GRID_PATH=/u01/app/19.3.0/grid/bin:/u01/app/19.3.0/grid/OPatch/:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin DB_PATH=/u01/app/oracle/product/19.3.0/dbhome_1/bin:/u01/app/oracle/product/19.3.0/dbhome_1/OPatch/:/usr/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin GRID_LD_LIBRARY_PATH=/u01/app/19.3.0/grid/lib:/usr/lib:/lib DB_LD_LIBRARY_PATH=/u01/app/oracle/product/19.3.0/dbhome_1/lib:/usr/lib:/lib HOME=/home/grid Failed to parse kernel command line, ignoring: No such file or directory systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN) Detected virtualization other. Detected architecture x86-64. Welcome to Oracle Linux Server 7.6! Set hostname to <racnode2>. Failed to parse kernel command line, ignoring: No such file or directory Failed to parse kernel command line, ignoring: No such file or directory /usr/lib/systemd/system-generators/systemd-fstab-generator failed with error code 1. Failed to parse kernel command line, ignoring: No such file or directory Binding to IPv6 address not available since kernel does not support IPv6. Binding to IPv6 address not available since kernel does not support IPv6. Cannot add dependency job for unit display-manager.service, ignoring: Unit not found. [ OK ] Created slice Root Slice. [ OK ] Listening on Journal Socket. [ OK ] Reached target Swap. [ OK ] Created slice System Slice. Starting Journal Service... Starting Read and set NIS domainname from /etc/sysconfig/network... 
[ OK ] Listening on Delayed Shutdown Socket.
[ OK ] Reached target Local Encrypted Volumes.
[ OK ] Reached target RPC Port Mapper.
[ OK ] Started Forward Password Requests to Wall Directory Watch.
[ OK ] Created slice system-getty.slice.
[ OK ] Created slice User and Session Slice.
[ OK ] Reached target Slices.
[ OK ] Started Dispatch Password Requests to Console Directory Watch.
[ OK ] Listening on /dev/initctl Compatibility Named Pipe.
Starting Configure read-only root support...
Starting Rebuild Hardware Database...
[ OK ] Reached target Local File Systems (Pre).
Couldn't determine result for ConditionKernelCommandLine=|rd.modules-load for systemd-modules-load.service, assuming failed: No such file or directory
Couldn't determine result for ConditionKernelCommandLine=|modules-load for systemd-modules-load.service, assuming failed: No such file or directory
[ OK ] Started Journal Service.
[ OK ] Started Read and set NIS domainname from /etc/sysconfig/network.
Starting Flush Journal to Persistent Storage...
[ OK ] Started Configure read-only root support.
[ OK ] Reached target Local File Systems.
Starting Rebuild Journal Catalog...
Starting Preprocess NFS configuration...
Starting Mark the need to relabel after reboot...
Starting Load/Save Random Seed...
[ OK ] Started Mark the need to relabel after reboot.
[ OK ] Started Preprocess NFS configuration.
[ OK ] Started Flush Journal to Persistent Storage.
Starting Create Volatile Files and Directories...
[ OK ] Started Create Volatile Files and Directories.
Mounting RPC Pipe File System...
Starting Update UTMP about System Boot/Shutdown...
[FAILED] Failed to mount RPC Pipe File System.
See 'systemctl status var-lib-nfs-rpc_pipefs.mount' for details.
[DEPEND] Dependency failed for rpc_pipefs.target.
[DEPEND] Dependency failed for RPC security service for NFS client and server.
[ OK ] Started Update UTMP about System Boot/Shutdown.
[ OK ] Started Load/Save Random Seed.
[ OK ] Started Rebuild Journal Catalog.
[ OK ] Started Rebuild Hardware Database.
Starting Update is Completed...
[ OK ] Started Update is Completed.
[ OK ] Reached target System Initialization.
[ OK ] Listening on D-Bus System Message Bus Socket.
[ OK ] Started Flexible branding.
[ OK ] Reached target Paths.
[ OK ] Started Daily Cleanup of Temporary Directories.
[ OK ] Reached target Timers.
[ OK ] Listening on RPCbind Server Activation Socket.
[ OK ] Reached target Sockets.
Starting RPC bind service...
[ OK ] Reached target Basic System.
Starting OpenSSH Server Key Generation...
Starting LSB: Bring up/down networking...
Starting GSSAPI Proxy Daemon...
[ OK ] Started D-Bus System Message Bus.
Starting Resets System Activity Logs...
Starting Self Monitoring and Reporting Technology (SMART) Daemon...
Starting Login Service...
[ OK ] Started RPC bind service.
Starting Cleanup of Temporary Directories...
[ OK ] Started GSSAPI Proxy Daemon.
[ OK ] Reached target NFS client services.
[ OK ] Reached target Remote File Systems (Pre).
[ OK ] Reached target Remote File Systems.
Starting Permit User Sessions...
[ OK ] Started Permit User Sessions.
[ OK ] Started Command Scheduler.
[ OK ] Started Resets System Activity Logs.
[ OK ] Started Login Service.
[ OK ] Started Cleanup of Temporary Directories.
[ OK ] Started OpenSSH Server Key Generation.
[ OK ] Started Self Monitoring and Reporting Technology (SMART) Daemon.
[ OK ] Started LSB: Bring up/down networking.
[ OK ] Reached target Network.
Starting /etc/rc.d/rc.local Compatibility...
Starting OpenSSH server daemon...
[ OK ] Reached target Network is Online.
Starting Notify NFS peers of a restart...
[ OK ] Started /etc/rc.d/rc.local Compatibility.
[ OK ] Started Console Getty.
[ OK ] Reached target Login Prompts.
[ OK ] Started Notify NFS peers of a restart.
[ OK ] Started OpenSSH server daemon.
[ OK ] Reached target Multi-User System.
[ OK ] Reached target Graphical Interface.
Starting Update UTMP about System Runlevel Changes...
[ OK ] Started Update UTMP about System Runlevel Changes.
11-12-2019 12:21:28 UTC : : Process id of the program :
11-12-2019 12:21:28 UTC : : #################################################
11-12-2019 12:21:28 UTC : : Starting Grid Installation
11-12-2019 12:21:28 UTC : : #################################################
11-12-2019 12:21:28 UTC : : Pre-Grid Setup steps are in process
11-12-2019 12:21:28 UTC : : Process id of the program :
11-12-2019 12:21:28 UTC : : Disable failed service var-lib-nfs-rpc_pipefs.mount
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
11-12-2019 12:21:28 UTC : : Resetting Failed Services
11-12-2019 12:21:28 UTC : : Sleeping for 60 seconds

Oracle Linux Server 7.6
Kernel 4.14.35-1902.6.6.el7uek.x86_64 on an x86_64

racnode2 login: 11-12-2019 12:22:28 UTC : : Systemctl state is running!
11-12-2019 12:22:28 UTC : : Setting correct permissions for /bin/ping
11-12-2019 12:22:28 UTC : : Public IP is set to 172.16.1.151
11-12-2019 12:22:28 UTC : : RAC Node PUBLIC Hostname is set to racnode2
11-12-2019 12:22:28 UTC : : Preparing host line for racnode2
11-12-2019 12:22:28 UTC : : Adding \n172.16.1.151\tracnode2.example.com\tracnode2 to /etc/hosts
11-12-2019 12:22:28 UTC : : Preparing host line for racnode2-priv
11-12-2019 12:22:28 UTC : : Adding \n192.168.17.151\tracnode2-priv.example.com\tracnode2-priv to /etc/hosts
11-12-2019 12:22:28 UTC : : Preparing host line for racnode2-vip
11-12-2019 12:22:28 UTC : : Adding \n172.16.1.161\tracnode2-vip.example.com\tracnode2-vip to /etc/hosts
11-12-2019 12:22:28 UTC : : racnode-scan already exists : 172.16.1.70 racnode-scan.example.com racnode-scan, no update required
11-12-2019 12:22:28 UTC : : Preapring Device list
11-12-2019 12:22:28 UTC : : Changing Disk permission and ownership /dev/asm_disk1
11-12-2019 12:22:28 UTC : : DNS_SERVERS is set to empty. /etc/resolv.conf will use default dns docker embedded server.
11-12-2019 12:22:28 UTC : : #####################################################################
11-12-2019 12:22:28 UTC : : RAC setup will begin in 2 minutes
11-12-2019 12:22:28 UTC : : ####################################################################
11-12-2019 12:22:30 UTC : : ###################################################
11-12-2019 12:22:30 UTC : : Pre-Grid Setup steps completed
11-12-2019 12:22:30 UTC : : ###################################################
11-12-2019 12:22:30 UTC : : Checking if grid is already configured
11-12-2019 12:22:31 UTC : : Public IP is set to 172.16.1.151
11-12-2019 12:22:31 UTC : : RAC Node PUBLIC Hostname is set to racnode2
11-12-2019 12:22:31 UTC : : Domain is defined to example.com
11-12-2019 12:22:31 UTC : : Setting Existing Cluster Node for node addition operation. This will be retrieved from racnode1
11-12-2019 12:22:31 UTC : : Existing Node Name of the cluster is set to racnode1
11-12-2019 12:22:31 UTC : : 172.16.1.150
11-12-2019 12:22:31 UTC : : Existing Cluster node resolved to IP. Check passed
11-12-2019 12:22:31 UTC : : Default setting of AUTO GNS VIP set to false. If you want to use AUTO GNS VIP, please pass DHCP_CONF as an env parameter set to true
11-12-2019 12:22:31 UTC : : RAC VIP set to 172.16.1.161
11-12-2019 12:22:31 UTC : : RAC Node VIP hostname is set to racnode2-vip
11-12-2019 12:22:31 UTC : : SCAN_NAME name is racnode-scan
11-12-2019 12:22:31 UTC : : 172.16.1.70
11-12-2019 12:22:31 UTC : : SCAN Name resolving to IP. Check Passed!
11-12-2019 12:22:31 UTC : : SCAN_IP name is 172.16.1.70
11-12-2019 12:22:31 UTC : : RAC Node PRIV IP is set to 192.168.17.151
11-12-2019 12:22:31 UTC : : RAC Node private hostname is set to racnode2-priv
11-12-2019 12:22:31 UTC : : CMAN_NAME set to the empty string
11-12-2019 12:22:31 UTC : : CMAN_IP set to the empty string
11-12-2019 12:22:31 UTC : : Password file generated
11-12-2019 12:22:31 UTC : : Common OS Password string is set for Grid user
11-12-2019 12:22:31 UTC : : Common OS Password string is set for Oracle user
11-12-2019 12:22:31 UTC : : GRID_RESPONSE_FILE env variable set to empty. AddNode.sh will use standard cluster responsefile
11-12-2019 12:22:31 UTC : : Location for User script SCRIPT_ROOT set to /common_scripts
11-12-2019 12:22:31 UTC : : ORACLE_SID is set to ORCLCDB
11-12-2019 12:22:31 UTC : : Setting random password for root/grid/oracle user
11-12-2019 12:22:31 UTC : : Setting random password for grid user
11-12-2019 12:22:32 UTC : : Setting random password for oracle user
11-12-2019 12:22:32 UTC : : Setting random password for root user
11-12-2019 12:22:32 UTC : : Cluster Nodes are racnode1 racnode2
11-12-2019 12:22:32 UTC : : Running SSH setup for grid user between nodes racnode1 racnode2
11-12-2019 12:22:45 UTC : : Running SSH setup for oracle user between nodes racnode1 racnode2
11-12-2019 12:23:00 UTC : : SSH check fine for the racnode1
11-12-2019 12:23:00 UTC : : SSH check fine for the racnode2
11-12-2019 12:23:00 UTC : : SSH check fine for the racnode2
11-12-2019 12:23:00 UTC : : SSH check fine for the oracle@racnode1
11-12-2019 12:23:01 UTC : : SSH check fine for the oracle@racnode2
11-12-2019 12:23:01 UTC : : SSH check fine for the oracle@racnode2
11-12-2019 12:23:01 UTC : : Setting Device permission to grid and asmadmin on all the cluster nodes
11-12-2019 12:23:01 UTC : : Nodes in the cluster racnode2
11-12-2019 12:23:01 UTC : : Setting Device permissions for RAC Install on racnode2
11-12-2019 12:23:01 UTC : : Preapring ASM Device list
11-12-2019 12:23:01 UTC : : Changing Disk permission and ownership
11-12-2019 12:23:01 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chown $GRID_USER:asmadmin $device" execute on racnode2
11-12-2019 12:23:01 UTC : : Command : su - $GRID_USER -c "ssh $node sudo chmod 660 $device" execute on racnode2
11-12-2019 12:23:01 UTC : : Populate Rac Env Vars on Remote Hosts
11-12-2019 12:23:01 UTC : : Command : su - $GRID_USER -c "ssh $node sudo echo \"export ASM_DEVICE_LIST=${ASM_DEVICE_LIST}\" >> /etc/rac_env_vars" execute on racnode2
11-12-2019 12:23:02 UTC : : Checking Cluster Status on racnode1
11-12-2019 12:23:02 UTC : : Checking Cluster
11-12-2019 12:23:02 UTC : : Cluster Check on remote node passed
11-12-2019 12:23:02 UTC : : Cluster Check went fine
11-12-2019 12:23:03 UTC : : CRSD Check went fine
11-12-2019 12:23:03 UTC : : CSSD Check went fine
11-12-2019 12:23:04 UTC : : EVMD Check went fine
11-12-2019 12:23:04 UTC : : Generating Responsefile for node addition
11-12-2019 12:23:04 UTC : : Clustered Nodes are set to racnode2:racnode2-vip:HUB
11-12-2019 12:23:04 UTC : : Running Cluster verification utility for new node racnode2 on racnode1
11-12-2019 12:23:04 UTC : : Nodes in the cluster racnode2
11-12-2019 12:23:04 UTC : : ssh to the node racnode1 and executing cvu checks on racnode2
11-12-2019 12:24:33 UTC : : Checking /tmp/cluvfy_check.txt if there is any failed check.

Verifying Physical Memory ...PASSED
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...FAILED (PRVF-7573)
Verifying Free Space: racnode2:/usr,racnode2:/var,racnode2:/etc,racnode2:/u01/app/19.3.0/grid,racnode2:/sbin,racnode2:/tmp ...PASSED
Verifying Free Space: racnode1:/usr,racnode1:/var,racnode1:/etc,racnode1:/u01/app/19.3.0/grid,racnode1:/sbin,racnode1:/tmp ...PASSED
Verifying User Existence: oracle ...
  Verifying Users With Same UID: 54321 ...PASSED
Verifying User Existence: oracle ...PASSED
Verifying User Existence: grid ...
  Verifying Users With Same UID: 54332 ...PASSED
Verifying User Existence: grid ...PASSED
Verifying User Existence: root ...
  Verifying Users With Same UID: 0 ...PASSED
Verifying User Existence: root ...PASSED
Verifying Group Existence: asmadmin ...PASSED
Verifying Group Existence: asmoper ...PASSED
Verifying Group Existence: asmdba ...PASSED
Verifying Group Existence: oinstall ...PASSED
Verifying Group Membership: oinstall ...PASSED
Verifying Group Membership: asmdba ...PASSED
Verifying Group Membership: asmadmin ...PASSED
Verifying Group Membership: asmoper ...PASSED
Verifying Run Level ...PASSED
Verifying Hard Limit: maximum open file descriptors ...PASSED
Verifying Soft Limit: maximum open file descriptors ...PASSED
Verifying Hard Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum stack size ...PASSED
Verifying Architecture ...PASSED
Verifying OS Kernel Version ...PASSED
Verifying OS Kernel Parameter: semmsl ...PASSED
Verifying OS Kernel Parameter: semmns ...PASSED
Verifying OS Kernel Parameter: semopm ...PASSED
Verifying OS Kernel Parameter: semmni ...PASSED
Verifying OS Kernel Parameter: shmmax ...PASSED
Verifying OS Kernel Parameter: shmmni ...PASSED
Verifying OS Kernel Parameter: shmall ...FAILED (PRVG-1201)
Verifying OS Kernel Parameter: file-max ...PASSED
Verifying OS Kernel Parameter: aio-max-nr ...FAILED (PRVG-1205)
Verifying OS Kernel Parameter: panic_on_oops ...PASSED
Verifying Package: kmod-20-21 (x86_64) ...PASSED
Verifying Package: kmod-libs-20-21 (x86_64) ...PASSED
Verifying Package: binutils-2.23.52.0.1 ...PASSED
Verifying Package: compat-libcap1-1.10 ...PASSED
Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED
Verifying Package: sysstat-10.1.5 ...PASSED
Verifying Package: ksh ...PASSED
Verifying Package: make-3.82 ...PASSED
Verifying Package: glibc-2.17 (x86_64) ...PASSED
Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED
Verifying Package: libaio-0.3.109 (x86_64) ...PASSED
Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED
Verifying Package: nfs-utils-1.2.3-15 ...PASSED
Verifying Package: smartmontools-6.2-4 ...PASSED
Verifying Package: net-tools-2.0-0.17 ...PASSED
Verifying Users With Same UID: 0 ...PASSED
Verifying Current Group ID ...PASSED
Verifying Root user consistency ...PASSED
Verifying Node Addition ...
  Verifying CRS Integrity ...PASSED
  Verifying Clusterware Version Consistency ...PASSED
  Verifying '/u01/app/19.3.0/grid' ...PASSED
Verifying Node Addition ...PASSED
Verifying Host name ...PASSED
Verifying Node Connectivity ...
  Verifying Hosts File ...PASSED
  Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
  Verifying subnet mask consistency for subnet "172.16.1.0" ...PASSED
  Verifying subnet mask consistency for subnet "192.168.17.0" ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Multicast or broadcast check ...PASSED
Verifying ASM Integrity ...PASSED
Verifying Device Checks for ASM ...
  Verifying Access Control List check ...PASSED
Verifying Device Checks for ASM ...PASSED
Verifying Database home availability ...PASSED
Verifying OCR Integrity ...PASSED
Verifying Time zone consistency ...PASSED
Verifying Network Time Protocol (NTP) ...
  Verifying '/etc/ntp.conf' ...PASSED
  Verifying '/var/run/ntpd.pid' ...PASSED
  Verifying '/var/run/chronyd.pid' ...PASSED
Verifying Network Time Protocol (NTP) ...FAILED (PRVG-1017)
Verifying User Not In Group "root": grid ...PASSED
Verifying Time offset between nodes ...PASSED
Verifying resolv.conf Integrity ...FAILED (PRVG-10048)
Verifying DNS/NIS name service ...PASSED
Verifying User Equivalence ...PASSED
Verifying /dev/shm mounted as temporary file system ...PASSED
Verifying /boot mount ...PASSED
Verifying zeroconf check ...PASSED

Pre-check for node addition was unsuccessful on all the nodes.

Failures were encountered during execution of CVU verification request "stage -pre nodeadd".

Verifying Swap Size ...FAILED
racnode2: PRVF-7573 : Sufficient swap size is not available on node "racnode2" [Required = 9.497GB (9958344.0KB) ; Found = 4GB (4194300.0KB)]
racnode1: PRVF-7573 : Sufficient swap size is not available on node "racnode1" [Required = 9.497GB (9958344.0KB) ; Found = 4GB (4194300.0KB)]

Verifying OS Kernel Parameter: shmall ...FAILED
racnode2: PRVG-1201 : OS kernel parameter "shmall" does not have expected configured value on node "racnode2" [Expected = "2251799813685247" ; Current = "18446744073692774000"; Configured = "1073741824"].
racnode1: PRVG-1201 : OS kernel parameter "shmall" does not have expected configured value on node "racnode1" [Expected = "2251799813685247" ; Current = "18446744073692774000"; Configured = "1073741824"].

Verifying OS Kernel Parameter: aio-max-nr ...FAILED
racnode2: PRVG-1205 : OS kernel parameter "aio-max-nr" does not have expected current value on node "racnode2" [Expected = "1048576" ; Current = "65536"; Configured = "1048576"].
racnode1: PRVG-1205 : OS kernel parameter "aio-max-nr" does not have expected current value on node "racnode1" [Expected = "1048576" ; Current = "65536"; Configured = "1048576"].

Verifying Network Time Protocol (NTP) ...FAILED
racnode2: PRVG-1017 : NTP configuration file "/etc/ntp.conf" is present on nodes "racnode2,racnode1" on which NTP daemon or service was not running
racnode1: PRVG-1017 : NTP configuration file "/etc/ntp.conf" is present on nodes "racnode2,racnode1" on which NTP daemon or service was not running

Verifying resolv.conf Integrity ...FAILED
racnode2: PRVG-10048 : Name "racnode2" was not resolved to an address of the specified type by name servers "127.0.0.11".
racnode1: PRVG-10048 : Name "racnode1" was not resolved to an address of the specified type by name servers "127.0.0.11".
CVU operation performed: stage -pre nodeadd
Date: Nov 12, 2019 12:23:09 PM
CVU home: /u01/app/19.3.0/grid/
User: grid
11-12-2019 12:24:33 UTC : : CVU Checks are ignored as IGNORE_CVU_CHECKS set to true. It is recommended to set IGNORE_CVU_CHECKS to false and meet all the cvu checks requirement. RAC installation might fail, if there are failed cvu checks.
11-12-2019 12:24:33 UTC : : Running Node Addition and cluvfy test for node racnode2
11-12-2019 12:24:33 UTC : : Copying /tmp/grid_addnode.rsp on remote node racnode1
11-12-2019 12:24:33 UTC : : Running GridSetup.sh on racnode1 to add the node to existing cluster
11-12-2019 12:26:08 UTC : : Node Addition performed. removing Responsefile
11-12-2019 12:26:08 UTC : : Running root.sh on node racnode2
11-12-2019 12:26:08 UTC : : Nodes in the cluster racnode2
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
Failed to parse kernel command line, ignoring: No such file or directory
11-12-2019 12:36:37 UTC : : Checking Cluster
11-12-2019 12:36:38 UTC : : Cluster Check passed
11-12-2019 12:36:38 UTC : : Cluster Check went fine
11-12-2019 12:36:39 UTC : : CRSD Check went fine
11-12-2019 12:36:39 UTC : : CSSD Check went fine
11-12-2019 12:36:39 UTC : : EVMD Check went fine
11-12-2019 12:36:39 UTC : : Removing /tmp/cluvfy_check.txt as cluster check has passed
11-12-2019 12:36:39 UTC : : Checking Cluster Class
11-12-2019 12:36:39 UTC : : Checking Cluster Class
11-12-2019 12:36:40 UTC : : Cluster class is CRS-41008: Cluster class is 'Standalone Cluster'
11-12-2019 12:36:40 UTC : : Running User Script for grid user
11-12-2019 12:36:40 UTC : : Performing DB Node addition
11-12-2019 12:38:05 UTC : : Node Addition went fine for racnode2
11-12-2019 12:38:05 UTC : : Running root.sh
11-12-2019 12:38:05 UTC : : Nodes in the cluster racnode2
11-12-2019 12:38:09 UTC : : Adding DB Instance
11-12-2019 12:38:09 UTC : : Adding DB Instance on racnode1
11-12-2019 12:45:49 UTC : : Checking DB status
11-12-2019 12:46:51 UTC : : ORCLCDB is not up and running on racnode2
11-12-2019 12:46:52 UTC : : Error has occurred in Grid Setup, Please verify!
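
The cluvfy pre-check flags swap size, shmall, aio-max-nr, NTP, and resolv.conf, but the setup ignores them because IGNORE_CVU_CHECKS is true. Since the containers share the host VM's kernel, the two kernel-parameter findings could be cleared on the host beforehand. A minimal sketch, run as root on the Vagrant VM; the values are lifted from the CVU output above and are illustrative, not authoritative:

sudo sysctl -w fs.aio-max-nr=1048576      # CVU expected 1048576, current was 65536
sudo sysctl -w kernel.shmall=1073741824   # pick a value suited to your RAM; CVU compares against its own expectation
# persist the settings across reboots
echo "fs.aio-max-nr=1048576"    | sudo tee -a /etc/sysctl.conf
echo "kernel.shmall=1073741824" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p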

From the log we can see that the database instance did not come up on racnode2, most likely due to resource constraints.
Restart the racnode2 container, after which everything is back to normal:

docker stop racnode2
docker start racnode2
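
To watch the node come back up, the setup output can be followed from the host; this is just the standard Docker CLI, nothing project-specific:

docker logs -f racnode2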

Check the logs again; the last part of the output is shown below:

11-12-2019 13:06:54 UTC :  : Setting correct permissions for /bin/ping
11-12-2019 13:06:54 UTC : : Public IP is set to 172.16.1.151
11-12-2019 13:06:54 UTC : : RAC Node PUBLIC Hostname is set to racnode2
11-12-2019 13:06:54 UTC : : racnode2 already exists : 172.16.1.151 racnode2.example.com racnode2 192.168.17.151 racnode2-priv.example.com racnode2-priv 172.16.1.161 racnode2-vip.example.com racnode2-vip, no update required
11-12-2019 13:06:54 UTC : : racnode2-priv already exists : 192.168.17.151 racnode2-priv.example.com racnode2-priv, no update required
11-12-2019 13:06:54 UTC : : racnode2-vip already exists : 172.16.1.161 racnode2-vip.example.com racnode2-vip, no update required
11-12-2019 13:06:54 UTC : : racnode-scan already exists : 172.16.1.70 racnode-scan.example.com racnode-scan, no update required
11-12-2019 13:06:54 UTC : : Preapring Device list
11-12-2019 13:06:54 UTC : : Changing Disk permission and ownership /dev/asm_disk1
11-12-2019 13:06:54 UTC : : DNS_SERVERS is set to empty. /etc/resolv.conf will use default dns docker embedded server.
11-12-2019 13:06:54 UTC : : #####################################################################
11-12-2019 13:06:54 UTC : : RAC setup will begin in 2 minutes
11-12-2019 13:06:54 UTC : : ####################################################################
11-12-2019 13:06:56 UTC : : ###################################################
11-12-2019 13:06:56 UTC : : Pre-Grid Setup steps completed
11-12-2019 13:06:56 UTC : : ###################################################
11-12-2019 13:06:56 UTC : : Checking if grid is already configured
11-12-2019 13:06:56 UTC : : Grid is installed on racnode2. runOracle.sh will start the Grid service
11-12-2019 13:06:56 UTC : : Setting up Grid Env for Grid Start
11-12-2019 13:06:56 UTC : : ##########################################################################################
11-12-2019 13:06:56 UTC : : Grid is already installed on this container! Grid will be started by default ohasd scripts
11-12-2019 13:06:56 UTC : : ############################################################################################

Check the cluster resources; everything is now normal. (The trailing OFFLINE entries on the ora.asmgroup resources are the unused third ASM instance slot, since Flex ASM defaults to a cardinality of 3 and this cluster has only two nodes.)

[grid@racnode2 ~]$ crsctl status resource
NAME=ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
TYPE=ora.asm_listener.type
TARGET=ONLINE , ONLINE , ONLINE
STATE=ONLINE on racnode1, ONLINE on racnode2, OFFLINE

NAME=ora.DATA.dg(ora.asmgroup)
TYPE=ora.diskgroup.type
TARGET=ONLINE , ONLINE , OFFLINE
STATE=ONLINE on racnode1, ONLINE on racnode2, OFFLINE

NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=ONLINE , ONLINE
STATE=ONLINE on racnode1, ONLINE on racnode2

NAME=ora.LISTENER_SCAN1.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on racnode1

NAME=ora.asm(ora.asmgroup)
TYPE=ora.asm.type
TARGET=ONLINE , ONLINE , OFFLINE
STATE=ONLINE on racnode1, ONLINE on racnode2, OFFLINE

NAME=ora.asmnet1.asmnetwork(ora.asmgroup)
TYPE=ora.asm_network.type
TARGET=ONLINE , ONLINE , OFFLINE
STATE=ONLINE on racnode1, ONLINE on racnode2, OFFLINE

NAME=ora.chad
TYPE=ora.chad.type
TARGET=ONLINE , ONLINE
STATE=ONLINE on racnode1, ONLINE on racnode2

NAME=ora.cvu
TYPE=ora.cvu.type
TARGET=ONLINE
STATE=ONLINE on racnode1

NAME=ora.net1.network
TYPE=ora.network.type
TARGET=ONLINE , ONLINE
STATE=ONLINE on racnode1, ONLINE on racnode2

NAME=ora.ons
TYPE=ora.ons.type
TARGET=ONLINE , ONLINE
STATE=ONLINE on racnode1, ONLINE on racnode2

NAME=ora.orclcdb.db
TYPE=ora.database.type
TARGET=ONLINE , ONLINE
STATE=ONLINE on racnode1, ONLINE on racnode2

NAME=ora.qosmserver
TYPE=ora.qosmserver.type
TARGET=ONLINE
STATE=ONLINE on racnode1

NAME=ora.racnode1.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on racnode1

NAME=ora.racnode2.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on racnode2

NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on racnode1
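
As a follow-up sanity check, the database and SCAN can also be queried with srvctl. This is an assumed quick check (run with the 19c environment set), not part of the original log:

[oracle@racnode2 ~]$ srvctl status database -d ORCLCDB
[grid@racnode2 ~]$ srvctl config scan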

Disk usage at this point:

[vagrant@ol7-vagrant-rac ~]$ df -h
Filesystem                   Size  Used Avail Use% Mounted on
devtmpfs                     4.8G     0  4.8G   0% /dev
tmpfs                        4.8G     0  4.8G   0% /dev/shm
tmpfs                        4.8G  8.6M  4.8G   1% /run
tmpfs                        4.8G     0  4.8G   0% /sys/fs/cgroup
/dev/mapper/vg_main-lv_root   76G   36G   41G  47% /
/dev/sda1                    497M  125M  373M  26% /boot
vagrant                      1.9T  1.2T  687G  64% /vagrant
tmpfs                        973M     0  973M   0% /run/user/1000

Memory usage at this point:

[vagrant@ol7-vagrant-rac ~]$ free
              total        used        free      shared  buff/cache   available
Mem:        9958344     4261636      428572     4124668     5268136     1040228
Swap:       4194300     2250268     1944032
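
Note that swap is already more than half used, which is consistent with the earlier PRVF-7573 finding (4GB of swap against a computed requirement of roughly 9.5GB). If the host VM needs more swap, one conventional approach is a swap file; a sketch with a hypothetical 6G sizing:

sudo fallocate -l 6G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile swap swap defaults 0 0' | sudo tee -a /etc/fstab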

References

  1. https://github.com/oracle/docker-images/blob/master/OracleDatabase/RAC/OracleRealApplicationClusters/README.md
  2. https://stackoverflow.com/questions/49822594/vagrant-how-to-specify-the-disk-size
  3. https://asanga-pradeep.blogspot.com/2018/07/rac-on-docker-single-host-setup.html
  4. https://marcbrandner.com/blog/increasing-disk-space-of-a-linux-based-vagrant-box-on-provisioning/
  5. https://stackoverflow.com/questions/27380641/see-full-command-of-running-stopped-container-in-docker
