RAC+DG: Database Installation (Part 4)

This chapter covers installing and configuring the Database software.

Note: the main issue in this step is that the installer may fail to find the ASM disk groups. Don't panic; working through it step by step will sort it out.

  1. Install the database software

Installation log: tail -f /u01/app/oraInventory/logs/installActions2014-06-05_01-30-25AM.log

Unzip the installation files:

[oracle@localhost ~]$ ll linux*

-rw-r--r-- 1 oracle oinstall 1239269270 Apr 18 20:44 linux.x64_11gR2_database_1of2.zip

-rw-r--r-- 1 oracle oinstall 1111416131 Apr 18 20:47 linux.x64_11gR2_database_2of2.zip


[oracle@localhost ~]$ unzip linux.x64_11gR2_database_1of2.zip && unzip linux.x64_11gR2_database_2of2.zip
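Before launching the installer it does no harm to confirm that the archives extracted cleanly into the database directory and that there is enough free space under /u01; a minimal check (exact space requirements depend on your environment):

[oracle@localhost ~]$ ls database/runInstaller        # the installer script should now exist

[oracle@localhost ~]$ df -h /u01                      # the database home needs several GB of free space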

Install as the oracle user on rac1:

[oracle@rac1 database]$ export DISPLAY=192.168.1.100:0.0

[oracle@rac1 database]$ xhost +

access control disabled, clients can connect from any host

[oracle@rac1 database]$ ./runInstaller

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB. Actual 10840 MB Passed

Checking swap space: must be greater than 150 MB. Actual 1599 MB Passed

Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-06-05_01-30-25AM. Please wait ...
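If no X display is available, the same software installation can in principle be driven in silent mode with a response file instead; a rough sketch, assuming the db_install.rsp template shipped under database/response/ is copied and filled in (double-check the file name and options against your installation media):

[oracle@rac1 database]$ cp response/db_install.rsp /home/oracle/db_install.rsp

[oracle@rac1 database]$ vi /home/oracle/db_install.rsp    # set the install type, ORACLE_HOME, cluster node list, etc.

[oracle@rac1 database]$ ./runInstaller -silent -responseFile /home/oracle/db_install.rsp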

Install the Oracle Database software by stepping through the graphical installer screens:

You may hit error INS-35354 here: "The system on which you are attempting to install Oracle RAC is not part of a valid cluster."

Workaround:

Edit the file: vi /u01/app/oraInventory/ContentsXML/inventory.xml

Change: <HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" >

to: <HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true">
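The same change can be made non-interactively; a small sketch using sed, run as the Grid Infrastructure owner (grid in this setup) and assuming the grid home entry is named Ora11g_gridinfrahome1 as shown above (back up the file first):

[grid@rac1 ~]$ cd /u01/app/oraInventory/ContentsXML

[grid@rac1 ContentsXML]$ cp inventory.xml inventory.xml.bak

[grid@rac1 ContentsXML]$ sed -i '/Ora11g_gridinfrahome1/s/IDX="1"/IDX="1" CRS="true"/' inventory.xml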

The installer is also slow at 56%, while the RMAN tools are being installed; just wait it out.

  2. The installer is slow at 94%

At this stage the software is being copied to rac2, so you can check the size of the Oracle home there to confirm the installer is not hung:
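A simple way to watch the copy progress, assuming the default home location used in this install, is to re-run du on rac2 and confirm the size keeps increasing:

[oracle@rac2 ~]$ du -sh /u01/app/oracle/product/11.2.0/dbhome_1

[oracle@rac2 ~]$ watch -n 30 du -sh /u01/app/oracle/product/11.2.0/dbhome_1    # or poll it every 30 seconds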

  3. Run the root scripts

Run the scripts shown by the installer as root on each of the two nodes, then click OK.

[root@rac1 app]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh

Running Oracle 11g root.sh script...

The following environment variables are set as:

ORACLE_OWNER= oracle

ORACLE_HOME= /u01/app/oracle/product/11.2.0/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)

[n]: y

Copying dbhome to /usr/local/bin ...

The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)

[n]: y

Copying oraenv to /usr/local/bin ...

The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)

[n]: y

Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

Finished product-specific root actions.

[root@rac1 app]#

Run on node 2:

[root@rac2 ~]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh

Running Oracle 11g root.sh script...

The following environment variables are set as:

ORACLE_OWNER= oracle

ORACLE_HOME= /u01/app/oracle/product/11.2.0/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)

[n]: y

Copying dbhome to /usr/local/bin ...

The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)

[n]: y

Copying oraenv to /usr/local/bin ...

The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)

[n]: y

Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

Finished product-specific root actions.

[root@rac2 11.2.0]# /u01/app/oraInventory/orainstRoot.sh

Creating the Oracle inventory pointer file (/etc/oraInst.loc)

Changing permissions of /u01/app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.

The execution of the script is complete.

[root@rac2 11.2.0]# /u01/app/oracle/product/11.2.0/dbhome_1/root.sh

Running Oracle 11g root.sh script...

The following environment variables are set as:

ORACLE_OWNER= oracle

ORACLE_HOME= /u01/app/oracle/product/11.2.0/dbhome_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n)

[n]: y

Copying dbhome to /usr/local/bin ...

The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)

[n]: y

Copying oraenv to /usr/local/bin ...

The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)

[n]: y

Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

Finished product-specific root actions.

[root@rac2 11.2.0]#

The database software installation is now complete; click Close to exit.

With the software installed, you can test SQL*Plus on both nodes (connecting to an idle instance is expected, since no database has been created yet):

[oracle@rac1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Wed Oct 1 22:42:56 2014

Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected to an idle instance.

SQL>

  4. Create the database with DBCA

Run DBCA as the oracle user on node 1:

Note: both nodes must be selected in this step:

Enterprise Manager (EM) can be left unconfigured in this step; it consumes a lot of resources.

For the Fast Recovery Area, select the FRA disk group:
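Once DBCA finishes, the Fast Recovery Area settings can be confirmed from SQL*Plus; a quick check (the parameter and view are standard, the values simply reflect what was chosen in the GUI):

SQL> show parameter db_recovery_file_dest

SQL> select name, space_limit/1024/1024 as limit_mb, space_used/1024/1024 as used_mb from v$recovery_file_dest;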

  5. Log location

You can follow the DBCA database-creation log under /u01/app/oracle/cfgtoollogs/dbca/racdb:

tail -f /u01/app/oracle/cfgtoollogs/dbca/racdb/trace.log

  6. Verification

Verify that the clustered database is up:

[grid@node1 ~]$ crsctl stat res -t

--------------------------------------------------------------------------------

NAME TARGET STATE SERVER STATE_DETAILS

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.CRS.dg

ONLINE ONLINE node1

ONLINE ONLINE node2

ora.DATADG.dg

ONLINE ONLINE node1

ONLINE ONLINE node2

ora.FRADG.dg

ONLINE ONLINE node1

ONLINE ONLINE node2

ora.LISTENER.lsnr

ONLINE ONLINE node1

ONLINE ONLINE node2

ora.asm

ONLINE ONLINE node1 Started

ONLINE ONLINE node2 Started

ora.gsd

OFFLINE OFFLINE node1

OFFLINE OFFLINE node2

ora.net1.network

ONLINE ONLINE node1

ONLINE ONLINE node2

ora.ons

ONLINE ONLINE node1

ONLINE ONLINE node2

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

1 ONLINE ONLINE node2

ora.LISTENER_SCAN2.lsnr

1 ONLINE ONLINE node1

ora.LISTENER_SCAN3.lsnr

1 ONLINE ONLINE node1

ora.cvu

1 ONLINE ONLINE node1

ora.node1.vip

1 ONLINE ONLINE node1

ora.node2.vip

1 ONLINE ONLINE node2

ora.oc4j

1 ONLINE ONLINE node1

ora.scan1.vip

1 ONLINE ONLINE node2

ora.scan2.vip

1 ONLINE ONLINE node1

ora.scan3.vip

1 ONLINE ONLINE node1

ora.zhongwc.db

1 ONLINE ONLINE node1 Open

2 ONLINE ONLINE node2 Open

Check the health of the cluster:

[grid@node1 ~]$ crsctl check cluster

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

[grid@node1 ~]$

All Oracle instances:

[grid@node1 ~]$ srvctl status database -d zhongwc

Instance zhongwc1 is running on node node1

Instance zhongwc2 is running on node node2

A single Oracle instance:

[grid@node1 ~]$ srvctl status instance -d zhongwc -i zhongwc1

Instance zhongwc1 is running on node node1

Node application status:

[grid@node1 ~]$ srvctl status nodeapps

VIP node1-vip is enabled

VIP node1-vip is running on node: node1

VIP node2-vip is enabled

VIP node2-vip is running on node: node2

Network is enabled

Network is running on node: node1

Network is running on node: node2

GSD is disabled

GSD is not running on node: node1

GSD is not running on node: node2

ONS is enabled

ONS daemon is running on node: node1

ONS daemon is running on node: node2

Node application configuration:

[grid@node1 ~]$ srvctl config nodeapps

Network exists: 1/192.168.0.0/255.255.0.0/eth0, type static

VIP exists: /node1-vip/192.168.1.151/192.168.0.0/255.255.0.0/eth0, hosting node node1

VIP exists: /node2-vip/192.168.1.152/192.168.0.0/255.255.0.0/eth0, hosting node node2

GSD exists

ONS exists: Local port 6100, remote port 6200, EM port 2016

Database configuration:

[grid@node1 ~]$ srvctl config database -d zhongwc -a

Database unique name: zhongwc

Database name: zhongwc

Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1

Oracle user: oracle

Spfile: +DATADG/zhongwc/spfilezhongwc.ora

Domain:

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Server pools: zhongwc

Database instances: zhongwc1,zhongwc2

Disk Groups: DATADG,FRADG

Mount point paths:

Services:

Type: RAC

Database is enabled

Database is administrator managed

ASM status:

[grid@node1 ~]$ srvctl status asm

ASM is running on node2,node1

ASM configuration:

[grid@node1 ~]$ srvctl config asm -a

ASM home: /u01/app/11.2.0/grid

ASM listener: LISTENER

ASM is enabled.

TNS listener status:

[grid@node1 ~]$ srvctl status listener

Listener LISTENER is enabled

Listener LISTENER is running on node(s): node2,node1

TNS listener configuration:

[grid@node1 ~]$ srvctl config listener -a

Name: LISTENER

Network: 1, Owner: grid

Home: <CRS home>

/u01/app/11.2.0/grid on node(s) node2,node1

End points: TCP:1521

Node application configuration (VIP, GSD, ONS, listener):

[grid@node1 ~]$ srvctl config nodeapps -a -g -s -l

Warning:-l option has been deprecated and will be ignored.

Network exists: 1/192.168.0.0/255.255.0.0/eth0, type static

VIP exists: /node1-vip/192.168.1.151/192.168.0.0/255.255.0.0/eth0, hosting node node1

VIP exists: /node2-vip/192.168.1.152/192.168.0.0/255.255.0.0/eth0, hosting node node2

GSD exists

ONS exists: Local port 6100, remote port 6200, EM port 2016

Name: LISTENER

Network: 1, Owner: grid

Home: <CRS home>

/u01/app/11.2.0/grid on node(s) node2,node1

End points: TCP:1521

SCAN status:

[grid@node1 ~]$ srvctl status scan

SCAN VIP scan1 is enabled

SCAN VIP scan1 is running on node node2

SCAN VIP scan2 is enabled

SCAN VIP scan2 is running on node node1

SCAN VIP scan3 is enabled

SCAN VIP scan3 is running on node node1

[grid@node1 ~]$

SCAN configuration:

[grid@node1 ~]$ srvctl config scan

SCAN name: cluster-scan.localdomain, Network: 1/192.168.0.0/255.255.0.0/eth0

SCAN VIP name: scan1, IP: /cluster-scan.localdomain/192.168.1.57

SCAN VIP name: scan2, IP: /cluster-scan.localdomain/192.168.1.58

SCAN VIP name: scan3, IP: /cluster-scan.localdomain/192.168.1.59

[grid@node1 ~]$
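To confirm that the SCAN name really resolves to the three addresses shown above (round-robin DNS), a quick check can be run from any node, assuming the SCAN name cluster-scan.localdomain used in this setup:

[grid@node1 ~]$ nslookup cluster-scan.localdomain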

  7. Verify clock synchronization across all cluster nodes

[grid@node1 ~]$ cluvfy comp clocksync -verbose

Verifying Clock Synchronization across the cluster nodes

Checking if Clusterware is installed on all nodes...

Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...

Check: CTSS Resource running on all nodes

Node Name Status

------------------------------------ ------------------------

node1 passed

Result: CTSS resource check passed

Querying CTSS for time offset on all nodes...

Result: Query of CTSS for time offset passed

Check CTSS state started...

Check: CTSS state

Node Name State

------------------------------------ ------------------------

node1 Active

CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...

Reference Time Offset Limit: 1000.0 msecs

Check: Reference Time Offset

Node Name Time Offset Status

------------ ------------------------ ------------------------

node1 0.0 passed

Time offset is within the specified limits on the following set of nodes:

"[node1]"

Result: Check of clock time offsets passed

Oracle Cluster Time Synchronization Services check passed

Verification of Clock Synchronization across the cluster nodes was successful.
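Note that the output above only lists node1, because cluvfy checks from the local node by default. To verify the time offsets on every cluster node, the node list can be given explicitly; a sketch (check cluvfy -help on your version for the exact syntax):

[grid@node1 ~]$ cluvfy comp clocksync -n all -verbose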

  8. Log in and check the instances

[oracle@node1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Sat Dec 29 14:30:08 2012

Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,

Data Mining and Real Application Testing options

SQL> col host_name format a20

SQL> set linesize 200

SQL> select INSTANCE_NAME,HOST_NAME,VERSION,STARTUP_TIME,STATUS,ACTIVE_STATE,INSTANCE_ROLE,DATABASE_STATUS from gv$INSTANCE;

INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS ACTIVE_ST INSTANCE_ROLE DATABASE_STATUS

---------------- -------------------- ----------------- ----------------------- ------------ --------- ------------------ -----------------

zhongwc1 node1.localdomain 11.2.0.3.0 29-DEC-2012 13:55:55 OPEN NORMAL PRIMARY_INSTANCE ACTIVE

zhongwc2 node2.localdomain 11.2.0.3.0 29-DEC-2012 13:56:07 OPEN NORMAL PRIMARY_INSTANCE ACTIVE
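Since Data Guard will be layered on top of this RAC database in later parts, it is also worth confirming the database role and open mode at this point; a quick check from either instance:

SQL> select name, database_role, open_mode from v$database;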

[grid@node1 ~]$ sqlplus / as sysasm

SQL*Plus: Release 11.2.0.3.0 Production on Sat Dec 29 14:31:04 2012

Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Real Application Clusters and Automatic Storage Management options

SQL> select name from v$asm_diskgroup;

NAME

------------------------------------------------------------

CRS

DATADG

FRADG
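From the same SYSASM session you can also check how much space is left in each disk group (FRADG in particular will be consumed by archived logs once Data Guard is set up); for example:

SQL> select name, state, total_mb, free_mb from v$asm_diskgroup;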


Reposted from: http://blog.itpub.net/26736162/viewspace-1297113/


Via: blog.csdn.net/qq_37136900/article/details/81625718