Notes on Installing Oracle 11.2.0.4 RAC on Linux 7

This document describes several problems encountered while installing Oracle RAC 11.2.0.4 on a cloud server running CentOS 7, and how each was handled:

1. After installing the Grid Infrastructure software and before executing root.sh, both nodes must be patched

Install of Clusterware fails while running root.sh on OL7 - ohasd fails to start (Doc ID 1959008.1)

Grid 11.2.0.4 install fails when running root.sh on OL7; this affects both Oracle Clusterware and Oracle Restart installation.

rootcrs.log/roothas.log confirms that ohasd/crsd failed to start

CAUSE

There is a known issue where OL7 expects to use systemd rather than initd for running processes and restarting them and root.sh does not handle this currently.

This was reported in the following Unpublished Bug
      Bug 18370031  - RC SCRIPTS (/ETC/RC.D/RC.* , /ETC/INIT.D/* ) ON OL7 FOR CLUSTERWARE

SOLUTION

Because Oracle Linux 7 (and Red Hat 7) use systemd rather than initd for starting/restarting processes and run them as services, the current software install of both 11.2.0.4 & 12.1.0.1 will not succeed, because the ohasd process does not start properly.
On OL7 it needs to be set up as a service, and the patch fix for Bug 18370031 needs to be applied for this BEFORE you run root.sh when prompted.

Apply patch 18370031 for 11.2.0.4.
This is also mentioned in the 11gR2 Release Notes: https://docs.oracle.com/cd/E11882_01/relnotes.112/e23558/toc.htm#CJAJEBGG
During the Oracle Grid Infrastructure installation, you must apply patch 18370031 before configuring the software that is installed.
The timing of applying the patch is important and is described in detail in the Note 1951613.1 on My Oracle Support. This patch ensures that
the clusterware stack is configured to use systemd for clusterware processes, as Oracle Linux 7 uses systemd for all services.
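For background on what the fix has to accomplish: on EL7, ohasd must be managed as a systemd service rather than an init script. Applying the official patch is the supported route; purely as an illustration (the unit name, file location, and reliance on /etc/init.d/init.ohasd are assumptions here, not the actual contents of patch 18370031), a manual workaround circulating for this bug creates a unit along these lines:

# cat > /etc/systemd/system/ohasd.service <<'EOF'
[Unit]
Description=Oracle High Availability Services
After=syslog.target

[Service]
# run the init script that the Grid installation lays down
ExecStart=/etc/init.d/init.ohasd run
Type=simple
Restart=always

[Install]
WantedBy=multi-user.target
EOF
# systemctl daemon-reload
# systemctl enable ohasd.service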


Before executing root.sh, patch 18370031 must be applied.
Specific operation: download the patch, unzip it on both nodes, apply it on each node before executing the root.sh script, and only then run root.sh:
$ /u01/app/11.2.0/grid/OPatch/opatch apply -local
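For reference, a sketch of the whole sequence on each node, run as the grid software owner (the zip file name and the /tmp staging directory are assumptions; the grid home matches this installation):

$ cd /tmp
$ unzip p18370031_112040_Linux-x86-64.zip          # assumed name of the patch download
$ cd 18370031
$ /u01/app/11.2.0/grid/OPatch/opatch apply -local  # -local patches this node only; repeat on node 2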

2. Oracle database software installation fails with error in invoking target 'agent nmhs' of make file ins_emagent.mk

Error in invoking target 'agent nmhs' of make file ins_emagent.mk while installing Oracle 11.2.0.4 on Linux (Doc ID 2299494.1)

SYMPTOMS

When installing Oracle database 11.2.0.4 software on some Linux x86-64 releases, such as SUSE12SP1, SUSE12SP2 or RHEL7, the below error is reported during link stage:

SOLUTION

Edit $ORACLE_HOME/sysman/lib/ins_emagent.mk, search for the line

$(MK_EMAGENT_NMECTL)

Then replace the line with

$(MK_EMAGENT_NMECTL) -lnnz11 (that is, append a space, a hyphen, the letters l and nnz, and the numeral 11). In this installation the error actually occurred on only one node, so the edit was only needed there.
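The edit can also be made with a one-line sed (a sketch; it keeps a backup of the original file and assumes ORACLE_HOME is set in the environment):

$ sed -i.bak 's/\$(MK_EMAGENT_NMECTL)$/& -lnnz11/' $ORACLE_HOME/sysman/lib/ins_emagent.mk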

Then click the “Retry” button to continue.

 

 

---- The following Oracle bug has its own MOS documentation; refer to it for background.

Patch 19692824

During installation of Oracle Database or Oracle RAC on OL7, the following linking error may be encountered:

Error in invoking target 'agent nmhs' of makefile '<ORACLE_HOME>/sysman/lib/ins_emagent.mk'. See '<installation log>' for details.

If this error is encountered, the user should select Continue. Then, after the installation has completed, the user must download Patch 19692824 from My Oracle Support and apply it per the instructions included in the patch README.

3. In this public cloud customer environment, private-network HAIP traffic was blocked, so during the GRID installation the grid stack on the second node could not start

During GRID installation, executing the root.sh script on node 2 failed with CRS-5018 (:CLSN00037:). Examining the logs showed that the error appeared after the cluster tried to start ASM, which means the ASM instance could not start successfully.
Since HAIP communication must be working before Oracle ASM can start, the HAIP information became the focus.

Reference
Known Issues: Grid Infrastructure Redundant Interconnect and ora.cluster_interconnect.haip (Doc ID 1640865.1)

Bug 11077756 - allow root script to continue upon HAIP failure

Issue: startup failure of HAIP fails the root script; the fix for the bug allows the root script to continue so the HAIP issue can be worked later.

Fixed in: 11.2.0.2 GI PSU6, 11.2.0.3 and above

Note: the consequence is that HAIP will be disabled. Once the cause is identified and solution is implemented, HAIP needs to be enabled when there's an outage window. To enable, as root on ALL nodes:

# $GRID_HOME/bin/crsctl modify res ora.cluster_interconnect.haip -attr "ENABLED=1" -init
# $GRID_HOME/bin/crsctl stop crs
# $GRID_HOME/bin/crsctl start crs

Observing the cluster private network information, HAIP on the other node (node 2) was normal:

 # ./crsctl stat res ora.cluster_interconnect.haip -init 

NAME=ora.cluster_interconnect.haip
TYPE=ora.haip.type
TARGET=ONLINE
STATE=ONLINE on rac2

Querying the routing table then revealed the abnormality: the HAIP link-local network (169.254.0.0) was bound to eth0, the public NIC, rather than to eth1, the private NIC:
[root@rac2 bin]# netstat -rn
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
10.10.10.0      0.0.0.0         255.255.255.0   U         0 0          0 eth1
10.118.7.0      0.0.0.0         255.255.255.0   U         0 0          0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth0
0.0.0.0         10.118.7.1      0.0.0.0         UG        0 0          0 eth0

Original link: https://blog.csdn.net/evils798/article/details/27248263

Operation: configure a gateway for the private network, restart the network service, and reinstall (alternatively, the link above suggests explicitly routing the 169.254 network to the private NIC).
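A sketch of that routing approach, assuming eth1 is the private NIC as in the routing table above (run as root on each node):

# route add -net 169.254.0.0 netmask 255.255.0.0 dev eth1
# echo '169.254.0.0/16 dev eth1' >> /etc/sysconfig/network-scripts/route-eth1   # persist across reboots on EL7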

After reconfiguring the private gateway address and reinstalling the RAC, executing root.sh errored again. This time Oracle's HAIP 169.254 address was on eth1, the private NIC, as expected; but it was found that node 1 could not ping node 2's HAIP address.

ASM on Non-First Node (Second or Others) Fails to Start: PMON (ospid: nnnn): terminating the instance due to error 481 (Doc ID 1383737.1)

Likewise, as the grid user:
$ sqlplus / as sysasm
SQL> startup
fails with the error described above, again pointing to a private network / HAIP problem.

Case 5. HAIP is up on all nodes and route info is present but HAIP is not pingable

Symptom:
HAIP is present on both nodes and the route information is also present, but neither node can ping or traceroute the other node's HAIP address.
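To check this yourself, find each node's HAIP address (a 169.254.x.x alias on the private NIC) and ping it from the other node; a sketch assuming eth1 is the private NIC:

# ip addr show eth1 | grep 169.254      # run on each node to find its HAIP address
# ping -c 3 <other node's 169.254 address>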

······

Solution: 

For an OpenStack cloud implementation, engage the system admin to create another neutron port to map link-local traffic. For other environments, engage the SysAdmin/NetworkAdmin to review the routing/network setup.
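As one possible shape of that network-side fix (a different mechanism than creating a new port, and purely an assumption about the cloud's tooling), OpenStack's allowed-address-pairs feature can permit link-local traffic on the existing private-NIC ports:

$ openstack port set --allowed-address ip-address=169.254.0.0/16 <private-NIC-port-id>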

The proper solution is to have network engineers adjust the network, but with this particular cloud vendor it was difficult to open HAIP (link-local) connectivity between the nodes.
The remaining option, for installation purposes in the cloud environment, is to disable the HAIP service.

 Disable HAIP

Reference links
http://blog.itpub.net/23135684/viewspace-752721/
https://blog.csdn.net/ctypyb2002/article/details/90705436
https://blog.51cto.com/snowhill/2045748

After root.sh errors out on node 2 with the cluster failure, execute the following modification commands as root on node 2, restart CRS, and then execute the root.sh script again:
crsctl modify res ora.cluster_interconnect.haip -attr "ENABLED=0" -init
crsctl modify res ora.asm -attr "START_DEPENDENCIES='hard(ora.cssd,ora.ctssd)pullup(ora.cssd,ora.ctssd)weak(ora.drivers.acfs)',STOP_DEPENDENCIES='hard(intermediate:ora.cssd)'" -init
Then execute the same modification commands as root on node 1.

After that, the GRID installation proceeded smoothly, and the Oracle database cluster software also installed normally.
DBCA then hit errors while building the database:
CRS-2672 and CRS-5017 errors appeared, and DBCA behaved abnormally during database creation; the DB instance could start properly on only one node, while the DB on the other node failed to start.
When root.sh errored during the Grid installation and HAIP was subsequently disabled, the following steps are also required: the ASM instances and the DB instances must be modified so that each node uses its private IP address for the interconnect:
SQL> alter system set cluster_interconnects='10.10.10.3' scope=spfile sid='+ASM1';
SQL> alter system set cluster_interconnects='10.10.10.4' scope=spfile sid='+ASM2';
For the DB:
SQL> alter system set cluster_interconnects='10.10.10.3' scope=spfile sid='orcl1';
SQL> alter system set cluster_interconnects='10.10.10.4' scope=spfile sid='orcl2';
The practical steps: after DBCA finished building the database, modify the DB and ASM parameters, then restart the cluster; the DB instances on the second node could then start properly, and the installation succeeded.
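After the restart, which interconnect each instance is actually using can be sanity-checked with a query against the standard dynamic view:

SQL> select * from gv$cluster_interconnects;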



Recommendation: when HAIP is disabled, modify the ASM parameters right after the GRID cluster installs successfully and before running DBCA; whether this avoids the problem above has not been verified here.

 

 

Origin: www.cnblogs.com/lvcha001/p/12155042.html