Building Oracle RAC in Detail (raw devices)

Get the overall plan clear first. The hardware is one laptop running two RHEL 5 virtual machines (32-bit); Oracle is installed on both VMs. Prepare the prerequisites carefully so the installation goes smoothly, and download the required packages from Oracle's official site.

1. Overall plan

1. Select two hosts on the same platform and make sure they have network connectivity.
2. Design and configure the IP addresses: each host needs three IPs, of which two are configured up front and one is reserved.
   private IP: on its own segment, used only for the node-to-node interconnect, e.g. 10.10.10.1
   public IP: the regular working address on the maintenance segment, e.g. 192.168.1.11
   virtual IP (VIP): on the same segment as the public IP; it stays unconfigured and is brought up and managed by Oracle once CRS starts, e.g. 192.168.1.101
3. Synchronize time and set up a mutual trust relationship between the two nodes, so that A and B can run scripts on each other and log in seamlessly.
4. Configure iSCSI storage: two 100 MB LUNs and one 2 GB LUN.
5. Install CRS; verify with crs_stat -t.
6. Install the database software.
7. Create the ASM instances in parallel across the cluster with dbca.
8. Create a disk group.
9. Create the cluster database on the ASM disk group.
10. Manage and maintain the RAC: configure the network, back up and restore, create tablespaces.

2. Configure the network

Two virtual machines, rac1 and rac2:
rac1: rac1.ting.com   private IP: 10.10.58.1   public IP: 192.168.58.1
rac2: rac2.ting.com   private IP: 10.10.58.2   public IP: 192.168.58.2

/etc/hosts, identical on both nodes (the -priv names carry the private interconnect addresses; the VIPs sit on the public segment):
192.168.58.1 rac1
192.168.58.2 rac2
10.10.58.1 rac1-priv
10.10.58.2 rac2-priv
192.168.58.100 rac1-vip
192.168.58.200 rac2-vip

3. Mutual trust (SSH user equivalence) for the oracle user, on both nodes

1) Generate the RSA and DSA key pairs as oracle on each node:
[oracle@rac1 ~]$ ssh-keygen -t rsa
[oracle@rac1 ~]$ ssh-keygen -t dsa
2) On rac1, collect both public keys into authorized_keys:
[oracle@rac1 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[oracle@rac1 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
3) Exchange the keys between the nodes. Note: the following commands prompt for rac2's oracle password; enter it at the prompt, and if a command fails, simply retry it.
On rac1:
[oracle@rac1 ~]$ scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys
On rac2:
[oracle@rac2 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[oracle@rac2 ~]$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[oracle@rac2 ~]$ scp ~/.ssh/authorized_keys rac1:~/.ssh/authorized_keys
4) Make sure each node can reach the other under every name. Run the checks from both machines:
[oracle@rac1 ~]$ ssh rac1 date
[oracle@rac1 ~]$ ssh rac2 date
[oracle@rac1 ~]$ ssh rac1-priv date
[oracle@rac1 ~]$ ssh rac2-priv date
Then switch to rac2 and repeat:
[oracle@rac2 ~]$ ssh rac1 date
[oracle@rac2 ~]$ ssh rac2 date
[oracle@rac2 ~]$ ssh rac1-priv date
[oracle@rac2 ~]$ ssh rac2-priv date
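Running the checks once from each node also records the peer host keys, so the installer will not hang later on an SSH confirmation prompt. As a convenience (a small sketch, not part of the original procedure), the four checks can be wrapped in a loop and run as oracle on each node:

for h in rac1 rac2 rac1-priv rac2-priv; do
    # each command must print the remote date without prompting for a password
    ssh $h date
done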
Configure the hangcheck-timer module; this must be done on both nodes. hangcheck-timer is a kernel-level IO-fencing module shipped with Linux that monitors the running state of the kernel: if the system hangs for too long, the module reboots it automatically. Because it runs in kernel space, it is not affected by system load. It uses the CPU's Time Stamp Counter (TSC), a register that increments automatically on every clock cycle, so it works from hardware time and is highly accurate. The module takes two parameters: hangcheck_tick defines how often to check, with a default of 30 seconds; since a busy kernel can delay the check, hangcheck_margin defines the maximum tolerated delay, with a default of 180 seconds. hangcheck-timer checks the kernel at each hangcheck_tick interval; as long as the gap between two checks is shorter than hangcheck_tick + hangcheck_margin, the kernel is considered to be running normally, otherwise it is considered abnormal and the module reboots the system.

CRS itself has a MissCount parameter, which can be viewed with crsctl get css misscount. When the heartbeat between RAC nodes is lost, Clusterware must be certain during reconfiguration that the failed node really is dead; otherwise, if the node had merely dropped heartbeats under a temporary overload and the other nodes start reconfiguration while that node never reboots, the database can be corrupted. MissCount must therefore be greater than hangcheck_tick + hangcheck_margin.

1) Locate the module:
[root@rac1 ~]# find /lib/modules -name "hangcheck-timer.ko"
/lib/modules/2.6.18-164.el5/kernel/drivers/char/hangcheck-timer.ko
/lib/modules/2.6.18-164.el5xen/kernel/drivers/char/hangcheck-timer.ko
2) Load the module, and have the system load it automatically at startup by adding the modprobe line to /etc/rc.d/rc.local:
[root@rac1 ~]# modprobe hangcheck-timer
[root@rac1 ~]# vi /etc/rc.d/rc.local
modprobe hangcheck-timer
3) Configure the hangcheck-timer parameters by adding the following to /etc/modprobe.conf:
[root@rac1 ~]# vi /etc/modprobe.conf
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
4) Confirm the module loaded successfully:
[root@rac1 ~]# grep Hangcheck /var/log/messages | tail -2
Sep 7 19:53:03 rac1 kernel: Hangcheck: starting hangcheck timer 0.9.0 (tick is 180 seconds, margin is 60 seconds).
Sep 7 19:53:03 rac1 kernel: Hangcheck: Using monotonic_clock().

4. Configure iSCSI storage

The machine used for storage is 192.168.58.58; it exports two 100 MB LUNs and one 2 GB LUN.

Target side:
yum install scsi-target-utils
vim /etc/tgt/targets.conf
<target iqn.20111215.com.ting:server.sda5>
    backing-store /dev/sda5
    vendor_id sda5
    product_id disk1_100m
</target>
<target iqn.20111215.com.ting:server.sda6>
    backing-store /dev/sda6
    vendor_id sda6
    product_id disk2_100m
</target>
<target iqn.20111215.com.ting:server.sda7>
    backing-store /dev/sda7
    vendor_id sda7
    product_id disk3_2G
</target>
Enable and start the service:
chkconfig tgtd on
/etc/init.d/tgtd start
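The initiator-side configuration is truncated in the source at this point. A minimal sketch of how the LUNs would typically be attached on RHEL 5, assuming the stock open-iscsi tools and that the LUNs appear as /dev/sdb, /dev/sdc and /dev/sdd (the names depend on discovery order):

# on both rac1 and rac2, as root
yum install iscsi-initiator-utils
chkconfig iscsi on
service iscsi start
# discover the targets exported by the storage host, then log in to all of them
iscsiadm -m discovery -t sendtargets -p 192.168.58.58
iscsiadm -m node --login

Because this build uses raw devices, the new block devices would then be partitioned with fdisk (on one node only) and bound to raw devices, roughly as follows; repeat the raw/chown lines in /etc/rc.d/rc.local so the bindings survive a reboot:

# device names below are assumptions, not from the original text
raw /dev/raw/raw1 /dev/sdb1    # 100 MB LUN, intended for the OCR
raw /dev/raw/raw2 /dev/sdc1    # 100 MB LUN, intended for the voting disk
raw /dev/raw/raw3 /dev/sdd1    # 2 GB LUN, intended for ASM
chown oracle:oinstall /dev/raw/raw[123]

Adjust the ownership to whatever the installer's checks demand (the 10g documentation has the OCR device owned by root:oinstall).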
With the shared disks attached, prepare the local environment. On rac2 (rac1 is set up the same way):
mkdir -p /u01/app/oracle
chown -R oracle:oinstall /u01
chmod -R 777 /u01

Kernel parameters, on both nodes:
vim /etc/sysctl.conf
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_max = 262144
kernel.shmall = 131072000
kernel.shmmax = 544288000     (the official site gives 524288000, but that value proved too small when checked)
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
Apply the settings with sysctl -p.

Set the user resource limits, again on both nodes:
vi /etc/security/limits.conf
oracle soft memlock 5242880
oracle hard memlock 5242880
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 65536
oracle hard nofile 65536
Add the following line to /etc/pam.d/login:
session required /lib/security/pam_limits.so

Configure the oracle user's profile on each node (this is rac2's copy; on rac1 set ORACLE_SID=rac1):
vim ~/.bash_profile
export PATH
unset USERNAME
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db
export ORA_CRS_HOME=$ORACLE_BASE/product/crs
export NLS_LANG=AMERICAN_AMERICA.UTF8
export ORACLE_SID=rac2
export ORACLE_TERM=xterm
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export PATH=$ORA_CRS_HOME/bin:$ORACLE_HOME/bin:$PATH

Package check; the loop prints any required package that is still missing:
for i in binutils compat-gcc-34 compat-libstdc++-296 control-center \
gcc gcc-c++ glibc glibc-common glibc-devel libaio libgcc \
libstdc++ libstdc++-devel libXp make openmotif22 setarch
do
rpm -q $i &>/dev/null || F="$F $i"
done; echo $F; unset F

5. Install CRS; verify with crs_stat -t

Use the CVU to verify that the cluster can be installed on the two nodes (this must pass).
1> Verify from rac1:
su - oracle
/home/oracle/clusterware/cluvfy/runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
If it reports an error along these lines:
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "null"
  Destination Node        Reachable?
  ----------------------  ----------
  rac2                    no
  rac1                    no
Result: Node reachability check failed from node "null".
ERROR: Unable to reach any of the nodes.
Verification cannot proceed.
Pre-check for cluster services setup was unsuccessful on all the nodes.
then check that the output of hostname matches the host name recorded in /etc/hosts, find out why they differ, and correct it; then verify again. Do the same verification from rac2.

2> On rac1, start the installer from the clusterware media:
/home/oracle/clusterware/runInstaller
Note: change the installation path from db_1 to crs_1.
If the graphical installer reports a missing module or the interface does not display properly, the ttfonts-zh_CN-2.14-6.noarch package is not installed; install it and rerun.
In the graphical installer, add node rac2 and add the raw devices. When the installer prompts you to execute the root scripts, make two changes first:
in /home/oracle/oracle/product/10.2.0/crs/bin/vipca, add a new line after the fi on line 123:
unset LD_ASSUME_KERNEL
in /home/oracle/oracle/product/10.2.0/crs/bin/srvctl, add the same line after the line export LD_ASSUME_KERNEL:
unset LD_ASSUME_KERNEL
Then execute the two scripts the installer prompted for:
/home/oracle/oraInventory/orainstRoot.sh
/home/oracle/oracle/product/10.2.0/crs/root.sh
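For reference, the block being patched in vipca is the widely documented 10.2.0.1-on-RHEL 5 LD_ASSUME_KERNEL workaround (Metalink Note 414163.1); after the edit it looks roughly like this (the exact line number varies by version, so treat this as a sketch):

if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
    LD_ASSUME_KERNEL=2.4.19
    export LD_ASSUME_KERNEL
fi
unset LD_ASSUME_KERNEL    # added line: RHEL 5 no longer ships the LinuxThreads glibc that LD_ASSUME_KERNEL=2.4.19 selects

Without the unset, vipca and srvctl fail to start on RHEL 5, which is why both files must be edited before running the root scripts.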

Reproduced from: https://my.oschina.net/766/blog/211081

Origin: blog.csdn.net/weixin_34032792/article/details/91546995