3.3 Oracle Automatic Storage Management Storage Configuration

Review the following sections to configure storage for Oracle Automatic Storage Management:

3.3.1 Configuring Storage for Oracle Automatic Storage Management

This section describes how to configure storage for use with Oracle Automatic Storage Management.

3.3.1.1 Identifying Storage Requirements for Oracle Automatic Storage Management

To identify the storage requirements for using Oracle ASM, you must determine how many devices and the amount of free disk space that you require. To complete this task, follow these steps:

  1. Determine whether you want to use Oracle ASM for Oracle Clusterware files (OCR and voting disks), Oracle Database files, recovery files, or all files except for Oracle Clusterware or Oracle Database binaries. Oracle Database files include data files, control files, redo log files, the server parameter file, and the password file.

    Note:

    You do not have to use the same storage mechanism for Oracle Clusterware, Oracle Database files and recovery files. You can use a shared file system for one file type and Oracle ASM for the other.

    If you choose to enable automated backups and you do not have a shared file system available, then you must choose Oracle ASM for recovery file storage.

    If you enable automated backups during the installation, then you can select Oracle ASM as the storage mechanism for recovery files by specifying an Oracle Automatic Storage Management disk group for the Fast Recovery Area. If you select a noninteractive installation mode, then by default the installer creates one disk group and stores the OCR and voting disk files there. If you want to have any other disk groups for use in a subsequent database install, then you can choose interactive mode, or run ASMCA (or a command line tool) to create the appropriate disk groups before starting the database install.

  2. Choose the Oracle ASM redundancy level to use for the Oracle ASM disk group.

    The redundancy level that you choose for the Oracle ASM disk group determines how Oracle ASM mirrors files in the disk group and determines the number of disks and amount of free disk space that you require, as follows:

    • External redundancy

      An external redundancy disk group requires a minimum of one disk device. The effective disk space in an external redundancy disk group is the sum of the disk space in all of its devices.

      For Oracle Clusterware files, External redundancy disk groups provide 1 voting disk file, and 1 OCR, with no copies. You must use an external technology to provide mirroring for high availability.

      Because Oracle ASM does not mirror data in an external redundancy disk group, Oracle recommends that you use external redundancy with storage devices such as RAID, or other similar devices that provide their own data protection mechanisms.

    • Normal redundancy

      In a normal redundancy disk group, to increase performance and reliability, Oracle ASM by default uses two-way mirroring. A normal redundancy disk group requires a minimum of two disk devices (or two failure groups). The effective disk space in a normal redundancy disk group is half the sum of the disk space in all of its devices.

      For Oracle Clusterware files, Normal redundancy disk groups provide 3 voting disk files, 1 OCR and 2 copies (one primary and one secondary mirror). With normal redundancy, the cluster can survive the loss of one failure group.

      For most installations, Oracle recommends that you select normal redundancy.

    • High redundancy

      In a high redundancy disk group, Oracle ASM uses three-way mirroring to increase performance and provide the highest level of reliability. A high redundancy disk group requires a minimum of three disk devices (or three failure groups). The effective disk space in a high redundancy disk group is one-third the sum of the disk space in all of its devices.

      For Oracle Clusterware files, High redundancy disk groups provide 5 voting disk files, 1 OCR and 3 copies (one primary and two secondary mirrors). With high redundancy, the cluster can survive the loss of two failure groups.

      While high redundancy disk groups do provide a high level of data protection, you should consider the greater cost of additional storage devices before deciding to select high redundancy disk groups.

  3. Determine the total amount of disk space that you require for Oracle Clusterware files, and for the database files and recovery files.

    Use Table 3-5 and Table 3-6 to determine the minimum number of disks and the minimum disk space requirements for installing Oracle Clusterware files, and installing the starter database, where you have voting disks in a separate disk group:

    Table 3-5 Total Oracle Clusterware Storage Space Required by Redundancy Type

    Redundancy Level    Minimum Number of Disks    Oracle Cluster Registry (OCR) Files    Voting Disk Files    Both File Types
    External            1                          300 MB                                 300 MB               600 MB
    Normal              3                          600 MB                                 900 MB               1.5 GB (Footnote 1)
    High                5                          900 MB                                 1.5 GB               2.4 GB

    Footnote 1: If you create a disk group during installation, then it must be at least 2 GB.

    Note:

    If the voting disk files are in a disk group, be aware that disk groups with Oracle Clusterware files (OCR and voting disk files) have a higher minimum number of failure groups than other disk groups.

    If you create a disk group as part of the installation in order to install the OCR and voting disk files, then the installer requires that you create these files on a disk group with at least 2 GB of available space.

    A quorum failure group is a special type of failure group and disks in these failure groups do not contain user data. A quorum failure group is not considered when determining redundancy requirements in respect to storing user data. However, a quorum failure group counts when mounting a disk group.

    Table 3-6 Total Oracle Database Storage Space Required by Redundancy Type

    Redundancy Level    Minimum Number of Disks    Database Files    Recovery Files    Both File Types
    External            1                          1.5 GB            3 GB              4.5 GB
    Normal              2                          3 GB              6 GB              9 GB
    High                3                          4.5 GB            9 GB              13.5 GB

  4. Determine an allocation unit size. Every Oracle ASM disk is divided into allocation units (AU). An allocation unit is the fundamental unit of allocation within a disk group. You can select the AU Size value from 1, 2, 4, 8, 16, 32 or 64 MB, depending on the specific disk group compatibility level. The default value is set to 1 MB.

  5. For Oracle Clusterware installations, you must also add additional disk space for the Oracle ASM metadata. You can use the following formula to calculate the disk space requirements (in MB) for OCR and voting disk files, and the Oracle ASM metadata:

    total = [2 * ausize * disks] + [redundancy * (ausize * (nodes * (clients + 1) + 30) + (64 * nodes) + 533)]

    Where:

    • redundancy = Number of mirrors: external = 1, normal = 2, high = 3.

    • ausize = Metadata AU size in megabytes (default is 1 MB)

    • nodes = Number of nodes in cluster.

    • clients = Number of database instances for each node.

    • disks = Number of disks in the disk group.

    For example, for a four-node Oracle RAC installation, using three disks in a normal redundancy disk group, you require an additional 1684 MB of space (a shell sketch of this calculation appears at the end of this section):

    [2 * 1 * 3] + [2 * (1 * (4 * (4 + 1)+ 30)+ (64 * 4)+ 533)] = 1684 MB

    To ensure high availability of Oracle Clusterware files on Oracle ASM, for a normal redundancy disk group, as a general rule for most installations, you must have at least 2 GB of disk space for Oracle Clusterware files in three separate failure groups, with at least three physical disks. To ensure that the effective disk space to create Oracle Clusterware files is 2 GB, best practice suggests that you ensure at least 2.1 GB of capacity for each disk, with a total capacity of at least 6.3 GB for three disks.

  6. Optionally, identify failure groups for the Oracle ASM disk group devices.

    If you intend to use a normal or high redundancy disk group, then you can further protect your database against hardware failure by associating a set of disk devices in a custom failure group. By default, each device comprises its own failure group. However, if two disk devices in a normal redundancy disk group are attached to the same SCSI controller, then the disk group becomes unavailable if the controller fails. The controller in this example is a single point of failure.

    To protect against failures of this type, you could use two SCSI controllers, each with two disks, and define a failure group for the disks attached to each controller. This configuration would enable the disk group to tolerate the failure of one SCSI controller.

    Note:

    Define custom failure groups after installation, using the GUI tool ASMCA, the command line tool asmcmd, or SQL commands.

    If you define custom failure groups, then for failure groups containing database files only, you must specify a minimum of two failure groups for normal redundancy disk groups and three failure groups for high redundancy disk groups.

    For failure groups containing database files and clusterware files, including voting disks, you must specify a minimum of three failure groups for normal redundancy disk groups, and five failure groups for high redundancy disk groups.

    Disk groups containing voting files must have at least 3 failure groups for normal redundancy or at least 5 failure groups for high redundancy. Otherwise, the minimum is 2 and 3 respectively. The minimum number of failure groups applies whether or not they are custom failure groups.

  7. If you are sure that a suitable disk group does not exist on the system, then install or identify appropriate disk devices to add to a new disk group. Use the following guidelines when identifying appropriate disk devices:

    • All of the devices in an Oracle ASM disk group should be the same size and have the same performance characteristics.

    • Do not specify multiple partitions on a single physical disk as a disk group device. Each disk group device should be on a separate physical disk.

    • Although you can specify a logical volume as a device in an Oracle ASM disk group, Oracle does not recommend their use because it adds a layer of complexity that is unnecessary with Oracle ASM. In addition, if you decide to use a logical volume with Oracle ASM and Oracle RAC, then Oracle RAC requires a cluster logical volume manager.

      Oracle recommends that if you choose to use a logical volume manager, then use the logical volume manager to represent a single LUN without striping or mirroring, so that you can minimize the impact of the additional storage layer.
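
As a cross-check of the disk space formula in step 5, the following minimal shell sketch reproduces the sample calculation (four nodes, three disks, normal redundancy, four database instances for each node, 1 MB AU size). The variable names are illustrative only:

$ redundancy=2 ausize=1 nodes=4 clients=4 disks=3
$ echo $(( (2 * ausize * disks) + (redundancy * ((ausize * ((nodes * (clients + 1)) + 30)) + (64 * nodes) + 533)) ))
1684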

3.3.1.2 Creating Files on a NAS Device for Use with Oracle ASM

If you have a certified NAS storage device, then you can create zero-padded files in an NFS mounted directory and use those files as disk devices in an Oracle ASM disk group.

To create these files, follow these steps:

  1. If necessary, create an exported directory for the disk group files on the NAS device.

    Refer to the NAS device documentation for more information about completing this step.

  2. Switch user to root.

  3. Create a mount point directory on the local system. For example:

    # mkdir -p /mnt/oracleasm
    
  4. To ensure that the NFS file system is mounted when the system restarts, add an entry for the file system in the mount file /etc/fstab (a sample entry appears after this procedure).

    See Also:

    My Oracle Support note 359515.1 for updated NAS mount option information, available at the following URL:
    https://support.oracle.com
    

    For more information about editing the mount file for the operating system, refer to the man pages. For more information about recommended mount options, refer to the section "Checking NFS Mount and Buffer Size Parameters for Oracle RAC".

  5. Enter a command similar to the following to mount the NFS file system on the local system:

    # mount /mnt/oracleasm
    
  6. Choose a name for the disk group to create. For example: sales1.

  7. Create a directory for the files on the NFS file system, using the disk group name as the directory name. For example:

    # mkdir /mnt/oracleasm/sales1
    
  8. Use commands similar to the following to create the required number of zero-padded files in this directory:

    # dd if=/dev/zero of=/mnt/oracleasm/sales1/disk1 bs=1024k \
        count=1000 oflag=direct
    

    This example creates 1 GB files on the NFS file system. You must create one, two, or three files respectively to create an external, normal, or high redundancy disk group.

  9. Enter commands similar to the following to change the owner, group, and permissions on the directory and files that you created, where the installation owner is grid, and the OSASM group is asmadmin:

    # chown -R grid:asmadmin /mnt/oracleasm
    # chmod -R 660 /mnt/oracleasm
    
  10. If you plan to install Oracle RAC or a standalone Oracle Database, then during installation, edit the Oracle ASM disk discovery string to specify a regular expression that matches the file names you created. For example:

    /mnt/oracleasm/sales1/
    

    Note:

    During installation, disk paths mounted on Oracle ASM and registered on ASMLIB with the string ORCL:* are listed as default database storage candidate disks.
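
For reference, the following is a sample /etc/fstab entry for the mount point added in step 4. The server name nas-server and export path /vol/oracleasm are placeholders, and the mount options shown are only an illustration; take the actual options for your platform and NFS version from My Oracle Support note 359515.1:

nas-server:/vol/oracleasm  /mnt/oracleasm  nfs  rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600  0  0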

3.3.1.3 Using an Existing Oracle ASM Disk Group

Select from the following choices to store either database or recovery files in an existing Oracle ASM disk group, depending on installation method:

  • If you select an installation method that runs Database Configuration Assistant in interactive mode, then you can decide whether you want to create a disk group, or to use an existing one.

    The same choice is available to you if you use Database Configuration Assistant after the installation to create a database.

  • If you select an installation method that runs Database Configuration Assistant in noninteractive mode, then you must choose an existing disk group for the new database; you cannot create a disk group. However, you can add disk devices to an existing disk group if it has insufficient free space for your requirements.

Note:

The Oracle Automatic Storage Management instance that manages the existing disk group can be running in a different Oracle home directory.

To determine if an existing Oracle Automatic Storage Management disk group exists, or to determine if there is sufficient disk space in a disk group, you can use the Oracle ASM command line tool (asmcmd), Oracle Enterprise Manager Grid Control or Database Control. Alternatively, you can use the following procedure:

  1. View the contents of the oratab file to determine if an Oracle Automatic Storage Management instance is configured on the system:

    $ more /etc/oratab
    

    If an Oracle Automatic Storage Management instance is configured on the system, then the oratab file should contain a line similar to the following:

    +ASM2:oracle_home_path
    

    In this example, +ASM2 is the system identifier (SID) of the Oracle Automatic Storage Management instance, with the node number appended, and oracle_home_path is the Oracle home directory where it is installed. By convention, the SID for an Oracle Automatic Storage Management instance begins with a plus sign.

  2. Set the ORACLE_SID and ORACLE_HOME environment variables to specify the appropriate values for the Oracle Automatic Storage Management instance (an example appears at the end of this section).

  3. Connect to the Oracle Automatic Storage Management instance and start the instance if necessary:

    $ $ORACLE_HOME/bin/asmcmd
    ASMCMD> startup
    
  4. Enter one of the following commands to view the existing disk groups, their redundancy level, and the amount of free disk space in each one:

    ASMCMD> lsdg
    

    or:

    $ORACLE_HOME/bin/asmcmd -p lsdg
    
  5. From the output, identify a disk group with the appropriate redundancy level and note the free space that it contains.

  6. If necessary, install or identify the additional disk devices required to meet the storage requirements listed in the previous section.

    Note:

    If you are adding devices to an existing disk group, then Oracle recommends that you use devices that have the same size and performance characteristics as the existing devices in that disk group.
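
The following is a minimal example of setting the environment variables described in step 2 before starting asmcmd. The instance name +ASM1 and the Grid home path are placeholders for the values you recorded from the oratab file:

$ export ORACLE_SID=+ASM1
$ export ORACLE_HOME=/u01/app/11.2.0/grid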

3.3.1.4 Configuring Disks for Oracle ASM with ASMLIB

The Oracle Automatic Storage Management (Oracle ASM) library driver (ASMLIB) simplifies the configuration and management of the disk devices by eliminating the need to rebind disk devices used with Oracle ASM each time the system is restarted.

Without ASMLIB, on Linux 2.6 kernel and later, block device paths do not maintain permissions and path persistence across system restarts unless you create a permissions or rules file on each cluster member node; a block device path that was /dev/sda can appear as /dev/sdb after a system restart. Adding new disks requires you to modify the udev file to provide permissions and path persistence for the new disks.

With ASMLIB, you define the range of disks you want to have made available as Oracle ASM disks. ASMLIB maintains permissions and disk labels that are persistent on the storage device, so that the label is available even after an operating system upgrade. You can update storage paths on all cluster member nodes by running one oracleasm command on each node.

If you intend to use Oracle ASM on block devices for database storage for Linux, then Oracle recommends that you install the ASMLIB driver and associated utilities, and use them to configure the disks for Oracle ASM.

Caution:

On IBM: Linux on System z servers, due to a block size compatibility issue, you cannot use ASMLIB with SCSI storage devices and Fibre Channel Protocol (FCP) for Oracle Grid Infrastructure release 11.2.0.1 and later.

Workaround: Use block devices directly (for example, using paths similar to /dev/mapper/mpatha_part1), or use DASD disks.

See Also:

My Oracle Support notes How to Manually Configure Disk Storage devices for use with Oracle ASM 11.2 on IBM: Linux on System z under SLES (Doc ID 1350008.1) and How to Manually Configure Disk Storage devices for use with Oracle ASM 11.2 on IBM: Linux on System z under Red Hat 5 (Doc ID 1351746.1), available at the following URL:

https://support.oracle.com

To use the Oracle Automatic Storage Management library driver (ASMLIB) to configure Oracle ASM devices, complete the following tasks.

Note:

To create a database during the installation using the Oracle ASM library driver, you must choose an installation method that runs ASMCA in interactive mode. You must also change the default disk discovery string to ORCL:*.

3.3.1.4.1 Installing and Configuring the Oracle ASM Library Driver Software

ASMLIB is already included with Unbreakable Enterprise Kernel packages, and with SUSE 11. If you are a member of the Unbreakable Linux Network, then you can install the ASMLIB rpms by subscribing to the Oracle Software for Enterprise Linux channel, and using up2date to retrieve the most current package for your system and kernel. For additional information, refer to the following URL:

http://www.oracle.com/technetwork/topics/linux/asmlib/index-101839.html

To install and configure the ASMLIB driver software manually, follow these steps:

  1. Enter the following command to determine the kernel version and architecture of the system:

    # uname -rm
    
  2. Download the required ASMLIB packages from the Oracle Technology Network (OTN) Web site:

    http://www.oracle.com/technetwork/server-storage/linux/downloads/index-088143.html
    

    Note:

    You must install oracleasm-support package version 2.0.1 or later to use ASMLIB on Red Hat Enterprise Linux 5 Advanced Server. ASMLIB is already included with SUSE distributions.

    Tip:

    See My Oracle Support note 1089399.1 for information about ASMLIB support with Red Hat distributions:

    https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1089399.1

    You must install the following packages, where version is the version of the ASMLIB driver, arch is the system architecture, and kernel is the version of the kernel that you are using:

    oracleasm-support-version.arch.rpm
    oracleasm-kernel-version.arch.rpm
    oracleasmlib-version.arch.rpm
    
  3. Switch user to the root user:

    $ su -
    
  4. Enter a command similar to the following to install the packages:

    # rpm -ivh oracleasm-support-version.arch.rpm \
               oracleasm-kernel-version.arch.rpm \
               oracleasmlib-version.arch.rpm
    

    For example, if you are using the Red Hat Enterprise Linux 5 AS kernel on an AMD64 system, then enter a command similar to the following:

    # rpm -ivh oracleasm-support-2.1.3-1.el5.x86_64.rpm \
         oracleasm-2.6.18-194.26.1.el5xen-2.0.5-1.el5.x86_64.rpm \
         oracleasmlib-2.0.4-1.el5.x86_64.rpm
    
  5. Enter the following command to run the oracleasm initialization script with the configure option:

    # /usr/sbin/oracleasm configure -i
    

    Note:

    The oracleasm command in /usr/sbin is the command you should use. The /etc/init.d path is not deprecated, but the oracleasm binary in that path is now typically used for internal commands.
  6. Enter the following information in response to the prompts that the script displays:

    Prompt: Default user to own the driver interface
    Suggested response: For a standard groups and users configuration, specify the Oracle software owner user (for example, oracle). For a job role separation groups and users configuration, specify the Grid Infrastructure software owner (for example, grid).

    Prompt: Default group to own the driver interface
    Suggested response: For a standard groups and users configuration, specify the OSDBA group for the database (for example, dba). For a job role separation groups and users configuration, specify the OSASM group for storage administration (for example, asmadmin).

    Prompt: Start Oracle ASM Library driver on boot (y/n)
    Suggested response: Enter y to start the Oracle Automatic Storage Management library driver when the system starts.

    Prompt: Scan for Oracle ASM disks on boot (y/n)
    Suggested response: Enter y to scan for Oracle ASM disks when the system starts.

    The script completes the following tasks:

    • Creates the /etc/sysconfig/oracleasm configuration file

    • Creates the /dev/oracleasm mount point

    • Mounts the ASMLIB driver file system

      Note:

      The ASMLIB driver file system is not a regular file system. It is used only by the Oracle ASM library to communicate with the Oracle ASM driver.
  7. Enter the following command to load the oracleasm kernel module:

    # /usr/sbin/oracleasm init
    
  8. Repeat this procedure on all nodes in the cluster where you want to install Oracle RAC.
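
After completing these steps on a node, you can optionally confirm that the ASMLIB driver is loaded and its file system is mounted before configuring disks. The following commands are a quick check; the exact output depends on your oracleasm-support version:

# /usr/sbin/oracleasm status
# lsmod | grep oracleasm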

3.3.1.4.2 Configuring Disk Devices to Use Oracle ASM Library Driver on x86 Systems

To configure the disk devices to use in an Oracle ASM disk group, follow these steps:

  1. If you intend to use IDE, SCSI, or RAID devices in the Oracle ASM disk group, then follow these steps:

    1. If necessary, install or configure the shared disk devices that you intend to use for the disk group and restart the system.

    2. To identify the device name for the disks to use, enter the following command:

      # /sbin/fdisk -l
      

      Depending on the type of disk, the device name can vary:

      Disk Type: IDE disk
      Device Name Format: /dev/hdxn
      Description: In this example, x is a letter that identifies the IDE disk and n is the partition number. For example, /dev/hda is the first disk on the first IDE bus.

      Disk Type: SCSI disk
      Device Name Format: /dev/sdxn
      Description: In this example, x is a letter that identifies the SCSI disk and n is the partition number. For example, /dev/sda is the first disk on the first SCSI bus.

      Disk Type: RAID disk
      Device Name Format: /dev/rd/cxdypz or /dev/ida/cxdypz
      Description: Depending on the RAID controller, RAID devices can have different device names. In the examples shown, x is a number that identifies the controller, y is a number that identifies the disk, and z is a number that identifies the partition. For example, /dev/ida/c0d1 is the second logical drive on the first controller.

      To include devices in a disk group, you can specify either whole-drive device names or partition device names.

      Note:

      Oracle recommends that you create a single whole-disk partition on each disk.
    3. Use either fdisk or parted to create a single whole-disk partition on the disk devices.

  2. Enter a command similar to the following to mark a disk as an Oracle ASM disk:

    # /usr/sbin/oracleasm createdisk DISK1 /dev/sdb1
    

    In this example, DISK1 is the name you assign to the disk.

    Note:

    The disk names that you specify can contain uppercase letters, numbers, and the underscore character. They must start with an uppercase letter.

    If you are using a multi-pathing disk driver with Oracle ASM, then make sure that you specify the correct logical device name for the disk.

  3. To make the disk available on the other nodes in the cluster, enter the following command as root on each node:

    # /usr/sbin/oracleasm scandisks
    

    This command identifies shared disks attached to the node that are marked as Oracle ASM disks.
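
As a quick verification on each node, you can list the marked disks after running scandisks and confirm that the disk labels you created (DISK1 in the example in step 2) appear in output similar to the following:

# /usr/sbin/oracleasm listdisks
DISK1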

3.3.1.4.3 Configuring Disk Devices to Use ASM Library Driver on IBM: Linux on System z

  1. If you formatted the DASD with the compatible disk layout, then enter a command similar to the following to create a single whole-disk partition on the device:

    # /sbin/fdasd -a /dev/dasdxxxx
    
  2. Enter a command similar to the following to mark a disk as an ASM disk:

    # /etc/init.d/oracleasm createdisk DISK1 /dev/dasdxxxx
    

    In this example, DISK1 is a name that you want to assign to the disk.

    Note:

    The disk names that you specify can contain uppercase letters, numbers, and the underscore character. They must start with an uppercase letter.

    If you are using a multi-pathing disk driver with ASM, then make sure that you specify the correct logical device name for the disk.

  3. To make the disk available on the other cluster nodes, enter the following command as root on each node:

    # /etc/init.d/oracleasm scandisks
    

    This command identifies shared disks attached to the node that are marked as ASM disks.

3.3.1.4.4 Administering the Oracle ASM Library Driver and Disks

To administer the Oracle Automatic Storage Management library driver (ASMLIB) and disks, use the oracleasm initialization script with different options, as described in Table 3-7.

Table 3-7 ORACLEASM Script Options

Option: configure

Use the configure option to reconfigure the Oracle Automatic Storage Management library driver, if necessary:

# /usr/sbin/oracleasm configure -i

To see command options, enter oracleasm configure without the -i flag.

Option: enable, disable

Use the disable and enable options to change the actions of the Oracle Automatic Storage Management library driver when the system starts. The enable option causes the Oracle Automatic Storage Management library driver to load when the system starts:

# /usr/sbin/oracleasm enable

Option: start, stop, restart

Use the start, stop, and restart options to load or unload the Oracle Automatic Storage Management library driver without restarting the system:

# /usr/sbin/oracleasm restart

Option: createdisk

Use the createdisk option to mark a disk device for use with the Oracle Automatic Storage Management library driver and give it a name:

# /usr/sbin/oracleasm createdisk DISKNAME devicename

Option: deletedisk

Use the deletedisk option to unmark a named disk device:

# /usr/sbin/oracleasm deletedisk DISKNAME

Caution: Do not use this command to unmark disks that are being used by an Oracle Automatic Storage Management disk group. You must delete the disk from the Oracle Automatic Storage Management disk group before you unmark it.

Option: querydisk

Use the querydisk option to determine if a disk device or disk name is being used by the Oracle Automatic Storage Management library driver:

# /usr/sbin/oracleasm querydisk {DISKNAME | devicename}

Option: listdisks

Use the listdisks option to list the disk names of marked Oracle Automatic Storage Management library driver disks:

# /usr/sbin/oracleasm listdisks

Option: scandisks

Use the scandisks option to enable cluster nodes to identify which shared disks have been marked as Oracle Automatic Storage Management library driver disks on another node:

# /usr/sbin/oracleasm scandisks


3.3.1.5 Configuring ASMLIB for Multipath Disks

Additional configuration is required to use the Oracle Automatic Storage Management library Driver (ASMLIB) with third party vendor multipath disks.

See Also:

My Oracle Support site for updates to supported storage options:

https://support.oracle.com

3.3.1.5.1 About Using Oracle ASM with Multipath Disks

Oracle ASM requires that each disk is uniquely identified. If the same disk appears under multiple paths, then it causes errors. In a multipath disk configuration, the same disk can appear three times:

  1. The initial path to the disk

  2. The second path to the disk

  3. The multipath disk access point.

For example: If you have one local disk, /dev/sda, and one disk attached with external storage, then your server shows two connections, or paths, to that external storage. The Linux SCSI driver shows both paths. They appear as /dev/sdb and /dev/sdc. The system may access either /dev/sdb or /dev/sdc, but the access is to the same disk.

If you enable multipathing, then you have a multipath disk (for example, /dev/multipatha), which can access both /dev/sdb and /dev/sdc; any I/O to multipatha can use either the sdb or sdc path. If a system is using the /dev/sdb path, and that cable is unplugged, then the system shows an error. But the multipath disk will switch from the /dev/sdb path to the /dev/sdc path.

Most system software is unaware of multipath configurations and can use any of the paths (sdb, sdc, or multipatha). ASMLIB also is unaware of multipath configurations.

By default, ASMLIB recognizes the first disk path that Linux reports to it, but because it imprints an identity on that disk, it recognizes that disk only under one path. Depending on your storage driver, it may recognize the multipath disk, or it may recognize one of the single disk paths.

Instead of relying on the default, you should configure Oracle ASM to recognize the multipath disk.

3.3.1.5.2 Disk Scan Ordering

The ASMLIB configuration file is located in the path /etc/sysconfig/oracleasm. It contains all the startup configuration you specified with the command /etc/init.d/oracleasm configure. That command cannot configure scan ordering.

The configuration file contains many configuration variables. The ORACLEASM_SCANORDER variable specifies disks to be scanned first. The ORACLEASM_SCANEXCLUDE variable specifies the disks that are to be ignored.

Configure values for ORACLEASM_SCANORDER using space-delimited prefix strings. A prefix string is the common string associated with a type of disk. For example, if you use the prefix string sd, then this string matches all SCSI devices, including /dev/sda, /dev/sdb, /dev/sdc, and so on. Note that these are not globs; they do not use wildcards. They are simple prefixes. Also note that the path is not a part of the prefix. For example, the /dev/ path is not part of the prefix for SCSI disks that are in the path /dev/sd*.

For Oracle Linux and Red Hat Enterprise Linux version 5, when scanning, the kernel sees the devices as /dev/mapper/XXX entries. By default, the 2.6 kernel device file naming scheme udev creates the /dev/mapper/XXX names for human readability. Any configuration using ORACLEASM_SCANORDER should use the /dev/mapper/XXX entries.
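
For illustration, both variables appear in /etc/sysconfig/oracleasm as shell-style assignments, as in the following excerpt. The values here are placeholders only; the next two sections show how to set them for multipath configurations:

ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""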

3.3.1.5.3 Configuring Disk Scan Ordering to Select Multipath Disks

To configure ASMLIB to select multipath disks first, complete the following procedure:

  1. Using a text editor, open the ASMLIB configuration file /etc/sysconfig/oracleasm.

  2. Edit the ORACLEASM_SCANORDER variable to provide the prefix path of the multipath disks. For example, if the multipath disks use the prefix multipath (/dev/mapper/multipatha, /dev/mapper/multipathb, and so on), and the multipath disks mount SCSI disks, then provide a prefix path similar to the following:

    ORACLEASM_SCANORDER="multipath sd"
    
  3. Save the file.

When you have completed this procedure, then when ASMLIB scans disks, it first scans all disks with the prefix string multipath, and labels these disks as Oracle ASM disks using the /dev/mapper/multipathX value. It then scans all disks with the prefix string sd. However, because ASMLIB recognizes that these disks have already been labeled with the /dev/mapper/multipath string values, it ignores these disks. After scanning for the prefix strings multipath and sd, Oracle ASM then scans for any other disks that do not match the scan order.

In the example in step 2, the key word multipath is actually the alias for multipath devices configured in /etc/multipath.conf under the multipaths section. For example:

multipaths {
       multipath {
               wwid                    3600508b4000156d700012000000b0000
               alias                   multipath
               ...
       }
       multipath {
               ...
               alias                   mympath
               ...
       }
       ...
}

The default device name is in the format /dev/mapper/mpath* (or a similar path).

3.3.1.5.4 Configuring Disk Order Scan to Exclude Single Path Disks

To configure ASMLIB to exclude particular single path disks, complete the following procedure:

  1. Using a text editor, open the ASMLIB configuration file /etc/sysconfig/oracleasm.

  2. Edit the ORACLEASM_SCANEXCLUDE variable to provide the prefix path of the single path disks. For example, if you want to exclude the single path disks /dev/sdb and /dev/sdc, then provide a prefix path similar to the following:

    ORACLEASM_SCANEXCLUDE="sdb sdc"
    
  3. Save the file.

When you have completed this procedure, then when ASMLIB scans disks, it scans all disks except for the disks with the sdb and sdc prefixes, so that it ignores /dev/sdb and /dev/sdc. It does not ignore other SCSI disks, nor multipath disks. If you have a multipath disk (for example, /dev/multipatha), which accesses both /dev/sdb and /dev/sdc, but you have configured ASMLIB to ignore sdb and sdc, then ASMLIB ignores these disks and instead marks only the multipath disk as an Oracle ASM disk.

3.3.1.6 Configuring Disk Devices Manually for Oracle ASM

By default, the 2.6 kernel device file naming scheme udev dynamically creates device file names when the server is started, and assigns ownership of them to root. If udev applies default settings, then it changes device file names and owners for voting disks or Oracle Cluster Registry partitions, corrupting them when the server is restarted. For example, a voting disk on a device named /dev/sdd owned by the user grid may be on a device named /dev/sdf owned by root after restarting the server. If you use ASMLIB, then you do not need to ensure permissions and device path persistence in udev.

If you do not use ASMLIB, then you must create a custom rules file. When udev is started, it sequentially carries out rules (configuration directives) defined in rules files. These files are in the path /etc/udev/rules.d/. Rules files are read in lexical order. For example, rules in the file 10-wacom.rules are parsed and carried out before rules in the rules file 90-ib.rules.

When specifying the device information in the UDEV rules file, ensure that the OWNER, GROUP and MODE are specified before any other characteristics in the order shown. For example, if you want to include the characteristic ACTION on the UDEV line, then specify ACTION after OWNER, GROUP, and MODE.

Where rules files describe the same devices, on the supported Linux kernel versions, the last file read is the one that is applied.

To configure a permissions file for disk devices, complete the following tasks:

  1. To obtain information about existing block devices, run the command scsi_id (/sbin/scsi_id) on storage devices from one cluster node to obtain their unique device identifiers. When running the scsi_id command with the -s argument, the device path and name passed should be that relative to the sysfs directory /sys (for example, /block/device) when referring to /sys/block/device. For example:

    # /sbin/scsi_id -g -s /block/sdb/sdb1
    360a98000686f6959684a453333524174
     
    # /sbin/scsi_id -g -s /block/sde/sde1
    360a98000686f6959684a453333524179
    

    Record the unique SCSI identifiers of clusterware devices, so you can provide them when required.

    Note:

    The command scsi_id should return the same device identifier value for a given device, regardless of which node the command is run from.
  2. Configure SCSI devices as trusted devices (white listed), by editing the /etc/scsi_id.config file and adding options=-g to the file. For example:

    # cat > /etc/scsi_id.config
    vendor="ATA",options=-p 0x80
    options=-g
    
  3. Using a text editor, create a UDEV rules file for the Oracle ASM devices, setting permissions to 0660 for the installation owner and the group whose members are administrators of the Oracle Grid Infrastructure software. For example, using the installation owner grid and using a role-based group configuration, with the OSASM group asmadmin:

    # vi /etc/udev/rules.d/99-oracle-asmdevices.rules
    
    KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id",
    RESULT=="14f70656e66696c00000000", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sd?2", BUS=="scsi", PROGRAM=="/sbin/scsi_id",
    RESULT=="14f70656e66696c00000001", OWNER="grid", GROUP="asmadmin", MODE="0660"
    KERNEL=="sd?3", BUS=="scsi", PROGRAM=="/sbin/scsi_id",
    RESULT=="14f70656e66696c00000002", OWNER="grid", GROUP="asmadmin", MODE="0660"
    
  4. Copy the rules.d file to all other nodes on the cluster. For example:

    # scp 99-oracle-asmdevices.rules root@node2:/etc/udev/rules.d/99-oracle-asmdevices.rules
    
  5. Load updated block device partition tables on all member nodes of the cluster, using /sbin/partprobe devicename. You must do this as root.

  6. Run the command udevtest (/sbin/udevtest) to test the UDEV rules configuration you have created. The output should indicate that the block devices are available and the rules are applied as expected. For example:

    # udevtest /block/sdb/sdb1
    main: looking at device '/block/sdb/sdb1' from subsystem 'block'
    udev_rules_get_name: add symlink
    'disk/by-id/scsi-360a98000686f6959684a453333524174-part1'
    udev_rules_get_name: add symlink
    'disk/by-path/ip-192.168.1.1:3260-iscsi-iqn.1992-08.com.netapp:sn.887085-part1'
    udev_node_mknod: preserve file '/dev/.tmp-8-17', because it has correct dev_t
    run_program: '/lib/udev/vol_id --export /dev/.tmp-8-17'
    run_program: '/lib/udev/vol_id' returned with status 4
    run_program: '/sbin/scsi_id'
    run_program: '/sbin/scsi_id' (stdout) '360a98000686f6959684a453333524174'
    run_program: '/sbin/scsi_id' returned with status 0
    udev_rules_get_name: rule applied, 'sdb1' becomes 'ocr1'
    udev_device_event: device '/block/sdb/sdb1' validate currently present symlinks
    udev_node_add: creating device node '/dev/ocr1', major = '8', minor = '17', 
    mode = '0640', uid = '0', gid = '500'
    udev_node_add: creating symlink
    '/dev/disk/by-id/scsi-360a98000686f6959684a453333524174-part1' to '../../ocr1'
    udev_node_add: creating symlink
    '/dev/disk/by-path/ip-192.168.1.1:3260-iscsi-iqn.1992-08.com.netapp:sn.84187085
    -part1' to '../../ocr1'
    main: run: 'socket:/org/kernel/udev/monitor'
    main: run: '/lib/udev/udev_run_devd'
    main: run: 'socket:/org/freedesktop/hal/udev_event'
    main: run: '/sbin/pam_console_apply /dev/ocr1
    /dev/disk/by-id/scsi-360a98000686f6959684a453333524174-part1
    /dev/disk/by-path/ip-192.168.1.1:3260-iscsi-iqn.1992-08.com.netapp:sn.84187085-
    part1'
    

    In the example output, note that applying the rules renames OCR device /dev/sdb1 to /dev/ocr1.

  7. Enter the command to restart the UDEV service.

    On Asianux, Oracle Linux 5, and RHEL5, the commands are:

    # /sbin/udevcontrol reload_rules
    # /sbin/start_udev
    

    On SUSE 10 and 11, the command is:

    # /etc/init.d/boot.udev restart
    

3.3.2 Using Disk Groups with Oracle Database Files on Oracle ASM

Review the following sections to configure Oracle Automatic Storage Management (Oracle ASM) storage for Oracle Clusterware and Oracle Database Files:

3.3.2.1 Identifying and Using Existing Oracle Database Disk Groups on Oracle ASM

The following section describes how to identify existing disk groups and determine the free disk space that they contain.

  • Optionally, identify failure groups for the Oracle ASM disk group devices.

    If you intend to use a normal or high redundancy disk group, then you can further protect your database against hardware failure by associating a set of disk devices in a custom failure group. By default, each device comprises its own failure group. However, if two disk devices in a normal redundancy disk group are attached to the same SCSI controller, then the disk group becomes unavailable if the controller fails. The controller in this example is a single point of failure.

    To protect against failures of this type, you could use two SCSI controllers, each with two disks, and define a failure group for the disks attached to each controller. This configuration would enable the disk group to tolerate the failure of one SCSI controller.

    Note:

    If you define custom failure groups, then you must specify a minimum of two failure groups for normal redundancy and three failure groups for high redundancy.
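
The note above mentions that custom failure groups can be defined with SQL commands. The following is a minimal sketch of a CREATE DISKGROUP statement run from SQL*Plus connected to the Oracle ASM instance as SYSASM, mirroring the two-controller example above. The disk group name, ASMLIB disk labels, and failure group names are hypothetical:

$ sqlplus / as sysasm

SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
       FAILGROUP controller1 DISK 'ORCL:DISK1', 'ORCL:DISK2'
       FAILGROUP controller2 DISK 'ORCL:DISK3', 'ORCL:DISK4'
       ATTRIBUTE 'au_size' = '1M', 'compatible.asm' = '11.2';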

3.3.2.2 Creating Disk Groups for Oracle Database Data Files

If you are sure that a suitable disk group does not exist on the system, then install or identify appropriate disk devices to add to a new disk group. Use the following guidelines when identifying appropriate disk devices:

  • All of the devices in an Oracle ASM disk group should be the same size and have the same performance characteristics.

  • Do not specify multiple partitions on a single physical disk as a disk group device. Oracle ASM expects each disk group device to be on a separate physical disk.

  • Although you can specify a logical volume as a device in an Oracle ASM disk group, Oracle does not recommend their use because it adds a layer of complexity that is unnecessary with Oracle ASM. In addition, if you decide to use a logical volume with Oracle ASM and Oracle RAC, then Oracle RAC requires a cluster logical volume manager.

3.3.3 Configuring Oracle Automatic Storage Management Cluster File System

Oracle ACFS is installed as part of an Oracle Grid Infrastructure installation (Oracle Clusterware and Oracle Automatic Storage Management) for 11g release 2 (11.2).

You can also create a General Purpose File System configuration of ACFS using ASMCA.

See Also:

Section 3.1.3, "General Information About Oracle ACFS" for supported deployment options

To configure Oracle ACFS for an Oracle Database home for an Oracle RAC database:

  1. Install Oracle Grid Infrastructure for a cluster (Oracle Clusterware and Oracle Automatic Storage Management)

  2. Change directory to the Oracle Grid Infrastructure home. For example:

    $ cd /u01/app/11.2.0/grid
    
  3. Ensure that the Oracle Grid Infrastructure installation owner has read and write permissions on the storage mountpoint you want to use (sample commands for setting ownership appear after this procedure). For example, if you want to use the mountpoint /u02/acfsmounts/:

    $ ls -l /u02/acfsmounts
    
  4. Start Oracle ASM Configuration Assistant as the grid installation owner. For example:

    ./asmca
    
  5. The Configure ASM: ASM Disk Groups page shows you the Oracle ASM disk group you created during installation. Click the ASM Cluster File Systems tab.

  6. On the ASM Cluster File Systems page, right-click the Data disk, then select Create ACFS for Database Home.

  7. In the Create ACFS Hosted Database Home window, enter the following information:

    • Database Home ADVM Volume Device Name: Enter the name of the database home. The name must be unique in your enterprise. For example: dbase_01

    • Database Home Mountpoint: Enter the directory path for the mount point. For example: /u02/acfsmounts/dbase_01

      Make a note of this mount point for future reference.

    • Database Home Size (GB): Enter in gigabytes the size you want the database home to be.

    • Database Home Owner Name: Enter the name of the Oracle Database installation owner you plan to use to install the database. For example: oracle1

    • Database Home Owner Group: Enter the OSDBA group whose members you plan to provide when you install the database. Members of this group are given operating system authentication for the SYSDBA privileges on the database. For example: dba1

    • Click OK when you have completed your entries.

  8. Run the script generated by Oracle ASM Configuration Assistant as a privileged user (root). On an Oracle Clusterware environment, the script registers the ACFS as a resource managed by Oracle Clusterware. Registering ACFS as a resource helps Oracle Clusterware to mount the ACFS automatically in proper order when ACFS is used for an Oracle RAC database Home.

  9. During Oracle RAC installation, ensure that you or the DBA who installs Oracle RAC selects for the Oracle home the mount point you provided in the Database Home Mountpoint field (in the preceding example, /u02/acfsmounts/dbase_01).
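
If the listing in step 3 shows that the Grid Infrastructure installation owner does not have read and write permissions on the mount point, commands similar to the following, run as root, can set them. The owner grid and group oinstall are examples only; substitute your installation owner and Oracle Inventory group:

# chown -R grid:oinstall /u02/acfsmounts
# chmod 775 /u02/acfsmounts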

See Also:

Oracle Automatic Storage Management Administrator's Guide for more information about configuring and managing your storage with Oracle ACFS

3.3.4 Upgrading Existing Oracle ASM Instances

If you have an Oracle ASM installation from a prior release installed on your server, or in an existing Oracle Clusterware installation, then you can use Oracle Automatic Storage Management Configuration Assistant (ASMCA, located in the path Grid_home/bin) to upgrade the existing Oracle ASM instance to 11g release 2 (11.2), and subsequently configure failure groups, Oracle ASM volumes and Oracle Automatic Storage Management Cluster File System (Oracle ACFS).

Note:

You must first shut down all database instances and applications on the node with the existing Oracle ASM instance before upgrading it.

During installation, if you are upgrading from an Oracle ASM release prior to 11.2, you chose to use Oracle ASM, and ASMCA detects a prior Oracle ASM version installed in another Oracle ASM home, then after installing the Oracle ASM 11g release 2 (11.2) binaries you can start ASMCA to upgrade the existing Oracle ASM instance. You can then choose to configure an Oracle ACFS deployment by creating Oracle ASM volumes and using the upgraded Oracle ASM to create the Oracle ACFS.

If you are upgrading from Oracle ASM 11g release 2 (11.2.0.1) or later, then Oracle ASM is always upgraded with Oracle Grid Infrastructure as part of the rolling upgrade, and ASMCA is started by the root scripts during upgrade. ASMCA cannot perform a separate upgrade of Oracle ASM from release 11.2.0.1 to 11.2.0.2.

On an existing Oracle Clusterware or Oracle RAC installation, if the prior version of Oracle ASM instances on all nodes is 11g release 1, then you are provided with the option to perform a rolling upgrade of Oracle ASM instances. If the prior version of Oracle ASM instances on an Oracle RAC installation are from a release prior to 11g release 1, then rolling upgrades cannot be performed. Oracle ASM on all nodes will be upgraded to 11g release 2 (11.2).
