Monday, April 15, 2013

Oracle 11g R2 Clusterware installation

Oracle 11g R2 RAC installation is a two-phase process, as shown below:

1. Oracle Grid Infrastructure installation
2. Oracle RDBMS (database software) installation

Before going into the installation, the points below are important to remember and understand, as per the MOS note:

  • 11gR2 Clusterware needs to be up and running before 11gR2 Real Application Clusters database is installed
  • The GRID home consists of the Oracle Clusterware and ASM.  ASM should not be in a separate home.
  • The 11gR2 Clusterware can be installed in "Standalone" mode for ASM and/or "Oracle Restart" single node support. This clusterware is a subset of the full clusterware described in this document.
  • The 11gR2 Clusterware can be run by itself or on top of vendor clusterware.  See the certification matrix for certified combinations. Ref: Note: 184875.1 "How To Check The Certification Matrix for Real Application Clusters"
  • The GRID Home and the RAC/DB Home must be installed in different locations.
  • The 11gR2 Clusterware requires shared OCR and voting files.  These can be stored on ASM or a cluster filesystem.
  • The OCR is backed up automatically every 4 hours to <GRID_HOME>/cdata/<scan name>/ and can be restored via ocrconfig. 
  • The voting file is backed up into the OCR at every configuration change and can be restored via crsctl. 
  • The 11gR2 Clusterware requires at least one private network for inter-node communication and at least one public network for external communication.  Several virtual IPs need to be registered with DNS.  This includes the node VIPs (one per node), SCAN VIPs (three).  This can be done manually via your network administrator or optionally you could configure the "GNS" (Grid Naming Service) in the Oracle clusterware to handle this for you (note that GNS requires its own VIP).  
  • A SCAN (Single Client Access Name) is provided for clients to connect to.  For more information on SCAN, see Note: 887522.1
  • The root.sh script at the end of the clusterware installation starts the clusterware stack.  For information on troubleshooting root.sh issues see Note: 1053970.1
  • Only one set of clusterware daemons can be running per node. 
  • On Unix, the clusterware stack is started via the init.ohasd script referenced in /etc/inittab with "respawn".
  • A node can be evicted (rebooted) if a node is deemed to be unhealthy.  This is done so that the health of the entire cluster can be maintained.  For more information on this see: Note: 1050693.1 "Troubleshooting 11.2 Clusterware Node Evictions (Reboots)"
  • Either have vendor time synchronization software (like NTP) fully configured and running, or have it not configured at all and let CTSS handle time synchronization.  See Note: 1054006.1 for more information.
  • If installing DB homes for a lower version, you will need to pin the nodes in the clusterware or you will see ORA-29702 errors.  See Note: 946332.1 and Note: 948456.1 for more information.
  • The clusterware stack can be started by either booting the machine, running "crsctl start crs" to start the clusterware stack on the local node, or running "crsctl start cluster -all" to start the clusterware on all nodes (a quick command reference follows this list).  Note that crsctl is in the <GRID_HOME>/bin directory and that "crsctl start cluster" will only work if ohasd is already running.
  • The clusterware stack can be stopped by either shutting down the machine, running "crsctl stop crs" to stop the clusterware stack on the local node, or running "crsctl stop cluster -all" to stop the clusterware on all nodes.  Note that crsctl is in the <GRID_HOME>/bin directory.
  • Killing clusterware daemons is not supported.
  • The instance is now part of the .db resource in "crsctl stat res -t" output; there is no separate .inst resource for an 11gR2 instance.
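As a quick reference for some of the commands mentioned in the list above, here is a small sketch. The utilities themselves (crsctl, ocrconfig) are standard 11gR2 clusterware tools; <GRID_HOME> stands for whatever grid home you install into.

# check the state of the clusterware stack on the local node
<GRID_HOME>/bin/crsctl check crs

# start/stop the clusterware stack on the local node (run as root)
<GRID_HOME>/bin/crsctl start crs
<GRID_HOME>/bin/crsctl stop crs

# start/stop the clusterware on all nodes (ohasd must already be running)
<GRID_HOME>/bin/crsctl start cluster -all
<GRID_HOME>/bin/crsctl stop cluster -all

# list the automatic OCR backups taken every 4 hours
<GRID_HOME>/bin/ocrconfig -showbackup

# list the voting files currently in use
<GRID_HOME>/bin/crsctl query css votedisk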

So here is the installation process.

1. Run the installation command from either the software staging area or the CD:

./runInstaller
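For example, assuming the grid software has been staged (unzipped) under /u01/stage/grid and you are logged in on the first node as the Grid Infrastructure software owner (both the path and the owner are just example values), the installer can be launched like this:

# as the grid software owner, with an X display available
export DISPLAY=:0.0        # or point DISPLAY at your X server / VNC session
cd /u01/stage/grid
./runInstaller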

Select the "Install and grid option for a cluster" and click Next


Grid Infrastructure can be installed by choosing either the Typical or the Advanced option.

Typical is the simplest installation type, while the Advanced option lets us configure ASM, storage, and so on. In our case, we are picking the 'Typical' installation type. Hit Next to proceed.

Now, the screen below is an important one! We'll need to perform the following tasks here:

a. Add node details
b. Set up the public and private network interfaces
c. Specify the SCAN name
d. Test SSH connectivity


As part of the installation, local node details, such as the public and VIP names, are displayed. You can add more nodes by hitting the Add button and entering the details of each node.




 THE SSH CONNECTIVITY:


The SSH connectivity configuration between the node members is an important part of the cluster installation. Therefore, if you haven't done that already, then now is your chance :)

You need to enter the Grid Infrastructure software owner details and click the Test button to test and build the connectivity. To assign the right interface type to the public and private interfaces, click on the Identify Network Interfaces button. In the previous release, all the network interfaces were wrongly assigned the Private type; with 11g R2, however, you can see that the public and private interfaces have been assigned the correct interface types. You can also choose the right type for an individual interface name using the drop-down list in the Identify Network Interfaces dialog box. Click on OK to close the Identify Network Interfaces window, and then click on Next to continue, as shown in the following screenshot:
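If you prefer to set up or verify the SSH user equivalence manually before letting OUI test it, a minimal sketch is shown below. The node names rac1/rac2 and the owner name grid are only examples; use your own hostnames and the actual Grid Infrastructure software owner.

# as the grid software owner on rac1
ssh-keygen -t rsa              # accept the defaults, empty passphrase
ssh-copy-id grid@rac2          # repeat for every other cluster node

# verify passwordless SSH in both directions (no password prompt expected)
ssh rac2 date
ssh rac1 date                  # run this one from rac2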


The next (Install Locations) screen requires you to decide where you are going to configure the OCR and voting disks: on a shared disk or on ASM. Unlike the previous versions, with 11g R2 you can configure the critical components of a cluster, that is, the OCR and voting disk files, on ASM as well. If the ASM option is selected over shared storage, you need to enter and confirm the SYSASM password, as shown in the following screenshot:

Also, on the Install Locations screen, enter the correct values for the Oracle base and software locations.




Choose one of the options from the drop-down list against Cluster Registry Storage Type: File System or Automatic Storage Management. If the ASM option is selected, then provide a password for the ASM superuser (SYSASM) and assign the relevant OS group. You can bypass the password warning message here if the password doesn't meet the Oracle recommendations. Click on Next to continue. After choosing ASM as the storage option for the OCR and the voting disk, the subsequent screen requires your interaction to create an ASM disk group in which to place the OCR and voting disks.



7.  Click on the Change Disk Discovery Path button to define the storage path used to discover the storage (disks) for ASM, as shown in the following screenshot:




For example, enter a string value such as /dev/sd*1 in the Change Disk
Discovery Path dialog box and click on OK.
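Before pointing the discovery string at the devices, it is worth confirming that the candidate partitions actually exist and are owned by the grid software owner, otherwise OUI will not list them. The device names, owner, and group below are only examples:

# list the candidate partitions matched by the discovery string /dev/sd*1
ls -l /dev/sd*1

# the devices should be owned by the grid owner and its ASM group, for example:
# brw-rw---- 1 grid asmadmin 8, 17 Apr 15 10:00 /dev/sdb1
# if not, fix ownership and permissions (usually done via udev rules so the
# settings survive a reboot)
chown grid:asmadmin /dev/sdb1
chmod 660 /dev/sdb1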


8.  Upon discovering the ASM disks, select the required disks from the list, enter the disk group name, and choose External for the Redundancy option. Click on Next to continue. On the Create Inventory screen, accept the default inventory location and click on Next to proceed.

9.  OUI then initiates the prerequisite verifications on the Prerequisite Checks screen. Provided no problems are found, you can progress to the Summary screen. However, if any concerns are raised during the prerequisite checks, resolve them before you continue with the installation process.
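The same prerequisite checks can also be run manually ahead of time with the Cluster Verification Utility that ships in the grid software staging area. A small sketch, assuming the software is unzipped in the current directory and the node names are rac1 and rac2 (both assumptions):

# from the unzipped grid software directory, as the grid software owner
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
# add -fixup to have cluvfy generate fixup scripts for failed checks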

10.  Verify the summary details given on the Summary screen and click on Next to begin the installation process.

11.  The actual Grid Infrastructure installation process will then kick off on the
Setup screen.


12.  As the progress approaches 100%, a pop-up window, Execute Configuration scripts, will be displayed with instructions to run the orainstRoot.sh and root.sh scripts sequentially on all cluster node members, as shown in the following screenshot:


Therefore, open a new terminal, log in as the root user, and run the orainstRoot.sh script on the local node. After the script on the local node completes successfully, move on to another node and execute the script again. You need to execute the same script on the rest of the nodes of the cluster.

13.  After executing the orainstRoot.sh script successfully on all nodes, return to the first node to execute the root.sh script as the root user. After it completes, move to the second node and execute the same script. After executing the script successfully on each node of the cluster, click on the OK button to close the dialog box.
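In other words, the two scripts are run as root, one node at a time, roughly as follows. The inventory and grid home paths shown are only examples; use the exact paths printed in the Execute Configuration scripts pop-up window.

# on the first node, as root
/u01/app/oraInventory/orainstRoot.sh
# ...then run orainstRoot.sh on every other node in turn

# back on the first node, as root
/u01/app/11.2.0/grid/root.sh
# wait for it to complete, then run root.sh on the next node, and so on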

14.  If, for any reason, the root.sh script does not run successfully, then refer to the deconfigure/reconfigure section at the end of the chapter to learn how to resume from a failed installation. After the scripts have run successfully on each node of the cluster, the installer continues with further configuration of other options, such as ASM creation, listener creation, the private interconnect, and the Cluster Verification Utility, on the Setup screen. Once everything has completed successfully, click on Finish to end the Grid Infrastructure installation process.
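If root.sh fails on a node and you need to clean it up before re-running the script, the deconfigure step for 11gR2 is typically done with the rootcrs.pl script from the grid home (run as root; the grid home path below is only an example). A few quick checks at the end confirm that the stack is healthy:

# deconfigure a failed node before re-running root.sh (as root)
cd /u01/app/11.2.0/grid/crs/install
perl rootcrs.pl -deconfig -force

# post-installation sanity checks
/u01/app/11.2.0/grid/bin/crsctl check crs
/u01/app/11.2.0/grid/bin/crsctl stat res -t
/u01/app/11.2.0/grid/bin/olsnodes -n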



Related posts:

BASH-DBA: Oracle RAC components
BASH-DBA: Finding details of Oracle 11g RAC interconnect
BASH-DBA: Oracle 11g Clusterware Installation Requirements
BASH-DBA: Oracle 11g R1 Clusterware installation
BASH-DBA: Oracle 11g R1 Clusterware post-installation checks
BASH-DBA: How to Start/Stop Oracle Clusterware


Source: MOS, 11g R1/R2 RAC Essentials
