A step-by-step procedure for installing Oracle 11g (11gR2, 11.2.0.2) Real Application Clusters (RAC) on Linux (OEL 5.5) without SCAN, DNS, or NTP setup, using GNS and CTSS instead, is described here. You can pretty much run this installation on a desktop PC at home; I have used Oracle VM Linux guests in this exercise. As always, do let me know any questions or comments you might have.
- Introduction
- Pre-install tasks
- Install grid infrastructure
- Install database software and create database
- Oracle Enterprise Manager (dbconsole)
Introduction
Oracle 11gR2 has brought in a few new features. Here are a couple of interesting new features and notes about the 11g grid infrastructure:
- Patch set installation: Starting with 11.2.0.2, direct installation of the most recent patch set is supported, meaning you don't have to install 11.2.0.1 first and then apply a patch set to get to 11.2.0.2. You can perform the 11.2.0.2 installation directly. Download the installation media from patch 10098816 in My Oracle Support.
- Out-of-place upgrades for the CRS/Grid home are supported. CRS/Grid goes into a separate, new Oracle home, instead of the same CRS home as before.
- Oracle Automatic Storage Management Cluster File System (Oracle ACFS) is introduced. We know that ASMLib is integrated with the OS kernel; ACFS goes one step further, exposing ASM storage as an OS cluster file system. Previously, regular file systems were needed for Oracle binaries, which meant another LVM to manage those file systems. With ACFS, you do not need any other LVM to manage Oracle storage (be it binaries, data files, etc.).
- ASM and CRS are delivered into the same home, now called Grid Home
- Single Client Access Name (SCAN) is introduced in Oracle 11gR2 Grid Infrastructure. SCAN provides a single name for all Oracle clients to connect to the cluster database, irrespective of the number of nodes and node locations. Until now, we had to keep adding multiple address records to every client's tnsnames.ora whenever a node was added to or deleted from the cluster.
- Cluster Time Synchronization Service (CTSS) is introduced to take care of time synchronization between cluster nodes. Clusterware (CRS or Grid) used to depend on external services for time sync, and this has been one of the key causes of node evictions and cluster instability. CTSS is configured in active mode during the grid installation if NTP is not configured, which means /etc/ntp.conf should not exist and ntpd should not be running. With CTSS, you have one less thing to worry about.
- Grid Naming Service (GNS) is introduced to manage DNS lookups for the cluster node names and SCAN, and to dynamically assign IP addresses to Oracle cluster nodes using Dynamic Host Configuration Protocol (DHCP). In short, GNS uses both DNS and DHCP to better address the network requirements of RAC. GNS provides a DNS service for a subdomain (for example, grid.freeoraclehelp.com), and the regular DNS server delegates name resolution for that subdomain to the GNS server. Of course, GNS is optional; without it, SCAN needs to be configured in your organization's existing DNS servers. SCAN needs three address (A) records in DNS, which is pretty simple for a DNS administrator to set up. No PTR records are required for these IP addresses.
- However, if you're installing RAC at home, where you don't have a standard DNS server to set up SCAN, you can spin up a DNS service on one of the RAC nodes and set up SCAN quite easily. Check my other blog post (Oracle 11gR2 RAC SCAN - DNS (bind) Configuration on Linux) to set up SCAN.
- GNS assigns IP addresses to the three SCAN VIPs, the node VIPs, and even the private interconnect interfaces. We only need to assign static IPs to the regular host names (as you normally would for any other Linux server).
- Multicasting is introduced in 11gR2 for private interconnect traffic. Check that your network supports multicasting before the installation. Refer to MOS note 1212703.1 for more information.
- I have received a couple of requests for a RAC installation at home without the SCAN setup. So, in this post I utilize both CTSS and GNS to install the grid infrastructure without a SCAN or DNS server setup.
- This article explains a fresh 11gR2 installation from scratch. If you're not installing new and are planning to upgrade instead, refer to Oracle 11gR2 RAC Upgrade from 10gR2 RAC on Linux.
- I have not set up SSH either. I wanted to have the Installer (OUI) set this up for both the grid and oracle users.
- Two users are used in this install: the grid user for the Grid infrastructure and the oracle user for the Oracle database software and databases. Role separation is another key feature in 11gR2; it comes in handy in environments where storage and databases are managed by different teams.
- OCR & voting files should be stored in ASM disk groups from 11gR2 onwards (for a new installation). Normal redundancy means 3 disks and high redundancy means 5 disks.
OCR & Voting ASM storage requirements:
| Redundancy | Min # of disks | OCR only | Voting only | Both OCR & voting (min space) |
| ---------- | -------------- | -------- | ----------- | ----------------------------- |
| External   | 1              | 300MB    | 300MB       | 600MB                         |
| Normal     | 3              | 600MB    | 900MB       | 2GB                           |
| High       | 5              | 900MB    | 1.5GB       | 2.4GB                         |
These are the minimum storage requirements, so better leave some room here. Note that a regular ASM disk group for data files needs only two disks for normal redundancy and a minimum of three for high redundancy; the ASM disk group holding OCR and voting files is special, with the higher disk counts shown above.
The Grid infrastructure home should NOT be under the grid base, while the Oracle database home should be under the oracle base, so plan the homes accordingly (see the sketch below). Also, /etc/resolv.conf should have the GNS VIP listed as a nameserver if GNS is going to be used.
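For illustration, here is a minimal OFA-style layout that satisfies these rules; the /u01 paths are my own choice for this sandbox, not something mandated by the installer:

mkdir -p /u01/app/11.2.0/grid    # grid home, deliberately outside the grid base
mkdir -p /u01/app/grid           # grid base (ORACLE_BASE for the grid user)
mkdir -p /u01/app/oracle         # oracle base; the database home goes under it
chown -R grid:oinstall /u01
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01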
Pre-install tasks
All of the following steps need to be executed on all cluster nodes (unless specified explicitly).
[root@rac4 ~]# groupadd -g 500 oinstall
[root@rac4 ~]# groupadd -g 501 asmadmin
[root@rac4 ~]# groupadd -g 502 asmdba
[root@rac4 ~]# groupadd -g 503 asmoper
[root@rac4 ~]# groupadd -g 504 oper
[root@rac4 ~]# groupadd -g 505 dba
[root@rac4 ~]# useradd -u 200 -g oinstall -G dba,oper,asmdba oracle
[root@rac4 ~]# useradd -u 201 -g oinstall -G asmdba,asmadmin,asmoper grid
[root@rac4 ~]# passwd oracle
[root@rac4 ~]# passwd grid
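As an optional sanity check, you can verify the users and their secondary groups on each node:

id oracle    # should show oinstall plus dba, oper, asmdba
id grid      # should show oinstall plus asmdba, asmadmin, asmoper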
You may want to download the ASMLib RPMs from http://www.oracle.com/technetwork/topics/linux/asmlib/index-101839.html
[root@rac3 ~]# yum install oracleasm-support oracleasmlib oracleasm-`uname -r`
[root@rac3 ~]# rpm -qa |grep asm
oracleasmlib-2.0.4-1.el5
oracleasm-2.6.18-194.el5xen-2.0.5-1.el5
oracleasm-support-2.1.3-1.el5
[root@rac3 ~]#
Once ASMLib is installed, configure ASMLib:
[root@rac4 ~]# /usr/sbin/oracleasm configure -i -e -u grid -g asmadmin -o "xvd"
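Here, -u and -g set the owner and group of the ASM devices to grid:asmadmin, and -o puts the xvd* devices first in the disk scan order. Running the same tool with no arguments simply prints the current settings, which is an easy way to confirm the change took effect:

/usr/sbin/oracleasm configure    # displays the current ASMLib driver configuration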
Create the ASM disk labels on one node only, because these disks are shared. All other pre-install tasks are to be run on all RAC nodes.
[root@rac4 ~]# /etc/init.d/oracleasm enable
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@rac4 ~]# /usr/sbin/oracleasm-discover 'ORCL:*'
Using ASMLib from /opt/oracle/extapi/32/asm/orcl/1/libasm.so
[ASM Library - Generic Linux, version 2.0.4 (KABI_V2)]
[root@rac4 ~]# /usr/sbin/oracleasm createdisk OCRV /dev/xvdc1
Writing disk header: done
Instantiating disk: done
[root@rac4 ~]# /usr/sbin/oracleasm createdisk DATA1 /dev/xvdd1
Writing disk header: done
Instantiating disk: done
[root@rac4 ~]# /usr/sbin/oracleasm createdisk DATA2 /dev/xvde1
Writing disk header: done
Instantiating disk: done
[root@rac4 ~]# /usr/sbin/oracleasm createdisk DATA3 /dev/xvdf1
Writing disk header: done
Instantiating disk: done
[root@rac4 ~]# /usr/sbin/oracleasm-discover 'ORCL:*'
Using ASMLib from /opt/oracle/extapi/32/asm/orcl/1/libasm.so
[ASM Library - Generic Linux, version 2.0.4 (KABI_V2)]
Discovered disk: ORCL:DATA1 [4192902 blocks (2146765824 bytes), maxio 64]
Discovered disk: ORCL:DATA2 [4192902 blocks (2146765824 bytes), maxio 64]
Discovered disk: ORCL:DATA3 [4192902 blocks (2146765824 bytes), maxio 64]
Discovered disk: ORCL:OCRV [2088387 blocks (1069254144 bytes), maxio 64]
[root@rac4 ~]#
The ASM disks should now be visible on all other RAC nodes:
[root@rac3 ~]# /usr/sbin/oracleasm-discover 'ORCL:*'
Using ASMLib from /opt/oracle/extapi/32/asm/orcl/1/libasm.so
[ASM Library - Generic Linux, version 2.0.4 (KABI_V2)]
[root@rac3 ~]# /etc/init.d/oracleasm disable
Writing Oracle ASM library driver configuration: done
Dropping Oracle ASMLib disks: [ OK ]
Shutting down the Oracle ASMLib driver: [ OK ]
[root@rac3 ~]# /etc/init.d/oracleasm enable
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@rac3 ~]# /usr/sbin/oracleasm-discover 'ORCL:*'
Using ASMLib from /opt/oracle/extapi/32/asm/orcl/1/libasm.so
[ASM Library - Generic Linux, version 2.0.4 (KABI_V2)]
Discovered disk: ORCL:DATA1 [4192902 blocks (2146765824 bytes), maxio 64]
Discovered disk: ORCL:DATA2 [4192902 blocks (2146765824 bytes), maxio 64]
Discovered disk: ORCL:DATA3 [4192902 blocks (2146765824 bytes), maxio 64]
Discovered disk: ORCL:OCRV [2088387 blocks (1069254144 bytes), maxio 64]
[root@rac3 ~]#
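As a side note, instead of cycling the service with disable/enable, a rescan alone is usually enough to pick up disks that were labeled on another node:

/usr/sbin/oracleasm scandisks    # re-scan devices for ASMLib labels
/usr/sbin/oracleasm listdisks    # should list DATA1, DATA2, DATA3 and OCRV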
Here is my /etc/hosts, with only the static host names (GNS handles the VIPs and SCAN):
[root@rac3 ~]# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.1.30 rac3.freeoraclehelp.com rac3
192.168.1.40 rac4.freeoraclehelp.com rac4
Install the following prerequisite Linux packages:
[root@rac3 ~]# yum install -y binutils compat-libstdc++-33 compat-libstdc++ elfutils-libelf elfutils-libelf-devel expat gcc gcc-c++ glibc glibc-common glibc-devel glibc-headers libaio libaio-devel libgcc libstdc++ libstdc++-devel make pdksh sysstat unixODBC unixODBC-devel
Also, install the cvuqdisk-1.0.9-1 rpm, available in the rpm directory of the installation media.
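A minimal sketch for the cvuqdisk install, assuming the grid media is unpacked under /dumps/Database/11gR2/grid (the same location used for runcluvfy.sh below); the rpm reads the owning group from the CVUQDISK_GRP variable:

export CVUQDISK_GRP=oinstall
rpm -ivh /dumps/Database/11gR2/grid/rpm/cvuqdisk-1.0.9-1.rpm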
Verify that multicast is supported in your network using the mcasttest script available in MOS note 1212703.1.
[oracle@rac3 mcasttest]$ perl mcasttest.pl -n rac3,rac4 -i eth1
########### Setup for node rac3 ##########
Checking node access 'rac3'
Checking node login 'rac3'
Checking/Creating Directory /tmp/mcasttest for binary on node 'rac3'
Distributing mcast2 binary to node 'rac3'
########### Setup for node rac4 ##########
Checking node access 'rac4'
Checking node login 'rac4'
Checking/Creating Directory /tmp/mcasttest for binary on node 'rac4'
Distributing mcast2 binary to node 'rac4'
########### testing Multicast on all nodes ##########
Test for Multicast address 230.0.1.0
Sep 17 18:42:39 | Multicast Succeeded for eth1 using address 230.0.1.0:42000
Test for Multicast address 224.0.0.251
Sep 17 18:42:44 | Multicast Succeeded for eth1 using address 224.0.0.251:42001
[oracle@rac3 mcasttest]$
Change the kernel parameters as per the Oracle documentation and make sure no parameter is duplicated. Here is my sysctl.conf:
[root@rac3 ~]# tail /etc/sysctl.conf
kernel.shmall = 2097152
kernel.shmmax = 2147573760
kernel.shmmni = 4096
kernel.sem = 256 32000 100 142
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 10000 65000
kernel.msgmni = 2878
kernel.msgmax = 8192
kernel.msgmnb = 65535
xen.independent_wallclock=1
fs.aio-max-nr = 1048576
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
[root@rac3 ~]# sysctl -p
Update the ulimit settings for the oracle and grid users by adding the following to /etc/security/limits.conf:
cat >> /etc/security/limits.conf <<EOF
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF
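The standard pre-install guides also have you make sure PAM applies these limits at login; a sketch of the usual addition (verify it against your existing /etc/pam.d/login first):

cat >> /etc/pam.d/login <<EOF
session required pam_limits.so
EOF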
Run the cluvfy script from the grid directory and address all the reported problems:
[root@rac3 ~]# /dumps/Database/11gR2/grid/runcluvfy.sh stage -pre crsinst -n rac3,rac4 -verbose
The NTP configuration has to be removed so that CTSS can take over time synchronization:
[root@rac3 ~]# mv /etc/ntp.conf /etc/ntp.conf.orig
[root@rac3 ~]# chkconfig ntpd off
[root@rac3 ~]# service ntpd stop
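If cluvfy still complains about NTP after this, also check for a leftover pid file; a stale /var/run/ntpd.pid can make the daemon look active (path assumed from the default OEL ntpd setup):

rm -f /var/run/ntpd.pid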
Install grid infrastructure
/etc/resolv.conf did not have GNS as the name server. After correcting the nameserver to the GNS VIP, cluster verification succeeded.
Edit /etc/resolv.conf and replace
search freeoraclehelp.com
nameserver 192.168.1.1
with
search freeoraclehelp.com
nameserver 192.168.1.31
[root@rac4 ~]# nslookup scan.grid.freeoraclehelp.com
Server: 192.168.1.31
Address: 192.168.1.31#53
Name: scan.grid.freeoraclehelp.com
Address: 192.168.1.101
Name: scan.grid.freeoraclehelp.com
Address: 192.168.1.102
Name: scan.grid.freeoraclehelp.com
Address: 192.168.1.103
[root@rac4 ~]#
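To confirm that GNS, SCAN, and CTSS all came up the way we intended, a few quick checks along these lines help (run as the grid user from the grid home; output not shown):

crsctl check ctss    # should report CTSS in active mode, since NTP was removed
srvctl config gns    # GNS VIP and the delegated subdomain
srvctl config scan   # the three SCAN VIPs obtained through GNS/DHCP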
Install database software and create database
I did not run ASMCA or create disk groups after the grid installation, so at this point there is no disk group for database file storage. I created the ASM disk group for the database manually, on the first node:
SQL> CREATE DISKGROUP DATA_DG EXTERNAL REDUNDANCY DISK
  2  'ORCL:DATA1',
  3  'ORCL:DATA2',
  4  'ORCL:DATA3' ;

Diskgroup created.

SQL> select group_number,name,state,type,total_mb,free_mb,usable_file_mb from v$asm_diskgroup;

1 OCRV_GDG MOUNTED EXTERN 1019 623 623
2 DATA_DG MOUNTED EXTERN 6141 6087 6087

SQL>
Then, mount this disk group on the second RAC node:
SQL> select group_number,name,state,type,total_mb,free_mb,usable_file_mb from v$asm_diskgroup;

1 OCRV_GDG MOUNTED EXTERN 1019 623 623
0 DATA_DG DISMOUNTED 0 0 0

SQL> alter diskgroup DATA_DG mount ;

Diskgroup altered.

SQL> select group_number,name,state,type,total_mb,free_mb,usable_file_mb from v$asm_diskgroup;

1 OCRV_GDG MOUNTED EXTERN 1019 623 623
2 DATA_DG MOUNTED EXTERN 6141 6044 6044

SQL> show parameter asm_diskgroups

asm_diskgroups string DATA_DG

SQL>
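As the show parameter output above confirms, DATA_DG was added to asm_diskgroups automatically (the ASM instances here run from an spfile). If it had not been, a statement along these lines would register the disk group so that it mounts again after a restart:

SQL> alter system set asm_diskgroups='DATA_DG' sid='*';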
Once the disk group is created and mounted on all RAC nodes, try the refresh button here; you should see the newly created disk group.
Oracle Enterprise Manager (dbconsole)
You may want to check a few things out in OEM at https://rac3.freeoraclehelp.com:1158/em and log in as the sys user (as sysdba).
To connect to the ASM instance, you need to log in to ASM separately.
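If the console page does not come up, checking the dbconsole status is a reasonable first step. Note that in 11.2, emctl requires ORACLE_UNQNAME to be set to the database unique name (placeholder below):

export ORACLE_UNQNAME=<db_unique_name>
$ORACLE_HOME/bin/emctl status dbconsole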