This article describes an Oracle 10g Release 2 (10.2.0.1) RAC installation on Linux (OEL 4.8):
- Introduction
- Install Oracle VM Linux Guests
- Download the software
- Perform pre-install tasks
- Install Oracle 10gR2 CRS
- Install Oracle 10gR2 ASM
- Install Oracle 10gR2 Database Software
- Create Oracle 10gR2 Database
Oracle Real Application Clusters (RAC) clusters database instances across multiple servers that share the same storage, providing high availability, scalability, and manageability. Starting with Oracle 10g, Oracle supplies its own cluster software (CRS) and Oracle Automatic Storage Management (ASM). Oracle 8i Parallel Server and 9i RAC required customers to buy and maintain third-party vendor cluster software; Oracle 10g makes running clustered databases easier because customers no longer have to go to multiple vendors to get things done.
In this article, Oracle 10gR2 RAC is set up on OEL 4.8 Oracle VM guests. Oracle VM is 2.2.2, 32-bit. I did not install Oracle VM Manager; instead I use commands to create and manage the guests. I use raw devices for cluster storage, not ASMLib; I will convert these raw devices into ASMLib disks in another article. I selected "Install software only" for the database installation; you may select "Create database" if you wish.
CRS Home: /oracle/product/10.2.0/crs
ASM Home: /oracle/product/10.2.0/asm
Database Home: /oracle/product/10.2.0/rdbms
OCR Disks
/dev/raw/raw1 /dev/xvdc
/dev/raw/raw2 /dev/xvdd
Voting Disks
/dev/raw/raw3 /dev/xvde
/dev/raw/raw4 /dev/xvdf
/dev/raw/raw5 /dev/xvdg
Data Disk:
/dev/raw/raw6 /dev/xvdh
Download the OEL4.8 DVD ISO file into OVM, mount, and share it:
# mkdir -p /VCD
# mount -t iso9660 -o ro,loop /dumps/OS/Enterprise-R4-U8-i386-dvd.iso /VCD
# service portmap start
# service nfs start
# exportfs *:/VCD
# exportfs
# showmount -e `hostname`
# service iptables stop
You can run virt-install for both RAC nodes, or clone one node from the other (copy system.img, copy its /etc/xen configuration file, and change the entries).
# cd /OVS/running_pool; mkdir rac1 rac2
# virt-install --name=rac2 --ram=4096 --file=/OVS/running_pool/rac2/system.img --file-size=8 --location=nfs:192.168.1.99:/VCD --vnc
# virt-install --name=rac1 --ram=4096 --file=/OVS/running_pool/rac1/system.img --file-size=8 --location=nfs:192.168.1.99:/VCD --vnc
Add storage for the Oracle binaries and the ASM disks:
dd if=/dev/zero of=/OVS2/running_pool/rac1/oracle.img bs=1024k seek=30480 count=0
dd if=/dev/zero of=/OVS2/running_pool/rac2/oracle.img bs=1024k seek=30480 count=0
dd if=/dev/zero of=/OVS2/running_pool/rac_storage/ocr1.img bs=1024k seek=200 count=0
dd if=/dev/zero of=/OVS2/running_pool/rac_storage/ocr2.img bs=1024k seek=200 count=0
dd if=/dev/zero of=/OVS2/running_pool/rac_storage/voting1.img bs=1024k seek=50 count=0
dd if=/dev/zero of=/OVS2/running_pool/rac_storage/voting2.img bs=1024k seek=50 count=0
dd if=/dev/zero of=/OVS2/running_pool/rac_storage/voting3.img bs=1024k seek=50 count=0
dd if=/dev/zero of=/OVS2/running_pool/rac_storage/data1.img bs=1024k seek=10240 count=0
OVM guests start out with only one network interface, so add a second interface for the private interconnect:
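The seek=/count=0 trick creates sparse files: the apparent size is bs × seek, but no disk blocks are allocated until the guest actually writes to them. A quick demonstration, safe to run anywhere:

```shell
# Create a 200 MB sparse file the same way as the ocr images above,
# then compare the apparent size with the allocated blocks.
dd if=/dev/zero of=/tmp/sparse_demo.img bs=1024k seek=200 count=0 2>/dev/null
apparent_mb=$(( $(stat -c %s /tmp/sparse_demo.img) / 1024 / 1024 ))
allocated_kb=$(du -k /tmp/sparse_demo.img | cut -f1)
echo "apparent: ${apparent_mb} MB, allocated: ${allocated_kb} KB"
rm -f /tmp/sparse_demo.img
```

This is why the dd commands above return instantly even for the 30 GB oracle.img files.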
[root@ovm ~]# cat /etc/xen/rac2
# Automatically generated xen config file
name = "rac2"
memory = "4096"
disk = [ 'file:/OVS/running_pool/rac2/system.img,xvda,w',
         'file:/OVS2/running_pool/rac2/oracle.img,xvdb,w',
         'file:/OVS2/running_pool/rac_storage/ocr1.img,xvdc,w!',
         'file:/OVS2/running_pool/rac_storage/ocr2.img,xvdd,w!',
         'file:/OVS2/running_pool/rac_storage/voting1.img,xvde,w!',
         'file:/OVS2/running_pool/rac_storage/voting2.img,xvdf,w!',
         'file:/OVS2/running_pool/rac_storage/voting3.img,xvdg,w!',
         'file:/OVS2/running_pool/rac_storage/data1.img,xvdh,w!',]
vif = ['mac=00:16:3e:29:ae:b5, bridge=xenbr0',
       'mac=00:16:3e:29:ae:b6, bridge=xenbr1',]
vfb = ["type=vnc,vncunused=1"]
uuid = "af9ff92f-5ad0-920e-81e7-0c30d6801fe0"
bootloader="/usr/bin/pygrub"
vcpus=2
on_reboot = 'restart'
on_crash = 'restart'
[root@ovm ~]#
To add the second NIC, either copy the current MAC address and increment it by one, or omit MAC addresses entirely and let Xen generate them. For example:
vif = [ 'bridge=xenbr0', 'bridge=xenbr1', ]
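For the increment-the-MAC route, here is a small helper (hypothetical, not part of Oracle VM) that bumps the last octet so the second vif gets a unique address in the same Xen OUI range:

```shell
# Print the next MAC address by incrementing the last octet
# (wrapping at ff).
next_mac() {
    prefix=${1%:*}                      # everything up to the last octet
    last=$(printf '%d' "0x${1##*:}")    # last octet as decimal
    printf '%s:%02x\n' "$prefix" $(( (last + 1) % 256 ))
}
next_mac 00:16:3e:29:ae:b5   # prints 00:16:3e:29:ae:b6
```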
Note the exclamation mark ("w!") on the shared disks: it lets Oracle VM attach the same disk to multiple guests in write mode. I used oracle.img to hold the Oracle binaries (CRS, ASM, and Database homes):
mkfs.ext3 /dev/xvdb1
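The mkfs command above assumes /dev/xvdb has already been partitioned and leaves mounting to the reader. A fuller sketch of the whole sequence, written to a script you can review and run as root on each node; the /oracle mount point is an assumption based on the homes listed in the introduction:

```shell
# Generate the disk-prep script; review it, then run it as root.
cat > /tmp/prep_oracle_disk.sh <<'EOF'
#!/bin/sh
# One primary partition spanning /dev/xvdb
printf 'n\np\n1\n\n\nw\n' | fdisk /dev/xvdb
mkfs.ext3 /dev/xvdb1
mkdir -p /oracle                 # assumed mount point for the Oracle homes
mount /dev/xvdb1 /oracle
chown oracle:oinstall /oracle
# Persist the mount across reboots
echo '/dev/xvdb1 /oracle ext3 defaults 1 2' >> /etc/fstab
EOF
chmod +x /tmp/prep_oracle_disk.sh
```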
Add the following kernel parameters to /etc/sysctl.conf and run "sysctl -p" as root.
kernel.shmall = 2097152
kernel.shmmax = 1866465280
kernel.shmmni = 4096
kernel.sem = 256 32000 100 142
fs.file-max = 131072
net.ipv4.ip_local_port_range = 10000 65000
kernel.msgmni = 2878
kernel.msgmax = 8192
kernel.msgmnb = 65535
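After running "sysctl -p", you can sanity-check that the kernel actually picked the values up by reading them back from /proc/sys. A sketch; check_param is just a local helper, not a standard tool:

```shell
# Compare a kernel parameter's live value against the expected one.
check_param() {   # usage: check_param <key> <expected value>
    path="/proc/sys/$(echo "$1" | tr . /)"
    actual=$(cat "$path" 2>/dev/null | tr -s '\t' ' ')   # /proc uses tabs
    [ "$actual" = "$2" ] && echo "OK   $1 = $2" \
                         || echo "DIFF $1 = $actual (expected $2)"
}
check_param kernel.shmmni 4096
check_param kernel.sem '256 32000 100 142'
check_param net.ipv4.ip_local_port_range '10000 65000'
```

Any DIFF line means the file was edited but not applied (or a typo crept into /etc/sysctl.conf).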
Make sure that the following RPMs are installed:
binutils-2.15.92.0.2-10.EL4
compat-db-4.1.25-9
control-center-2.8.0-12
gcc-3.4.3-9.EL4
gcc-c++-3.4.3-9.EL4
glibc-2.3.4-2
glibc-common-2.3.4-2
gnome-libs-1.4.1.2.90-44.1
libstdc++-3.4.3-9.EL4
libstdc++-devel-3.4.3-9.EL4
make-3.80-5
pdksh-5.2.14-30
sysstat-5.0.5-1
xscreensaver-4.18-5.rhel4.2
Enable YUM to get RPMs from Oracle:
# cd /etc/yum.repos.d
# mv Oracle-Base.repo Oracle-Base.repo.disabled
# wget http://public-yum.oracle.com/public-yum-el4.repo
Edit this file and set enabled=1 for your OS release:
[el4_u8_base]
name=Enterprise Linux $releasever U8 - $basearch - base
baseurl=http://public-yum.oracle.com/repo/EnterpriseLinux/EL4/8/base/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-el4
gpgcheck=1
enabled=1
Once YUM is configured, you can install all the RPMs with a single command:
# yum install -y binutils compat-db control-center gcc gcc-c++ glibc glibc-common gnome-libs libstdc++ libstdc++-devel make pdksh sysstat xscreensaver libaio
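To double-check that nothing slipped through, rpm -q can verify the whole list in one pass. A sketch; it prints only the names that are still missing:

```shell
# Report any package from the prerequisite list that is not installed.
missing=$(for p in binutils compat-db control-center gcc gcc-c++ glibc \
                   glibc-common gnome-libs libstdc++ libstdc++-devel \
                   make pdksh sysstat xscreensaver libaio
do
    rpm -q "$p" >/dev/null 2>&1 || echo "$p"
done)
[ -z "$missing" ] && echo "all prerequisite RPMs installed" \
                  || echo "missing: $missing"
```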
Create groups and oracle user to own oracle software:
# /usr/sbin/groupadd oper -g 7000
# /usr/sbin/groupadd dba -g 8000
# /usr/sbin/groupadd oinstall -g 9000
# /usr/sbin/useradd -g oinstall -G dba,oper -u 9000 oracle
# passwd oracle
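With the user in place, it helps to give oracle a consistent environment. A minimal fragment to append to ~oracle/.bash_profile, assuming ORACLE_BASE=/oracle to match the homes listed in the introduction (adjust ORACLE_HOME per product while installing):

```shell
# Environment for the oracle user; the paths follow the CRS and
# Database homes at the top of the article. ORACLE_BASE=/oracle
# is an assumption.
export ORACLE_BASE=/oracle
export ORA_CRS_HOME=$ORACLE_BASE/product/10.2.0/crs
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/rdbms
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$PATH
```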
I have disabled unnecessary services in the guests.
# chkconfig ip6tables off
# chkconfig iptables off
# chkconfig sendmail off
# chkconfig portmap off
# chkconfig nfs off
# chkconfig nfslock off
# chkconfig cups off
Change the default run level to 3 in /etc/inittab:
id:3:initdefault:
Create entries for public, private, and VIPs in /etc/hosts:
[oracle@rac1 10.2.0]$ cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
# Public
192.168.1.10 rac1.freeoraclehelp.com rac1
192.168.1.20 rac2.freeoraclehelp.com rac2
# Private
192.168.2.10 rac1-priv.freeoraclehelp.com rac1-priv
192.168.2.20 rac2-priv.freeoraclehelp.com rac2-priv
# Virtual
192.168.1.11 rac1-vip.freeoraclehelp.com rac1-vip
192.168.1.22 rac2-vip.freeoraclehelp.com rac2-vip
[oracle@rac1 10.2.0]$
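If you script node setup, the three entries per node follow a fixed pattern. A hypothetical helper (rac_hosts_entries is not a standard tool) that emits them from a node name and its three addresses:

```shell
# Print the public, private, and VIP /etc/hosts lines for one node.
rac_hosts_entries() {   # usage: rac_hosts_entries <node> <public-ip> <private-ip> <vip>
    node=$1 pub=$2 priv=$3 vip=$4 dom=freeoraclehelp.com
    printf '%s %s.%s %s\n'           "$pub"  "$node" "$dom" "$node"
    printf '%s %s-priv.%s %s-priv\n' "$priv" "$node" "$dom" "$node"
    printf '%s %s-vip.%s %s-vip\n'   "$vip"  "$node" "$dom" "$node"
}
rac_hosts_entries rac1 192.168.1.10 192.168.2.10 192.168.1.11
```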
Create a single primary partition occupying the whole disk on each of /dev/xvdc, /dev/xvdd, and the remaining shared disks, then bind the partitions to raw devices in /etc/sysconfig/rawdevices:
# cat /etc/sysconfig/rawdevices
# This file and interface are deprecated.
# Applications needing raw device access should open regular
# block devices with O_DIRECT.
# raw device bindings
# format:
# example: /dev/raw/raw1 /dev/sda1
#          /dev/raw/raw2 8 5
# OCR
/dev/raw/raw1 /dev/xvdc1
/dev/raw/raw2 /dev/xvdd1
# Voting
/dev/raw/raw3 /dev/xvde1
/dev/raw/raw4 /dev/xvdf1
/dev/raw/raw5 /dev/xvdg1
# Data
/dev/raw/raw6 /dev/xvdh1
Raw device permissions are reset at every server reboot, so set them in /etc/rc.local:
[oracle@rac1 ~]$ cat /etc/rc.local
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
touch /var/lock/subsys/local
chown oracle:oinstall /dev/raw/*
chmod 644 /dev/raw/*
chown root:oinstall /dev/raw/raw[12]
chmod 660 /dev/raw/raw[12]
[oracle@rac1 ~]$
Refer to http://download.oracle.com/docs/cd/B19306_01/install.102/b14203/prelinux.htm#BABFDGHJ for more details.
Install Oracle 10gR2 CRS
As the oracle user, run runInstaller from the Clusterware dump.
If a global Oracle inventory doesn't exist, OUI creates one.
Enter 10gR2 CRS Home here.
Make sure you do not ignore any missing-package errors.
Specify the cluster name and the public, private, and VIP names.
Make sure the interfaces are identified correctly (public vs. private interconnect).
2 OCR disks with normal redundancy.
3 Voting disks with normal redundancy.
Again, make sure the Home location is correct.
[root@rac1 ~]# /oracle/product/10.2.0/crs/root.sh
WARNING: directory '/oracle/product/10.2.0' is not owned by root
WARNING: directory '/oracle/product' is not owned by root
WARNING: directory '/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/oracle/product/10.2.0' is not owned by root
WARNING: directory '/oracle/product' is not owned by root
WARNING: directory '/oracle' is not owned by root
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node:
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw3
Now formatting voting device: /dev/raw/raw4
Now formatting voting device: /dev/raw/raw5
Format of 3 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
 rac1
CSS is inactive on these nodes.
 rac2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@rac1 ~]#

[root@rac2 ~]# /oracle/product/10.2.0/crs/root.sh
WARNING: directory '/oracle/product/10.2.0' is not owned by root
WARNING: directory '/oracle/product' is not owned by root
WARNING: directory '/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/oracle/product/10.2.0' is not owned by root
WARNING: directory '/oracle/product' is not owned by root
WARNING: directory '/oracle' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
 rac1
 rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
The given interface(s), "eth0" is not public. Public interfaces should be used to configure virtual IPs.
[root@rac2 ~]#
root.sh on the second node runs VIPCA in silent mode; it failed on my network with the "eth0 is not public" error. So run VIPCA (/oracle/product/10.2.0/crs/bin/vipca) manually on the second node as root.
Select the right interface for the VIPs; this is the same as the public interface.
Enter the VIP names and IP addresses.
After VIPCA completes successfully, click "OK" on the root.sh prompt in the OUI installer.
Install Oracle 10gR2 ASM
Install Oracle 10gR2 Database Software
Create Oracle 10gR2 Database