10gR2 RAC Linux x86_64 Installation
10g RAC on Linux x86_64 Installation Oracle 10.2.0.2 2006 – Q3
Confidential - For Internal Use Only
Table of Contents

1. Pre-Installation
2. Oracle Clusterware (formerly CRS) and ASM
   Pre-Install of Clusterware Files (OCR and Voting Disk)
   Pre-Install of Database Files for ASM (Automatic Storage Management)
   Configure the Disk Devices to Use the ASM Library Driver
   Install Oracle Clusterware
   Post-Installation Administration Info
3. Oracle Database 10g with RAC – Software (Binaries)
   Pre-Install Notes
   Install
4. Patch Oracle Database Software
   Download and Install Patches
   Fix an Install Bug (5117016)
   Fix a Permission Bug (Patch 5087548)
5. RAC Database Using the DBCA with ASM
   Pre-Install
   Install
6. Post-Installation Tasks
7. Oracle Files
   Local Files
   Shared Oracle Database Files
   Shared Database Files for the Application
1. Pre-Installation

Source: Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide 10g Release 2 (10.2) for Linux
http://download-east.oracle.com/docs/cd/B19306_01/install.102/b14203/prelinux.htm#sthref133
1. Required Software
   • Red Hat Enterprise Linux 3 AS Update 3 (kernel 2.4.21-20) x86_64
     o uname -r
     o cat /etc/redhat-release
     o cat /etc/issue
   • Oracle Database Enterprise Edition 10.2 x86_64
   • Oracle Clusterware 10.2 x86_64
   • Oracle Patch 10.2.0.2 (p4547817_10202_Linux-x86-64)
   • Oracle Permissions Patch (p5087548_10202_Linux-x86-64)
   • Oracle Critical Patch Update Jul 2006 (p5225799_10202_Linux-x86-64)
   • Oracle ASMLib 2.0
   • Oracle Cluster Verification Utility 1.0
   • Oracle Client 10.2.x

2. Minimum hardware requirements for each RAC node (a pre-flight check sketch follows item 3 below)
   • 1 GB of physical RAM
     o cat /proc/meminfo | grep MemTotal
   • 1.5 GB of swap space (or the same size as RAM)
     o cat /proc/meminfo | grep SwapTotal
   • 400 MB of disk space in the /tmp directory
     o df -h /tmp
   • Up to 4 GB of disk space for the Oracle software
   • Optional: 1.2 GB of disk space for a preconfigured database that uses file system storage
   • Shared database disk: 2 TB usable, 33 GB LUNs, RAID 1+0
     o /sbin/fdisk -l

3. Networking hardware requirements
   • Each node must have at least two network adapters: one for the public network interface and one for the private network interface (the RAC interconnect).
   • The interface names associated with the network adapters for each network must be the same on all nodes.
   • For increased reliability, you can configure redundant public and private network adapters for each node.
   • For the public network, each network adapter must support TCP/IP.
   • For the private network, the interconnect must use high-speed network adapters and switches that support TCP/IP (Gigabit Ethernet or better recommended) and must support the user datagram protocol (UDP).
   • UDP is the default interconnect protocol for RAC; TCP is the interconnect protocol for Oracle CRS.
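A minimal pre-flight sketch for the hardware checks in item 2, assuming the thresholds above; run it on each node (memory and swap values come from /proc/meminfo in KB, and the /tmp check uses the available-space column of df):

   #!/bin/sh
   # Pre-flight hardware check (sketch): verify RAM, swap, and /tmp space
   # against the minimums listed above. All thresholds are in KB.
   mem_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
   swap_kb=$(grep SwapTotal /proc/meminfo | awk '{print $2}')
   tmp_kb=$(df -Pk /tmp | awk 'NR==2 {print $4}')   # available KB in /tmp

   [ "$mem_kb"  -ge 1048576 ] || echo "WARNING: less than 1 GB RAM ($mem_kb KB)"
   [ "$swap_kb" -ge 1572864 ] || echo "WARNING: less than 1.5 GB swap ($swap_kb KB)"
   [ "$tmp_kb"  -ge 409600  ] || echo "WARNING: less than 400 MB free in /tmp ($tmp_kb KB)"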
4. IP Address requirements for each RAC node
   • An IP address and an associated host name registered in the domain name service (DNS) for each public network interface. If you do not have an available DNS, then record the network name and IP address in the system hosts file, /etc/hosts.
   • One unused virtual IP address and an associated virtual host name registered in DNS that you will configure for the primary public network interface. The virtual IP address must be in the same subnet as the associated public interface. After installation, you can configure clients to use the virtual host name or IP address. If a node fails, its virtual IP address fails over to another node.
   • A private IP address and optional host name for each private interface. Oracle recommends that you use non-routable IP addresses for the private interfaces, for example 10.*.*.* or 192.168.*.*. You can use the /etc/hosts file on each node to associate private host names with private IP addresses.
     o cat /etc/hosts
     o /sbin/ifconfig -a

   Example:

   Node   Interface Name   Type      IP Address      Registered In
   rac1   rac1             Public    143.46.43.100   DNS (if available, else the hosts file)
   rac1   rac1-vip         Virtual   143.46.43.104   DNS (if available, else the hosts file)
   rac1   rac1-priv        Private   10.0.0.1        Hosts file
   rac2   rac2             Public    143.46.43.101   DNS (if available, else the hosts file)
   rac2   rac2-vip         Virtual   143.46.43.105   DNS (if available, else the hosts file)
   rac2   rac2-priv        Private   10.0.0.2        Hosts file
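A hedged example of the corresponding /etc/hosts entries on each node, using the sample names and addresses from the table above:

   # /etc/hosts (sketch, using the sample addresses above)
   127.0.0.1       localhost.localdomain localhost

   # Public
   143.46.43.100   rac1
   143.46.43.101   rac2

   # Virtual (VIP)
   143.46.43.104   rac1-vip
   143.46.43.105   rac2-vip

   # Private (interconnect)
   10.0.0.1        rac1-priv
   10.0.0.2        rac2-priv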
5. Linux x86-64 software requirements
   • To see installed packages:
     o rpm -qa
     o rpm -q kernel --queryformat "%{NAME}-%{VERSION}.%{RELEASE} (%{ARCH})\n"
     o rpm -q <package>

   Operating systems, x86 (64-bit):
   • Red Hat Enterprise Linux AS/ES 3 (Update 4 or later)
   • Red Hat Enterprise Linux AS/ES 4 (Update 1 or later)
   • SUSE Linux Enterprise Server 9 (Service Pack 2 or later)

   Kernel version, x86 (64-bit). The system must be running one of the following kernel versions (or a later version):
   • Red Hat Enterprise Linux 3 (Update 4): 2.4.21-27.EL (the default kernel version)
   • Red Hat Enterprise Linux 4 (Update 1): 2.6.9-11.EL
   • SUSE Linux Enterprise Server 9 (Service Pack 2): 2.6.5-7.201

   Red Hat Enterprise Linux 3 (Update 4) packages. The following packages (or later versions) must be installed:
     make-3.79.1-17
     compat-db-4.0.14-5.1
     control-center-2.2.0.1-13
     gcc-3.2.3-47
     gcc-c++-3.2.3-47
     gdb-6.1post-1.20040607.52
     glibc-2.3.2-95.30
     glibc-common-2.3.2-95.30
     glibc-devel-2.3.2-95.30
     glibc-devel-2.3.2-95.20 (32-bit)
     glibc-devel-2.3.4-2.13.i386 (32-bit)
     compat-db-4.0.14-5
     compat-gcc-7.3-2.96.128
     compat-gcc-c++-7.3-2.96.128
     compat-libstdc++-7.3-2.96.128
     compat-libstdc++-devel-7.3-2.96.128
     gnome-libs-1.4.1.2.90-34.2 (32-bit)
     libstdc++-3.2.3-47
     libstdc++-devel-3.2.3-47
     openmotif-2.2.3-3.RHEL3
     sysstat-5.0.5-5.rhel3
     setarch-1.3-1
     libaio-0.3.96-3
     libaio-devel-0.3.96-3
   Note: XDK is not supported with gcc on Red Hat Enterprise Linux 3.

   Red Hat Enterprise Linux 4 (Update 1) packages. The following packages (or later versions) must be installed:
     binutils-2.15.92.0.2-10.EL4
     binutils-2.15.92.0.2-13.0.0.0.2.x86_64
     compat-db-4.1.25-9
     control-center-2.8.0-12
     gcc-3.4.3-9.EL4
     gcc-c++-3.4.3-9.EL4
     glibc-2.3.4-2
     glibc-common-2.3.4-2
     gnome-libs-1.4.1.2.90-44.1
     libstdc++-3.4.3-9.EL4
     libstdc++-devel-3.4.3-9.EL4
     make-3.80-5
   Note: XDK is not supported with gcc on Red Hat Enterprise Linux 4.

   SUSE Linux Enterprise Server 9 packages. The following packages (or later versions) must be installed:
     binutils-2.15.90.0.1.1-32.5
     gcc-3.3.3-43.24
     gcc-c++-3.3.3-43.24
     glibc-2.3.3-98.28
     gnome-libs-1.4.1.7-671.1
     libstdc++-3.3.3-43.24
     libstdc++-devel-3.3.3-43.24
     make-3.80-184.1

   PL/SQL native compilation, Pro*C/C++, Oracle Call Interface, Oracle C++ Call Interface, Oracle XML Developer's Kit (XDK):
   Intel C++ Compiler 8.1 or later and the version of the GNU C and C++ compilers listed previously for the distribution are supported for use with these products.
   Note: Intel C++ Compiler v8.1 or later is supported but not required for installation. On Red Hat Enterprise Linux 3, Oracle C++ Call Interface (OCCI) is supported with version 2.2 of the GNU C++ compiler, which is the default compiler version. OCCI is also supported with Intel Compiler v8.1 with gcc 3.2.3 standard template libraries. On Red Hat Enterprise Linux 4.0, OCCI does not support GCC 3.4.3; to use OCCI on Red Hat Enterprise Linux 4.0, you need to install GCC 3.2.3. Oracle XML Developer's Kit is not supported with GCC on Red Hat Linux 4.0; it is supported only with the Intel C++ Compiler (ICC).

   Oracle JDBC/OCI Drivers: you can use the following optional JDK versions with the Oracle JDBC/OCI drivers; they are not required for the installation:
   • Sun JDK 1.5.0 (64-bit)
   • Sun JDK 1.5.0 (32-bit)
   • Sun JDK 1.4.2_09 (32-bit)

   Oracle Real Application Clusters: for a cluster file system, use one of the following options:
   • Red Hat 3: Oracle Cluster File System (OCFS), version 1.0.13-1 or later. OCFS requires the following kernel packages, where kernel_version is the kernel version of the operating system on which you are installing OCFS:
       ocfs-support
       ocfs-tools
       ocfs-kernel_version
     Note: OCFS is required only if you want to use a cluster file system for database file storage. If you want to use Automatic Storage Management or raw devices for database file storage, then you do not need to install OCFS. Obtain OCFS kernel packages, installation instructions, and additional information about OCFS from http://oss.oracle.com/projects/ocfs/
   • Red Hat 4: Oracle Cluster File System 2 (OCFS2), version 1.0.1-1 or later. For information about OCFS2, refer to http://oss.oracle.com/projects/ocfs2/. For OCFS2 certification status, refer to the Certify page on OracleMetaLink.
   • SUSE 9: Oracle Cluster File System 2 (OCFS2). OCFS2 is bundled with SUSE Linux Enterprise Server 9, Service Pack 2 or higher. If you are running SUSE 9, ensure that you have upgraded to the latest kernel (Service Pack 2 or higher) and that you have installed the ocfs2-tools and ocfs2console packages. For OCFS2 certification status, refer to the Certify page on OracleMetaLink.
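As a quick check of the Red Hat Enterprise Linux 3 package list above, a minimal sketch (presence check only; the version suffixes from the list must still be compared by hand against rpm -q output):

   #!/bin/sh
   # Report any missing base packages from the RHEL 3 (Update 4) list above.
   for p in make compat-db control-center gcc gcc-c++ gdb glibc glibc-common \
            glibc-devel compat-gcc compat-gcc-c++ compat-libstdc++ \
            compat-libstdc++-devel gnome-libs libstdc++ libstdc++-devel \
            openmotif sysstat setarch libaio libaio-devel; do
       rpm -q "$p" >/dev/null 2>&1 || echo "MISSING: $p"
   done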
6. Additional RAC-specific software requirements
   • See ASMLib downloads at: http://www.oracle.com/technology/software/tech/linux/asmlib/rhel3.html
   • ASMLib 2.0 for Red Hat 3.0 AS, library and tools:
     o oracleasm-support-2.0.3-1.x86_64.rpm
     o oracleasmlib-2.0.2-1.x86_64.rpm
   • Driver for kernel 2.4.21-40.EL:
     o oracleasm-2.4.21-40.ELsmp-1.0.4-1.x86_64.rpm
7. Create the Linux groups and users on each RAC node
   o dba group (/usr/sbin/groupadd dba)
   o oinstall group (/usr/sbin/groupadd oinstall)
   o oracle user (/usr/sbin/useradd -G dba oracle)
   o nobody user
   • The Oracle software owner user and the Oracle Inventory, OSDBA, and OSOPER groups must exist and be identical on all cluster nodes. To create these identical users and groups, identify the user ID and group IDs assigned to them on the node where you created them, then create the user and groups with the same names and IDs on the other cluster nodes.

8. Configure SSH on each RAC node
   • Log in as oracle.
   • Create the .ssh directory in oracle's home directory (then chmod 700 .ssh).
   • Generate an RSA key for version 2 of the SSH protocol: /usr/bin/ssh-keygen -t rsa
   • Generate a DSA key for version 2 of the SSH protocol: /usr/bin/ssh-keygen -t dsa
   • Copy the contents of the ~/.ssh/id_rsa.pub and ~/.ssh/id_dsa.pub files from every node into the ~/.ssh/authorized_keys file, and distribute the complete ~/.ssh/authorized_keys file to all cluster nodes.
   • chmod 644 ~/.ssh/authorized_keys
   • Enable the Installer to use the ssh and scp commands without being prompted for a pass phrase:
     o exec /usr/bin/ssh-agent $SHELL
     o /usr/bin/ssh-add
     o At the prompts, enter the pass phrase for each key that you generated.
     o Test ssh connections and confirm the authenticity message.
   • Also test ssh, and confirm the authenticity message, back to the node you are working on. Example: if you are on node1, ssh to node1.
   • Ensure that X11 forwarding will not cause the installation to fail:
     o Edit or create ~oracle/.ssh/config as follows:
         Host *
           ForwardX11 no
   • If necessary, start the required X emulation software on the client.
   • Test: /usr/X11R6/bin/xclock

9. Configure kernel parameters on each RAC node
   • Values should be equal to or greater than those in the following table on all nodes (/etc/sysctl.conf). A verification sketch follows step 10 below. To check current values:
     o /sbin/sysctl -a | grep sem
     o /sbin/sysctl -a | grep shm
     o /sbin/sysctl -a | grep file-max
     o /sbin/sysctl -a | grep ip_local_port_range
     o /sbin/sysctl -a | grep net.core
   Parameter                        Value                                       File
   semmsl, semmns, semopm, semmni   250 32000 100 128                           /proc/sys/kernel/sem
   shmmax                           Half the size of physical memory (bytes)    /proc/sys/kernel/shmmax
   shmmni                           4096                                        /proc/sys/kernel/shmmni
   shmall                           2097152                                     /proc/sys/kernel/shmall
   file-max                         65536                                       /proc/sys/fs/file-max
   ip_local_port_range              Minimum: 1024, Maximum: 65000               /proc/sys/net/ipv4/ip_local_port_range
   rmem_default                     262144                                      /proc/sys/net/core/rmem_default
   rmem_max                         262144                                      /proc/sys/net/core/rmem_max
   wmem_default                     262144                                      /proc/sys/net/core/wmem_default
   wmem_max                         262144                                      /proc/sys/net/core/wmem_max
   • To change values:
     o Edit /etc/sysctl.conf with the values below.
     o Once edited, execute /sbin/sysctl -p to apply the changes manually.

         kernel.shmall = 2097152
         kernel.shmmax = 2147483648
         kernel.shmmni = 4096
         kernel.sem = 250 32000 100 128
         fs.file-max = 65536
         net.ipv4.ip_local_port_range = 1024 65000
         net.core.rmem_default = 1048576
         net.core.rmem_max = 1048576
         net.core.wmem_default = 262144
         net.core.wmem_max = 262144

10. Set shell limits for the oracle user on all nodes to improve performance
   • Add the following lines to the /etc/security/limits.conf file:
       oracle soft nproc 2047
       oracle hard nproc 16384
       oracle soft nofile 1024
       oracle hard nofile 65536
   • Add or edit the following line in the /etc/pam.d/login file:
       session required /lib/security/pam_limits.so
   • Edit /etc/profile with the following:
       if [ $USER = "oracle" ]; then
           if [ $SHELL = "/bin/ksh" ]; then
               ulimit -p 16384
               ulimit -n 65536
           else
               ulimit -u 16384 -n 65536
           fi
       fi
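A hedged verification sketch for steps 9 and 10, assuming the minimums in the table above; it compares a few single-valued kernel settings against the required floors (kernel.sem and shmmax are multi-valued or RAM-dependent, so check those by eye):

   #!/bin/sh
   # Compare selected kernel parameters against the minimums above (sketch).
   check() {   # usage: check <sysctl-name> <minimum>
       cur=$(/sbin/sysctl -n "$1")
       [ "$cur" -ge "$2" ] || echo "TOO LOW: $1 = $cur (need >= $2)"
   }
   check kernel.shmmni 4096
   check kernel.shmall 2097152
   check fs.file-max 65536
   check net.core.rmem_max 262144
   check net.core.wmem_max 262144

   # Shell limits for the oracle user (run as oracle):
   #   ulimit -u  -> max user processes (expect 16384 hard)
   #   ulimit -n  -> open files (expect 65536 hard)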
11. Create Oracle software directories on each node
   • Oracle Base, ex. /u01/app/oracle
     o Minimum 3 GB available disk space
       # mkdir -p /u01/app/oracle
       # chown -R oracle:oinstall /u01/app/oracle
       # chmod -R 775 /u01/app/oracle
   • Oracle Cluster Ready Services, ex. /u01/crs/oracle/product/10/crs
     o Should not be a subdirectory of the Oracle Base directory
     o Minimum 1 GB available disk space
       # mkdir -p /u01/crs/oracle/product/10/crs
       # chown -R oracle:oinstall /u01/crs
       # chmod -R 775 /u01/crs
   • Note: the Oracle Home directory will be created by the OUI.
     o Oracle Home directories will be listed in /etc/oratab.

12. Oracle database files and Oracle database recovery files (if utilized) must reside on shared storage:
   • ASM: Automatic Storage Management
   • NFS file system (requires a NAS device)
   • Shared raw partitions

13. The Oracle Cluster Registry and voting disk files must reside on shared storage, but not on ASM. You cannot use Automatic Storage Management to store OCR or voting disk files because these files must be accessible before any Oracle instance starts.
   • These files MUST be raw files on shared storage. Files:
     o ora_ocr: 100 MB each, 2 for redundancy
     o ora_vote: 20 MB each, 3 for redundancy
   • Clustered sharing of these raw files is handled by CRS. Third-party clusterware is not required.

14. Configure the oracle user's environment
   • PATH
     o In PATH: $ORACLE_HOME/bin before /usr/X11R6/bin
   • ORACLE_BASE (ex. /u01/app/oracle)
   • ORACLE_HOME (ex. $ORACLE_BASE/product/)
   • ORA_CRS_HOME (ex. $ORACLE_BASE/crs)
   • DISPLAY
   • umask 022
   • Test the X emulator
15. Ensure a switch resides on the network between the nodes.

16. For Oracle Clusterware (CRS) on x86 64-bit, you must run rootpre.sh
   • Shipped with the Clusterware software.

17. Oracle Database 10g installation is a two-phase process in which you run the Oracle Universal Installer (OUI) twice. The first phase installs Oracle Clusterware 10g Release 2 (10.2), and the second phase installs the Oracle Database 10g software with RAC. These steps are documented below.
2. Oracle Clusterware (formerly CRS) and ASM

Source: Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide 10g Release 2 (10.2) for Linux
http://download-east.oracle.com/docs/cd/B19306_01/install.102/b14203/storage.htm#sthref666
1. Verify user equivalence by testing ssh to all nodes (for example, ssh rac2 date should complete without a password prompt).
   • ssh may need to be in /usr/local/bin/.
     o Softlinks may need to be created for ssh and scp in /usr/local/bin/.
2. IP addresses: in addition to the host machine's public internet protocol (IP) address, obtain two more IP addresses for each node.
   • Each node requires a separate public IP address for the node's virtual IP address (VIP). Oracle uses VIPs for client-to-database connections, so the VIP address must be publicly accessible.
3. The third address for each node must be a private IP address for inter-node, or instance-to-instance, Cache Fusion traffic. Using public interfaces for Cache Fusion can cause performance problems.
4. Oracle Clusterware should be installed in a separate home directory. You should not install Oracle Clusterware in a release-specific Oracle home mount point.
Pre-Install of Clusterware Files (OCR and Voting Disk)
http://download-east.oracle.com/docs/cd/B19306_01/install.102/b14203/storage.htm#BABCEDJB
5. The Oracle Clusterware files to be installed are:
   • Oracle Cluster Registry (OCR): 100 MB: ora_ocr
   • CRS voting disk: 20 MB: ora_vote
6. The CRS files listed above must be on shared storage (OCFS, NFS, or raw), bound and visible to all nodes.
   • You cannot use Automatic Storage Management to store Oracle CRS files, because these files must be accessible before any Oracle instance starts.
7. If using raw devices, do the following on all nodes as root (an example rawdevices configuration follows this list):
   • To identify device names: /sbin/fdisk -l
     o devicename examples: /dev/sdv or /dev/emcpowera
   • Create (raw) partitions: /sbin/fdisk
     o Use the "p" command to list the partition table of the device.
     o Use the "n" command to create a partition.
     o After creating the required partitions on this device, use the "w" command to write the modified partition table to the device.
   • Bind the partitions to the raw devices:
     o See what devices are already bound: /usr/bin/raw -qa
     o Add a line to /etc/sysconfig/rawdevices for each partition created, mapping it to a raw device such as /dev/raw/raw1.
     o For each raw device created:
         chown root:dba /dev/raw/raw1
         chmod 640 /dev/raw/raw1
     o To bind the partitions to the raw devices, enter the following command:
         /sbin/service rawdevices restart
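A hedged example of the /etc/sysconfig/rawdevices entries for the OCR and voting disk files; the block device partitions (/dev/sdb1, /dev/sdc1) are placeholders for the partitions you created above:

   # /etc/sysconfig/rawdevices (sketch; partition names are hypothetical)
   # ora_ocr (100 MB) on /dev/sdb1:
   /dev/raw/raw1 /dev/sdb1
   # ora_vote (20 MB) on /dev/sdc1:
   /dev/raw/raw2 /dev/sdc1

After editing, run /sbin/service rawdevices restart and confirm the bindings with /usr/bin/raw -qa, as described in step 7.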
Pre-Install of Database Files for ASM (Automatic Storage Management)
http://download-east.oracle.com/docs/cd/B19306_01/install.102/b14203/storage.htm#sthref838
8. Determine how many devices and how much free disk space are required.
   • Determine the space needed for database files.
   • Determine the space needed for recovery files (optional).
9. ASM redundancy level: determines how ASM mirrors, the number of disks needed for mirroring, and the amount of disk space needed.
   • External redundancy: ASM does not mirror.
   • Normal redundancy: two-way ASM mirroring. A minimum of 2 disks is required. Usable disk space is 1/2 the sum of the disk space.
   • High redundancy: three-way ASM mirroring. A minimum of 3 disks is required. Usable disk space is 1/3 the sum of the disk space.
10. ASM metadata requires additional disk space. Use the following calculation to determine the space in megabytes:
   • 15 + (2 * number_of_disks) + (126 * number_of_ASM_instances)
   • For example, with 8 disks and 2 ASM instances: 15 + 16 + 252 = 283 MB.
11. Failure groups for ASM disk group devices are optional: associating a set of disk devices in a custom failure group.
   • Only available at the Normal or High redundancy level.
12. Guidelines for disk devices and disk groups:
   • All devices in an ASM disk group should be the same size and have the same performance characteristics.
   • Do not specify more than one partition on a single physical disk as a disk group device. ASM expects each disk group device to be on a separate physical disk.
   • Although you can specify a logical volume as a device in an ASM disk group, Oracle does not recommend their use. Logical volume managers can hide the physical disk architecture, preventing ASM from optimizing I/O across the physical devices.
13. If necessary, download the required ASMLib packages from the OTN Web site:
   • http://www.oracle.com/technology/software/tech/linux/asmlib/rhel3.html
14. Install the following three packages on all nodes, where version is the version of the ASMLib driver, arch is the system architecture, and kernel is the version of the kernel that you are using:
   • oracleasm-support-version.arch.rpm
   • oracleasm-kernel-version.arch.rpm
   • oracleasmlib-version.arch.rpm
15. On all nodes, install the packages as root:
   • rpm -Uvh oracleasm-support-version.arch.rpm \
         oracleasm-kernel-version.arch.rpm \
         oracleasmlib-version.arch.rpm
   • Check kernel modules: /sbin/modprobe -v oracleasm
16. Run the oracleasm initialization script as root on all nodes:
   • /etc/init.d/oracleasm configure
   • When requested, select the owner (oracle), group (dba), and start on boot (y).
Configure the Disk Devices to Use the ASM Library Driver

17. Install or configure the shared disk devices that you intend to use for the disk group(s) and restart the system.
18. Identify the device names for the disks: /sbin/fdisk -l
19. Use fdisk (or parted) to create a single whole-disk partition on each disk device that you want to use.
   • On Linux systems, Oracle recommends that you create a single whole-disk partition on each disk.
   • To identify device names: /sbin/fdisk -l
     o devicename examples: /dev/sdv or /dev/emcpowera
   • Create (raw) partitions: /sbin/fdisk
     o Use the "p" command to list the partition table of the device.
     o Use the "n" command to create a partition.
     o After creating the required partitions on this device, use the "w" command to write the modified partition table to the device.
20. Mark the disk(s) as ASM disk(s). As root, run:
   • /etc/init.d/oracleasm createdisk DISK1 /dev/sdb1
   • /etc/init.d/oracleasm createdisk DISK2 /dev/sda1
   • DISK1 and DISK2 are the names you want to assign to the disks; a name MUST start with an uppercase letter.
21. On each node, to make the disks available on the other cluster nodes, enter the following command as root:
   • /etc/init.d/oracleasm scandisks
22. On each node, confirm the disks:
   • /etc/init.d/oracleasm listdisks
23. If you are using EMC PowerPath on Red Hat 3, add the following line to /etc/sysconfig/oracleasm:
   • ORACLEASM_SCANEXCLUDE="emcpower"
Install Oracle Clusterware

24. During the installation, hidden files on the system (for example, .bashrc or .cshrc) will cause installation errors if they contain stty commands. You must modify these files to suppress all output on STDERR, as in the following examples:
   • Bourne, Bash, or Korn shell:
       if [ -t 0 ]; then
           stty intr ^C
       fi
   • C shell:
       test -t 0
       if ($status == 0) then
           stty intr ^C
       endif
25. As root, run rootpre.sh, which is located in the ../clusterware/rootpre directory on the Oracle Database 10g Release 2 (10.2) installation media.
26. Using an X Windows emulator, start the runInstaller command from the clusterware directory on the Oracle Database 10g Release 2 (10.2) installation media.
   • /mountpoint/clusterware/runInstaller
   • When the OUI displays the Welcome page, click Next.
   • On the "Specify Home Details" page, remember that the Clusterware home CANNOT be the same as the ORACLE_HOME.
27. When the OUI is complete, run orainstRoot.sh and root.sh on all the nodes when requested.
28. Without user intervention, the OUI runs:
   • Oracle Notification Server Configuration Assistant
   • Oracle Private Interconnect Configuration Assistant
   • Cluster Verification Utility (CVU)
29. If the CVU fails because of a missing VIP, this may be because all of the IP addresses are incorrectly considered private by Oracle (because they begin with 172.16.x.x - 172.31.x.x, 192.168.x.x, or 10.x.x.x). In a separate window, as root, run vipca manually:
   • DO NOT exit the OUI.
   • As root, launch VIPCA (ex: /apps/crs/oracle/product/10.2/crs/bin/vipca).
   • Enter the VIP node names and IP addresses for every node.
   • Exit VIPCA.
   • Back in the OUI, retry the Cluster Verification Utility.
Post-Installation Administration Info

30. init.crs should have been added to the server boot scripts to stop/start CRS.
31. The following are the CRS (CSS) background processes that must be running for CRS to function. They are stopped and started with init.crs:
   • evmd: event manager daemon that starts the racgevt process to manage callouts.
   • ocssd: manages cluster node membership and runs as the oracle user; failure of this process results in a cluster restart.
   • crsd: performs high availability recovery and management operations, such as maintaining the OCR, and also manages application resources. Runs as the root user and restarts automatically upon failure.
32. To administer the ASM library driver and disks, use the oracleasm initialization script (used in the previous steps) with different options, as follows:
   • /etc/init.d/oracleasm configure
     o Reconfigure the ASM library driver.
   • /etc/init.d/oracleasm enable OR disable
     o Change the behavior of the ASM library driver when the system boots. The enable option causes the ASM library driver to load when the system boots.
   • /etc/init.d/oracleasm restart OR stop OR start
     o Load or unload the ASM library driver without restarting the system.
   • /etc/init.d/oracleasm createdisk DISKNAME devicename
     o Mark a disk device for use with the ASM library driver and give it a name.
   • /etc/init.d/oracleasm deletedisk DISKNAME
     o Unmark a named disk device. You must drop the disk from the ASM disk group before you unmark it.
   • /etc/init.d/oracleasm querydisk {DISKNAME | devicename}
     o Determine whether a disk device or disk name is being used by the ASM library driver.
   • /etc/init.d/oracleasm listdisks
     o List the disk names of marked ASM library driver disks.
   • /etc/init.d/oracleasm scandisks
     o Enable cluster nodes to identify which shared disks have been marked as ASM library driver disks on another node.
3. Oracle Database 10g with RAC – Software (Binaries)

Source: Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide 10g Release 2 (10.2) for Linux
http://download-east.oracle.com/docs/cd/B19306_01/install.102/b14203/racinstl.htm#sthref1048
Pre-Install Notes

1. The Oracle home that you create for installing Oracle Database 10g with the RAC software cannot be the same Oracle home that you used during the CRS installation.
2. During the installation, unless you are placing your Oracle home on a clustered file system, the OUI copies the software to the local node and then copies the software to the remote nodes. On UNIX-based systems, the OUI then prompts you to run the root.sh script on all the selected nodes.
Install

3. Using an X Windows emulator, start the runInstaller command from the database directory on the Oracle Database 10g Release 2 (10.2) installation media.
   • /mountpoint/database/runInstaller
   • Execute a normal Oracle install except where noted below.
4. Ensure the OUI is cluster aware.
   • After the Specify Home Details page, you should see the Specify Hardware Cluster Installation Mode page.
   • If you do not, the OUI is not cluster aware and will not install the components required to run RAC.
   • View the OUI log in /logs/. for install details.
5. On the Select Configuration Option page, select "Install Database Software only".
6. Complete the install.
4. Patch Oracle Database Software

Source: Oracle Metalink - http://metalink.oracle.com
At this time, 10.2.0.2 is the latest GA version for Linux x86. The Oracle CD pack used for this install is 10.2.0.1.
Download and Install Patches

Refer to the OracleMetaLink Web site for the patches required for your installation and to download them:
1. Use a Web browser to view the OracleMetaLink Web site: http://metalink.oracle.com
2. Log in to OracleMetaLink.
3. On the main OracleMetaLink page, click the Patches & Updates tab.
4. Click the Simple Search link, then the Advanced Search button.
5. On the Advanced Search page, click the search icon next to the Product or Product Family field.
6. In the Search and Select: Product Family field, enter RDBMS Server in the For field and click Go.
7. Select RDBMS Server under the Results heading and click Select. RDBMS Server appears in the Product or Product Family field, and the current release appears in the Release field.
8. Select your platform from the list in the Platform field and click Go.
9. Any available patches appear under the Results heading.
10. Click the number of the patch that you want to download.
11. On the Patch Set page, click View README and read the page that appears. The README page contains information about the patch set and how to apply the patches to your installation.
12. Return to the Patch Set page, click Download, and save the file on your system.
13. Use the unzip utility provided with Oracle Database 10g to uncompress the Oracle patches that you downloaded from OracleMetaLink. The unzip utility is located in the $ORACLE_HOME/bin directory.
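For example, a sketch using the 10.2.0.2 patch set named in the software list in section 1; the staging directory is hypothetical:

   $ cd /stage/patches   # hypothetical staging directory
   $ $ORACLE_HOME/bin/unzip p4547817_10202_Linux-x86-64.zip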
Fix an Install Bug (5117016)

14. cd /apps/oracle/10.2/rdbms/lib/   (we need to make a copy)
15. cp libserver10.a libserver10.a.base_cpOHRDBMSLIB
16. cd /apps/oracle/10.2/lib
17. mv libserver10.a libserver10.a.base_cpOHLIB
18. mv /apps/oracle/10.2/rdbms/lib/libserver10.a .
19. ls -al $ORACLE_HOME/bin/oracle*
20. relink oracle
21. ls -al $ORACLE_HOME/bin/oracle*
22. If oracle does not relink, stop and contact support.
Fix a Permission Bug (Patch 5087548)

23. Transfer the 10.2 patch file to the server.
24. Unzip the patch file: unzip
25. Set the 10g oracle environment.
26. cd 5087548/
27. Run OPatch: /apps/oracle/10.2/OPatch/opatch apply
28. cd $ORACLE_HOME/install
29. . ./changePerm.sh
30. Hit 'y' and Enter.
   • The permission changes should take around 10-15 minutes.
   • If the script hangs, exit the window/job.
   • Verify the permission changes in /apps/oracle/10.2/bin/; most permissions (for example, sqlplus) should show: -rwxr-xr-x
5. RAC Database Using the DBCA with ASM

Source: Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide 10g Release 2 (10.2) for Linux
http://download-east.oracle.com/docs/cd/B19306_01/install.102/b14203/dbcacrea.htm#sthref1091
Pre-Install

1. Database creation requirements of the ASM library driver:
   • You must use the Database Configuration Assistant (DBCA) in interactive mode to create the database. You can run DBCA in interactive mode by choosing the Custom installation type or the Advanced database configuration option.
   • You must also change the default disk discovery string to ORCL:*.
Install

2. Run the CVU to verify that your system is prepared to create an Oracle Database with RAC:
   • /mountpoint/crs/Disk1/cluvfy/runcluvfy.sh stage -pre dbcfg -n node_list -d oracle_home [-verbose]
   • Example: /dev/dvdrom/crs/Disk1/cluvfy/runcluvfy.sh stage -pre dbcfg -n node1,node2 -d /oracle/product/10.2.0/
3. Start the DBCA using an X Windows emulator: $ORACLE_HOME/bin/dbca
   • Execute a normal Oracle install except where noted below.
4. Ensure the DBCA is cluster aware.
   • The first page should be the Welcome page for RAC.
   • If not, the DBCA is not cluster aware.
   • To diagnose:
     o Run the CVU: /mountpoint/crs/Disk1/cluvfy/runcluvfy.sh stage -post crsinst -n nodename
     o Run olsnodes
5. When asked "Select the operation that you want to perform", choose Create a Database.
6. Select the Custom Database template to manually define datafiles and options.
7. If you choose to manage the RAC database with Enterprise Manager, you can also choose one of the following:
   • Grid Control
   • Database Control
8. On the Storage Options page:
   • The Cluster File System option is the default.
   • Change it to ASM.
9. For ASM, you will need to create an ASM instance (if one does not already exist). You will be taken to the ASM Instance Creation page.
   • Unless $ORACLE_HOME/dbs/. is a shared filesystem, you will not be able to create an SPFILE. Use an IFILE.
   • Let ASM create a listener if prompted.
10. On the ASM Disk Group page:
   • Click the "Create New" button. The disk groups configured above in the ASM library driver install should appear.
   • On the Create Disk Group page, your ASM disk(s) should appear. If not, exit the DBCA and restart it.
   • At the top, choose a disk group name.
   • Choose your redundancy level (external).
   • Then check the disks that belong to the disk group, and click OK.
11. On the Recovery Configuration page, for Cluster File System, the optional flash recovery area defaults to $ORACLE_BASE/flash_recovery_area.
12. Follow the remaining steps of a typical database creation.
13. Before creating the database, choose Generate Database Creation Scripts.
6. Post-Installation Tasks

Source: Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide 10g Release 2 (10.2) for Linux
http://download-east.oracle.com/docs/cd/B19306_01/install.102/b14203/postinst.htm#sthref1144
1. Ensure NETCA has run to configure the Oracle Networking components.
2. Back up the voting disk (a hedged example follows).
   • Also make a backup of the voting disk after adding or removing a node.
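A minimal sketch of a voting disk backup using dd, assuming the voting disk is the raw device bound in the Clusterware pre-install (shown as /dev/raw/raw2 in the rawdevices sketch); the destination path is hypothetical:

   # As root; both the device and the destination are placeholders.
   dd if=/dev/raw/raw2 of=/u01/backup/ora_vote.bak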
7. Oracle Files

Creation of these files is not necessary when using ASM. They are listed here to assist with database planning and sizing.
Local Files

These files are local to each node and do not need to be OCFS or ASM files.
   - archived redo logs
   - init file
Shared Oracle Database Files

These files (except ora_ocr and ora_vote) may live in ASM disk groups.

   File            Size   Notes
   ora_ocr         100M   raw file for CRS cluster registry
   ora_vote        20M    raw file for CRS voting disk
   controlfile_01  500M
   controlfile_02  500M
   system_01       1G
   system_02       1G
   sysaux_01       800M   300M + 250M for each instance
   sysaux_02       800M
   srvcfg_01       500M   optional (for server management file)
   sp_file_01      100M   optional (for server parameter file)
   example_01      200M   optional
   cwmlite_01      200M   optional
   xdb_01          100M   optional
   odm_01          300M   optional
   indx_01         100M   optional
   tools_01        500M
   drsys_01        500M   optional (for interMedia)
   drsys_02        500M   optional (for interMedia)
   snaplogs_01     2G     optional (for replication)
   users_01        500M
   temp_01         2G
   temp_02         2G     (for default temp TS switching)
   undo_i1_01      2G
   undo_i1_02      2G
   undo_i1_03      2G     (for undo TS switching)
   undo_i1_04      2G     (for undo TS switching)
   undo_i2_01      2G
   undo_i2_02      2G
   undo_i2_03      2G     (for undo TS switching)
   undo_i2_04      2G     (for undo TS switching)
   redo_i1_01      100M
   redo_i1_02      100M
   redo_i1_03      100M
   redo_i1_04      100M
   redo_i1_05      100M
   redo_i1_06      100M   (for high trans. growth)
   redo_i1_07      100M   (for high trans. growth)
   redo_i1_08      100M   (for high trans. growth)
   redo_i1_09      100M   (for high trans. growth)
   redo_i1_10      100M   (for high trans. growth)
   redo_i2_01      100M
   redo_i2_02      100M
   redo_i2_03      100M
   redo_i2_04      100M
   redo_i2_05      100M
   redo_i2_06      100M   (for high trans. growth)
   redo_i2_07      100M   (for high trans. growth)
   redo_i2_08      100M   (for high trans. growth)
   redo_i2_09      100M   (for high trans. growth)
   redo_i2_10      100M   (for high trans. growth)
Shared Database Files for the Application