What is RAC


What is RAC? RAC stands for Real Application Clusters. It is a clustering solution from Oracle Corporation that ensures high availability of databases by providing instance failover and media failover features.

Mention the Oracle RAC software components: Oracle RAC is composed of two or more database instances. They are composed of memory structures and background processes, the same as a single-instance database. Oracle RAC instances use two services, GES (Global Enqueue Service) and GCS (Global Cache Service), that enable Cache Fusion. Oracle RAC instances are composed of the following background processes:

ACMS - Atomic Controlfile to Memory Service
GTX0-j - Global Transaction Process
LMON - Global Enqueue Service Monitor
LMD - Global Enqueue Service Daemon
LMS - Global Cache Service Process
LCK0 - Instance Enqueue Process
RMSn - Oracle RAC Management Processes
RSMN - Remote Slave Monitor

What is GRD? GRD stands for Global Resource Directory. The GES and GCS maintain records of the status of each datafile and each cached block using the Global Resource Directory. This process is referred to as Cache Fusion and helps ensure data integrity.

Give details on Cache Fusion: Oracle RAC is composed of two or more instances. When a block of data is read from a datafile by an instance within the cluster and another instance needs the same block, it is faster to ship the block image from the instance that holds the block in its SGA than to read it from disk. To enable inter-instance communication, Oracle RAC makes use of interconnects. The Global Enqueue Service (GES) monitors, and the instance enqueue process manages, Cache Fusion.

Give details on ACMS: ACMS stands for Atomic Controlfile to Memory Service. In an Oracle RAC environment, ACMS is an agent that ensures distributed SGA memory updates, i.e., SGA updates are globally committed on success or globally aborted in the event of a failure.

Give details on GTX0-j: This process provides transparent support for XA global transactions in a RAC environment. The database autotunes the number of these processes based on the workload of XA global transactions.

Give details on LMON: This process monitors global enqueues and resources across the cluster and performs global enqueue recovery operations. It is called the Global Enqueue Service Monitor.

Give details on LMD: This process is called the Global Enqueue Service Daemon. It manages incoming remote resource requests within each instance.

Give details on LMS: This process is called the Global Cache Service process. It maintains the status of datafiles and of each cached block by recording information in the Global Resource Directory (GRD). It also controls the flow of messages to remote instances, manages global data block access, and transmits block images between the buffer caches of different instances. This processing is part of the Cache Fusion feature.

Give details on LCK0: This process is called the Instance Enqueue Process. It manages non-Cache Fusion resource requests such as library and row cache requests.

Give details on RMSn: These processes are called Oracle RAC Management Processes. They perform manageability tasks for Oracle RAC, including the creation of Oracle RAC-related resources when new instances are added to the cluster.

Give details on RSMN: This process is called the Remote Slave Monitor. It manages background slave process creation and communication on remote instances. It is a background slave process that performs tasks on behalf of a coordinating process running in another instance.

What components in RAC must reside in shared storage? All datafiles, controlfiles, SPFILEs, and redo log files must reside on cluster-aware shared storage.

What is the significance of using cluster-aware shared storage in an Oracle RAC environment? All instances of an Oracle RAC database can access the datafiles, control files, SPFILEs, and redo log files when these files are hosted on cluster-aware shared storage, which is a group of shared disks.

Give a few examples of solutions that support cluster storage: ASM (Automatic Storage Management), raw disk devices, network file system (NFS), OCFS2 and OCFS (Oracle Cluster File System).

What is an interconnect network? An interconnect network is a private network that connects all of the servers in a cluster. The interconnect network uses a switch (or multiple switches) that only the nodes in the cluster can access.

How can we configure the cluster interconnect? Configure User Datagram Protocol (UDP) on Gigabit Ethernet for the cluster interconnect. On UNIX and Linux systems, UDP and RDS (Reliable Datagram Sockets) are the protocols used by Oracle Clusterware. Windows clusters use the TCP protocol.

Can we use crossover cables with Oracle Clusterware interconnects? No, crossover cables are not supported with Oracle Clusterware interconnects.

What is the use of the cluster interconnect? The cluster interconnect is used by Cache Fusion for inter-instance communication.
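As a quick check, the oifcfg utility lists the interfaces registered with Oracle Clusterware and flags which subnet serves as the interconnect. A minimal sketch follows; the interface names, subnets, and output lines are assumptions for illustration:

# List interfaces known to Oracle Clusterware; each subnet is marked
# as either "public" or "cluster_interconnect" (illustrative output):
[oracle@racnode1 ~]$ oifcfg getif
eth0  192.168.1.0  global  public
eth1  192.168.2.0  global  cluster_interconnect

# Register a subnet as the cluster interconnect (illustrative values):
[oracle@racnode1 ~]$ oifcfg setif -global eth1/192.168.2.0:cluster_interconnect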

How do users connect to a database in an Oracle RAC environment? Users can access a RAC database using a client/server configuration or through one or more middle tiers, with or without connection pooling. Users can use the Oracle services feature to connect to the database.

What is the use of a service in an Oracle RAC environment? Applications should use the services feature to connect to the Oracle database. Services enable us to define rules and characteristics to control how users and applications connect to database instances.

What are the characteristics controlled by the Oracle services feature? The characteristics include a unique name, workload balancing and failover options, and high availability characteristics.
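As an illustration, a service can be created and managed with srvctl. A hedged sketch, assuming the racdb database and racdb1/racdb2 instances used later in this document, and a hypothetical service named oltp with racdb1 as the preferred instance and racdb2 available for failover:

# Create, start, and check a service (names are illustrative):
[oracle@racnode1 ~]$ srvctl add service -d racdb -s oltp -r racdb1 -a racdb2
[oracle@racnode1 ~]$ srvctl start service -d racdb -s oltp
[oracle@racnode1 ~]$ srvctl status service -d racdb -s oltp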

What enables the load balancing of applications in RAC? Oracle Net Services enables the load balancing of application connections across all of the instances in an Oracle RAC database.

What is a virtual IP address or VIP? A virtual IP address or VIP is an alternate IP address that client connections use instead of the standard public IP address. To configure VIP addresses, we need to reserve a spare IP address for each node, and the IP addresses must use the same subnet as the public network.

What is the use of a VIP? If a node fails, the node's VIP address fails over to another node, on which the VIP address can accept TCP connections but cannot accept Oracle connections.

Give situations under which VIP address failover happens: VIP address failover happens when the node on which the VIP address runs fails, when all interfaces for the VIP address fail, or when all interfaces for the VIP address are disconnected from the network.

What is the significance of VIP address failover? When a VIP address failover happens, clients that attempt to connect to the VIP address receive a rapid connection-refused error; they do not have to wait for TCP connection timeout messages.
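To see where the VIP and the other node applications are currently running, the nodeapps status can be queried with srvctl; a hedged sketch assuming the racnode1 host used later in this document:

# Show the status of the VIP, GSD, listener, and ONS for a node:
[oracle@racnode1 ~]$ srvctl status nodeapps -n racnode1
# Show the configured nodeapps, including the VIP address, for a node:
[oracle@racnode1 ~]$ srvctl config nodeapps -n racnode1 -a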

What are the administrative tools used for Oracle RAC environments? An Oracle RAC cluster can be administered as a single image using OEM (Enterprise Manager), SQL*Plus, Server Control (SRVCTL), Cluster Verification Utility (CVU), DBCA, and NETCA.

How do we verify that RAC instances are running? Issue the following query from any one node, connecting through SQL*Plus:

$ sqlplus sys/sys as sysdba
SQL> select * from V$ACTIVE_INSTANCES;

The query gives the instance number under the INST_NUMBER column and host_name:instance_name under the INST_NAME column.

What is FAN? FAN stands for Fast Application Notification. It relates to events concerning instances, services, and nodes. It is a notification mechanism that Oracle RAC uses to notify other processes about configuration and service-level information, including service status changes such as UP or DOWN events. Applications can respond to FAN events and take immediate action.

Where can we apply FAN UP and DOWN events? FAN UP and FAN DOWN events can be applied to instances, services, and nodes.

State the use of FAN events in case of a cluster configuration change: During cluster configuration changes, the Oracle RAC high availability framework publishes a FAN event immediately when a state change occurs in the cluster, so applications can receive FAN events and react immediately. This prevents applications from having to poll the database and detect a problem only after such a state change.
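One common way to consume FAN events on the server side is a callout: any executable placed in $ORA_CRS_HOME/racg/usrco is invoked by the RAC high availability framework when an event is published, receiving the event payload as arguments. A minimal sketch, with a hypothetical log destination:

#!/bin/bash
# FAN server-side callout sketch. Deploy to $ORA_CRS_HOME/racg/usrco
# on each node and mark it executable. The framework passes the event
# payload as name=value arguments (event type, service, node, status, ...).
FAN_LOG=/var/log/fan_events.log   # assumption: any writable path works
echo "$(date) FAN event: $*" >> "$FAN_LOG"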

Why should we have a separate home for the ASM instance? It is good practice to keep the ASM home separate from the database home (ORACLE_HOME). This helps in upgrading and patching ASM and the Oracle database software independently of each other. Also, we can deinstall the Oracle database software independently of the ASM instance.

What is the advantage of using ASM? ASM is the Oracle-recommended storage option for RAC databases, as it maximizes performance by managing the storage configuration across the disks. ASM does this by distributing the database files across all of the available storage within the cluster database environment.

What is rolling upgrade? It is a new ASM feature in Database 11g. ASM instances in Oracle Database 11g (from release 11.1) can be upgraded or patched using the rolling upgrade feature. This enables us to patch or upgrade ASM nodes in a clustered environment without affecting database availability. During a rolling upgrade we can maintain a functional cluster while one or more of the nodes in the cluster are running different software versions.

Can rolling upgrade be used to upgrade from a 10g to an 11g database? No, it can be used only for Oracle Database 11g releases (from 11.1).
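For illustration, an ASM rolling upgrade is bracketed by a pair of ALTER SYSTEM commands issued in an ASM instance; a hedged sketch, with an illustrative version string:

$ sqlplus / as sysasm
SQL> ALTER SYSTEM START ROLLING MIGRATION TO '11.1.0.7';
-- ...patch or upgrade the ASM home on each node in turn...
SQL> ALTER SYSTEM STOP ROLLING MIGRATION;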

State the initialization parameters that must have the same value for every instance in an Oracle RAC database: Some initialization parameters are critical at database creation time and must have the same values. Their values must be specified in the SPFILE or PFILE for every instance. The parameters that must be identical on every instance are:

ACTIVE_INSTANCE_COUNT
ARCHIVE_LAG_TARGET
COMPATIBLE
CLUSTER_DATABASE
CLUSTER_DATABASE_INSTANCES
CONTROL_FILES
DB_BLOCK_SIZE
DB_DOMAIN
DB_FILES
DB_NAME
DB_RECOVERY_FILE_DEST
DB_RECOVERY_FILE_DEST_SIZE
DB_UNIQUE_NAME
INSTANCE_TYPE (RDBMS or ASM)
PARALLEL_MAX_SERVERS
REMOTE_LOGIN_PASSWORDFILE
UNDO_MANAGEMENT

Must DML_LOCKS and RESULT_CACHE_MAX_SIZE be identical on all instances? These parameters must be identical on all instances only if their values are set to zero.

What two parameters must be set at the time of starting up an ASM instance in a RAC environment? The parameters CLUSTER_DATABASE and INSTANCE_TYPE must be set.

Mention the components of Oracle Clusterware: Oracle Clusterware is made up of components such as the voting disk and the Oracle Cluster Registry (OCR).

What is a CRS resource? Oracle Clusterware is used to manage high-availability operations in a cluster. Anything that Oracle Clusterware manages is known as a CRS resource. Some examples of CRS resources are a database, an instance, a service, a listener, a VIP address, and an application process.

What is the use of the OCR, and how does Oracle Clusterware manage CRS resources? Oracle Clusterware manages CRS resources based on the configuration information of CRS resources stored in the OCR (Oracle Cluster Registry).
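As an illustration of how such parameters appear in a shared SPFILE/PFILE, entries prefixed with * apply to all instances while entries prefixed with a SID are per-instance. A hedged sketch reusing the racdb names from later in this document; all values are illustrative:

*.cluster_database=TRUE
*.compatible='10.2.0.4'
*.db_name='racdb'
*.control_files='/u02/oradata/racdb/control01.ctl'
*.undo_management='AUTO'
racdb1.instance_number=1
racdb2.instance_number=2
racdb1.undo_tablespace='UNDOTBS1'
racdb2.undo_tablespace='UNDOTBS2'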

Name some Oracle Clusterware tools and their uses:

OIFCFG - allocating and deallocating network interfaces (and identifying the interconnect being used)
OCRCONFIG - command-line tool for managing the Oracle Cluster Registry
OCRDUMP - dumps the contents of the OCR to a text file
CVU - Cluster Verification Utility, used to verify the state of cluster components

What are the modes of deleting instances from Oracle Real Application Clusters databases? We can delete instances using silent mode or interactive mode in DBCA (Database Configuration Assistant).
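A hedged sketch of the silent mode, assuming the racdb/racnode2/racdb2 names used later in this document and a DBCA release that supports -deleteInstance (flag names vary between versions; the password is illustrative):

$ dbca -silent -deleteInstance -nodeList racnode2 \
      -gdbName racdb.idevelopment.info -instanceName racdb2 \
      -sysDBAUserName sys -sysDBAPassword change_on_install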

How do we remove ASM from an Oracle RAC environment? We need to stop and delete the instance in the node first, in interactive or silent mode. After that, ASM can be removed using the srvctl tool as follows:

srvctl stop asm -n node_name
srvctl remove asm -n node_name

We can verify that ASM has been removed by issuing the following command:

srvctl config asm -n node_name

How do we verify that an instance has been removed from the OCR after deleting an instance? Issue the following srvctl command and then check the CRS resources:

srvctl config database -d database_name
cd CRS_HOME/bin
./crs_stat

How do we verify an existing backup of the OCR? We can verify the current backup of the OCR using the following command:

ocrconfig -showbackup
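In addition to the automatic backups that -showbackup lists, a logical export of the OCR can be taken manually; a hedged sketch run as root, with an illustrative destination path:

[root@racnode1 ~]# ocrconfig -export /u01/app/crs/cdata/ocr_export.dmp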

What are the performance views in an Oracle RAC environment? We have V$ views that are instance specific. In addition, we have GV$ views, called global views, that have an INST_ID column of numeric data type. GV$ views obtain information from the individual V$ views.
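For example, the status of every instance in the cluster can be read from a single node through a GV$ view:

SQL> SELECT inst_id, instance_name, host_name, status FROM gv$instance;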

What are the types of connection load balancing? There are two types of connection load balancing: server-side load balancing and client-side load balancing.

What is the difference between server-side and client-side connection load balancing? Client-side load balancing happens on the client side: the client distributes connection requests across the listeners specified in its address list. In server-side load balancing, the listener uses a load-balancing advisory to redirect connections to the instance providing the best service.
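A hedged tnsnames.ora sketch of client-side load balancing; the -vip host names are assumptions built on the racnode1/racnode2 hosts used later in this document (server-side balancing additionally requires the instances to register with the other listeners via the REMOTE_LISTENER parameter):

RACDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = ON)
      (FAILOVER = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = racnode1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = racnode2-vip)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = racdb.idevelopment.info)
    )
  )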

Give the usage of srvctl:

srvctl start instance -d db_name -i "inst_name_list" [-o start_options]
srvctl stop instance -d name -i "inst_name_list" [-o stop_options]
srvctl stop instance -d orcl -i "orcl3,orcl4" -o immediate
srvctl start database -d name [-o start_options]
srvctl stop database -d name [-o stop_options]
srvctl start database -d orcl -o mount

Cluster daemons (daemon means background process):

1) crsd - maintains the OCR file and manages the node applications (ONS, VIP, GSD).
2) cssd - maintains the voting disk file and monitors node membership; whenever the heartbeat (HB) between the nodes fails, the cssd process reboots the node.
3) evmd - the event manager daemon; publishes diagnostic/event information.

http://www.idevelopment.info/data/Oracle/DBA_tips/Oracle10gRAC/CLUSTER_65.shtml
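The health of these daemons can be checked with crsctl; a hedged sketch using 10g syntax:

# Check all three Oracle Clusterware daemons at once:
[root@racnode1 ~]# crsctl check crs
# Or check them individually:
[root@racnode1 ~]# crsctl check cssd
[root@racnode1 ~]# crsctl check crsd
[root@racnode1 ~]# crsctl check evmd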

Overview

Oracle Clusterware 10g, formerly known as Cluster Ready Services (CRS), is software that, when installed on servers running the same operating system, enables the servers to be bound together to operate and function as a single server or cluster. This infrastructure simplifies the requirements for an Oracle Real Application Clusters (RAC) database by providing cluster software that is tightly integrated with the Oracle Database. Oracle Clusterware requires two critical clusterware components: a voting disk to record node membership information and the Oracle Cluster Registry (OCR) to record cluster configuration information.

Voting Disk

The voting disk is a shared partition that Oracle Clusterware uses to verify cluster node membership and status. Oracle Clusterware uses the voting disk to determine which instances are members of a cluster by way of a health check, and it arbitrates cluster ownership among the instances in case of network failures. The primary function of the voting disk is to manage node membership and prevent what is known as Split Brain Syndrome, in which two or more instances attempt to control the RAC database. This can occur in cases where there is a break in communication between nodes through the interconnect.

The voting disk must reside on shared disk(s) accessible by all of the nodes in the cluster. For high availability, Oracle recommends that you have multiple voting disks. Oracle Clusterware can be configured to maintain multiple voting disks (multiplexing), but you must have an odd number of voting disks, such as three, five, and so on. Oracle Clusterware supports a maximum of 32 voting disks. If you define a single voting disk, then you should use external mirroring to provide redundancy.

A node must be able to access more than half of the voting disks at any time. For example, if you have five voting disks configured, then a node must be able to access at least three of the voting disks at any time. If a node cannot access the minimum required number of voting disks, it is evicted, or removed, from the cluster. After the cause of the failure has been corrected and access to the voting disks has been restored, you can instruct Oracle Clusterware to recover the failed node and restore it to the cluster.

Oracle Cluster Registry (OCR)

The OCR maintains cluster configuration information as well as configuration information about any cluster database within the cluster. It is the repository of configuration information for the cluster and manages information such as the cluster node list and instance-to-node mapping information. This configuration information is used by many of the processes that make up the CRS, as well as by other cluster-aware applications, which use this repository to share information among themselves. Some of the main components included in the OCR are:

• Node membership information
• Database instance, node, and other mapping information
• ASM (if configured)
• Application resource profiles such as VIP addresses, services, etc.
• Service characteristics
• Information about processes that Oracle Clusterware controls
• Information about any third-party applications controlled by CRS (10g R2 and later)

The OCR stores configuration information in a series of key-value pairs within a directory tree structure. To view the contents of the OCR in a human-readable format, run the ocrdump command. This will dump the contents of the OCR into an ASCII text file in the current directory named OCRDUMPFILE.

The OCR must reside on a shared disk(s) that is accessible by all of the nodes in the cluster. Oracle Clusterware 10g Release 2 allows you to multiplex the OCR and Oracle recommends that you use this feature to ensure cluster high availability. Oracle Clusterware allows for a maximum of two OCR locations; one is the primary and the second is an OCR mirror. If you define a single OCR, then you should use external mirroring to provide redundancy. You can replace a failed OCR online, and you can update the OCR through supported APIs such as Enterprise Manager, the Server Control Utility (SRVCTL), or the Database Configuration Assistant (DBCA). This article provides a detailed look at how to administer the two critical Oracle Clusterware components — the voting disk and the Oracle Cluster Registry (OCR). The examples described in this guide were tested with Oracle RAC 10g Release 2 (10.2.0.4) on the Linux x86 platform.
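For reference, OCR locations are added or moved with ocrconfig as root. A hedged sketch using the raw device names prepared later in this article:

# Add (or replace) the OCR mirror:
[root@racnode1 ~]# ocrconfig -replace ocrmirror /dev/raw/raw2
# Replace the primary OCR location:
[root@racnode1 ~]# ocrconfig -replace ocr /dev/raw/raw1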

Example Configuration

The example configuration used in this article consists of a two-node RAC with a clustered database named racdb.idevelopment.info running Oracle RAC 10g Release 2 on the Linux x86 platform. The two node names are racnode1 and racnode2, each hosting a single Oracle instance named racdb1 and racdb2 respectively. For a detailed guide on building the example clustered database environment, please see:

Building an Inexpensive Oracle RAC 10g Release 2 on Linux - (CentOS 5.3 / iSCSI)

The example Oracle Clusterware environment is configured with a single voting disk and a single OCR file on an OCFS2 clustered file system. Note that the voting disk is owned by the oracle user in the oinstall group with 0644 permissions, while the OCR file is owned by root in the oinstall group with 0640 permissions:

[oracle@racnode1 ~]$ ls -l /u02/oradata/racdb
total 16608
-rw-r--r-- 1 oracle oinstall 10240000 Aug 26 22:43 CSSFile
drwxr-xr-x 2 oracle oinstall     3896 Aug 26 23:45 dbs/
-rw-r----- 1 root   oinstall  6836224 Sep  3 23:47 OCRFile

Check Current OCR File

[oracle@racnode1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4660
         Available space (kbytes) :     257460
         ID                       :    1331197
         Device/File Name         : /u02/oradata/racdb/OCRFile
                                    Device/File integrity check succeeded

                                    Device/File not configured

         Cluster registry integrity check succeeded

Check Current Voting Disk

[oracle@racnode1 ~]$ crsctl query css votedisk
 0.     0    /u02/oradata/racdb/CSSFile

located 1 votedisk(s).
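Although this environment starts with a single voting disk, additional voting disks can be added with crsctl; a hedged sketch using 10g syntax and the raw devices prepared below (in 10g, the -force flag is used with Oracle Clusterware shut down on all nodes):

[root@racnode1 ~]# crsctl add css votedisk /dev/raw/raw4 -force
[root@racnode1 ~]# crsctl add css votedisk /dev/raw/raw5 -force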

Preparation

To prepare for the examples used in this guide, five new iSCSI volumes were created on the SAN and will be bound to RAW devices on all nodes in the RAC cluster. These five new volumes will be used to demonstrate how to move the current voting disk and OCR file from an OCFS2 file system to RAW devices:

Five New iSCSI Volumes and their Local Device Name Mappings

iSCSI Target Name                           Local Device Name           Disk Size
iqn.2006-01.com.openfiler:racdb.ocr1        /dev/iscsi/ocr1/part        512 MB
iqn.2006-01.com.openfiler:racdb.ocr2        /dev/iscsi/ocr2/part        512 MB
iqn.2006-01.com.openfiler:racdb.voting1     /dev/iscsi/voting1/part     32 MB
iqn.2006-01.com.openfiler:racdb.voting2     /dev/iscsi/voting2/part     32 MB
iqn.2006-01.com.openfiler:racdb.voting3     /dev/iscsi/voting3/part     32 MB

After creating the new iSCSI volumes on the SAN, they now need to be configured for access and bound to RAW devices by all Oracle RAC nodes in the database cluster.

1. From all Oracle RAC nodes in the cluster as root, discover the five new iSCSI volumes from the SAN which will be used to store the voting disks and OCR files.

[root@racnode1 ~]# iscsiadm -m discovery -t sendtargets -p openfiler1-san
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.ocr1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.ocr2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.voting1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.voting2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.voting3

[root@racnode2 ~]# iscsiadm -m discovery -t sendtargets -p openfiler1-san
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.asm2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.crs
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.ocr1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.ocr2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.voting1
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.voting2
192.168.2.195:3260,1 iqn.2006-01.com.openfiler:racdb.voting3

2. Manually login to the new iSCSI targets from all Oracle RAC nodes in the cluster.

[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.ocr1 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.ocr2 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting1 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting2 -p 192.168.2.195 -l
[root@racnode1 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting3 -p 192.168.2.195 -l

[root@racnode2 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.ocr1 -p 192.168.2.195 -l
[root@racnode2 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.ocr2 -p 192.168.2.195 -l
[root@racnode2 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting1 -p 192.168.2.195 -l
[root@racnode2 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting2 -p 192.168.2.195 -l
[root@racnode2 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:racdb.voting3 -p 192.168.2.195 -l

3. Create a single primary partition on each of the five new iSCSI volumes that spans the entire disk. Perform this from only one of the Oracle RAC nodes in the cluster:

[root@racnode1 ~]# fdisk /dev/iscsi/ocr1/part
[root@racnode1 ~]# fdisk /dev/iscsi/ocr2/part
[root@racnode1 ~]# fdisk /dev/iscsi/voting1/part
[root@racnode1 ~]# fdisk /dev/iscsi/voting2/part
[root@racnode1 ~]# fdisk /dev/iscsi/voting3/part

4. Re-scan the SCSI bus from all Oracle RAC nodes in the cluster:

[root@racnode2 ~]# partprobe

5. Create a shell script (/usr/local/bin/setup_raw_devices.sh) on all Oracle RAC nodes in the cluster to bind the five Oracle Clusterware component devices to RAW devices as follows:

#!/bin/bash
# +---------------------------------------------------------+
# | FILE: /usr/local/bin/setup_raw_devices.sh               |
# +---------------------------------------------------------+

# +---------------------------------------------------------+
# | Bind OCR files to RAW device files.                     |
# +---------------------------------------------------------+
/bin/raw /dev/raw/raw1 /dev/iscsi/ocr1/part1
/bin/raw /dev/raw/raw2 /dev/iscsi/ocr2/part1
sleep 3
/bin/chown root:oinstall /dev/raw/raw1
/bin/chown root:oinstall /dev/raw/raw2
/bin/chmod 0640 /dev/raw/raw1
/bin/chmod 0640 /dev/raw/raw2

# +---------------------------------------------------------+
# | Bind voting disks to RAW device files.                  |
# +---------------------------------------------------------+
/bin/raw /dev/raw/raw3 /dev/iscsi/voting1/part1
/bin/raw /dev/raw/raw4 /dev/iscsi/voting2/part1
/bin/raw /dev/raw/raw5 /dev/iscsi/voting3/part1
sleep 3
/bin/chown oracle:oinstall /dev/raw/raw3
/bin/chown oracle:oinstall /dev/raw/raw4
/bin/chown oracle:oinstall /dev/raw/raw5
/bin/chmod 0644 /dev/raw/raw3
/bin/chmod 0644 /dev/raw/raw4
/bin/chmod 0644 /dev/raw/raw5

6. From all Oracle RAC nodes in the cluster, change the permissions of the new shell script to make it executable:

[root@racnode1 ~]# chmod 755 /usr/local/bin/setup_raw_devices.sh
[root@racnode2 ~]# chmod 755 /usr/local/bin/setup_raw_devices.sh

7. Manually execute the new shell script from all Oracle RAC nodes in the cluster to bind the OCR and voting disk volumes to RAW devices:

[root@racnode1 ~]# /usr/local/bin/setup_raw_devices.sh
/dev/raw/raw1: bound to major 8, minor 97
/dev/raw/raw2: bound to major 8, minor 17
/dev/raw/raw3: bound to major 8, minor 1
/dev/raw/raw4: bound to major 8, minor 49
/dev/raw/raw5: bound to major 8, minor 33

[root@racnode2 ~]# /usr/local/bin/setup_raw_devices.sh
/dev/raw/raw1: bound to major 8, minor 65
/dev/raw/raw2: bound to major 8, minor 49
/dev/raw/raw3: bound to major 8, minor 33
/dev/raw/raw4: bound to major 8, minor 1
/dev/raw/raw5: bound to major 8, minor 17

8. Check that the character (RAW) devices were created from all Oracle RAC nodes in the cluster:

[root@racnode1 ~]# ls -l /dev/raw
total 0
crw-r----- 1 root   oinstall 162, 1 Sep 24 00:48 raw1
crw-r----- 1 root   oinstall 162, 2 Sep 24 00:48 raw2
crw-r--r-- 1 oracle oinstall 162, 3 Sep 24 00:48 raw3
crw-r--r-- 1 oracle oinstall 162, 4 Sep 24 00:48 raw4
crw-r--r-- 1 oracle oinstall 162, 5 Sep 24 00:48 raw5

[root@racnode2 ~]# ls -l /dev/raw
total 0
crw-r----- 1 root   oinstall 162, 1 Sep 24 00:48 raw1
crw-r----- 1 root   oinstall 162, 2 Sep 24 00:48 raw2
crw-r--r-- 1 oracle oinstall 162, 3 Sep 24 00:48 raw3
crw-r--r-- 1 oracle oinstall 162, 4 Sep 24 00:48 raw4
crw-r--r-- 1 oracle oinstall 162, 5 Sep 24 00:48 raw5

[root@racnode1 ~]# raw -qa
/dev/raw/raw1: bound to major 8, minor 97
/dev/raw/raw2: bound to major 8, minor 17
/dev/raw/raw3: bound to major 8, minor 1
/dev/raw/raw4: bound to major 8, minor 49
/dev/raw/raw5: bound to major 8, minor 33

[root@racnode2 ~]# raw -qa
/dev/raw/raw1: bound to major 8, minor 65
/dev/raw/raw2: bound to major 8, minor 49
/dev/raw/raw3: bound to major 8, minor 33
/dev/raw/raw4: bound to major 8, minor 1
/dev/raw/raw5: bound to major 8, minor 17

9. Include the new shell script in /etc/rc.local to run on each boot from all Oracle RAC nodes in the cluster:

[root@racnode1 ~]# echo "/usr/local/bin/setup_raw_devices.sh" >> /etc/rc.local
[root@racnode2 ~]# echo "/usr/local/bin/setup_raw_devices.sh" >> /etc/rc.local

10. Once the raw devices are created, use the dd command to zero out each device and make sure no data is written to the raw devices. Only perform this action from one of the Oracle RAC nodes in the cluster:

[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw1
dd: writing to '/dev/raw/raw1': No space left on device
1048516+0 records in
1048515+0 records out
536839680 bytes (537 MB) copied, 773.145 seconds, 694 kB/s

[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw2
dd: writing to '/dev/raw/raw2': No space left on device
1048516+0 records in
1048515+0 records out
536839680 bytes (537 MB) copied, 769.974 seconds, 697 kB/s

[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw3
dd: writing to '/dev/raw/raw3': No space left on device
65505+0 records in
65504+0 records out
33538048 bytes (34 MB) copied, 47.9176 seconds, 700 kB/s

[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw4
dd: writing to '/dev/raw/raw4': No space left on device
65505+0 records in
65504+0 records out
33538048 bytes (34 MB) copied, 47.9915 seconds, 699 kB/s

[root@racnode1 ~]# dd if=/dev/zero of=/dev/raw/raw5
dd: writing to '/dev/raw/raw5': No space left on device
65505+0 records in
65504+0 records out
33538048 bytes (34 MB) copied, 48.2684 seconds, 695 kB/s

It is highly recommended to take a backup of the voting disk and OCR file before making any changes! Instructions are included in this guide on how to perform backups of the voting disk and OCR file.

CRS_home

The Oracle Clusterware binaries used in this article (i.e. crs_stat, ocrcheck, crsctl, etc.) are executed from the Oracle Clusterware home directory, which for the purpose of this article is /u01/app/crs. The environment variable $ORA_CRS_HOME is set to this directory for both the oracle and root user accounts and is also included in the $PATH:

[root@racnode1 ~]# echo $ORA_CRS_HOME
/u01/app/crs

[root@racnode1 ~]# which ocrcheck
/u01/app/crs/bin/ocrcheck
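As a hedged sketch of such backups (destination paths are illustrative): the OCR is covered by automatic backups plus a manual logical export, while in 10g the voting disk can be backed up with a simple dd copy of the device or file reported by crsctl query css votedisk:

# OCR: list automatic backups and take a manual logical export:
[root@racnode1 ~]# ocrconfig -showbackup
[root@racnode1 ~]# ocrconfig -export /u02/backup/ocr_export.dmp

# Voting disk: copy the device/file reported by crsctl:
[root@racnode1 ~]# dd if=/u02/oradata/racdb/CSSFile of=/u02/backup/CSSFile.bak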

Administering the OCR File

View OCR Configuration Information

Two methods exist to verify how many OCR files are configured for the cluster, as well as their location. If the cluster is up and running, use the ocrcheck utility as either the oracle or root user account:

[oracle@racnode1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       4660
         Available space (kbytes) :     257460
         ID                       :    1331197
         Device/File Name         : /u02/oradata/racdb/OCRFile (primary)
