RAC for Beginners
Real Application Clusters (RAC)
Oracle RAC, introduced with Oracle9i, is the successor to Oracle Parallel Server (OPS). Oracle RAC allows multiple instances to access the same database (storage) simultaneously. RAC provides fault tolerance, load balancing, and performance benefits by allowing the system to scale out; at the same time, since all nodes access the same database, the failure of one instance will not cause loss of access to the database.

Oracle RAC 10g uses a shared disk subsystem. All nodes in the cluster must be able to access all of the data, redo log files, control files and parameter files for all nodes in the cluster. The data disks must be globally available in order to allow all nodes to access the database. Each node has its own redo log files and UNDO tablespace, but the other nodes must be able to access them (and the shared control file) in order to recover that node in the event of a system failure.

The difference between Oracle RAC and OPS is the addition of Cache Fusion. With OPS, a request for data from one node to another required the data to be written to disk first; only then could the requesting node read that data. With Cache Fusion, data is passed along a high-speed interconnect using a sophisticated locking algorithm. With Oracle RAC 10g, the data files, redo log files, control files, and archived log files reside on shared storage on raw devices, a NAS, ASM, or a clustered file system.

Oracle RAC is composed of two or more database instances. Each instance is composed of the same memory structures and background processes as a single-instance database. Oracle RAC instances use two services, the Global Enqueue Service (GES) and the Global Cache Service (GCS), which together enable Cache Fusion. Oracle RAC instances are also composed of the following additional background processes:
ACMS - Atomic Controlfile to Memory Service
GTX0-j - Global Transaction Process
LMON - Global Enqueue Service Monitor
LMD - Global Enqueue Service Daemon
LMS - Global Cache Service Process
LCK0 - Instance Enqueue Process
RMSn - Oracle RAC Management Processes
RSMN - Remote Slave Monitor
LMON
The Global Enqueue Service Monitor (LMON) monitors the entire cluster to manage global resources. LMON manages instance and process failures and the associated recovery for the Global Cache Service (GCS) and Global Enqueue Service (GES). In particular, LMON handles the part of recovery associated with global resources. LMON-provided services are also known as Cluster Group Services (CGS). This process monitors global enqueues and resources across the cluster and performs global enqueue recovery operations.

LCKx
The LCK process, also called the instance enqueue process, manages instance global enqueue requests and cross-instance call operations. It manages non-Cache Fusion resource requests such as library and row cache requests.

LMSx
The Global Cache Service Processes (LMSx) handle remote Global Cache Service (GCS) messages. Current Real Application Clusters software provides for up to 10 Global Cache Service Processes; the number of LMSx processes varies depending on the amount of messaging traffic among nodes in the cluster, and the workload is automatically shared and balanced when there are multiple LMSx processes. The LMSx handles the acquisition interrupt and blocking interrupt requests from remote instances for Global Cache Service resources. For cross-instance consistent read requests, the LMSx creates a consistent read version of the block and sends it to the requesting instance. This process maintains the status of data files and of each cached block by recording information in the Global Resource Directory (GRD). It also controls the flow of messages to remote instances, manages global data block access, and transmits block images between the buffer caches of different instances. This processing is part of the Cache Fusion feature.

LMDx
The Global Enqueue Service Daemon (LMD) is the resource agent process that manages Global Enqueue Service (GES) resource requests. The LMD process also handles deadlock detection for Global Enqueue Service (GES) requests. Remote resource requests are requests originating from another instance; the LMD process manages these incoming remote resource requests within each instance.
DIAG
The diagnosability daemon (DIAG) is a Real Application Clusters background process that captures diagnostic data on instance process failures. No user control is required for this daemon.

ACMS
ACMS stands for Atomic Controlfile to Memory Service. In an Oracle RAC environment, ACMS is an agent that ensures distributed SGA memory updates are either globally committed on success or globally aborted in the event of a failure.

GTX0-j
This process provides transparent support for XA global transactions in a RAC environment. The database autotunes the number of these processes based on the workload of XA global transactions.

RMSn
The Oracle RAC Management Processes (RMSn) perform manageability tasks for Oracle RAC. Tasks include the creation of resources related to Oracle RAC when new instances are added to the cluster.

RSMN
The Remote Slave Monitor (RSMN) manages background slave process creation and communication on remote instances. This is a background slave process that performs tasks on behalf of a coordinating process running in another instance.

CRS
CRS (Cluster Ready Services) is a new feature for 10g Real Application Clusters that provides a standard cluster interface on all platforms and performs new high availability operations not available in previous versions. CRS manages cluster database functions including node membership, group services, global resource management, and high availability. CRS serves as the clusterware software for all platforms. It can be the only clusterware, or it can run on top of vendor clusterware such as Sun Cluster, HP Serviceguard, etc. CRS automatically starts the following resources:
· Nodeapps
  o Virtual Internet Protocol (VIP) address for each node
  o Global Services Daemon
  o Oracle Net Listeners
  o Oracle Notification Services (ONS)
· Database Instance
· Services

Oracle Clusterware (Cluster Ready Services in 10g / Cluster Manager in 9i) provides the infrastructure that binds multiple nodes so that they operate as a single server. Clusterware monitors all components, such as instances and listeners. There are two important components in Oracle Clusterware: the Voting Disk and the OCR (Oracle Cluster Registry).

OCR & Voting Disk
With 10g RAC, Oracle provides its own clusterware stack called CRS. The main file components of CRS are the Oracle Cluster Registry (OCR) and the Voting Disk. The OCR contains cluster and database configuration information for RAC and Cluster Ready Services (CRS). Some of this information includes the cluster node list, the cluster database instance-to-node mapping information, and the CRS application resource profiles. The OCR also contains configuration details for the cluster database and for high availability resources such as services and Virtual IP (VIP) addresses. The Voting Disk is used by the Oracle cluster manager in various layers. The Node Monitor (NM) uses the Voting Disk for the disk heartbeat, which is essential in the detection and resolution of cluster "split brain".

Cache Fusion
Oracle RAC is composed of two or more instances. When a block of data has been read from a data file by one instance within the cluster and another instance needs the same block, it is faster to ship the block image from the instance that already has the block in its SGA than to read it from disk. To enable inter-instance communication, Oracle RAC makes use of the interconnect. The Global Enqueue Service (GES) and the Global Cache Service (GCS), together with the instance enqueue process, manage Cache Fusion.

Cache Fusion and Global Cache Service (GCS)
Memory-to-memory copies between buffer caches over high-speed interconnects
· fast remote access times
· memory transfers for write or read access
· transfers for all block types (e.g. data, index, undo, headers)
· cache coherency across the cluster
· globally managed access permissions to cached data
· GCS always knows whether and where a data block is cached
· a local cache miss may result in a remote cache hit or a disk read
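As a quick sanity check of the pieces described above, the RAC-specific background processes can be listed from any instance and the clusterware components can be inspected from the operating system. This is only an illustrative check, not part of the architecture description; exact process names vary by version and platform.

SQL> select inst_id, name, description
  2  from gv$bgprocess
  3  where paddr <> '00'
  4  and name in ('LMON','LMD0','LMS0','LCK0','DIAG')
  5  order by inst_id, name;

$ crsctl check crs            # verify that the CRS, CSS and EVM daemons are running
$ crsctl query css votedisk   # list the configured voting disks
$ ocrcheck                    # report the OCR location and integrity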
Implementing Data Guard on 11g RAC
Creating RAC Standby Database
Configuration Details:
• Primary Host Names are RAC_PRIM01 and RAC_PRIM02
• Standby Host Names are RAC_STDBY01 and RAC_STDBY02
• The primary database is RAC_PRIM
• Virtual Names are RAC_PRIM01-vip, RAC_PRIM02-vip, RAC_STDBY01-vip and RAC_STDBY02-vip
• Both the primary and standby databases use ASM for storage
• The following ASM disk groups are being used: +DATA (for data) and +FRA (for recovery/flashback)
• The standby database will be referred to as RAC_STDBY
• Oracle Managed Files will be used
• ORACLE_BASE is set to /u01/app/oracle
1. Configure Primary and Standby sites
For a better and simpler configuration of Data Guard, it is recommended that the Primary and Standby machines have exactly the same structure, i.e.
• ORACLE_HOME points to the same mount point on both sites.
• ORACLE_BASE/admin points to the same mount point on both sites.
• ASM disk groups are the same on both sites.
2. Install Oracle Software on each site
• Oracle Clusterware
• Oracle database executables for use by ASM
• Oracle database executables for use by the RDBMS
3. Server Names / VIPs
The Oracle Real Application Clusters 11g virtual server names and IP addresses are used and maintained by Oracle Cluster Ready Services (CRS). Note: both short and fully qualified names will exist.

Server Name / Alias        Purpose
RAC_PRIM01.local           Public Host Name (PRIMARY Node 1)
RAC_PRIM02.local           Public Host Name (PRIMARY Node 2)
RAC_STDBY01.local          Public Host Name (STANDBY Node 1)
RAC_STDBY02.local          Public Host Name (STANDBY Node 2)
RAC_PRIM01-vip.local       Public Virtual Name (PRIMARY Node 1)
RAC_PRIM02-vip.local       Public Virtual Name (PRIMARY Node 2)
RAC_STDBY01-vip.local      Public Virtual Name (STANDBY Node 1)
RAC_STDBY02-vip.local      Public Virtual Name (STANDBY Node 2)

4. Configure Oracle Networking
4.1 Configure Listener on Each Site
Each site will have a listener defined which will run from the ASM Oracle Home. The following listeners have been defined in this example configuration:
Primary Role: Listener_RAC_PRIM01, Listener_RAC_PRIM02
Standby Role: Listener_RAC_STDBY01, Listener_RAC_STDBY02
4.2 Static Registration
Oracle must be able to access all instances of both databases whether they are in an open, mounted or closed state. This means that they must be statically registered with the listener. These entries will have a special name which will be used to facilitate the use of the Data Guard Broker, discussed later.
4.3 Sample Listener.ora
LISTENER_RAC_STDBY01 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY01-vip)(PORT = 1521)(IP = FIRST))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY01)(PORT = 1521)(IP = FIRST))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
      )
    )
  )

SID_LIST_LISTENER_RAC_STDBY01 =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = RAC_STDBY_dgmgrl.local)
      (SID_NAME = RAC_STDBY1)
      (ORACLE_HOME = $ORACLE_HOME)
    )
  )
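After the static entries have been added, each listener can be reloaded so that the new SID_LIST is picked up. As a brief illustration (the listener name must match the one defined on that node):

$ lsnrctl reload LISTENER_RAC_STDBY01
$ lsnrctl status LISTENER_RAC_STDBY01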
4.4 Configure TNS entries on each site
To make things simpler, the same network service names will be generated on each site. These service names are:

RAC_PRIM1_DGMGRL.local    Points to the RAC_PRIM instance on RAC_PRIM01 using the service name RAC_PRIM_DGMGRL.local. This can be used for creating the standby database.
RAC_PRIM1.local           Points to the RAC_PRIM instance on RAC_PRIM01 using the service name RAC_PRIM.local.
RAC_PRIM2.local           Points to the RAC_PRIM instance on RAC_PRIM02 using the service name RAC_PRIM.local.
RAC_PRIM.local            Points to the RAC_PRIM database, i.e. contains all database instances.
RAC_STDBY1_DGMGRL.local   Points to the RAC_STDBY instance on RAC_STDBY01 using the service name RAC_STDBY_DGMGRL.local. This will be used for the database duplication.
RAC_STDBY1.local          Points to the RAC_STDBY instance on RAC_STDBY01 using the service name RAC_STDBY.local.
RAC_STDBY2.local          Points to the RAC_STDBY instance on RAC_STDBY02 using the service name RAC_STDBY.local.
RAC_STDBY.local           Points to the RAC_STDBY database, i.e. contains all the database instances.
listener_DB_UNIQUE_NAME.local   A TNS alias entry consisting of two address lines. The first address line is the address of the listener on Node 1 and the second is the address of the listener on Node 2. Placing both of the above listeners in the address list will ensure that the database automatically registers with both nodes.
There must be two sets of entries: one for the standby nodes, called listener_RAC_STDBY, and one for the primary nodes, called listener_RAC_PRIM.

Sample tnsnames.ora (RAC_PRIM01)
RAC_PRIM1_DGMGRL.local =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM01-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RAC_PRIM_DGMGRL.local)
    )
  )
RAC_PRIM1.local =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM01-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RAC_PRIM.local)
      (INSTANCE_NAME = RAC_PRIM1)
    )
  )
RAC_PRIM2.local =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM02-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RAC_PRIM.local)
      (INSTANCE_NAME = RAC_PRIM2)
    )
  )
RAC_PRIM.local =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM01-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM02-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RAC_PRIM.local)
    )
  )
RAC_STDBY1_DGMGRL.local =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY01-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RAC_STDBY_DGMGRL.local)
    )
  )
RAC_STDBY2.local =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY02-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RAC_STDBY.local)
      (INSTANCE_NAME = RAC_STDBY2)
    )
  )
RAC_STDBY1.local =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY01-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RAC_STDBY.local)
      (INSTANCE_NAME = RAC_STDBY1)
    )
  )
RAC_STDBY.local =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY01-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY02-vip)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = RAC_STDBY.local)
    )
  )
LISTENERS_RAC_PRIM.local =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM01-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_PRIM02-vip)(PORT = 1521))
  )
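Once the tnsnames.ora entries are in place on both sites, they can be verified with tnsping from each node, for example:

$ tnsping RAC_PRIM.local
$ tnsping RAC_STDBY1_DGMGRL.local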
4.5 Configure ASM on each Site
Certain initialisation parameters are only applicable when a database is running in either a standby or a primary database role. Defining ALL of the parameters on BOTH sites ensures that, if the roles are switched (the primary becomes the standby and the standby becomes the new primary), no further configuration is necessary. Some of the parameters are, however, node-specific; therefore there will be one set of parameters for the Primary site nodes and one for the Standby site nodes.
4.6 Primary Site Preparation
The following initialisation parameters should be set on the primary site prior to duplication. Whilst they are only applicable to the primary site, they will be equally configured on the standby site.

dg_broker_config_file1   Point this to a file within the ASM disk group. Note: the file need not exist.
dg_broker_config_file2   Point this to a file within the ASM disk group. Note: the file need not exist.
db_block_checksum        Enables data block integrity checking (OPTIONAL).
db_block_checking        Enables data block consistency checking (OPTIONAL).

As long as the performance implications allow and existing SLAs are not violated, it should be mandatory to have db_block_checksum and db_block_checking enabled.

Additionally, the following must also be configured:

Archive Log Mode
The primary database must be placed into archive log mode.

Forced Logging
The standby database is kept up to date by applying, on the standby site, transactions which have been recorded in the online redo logs. In some environments that have not previously utilised Data Guard, the NOLOGGING option may have been used to enhance database performance. Use of this feature in a Data Guard protected environment is strongly undesirable. From Oracle version 9.2, Oracle introduced a method to prevent NOLOGGING transactions from occurring, known as forced logging mode. To enable forced logging, issue the following command on the primary database:
alter database force logging;
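As a brief sketch of the primary-side settings described above (the broker file names below are hypothetical; any location inside the ASM disk group will do):

alter system set dg_broker_config_file1='+DATA/RAC_PRIM/dr1RAC_PRIM.dat' scope=both sid='*';
alter system set dg_broker_config_file2='+DATA/RAC_PRIM/dr2RAC_PRIM.dat' scope=both sid='*';

-- confirm archive log mode and forced logging afterwards
select log_mode, force_logging from v$database;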
Password File
The primary database must be configured to use an external password file. This is generally done at the time of installation. If not, then a password file can be created using the following command:
orapwd file=$ORACLE_HOME/dbs/orapwRAC_PRIM1 password=mypasswd
Before issuing the command, ensure that ORACLE_SID is set to the appropriate instance - in this case RAC_PRIM1. Repeat this for each node of the cluster. Also ensure that the initialisation parameter remote_login_passwordfile is set to 'exclusive'. As of Oracle 11.1, the Oracle Net sessions for redo transport can alternatively be authenticated through SSL (see also section 6.2.1 in the Data Guard Concepts manual).
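The password file setup can be verified quickly on each node; this check is only a convenience, not part of the original procedure:

SQL> show parameter remote_login_passwordfile
SQL> select * from v$pwfile_users;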
Standby Site Preparation
Initialization Parameter File
As part of the duplication process a temporary initialisation file will be used. For the purposes of this document this file will be called /tmp/initRAC_PRIM.ora and will contain one line:
db_name=RAC_PRIM
Password File
The standby database must be configured to use a password file. This is created by copying the password file from the primary site to the standby site and renaming it to reflect the standby instances. Repeat this for each node of the cluster. Additionally, ensure that the initialisation parameter remote_login_passwordfile is set to 'exclusive'.
Create Audit File Destination
Create a directory on each node of the standby system to hold audit files:
mkdir /u01/app/oracle/admin/RAC_STDBY/adump
Start Standby Instance
Now that everything is in place, the standby instance needs to be started ready for the duplication to commence:
export ORACLE_SID=RAC_STDBY1
sqlplus / as sysdba
startup nomount pfile='/tmp/initRAC_PRIM.ora'
Test Connection
From the primary database, test the connection to the standby database using the command:
sqlplus sys/mypasswd@RAC_STDBY_dgmgrl as sysdba
This should connect successfully.
Duplicate the Primary database
The standby database is created from the primary database. Up to Oracle10g this meant that a backup of the primary database had to be made, transferred to the standby and restored. Oracle RMAN 11g simplifies this process by providing a new method which allows an 'on the fly' duplicate to take place. This is the method used here (the pre-11g method is described in the Appendices). From the primary database, invoke RMAN using the following command:
export ORACLE_SID=RAC_PRIM1
rman target / auxiliary sys/mypasswd@RAC_STDBY1_dgmgrl
NOTE: If RMAN returns the error "rman: can't open target", ensure that $ORACLE_HOME/bin appears first in the PATH, because there is a Linux utility also named rman.
Next, issue the following duplicate command:
duplicate target database for standby from active database
spfile
set db_unique_name='RAC_STDBY'
set control_files='+DATA/RAC_STDBY/controlfile/control01.dbf'
set instance_number='1'
set audit_file_dest='/u01/app/oracle/admin/RAC_STDBY/adump'
set remote_listener='LISTENERS_RAC_STDBY'
nofilenamecheck;
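While the duplicate runs, its progress can optionally be followed from another session on the primary; this monitoring query is an illustration and not part of the original procedure:

select sid, serial#, opname, round(sofar/totalwork*100, 1) pct_done
from   v$session_longops
where  opname like 'RMAN%' and totalwork > 0;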
Create an SPFILE for the Standby Database
By default the RMAN duplicate command will have created an spfile for the instance located in $ORACLE_HOME/dbs. This file will contain entries that refer to the instance names on the primary database. As part of this creation process the database name is being changed to reflect the DB_UNIQUE_NAME for the standby database, and as such the spfile created is essentially worthless. A new spfile will now be created using the contents of the primary database's spfile.
Get location of the Control File
Before starting this process, note down the value of the control_files parameter from the currently running standby database.
Create a text initialization pfile
The first stage in the process requires that the primary database's initialisation parameters be dumped to a text file:
export ORACLE_SID=RAC_PRIM1
sqlplus "/ as sysdba"
create pfile='/tmp/initRAC_STDBY.ora' from spfile;
Copy the created file /tmp/initRAC_STDBY.ora to the standby server.
Edit the init.ora
On the standby server, edit the /tmp/initRAC_STDBY.ora file:
NOTE: Change every occurrence of RAC_PRIM to RAC_STDBY, with the exception of the parameter DB_NAME, which must NOT change.
Set the control_files parameter to reflect the value obtained above. This will most likely be +DATA/RAC_STDBY/controlfile/control01.dbf.
Save the changes.
Create SPFILE
Having created the textual initialisation file, it now needs to be converted to an spfile and stored within ASM by issuing:
export ORACLE_SID=RAC_STDBY1
sqlplus "/ as sysdba"
create spfile='+DATA/RAC_STDBY/spfileRAC_STDBY.ora' from pfile='/tmp/initRAC_STDBY.ora';
Create Pointer File
With the spfile now in ASM, the RDBMS instances need to be told where to find it. Create a file in the $ORACLE_HOME/dbs directory of standby node 1 (RAC_STDBY01) called initRAC_STDBY1.ora. This file will contain one line:
spfile='+DATA/RAC_STDBY/spfileRAC_STDBY.ora'
Create a file in the $ORACLE_HOME/dbs directory of standby node 2 (RAC_STDBY02) called initRAC_STDBY2.ora. This file will also contain one line:
spfile='+DATA/RAC_STDBY/spfileRAC_STDBY.ora'
Additionally, remove the RMAN-created spfile from $ORACLE_HOME/dbs on standby node 1 (RAC_STDBY01).
Create secondary control files
When the RMAN duplicate completed, it created a standby database with only one control file. This is not good practice, so the next step in the process is to create extra control files. This is done as follows:
1. Shut down and start up the database using nomount:
shutdown immediate;
startup nomount;
2. Change the value of the control_files parameter to '+DATA','+FRA':
alter system set control_files='+DATA','+FRA' scope=spfile;
3. Shut down and start up the database again:
shutdown immediate;
startup nomount;
4. Use RMAN to duplicate the control file already present:
export ORACLE_SID=RAC_STDBY1
rman target /
restore controlfile from '+DATA/RAC_STDBY/controlfile/control01.dbf';
This will create a control file in both the +DATA and +FRA ASM disk groups. It will also update the control_files parameter in the spfile. If you wish to have three control files, simply update the control_files parameter to include the original control file as well as the ones just created.
Cluster-enable the Standby Database
The standby database now needs to be brought under clusterware control, i.e. registered with Cluster Ready Services. Before commencing, check that it is possible to start the instance on the second standby node (RAC_STDBY02):
export ORACLE_SID=RAC_STDBY2
sqlplus "/ as sysdba"
startup mount;
Ensure Server Side Load Balancing is configured
Check whether the init.ora parameter remote_listener is defined in the standby instances. If the parameter is not present, then create an entry in the tnsnames.ora files (of all standby nodes) which has the following format:
LISTENERS_RAC_STDBY.local =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY01-vip.local)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = RAC_STDBY02-vip.local)(PORT = 1521))
    )
  )
Then set the value of the parameter remote_listener to LISTENERS_RAC_STDBY.local.
Register the Database with CRS
Issue the following commands to register the database with Oracle Cluster Ready Services:
srvctl add database -d RAC_STDBY -o $ORACLE_HOME -m local -p "+DATA/RAC_STDBY/spfileRAC_STDBY.ora" -n RAC_PRIM -r physical_standby -s mount
srvctl add instance -d RAC_STDBY -i RAC_STDBY1 -n RAC_STDBY01
srvctl add instance -d RAC_STDBY -i RAC_STDBY2 -n RAC_STDBY02
Test
Test that the above has worked by stopping any running standby instances and then starting the database (all instances) using the command:
srvctl start database -d RAC_STDBY
Once started, check that the associated instances are running by using the command:
srvctl status database -d RAC_STDBY
Temporary Files
Temporary files associated with a temporary tablespace are automatically created with a standby database.
Create Standby Redo Logs
Standby Redo Logs (SRLs) are used to store redo data from the primary database when the transport is configured using the log writer (LGWR), which is the default. Each standby redo log file must be at least as large as the largest redo log file in the primary database. It is recommended that all redo log files in the primary database and the standby redo logs in the respective standby database(s) be of the same size. The recommended number of SRLs is: (# of online redo logs per primary instance + 1) * # of instances. Whilst standby redo logs are only used by the standby site, they should be defined on both the primary as well as the standby sites. This will ensure that if the two databases change their roles (primary -> standby and standby -> primary) then no extra configuration will be required. The standby database must be mounted (mount as 'standby' is the default) before SRLs can be created. SRLs are created as follows (the size given below is just an example and has to be adjusted to the current environment):
1. sqlplus '/ as sysdba'
2. startup mount
3. alter database add standby logfile SIZE 100M;
NOTE: Standby redo logs are also created in logfile groups, but be aware that the group numbers must be greater than the group numbers associated with the ORLs in the primary database. With respect to group numbering, Oracle makes no distinction between ORLs and SRLs.
NOTE: Standby redo logs need to be created on both databases.
The standby database is now created. The next stage in the process concerns enabling transaction synchronisation. There are two ways of doing this:
1. Using SQL*Plus
2. Using the Data Guard Broker
Configuring Data Guard using SQL*Plus
Configure the Standby Database
The following initialisation parameters need to be set on the standby database:

Parameter                   Value (RAC_STDBY01)               Value (RAC_STDBY02)
db_unique_name              RAC_STDBY
db_block_checking           TRUE (OPTIONAL)
db_block_checksum           TRUE (OPTIONAL)
log_archive_config          dg_config=(RAC_PRIM,RAC_STDBY)
log_archive_max_processes   5
fal_client                  RAC_STDBY1.local                  RAC_STDBY2.local
fal_server                  'RAC_PRIM1.local','RAC_PRIM2.local'
standby_file_management     AUTO
log_archive_dest_2          service=RAC_PRIM LGWR SYNC AFFIRM db_unique_name=RAC_PRIM VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE)
log_archive_dest_2 (Max. Performance Mode)   service=RAC_PRIM ARCH db_unique_name=RAC_PRIM VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE)
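As a sketch of how these standby-side values could be applied with ALTER SYSTEM on an instance using an SPFILE (values are taken from the table above and should be adjusted to the environment):

alter system set log_archive_config='dg_config=(RAC_PRIM,RAC_STDBY)' scope=both sid='*';
alter system set log_archive_max_processes=5 scope=both sid='*';
alter system set standby_file_management='AUTO' scope=both sid='*';
alter system set fal_server='RAC_PRIM1.local','RAC_PRIM2.local' scope=both sid='*';
alter system set fal_client='RAC_STDBY1.local' scope=both sid='RAC_STDBY1';
alter system set fal_client='RAC_STDBY2.local' scope=both sid='RAC_STDBY2';
alter system set log_archive_dest_2='service=RAC_PRIM LGWR SYNC AFFIRM db_unique_name=RAC_PRIM valid_for=(ALL_LOGFILES,PRIMARY_ROLE)' scope=both sid='*';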
Configure the Primary Database
The following initialisation parameters need to be set on the primary database:

Parameter                   Value (RAC_PRIM01)                Value (RAC_PRIM02)
db_unique_name              RAC_PRIM
db_block_checking           TRUE (OPTIONAL)
db_block_checksum           TRUE (OPTIONAL)
log_archive_config          dg_config=(RAC_PRIM,RAC_STDBY)
log_archive_max_processes   5
fal_client                  RAC_PRIM1.local                   RAC_PRIM2.local
fal_server                  'RAC_STDBY1.local','RAC_STDBY2.local'
standby_file_management     AUTO
log_archive_dest_2          service=RAC_STDBY LGWR SYNC AFFIRM db_unique_name=RAC_STDBY VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE)
log_archive_dest_2 (Max. Performance Mode)   service=RAC_STDBY ARCH db_unique_name=RAC_STDBY VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE)

Set the Protection Mode
In order to specify the protection mode, the primary database must be mounted but not opened.
NOTE: The database must be mounted in exclusive mode, which effectively means that all RAC instances but one must be shut down and the remaining instance started with a parameter setting of cluster_database=false. Once this is the case, the following statement must be issued on the primary site.
If using Maximum Protection mode:
alter database set standby database to maximize protection;
If using Maximum Availability mode:
alter database set standby database to maximize availability;
If using Maximum Performance mode:
alter database set standby database to maximize performance;
Enable Redo Transport & Redo Apply
Enabling the transport and application of redo to the standby database is achieved as follows:
Standby Site
The standby database needs to be placed into managed recovery mode. This is achieved by issuing the statement:
alter database recover managed standby database disconnect;
Oracle 10gR2 introduced real-time redo apply (SRLs required). Enabling real-time apply is achieved by issuing the statement:
alter database recover managed standby database using current logfile disconnect;
Primary Site
Set log_archive_dest_state_2=enable in the init.ora file, or issue via SQL*Plus:
alter system set log_archive_dest_state_2=enable;
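Once transport and apply are enabled, the configuration can be sanity-checked with standard views; this is an optional verification step, not part of the original procedure:

-- on the primary: status of the standby destination
select dest_id, status, error from v$archive_dest_status where dest_id = 2;
-- on the standby: redo apply progress
select process, status, thread#, sequence# from v$managed_standby;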
Oracle Database 11g Top New Features: Summary
1) Automatic Diagnostic Repository [ADR]
2) Database Replay
3) Automatic Memory Tuning
4) Case-sensitive passwords
5) Virtual columns and indexes
6) Interval Partitioning and System Partitioning
7) The Result Cache
8) ADDM RAC Enhancements
9) SQL Plan Management and SQL Plan Baselines
10) SQL Access Advisor & Partition Advisor
11) SQL Query Repair Advisor
12) SQL Performance Analyzer (SPA)
13) DBMS_STATS Enhancements
14) Total Recall (Flashback Data Archive)
Note: The above are only the top new features; other features introduced in 11g will be covered subsequently.

Oracle 11g Database DBA New Features with brief explanation
==========================================
# Database Capture/Replay of database workloads:
This allows the total database workload to be captured, transferred to a test database created from a backup or standby database, and then replayed to test the effects of an upgrade or system change. The target is a capture performance overhead of around 5%, so real production workloads can be captured.
# Automatic Memory Tuning:
Automatic PGA tuning was introduced in Oracle 9i and automatic SGA tuning in Oracle 10g. In 11g, all memory can be tuned automatically by setting one parameter. We can literally tell Oracle how much memory it has and it determines how much to use for the PGA, the SGA and OS processes. Maximum and minimum thresholds can be set.
# Interval partitioning for tables:
Interval partitions are an extension to range partitioning. They provide automation for equi-sized range partitions. Partitions are created as metadata and only the start partition is made persistent. The additional segments are allocated as the data arrives, and the additional partitions and local indexes are created automatically.
# Feature Based Patching:
All one-off patches will be classified as to which feature they affect. This allows you to easily identify which patches are necessary for the features you are using. EM will allow you to subscribe to a feature-based patching service, so EM automatically scans for available patches for the features you are using.
# RMAN UNDO bypass:
An RMAN backup can bypass undo. Undo tablespaces are getting huge, but contain a lot of information that is not needed for backup, and RMAN can now bypass those tablespaces. Great for exporting a tablespace from backup.
# Virtual columns/indexes:
Users can create a virtual index on a table. A virtual index is not visible to the optimizer, so it will not affect performance; a developer can use a hint to test whether the index is useful or not. Invisible indexes prevent premature use of newly created indexes.
# New default audit settings:
Unlike previous Oracle database releases, where general database auditing was "off" by default, auditing is intended to be enabled by default with the Oracle Database 11g beta secure configuration. Notable performance improvements are planned to reduce the degradation typically associated with auditing.
# Case sensitive password:
Passwords are expected to become case sensitive. This and other changes should result in better protection against password-guessing scenarios. For example, in addition to limiting the number of failed login attempts to 10 (the default configuration in 10gR2), Oracle 11g beta's planned default settings should expire passwords every 180 days, and limit to seven the number of times a user can log in with an expired password before access is disabled.
# Faster DML triggers: create a disabled trigger; specify trigger firing order
# Fine grained access control for UTL_TCP: in 10g all ports were available; now access is controlled.
# Data Guard supports "Flashback Standby"
# New trigger features
# Partitioning by logical object and automated partition creation
# LOBs - new high-performance LOB features
# New Oracle 11g Advisors
# Enhanced read-only tables
# Table trigger firing order
# Enhanced online index rebuild: online index build with NO pause to DML.
# No recompilation of dependent objects when: A) columns are added to tables, B) procedures are added to packages
# Improved optimizer statistics collection speed
# Online index build with NO pause to DML
# Read-only tables:
alter table t read only;
alter table t read write;

Oracle 11g Database SQL/PL-SQL New Features
---------------------------------------------
> Fine Grained Dependency Tracking: In 11g, dependencies are tracked at the level of element within unit, so that changes such as the example below have no consequence.
• Transparent performance improvement
• Unnecessary recompilation certainly consumes CPU
create table t(a number);
create view v as select a from t;
alter table t add(Unheard_Of number);
select status from User_Objects where Object_Name = 'V';
-- VALID
No recompilation of dependent objects occurs when columns are added to tables or procedures are added to packages.
> Named and Mixed Notation from SQL: select fun(P4 => 10) from DUAL; In 10g it was not possible to call a function in a SELECT statement passing only the 4th parameter by name, but in 11g it is possible.
> PL/SQL "continue" keyword - the same as the continue statement in C/C++ loops.
> Support for "super": the same concept as "super" in Java.
> Powerful regular expressions: we can now access data between tags. The new built-in REGEXP_COUNT returns the number of times the pattern is matched in the input string.
> New PL/SQL data type "simple_integer"
> SQL Performance Analyzer (SPA): similar to Database Replay except that it does not capture the entire workload. The SQL Performance Analyzer (SPA) leverages existing Oracle Database 10g SQL tuning components. The SPA provides the ability to capture a specific SQL workload in a SQL Tuning Set, take a performance baseline before a major database or system change, make the desired change to the system, and then replay the SQL workload against the modified database or configuration. The before and after performance of the SQL workload can then be compared with just a few clicks of the mouse. The DBA only needs to isolate any SQL statements that are now performing poorly and tune them via the SQL Tuning Advisor.
> Caching results with /*+ result_cache */: select /*+ result_cache */ * from my_table. New for Oracle 11g, the result_cache hint caches the result set of a select statement. This is similar to alter table table_name cache, but adding predicates makes /*+ result_cache */ considerably more powerful by caching a subset of larger tables and common queries:
select /*+ result_cache */ col1, col2, col3 from my_table where colA = :B1
> The compound trigger: a compound trigger lets you implement actions for each of the table DML timing points in a single trigger.
> PL/SQL unit source can exceed 32k characters.
> Easier online table DDL operations: option to wait for active DML operations instead of aborting.
> Fast add column with default value: does not need to update all rows to the default value.

Oracle 11g Database Backup & Recovery New Features
-----------------------------------------------
* Enhanced configuration of archive deletion policies: archived logs can be deleted if they are no longer needed by Data Guard, Streams, Flashback, etc. When you CONFIGURE an archived log deletion policy, it applies to all archiving destinations, including the flash recovery area. BACKUP ... DELETE INPUT and DELETE ... ARCHIVELOG use this configuration, as does the flash recovery area. When we back up the recovery area, RMAN can fail over to other archived redo log destinations if the flash recovery area is inaccessible.
* Configuring backup compression: in 11g you can use the CONFIGURE command to choose between the BZIP2 and ZLIB compression algorithms for RMAN backups.
* Active Database Duplication: the DUPLICATE command is now network aware, i.e. we can create a duplicate or standby database over the network without taking a backup or using an old backup.
* Parallel backup and restore for very large files: RMAN backups of large data files now use multiple parallel server processes to efficiently distribute the workload for each file. This feature improves the performance of backups.
* Improved block media recovery performance: the RECOVER command can recover individual data blocks. RMAN can take older, uncorrupted versions of the blocks from the flashback logs and use them, thereby speeding up block media recovery.
* Fast incremental backups on a physical standby database: 11g includes a new feature to enable block change tracking on a physical standby database (ALTER DATABASE ENABLE/DISABLE BLOCK CHANGE TRACKING SQL statement). This enables faster incremental backups on a physical standby database than in previous releases, because RMAN identifies the changed blocks since the last incremental backup.

11g ASM New Features
----------------------
The new features in Automatic Storage Management (ASM) extend the storage management automation, improve scalability, and further simplify management for Oracle Database files.
■ ASM Fast Mirror Resync
A new SQL statement, ALTER DISKGROUP ... DISK ONLINE, can be executed after a failed disk has been repaired. The command first brings the disk online for writes so that no new writes are missed. Subsequently, it initiates a copy of all extents marked as stale on a disk from their redundant copies. This feature significantly reduces the time it takes to repair a failed diskgroup, potentially from hours to minutes. The repair time is proportional to the number of extents that have been written to or modified since the failure.
■ ASM Manageability Enhancements
The new storage administration features for ASM manageability include the following:
■ New attributes for disk group compatibility
To enable some of the new ASM features, you can use two new disk group compatibility attributes, compatible.rdbms and compatible.asm. These attributes specify the minimum software version that is required to use disk groups for the database and for ASM, respectively. This feature enables heterogeneous environments with disk groups from both Oracle Database 10g and Oracle Database 11g. By default, both attributes are set to 10.1. You must advance these attributes to take advantage of the new features.
■ New ASM command-line utility (ASMCMD) commands and options
ASMCMD allows ASM disk identification, disk bad block repair, and backup and restore operations in your ASM environment for faster recovery.
■ ASM fast rebalance
Rebalance operations that occur while a disk group is in RESTRICTED mode eliminate the lock and unlock extent map messaging between ASM instances in Oracle RAC environments, thus improving overall rebalance throughput. This collection of ASM management features simplifies and automates storage management for Oracle databases.
■ ASM Preferred Mirror Read
When ASM failure groups are defined, ASM can now read from the extent that is closest to it, rather than always reading the primary copy. A new initialization parameter, ASM_PREFERRED_READ_FAILURE_GROUPS, lets the ASM administrator specify a list of failure group names that contain the preferred read disks for each node in a cluster. In an extended cluster configuration, reading from a local copy provides a great performance advantage. Every node can read from its local diskgroup (failure group), resulting in higher efficiency and performance and reduced network traffic.
■ ASM Rolling Upgrade
Rolling upgrade is the ability of clustered software to function when one or more of the nodes in the cluster are at different software versions. The various versions of the software can still communicate with each other and provide a single system image. The rolling upgrade capability will be available when upgrading from Oracle Database 11g Release 1 (11.1). This feature allows independent nodes of an ASM cluster to be migrated or patched without affecting the availability of the database. Rolling upgrade provides higher uptime and graceful migration to new releases.
■ ASM Scalability and Performance Enhancements
This feature increases the maximum data file size that Oracle can support to 128 TB. ASM supports file sizes greater than 128 TB in any redundancy mode. This provides near unlimited capacity for future growth. The ASM file size limits are:
■ External redundancy - 140 PB
■ Normal redundancy - 42 PB
■ High redundancy - 15 PB
Customers can also increase the allocation unit size for a disk group in powers of 2 up to 64 MB.
■ Convert Single-Instance ASM to Clustered ASM
This feature provides support within Enterprise Manager to convert a non-clustered ASM database to a clustered ASM database by implicitly configuring ASM on all nodes. It also extends the single-instance to Oracle RAC conversion utility to support standby databases.
Simplifying the conversion makes it easier for customers to migrate their databases and achieve the benefits of scalability and high availability provided by Oracle RAC.
■ New SYSASM Privilege for ASM Administration
This feature introduces the new SYSASM privilege to allow for separation of database management and storage management responsibilities. The SYSASM privilege allows an administrator to manage the disk groups that can be shared by multiple databases. The SYSASM privilege provides a clear separation of duties from the SYSDBA privilege.
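To illustrate two of the ASM features above (disk group compatibility attributes and the SYSASM privilege), statements of the following form can be used; the disk group name DATA and the user name asm_admin are examples only:

alter diskgroup data set attribute 'compatible.asm' = '11.1';
alter diskgroup data set attribute 'compatible.rdbms' = '11.1';
grant sysasm to asm_admin;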
How to Start the CSS Process, ASM & DB Instance Services on Windows using the Command Prompt
1. To start the services for CSS, ASM & DB instances:
C:\> net start oraclecsservice
C:\> net start OracleASMService+ASM
C:\> net start OracleServiceDBAASMW
2. Check whether the above services are running using the following command:
C:\> net start
Example output:
These Windows services are started:
OracleASMService+ASM
OracleCSService
OracleServiceDBAASMW
3. To stop the above services:
C:\> net stop OracleServiceDBAASMW
C:\> net stop OracleASMService+ASM
C:\> net stop oraclecsservice
ASM Creation (Windows)
You can follow these steps to create an ASM disk group on your local (Windows) machine and play with it.
1) Create dummy disks
F:\> mkdir asmdisks
F:\> cd asmdisks
F:\asmdisks> asmtool -create F:\asmdisks\disk1 512
F:\asmdisks> asmtool -create F:\asmdisks\disk2 512
F:\asmdisks> asmtool -create F:\asmdisks\disk3 512
Now you have 3 (dummy) disks of 512 MB each which can be used to create an ASM disk group.
2) Create ASM instance
a) Configure Cluster Synchronization Service
C:\> c:\oracle\product\10.2.0\db_1\BIN\localconfig add
Step 1: stopping local CSS stack
Step 2: deleting OCR repository
Step 3: creating new OCR repository
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'ap\arogyaa', privgrp ''..
Operation successful.
Step 4: creating new CSS service
successfully created local CSS service
successfully reset location of CSS setup
b) Create init pfile
Open Notepad, enter the following parameters and save the file as "C:\oracle\product\10.2.0\db_1\database\init+ASM.ora":
INSTANCE_TYPE=ASM
DB_UNIQUE_NAME=+ASM
LARGE_POOL_SIZE=8M
ASM_DISKSTRING='F:\asmdisks\*'
_ASM_ALLOW_ONLY_RAW_DISKS=FALSE
c) Create service and password file
oradim will create an ASM instance and start it automatically.
c:\> orapwd file=C:\oracle\product\10.2.0\db_1\database\PWD+ASM.ora password=asm
c:\> oradim -NEW -ASMSID +ASM -STARTMODE auto
3) Create ASM disk group
a) Create the ASM disk group
SQL> select path, mount_status from v$asm_disk;
PATH                 MOUNT_S
-------------------- -------
F:\ASMDISKS\DISK1    CLOSED
F:\ASMDISKS\DISK3    CLOSED
F:\ASMDISKS\DISK2    CLOSED
SQL> create diskgroup data external redundancy disk
  2  'F:\ASMDISKS\DISK1',
  3  'F:\ASMDISKS\DISK2',
  4  'F:\ASMDISKS\DISK3';
Diskgroup created.
b) Change the pfile to an spfile, add the ASM_DISKGROUPS parameter, and you are all set to use ASM.
SQL> create spfile from pfile;
SQL> startup force;
SQL> alter system set asm_diskgroups=data scope=spfile;
SQL> startup force
ASM instance started
Total System Global Area 83886080 bytes
Fixed Size 1247420 bytes
Variable Size 57472836 bytes
ASM Cache 25165824 bytes
ASM diskgroups mounted
SQL>
Now you can go ahead, use DBCA to create a database, and on step 6 of 13 choose Automatic Storage Management as your file system.
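As an optional check once the disk group is mounted, its state and space can be queried from the ASM instance:

SQL> select name, state, type, total_mb, free_mb from v$asm_diskgroup;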
RMAN Backup and Recovery Scenarios

=> Complete Closed Database Recovery. System tablespace is missing
If the system tablespace is missing or corrupted, the database cannot be started up, so a complete closed database recovery must be performed.
Prerequisites: a closed or open database backup and archived logs.
1. Use OS commands to restore the missing or corrupted system datafile to its original location, ie:
cp -p /user/backup/uman/system01.dbf /user/oradata/u01/dbtst/system01.dbf
2. startup mount;
3. recover datafile 1;
4. alter database open;

=> Complete Open Database Recovery. Non system tablespace is missing
If a non-system tablespace is missing or corrupted while the database is open, recovery can be performed while the database remains open.
Prerequisites: a closed or open database backup and archived logs.
1. Use OS commands to restore the missing or corrupted datafile to its original location, ie:
cp -p /user/backup/uman/user01.dbf /user/oradata/u01/dbtst/user01.dbf
2. alter tablespace <tablespace_name> offline immediate;
3. recover tablespace <tablespace_name>;
4. alter tablespace <tablespace_name> online;

=> Complete Open Database Recovery (when the database is initially closed). Non system tablespace is missing
If a non-system tablespace is missing or corrupted and the database crashed, recovery can be performed after the database is open.
Prerequisites: a closed or open database backup and archived logs.
1. startup; (you will get ora-1157, ora-1110 and the name of the missing datafile; the database will remain mounted)
2. Use OS commands to restore the missing or corrupted datafile to its original location, ie:
cp -p /user/backup/uman/user01.dbf /user/oradata/u01/dbtst/user01.dbf
3. alter database datafile 3 offline; (the tablespace cannot be used because the database is not open)
4. alter database open;
5. recover datafile 3;
6. alter tablespace <tablespace_name> online;

=> Recovery of a Missing Datafile that has no backups (database is open)
If a non-system datafile that was not backed up since the last backup is missing, recovery can be performed if all archived logs since the creation of the missing datafile exist.
Prerequisites: all relevant archived logs.
1. alter tablespace <tablespace_name> offline immediate;
2. alter database create datafile '/user/oradata/u01/dbtst/newdata01.dbf';
3. recover tablespace <tablespace_name>;
4. alter tablespace <tablespace_name> online;
If the create datafile command needs to be executed to place the datafile in a location different from the original, use:
alter database create datafile '/user/oradata/u01/dbtst/newdata01.dbf' as '/user/oradata/u02/dbtst/newdata01.dbf';
=> Restore and Recovery of a Datafile to a different location
If a non-system datafile is missing and its original location is not available, the restore can be made to a different location and recovery performed.
Prerequisites: all relevant archived logs.
1. Use OS commands to restore the missing or corrupted datafile to the new location, ie:
cp -p /user/backup/uman/user01.dbf /user/oradata/u02/dbtst/user01.dbf
2. alter tablespace <tablespace_name> offline immediate;
3. alter tablespace <tablespace_name> rename datafile '/user/oradata/u01/dbtst/user01.dbf' to '/user/oradata/u02/dbtst/user01.dbf';
4. recover tablespace <tablespace_name>;
5. alter tablespace <tablespace_name> online;

=> Control File Recovery
Always multiplex your controlfiles. Controlfiles are missing, database crash.
Prerequisites: a backup of your controlfile and all relevant archived logs.
1. startup; (you get ora-205, missing controlfile; the instance starts but the database is not mounted)
2. Use OS commands to restore the missing controlfile to its original location:
cp -p /user/backup/uman/control01.dbf /user/oradata/u01/dbtst/control01.dbf
cp -p /user/backup/uman/control02.dbf /user/oradata/u01/dbtst/control02.dbf
3. alter database mount;
4. recover automatic database using backup controlfile;
5. alter database open resetlogs;
6. Make a new complete backup, as the database is open in a new incarnation and previous archived logs are not relevant.

=> Incomplete Recovery, Until Time/Sequence/Cancel
Incomplete recovery may be necessary when an archived log is missing, so recovery can only be made until the previous sequence, or when an important object was dropped and recovery needs to be made until just before the object was dropped.
Prerequisites: a closed or open database backup and archived logs, and the time or sequence at which the 'until' recovery needs to stop.
1. If the database is open, shutdown abort
2. Use OS commands to restore all datafiles to their original locations:
cp -p /user/backup/uman/u01/*.dbf /user/oradata/u01/dbtst/
cp -p /user/backup/uman/u02/*.dbf /user/oradata/u01/dbtst/
cp -p /user/backup/uman/u03/*.dbf /user/oradata/u01/dbtst/
cp -p /user/backup/uman/u04/*.dbf /user/oradata/u01/dbtst/
etc.
3. startup mount;
4. recover automatic database until time '2004-03-31:14:40:45';
5. alter database open resetlogs;
6. Make a new complete backup, as the database is open in a new incarnation and previous archived logs are not relevant. Alternatively you may use, instead of until time, until sequence or until cancel:
recover automatic database until sequence 120 thread 1;
OR
recover database until cancel;

=> RMAN Recovery Scenarios
RMAN recovery scenarios require that the database is in archivelog mode, and that backups of datafiles, control files and archived redo log files are made using RMAN. Incremental RMAN backups may also be used. RMAN can be used with the repository stored in the control file, or with a recovery catalog that may be installed in the same or another database.
Configuration and operation recommendations:
Set the controlfile autobackup parameter to ON to have a controlfile backup taken with each backup:
configure controlfile autobackup on;
Set the retention policy parameter to the recovery window you want to have; ie redundancy 2 will keep the last two backups available, after executing delete obsolete commands:
configure retention policy to redundancy 2;
Execute your full backups with the option 'plus archivelog' to include your archived logs with every backup:
backup database plus archivelog;
Perform daily maintenance routines to keep in your backup directory only the number of backups you need:
crosscheck backup;
crosscheck archivelog all;
delete noprompt obsolete;
To work with RMAN and a database-based catalog, follow these steps:
1. sqlplus /
2. create tablespace repcat;
3. create user rcuser identified by rcuser default tablespace repcat temporary tablespace temp;
4. grant connect, resource, recovery_catalog_owner to rcuser;
5. exit
6. rman catalog rcuser/rcuser # connect to the rman catalog as rcuser
7. create catalog # create the catalog
8. connect target / # connect to the target database
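With the catalog connection established, the target database can be registered and an initial backup taken, following the recommendations above (a brief sketch):

RMAN> register database;
RMAN> configure controlfile autobackup on;
RMAN> configure retention policy to redundancy 2;
RMAN> backup database plus archivelog;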
=> Complete Closed Database Recovery. System tablespace is missing
In this case complete recovery is performed; only the system tablespace is missing, so the database can be opened without resetting the redologs.
1. rman target /
2. startup mount;
3. restore database;
4. recover database;
5. alter database open;

=> Complete Open Database Recovery. Non system tablespace is missing, database is up
1. rman target /
2. sql 'alter tablespace <tablespace_name> offline immediate';
3. restore datafile 3;
4. recover datafile 3;
5. sql 'alter tablespace <tablespace_name> online';

=> Complete Open Database Recovery (when the database is initially closed). Non system tablespace is missing
A user datafile is reported missing when trying to start up the database. The datafile can be taken offline and the database started up. Restore and recovery are performed using RMAN. After recovery is performed, the datafile can be brought online again.
1. sqlplus /nolog
2. connect / as sysdba
3. startup mount
4. alter database datafile '<file_name>' offline;
5. alter database open;
6. exit;
7. rman target /
8. restore datafile '<file_name>';
9. recover datafile '<file_name>';
10. sql 'alter tablespace <tablespace_name> online';

=> Recovery of a Datafile that has no backups (database is up)
If a non-system datafile that was not backed up since the last backup is missing, recovery can be performed if all archived logs since the creation of the missing datafile exist. Since the database is up, you can check the tablespace name and take it offline. The offline immediate option is used to avoid the update of the datafile header.
Prerequisites: all relevant archived logs.
1. sqlplus '/ as sysdba'
2. alter tablespace <tablespace_name> offline immediate;
3. alter database create datafile '/user/oradata/u01/dbtst/newdata01.dbf';
4. exit
5. rman target /
6. recover tablespace <tablespace_name>;
7. sql 'alter tablespace <tablespace_name> online';
If the create datafile command needs to be executed to place the datafile in a location different from the original, use:
alter database create datafile '/user/oradata/u01/dbtst/newdata01.dbf' as '/user/oradata/u02/dbtst/newdata01.dbf';

=> Restore and Recovery of a Datafile to a different location. Database is up
If a non-system datafile is missing and its original location is not available, the restore can be made to a different location and recovery performed.
Prerequisites: all relevant archived logs and a complete cold or hot backup.
1. Use OS commands to restore the missing or corrupted datafile to the new location, ie:
cp -p /user/backup/uman/user01.dbf /user/oradata/u02/dbtst/user01.dbf
2. alter tablespace <tablespace_name> offline immediate;
3. alter tablespace <tablespace_name> rename datafile '/user/oradata/u01/dbtst/user01.dbf' to '/user/oradata/u02/dbtst/user01.dbf';
4. rman target /
5. recover tablespace <tablespace_name>;
6. sql 'alter tablespace <tablespace_name> online';

=> Control File Recovery
Always multiplex your controlfiles. If you lose only one controlfile you can replace it with the one you have in place and start up the database. If both controlfiles are missing, the database will crash.
Prerequisites: a backup of your controlfile and all relevant archived logs. When using RMAN, always set the configuration parameter autobackup of controlfile to ON. You will need the dbid to restore the controlfile; get it from the name of the backed-up controlfile - it is the number following the 'c-' at the start of the name.
1. rman target /
2. set dbid <dbid>;
3. startup nomount;
4. restore controlfile from autobackup;
5. alter database mount;
6. recover database;
7. alter database open resetlogs;
8. Make a new complete backup, as the database is open in a new incarnation and previous archived logs are not relevant.

=> Incomplete Recovery, Until Time/Sequence/Cancel
Incomplete recovery may be necessary when the database crashes and needs to be recovered, and in the recovery process you find that an archived log is missing. In this case recovery can only be made until the sequence before the one that is missing. Another scenario for incomplete recovery occurs when an important object was dropped or incorrect data was committed on it. In this case recovery needs to be performed until just before the object was dropped.
Prerequisites: a full closed or open database backup and archived logs, and the time or sequence at which the 'until' recovery needs to stop.
1. If the database is open, shut it down to perform a full restore.
2. rman target /
3. startup mount;
4. restore database;
5. recover database until sequence 8 thread 1; # you must pass the thread; for a single instance it will always be 1
6. alter database open resetlogs;
7. Make a new complete backup, as the database is open in a new incarnation and previous archived logs are not relevant. Alternatively, instead of until sequence you may use until time, ie: '2004-12-28:01:01:10'.
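The same point-in-time recovery can also be expressed as a single RMAN run block; the date below is only an example and should be adjusted to the required recovery point:

RMAN> run {
  set until time "to_date('2004-12-28 01:01:10','YYYY-MM-DD HH24:MI:SS')";
  restore database;
  recover database;
}
RMAN> alter database open resetlogs;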
How to kill all ORACLE processes in one command
At the OS prompt, execute the following command to kill all ORACLE processes:
$ kill -9 `ps -ef |grep PROD |awk '{print $2}'`
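Note that the pattern above also matches the grep command itself and any other process whose command line contains the string PROD. A slightly safer variant, still assuming PROD uniquely identifies the instance's processes, is:

$ kill -9 `ps -ef | grep PROD | grep -v grep | awk '{print $2}'`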
How to change the characterset of an Oracle 10g DB
Decide which character set you want to change to and check whether the new character set is a superset of the old character set.
1. SQL> shutdown immediate
2. SQL> startup open restrict
3. SQL> alter database character set internal_use UTF8;
4. SQL> shutdown immediate
5. SQL> startup
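The current setting can be checked before and after the change with the following query (an optional verification):

SQL> select value from nls_database_parameters where parameter = 'NLS_CHARACTERSET';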