ORACLE DATABASE ADMINISTRATION
1. Configure RMAN Backupset Compression

RMAN compresses the backup set contents before writing them to disk. No extra uncompression step is required during recovery when RMAN compression is used. RMAN has two types of built-in compression: 1.) Null Compression and 2.) Unused Block Compression.

1.) Null Compression: When backing up datafiles into backup sets, RMAN does not back up the contents of data blocks that have never been allocated. In other words, RMAN never backs up blocks that have never been used. For example: we have a tablespace with one datafile of size 100MB, and out of 100MB only 50MB is used. RMAN will back up only 50MB.

2.) Unused Block Compression: RMAN skips blocks that do not currently contain data; this is called Unused Block Compression. RMAN creates more compact backups of datafiles by skipping datafile blocks that are not currently used to store data. No extra action is required on the part of the DBA to use this feature. Example: we have a tablespace with one datafile of size 100MB, of which 50MB is used by user tables. A user then drops a 25MB table belonging to that tablespace; with Unused Block Compression only 25MB of the file is backed up. In this example, if Null Compression were used it would have backed up 50MB, because Null Compression considers all blocks that are formatted/have ever been used.

Binary Compression: Binary compression is enabled by specifying the "AS COMPRESSED" clause in the backup command. RMAN applies a binary compression algorithm as it writes data to backup sets. This compression is similar to the compression provided by many tape vendors when backing up data to tape, but we cannot give an exact percentage of compression. This binary compression algorithm can greatly reduce the space required for disk backup storage. It is typically 2x to 4x, and greater for text-intensive databases. The command to take a compressed backup:
RMAN> backup as compressed backupset database ;
There is no special command to restore the database from compressed backupsets; the restore command is the same as with uncompressed backups. A restore from a compressed backupset will take more time than from an uncompressed one. To use the RMAN compression option, we can run the following RMAN commands to configure compression:
RMAN> CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET ;
RMAN> CONFIGURE COMPRESSION ALGORITHM 'HIGH' ;
or
RMAN> CONFIGURE COMPRESSION ALGORITHM 'MEDIUM' ;
or
RMAN> CONFIGURE COMPRESSION ALGORITHM 'LOW' ;
or
RMAN> CONFIGURE COMPRESSION ALGORITHM 'BASIC' ;
Oracle 11g added several compression algorithms to compress data. They can be used for compressing tables, LOBs, compressed Data Pump exports, or even RMAN backups. Unfortunately, for some compression algorithms we need to purchase the "Advanced Compression Option". The following table lists the available RMAN compression options, the compression algorithm most likely being used, and whether an additional license is required:
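A plain-text version of that table, reconstructed only from the algorithm names and licensing remarks made elsewhere in this section (the algorithm behind HIGH is not stated in this document):

RMAN Option   Likely Algorithm    Extra License Required
-----------   -----------------   ---------------------------------
BASIC         BZIP2               No
LOW           LZO                 Yes (Advanced Compression Option)
MEDIUM        ZLIB                Yes (Advanced Compression Option)
HIGH          (not stated here)   Yes (Advanced Compression Option)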
The compression levels are BASIC, LOW, MEDIUM and HIGH, and each offers a trade-off between backup throughput and the degree of compression achieved. If we have enabled the Oracle Database 11g Release 2 Advanced Compression Option, then we can choose from the following compression levels:
HIGH - Best suited for backups over slower networks where the limiting factor is network speed.
MEDIUM - Recommended for most environments. Good combination of compression ratio and speed.
LOW - Least impact on backup throughput; suited for environments where CPU resources are the limiting factor.
Note: The compression ratio generally increases from LOW to HIGH, with a trade-off of potentially consuming more CPU resources. We can check the available compression levels by using the command:
SQL> select * from V$RMAN_COMPRESSION_ALGORITHM;
Output :
I found a good scenario on the net related to compression levels, with statistics for each level. Here is the scenario: the environment used was a freshly created 11g Release 2 database with some smaller tables in it. The total sum of all segments equals 4.88 GB. All database datafiles excluding the temporary ones total 7.3 GB. Excluding temporary and undo datafiles, the total size equates to 5.9 GB. Here are the test results for each compression level:
Test results
As we can see from the table, HIGH compression puts an incredibly high load on the machine and takes extremely long, but produces the smallest backup set. Surprisingly, BASIC compression (which is available without an Advanced Compression license) does a good job as well and produces the second smallest backup set, but takes nearly as long as an uncompressed backup. In other environments with faster CPUs this will change. In the test environment used, either LOW or MEDIUM compression seems to be the best choice. Because MEDIUM produces an approximately 15% smaller backup set while taking only a few seconds more to complete, I would rank MEDIUM first and LOW second. Finally, we come to the conclusion that the stronger the compression, the smaller the backup size but the more CPU-intensive the backup is. If we do not have the Advanced Compression license, BASIC compression will produce reasonable compression rates at moderate load. If we have the license, we have a lot more options to suit our needs. If we want to test and optimize our RMAN backup, we basically have three major switches to play with (a configuration sketch follows the list):
compression algorithm,
RMAN parallelism, and
data transfer mechanism (SAN or Ethernet [this includes: iSCSI, NFS, CIFS, backup to tape over Ethernet]).
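As a quick illustration of the first two switches, here is a minimal sketch (the channel count of 4 is an arbitrary assumption, not a recommendation):

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 4 BACKUP TYPE TO COMPRESSED BACKUPSET ;
RMAN> CONFIGURE COMPRESSION ALGORITHM 'MEDIUM' ;
RMAN> BACKUP DATABASE PLUS ARCHIVELOG ;

With this in place, every disk backup uses four channels and MEDIUM (ZLIB) backupset compression.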
2. Oracle Advanced Compression

Oracle Advanced Compression in Oracle Database 11g Release 2 helps manage more data in a cost-effective manner. With data volumes, on average, tripling every two years, Oracle Advanced Compression delivers compression rates of 2-4x across all types of data and applications. Storage savings from compression cascade throughout the data center, reducing network traffic and data backups as well. And by reading fewer blocks off disk, Oracle Advanced Compression also improves query performance. Oracle Advanced Compression is an option of the Oracle 11g database (separately licensed) that allows data in the database to be compressed. It offers the following capabilities:
1.) OLTP Compression: It allows structured and unstructured data to be compressed on insert, update and delete operations. Its features are listed below (a minimal example follows the list):
New compression algorithm uses a deferred or batched approach
Data is inserted as is without compression until PCTFREE value is reached.
Compression of data starts once PCTFREE threshold is reached
Can be enabled at table, partition or tablespace level
No need to decompress the data during reads
Recommended for tables with low update activity
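A minimal sketch of enabling OLTP compression (the table name and columns here are hypothetical, and the feature requires the Advanced Compression license):

SQL> create table sales_history ( id number, sale_date date, amount number ) compress for oltp ;
SQL> alter table sales_history compress for oltp ;   -- enable on an existing table (affects newly written blocks)

In 11gR1 the equivalent clause was COMPRESS FOR ALL OPERATIONS; plain COMPRESS (basic compression) only compresses data loaded via direct-path operations.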
2.) Data Pump Compression: In Data Pump, compression of metadata was introduced in 10g and compression of "data" was introduced in 11g. This covers the following features:
Both are inline operations
Save on storage allocation
No need to uncompress before Import
Implemented with the COMPRESSION attribute; the values supported are ALL, DATA_ONLY and METADATA_ONLY (a minimal example follows).
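A hedged sketch of a compressed export (the directory object and dump file name are hypothetical; COMPRESSION=ALL and COMPRESSION=DATA_ONLY require the Advanced Compression Option):

expdp system DIRECTORY=dp_dir DUMPFILE=full_compressed.dmp FULL=Y COMPRESSION=ALL

The resulting dump file is written compressed and can be imported directly with impdp; no separate uncompress step is needed.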
3.) Data Guard Compression: It includes the following features:
Redo is compressed as it is transmitted over the network.
Helps efficiently utilize network bandwidth when Data Guard is across data centers.
Faster re-synchronization of Data Guard during gap resolution.
Recommended for low-bandwidth networks.
Implemented with the COMPRESSION attribute of the log_archive_dest_n initialization parameter (a minimal sketch follows).
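A hedged sketch of enabling redo transport compression (the service and db_unique_name 'standby' are hypothetical placeholders; this attribute requires the Advanced Compression Option):

SQL> alter system set log_archive_dest_2 = 'SERVICE=standby ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby COMPRESSION=ENABLE' scope=both ;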
4.) RMAN Backup Compression: It compresses RMAN backups. The following features apply:
Supports compression of backups using the "ZLIB" algorithm.
Faster compression and low CPU utilization compared to default BZIP2 (10g) .
Lower compression ratio compared to BZIP2.
Implemented with the CONFIGURE COMPRESSION ALGORITHM 'value' command, where value can be HIGH, MEDIUM (ZLIB) or LOW (LZO).
The Oracle Database 11g Advanced Compression option introduces a comprehensive set of compression capabilities to help customers maximize resource utilization and reduce costs. It allows IT administrators to significantly reduce their overall database storage footprint by enabling compression for all types of data – be it relational (table), unstructured (file), or
backup data. Although storage cost savings are often seen as the most tangible benefit of compression, innovative technologies included in the Advanced Compression Option are designed to reduce resource requirements and technology costs for all components of our IT infrastructure, including memory and network bandwidth. The benefits of compression are manifold:
1.) Reduction of disk space used for storage.
2.) Reduction in I/O bandwidth requirements.
3.) Faster full table scans.
4.) Lower server memory usage.
3. Change Database Character Set using CSSCAN

CSSCAN (Database Character Set Scanner) is a scan tool that allows us to see the impact of a database character set change, or assists us in correcting an incorrect database NLS_CHARACTERSET setup. Data scanning identifies the amount of effort required to migrate data into the new character encoding scheme before changing the database character set. This information helps to determine the best approach for converting the database character set. Before altering the character set of a database, check the convertibility of the data: character set conversions can cause data loss or data corruption. The Character Set Scanner utility provides the following two features:
1.) Convertibility check of existing data and potential issues. The Scanner checks all character data in the database, including the data dictionary, and tests for the effects and problems of changing the character set encoding. At the end of the scan, it generates a summary and an exception report of the database scan.
2.) Csscan also allows us to check whether any data in the database is incorrectly stored.
The CSALTER script is part of the Database Character Set Scanner utility. It is the most straightforward way to migrate a character set, but it can be used only if all of the schema data is a strict subset of the new character set: each and every character in the current character set must be available in the new character set and have the same code point value there. With the strict superset criteria in mind, only the metadata is converted to the new character set by the CSALTER script, with the following exception: the CSALTER script performs data conversion only on CLOB columns in the data dictionary and sample schemas that have been created by Oracle. CLOB columns that users have created may need to be handled separately.
Note: it is possible to run Csscan from a client, but this client needs to be the same base version as the database home (i.e., an Oracle 11g server needs an Oracle 11g client).
To change the database character set, perform the following steps:
STEP 1: Remove the invalid objects, purge the recyclebin, and then take a full backup of the database.
STEP 2: Install the CSS utility if it is not installed (we will get error CSS-00107 if it is not). Install it by running the csminst.sql script, which is found in $ORACLE_HOME\rdbms\admin.
STEP 3: Set the ORACLE_SID and run the Database Character Set Scanner utility:
CSSCAN / AS SYSDBA FULL=Y
STEP 4: Run the CSALTER script, found in the $ORACLE_HOME\rdbms\admin folder:
i.> shutdown
ii.> startup restrict
iii.> @csalter.plb
iv.> shut immediate
v.> startup
Note:
i.> The CSALTER script does not perform any user data conversion; it only changes the character set metadata in the data dictionary. Thus, after the CSALTER operation, Oracle will behave as if the database was created using the new character set.
ii.> Changing the database character set is not an easy task. It is quite tricky, and we may face errors that need Oracle support. So it is better to raise an SR for this task and involve Oracle support.

4. How to Reduce DB File Sequential Read Wait

The db file sequential read wait event occurs when we are trying to access data using an index and Oracle is waiting for the read of an index block from disk into the buffer cache to complete. A sequential read is a single-block read. Single-block I/Os are usually the result of using indexes. Rarely, full table scan calls can be truncated to a single-block call due to extent boundaries, or because buffers are already present in the buffer cache. Db file sequential read waits may also appear when undo blocks are read from disk in order to provide a consistent get (rarely). The actual object being waited on can be determined from the P1, P2, P3 info in v$session_wait. A sequential read is usually a single-block read, although it is possible to see sequential reads for more than one block (see P3). This wait may also be seen for reads from datafile headers (P2 = 1 indicates a file header read), where P1, P2 and P3 give the absolute file number, the block being read, and the number of blocks (i.e., P3 should be 1), respectively. Block reads are fairly inevitable, so the aim should be to minimise unnecessary I/O. This is best achieved by good application design and efficient execution plans; changes to execution plans can yield orders-of-magnitude changes in performance. To reduce this wait event, follow the points below (a diagnostic query sketch comes first).
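A hedged diagnostic sketch: list the sessions currently waiting on this event, with the file and block being read (column meanings as described above):

SQL> select sid, event, p1 file#, p2 block#, p3 blocks
  2  from v$session_wait
  3  where event = 'db file sequential read';

The file# and block# can then be mapped to a segment via dba_extents to identify the hot object.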
1.) Tune Oracle - Tuning SQL statements to reduce unnecessary I/O requests is the only guaranteed way to reduce "db file sequential read" wait time.
2.) Tune Physical Devices - Distribute (stripe) the data on different disks to reduce the I/O on each. Logical distribution is useless: "physical" I/O performance is governed only by the independence of devices.
3.) Faster Disks - Buy faster disks so that the unavoidable I/O requests complete more quickly.
4.) Increase db_block_buffers - A larger buffer cache can (not will, "might") help.

5. Cannot Load OCI.DLL : While Connecting

Sometimes the error "cannot load OCI.DLL" occurs whenever we try to connect to the Oracle database by using third-party tools (i.e., TOAD, SQL Tools and others) or the command prompt. This error may occur for the following reasons:
1.) The correct ORACLE_HOME and path are not set in the environment variables.
2.) The oci.dll file may be corrupt or may not exist on the correct path.
3.) The oci.dll may not be the correct version (e.g., 32-bit software will load a 32-bit DLL - we cannot, for example, use a 64-bit DLL for a 32-bit executable).
To solve this issue, consider the points below:
1.) Check the ORACLE_HOME and Path settings in the environment variables.
2.) Check the correct location of the oci.dll path. The path of the oci.dll file is $ORACLE_HOME\bin\oci.dll.
3.) Check that oci.dll is the correct version.
In my case, I faced this issue because the $ORACLE_HOME was not correctly set in the environment variables. After setting the correct path in the environment variables, we find no error while connecting to the database.

Transparent Data Encryption in Oracle 11g

Oracle Transparent Data Encryption (TDE) enables organizations to encrypt sensitive application data on storage media completely transparently to the application. TDE addresses encryption requirements associated with public and private privacy and security regulations such as PCI DSS. TDE column encryption was introduced in Oracle Database 10g Release 2, enabling encryption of table columns containing sensitive information. TDE tablespace encryption and support for hardware security modules (HSM) were introduced in Oracle Database 11gR1. TDE protects data at rest: it encrypts the data in the datafiles so that, in case they are obtained by other parties, it will not be possible to access the clear-text data. TDE cannot be used to obfuscate data from users who have privileges to access the tables. In databases where TDE is configured, any user who has access to an encrypted table will be able to see the data in clear text, because Oracle transparently decrypts the data for any user having the necessary privileges.
TDE uses a two-tier encryption key architecture consisting of:
a master encryption key - the encryption key used to encrypt the secondary keys used for column encryption and tablespace encryption.
one or more table and/or tablespace keys - the keys used to encrypt one or more specific columns, or the keys used to encrypt tablespaces. There is only one table key regardless of the number of encrypted columns in a table, and it is stored in the data dictionary. The tablespace key is stored in the header of each datafile of the encrypted tablespace.
The table and tablespace keys are encrypted using the master key. The master key is stored in an external security module (ESM) that can be one of the following:
an Oracle Wallet - a secure container outside of the database. It is encrypted with a password.
a Hardware Security Module (HSM) - a device used to secure keys and perform cryptographic operations.
To start using TDE, the following operations have to be performed:
1.) Make sure that the wallet location exists. If a non-default wallet location must be used, specify it in the sqlnet.ora file:
ENCRYPTION_WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = C:\app\neerajs\admin\orcl\wallet)
    )
  )
Note: The default encryption wallet location is $ORACLE_BASE/admin/<db_unique_name>/wallet. If we want to let Oracle manage a wallet in the default location, then there is no need to set the ENCRYPTION_WALLET_LOCATION parameter in sqlnet.ora. It is important to check that the location specified in sqlnet.ora, or the default location, exists and can be read/written by the Oracle processes.
2.) Generate a master key:
SQL> alter system set encryption key identified by "wallet_password" ;
System altered.
This command will do the following:
A.) If there is no wallet currently in the wallet location, then a new wallet with the password "wallet_password" will be generated. The password is enclosed in double quotes to preserve the case of the characters; if the double quotes are not used, then the characters of the password will all be in upper case. This command also causes the new wallet to be opened and ready for use.
B.) A new master key will be generated and written to the wallet. This newly generated master key becomes the active master key. The old master keys (if there were any) are still kept in the wallet, but they are no longer active; they are kept there to be used when decrypting data that was previously encrypted using them.
To see the status of a wallet, run the following query:
SQL> select * from v$encryption_wallet;
WRL_TYPE   WRL_PARAMETER                      STATUS
---------- ---------------------------------- -------
file       C:\app\neerajs\admin\orcl\wallet   OPEN
3.) Enable encryption for a column or for an entire tablespace:
3.1) Create a table by specifying the encrypt option:
SQL> create table test ( col1 number, col2 varchar2(100) encrypt using 'AES256' no salt ) ;
3.2) Encrypt the column(s) of an existing table:
SQL> alter table test modify ( col2 encrypt salt ) ;
Note: If the table has many rows, then this operation might take some time, since all the values stored in col2 must be replaced by encrypted strings. If access to the table is needed during this operation, use Online Table Redefinition.
3.3) Create an encrypted tablespace:
The syntax is the same as creating a normal tablespace, except for two clauses:
We specify the encryption algorithm - in this case 'AES256'. If we do not specify this, it will default to 'AES128'. At the time of tablespace creation, specify the encryption and default storage clauses, defining the encryption algorithm as "using 'algorithm'" along with the encryption clause. We can use the following algorithms while creating an encrypted tablespace: AES128, AES192, AES256, 3DES168. If we don't specify any algorithm with the encryption clause, it will use AES128 as the default.
The DEFAULT STORAGE (ENCRYPT) clause.
SQL> create tablespace encryptedtbs datafile 'C:\app\neerajs\oradata\orcl\encryptedtbs01.dbf' size 100M encryption using 'AES256' default storage(encrypt) ;
Note: An existing non-encrypted tablespace cannot be encrypted. If we must encrypt the data from an entire tablespace, then create a new encrypted tablespace and move the data from the old tablespace to the new one.

TDE Master Key and Wallet Management

The wallet is a critical component and should be backed up in a secure location (different from the location where the database backups are stored!). If the wallet containing the master keys is lost, or if its password is forgotten, the encrypted data will not be accessible anymore. Make sure that the wallet is backed up in the following scenarios:
1. Immediately after creating it.
2. When regenerating the master key.
3. When backing up the database. Make sure that the wallet backup is not stored in the same location as the database backup.
4. Before changing the wallet password.
Make sure that the wallet password is complex but at the same time easy to remember. Where possible, split knowledge of the wallet password. If needed, the wallet password can be changed within Oracle Wallet Manager or with the following orapki command (starting from 11.1.0.7):
c:\> orapki wallet change_pwd -wallet
Oracle recommends that the wallet files are placed outside of the $ORACLE_BASE directory to avoid having them backed up to the same location as other Oracle files. Furthermore, it is recommended to restrict access to the directory and to the wallet files to avoid accidental removal.
We can identify encrypted tablespaces in the database by using the query below:
SQL> select ts.name, es.encryptedts, es.encryptionalg
  2  from v$tablespace ts
  3  inner join v$encrypted_tablespaces es on es.ts# = ts.ts# ;
The following are supported with encrypted tablespaces:
Moving tables back and forth between an encrypted tablespace and a non-encrypted tablespace.
Datapump is supported to export/import encrypted content/tablespaces.
Transportable tablespace is supported using datapump.
The following are not supported with encrypted tablespaces:
Tablespace encryption cannot be used for the SYSTEM, SYSAUX, UNDO and TEMP tablespaces.
An existing tablespace cannot be encrypted.
Traditional export/import utilities for encrypted content.
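One operational point worth illustrating: the wallet does not stay open across instance restarts (unless an auto-login wallet is used), so encrypted data is inaccessible until it is reopened. A minimal sketch, assuming the wallet password from the earlier example:

SQL> alter system set encryption wallet open identified by "wallet_password" ;
SQL> alter system set encryption wallet close identified by "wallet_password" ;

Queries against encrypted columns or tablespaces while the wallet is closed fail with an ORA-28365 (wallet is not open) type of error.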
Enable block change tracking in oracle 11g
The block change tracking (BCT) feature for incremental backups improves incremental backup performance by recording changed blocks in each datafile in a block change tracking file. This file is a small binary file stored in the database area. RMAN tracks changed blocks as redo is generated. If we enable block change tracking, then RMAN uses the change tracking file to identify changed blocks for an incremental backup, thus avoiding the need to scan every block in the datafile. RMAN only uses block change tracking when the incremental level is greater than 0, because a level 0 incremental backup includes all blocks.
Enable block change tracking (BCT):
SQL> alter database enable block change tracking using file 'C:\app\neerajs\admin\noida\bct.dbf' ;
When data blocks change, shadow processes track the changed blocks in a private area of memory at the same time they generate redo. When a commit is issued, the BCT information is copied to a shared area in the large pool called the 'CTWR dba buffer'. At the checkpoint, a new background process, the Change Tracking Writer (CTWR), writes the information from the buffer to the change tracking file. If contention for space in the CTWR dba buffer occurs, a wait event called 'Block Change Tracking Buffer Space' is recorded. Possible causes for this wait event are poor I/O performance on the disk where the change tracking file resides, or a CTWR dba buffer too small to record the number of concurrent block changes. By default, the CTWR process is disabled because it can introduce some minimal performance overhead on the database.
The v$block_change_tracking view contains the name and size of the block change tracking file, plus the status of change tracking. We can check it with the command below:
SQL> select filename, status, bytes from v$block_change_tracking;
To check whether the block change tracking file is being used or not, use the command below:
SQL> select file#, avg(datafile_blocks), avg(blocks_read),
  2         avg(blocks_read/datafile_blocks) * 100 as "% read for backup"
  3  from v$backup_datafile
  4  where incremental_level > 0
  5  and used_change_tracking = 'YES'
  6  group by file#
  7  order by file# ;
To disable block change tracking (BCT), issue the command below:
SQL> alter database disable block change tracking ;

RMAN Tablespace Point-in-Time Recovery (TSPITR) in Oracle 11gR2

Recovery Manager (RMAN) automatic TSPITR enables quick recovery of one or more tablespaces in a database to an earlier time, without affecting the rest of the tablespaces and objects in the database. RMAN TSPITR is most useful in the following situations:
We want to recover a logical database to a point different from the rest of the physical database, when multiple logical databases exist in separate tablespaces of one physical database. For example, we maintain logical databases in the orders and
personnel tablespaces. An incorrect batch job or DML statement corrupts the data in only one of the tablespaces. We want to recover data lost after DDL operations that change the structure of tables. We cannot use Flashback Table to rewind a table to before the point of a structural change such as a truncate table operation.
We want to recover a table after it has been dropped with the PURGE option.
We want to recover from the logical corruption of a table.
We want to recover dropped tablespaces. In fact, RMAN can perform TSPITR on dropped tablespaces even when a recovery catalog is not used.
We can also use Flashback Database to rewind data, but we must rewind the entire database rather than just a subset. Also, unlike TSPITR, the Flashback Database feature necessitates the overhead of maintaining flashback logs. The point in time to which we can flash back the database is also more limited than the TSPITR window, which extends back to our earliest recoverable backup. TSPITR existed in earlier releases but had limitations, i.e., we could not recover a dropped tablespace. Oracle 11gR2 performs a fully automated, managed TSPITR: it automatically creates and starts the auxiliary instance and restores the datafiles it requires along with the files pertaining to the dropped tablespace. It first performs a recovery of the tablespace on the auxiliary instance, and then uses Data Pump and Transportable Tablespace technology to extract and import the tablespace metadata into the original source database.
Here we will illustrate the concept of TSPITR with an example. We will create a user, say "TSPIR", assign it the default tablespace "tspir", and create tables in this tablespace. We take a full backup of the database and then drop the tablespace "tspir". Before dropping it, we note the SCN and use this SCN to do the TSPITR. Below are the steps:
Step 1: Clean up any previous failed TSPITR
SQL> exec dbms_backup_restore.manageauxinstance ('TSPITR',1) ;
PL/SQL procedure successfully completed.
Step 2: Create the tablespace and user, and create tables
SQL> create tablespace tspir datafile 'C:\app\neerajs\oradata\orcl\tspir.dbf' size 150m autoextend on;
Tablespace created.
SQL> create user tspir identified by tspir
  2  default tablespace tspir
  3  quota unlimited on tspir;
User created.
SQL> grant resource,connect to tspir;
Grant succeeded.
SQL> connect tspir/tspir@orcl
Connected.
SQL> create table test(id number);
Table created.
SQL> insert into test values(12);
1 row created.
SQL> insert into test values(121);
1 row created.
SQL> commit;
Commit complete.
SQL> select * from test;
        ID
----------
        12
       121
SQL> create table emp as select * from user_objects;
Table created.
SQL> select count(*) from emp;
  COUNT(*)
----------
         2
SQL> conn / as sysdba
Connected.
Step 3: Take a fresh backup of the database
SQL> host rman target sys@orcl
Recovery Manager: Release 11.2.0.1.0 - Production on Wed Nov 30 14:35:44 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
target database Password:
connected to target database: ORCL (DBID=1296005542)
RMAN> backup database plus archivelog;
Starting backup at 30-NOV-11
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=141 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=3 RECID=1 STAMP=768238310
input archived log thread=1 sequence=4 RECID=2 STAMP=768238310
input archived log thread=1 sequence=5 RECID=3 STAMP=768238311
input archived log thread=1 sequence=6 RECID=4 STAMP=768238314
input archived log thread=1 sequence=7 RECID=5 STAMP=768239453
input archived log thread=1 sequence=8 RECID=6 STAMP=768239455
input archived log thread=1 sequence=9 RECID=7 STAMP=768305386
input archived log thread=1 sequence=10 RECID=8 STAMP=768334227
input archived log thread=1 sequence=11 RECID=9 STAMP=768393025
input archived log thread=1 sequence=12 RECID=10 STAMP=768454251
input archived log thread=1 sequence=13 RECID=11 STAMP=768521484
input archived log thread=1 sequence=14 RECID=12 STAMP=768580566
channel ORA_DISK_1: starting piece 1 at 30-NOV-11
channel ORA_DISK_1: finished piece 1 at 30-NOV-11
piece handle=F:\RMAN_BKP\01MSV6UP_1_1 tag=TAG20111130T143608 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:55
Finished backup at 30-NOV-11
Starting backup at 30-NOV-11
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=C:\APP\NEERAJS\ORADATA\ORCL\SYSTEM01.DBF
input datafile file number=00002 name=C:\APP\NEERAJS\ORADATA\ORCL\SYSAUX01.DBF
input datafile file number=00006 name=C:\APP\NEERAJS\ORADATA\ORCL\TSPIR.DBF
input datafile file number=00005 name=C:\APP\NEERAJS\ORADATA\ORCL\EXAMPLE01.DBF
input datafile file number=00003 name=C:\APP\NEERAJS\ORADATA\ORCL\UNDOTBS01.DBF
input datafile file number=00004 name=C:\APP\NEERAJS\ORADATA\ORCL\USERS01.DBF
channel ORA_DISK_1: starting piece 1 at 30-NOV-11
channel ORA_DISK_1: finished piece 1 at 30-NOV-11
piece handle=F:\RMAN_BKP\02MSV70H_1_1 tag=TAG20111130T143705 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:55
Finished backup at 30-NOV-11
Starting backup at 30-NOV-11
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=15 RECID=13 STAMP=768580741
channel ORA_DISK_1: starting piece 1 at 30-NOV-11
channel ORA_DISK_1: finished piece 1 at 30-NOV-11
piece handle=F:\RMAN_BKP\04MSV746_1_1 tag=TAG20111130T143901 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 30-NOV-11
Starting Control File and SPFILE Autobackup at 30-NOV-11
piece handle=F:\RMAN_BKP\CF\C-1296005542-20111130-01 comment=NONE
Finished Control File and SPFILE Autobackup at 30-NOV-11
Step 4: Note the SCN and drop the tablespace
SQL> select current_scn from v$database;
CURRENT_SCN
-----------
    5659022
SQL> drop tablespace tspir including contents and datafiles;
Tablespace dropped.
Step 5: Connect with RMAN and perform the TSPITR
Here we use the auxiliary destination clause with the recover tablespace command: the auxiliary destination is an optional disk location that RMAN uses to temporarily store the auxiliary set files. The auxiliary destination is used only with an RMAN-managed auxiliary instance; specifying an auxiliary destination with a user-managed auxiliary instance results in an error.
C:\>rman target sys/xxxx@orcl
Recovery Manager: Release 11.2.0.1.0 - Production on Wed Nov 30 14:58:11 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
connected to target database: ORCL (DBID=1296005542)
RMAN> recover tablespace tspir until scn 5659022 auxiliary destination 'F:\';
Starting recover at 30-NOV-11
using channel ORA_DISK_1
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified point-in-time
List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1
Creating automatic instance, with SID='nume'
initialization parameters used for automatic instance:
db_name=ORCL
db_unique_name=nume_tspitr_ORCL
compatible=11.2.0.0.0
db_block_size=8192
db_files=200
sga_target=280M
processes=50
db_create_file_dest=F:\
log_archive_dest_1='location=F:\'
#No auxiliary parameter file used
starting up automatic instance ORCL
Oracle instance started
Total System Global Area     292933632 bytes
Fixed Size                     1374164 bytes
Variable Size                100665388 bytes
Database Buffers             184549376 bytes
Redo Buffers                   6344704 bytes
Automatic instance created
List of tablespaces that have been dropped from the target database:
Tablespace tspir
contents of Memory Script:
{
# set requested point in time
set until scn 5659022;
# restore the controlfile
restore clone controlfile;
# mount the controlfile
sql clone 'alter database mount clone database';
# archive current online log
sql 'alter system archive log current';
# avoid unnecessary autobackups for structural changes during TSPITR
sql 'begin dbms_backup_restore.AutoBackupFlag(FALSE); end;';
}
executing Memory Script
executing command: SET until clause
Starting restore at 30-NOV-11
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=59 device type=DISK
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece F:\RMAN_BKP\CF\C-1296005542-20111130-01
channel ORA_AUX_DISK_1: piece handle=F:\RMAN_BKP\CF\C-1296005542-20111130-01 tag=TAG20111130T143903
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:04
output file name=F:\ORCL\CONTROLFILE\O1_MF_7FD0QK8S_.CTL
Finished restore at 30-NOV-11
sql statement: alter database mount clone database
sql statement: alter system archive log current
sql statement: begin dbms_backup_restore.AutoBackupFlag(FALSE); end;
contents of Memory Script:
{
# set requested point in time
set until scn 5659022;
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile 1 to new;
set newname for clone datafile 3 to new;
set newname for clone datafile 2 to new;
set newname for clone tempfile 1 to new;
set newname for datafile 6 to "C:\APP\NEERAJS\ORADATA\ORCL\TSPIR.DBF";
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile 1, 3, 2, 6;
switch clone datafile all;
}
executing Memory Script
executing command: SET until clause
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
executing command: SET NEWNAME
renamed tempfile 1 to F:\ORCL\DATAFILE\O1_MF_TEMP_%U_.TMP in control file
Starting restore at 30-NOV-11
using channel ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to F:\ORCL\DATAFILE\O1_MF_SYSTEM_%U_.DBF
channel ORA_AUX_DISK_1: restoring datafile 00003 to
F:\ORCL\DATAFILE\O1_MF_UNDOTBS1_%U_.DBF
channel ORA_AUX_DISK_1: restoring datafile 00002 to F:\ORCL\DATAFILE\O1_MF_SYSAUX_%U_.DBF
channel ORA_AUX_DISK_1: restoring datafile 00006 to C:\APP\NEERAJS\ORADATA\ORCL\TSPIR.DBF
channel ORA_AUX_DISK_1: reading from backup piece F:\RMAN_BKP\02MSV70H_1_1
channel ORA_AUX_DISK_1: piece handle=F:\RMAN_BKP\02MSV70H_1_1 tag=TAG20111130T143705
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:02:15
Finished restore at 30-NOV-11
datafile 1 switched to datafile copy
input datafile copy RECID=5 STAMP=768585055 file name=F:\ORCL\DATAFILE\O1_MF_SYSTEM_7FD0QYNZ_.DBF
datafile 3 switched to datafile copy
input datafile copy RECID=6 STAMP=768585056 file name=F:\ORCL\DATAFILE\O1_MF_UNDOTBS1_7FD0QYRF_.DBF
datafile 2 switched to datafile copy
input datafile copy RECID=7 STAMP=768585056 file name=F:\ORCL\DATAFILE\O1_MF_SYSAUX_7FD0QYPG_.DBF
contents of Memory Script:
{
# set requested point in time
set until scn 5659022;
# online the datafiles restored or switched
sql clone "alter database datafile 1 online";
sql clone "alter database datafile 3 online";
sql clone "alter database datafile 2 online";
sql clone "alter database datafile 6 online";
# recover and open resetlogs
recover clone database tablespace "TSPIR", "SYSTEM", "UNDOTBS1", "SYSAUX" delete archivelog;
alter clone database open resetlogs;
}
executing Memory Script
executing command: SET until clause
sql statement: alter database datafile 1 online
sql statement: alter database datafile 3 online
sql statement: alter database datafile 2 online
sql statement: alter database datafile 6 online
Starting recover at 30-NOV-11
using channel ORA_AUX_DISK_1
starting media recovery
archived log for thread 1 with sequence 15 is already on disk as file D:\ARCHIVE\ORCL_ARCHIVE\ARC0000000015_0768224813.0001
archived log for thread 1 with sequence 16 is already on disk as file D:\ARCHIVE\ORCL_ARCHIVE\ARC0000000016_0768224813.0001
archived log file name=D:\ARCHIVE\ORCL_ARCHIVE\ARC0000000015_0768224813.0001 thread=1 sequence=15
archived log file name=D:\ARCHIVE\ORCL_ARCHIVE\ARC0000000016_0768224813.0001 thread=1 sequence=16
media recovery complete, elapsed time: 00:00:04
Finished recover at 30-NOV-11
database opened
contents of Memory Script:
{
# make read only the tablespace that will be exported
sql clone 'alter tablespace TSPIR read only';
# create directory for datapump import
sql "create or replace directory TSPITR_DIROBJ_DPDIR as ''F:\''";
# create directory for datapump export
sql clone "create or replace directory TSPITR_DIROBJ_DPDIR as ''F:\''";
}
executing Memory Script
sql statement: alter tablespace TSPIR read only
sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''F:\''
sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ''F:\''
Performing export of metadata...
EXPDP> Starting "SYS"."TSPITR_EXP_nume":
EXPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
EXPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
EXPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
EXPDP> Master table "SYS"."TSPITR_EXP_nume" successfully loaded/unloaded
EXPDP> ******************************************************************************
EXPDP> Dump file set for SYS.TSPITR_EXP_nume is:
EXPDP>   F:\TSPITR_NUME_43731.DMP
EXPDP> ******************************************************************************
EXPDP> Datafiles required for transportable tablespace TSPIR:
EXPDP>   C:\APP\NEERAJS\ORADATA\ORCL\TSPIR.DBF
EXPDP> Job "SYS"."TSPITR_EXP_nume" successfully completed at 16:20:28
Export completed
contents of Memory Script:
{
# shutdown clone before import
shutdown clone immediate
}
executing Memory Script
database closed
database dismounted
Oracle instance shut down
Performing import of metadata...
IMPDP> Master table "SYS"."TSPITR_IMP_nume" successfully loaded/unloaded
IMPDP> Starting "SYS"."TSPITR_IMP_nume":
IMPDP> Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
IMPDP> Processing object type TRANSPORTABLE_EXPORT/TABLE
IMPDP> Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
IMPDP> Job "SYS"."TSPITR_IMP_nume" successfully completed at 16:21:48
Import completed
contents of Memory Script:
{
# make read write and offline the imported tablespaces
sql 'alter tablespace TSPIR read write';
sql 'alter tablespace TSPIR offline';
# enable autobackups after TSPITR is finished
sql 'begin dbms_backup_restore.AutoBackupFlag(TRUE); end;';
}
executing Memory Script
sql statement: alter tablespace TSPIR read write
sql statement: alter tablespace TSPIR offline
sql statement: begin dbms_backup_restore.AutoBackupFlag(TRUE); end;
Removing automatic instance
Automatic instance removed
auxiliary instance file F:\ORCL\DATAFILE\O1_MF_TEMP_7FD0Y3PY_.TMP deleted
auxiliary instance file F:\ORCL\ONLINELOG\O1_MF_4_7FD0XROZ_.LOG deleted
auxiliary instance file F:\ORCL\ONLINELOG\O1_MF_3_7FD0XK9R_.LOG deleted
auxiliary instance file F:\ORCL\ONLINELOG\O1_MF_2_7FD0X9RF_.LOG deleted
auxiliary instance file F:\ORCL\ONLINELOG\O1_MF_1_7FD0X2LK_.LOG deleted
auxiliary instance file F:\ORCL\DATAFILE\O1_MF_SYSAUX_7FD0QYPG_.DBF deleted
auxiliary instance file F:\ORCL\DATAFILE\O1_MF_UNDOTBS1_7FD0QYRF_.DBF deleted
auxiliary instance file F:\ORCL\DATAFILE\O1_MF_SYSTEM_7FD0QYNZ_.DBF deleted
auxiliary instance file F:\ORCL\CONTROLFILE\O1_MF_7FD0QK8S_.CTL deleted
Finished recover at 30-NOV-11
RMAN>
Step 6: Check the tablespace status and existence
SQL> select tablespace_name, status from dba_tablespaces;
TABLESPACE_NAME                STATUS
------------------------------ ---------
SYSTEM                         ONLINE
SYSAUX                         ONLINE
UNDOTBS1                       ONLINE
TEMP                           ONLINE
USERS                          ONLINE
EXAMPLE                        ONLINE
TSPIR                          OFFLINE
Since we find that the tablespace "TSPIR" is offline, we bring the tablespace online:
SQL> alter tablespace tspir online;
Tablespace altered.
SQL> alter database datafile 'C:\app\neerajs\oradata\orcl\tspir.dbf' online;
Database altered.
SQL> select table_name from dba_tables where tablespace_name='TSPIR';
TABLE_NAME
------------------------------
TEST
EMP
SQL> select * from tspir.test;
        ID
----------
        12
       121
SQL> select count(*) from tspir.emp;
  COUNT(*)
----------
         2
Hence, we find that both tables are recovered.

ORA-7445 Internal Error

An ORA-7445 is a generic error and can occur from anywhere in the Oracle code. The precise location of the error is identified by the core file and/or trace file it produces. Whenever an ORA-7445 error is raised, a core file is generated, and there may be a trace file generated with the error as well. Prior to 11g, the core files are located in the CORE_DUMP_DEST directory. Starting with 11g, there is a new advanced fault diagnosability infrastructure to manage trace data: diagnostic files are written into a root directory for all diagnostic data called the ADR home. Core files at 11g go to the ADR_HOME/cdump directory.
To investigate further, check the following:
1. Check the Alert Log: The alert log may indicate additional errors or other internal errors at the time of the problem. In some cases, the ORA-7445 error will occur along with ORA-600, ORA-3113, or ORA-4030 errors. The ORA-7445 error can be a side effect of the other problems, and we should review the first error and its associated core file or trace file and work down the list of errors. If the ORA-7445 errors are not associated with other error conditions, ensure the trace data is not truncated. If we see the message "MAX DUMP FILE SIZE EXCEEDED" at the end of the file, the MAX_DUMP_FILE_SIZE parameter is not set high enough or to 'unlimited'; there could be vital diagnostic information missing in the file, and discovering the root issue may be very difficult. Set MAX_DUMP_FILE_SIZE appropriately and regenerate the error for complete trace information.
2. Search the 600/7445 Lookup Tool: Visit My Oracle Support to access the ORA-600/ORA-7445 Lookup tool (Note 7445.1). The tool may lead us to applicable content in My Oracle Support on the problem, and can be used to investigate the problem with argument data from the error message, or we can pull out key stack pointers from the associated trace file to match against known bugs.
3. "Fine tune" searches in the Knowledge Base: As the ORA-7445 error indicates an unhandled exception in the Oracle source code, our search in the Oracle Knowledge Base will need to focus on the stack data from the core file or the trace file. Keep in mind that searches on generic argument data will bring back a large result set. The more we can learn about the environment and the code leading to the errors, the easier it will be to narrow the hit list to match our problem.
4. If assistance is required from Oracle: Should it become necessary to get assistance from Oracle Support on an ORA-7445 problem, provide at a minimum the following:
Alert log
Associated trace file(s) or, at 11g, an incident package (see the adrci sketch after this list)
Patch level information
Core file(s)
Information about changes in configuration and/or application prior to the issue
If the error is reproducible, a self-contained reproducible testcase: Note 232963.1 - How to Build a Testcase for Oracle Data Server Support to Reproduce ORA-600 and ORA-7445 Errors.
RDA report or Oracle Configuration Manager information
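A hedged sketch of collecting an incident package with ADRCI on 11g (the incident number 12345 and target directory are hypothetical placeholders; ips create package prints the package number, assumed to be 1 here):

adrci> show incident
adrci> ips create package incident 12345
adrci> ips generate package 1 in C:\temp

The generated zip file can then be uploaded to the service request.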
How to Resize Redo Log Files in Oracle

Once I received an e-mail regarding resizing redo log files. The sender wanted the easiest way to resize them, something like 'alter database logfile group 1 '?\redo01.log' resize 100m', or some other trick. We cannot resize redo log files: we must drop the redo log files and recreate them, and this is the only method to resize them. A database requires at least two groups of redo log files, regardless of the number of members. We cannot drop a redo log file if its status is CURRENT or ACTIVE; we have to change its status to INACTIVE, and only then can we drop it. When a redo log member is dropped from the database, the operating system file is not deleted from disk; rather, the control files of the associated database are updated to drop the member from the database structure. After dropping a redo log file, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped redo log file. In my case I have four redo log files, each 50MB in size, and I will resize them to 100MB. Below are the steps to resize the redo log files (a query to list the member file paths follows Step 1).
Step 1: Check the status of the redo logfiles
SQL> select group#,sequence#,bytes,archived,status from v$log;
GROUP#  SEQUENCE#    BYTES     ARC  STATUS
------  ---------  ----------  ---  --------
     1          5   52428800   YES  INACTIVE
     2          6   52428800   YES  ACTIVE
     3          7   52428800   NO   CURRENT
     4          4   52428800   YES  INACTIVE
Here, we cannot drop the current and active redo log files.
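Since the old files must later be deleted with an OS command, it helps to list the member file paths first; a minimal sketch:

SQL> select group#, member from v$logfile order by group#;

The MEMBER column shows the full path of each redo log file on disk.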
Step 2: Force a checkpoint
The SQL statement alter system checkpoint explicitly forces Oracle to perform a checkpoint for either the current instance or all instances. Forcing a checkpoint ensures that all changes to the database buffers are written to the datafiles on disk. A global checkpoint is not finished until all instances that require recovery have been recovered.
SQL> alter system checkpoint global ;
System altered.
SQL> select group#,sequence#,bytes,archived,status from v$log;
GROUP#  SEQUENCE#    BYTES     ARC  STATUS
------  ---------  ----------  ---  --------
     1          5   52428800   YES  INACTIVE
     2          6   52428800   YES  INACTIVE
     3          7   52428800   NO   CURRENT
     4          4   52428800   YES  INACTIVE
Since the status of groups 1, 2 and 4 is INACTIVE, we will drop the group 1 and group 2 redo log files.
Step 3: Drop the redo log files
SQL> alter database drop logfile group 1;
Database altered.
SQL> alter database drop logfile group 2;
Database altered.
SQL> select group#,sequence#,bytes,archived,status from v$log;
GROUP#  SEQUENCE#    BYTES     ARC  STATUS
------  ---------  ----------  ---  --------
     3          7   52428800   NO   CURRENT
     4          4   52428800   YES  INACTIVE
Step 4: Create the new redo log files
If we don't delete the old redo logfile with an OS command before creating a log file with the same name, we face the error below; to solve it, delete the file using the OS command.
SQL> alter database add logfile group 1 'C:\app\neerajs\oradata\orcl\redo01.log' size 100m;
alter database add logfile group 1 'C:\app\neerajs\oradata\orcl\redo01.log' size 100m
*
ERROR at line 1:
ORA-00301: error in adding log file 'C:\app\neerajs\oradata\orcl\redo01.log' - file cannot be created
ORA-27038: created file already exists
OSD-04010: <create> option specified, file already exists
SQL> alter database add logfile group 1 'C:\app\neerajs\oradata\orcl\redo01.log' size 100m;
Database altered.
SQL> alter database add logfile group 2 'C:\app\neerajs\oradata\orcl\redo02.log' size 100m;
Database altered.
SQL> select group#,sequence#,bytes,archived,status from v$log;
GROUP#  SEQUENCE#    BYTES      ARC  STATUS
------  ---------  -----------  ---  --------
     1          0   104857600   YES  UNUSED
     2          0   104857600   YES  UNUSED
     3          7    52428800   NO   CURRENT
     4          4    52428800   YES  INACTIVE
Step 5: Now drop the remaining two old redo log files
SQL> alter system switch logfile ;
System altered.
SQL> alter system switch logfile ;
System altered.
SQL> select group#,sequence#,bytes,archived,status from v$log;
GROUP#  SEQUENCE#    BYTES      ARC  STATUS
------  ---------  -----------  ---  --------
     1          8   104857600   YES  ACTIVE
     2          9   104857600   NO   CURRENT
     3          7    52428800   YES  ACTIVE
     4          4    52428800   YES  INACTIVE
SQL> alter system checkpoint global;
System altered.
SQL> select group#,sequence#,bytes,archived,status from v$log;
GROUP#  SEQUENCE#    BYTES      ARC  STATUS
------  ---------  -----------  ---  --------
     1          8   104857600   YES  INACTIVE
     2          9   104857600   NO   CURRENT
     3          7    52428800   YES  INACTIVE
     4          4    52428800   YES  INACTIVE
SQL> alter database drop logfile group 3;
Database altered.
SQL> alter database drop logfile group 4;
Database altered.
SQL> select group#,sequence#,bytes,archived,status from v$log;
GROUP#  SEQUENCE#    BYTES      ARC  STATUS
------  ---------  -----------  ---  --------
     1          8   104857600   YES  INACTIVE
     2          9   104857600   NO   CURRENT
Step 6: Create the remaining redo log files
SQL> alter database add logfile group 3 'C:\app\neerajs\oradata\orcl\redo03.log' size 100m;
Database altered.
SQL> alter database add logfile group 4 'C:\app\neerajs\oradata\orcl\redo04.log' size 100m;
Database altered.
SQL> select group#,sequence#,bytes,archived,status from v$log;
GROUP#  SEQUENCE#    BYTES      ARC  STATUS
------  ---------  -----------  ---  --------
     1          8   104857600   YES  INACTIVE
     2          9   104857600   NO   CURRENT
     3          0   104857600   YES  UNUSED
     4          0   104857600   YES  UNUSED

How Often Should the Redo Log File Switch?

Redo log file switching has a real impact on the performance of the database. Frequent log switches may lead to slowness of the database, while if the log file switches only after a long time, there is a greater chance of losing data if a redo log file gets corrupted. Oracle documentation suggests sizing the redo log files so that log switches happen roughly every 15-30 minutes (depending on the architecture and recovery requirements). But what happens when there is a bulk load? We cannot resize the redo log files every time, since that would be silly. Generally we do not load data in bulk on a regular basis; it happens perhaps twice or thrice a week. So what should the accurate size be? Here is a very good explanation of this question by "howardjr":
One of my databases has very large logs which are not intended to fill up under normal operation. They are actually big enough to cope with a peak load we get every week. Previously, we had two or three log switches recorded under one alert log timestamp! Now, they switch every 10 minutes or so, even under the heaviest load. So big logs are good for slowing things down under load. But I don't want to sit there with 5 hours of redo sitting in my current log during non-peak-load normal running. Therefore, I set archive_lag_target to 1800 (seconds = 30 minutes), and I know that in the worst possible case I will only lose 30 minutes of redo (an example of setting this parameter follows). I see LOADS of advantages in using archive_lag_target even for standalone instances - actually, especially for standalone instances. I want logs big enough not to cause rapid log switching, but I have bulk loads; therefore, I have to have enormous logs to prevent rapid log switching during those times. In fact, on one database I am connected to right now, I have 2GB redo logs which nevertheless manage to switch every 8 minutes on a Friday night. We can imagine the frequency of log switches we had when those logs were originally created at 5MB each, and the number of redo allocation retries. I'd like 8GB logs to get it down to a log switch every 30 minutes or so on a Friday night, but with multiple members and groups, that's just getting silly. And now I have an enormous log that will take forever and a day to fill up and switch when I'm NOT doing bulk loads. Ordinarily, without a forced log switch, my 2GB log takes 3 days to fill up.
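A minimal sketch of the forced-switch setting mentioned above (1800 seconds, as in the quoted scenario):

SQL> alter system set archive_lag_target = 1800 scope=both ;

With this set, Oracle forces a log switch (and hence an archive) whenever the current log has been in use for 1800 seconds, capping the amount of redo exposed in the current log regardless of its size.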
How does FAST_START_MTTR_TARGET affect the redo log file in case of recovery?
If I were to have a catastrophic hardware failure, I could lose my current redo log. Fast_start_mttr_target can't do anything to ameliorate that loss: flushing the dirty buffers to disk regularly doesn't actually protect my data. In fact, there is no way to recover transactions that are sitting in the current redo log if that log is lost. Therefore, having an
enormous log full of hours and hours (in my case, about 72 hours' worth) of redo is a massive data loss risk, and not one I'm prepared to take. Forcing log switches is a good thing for everyone to be able to do, when appropriate, even if they're not using Data Guard and standby databases. Huge log files may be necessary, but a forced log switch is essential thereafter for data security. We can certainly try to minimise the risk: that's what redo log multiplexing is all about. But if we lose all copies of the current log, then we have lost the only copy of that redo, and that means we have lost data. Frequent checkpoints can help minimise the amount of redo that is vulnerable to loss, but they do nothing to minimise the risk of that loss occurring. Redundant disks (mirroring), redundant controllers and multiplexing are the only things that can help protect the current redo log and thus actually reduce the risk of failure occurring in the first place. Frequent checkpointing simply reduces the damage that the loss of all current logs would inevitably cause, but it doesn't (and cannot) reduce it to zero. It is therefore not a protection mechanism at all. Checkpoints set a limit on potential data loss from redo log damage, absolutely they do; but no matter how frequently we checkpoint, we cannot reduce potential data loss to zero, and reducing the potential cost of a disaster should it strike doesn't count as reducing the risk of the disaster happening. Buying car insurance doesn't reduce our risk of having a car accident: it simply means we can pay the bills when the accident eventually happens. Therefore, checkpoints cannot reasonably be called a "current redo logfile protection mechanism". Mirroring, multiplexing and redundant hardware are the only ways to actually protect the current redo log. Safety and performance always have to be traded off against each other, and we cannot realistically go for just one or the other without appreciating the impact on the other.

Configuration of Snapshot Standby Database in Oracle 11g

Snapshot Standby is a new feature introduced in Oracle 11g. A snapshot standby database is a type of updatable standby database that provides full data protection for a primary database. A snapshot standby database receives and archives, but does not apply, redo data from its primary database. Redo data received from the primary database is applied when the snapshot standby database is converted back into a physical standby database, after discarding all local updates made to the snapshot standby database. The main benefit of a snapshot standby database is that we can convert the physical standby database into a read-write, real-time clone of the production database, and can then temporarily use this environment for carrying out any kind of development testing, QA-type work, or to test the impact of a proposed production change on an application. The best part of this snapshot feature is that the snapshot standby database in turn uses the Flashback Database technology to create a guaranteed restore point to which the database can later be flashed back; all the features of flashback are inherent in the snapshot standby. Here we will configure a snapshot standby database.
Step 1: Create the physical standby database
Step 2 : Enable the Flashback Parameters
SQL> alter system set db_recovery_file_dest_size=4G scope=both ;
System altered.
SQL> alter system set db_recovery_file_dest='D:\standby\fra\' scope=both ;
System altered.
SQL> show parameter db_recovery
NAME                          TYPE          VALUE
----------------------------- ------------- -----------------
db_recovery_file_dest         string        D:\standby\fra\
db_recovery_file_dest_size    big integer   4G

Step 3 : Stop the media recovery process
SQL> alter database recover managed standby database cancel;
Database altered.

Step 4 : Ensure that the database is mounted, but not open.
SQL> shut immediate
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SQL> startup mount

Step 5 : Create a guaranteed restore point
SQL> create restore point snapshot_rspt guarantee flashback database;
Restore point created.

Step 6 : Perform the conversion to snapshot standby database
SQL> alter database convert to snapshot standby ;
Database altered.
SQL> select name,open_mode,database_role from v$database;
NAME     OPEN_MODE   DATABASE_ROLE
-------- ----------- -----------------
NOIDA    MOUNTED     SNAPSHOT STANDBY
SQL> alter database open;
Database altered.
SQL> select name,db_unique_name,open_mode,database_role from v$database;
NAME     DB_UNIQUE_NAME   OPEN_MODE    DATABASE_ROLE
-------- ---------------- ------------ -----------------
NOIDA    gurgoan          READ WRITE   SNAPSHOT STANDBY
Since the database is in read-write mode, we can make some changes (say, to tuning parameters), check their performance after converting back to a physical standby, and then flash back all of the changes.
SQL> select name,guarantee_flashback_database from v$restore_point;
NAME                                            GUA
----------------------------------------------- ---
SNAPSHOT_STANDBY_REQUIRED_11/18/2011 20:41:01   YES
SNAPSHOT_RSPT                                   YES
Now that the original physical standby database has been converted to a snapshot standby, changes are still happening on the primary database; those changes are shipped to the standby site but not yet applied. They accumulate on the standby site and are applied once the snapshot standby database is converted back into a physical standby database.
Step 7 : Convert the snapshot standby back to a physical standby
SQL> shut immediate
SQL> startup mount
SQL> alter database convert to physical standby ;
Database altered.
SQL> shut immediate
SQL> startup mount
SQL> alter database recover managed standby database disconnect from session;
Database altered.
SQL> alter database recover managed standby database cancel;
Database altered.
SQL> alter database open;
Database altered.
SQL> select name,open_mode,db_unique_name,database_role from v$database;
NAME    OPEN_MODE   DB_UNIQUE_NAME   DATABASE_ROLE
------- ----------- ---------------- -----------------
NOIDA   READ ONLY   gurgoan          PHYSICAL STANDBY
SQL> alter database recover managed standby database using current logfile disconnect;
Database altered.
Hence, we are finally back to a physical standby database.

How to Drop a Data Guard Configuration in Oracle 11g
Once, while configuring the Data Guard broker, I faced ORA-16625 and ORA-16501. This error occurs because the broker rejects an operation requested by the client when the database required to execute that operation is not reachable from the database where the request was made. If the request modifies the configuration, the request must be processed by the copy of the broker running on an instance of the primary database. A few days ago I configured the standby database "RED" and the broker, and later dropped it. The next time I configured the broker, the error occurred as
DGMGRL> create configuration 'dgnoida'
> as primary database is 'noida'
> connect identifier is 'noida';
Error: ORA-16501: the Data Guard broker operation failed
Error: ORA-16625: cannot reach database "red"
Failed.
To solve this issue, I removed the Data Guard broker configuration and then created it again. The steps to drop the configuration are as follows :
Step 1 : Stop the standby Data Guard broker process ( On Standby )
SQL> show parameter dg_broker
NAME                     TYPE      VALUE
------------------------ --------- --------------------------------------------------------------
dg_broker_config_file1   string    C:\APP\NEERAJS\PRODUCT\11.2.0\DBHOME_1\DATABASE\DR1NOIDA.DAT
dg_broker_config_file2   string    C:\APP\NEERAJS\PRODUCT\11.2.0\DBHOME_1\DATABASE\DR2NOIDA.DAT
dg_broker_start          boolean   TRUE
SQL> alter system set dg_broker_start=false;
System altered.
Step 2 : Disable the standby archive log destination ( On Primary )
SQL> select dest_id,destination,status from v$archive_dest where target='STANDBY';
DEST_ID   DESTINATION   STATUS
--------- ------------- --------
2         delhi         VALID
SQL> alter system set log_archive_dest_state_2=defer ;
System altered.
SQL> select dest_id,destination,status from v$archive_dest where target='STANDBY';
DEST_ID   DESTINATION   STATUS
--------- ------------- ----------
2         delhi         DEFERRED
Step 3 : On both systems, rename or drop the metadata files
SQL> show parameter dg_broker
NAME                     TYPE      VALUE
------------------------ --------- --------------------------------------------------------------
dg_broker_config_file1   string    C:\APP\NEERAJS\PRODUCT\11.2.0\DBHOME_1\DATABASE\DR1NOIDA.DAT
dg_broker_config_file2   string    C:\APP\NEERAJS\PRODUCT\11.2.0\DBHOME_1\DATABASE\DR2NOIDA.DAT
dg_broker_start          boolean   FALSE
Delete or rename the files DR1NOIDA.DAT and DR2NOIDA.DAT .

Step-By-Step Configuration Of Data Guard Broker in Oracle 11g
As we have already discussed the Data Guard broker and its benefits in an earlier post, here we will configure the Data Guard broker. Here are the steps :
Primary Database = Noida
Standby Database = Delhi
Step 1 : Check the Data Guard broker process
SQL> sho parameter dg_broker
NAME              TYPE      VALUE
----------------- --------- -------
dg_broker_start   boolean   FALSE
Step 2 : Start the Data Guard broker process on the primary database
SQL> alter system set dg_broker_start=true scope=both;
System altered.
Step 3 : Check DG_BROKER on the standby database and start it
SQL> sho parameter dg_broker
NAME              TYPE      VALUE
----------------- --------- -------
dg_broker_start   boolean   FALSE
SQL> alter system set dg_broker_start=true scope=both ;
System altered.
Step 4 : Edit the listener.ora file
Edit the listener.ora file, which includes the db_unique_name_DGMGRL.db_domain value for the GLOBAL_DBNAME, on both the primary and standby databases. To set the value, let's check the db_domain value .
SQL> show parameter db_domain
NAME        TYPE     VALUE
----------- -------- -------
db_domain   string
Since the value of db_domain is null, the value of GLOBAL_DBNAME = NOIDA_DGMGRL for the primary database, and for the standby GLOBAL_DBNAME = DELHI_DGMGRL. The primary listener.ora file is as
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = noida_DGMGRL)
      (ORACLE_HOME = C:\app\neerajs\product\11.2.0\dbhome_1)
      (SID_NAME = noida)
    )
  )
Similarly, edit the listener.ora file on the standby database .
Step 5 : Configure the Data Guard Configuration
C:\> dgmgrl
DGMGRL for 32-bit Windows: Version 11.2.0.1.0 - Production
Copyright (c) 2000, 2009, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/xxxx@noida
Connected.
DGMGRL> create configuration 'dgnoida'
> as primary database is 'noida'
> connect identifier is noida ;
Configuration "dgnoida" created with primary database "noida" .
Once the configuration is created, check the status of the configuration .
DGMGRL> show configuration
Configuration - dgnoida
  Protection Mode : MaxPerformance
  Databases :
    noida - Primary database
  Fast-Start Failover : DISABLED
Configuration Status : DISABLED
Step 6 : Add the standby database to the broker configuration
DGMGRL> add database 'delhi' as
> connect identifier is delhi
> maintained as physical ;
Database "delhi" added
DGMGRL> show configuration
Configuration - dgnoida
  Protection Mode : MaxPerformance
  Databases :
    noida - Primary database
    delhi - Physical standby database
  Fast-Start Failover : DISABLED
Configuration Status : DISABLED
Step 7 : Enable the configuration
DGMGRL> enable configuration
Enabled.
DGMGRL> show configuration
Configuration - dgnoida
  Protection Mode : MaxPerformance
  Databases :
    noida - Primary database
    delhi - Physical standby database
  Fast-Start Failover : DISABLED
Configuration Status : SUCCESS
Step 8 : View the Primary and Standby database properties
DGMGRL> show database verbose noida
Database - noida
Role : PRIMARY
Intended State : TRANSPORT-ON
Instance(s) : noida
Properties:
DGConnectIdentifier = 'noida'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = ''
LogFileNameConvert = ''
FastStartFailoverTarget = ''
StatusReport = '(monitor)'
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
HostName = 'TECH-199'
SidName = 'noida'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=TECH-199)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=noida_DGMGRL)(INSTANCE_NAME=noida)(SERVER=DEDICATED)))'
StandbyArchiveLocation = 'D:\archive\'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = 'ARC%S_%R.%T'
TopWaitEvents = '(monitor)'
Database Status = SUCCESS

DGMGRL> show database verbose delhi
Database - delhi
Role : PHYSICAL STANDBY
Intended State : APPLY-ON
Transport Lag : 0 seconds
Apply Lag : 0 seconds
Real Time Query : ON
Instance(s) : delhi
Properties:
DGConnectIdentifier = 'delhi'
ObserverConnectIdentifier = ''
LogXptMode = 'SYNC'
DelayMins = '0'
Binding = 'OPTIONAL'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = 'C:\app\neerajs\oradata\noida\, D:\app\stand\oradata\, E:\oracle\, D:\app\stand\oradata\'
LogFileNameConvert = 'C:\app\neerajs\oradata\noida\, D:\app\stand\oradata\'
FastStartFailoverTarget = ''
StatusReport = '(monitor)'
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
HostName = 'TECH-284'
SidName = 'delhi'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=TECH-284)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=delhi_DGMGRL)(INSTANCE_NAME=delhi)(SERVER=DEDICATED)))'
StandbyArchiveLocation = 'D:\app\stand\archive\'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = 'ARC%S_%R.%T'
TopWaitEvents = '(monitor)'
Database Status : SUCCESS
DGMGRL>

Switchover and Failover in Standby Oracle 11g
Data Guard uses two terms for cutting over to the standby server: switchover, which is a planned event, and failover, which is an unplanned event .
1.) Switchover : Switchover is a planned event. It is ideal when we want to upgrade the primary database or change the storage/hardware configuration (add memory, CPU, networking); we may even want to upgrade the configuration to Oracle RAC .
What happens during a switchover is the following :
1.) Notifies the primary database that a switchover is about to occur
2.) Disconnects all users from the primary database
3.) Generates a special redo record that signals the End of Redo (EOR)
4.) Converts the primary database into a standby database
5.) Once the standby database applies the final EOR record, guaranteeing that no data has been lost, converts the standby database into the primary database
The new standby database (old primary) starts to receive the redo records and continues processing until we switch back again. It is important to remember that both databases receive the EOR record, so both databases know the next redo that will be received. Although we can have users still connected to the primary database while the switchover occurs (which generally takes about 60 seconds), I personally take a small outage just to be on the safe side, in case things don't go as smoothly as hoped.
We can even switch over from a Linux database to a Windows database, or from a 64-bit to a 32-bit database, which is great if we want to migrate to a different O/S or 32/64-bit architecture; our rollback option is also very easy: simply switch back if it did not work.
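With a broker configuration such as the one built earlier in place, the switchover itself is a single command; a minimal sketch (database names as configured above):
DGMGRL> connect sys/xxxx@noida
Connected.
DGMGRL> switchover to 'delhi';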
2.) Failover : Failover is an unplanned event, where the EOR was never written by the primary database. The standby database processes whatever redo it has and then waits; data loss now depends on the protection mode in effect .
Maximum Performance - possible chance of data loss
Maximum Availability - possible chance of data loss
Maximum Protection - no data loss
We have the option to fail over manually or to make the whole process automatic. Manual failover gives the DBA maximum control over the whole process; obviously the length of the outage then depends on getting the DBA out of bed to fail over. Otherwise, the Oracle Data Guard Fast-Start Failover feature can detect a problem and fail over automatically for us. The failover process should take between 15 and 25 seconds.
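As a sketch (database name assumed, and subject to the FastStartFailoverTarget and protection-mode prerequisites), a manual failover is issued through the broker from the surviving standby, and automatic failover is enabled with an observer running:
DGMGRL> connect sys/xxxx@delhi
DGMGRL> failover to 'delhi';
DGMGRL> enable fast_start failover;
DGMGRL> start observer;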
Which Role Transition Operation Should I Use ?
When faced with the decision on which role transition is best for the given situation, we need to always choose one that best reduces downtime and has the least potential for data loss. Also to consider is how the change will affect any other standby database in the configuration. We should consider the following when making the decision on which operation to use:
What is the current state of the primary database just before the transition? Is it available? What is the state of the selected standby database to be used in the role transition at the time of transition?
Is the standby database configured as a physical or logical standby database?
The following decision tree can be used to assist when making this critical decision as to which operation to perform:
One key point to consider is that if it would be faster to repair the primary database (from failure or a simple planned hardware/software upgrade), the most efficient method would be to perform the tasks and then to bring up the primary database as quickly as possible and not perform any type of role transition. This method can impose less risk to the system and does not require any client software to be re-configured.
Another consideration involves a Data Guard configuration which includes a logical standby database. A switchover operation can be performed using either a physical or logical standby database. Take note, however, of the following issues we may run into regarding physical and logical standby configurations. If the configuration includes a primary, a physical standby, and a logical standby, and a switchover is performed on the logical standby, the physical standby will no longer be a part of the configuration and must be rebuilt. In the same scenario, if a switchover operation is performed on the physical standby, the logical standby remains in the Data Guard configuration and does not need to be rebuilt. Obviously, a physical standby is a better switchover candidate than a logical standby when multiple standby types exist in a given configuration.
Hence we finally come to the conclusion that the order to set up Data Guard is the following :
The primary database is up and running
Create a standby database
Setup the redo transport rules
Create the SRL files
Execute one of the following
SQL> alter database set standby database to maximize performance;   -- (default)
SQL> alter database set standby database to maximize availability;
SQL> alter database set standby database to maximize protection;
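The mode actually in force can then be confirmed from the primary; a quick sketch:
SQL> select protection_mode, protection_level from v$database;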
Data Protection Mode In Data Guard
Data Guard protection modes are simply a set of rules that the primary database must adhere to when running in a Data Guard configuration. A protection mode is only set on the primary database; it defines the way Oracle Data Guard will balance a Data Guard configuration between performance, availability, and protection, and hence the maximum amount of data loss that is allowed when the primary database or site fails. A Data Guard configuration will always run in one of the three protection modes listed below. Each of the three modes provides a high degree of data protection; however, they differ with regard to data availability and performance of the primary database. When selecting a protection mode, always consider the one that best meets the needs of your business: carefully weigh the need to protect the data against any loss vs. the availability and performance expectations of the primary database. Data Guard can support multiple standby databases in a single configuration; they may or may not have the same protection mode settings, depending on our requirements. The protection modes are
1.) Maximum Performance
2.) Maximum Availability
3.) Maximum Protection
1.) Maximum Performance : This is the default mode; we get the highest performance but the lowest protection. This mode requires ASYNC redo transport so that the LGWR process never waits for acknowledgment from the standby database. How much data we lose depends on the redo rate and how well our network can handle the amount of redo, also known as the transport lag. Even with a zero lag time we will still lose some data at fail-over time. We can have up to 9 physical standby databases in Oracle 10g and 30 in Oracle 11g, and we will use the asynchronous transport (ASYNC) with no affirmation of the standby I/O (NOAFFIRM). We can use this anywhere in the world, but bear in mind the network latency and make sure it can support our redo rate. While it is not mandatory to have standby redo logs (SRL) in this mode, it is advised to do so. The SRL files need to be the same size as the online redo log files (ORL) .
The following table describes the attributes that should be defined for the LOG_ARCHIVE_DEST_n initialization parameter for the standby database destination to participate in Maximum Performance mode.
For example :
log_archive_dest_2='service=red ARCH NOAFFIRM' or log_archive_dest_2='service=red LGWR ASYNC NOAFFIRM'
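These attributes are normally put in place with ALTER SYSTEM; a minimal sketch for the ASYNC case (service name taken from the example above):
SQL> alter system set log_archive_dest_2='service=red LGWR ASYNC NOAFFIRM' scope=both;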
2.) Maximum Availability : Its first priority is to be available and its second priority is zero-loss protection; thus it requires the SYNC redo transport. This is the middle of the range: it offers strong protection but not at the expense of causing problems on the primary database. However, we must remember that it is still possible to lose data: if our network was out for a period of time, the standby has not had a chance to resynchronize, and the primary then went down, there will be data loss. Again, we can have up to 9 physical standby databases in Oracle 10g and 30 in Oracle 11g, and we will use the synchronous transport (SYNC) with affirmation of the standby I/O (AFFIRM) and SRL files. In the event that the standby server is unavailable, the primary will wait the time specified in the NET_TIMEOUT parameter before giving up on the standby server and allowing the primary to continue to process. Once the connection has been re-established, the primary will automatically resynchronize the standby database. When the NET_TIMEOUT expires, the LGWR process disconnects from the LNS process, acknowledges the commit and proceeds without the standby; processing continues until the current ORL is complete and the LGWR cycles into a new ORL. A new LNS process is started and an attempt to connect to the standby server is made; if it succeeds, the new ORL is sent as normal, and if not, the LGWR disconnects again until the next log switch. The whole process repeats at every log switch, and hopefully the standby database becomes available at some point. Also, in the background, if any archive logs have been created during this time, the ARCH process will continually ping the standby database until it comes online. We might have noticed there is a potential loss of data if the primary goes down while the standby database has also been down for a period of time with no resynchronization; this is similar to Maximum Performance, but we do give the standby server a chance to respond using the timeout. The minimum requirements are described in the following table :
For example :
log_archive_dest_2='service=red LGWR SYNC AFFIRM'
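The NET_TIMEOUT behaviour described above is governed by an attribute of the same parameter; a sketch with a 30-second timeout (values assumed):
SQL> alter system set log_archive_dest_2='service=red LGWR SYNC AFFIRM NET_TIMEOUT=30' scope=both;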
3.) Maximum Protection : This offers the maximum protection, even at the expense of the primary database: there is no data loss. This mode uses the SYNC redo transport, and the primary will not issue a commit acknowledgment to the application unless it receives an acknowledgment from at least one standby database; basically, the primary will stall and eventually abort, preventing any unprotected commits from occurring. This guarantees complete data protection. In this setup it is advised to have two separate standby databases at different locations with no single points of failure (SPOFs); they should not use the same network infrastructure, as this would be a SPOF. The minimum requirements are described in the following table
For example :
log_archive_dest_2='service=red LGWR SYNC AFFIRM'
Finally, the protection mode will be changed from its default of Maximum Performance to Maximum Protection. The protection modes run in order from the highest (most data protection) to the lowest (least data protection).
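A sketch of that change (to raise the mode to Maximum Protection the primary must be mounted, not open):
SQL> shutdown immediate
SQL> startup mount
SQL> alter database set standby database to maximize protection;
SQL> alter database open;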
Each of the Data Guard data protection modes requires that at least one standby database in the configuration meet the minimum set of requirements listed in the table below.
Data Guard Architecture Oracle 11g Part-III
The redo data transmitted from the primary database is written to the standby redo log on the standby database. Apply services automatically apply the redo data on the standby database to maintain consistency with the primary database. They also allow read-only access to the data. The main difference between physical and logical standby databases is the manner in which apply services apply the archived redo data. There are two methods of applying redo, i.e., 1.) Redo Apply (physical standby) and 2.) SQL Apply (logical standby). They both have the same common features:
Both synchronize with the primary database
Both can prevent modifications to the data
Both provide a high degree of isolation between the primary and the standby database
Both can quickly transition the standby database into the primary database
Both offer a productive use of the standby database which will have no impact on the primary database
1.) Redo Apply (Physical Standby) : Redo apply is basically a block-by-block physical replica of the primary database. Redo apply uses media recovery to read records from the SRL into memory and apply change vectors directly to the standby database. Media recovery does parallel recovery for very high performance; it comprises a media recovery coordinator (MRP0) and multiple parallel apply processes (PR0n). The coordinator manages the recovery session, merges the redo by SCN from multiple instances (if in a RAC environment) and parses redo into change mappings partitioned by the apply process. The
apply processes read data blocks, assemble redo changes from mappings and then apply redo changes to the data blocks. This method allows us to use the standby database in a read-only fashion; Active Data Guard solves the read consistency problem of previous releases by the use of a "query" SCN. The media recovery process on the standby database advances the query SCN after all dependent changes in a transaction have been fully applied. The query SCN is exposed to the user via the current_scn column of the v$database view. Read-only users will only be able to see data up to the query SCN, and thus the standby database can be open in read-only mode while media recovery is active, which makes this an ideal reporting database.
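A sketch of putting a physical standby into that state (run on the standby; Active Data Guard license assumed), after which the query SCN can be watched:
SQL> alter database recover managed standby database cancel;
SQL> alter database open read only;
SQL> alter database recover managed standby database using current logfile disconnect;
SQL> select current_scn from v$database;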
We can use SYNC or ASYNC, and it is isolated from physical I/O corruptions; corruption detection checks occur at the following key interfaces: on the primary during redo transport - LGWR, LNS, ARCH use the DB_ULTRA_SAFE parameter; on the standby during redo apply - RFS, ARCH, MRP, DBWR use the DB_BLOCK_CHECKSUM and DB_LOST_WRITE_PROTECT parameters (a parameter sketch follows the feature list below). If Data Guard detects any corruption it will automatically fetch new copies of the data from the primary using the gap resolution process, in the hope that the original data is free of corruption. The key features of this solution are :
Complete application and data transparency - no data type or other restrictions
Very high performance, least managed complexity and fewest moving parts
End-to-end validation before applying, including corruptions due to lost writes
Able to be utilized for up-to-date read-only queries and reporting while providing DR
Able to execute rolling database upgrades beginning with Oracle Database 11g
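As promised above, a minimal sketch of the corruption-detection parameters (values assumed; note that setting DB_ULTRA_SAFE adjusts the other checking parameters automatically):
SQL> alter system set db_block_checksum=FULL scope=both;
SQL> alter system set db_lost_write_protect=TYPICAL scope=both;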
2.) SQL Apply (Logical Standby) : SQL apply uses the logical standby process (LSP) to coordinate the apply of changes to the standby database. SQL apply requires more processing than redo apply: the processes that make up SQL apply read the SRL and "mine" the redo by converting it to logical change records, then build SQL transactions and apply the SQL to the standby database. Because there are more moving parts, it requires more CPU, memory and I/O than redo apply .
SQL apply does not support all data types, such as XML in object relational format and Oracle-supplied types such as Oracle Spatial, Oracle interMedia and Oracle Text . The benefit of SQL apply is that the database is open read-write while apply is active; while we cannot make any changes to the replicated data, we can insert, modify and delete data in local tables and schemas that have been added to the database, and we can even create materialized views and local indexes. This makes it ideal for reporting tools and the like. The key features of this solution are :
A standby database that is opened for read-write while SQL apply is active
A guard setting that prevents the modification of data that is being maintained by the SQL apply
Able to execute rolling database upgrades beginning with Oracle Database 11g using the KEEP IDENTITY clause

Data Guard Architecture Oracle 11g Part-II
LNS (log-write network-server) and ARCH (archiver) processes running on the primary database select archived redo logs and send them to the standby database, where the RFS (remote file server) background process within the Oracle instance performs the task of receiving archived redo logs originating from the primary database . The LNS process supports two modes: 1.) Synchronous and 2.) Asynchronous.
1.) Synchronous Mode : Synchronous transport (SYNC) is also referred to as the "zero data loss" method because the LGWR is not allowed to acknowledge that a commit has succeeded until the LNS can confirm that the redo needed to recover the transaction has been written at the standby site. In the diagram below, the phases of a transaction are :
The user commits a transaction, creating a redo record in the SGA; the LGWR reads the redo record from the log buffer, writes it to the online redo log file and waits for confirmation from the LNS. The LNS reads the same redo record from the buffer and transmits it to the standby database using Oracle Net Services; the RFS receives the redo at the standby database and writes it to the SRL. When the RFS receives a write-complete from the disk, it transmits an acknowledgment back to the LNS process on the primary database, which in turn notifies the LGWR that the transmission is complete; the LGWR then sends a commit acknowledgment to the user. This setup really does depend on network performance and can have a dramatic impact on the primary database: network latency feeds directly into response times. The impact can be seen in the wait event "LNS wait on SENDREQ" found in the v$system_event dynamic performance view.
2.) Asynchronous Mode : Asynchronous transport (ASYNC) differs from SYNC in that it eliminates the requirement that the LGWR wait for an acknowledgment from the LNS, creating "near zero" performance impact on the primary database regardless of the distance between the primary and standby locations. The LGWR will continue to acknowledge commit success even if the bandwidth prevents the redo of previous transactions from being sent to the standby database immediately. If the LNS is unable to keep pace and the log buffer is recycled before the redo is sent to the standby, the LNS automatically transitions to reading and sending from the log file instead of the log buffer in the SGA. Once the LNS has caught up, it switches back to reading directly from the buffer in the SGA . The log buffer hit ratio is tracked via the view X$LOGBUF_READHIST; a low hit ratio indicates that the LNS is reading from the log file instead of the log buffer, and if this happens, try increasing the log buffer size.
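Both symptoms called out above can be checked from SQL; a minimal sketch (the log_buffer value is assumed, and being a static parameter it only takes effect after a restart):
SQL> select event, total_waits, time_waited from v$system_event where event = 'LNS wait on SENDREQ';
SQL> alter system set log_buffer=67108864 scope=spfile;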
The drawback of ASYNC is the increased potential for data loss: if a failure destroys the primary database before the transport lag is reduced to zero, any committed transactions that are part of the transport lag are lost. So again, make sure that the network bandwidth is adequate and that latency is as low as possible.
A log file gap occurs whenever a primary database continues to commit transactions while the LNS process has ceased transmitting redo to the standby database (network issues). The primary database continues writing to the current log file, fills it, and then switches to a new log file; then archiving kicks in and archives the file, and before we know it there are a number of archive and log files that need to be processed by the LNS, basically creating a large log file gap. Data Guard uses an ARCH process on the primary database to continuously ping the standby database during the outage; when the standby database eventually comes back, the ARCH process queries the standby control file (via the RFS process) to determine the last complete log file that the standby received from the primary. The ARCH process will then transmit the missing files to the standby database using additional ARCH processes. At the very next log switch, the LNS will attempt and succeed in making a connection to the standby database and will begin transmitting the current redo while the ARCH processes resolve the gap in the background. Once the standby apply process is able to catch up to the current redo, the apply process automatically transitions out of reading the archived redo logs and into reading the current SRL. The whole process can be seen in the diagram below :
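Whether the standby is currently missing a range of log files can be checked from the standby itself; a quick sketch:
SQL> select thread#, low_sequence#, high_sequence# from v$archive_gap;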
Data Guard Architecture Oracle 11g Part-I
I have decided to post the architecture of the standby database; although there is lots of stuff on the Internet, most of it is lengthy and not so juicy. I read some good notes on standby database architecture and decided to post them, though I have modified a few topics to make them clearer, juicier and more interesting. Hope you all find it helpful and enjoy it. Oracle Data Guard is the most effective and comprehensive data availability, data protection and disaster recovery solution for enterprise databases. It provides a method for customers to actively utilize their disaster recovery configuration for read-only queries and reports while it is in the standby role. Additionally, a standby database can be used to offload backups from production databases, or for Quality Assurance and other test activities that require read-write access to an exact replica of production. These capabilities are unique to Oracle . Oracle Data Guard is the management, monitoring, and automation software infrastructure that creates, maintains, and monitors one or more standby databases to protect enterprise data from failures, disasters, errors, and corruptions. Data Guard is basically "ship redo and then apply redo"; as we know, redo is the information needed to recover a database transaction. A production database, referred to as the primary database, transmits redo to one or more independent replicas referred to as standby databases. Standby databases are in a continuous state of recovery, validating and applying redo to maintain synchronization with the primary database. A standby database will also automatically re-synchronize if it becomes temporarily disconnected from the primary due to power outages, network problems, etc. The diagram below shows the overview of Data Guard: firstly, the redo transport services transmit redo data from the primary to the standby as it is generated; secondly, apply services
apply the redo data and update the standby database files, thirdly independently of Data Guard the database writer process updates the primary database files and lastly Data Guard will automatically re-synchronize the standby database following power or network outages using redo data that has been archived at the primary.
Redo records contain all the information needed to reconstruct changes made to a database. During recovery, the database will read the change vectors in the redo records and apply the changes to the relevant blocks. Redo records are buffered in a circular fashion in the redo log buffer of the SGA; the log writer process (LGWR) is the background process that handles redo log buffer management. The LGWR, at specific times, writes redo log entries into a sequential file (the online redo log file) to free space in the buffer. The LGWR writes the following:
1.) A commit record : Whenever a transaction is committed, the LGWR writes the transaction redo records from the buffer to the log file and assigns a system change number (SCN); only when this process is complete is the transaction said to be committed.
2.) Redo log buffers : If the redo log buffer becomes one-third full, or if 3 seconds have passed since the last time the LGWR wrote to the log file, all redo entries in the buffer will be written to the log file. This means that redo records can be written to the log file before the transaction has been committed; if necessary, media recovery will roll back these changes using the undo that is also part of the redo entry. Remember that the LGWR can write to the log file using "group" commits: basically, an entire list of redo entries of waiting transactions (not yet committed) can be written to disk in one operation, thus reducing I/O. Even though the data buffer cache has not been written to disk, Oracle guarantees that no transaction will be lost, because the redo log has successfully saved any changes. Data Guard Redo Transport Services coordinate the transmission of redo from the primary database to the standby database: at the same time the LGWR is processing redo, a separate Data Guard process called the Log Network Server (LNS) reads from the redo buffer in the SGA and passes redo to Oracle Net Services for transmission to a
standby database. It is possible to direct the redo data to nine standby databases; we can also use Oracle RAC, and they don't all need to be a RAC setup. The process Remote File Server (RFS) receives the redo from the LNS and writes it to a sequential file called a standby redo log file (SRL).
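The redo-handling processes described in this series (RFS, MRP, ARCH, etc.) can be watched on the standby; a minimal sketch:
SQL> select process, status, thread#, sequence# from v$managed_standby;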
Open Standby in Read-write Mode When Primary is Lost
There may be a scenario where the primary database is lost and we are left with only the standby database. In this scenario we have to open the standby database in read-write mode. Below are the steps to convert the standby database into a primary database.
1.) Open the standby database in mount state :
SQL> select name,open_mode from v$database;
NAME    OPEN_MODE
------- ----------
NOIDA   READ ONLY
SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.
Total System Global Area  263639040 bytes
Fixed Size                  1373964 bytes
Variable Size             230689012 bytes
Database Buffers           25165824 bytes
Redo Buffers                6410240 bytes
Database mounted.
SQL> select open_mode, protection_mode, database_role from v$database ;
OPEN_MODE   PROTECTION_MODE       DATABASE_ROLE
----------- --------------------- -----------------
MOUNTED     MAXIMUM PERFORMANCE   PHYSICAL STANDBY
2.) Recover any remaining archive logs :
SQL> recover standby database;
ORA-01153: an incompatible media recovery is active
To solve this issue, we cancel the media recovery by using the command below.
SQL> alter database recover managed standby database cancel;
Database altered.
SQL> recover standby database
ORA-00279: change 2698969 generated at 10/05/2011 16:46:58 needed for thread
ORA-00289: suggestion : D:\ARCHIVE\ARC0000000133_0761068614.0001
ORA-00280: change 2698969 for thread 1 is in sequence #133
Specify log: {=suggested | filename | AUTO | CANCEL}
cancel
Media recovery cancelled.
3.) Finish the recovery process :
The command below performs the role transition as quickly as possible, with little or no data loss and without rendering other standby databases unusable :
SQL> alter database recover managed standby database finish;
Database altered.
4.) Activate the standby database :
SQL> alter database activate physical standby database ;
Database altered.
5.) Check the new status :
SQL> select open_mode, protection_mode, database_role from v$database ;
OPEN_MODE   PROTECTION_MODE       DATABASE_ROLE
----------- --------------------- -----------------
MOUNTED     MAXIMUM PERFORMANCE   PHYSICAL STANDBY
6.) Open the database :
SQL> alter database open ;
Database altered.
SQL> select open_mode, protection_mode, database_role from v$database ;
OPEN_MODE    PROTECTION_MODE       DATABASE_ROLE
------------ --------------------- -----------------
READ WRITE   MAXIMUM PERFORMANCE   PHYSICAL STANDBY
How to Register a Listener in the Database ?
The listener is a separate process that runs on the database server computer. It receives incoming client connection requests and manages the traffic of these requests to the database server. There are two methods by which a listener comes to know of a database instance; in Oracle terminology, this is referred to as "Registering with the Listener". The two methods are
1.) Static Instance Registration
2.) Dynamic Instance Registration
First we will discuss Static Instance Registration :
This is the most basic method of registration. We can either add the entries in the $ORACLE_HOME\NETWORK\ADMIN\listener.ora file or use the GUI, i.e., Net Manager. The configuration inside the listener.ora file looks like :
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = noida)
      (ORACLE_HOME = C:\app\neerajs\product\11.2.0\dbhome_1)
      (SID_NAME = noida)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = hyd)
      (ORACLE_HOME = C:\app\neerajs\product\11.2.0\dbhome_1)
      (SID_NAME = hyd)
    )
  )
LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = tech-199)(PORT = 1521))
  )
and when we check the registration, it shows a status of UNKNOWN :
C:\>lsnrctl
LSNRCTL for 32-bit Windows: Version 11.2.0.1.0 - Production on 05-OCT-2011 15:26:27
Copyright (c) 1991, 2010, Oracle. All rights reserved.
Welcome to LSNRCTL, type "help" for information.
LSNRCTL> status
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=tech-199)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for 32-bit Windows: Version 11.2.0.1.0 - Production
Start Date                28-SEP-2011 15:03:39
Uptime                    7 days 0 hr. 22 min. 52 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   C:\app\neerajs\product\11.2.0\dbhome_1\network\admin\listener.ora
Listener Log File         c:\app\neerajs\diag\tnslsnr\tech-199\listener\alert\log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=tech-199)(PORT=1521)))
Services Summary...
Service "hyd" has 1 instance(s).
Instance "hyd", status UNKNOWN, has 1 handler(s) for this service...
Service "noida" has 1 instance(s).
Instance "noida", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
LSNRCTL>
The status is UNKNOWN because there is no mechanism to guarantee that the specified instance even exists: the listener simply assumes the instance will be there whenever a request arrives. It does not have information about the status of the current instance.
Now we will check Dynamic Instance Registration :
Dynamic Instance Registration : This dynamic registration feature is called service registration. The registration is performed by the PMON process, an instance background process of each database instance that has the necessary configuration in the database initialization parameter file. Dynamic service registration does not require any configuration in the listener.ora file. Service registration offers the following benefits :
1.) Simplified configuration : Service registration reduces the need for the SID_LIST_listener_name parameter setting, which specifies information about the databases served by the listener, in the listener.ora file.
Note : The SID_LIST_listener_name parameter is still required if we are using Oracle Enterprise Manager to manage the database. 2.) Connect-time failover : Because the listener always knows the state of the instances, service registration facilitates automatic failover of the client connect request to a different instance if one instance is down. In a static configuration model, a listener would start a dedicated server upon receiving a client request. The server would later find out that the instance is not up, causing an "Oracle not available" error message. 3.) Connection load balancing : Service registration enables the listener to forward client connect requests to the least loaded instance and dispatcher or dedicated server. Service registration balances the load across the service handlers and nodes. To ensure service
registration works properly, the initialization parameter file should contain the following parameters:
SERVICE_NAMES for the database service name
INSTANCE_NAME for the instance name
For example:
SERVICE_NAMES=noida.TECH-199
INSTANCE_NAME=noida
Let's have a demo of the dynamic listener. The listener is quite capable of running without a listener.ora file at all: it will simply start and run with all default values. Here I have renamed the listener.ora file, stopped and started the listener, and found that the listener supports no services. Check below:
LSNRCTL> stop
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=tech-199)(PORT=1521)))
The command completed successfully.
Now start the listener
LSNRCTL> start
Starting tnslsnr: please wait...
TNSLSNR for 32-bit Windows: Version 11.2.0.1.0 - Production
Log messages written to c:\app\neerajs\diag\tnslsnr\tech-199\listener\alert\log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=tech-199)(PORT=1521)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=tech-199)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for 32-bit Windows: Version 11.2.0.1.0 - Production
Start Date                05-OCT-2011 16:21:30
Uptime                    0 days 0 hr. 0 min. 7 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Log File         c:\app\neerajs\diag\tnslsnr\tech-199\listener\alert\log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=tech-199)(PORT=1521)))
The listener supports no services
The command completed successfully
Here, we find that the listener does not support any services, since it did not find the listener.ora file; and if we try to connect to the instance, it will throw an error, i.e., ORA-12514 :
C:\> tnsping noida
TNS Ping Utility for 32-bit Windows: Version 11.2.0.1.0 - Production on 05-OCT-2011 16:23:03
Copyright (c) 1997, 2010, Oracle. All rights reserved.
Used parameter files:
C:\app\neerajs\product\11.2.0\dbhome_1\network\admin\sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.100.0.112)(PORT = 1521))) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = noida)))
OK (40 msec)
Now, we try to connect to the instance "NOIDA"
C:\> sqlplus sys/xxxx@noida as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Wed Oct 5 16:23:45 2011
Copyright (c) 1982, 2010, Oracle. All rights reserved.
ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
The tnsping proves that our tnsnames.ora resolution is correct, but the connection throws the error because the listener does not know anything about the service "NOIDA". Let's start the instance and check again :
C:\> set ORACLE_SID=noida
SQL> startup
ORACLE instance started.
Total System Global Area  263639040 bytes
Fixed Size                  1373964 bytes
Variable Size             222300404 bytes
Database Buffers           33554432 bytes
Redo Buffers                6410240 bytes
Database mounted.
Database opened.
Now check the listener status again :
LSNRCTL> status
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=tech-199)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for 32-bit Windows: Version 11.2.0.1.0 - Production
Start Date                05-OCT-2011 16:21:30
Uptime                    0 days 0 hr. 19 min. 21 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Log File         c:\app\neerajs\diag\tnslsnr\tech-199\listener\alert\log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=tech-199)(PORT=1521)))
Services Summary...
Service "noida.TECH-199" has 1 instance(s).
Instance "noida", status READY, has 1 handler(s) for this service...
Service "noidaXDB.TECH-199" has 1 instance(s).
Instance "noida", status READY, has 1 handler(s) for this service...
Service "noida_DGB.TECH-199" has 1 instance(s).
Instance "noida", status READY, has 1 handler(s) for this service...
The command completed successfully
Here we observe that once the instance is started and we re-check, the listener now knows of the service "NOIDA", with a status of READY. This obviously did not come from listener.ora, as the file is renamed. Notice also that, unlike the static registration, this time the status is READY: the listener knows the instance is ready because the instance itself told the listener it was ready. Now again connecting to the instance :
C:\>sqlplus sys/xxxx@noida as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Tue Oct 4 18:14:28 2011
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL>
By default, the PMON process registers service information with its local listener on the default local address of TCP/IP, port 1521. As long as the listener configuration is synchronized with the database configuration, PMON can register service information with a nondefault local listener or a remote listener on another node. During service registration, PMON provides the listener with the following information:
- Name of the associated instance
- Current load and maximum load on the instance
- Names of the DB services provided by the database
- Information about dedicated servers and dispatchers (depends on the database server mode, i.e., dedicated/shared server mode)
The PMON process wakes up every 60 seconds and provides information to the listener. If any problem arises and the PMON process fails, then it is not possible to register information with the listener periodically. In this case we can do a 'Manual service registration' using the command:
SQL> ALTER SYSTEM REGISTER;

DataGuard Broker And its Benefits
The Oracle Data Guard broker is a distributed management framework that automates and centralizes the creation, maintenance, and monitoring of Data Guard configurations. The following describes some of the operations the broker automates and simplifies :
I.) Adding additional new or existing (physical, snapshot, logical, RAC or non-RAC) standby databases to an existing Data Guard configuration, for a total of one primary database, and from 1 to 30 standby databases (in Oracle 11g) in the same configuration.
II.) Managing an entire Data Guard configuration, including all databases, redo transport services, and log apply services, through a client connection to any database in the configuration. III.) Managing the protection mode for the broker configuration. IV.) Invoking switchover or failover with a single command to initiate and control complex role changes across all databases in the configuration. V.) Configuring failover to occur automatically upon loss of the primary database, increasing availability without manual intervention. VI.) Monitoring the status of the entire configuration, capturing diagnostic information, reporting statistics such as the redo apply rate and the redo generation rate, and detecting problems quickly with centralized monitoring, testing, and performance tools. Oracle Data Guard Broker Diagram : The below diagram will help us to understand Data Guard Broker.
We can perform all management operations locally or remotely through the broker's easy-to-use interfaces: the Data Guard management pages in Oracle Enterprise Manager, which is the broker's graphical user interface (GUI), and the Data Guard command-line interface called DGMGRL.
Benefits of Data Guard Broker : The broker's interfaces improve usability and centralize management and monitoring of the Data Guard configuration. The benefits are as follows :
1.) Disaster protection : By automating many of the manual tasks required to configure and monitor a Data Guard configuration, the broker enhances the high availability, data protection, and disaster protection capabilities that are inherent in Oracle Data Guard. Access is possible through a client to any system in the Data Guard configuration,
eliminating any single point of failure. If the primary database fails, the broker automates the process for any one of the standby databases to replace the primary database and take over production processing. The database availability that Data Guard provides makes it easier to protect our data.
2.) Higher availability and scalability : While the Oracle Data Guard broker enhances disaster protection by maintaining transactionally consistent copies of the primary database, Data Guard can be configured with Oracle high availability solutions such as Oracle Real Application Clusters (RAC) databases.
3.) Automated creation of a Data Guard configuration : The broker helps us to logically define and create a Data Guard configuration consisting of a primary database and (physical or logical, snapshot, RAC or non-RAC) standby databases. The broker automatically communicates between the databases in a Data Guard configuration using Oracle Net Services. The databases can be local or remote, connected by a LAN or geographically dispersed over a WAN.
4.) Easy configuration of additional standby databases : After we create a Data Guard configuration consisting of a primary and a standby database, we can add up to eight new or existing physical, snapshot, or logical standby databases to each Data Guard configuration. Oracle Enterprise Manager provides an Add Standby Database wizard to guide us through the process of adding more databases.
5.) Simplified, centralized, and extended management : We can issue commands to manage many aspects of the broker configuration. These include:
I.> Simplify the management of all components of the configuration, including the primary and standby databases, redo transport services, and log apply services.
II.> Coordinate database state transitions and update database properties dynamically, with the broker recording the changes in a broker configuration file that includes profiles of all the databases in the configuration. The broker propagates the changes to all databases in the configuration and their server parameter files.
6.) Simplified switchover and failover operations : The broker simplifies switchovers and failovers by allowing us to invoke them using a single key click in Oracle Enterprise Manager or a single command at the DGMGRL command-line interface. Fast-start failover can be configured to occur with no data loss or with a configurable amount of data loss.
7.) Built-in monitoring and alert and control mechanisms : The broker provides built-in validation that monitors the health of all of the databases in the configuration. From any system in the configuration connected to any database, we can capture diagnostic information and detect obvious and subtle problems quickly with centralized monitoring, testing, and performance tools.
8.) Transparent to applications : Use of the broker is possible for any database because the broker works transparently with applications; no application code changes are required to accommodate a configuration that we manage with the broker.
Relationship of Objects Managed by the Data Guard Broker :
ORA-16789: Standby Redo Logs Not Configured
This is a very common error which occurs during switchover of the standby database with the Data Guard broker. In my case, when I switched over to the standby, the error occurred as below :
C:\>dgmgrl
DGMGRL for 32-bit Windows: Version 11.2.0.1.0 - Production
Copyright (c) 2000, 2009, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/xxxx@noida
Connected.
DGMGRL> switchover to 'red';
Performing switchover NOW, please wait...
Error: ORA-16789: standby redo logs not configured
Failed.
Unable to switchover, primary database is still "noida"
This error generally occurs because standby redo logs were not configured for the database.
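Whether any standby redo logs exist can be confirmed quickly; a minimal sketch:
SQL> select group#, thread#, bytes, status from v$standby_log;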
Therefore, standby redo logs are required when the redo transport mode is set to SYNC or ASYNC. Hence, to solve this, we have to add standby redo log files on the primary database. Below is the command to fix this issue.
On the Primary Database :
SQL> alter database add standby logfile group 6 'C:\APP\NEERAJS\ORADATA\NOIDA\REDO06.LOG' size 50m ;
Database altered.
Now it does not give any further error .

How to Drop/Rename a Standby Redolog File in Oracle 11g
While working with the Data Guard broker, we may need to drop standby redo logs while switching over to the standby. It seems an easy task, but it is a bit tricky. Below are the steps to drop a redo log file from the standby database :
On the Standby Database :
SQL> select member,type from v$logfile;
MEMBER                              TYPE
----------------------------------- --------
D:\APP\STANDBY\ORADATA\REDO03.LOG   ONLINE
D:\APP\STANDBY\ORADATA\REDO02.LOG   ONLINE
D:\APP\STANDBY\ORADATA\REDO01.LOG   ONLINE
D:\APP\STANDBY\ORADATA\REDO04.LOG   STANDBY
D:\APP\STANDBY\ORADATA\REDO05.LOG   STANDBY
Here, we have to drop the two standby redo log files .
SQL> alter database drop standby logfile group 4;
alter database drop standby logfile group 4
*
ERROR at line 1:
ORA-01156: recovery or flashback in progress may need access to files
To solve this issue, we cancel the managed recovery session, set "standby_file_management" to manual, and then drop the standby redo log files as
SQL> alter database recover managed standby database cancel ;
Database altered.
SQL> alter system set standby_file_management='MANUAL' ;
System altered.
SQL> alter database drop standby logfile group 4;
Database altered.
SQL> alter database drop standby logfile group 5;
Database altered.
If the status of a standby redo log shows "clearing_current" then we cannot drop it; in that case we have to sync with the primary and clear the log before dropping it, as
SQL> alter database clear logfile group n;
Once the standby redo logs are dropped, we put the standby back into recovery.
SQL> alter system set standby_file_management='AUTO' ;
System altered.
SQL> alter database recover managed standby database disconnect from session ;
Database altered.

Switchover to Physical Standby Database in Oracle 11g
Once the standby database is configured and works fine, we can switch over to the standby database, for testing purposes, to reduce primary database downtime. The primary database may need downtime for many reasons, such as OS upgrades, hardware upgrades and many other issues. Whenever we switch the primary database over to the standby database, there is no loss of data during the switchover. Once the maintenance of the primary database is over, we can switch back again. In this scenario, the primary database is "NOIDA" and the standby database is "RED". Here I will switch the primary database over to the standby database, i.e., from "noida" to "red". Before switching, we should check some prerequisites .
Step 1 : Verify whether it is possible to perform a switchover
On the current primary database, query the "switchover_status" column of the V$DATABASE fixed view to verify it is possible to perform a switchover.
SQL> select switchover_status from v$database ;
SWITCHOVER_STATUS
--------------------
TO STANDBY
The TO STANDBY value in the "switchover_status" column indicates that it is possible to switch the primary database to the standby role. If the TO STANDBY value is not displayed, then verify that the configuration is functioning correctly (for example, verify all "log_archive_dest_n" parameter values are specified correctly). If the value in the switchover_status column is SESSIONS ACTIVE or FAILED DESTINATION, further action is needed before the switchover (covered in a separate post).
Step 2 : Check that there are no active users connected to the databases.
SQL> select distinct osuser,username from v$session;
Step 3 : Switch the current online redo log file on the primary database and verify that it has been applied
SQL> alter system switch logfile ;
System altered.
Step 4 : Connect to the primary database and initiate the switchover
C:\>sqlplus sys/xxxx@noida as sysdba
SQL> alter database commit to switchover to physical standby;
Database altered.
Now the primary database has been converted into a standby database. The control file is backed up to the current SQL session trace file before the switchover, which makes it possible to reconstruct a current control file if necessary. If we try to perform a switchover while other instances are running, we will get ORA-01105 as follows:
SQL> alter database commit to switchover to standby ;
ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY
*
ORA-01105: mount is incompatible with mounts by other instances
In order to perform a switchover in that case, run the below command on the primary database:
SQL> alter database commit to switchover to physical standby with session shutdown ;
The above statement first terminates all active sessions by closing the primary database. Then any non-archived redo log files are transmitted and applied to the standby database. Apart from that, an end-of-redo marker is added to the header of the last log file that was archived. A backup of the current control file is created, and the current control file is converted into a standby control file.
Step 5 : Shut down and restart the old primary instance (NOIDA).
SQL> shutdown immediate;
SQL> startup mount ;
Step 6 : Verify the switchover status in the v$database view. After we change the primary database to the physical standby role and the switchover notification is received by the standby databases in the configuration, we should verify whether the switchover notification was processed by the target standby database by querying the "switchover_status" column of the v$database fixed view on the target standby database.
On old primary database (NOIDA):
SQL> select name, open_mode, db_unique_name, switchover_status from v$database;

NAME    OPEN_MODE   DB_UNIQUE_NAME   SWITCHOVER_STATUS
-----   ---------   --------------   -----------------
NOIDA   MOUNTED     noida            TO PRIMARY

On old standby database (RED):
SQL> select name, open_mode, db_unique_name, switchover_status from v$database;

NAME    OPEN_MODE   DB_UNIQUE_NAME   SWITCHOVER_STATUS
-----   ---------   --------------   -----------------
NOIDA   MOUNTED     red              TO PRIMARY

Step 7 : Switch the target physical standby database role to the primary role. We can switch a physical standby database from the standby role to the primary role when the standby database instance is either mounted in Redo Apply mode or open for read-only access. It must be in one of these modes so that the primary database switchover request can be coordinated. After the standby database is in an appropriate mode, issue the following SQL statement on the physical standby database that we want to change to the primary role:
SQL> alter database commit to switchover to primary ;
Database altered.
SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area  263639040 bytes
Fixed Size                  1373964 bytes
Variable Size             213911796 bytes
Database Buffers           41943040 bytes
Redo Buffers                6410240 bytes
Database mounted.
Database opened.
Step 8 : Check the new primary database (RED) and switch the logfile.
SQL> select open_mode from v$database;
OPEN_MODE
---------------
READ WRITE
Note : It's a good idea to perform a log switch on the new primary.
SQL> alter system switch logfile;
System altered.
Step 9 : Open the new standby database (NOIDA) in read-only mode.
SQL> alter database open;
Database altered.
SQL> select name, open_mode, db_unique_name, switchover_status from v$database;

NAME    OPEN_MODE   DB_UNIQUE_NAME   SWITCHOVER_STATUS
-----   ---------   --------------   -----------------
NOIDA   READ ONLY   noida            RECOVERY NEEDED
SQL> alter database recover managed standby database disconnect from session;
Database altered.
SQL> select name, open_mode from v$database;
NAME    OPEN_MODE
-----   --------------------
NOIDA   READ ONLY WITH APPLY

The switchover_status column of v$database can have the following values:
Not Allowed : Either this is a standby database and the primary database has not been switched first, or this is a primary database and there are no standby databases.
Sessions Active : Indicates that there are active SQL sessions attached to the primary or standby database that need to be disconnected before the switchover operation is permitted.
Switchover Pending : This is a standby database and the primary database switchover request has been received but not processed.
Switchover Latent : The switchover was in pending mode, but did not complete and went back to the primary database.
To Primary : This is a standby database, with no active sessions, that is allowed to switch over to a primary database.
To Standby : This is a primary database, with no active sessions, that is allowed to switch over to a standby database.
Recovery Needed : This is a standby database that has not received the switchover request.
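As a quick sanity check after any role change, the current role and the switchover status can be read together; a simple sketch:
SQL> select db_unique_name, database_role, switchover_status from v$database;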
ORA-10456: cannot open standby database; media recovery session may be in progress
Once, while starting my standby database, I found that the database would not open in normal mode. It threw the error ORA-10456: cannot open standby database. After some R&D and googling, I came to the conclusion that this error generally occurs because a media recovery or RMAN session may have been in progress on a mounted instance of a standby database when an attempt was made to open the standby database. Hence, to solve this issue we have to cancel any conflicting recovery session and then open the standby database. Here is the issue I experienced.
C:\>sqlplus sys/xxxx@red as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Mon Oct 3 12:47:32 2011
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select * from hr.aa;
select * from hr.aa
*
ERROR at line 1:
ORA-01219: database not open: queries allowed on fixed tables/views only

SQL> select name, open_mode from v$database;
NAME    OPEN_MODE
-----   ---------
NOIDA   MOUNTED

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-10456: cannot open standby database; media recovery session may be in progress
SQL> alter database recover managed standby database cancel;
Database altered.
SQL> alter database open;
Database altered.
SQL> alter database recover managed standby database using current logfile disconnect ;
Database altered.
Hence, we finally solved the issue.

Active Standby Database In Oracle 11g
A standby database is an exact binary copy of an operational database on a remote server, ready to be used for backup, replication, disaster recovery, analysis, a shadow environment, and reporting, to name a few applications. The most exciting feature of the active standby database is that we can open the standby database in read-only mode while, at the same time, the MRP process remains active, so we can redirect users to the standby to perform SELECT operations for reporting purposes. This way we can offload much of the reporting load from the production database, and there are plenty of other options with Active Data Guard.
Here we will set up the standby database with the active duplicate database feature available in 11g, where we can create a standby database without having any RMAN backup. In this setup, there is no need to copy the datafiles manually; the datafiles are copied over the network. As I had set up the standby database on the same machine in my earlier post, this time I will set up the standby database on two different machines. Let's have the details of the setup:
Primary Database : Machine ==> tech-199 , Database ==> NOIDA
Standby Database : Machine ==> tech-284 , Database ==> RED (standby)
Platform used is Windows XP.
While configuring the standby database, let's have a look at the directory structure to avoid any confusion. On the primary database, all the datafiles and redo log files are in directory 'C:\app\neerajs\oradata\noida\' and the archive destination is directory 'D:\archive' on machine tech-199, whereas in the case of the standby database all the datafiles, redo logs, and control files are in directory 'D:\app\standby\oradata\' on machine tech-284. On the standby database, I have set the archive destination to 'D:\archive\'. Let us configure the standby database step by step.
Step 1 : Enable force logging on the Primary database :
SQL> alter database force logging ;
Database altered.
The following steps are performed on the standby database machine (i.e. tech-284).
Step 2 : Create the Oracle instance :
C:\> oradim -new -sid red -intpwd xxxx -startmode m
Instance created.
Note : The password should be the same as that of user "sys" of the production database.
Step 3 : Update listener.ora on the standby machine:
(SID_DESC =
  (GLOBAL_DBNAME = noida)
  (ORACLE_HOME = D:\app\Bishwanath\product\11.2.0\dbhome_1)
  (SID_NAME = red)
)
Stop and start the listener on the standby.
Step 4 : Update the tnsnames.ora file on the standby database :
red =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = tech-284)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = red)
    )
  )
Step 5 : Create a pfile for the standby database. Add just one parameter in the pfile, i.e. db_name=noida, and save the pfile as initred.ora in the $ORACLE_HOME\database\ folder (a sketch follows).
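A minimal sketch of that temporary parameter file (only DB_NAME is required, since RMAN will create the real spfile during the duplicate; the path reflects this setup):
# D:\app\Bishwanath\product\11.2.0\dbhome_1\database\initred.ora
db_name=noida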
Step 6 : Start up the standby instance in nomount state.
C:\>sqlplus sys/xxxx@red as sysdba
SQL> startup nomount
ORACLE instance started.
Total System Global Area  263639040 bytes
Fixed Size                  1373964 bytes
Variable Size             213911796 bytes
Database Buffers           41943040 bytes
Redo Buffers                6410240 bytes
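Before invoking RMAN in the next step, it is worth confirming that both TNS aliases resolve and the listeners are reachable, for example:
C:\> tnsping noida
C:\> tnsping red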
Step 7 : On the production database, connect with RMAN and establish a connection with the auxiliary, i.e. the standby instance.
SQL> host rman target sys/xxxx@noida auxiliary sys/xxxx@red
Recovery Manager: Release 11.2.0.1.0 - Production on Sat Oct 1 16:56:17 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
connected to target database: NOIDA (DBID=1515011070)
connected to auxiliary database: NOIDA (not mounted)
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE
  NOFILENAMECHECK DORECOVER
  SPFILE
    SET DB_UNIQUE_NAME='red'
    SET LOG_ARCHIVE_DEST_2='service=noida LGWR SYNC REGISTER VALID_FOR=(online_logfile,primary_role)'
    SET STANDBY_FILE_MANAGEMENT='AUTO'
    SET FAL_SERVER='noida'
    SET FAL_CLIENT='RED'
    SET CONTROL_FILES='D:\app\standby\oradata\CONTROL01.CTL'
    SET DB_FILE_NAME_CONVERT 'C:\app\neerajs\oradata\noida\','D:\app\standby\oradata\'
    SET LOG_FILE_NAME_CONVERT 'C:\app\neerajs\oradata\noida\','D:\app\standby\oradata\'
    SET LOG_ARCHIVE_DEST_1='location=D:\archive\'
    SET DIAGNOSTIC_DEST='D:\app\standby\diag\'
    SET DB_RECOVERY_FILE_DEST='D:\app\standby\FRA\' ;

Starting Duplicate Db at 01-OCT-11
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=5 device type=DISK

contents of Memory Script:
{
   backup as copy reuse
   targetfile 'C:\app\neerajs\product\11.2.0\dbhome_1\DATABASE\PWDnoida.ORA'
   auxiliary format 'D:\app\Bishwanath\product\11.2.0\dbhome_1\DATABASE\PWDred.ORA'
   targetfile 'C:\APP\NEERAJS\PRODUCT\11.2.0\DBHOME_1\DATABASE\SPFILENOIDA.ORA'
   auxiliary format 'D:\APP\BISHWANATH\PRODUCT\11.2.0\DBHOME_1\DATABASE\SPFILERED.ORA' ;
   sql clone "alter system set spfile= ''D:\APP\BISHWANATH\PRODUCT\11.2.0\DBHOME_1\DATABASE\SPFILERED.ORA''";
}
executing Memory Script

Starting backup at 01-OCT-11
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=22 device type=DISK
Finished backup at 01-OCT-11

sql statement: alter system set spfile= ''D:\APP\BISHWANATH\PRODUCT\11.2.0\DBHOME_1\DATABASE\SPFILERED.ORA''

contents of Memory Script:
{
   sql clone "alter system set db_unique_name =
   ''red'' comment= '''' scope=spfile";
   sql clone "alter system set LOG_ARCHIVE_DEST_2 = ''service=noida LGWR SYNC REGISTER VALID_FOR=(online_logfile,primary_role)'' comment= '''' scope=spfile";
   sql clone "alter system set STANDBY_FILE_MANAGEMENT = ''AUTO'' comment= '''' scope=spfile";
   sql clone "alter system set FAL_SERVER = ''noida'' comment= '''' scope=spfile";
   sql clone "alter system set FAL_CLIENT = ''RED'' comment= '''' scope=spfile";
   sql clone "alter system set CONTROL_FILES = ''D:\app\standby\oradata\CONTROL01.CTL'' comment= '''' scope=spfile";
   sql clone "alter system set db_file_name_convert = ''C:\app\neerajs\oradata\noida\'', ''D:\app\standby\oradata\'' comment= '''' scope=spfile";
   sql clone "alter system set LOG_FILE_NAME_CONVERT = ''C:\app\neerajs\oradata\noida\'', ''D:\app\standby\oradata\'' comment= '''' scope=spfile";
   sql clone "alter system set log_archive_dest_1 = ''location=D:\archive\'' comment= '''' scope=spfile";
   sql clone "alter system set diagnostic_dest = ''D:\app\standby\diag\'' comment= '''' scope=spfile";
   sql clone "alter system set db_recovery_file_dest = ''D:\app\standby\FRA\'' comment= '''' scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script

sql statement: alter system set db_unique_name = ''red'' comment= '''' scope=spfile
sql statement: alter system set LOG_ARCHIVE_DEST_2 = ''service=noida LGWR SYNC REGISTER VALID_FOR=(online_logfile,primary_role)'' comment= '''' scope=spfile
sql statement: alter system set STANDBY_FILE_MANAGEMENT = ''AUTO'' comment= '''' scope=spfile
sql statement: alter system set FAL_SERVER = ''noida'' comment= '''' scope=spfile
sql statement: alter system set FAL_CLIENT = ''RED'' comment= '''' scope=spfile
sql statement: alter system set CONTROL_FILES = ''D:\app\standby\oradata\CONTROL01.CTL'' comment= '''' scope=spfile
sql statement: alter system set db_file_name_convert = ''C:\app\neerajs\oradata\noida\'', ''D:\app\standby\oradata\'' comment= '''' scope=spfile
sql statement: alter system set LOG_FILE_NAME_CONVERT = ''C:\app\neerajs\oradata\noida\'', ''D:\app\standby\oradata\'' comment= '''' scope=spfile
sql statement: alter system set log_archive_dest_1 = ''location=D:\archive\'' comment= '''' scope=spfile
sql statement: alter system set diagnostic_dest = ''D:\app\standby\diag\'' comment= '''' scope=spfile
sql statement: alter system set db_recovery_file_dest = ''D:\app\standby\FRA\'' comment= '''' scope=spfile

Oracle instance shut down
connected to auxiliary database (not started)
Oracle instance started
Total System Global Area  263639040 bytes
Fixed Size                  1373964 bytes
Variable Size             192940276 bytes
Database Buffers           62914560 bytes
Redo Buffers                6410240 bytes

contents of Memory Script:
{
   backup as copy current controlfile for standby auxiliary format 'D:\APP\STANDBY\ORADATA\CONTROL01.CTL';
}
executing Memory Script

Starting backup at 01-OCT-11
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
copying standby control file
output file name=C:\APP\NEERAJS\PRODUCT\11.2.0\DBHOME_1\DATABASE\SNCFNOIDA.ORA tag=TAG20111001T165811 RECID=11 STAMP=763405095
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
Finished backup at 01-OCT-11

contents of Memory Script:
{
   sql clone 'alter database mount standby database';
}
executing Memory Script

sql statement: alter database mount standby database

contents of Memory Script:
{
   set newname for tempfile 1 to "D:\APP\STANDBY\ORADATA\TEMP01.DBF";
   switch clone tempfile all;
   set newname for datafile 1 to "D:\APP\STANDBY\ORADATA\SYSTEM01.DBF";
   set newname for datafile 2 to "D:\APP\STANDBY\ORADATA\SYSAUX01.DBF";
   set newname for datafile 3 to "D:\APP\STANDBY\ORADATA\UNDOTBS01.DBF";
   set newname for datafile 4 to
"D:\APP\STANDBY\ORADATA\USERS01.DBF"; set newname for datafile 5 to "D:\APP\STANDBY\ORADATA\EXAMPLE01.DBF"; set newname for datafile 6 to "D:\APP\STANDBY\ORADATA\TEST01.DBF"; backup as copy reuse datafile 1 auxiliary format "D:\APP\STANDBY\ORADATA\SYSTEM01.DBF" datafile 2 auxiliary format "D:\APP\STANDBY\ORADATA\SYSAUX01.DBF" datafile 3 auxiliary format "D:\APP\STANDBY\ORADATA\UNDOTBS01.DBF" datafile 4 auxiliary format "D:\APP\STANDBY\ORADATA\USERS01.DBF" datafile 5 auxiliary format "D:\APP\STANDBY\ORADATA\EXAMPLE01.DBF" datafile 6 auxiliary format "D:\APP\STANDBY\ORADATA\TEST01.DBF" ; sql 'alter system archive log current'; } executing Memory Script executing command: SET NEWNAME renamed tempfile 1 to D:\APP\STANDBY\ORADATA\TEMP01.DBF in control file executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME Starting backup at 01-OCT-11 using channel ORA_DISK_1 channel ORA_DISK_1: starting datafile copy input datafile file number=00001 name=C:\APP\NEERAJS\ORADATA\NOIDA\SYSTEM01.DBF output file name=D:\APP\STANDBY\ORADATA\SYSTEM01.DBF tag=TAG20111001T165829 channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:15 channel ORA_DISK_1: starting datafile copy input datafile file number=00002 name=C:\APP\NEERAJS\ORADATA\NOIDA\SYSAUX01.DBF output file name=D:\APP\STANDBY\ORADATA\SYSAUX01.DBF tag=TAG20111001T165829 channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:05 channel ORA_DISK_1: starting datafile copy input datafile file number=00005 name=C:\APP\NEERAJS\ORADATA\NOIDA\EXAMPLE01.DBF output file name=D:\APP\STANDBY\ORADATA\EXAMPLE01.DBF tag=TAG20111001T165829 channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15 channel ORA_DISK_1: starting datafile copy input datafile file number=00003 name=C:\APP\NEERAJS\ORADATA\NOIDA\UNDOTBS01.DBF output file name=D:\APP\STANDBY\ORADATA\UNDOTBS01.DBF tag=TAG20111001T165829 channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07 channel ORA_DISK_1: starting datafile copy input datafile file number=00006 name=C:\APP\NEERAJS\ORADATA\NOIDA\TEST01.DBF output file name=D:\APP\STANDBY\ORADATA\TEST01.DBF tag=TAG20111001T165829
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=C:\APP\NEERAJS\ORADATA\NOIDA\USERS01.DBF
output file name=D:\APP\STANDBY\ORADATA\USERS01.DBF tag=TAG20111001T165829
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
Finished backup at 01-OCT-11

sql statement: alter system archive log current

contents of Memory Script:
{
   backup as copy reuse archivelog like "D:\ARCHIVE\ARC0000000053_0761068614.0001" auxiliary format "D:\ARCHIVE\ARC0000000053_0761068614.0001" ;
   catalog clone archivelog "D:\ARCHIVE\ARC0000000053_0761068614.0001";
   switch clone datafile all;
}
executing Memory Script

Starting backup at 01-OCT-11
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log copy
input archived log thread=1 sequence=53 RECID=38 STAMP=763405284
output file name=D:\ARCHIVE\ARC0000000053_0761068614.0001 RECID=0 STAMP=0
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:07
Finished backup at 01-OCT-11

cataloged archived log
archived log file name=D:\ARCHIVE\ARC0000000053_0761068614.0001 RECID=1 STAMP=763405200
datafile 1 switched to datafile copy
input datafile copy RECID=11 STAMP=763405201 file name=D:\APP\STANDBY\ORADATA\SYSTEM01.DBF
datafile 2 switched to datafile copy
input datafile copy RECID=12 STAMP=763405201 file name=D:\APP\STANDBY\ORADATA\SYSAUX01.DBF
datafile 3 switched to datafile copy
input datafile copy RECID=13 STAMP=763405201 file name=D:\APP\STANDBY\ORADATA\UNDOTBS01.DBF
datafile 4 switched to datafile copy
input datafile copy RECID=14 STAMP=763405201 file name=D:\APP\STANDBY\ORADATA\USERS01.DBF
datafile 5 switched to datafile copy
input datafile copy RECID=15 STAMP=763405201 file name=D:\APP\STANDBY\ORADATA\EXAMPLE01.DBF
datafile 6 switched to datafile copy
input datafile copy RECID=16 STAMP=763405201 file name=D:\APP\STANDBY\ORADATA\TEST01.DBF

contents of Memory Script:
{
   set until scn 2184111;
   recover standby clone database delete archivelog ;
}
executing Memory Script
executing command: SET until clause

Starting recover at 01-OCT-11
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=130 device type=DISK
starting media recovery
archived log for thread 1 with sequence 53 is already on disk as file D:\ARCHIVE\ARC0000000053_0761068614.0001
archived log file name=D:\ARCHIVE\ARC0000000053_0761068614.0001 thread=1 sequence=53
media recovery complete, elapsed time: 00:00:02
Finished recover at 01-OCT-11
Finished Duplicate Db at 01-OCT-11
RMAN> **end-of-file**

Step 8 : On the primary database, set the Data Guard-related parameters:
SQL> alter system set standby_file_management=AUTO scope=both;
System altered.
SQL> alter system set fal_server=red scope=both;
System altered.
SQL> alter system set fal_client=noida scope=both;
System altered.
SQL> alter system set LOG_ARCHIVE_DEST_2='SERVICE=red LGWR SYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=red' scope=both;
System altered.
SQL> alter system set LOG_ARCHIVE_DEST_1='LOCATION=D:\archive\ VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=noida' ;
System altered.

Step 9 : On the standby database, restart the standby and enable managed recovery (active standby mode).
C:\>sqlplus sys/xxxx@red as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Sat Oct 1 17:14:12 2011
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select open_mode from v$database;
OPEN_MODE
-----------------
MOUNTED
SQL> alter system set standby_file_management=AUTO scope=both;
System altered.
SQL> shut immediate
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area  263639040 bytes
Fixed Size                  1373964 bytes
Variable Size             205523188 bytes
Database Buffers           50331648 bytes
Redo Buffers                6410240 bytes
Database mounted.
Database opened.
SQL> recover managed standby database using current logfile disconnect;
ORA-38500: USING CURRENT LOGFILE option not available without standby redo logs

Standby redo logs are required to enable real-time apply of redo data on the standby. These standby redo logs are populated with redo information as fast as the primary redo logs are written, rather than waiting for the redo log to be archived and shipped to the standby. This results in faster switchover and failover times, because the standby redo log files have already been applied to the standby database by the time the failover or switchover begins. Oracle recommends the formula below to calculate the number of standby redo log files:
(maximum number of logfiles for each thread + 1) * maximum number of threads
Since I have three redo log files, I will create four standby redo log files. Oracle recommends creating standby redo logs on both the primary and the standby database, so that we can safely switch over in the future. Here, I am creating standby redo logs on the standby database only.
SQL> alter database add standby logfile group 4 'D:\APP\STANDBY\ORADATA\REDO04.LOG' size 50m;
Database altered.
SQL> alter database add standby logfile group 5 'D:\APP\STANDBY\ORADATA\REDO05.LOG' size 50m;
Database altered.
SQL> alter database add standby logfile group 6 'D:\APP\STANDBY\ORADATA\REDO06.LOG' size 50m;
Database altered.
SQL> recover managed standby database using current logfile disconnect ;
Media recovery complete.
( On standby database )
SQL> select open_mode from v$database ;
OPEN_MODE
--------------------------
READ ONLY WITH APPLY
(The above output "READ ONLY WITH APPLY" shows that active standby mode is enabled.)

Following is the Command Used for Active Duplication
FROM ACTIVE DATABASE : (This is supplied if we want to do active database duplication.) Specifies that the files for the standby database should be provided directly from the source database and not from a backup of the source database.
NOFILENAMECHECK : Prevents RMAN from checking whether datafiles of the source database share the same names as the standby database files that are in use. The NOFILENAMECHECK option is required when the standby and primary datafiles and online redo logs have identical filenames. Thus, if we want the duplicate database filenames to be the same as the source database filenames, and if the databases are on different hosts, then we must specify NOFILENAMECHECK.
SPFILE : Copies the server parameter file from the source database to the operating system-specific default location for this file on the standby database. RMAN uses the server parameter file to start the auxiliary instance for standby database creation. Any remaining options of the DUPLICATE command are processed after the database instance is started with the server parameter file. If we execute DUPLICATE with the SPFILE clause, then the auxiliary instance must already be started with a text-based initialization parameter file. In this case, the only required parameter in the temporary initialization parameter file is DB_NAME, which can be set to any arbitrary value. RMAN copies the binary server parameter file, modifies the parameters based on the settings in the SPFILE clause, and then restarts the standby instance with the server parameter file. When we specify SPFILE, RMAN never uses the temporary text-based initialization parameter file to start the instance.
DORECOVER : Specifies that RMAN should recover the standby database after creating it. If we specify an until clause, then RMAN recovers to the specified SCN or time and leaves the database mounted. RMAN leaves the standby database mounted after media recovery is complete, but does not place the standby database in manual or managed recovery mode. After RMAN creates the standby database, we must resolve any gap sequence before placing it in manual or managed recovery mode, or opening it in read-only mode.

Why and How to Drop Undo Tablespace ?
A situation may occur where we have to drop the undo tablespace. The undo tablespace may be dropped in various scenarios. In my case, once I imported a few tables with the table_exists_action=append parameter into a database, and these tables created lots of undo, i.e. nearly 102 GB. So when we backed up the database, the backup size increased, i.e. the backup consumed lots of space. Another possible scenario is that, while cloning, the undo tablespace gets missed; then we can recover by just dropping and re-creating the undo tablespace. Below is a demo for dropping and re-creating the undo tablespace.
Step 1 : Shut down the database.
SQL> shut immediate
Step 2 : Create a pfile from the spfile and edit the pfile to set undo_management=manual (if it is set to auto then set it to manual, and if this parameter is not in the pfile then add it, i.e. undo_management=manual; otherwise it will default to "auto" management). A sample of this edit follows.
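A minimal sketch of the relevant pfile entry (the file name is illustrative):
# init<SID>.ora -- temporarily switch to manual undo management
undo_management=MANUAL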
Step 3 : Start the database with the edited pfile (replace the placeholder with the actual path).
SQL> startup pfile='<path_to_pfile>'
Step 4 : Drop the undo tablespace (replace the placeholder with the actual tablespace name).
SQL> drop tablespace <undo_tablespace_name> including contents and datafiles;
Step 5 : Create a new undo tablespace.
SQL> create undo tablespace undotbs1 datafile size 100M;
Step 6 : Shut down the database and edit the pfile to reset undo_management=AUTO.
Step 7 : Create the spfile from the pfile.
SQL> create spfile from pfile='<path_to_pfile>';
Step 8 : Start up the database.
SQL> startup

Differences Between Dedicated Servers, Shared Servers, and Database Resident Connection Pooling
Oracle creates server processes to handle the requests of user processes connected to an instance. A server process can be either a dedicated server process, where one server process services only one user process, or, if our database server is configured for shared server, a shared server process, where one server process can service multiple user processes. Let's have a look.
Dedicated Servers :
1.) When a client request is received, a new server process and a session are created for the client.
2.) Releasing database resources involves terminating the session and server process.
3.) Memory requirement is proportional to the number of server processes and sessions. There is one server and one session for each client.
4.) Session memory is allocated from the PGA.
Shared Servers :
1.) When the first request is received from a client, the dispatcher process places this request on a common queue. The request is picked up by an available shared server process. The dispatcher process then manages the communication between the client and the shared server process.
2.) Releasing database resources involves terminating the session.
3.) Memory requirement is proportional to the sum of the shared servers and sessions. There is one session for each client.
4.) Session memory is allocated from the SGA.
Database Resident Connection Pooling :
1.) When the first request is received from a client, the connection broker picks an available pooled server and hands off the client connection to the pooled server. If no pooled servers are available, the connection broker creates one. If the pool has reached its maximum size, the client request is placed on the wait queue until a pooled server is available.
2.) Releasing database resources involves releasing the pooled server to the pool.
3.) Memory requirement is proportional to the number of pooled servers and their sessions. There is one session for each pooled server.
4.) Session memory is allocated from the PGA.
Example of Memory Usage for Dedicated Server, Shared Server, and Database Resident Connection Pooling :
Consider an application in which the memory required for each session is 400 KB and the memory required for each server process is 4 MB. The pool size is 100 and the number of shared servers used is 100. If there are 5000 client connections, the memory used by each configuration is as follows:
Dedicated Server : Memory used = 5000 × (400 KB + 4 MB) = 22 GB
Shared Server : Memory used = 5000 × 400 KB + 100 × 4 MB = 2.5 GB (out of the 2.5 GB, 2 GB is allocated from the SGA)
Database Resident Connection Pooling : Memory used = 100 × (400 KB + 4 MB) + (5000 × 35 KB) = 615 MB, where 35 KB per connection is used for other overhead.

All About Temporary Tablespace Part IV
Monitoring Temporary Space Usage :
We can monitor temporary space usage in the database in real time. At any given time, Oracle can tell us about all of the database's temporary tablespaces, sort space usage on a session basis, and sort space usage on a statement basis. All of this information is available from v$ views, and the queries shown in this section can be run by any database user with DBA privileges.
Temporary Segments :
The following query displays information about all sort segments in the database. (As a reminder, we use the term "sort segment" to refer to a temporary segment in a temporary tablespace.) Typically, Oracle will create a new sort segment the very first time a sort to disk occurs in a new temporary tablespace. The sort segment will grow as needed, but it will not shrink and will not go away after all sorts to disk are completed. A database with one temporary tablespace will typically have just one sort segment.

SQL> SELECT A.tablespace_name tablespace, D.mb_total,
            SUM(A.used_blocks * D.block_size) / 1024 / 1024 mb_used,
            D.mb_total - SUM(A.used_blocks * D.block_size) / 1024 / 1024 mb_free
     FROM v$sort_segment A,
          (SELECT B.name, C.block_size, SUM(C.bytes) / 1024 / 1024 mb_total
           FROM v$tablespace B, v$tempfile C
           WHERE B.ts# = C.ts#
           GROUP BY B.name, C.block_size) D
     WHERE A.tablespace_name = D.name
     GROUP BY A.tablespace_name, D.mb_total;

The query displays, for each sort segment in the database, the tablespace the segment resides in, the size of the tablespace, the amount of space within the sort segment that is currently in use, and the amount of space available. Sample output from this query is as follows:

TABLESPACE   MB_TOTAL   MB_USED   MB_FREE
----------   --------   -------   -------
TEMP            10000         9      9991
This example shows that there is one sort segment in a 10,000 MB tablespace called TEMP. Right now, 9 MB of the sort segment is in use, leaving a total of 9,991 MB available for additional sort operations. (Note that the available space may consist of unused blocks within the sort segment, unallocated extents in the TEMP tablespace, or a combination of the two.)
Sort Space Usage by Session :
The following query displays information about each database session that is using space in a sort segment. Although one session may have many sort operations active at once, this query summarizes the information by session.

SQL> SELECT S.sid || ',' || S.serial# sid_serial, S.username, S.osuser, P.spid,
            S.module, S.program,
            SUM(T.blocks) * TBS.block_size / 1024 / 1024 mb_used,
            T.tablespace, COUNT(*) sort_ops
     FROM v$sort_usage T, v$session S, dba_tablespaces TBS, v$process P
     WHERE T.session_addr = S.saddr
     AND S.paddr = P.addr
     AND T.tablespace = TBS.tablespace_name
     GROUP BY S.sid, S.serial#, S.username, S.osuser, P.spid, S.module,
              S.program, TBS.block_size, T.tablespace
     ORDER BY sid_serial;

The query displays information about each database session that is using space in a sort segment, along with the amount of sort space and the temporary tablespace being used, and the number of sort operations in that session that are using sort space. Sample output from this query is as follows:

SID_SERIAL   USERNAME   OSUSER   SPID   MODULE   PROGRAM     MB_USED   TABLESPACE   SORT_OPS
----------   --------   ------   ----   ------   ---------   -------   ----------   --------
33,16998     RPK_APP    rpk      3061   inv      httpd@db1         9   TEMP                2

This example shows that there is one database session using sort segment space. Session 33 with serial number 16998 is connected to the database as the RPK_APP user. The connection was initiated by the httpd@db1 process running under the rpk operating system user, and the Oracle server process has operating system process ID 3061. The application has identified itself to the database as module "inv". The session has two active sort operations that are using a total of 9 MB of sort segment space in the TEMP tablespace.
Sort Space Usage by Statement :
The following query displays information about each statement that is using space in a sort segment.

SQL> SELECT S.sid || ',' || S.serial# sid_serial, S.username,
            T.blocks * TBS.block_size / 1024 / 1024 mb_used, T.tablespace,
            T.sqladdr address, Q.hash_value, Q.sql_text
     FROM v$sort_usage T, v$session S, v$sqlarea Q, dba_tablespaces TBS
     WHERE T.session_addr = S.saddr
     AND T.sqladdr = Q.address (+)
     AND T.tablespace = TBS.tablespace_name
     ORDER BY S.sid ;

The query displays information about each statement using space in a sort segment, including information about the database session that issued the statement and the temporary tablespace and amount of sort space being used.
Conclusion :
When an operation such as a sort, hash, or global temporary table instantiation is too large to fit in memory, Oracle allocates space in a temporary tablespace for intermediate data to be written to disk. Temporary tablespaces are a shared resource in the database, and we can't set quotas to limit temporary space used by one session or database user. If a sort operation runs out of space, the statement initiating the sort will fail. It may take only one query missing part of its WHERE clause to fill an entire temporary tablespace and cause many users to encounter failure because the temporary tablespace is full. It is easy to detect when failures have occurred in the database due to a lack of temporary space. With the setting of a simple diagnostic event, it is also easy to see the exact text of each statement that fails for this reason. There are also v$ views that DBAs can query at any time to monitor temporary tablespace usage in real time. These views make it possible to identify usage at the database, session, and even statement level. Oracle DBAs can use the techniques outlined here to diagnose temporary tablespace problems and monitor sorting activity in a proactive way.
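As a quick complement to the v$ queries above, the tempfiles backing each temporary tablespace can be listed directly; a simple sketch:

SQL> select tablespace_name, file_name, bytes/1024/1024 "MB"
     from dba_temp_files
     order by tablespace_name;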
All About Temporary Tablespace Part III
How does a DBA determine and handle the database when the temporary tablespace is running out of space? Here we have two techniques to find out how space in the temporary tablespace is being used :
1.) Direct Oracle to log every statement that fails for lack of temporary space.
2.) A set of queries to run at any time to capture, in real time, how temporary space is currently being used on a per-session or per-statement basis.
Identifying SQL Statements that Fail Due to Lack of Temporary Space :
It is helpful that Oracle logs ORA-1652 errors to the instance alert log, as this informs a DBA that there is a space issue. The error message includes the name of the tablespace in which the lack of space occurred, and a DBA can use this information to determine whether the problem is related to sort segments in a temporary tablespace or whether there is a different kind of space allocation problem. Unfortunately, Oracle does not identify the text of the SQL statement that failed. However, Oracle does have a diagnostic event mechanism that can be used to give us more information whenever an ORA-1652 error occurs, by causing Oracle server processes to write to a trace file. This trace file will contain a wealth of information, including the exact text of the SQL statement that was being processed at the time the ORA-1652 error occurred. We can set a diagnostic event for the ORA-1652 error in our individual database session with the following statement:
SQL> alter session set events '1652 trace name errorstack';
We can also set diagnostic events in another session (without affecting all sessions instance-wide) by using the "oradebug event" command in SQL*Plus. We can deactivate the ORA-1652 diagnostic event, or remove all diagnostic event settings from the server parameter file, with statements such as the following:
SQL> alter session set events '1652 trace name context off';
If a SQL statement fails due to lack of space in the temporary tablespace and the ORA-1652 diagnostic event has been activated, then the Oracle server process that encountered the error will write a trace file to the directory specified by the user_dump_dest instance parameter. The top portion of a sample trace file is as follows:

*** ACTION NAME:() 2011-09-17 17:21:14.871
*** MODULE NAME:(SQL*Plus) 2011-09-17 17:21:14.871
*** SERVICE NAME:(SYS$USERS) 2011-09-17 17:21:14.871
*** SESSION ID:(130.13512) 2011-09-17 17:21:14.871
*** 2011-09-17 17:21:14.871
ksedmp: internal or fatal error
ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
Current SQL statement for this session:
SELECT "A1"."INVOICE_ID", "A1"."INVOICE_NUMBER", "A1"."INVOICE_DATE",
"A1"."CUSTOMER_ID", "A1"."CUSTOMER_NAME", "A1"."INVOICE_AMOUNT",
"A1"."PAYMENT_TERMS", "A1"."OPEN_STATUS", "A1"."GL_DATE", "A1"."ITEM_COUNT",
"A1"."PAYMENTS_TOTAL" FROM "INVOICE_SUMMARY_VIEW" "A1"
ORDER BY "A1"."CUSTOMER_NAME", "A1"."INVOICE_NUMBER"

From the trace file we can clearly see the full text of the SQL statement that failed. It is important to note that the statements captured in trace files with this method may not themselves be the cause of space issues in the temporary tablespace. For example, one query could run successfully and consume 99.9% of the temporary tablespace due to a Cartesian product, while a second query fails when trying to allocate just a small amount of sort space. The second query is the one that will get captured in a trace file, while the first query is more likely to be the root cause of the problem.

All About Temporary Tablespace Part II
Oracle Sorting Basics
As we know, there are different cases where Oracle sorts data. An Oracle session sorts data in memory. If the amount of data being sorted is small enough, the entire sort will be completed in memory with no intermediate data written to disk. When Oracle needs to store data in a global temporary table or build a hash table for a hash join, Oracle also starts the operation in memory and completes the task without writing to disk if the amount of data involved is small enough. If an operation uses up a threshold amount of memory, then Oracle breaks the operation into smaller pieces that can each be performed in memory. Partial results are written to disk in a temporary tablespace. The threshold for how much memory may be used by any one
session is controlled by instance parameters. If the workarea_size_policy parameter is set to AUTO, then the pga_aggregate_target parameter indicates how much memory can be used collectively by all sessions for activities such as sorting and hashing, and Oracle will automatically assess and decide how much of this memory any individual session should be allowed to use. If the workarea_size_policy parameter is set to MANUAL, then instance parameters such as sort_area_size, hash_area_size, and bitmap_merge_area_size dictate how much memory each session can use for these operations.
Each database user has a temporary tablespace designated in their user definition (check through the dba_users view). Whenever a sort operation grows too large to be performed entirely in memory, Oracle will allocate space in the temporary tablespace designated for the user performing the operation. Temporary segments in temporary tablespaces, which we will call "sort segments", are owned by the SYS user, not the database user performing the sort operation. There is typically just one sort segment per temporary tablespace, because multiple sessions can share space in one sort segment. Users do not need quota on the temporary tablespace in order to perform sorts on disk.
Temporary tablespaces can only hold sort segments, and Oracle's internal behavior is optimized for this fact. For example, writes to a sort segment do not generate redo or undo. Also, allocations of sort segment blocks to a specific session do not need to be recorded in the data dictionary or a file allocation bitmap. Why? Because data in a temporary tablespace does not need to persist beyond the life of the database session that created it.
One SQL statement can cause multiple sort operations, and one database session can have multiple SQL statements active at the same time, each potentially with multiple sorts to disk. When the results of a sort to disk are no longer needed, its blocks in the sort segment are marked as no longer in use and can be allocated to another sort operation. A sort operation will fail if a sort to disk needs more disk space and there are 1.) no unused blocks in the sort segment, and 2.) no space available in the temporary tablespace for the sort segment to allocate an additional extent. This will most likely cause the statement that prompted the sort to fail with the Oracle error "ORA-1652: unable to extend temp segment". This error message also gets logged in the alert log for the instance. It is important to note that not all ORA-1652 errors indicate temporary tablespace issues. For example, moving a table to a different tablespace with the ALTER TABLE ... MOVE statement will cause an ORA-1652 error if the target tablespace does not have enough space for the table.
Temporary tablespaces will appear full after a while in a normally running database. Extents are not de-allocated after being used; rather, they are managed internally and reused. This is normal and to be expected, and it is not an indication that we are out of temporary space. If we are not encountering any issue/error related to TEMP, then we don't need to worry about this. There is no quick or scientific way to calculate the required TEMP tablespace size; the only way to estimate it is regressive testing. The information inside the temporary segment gets released, not the segment itself.

All About Temporary Tablespace Part I
There is a lot of confusion about temporary tablespaces. Here I have tried to cover the basics of the temporary tablespace and one of the famous related errors, i.e. ORA-1652: "unable to extend temp segment". This problem can be addressed as follows :
1.) Increase the size of the temporary tablespace, either by resizing the tempfile or by adding a tempfile.
2.) Find the SQL statement that is consuming the large amount of temp space and kill the corresponding session (not a proper solution).
3.) Check the SQL and tune it.
Here I have explained what we can do when our database runs out of temporary space.
Introduction :
Temporary tablespaces are used to manage space for database sort operations and for storing global temporary tables. For example, if we join two large tables and Oracle cannot do the sort in memory (see the sort_area_size initialisation parameter), space will be allocated in a temporary tablespace for doing the sort operation. Other SQL operations that might require disk sorting are: CREATE INDEX, ANALYZE, SELECT DISTINCT, ORDER BY, GROUP BY, UNION, INTERSECT, MINUS, sort-merge joins, etc.
A temporary tablespace contains transient data that persists only for the duration of the session. Temporary tablespaces can improve the concurrency of multiple sort operations, reduce their overhead, and avoid Oracle Database space management operations. Oracle can also allocate temporary segments for temporary tables and indexes created on temporary tables. Temporary tables hold data that exists only for the duration of a transaction or session. Oracle drops segments for a transaction-specific temporary table at the end of the transaction and drops segments for a session-specific temporary table at the end of the session. If other transactions or sessions share the use of that temporary table, the segments containing their data remain in the table. The preceding parts covered the following points:
1.) How Oracle manages sorting operations
2.) How a DBA determines and handles the database when the temporary tablespace is running out of space

ORA-01555: Snapshot Too Old
UNDO is the backbone of the READ CONSISTENCY mechanism provided by Oracle. The multi-user data concurrency and read consistency mechanisms make Oracle stand tall in the Relational Database Management Systems (RDBMS) world. Best of all, automatic undo management allows the DBA to specify how long undo information should be retained after commit, preventing "snapshot too old" errors on long-running queries. This is done by setting the UNDO_RETENTION parameter. The default is 900 seconds (15 minutes), and we can set this parameter to guarantee that Oracle keeps undo logs for extended periods of time. A flashback query can go back up to the point in time specified as a value of the UNDO_RETENTION parameter.
Why does the ORA-01555 error occur?
1.) Oracle provides read consistency by reading the "before image" of changed rows from the online undo segments. If we have lots of updates and long-running SQL, rollback records needed by a reader for a consistent read can be overwritten by other writers.
2.) It may also be due to a small undo tablespace and a short undo_retention period. To solve this issue we need to increase the undo tablespace and the undo retention period. Now the question is what the optimal values of undo retention and undo tablespace size should be. For this we use the advisor. By using OEM, it is quite easy to estimate the required size and time duration of undo.
Calculate Optimal Undo_Retention :
The following formula helps us optimize the UNDO_RETENTION parameter :
Optimal Undo Retention = Actual Undo Size / (DB_BLOCK_SIZE × UNDO_BLOCK_PER_SEC)
To calculate the Actual Undo Size :
SQL> SELECT SUM(a.bytes)/1024/1024 "UNDO_SIZE_MB"
     FROM v$datafile a, v$tablespace b, dba_tablespaces c
     WHERE c.contents = 'UNDO' AND c.status = 'ONLINE'
     AND b.name = c.tablespace_name AND a.ts# = b.ts#;
Undo Blocks per Second :
SQL> SELECT MAX(undoblks/((end_time-begin_time)*3600*24)) "UNDO_BLOCK_PER_SEC"
     FROM v$undostat ;
DB Block Size :
SQL> SELECT TO_NUMBER(value) "DB_BLOCK_SIZE [Byte]"
     FROM v$parameter
     WHERE name = 'db_block_size';
We can do it all in one query:
SQL> SELECT d.undo_size/(1024*1024) "ACT_UNDO_SIZE [MB]",
            SUBSTR(e.value,1,25) "UNDO_RTN [Sec]",
            ROUND((d.undo_size / (TO_NUMBER(f.value) * g.undo_block_per_sec))) "OPT_UNDO_RET [Sec]"
     FROM (SELECT SUM(a.bytes) undo_size
           FROM v$datafile a, v$tablespace b, dba_tablespaces c
           WHERE c.contents = 'UNDO' AND c.status = 'ONLINE'
           AND b.name = c.tablespace_name AND a.ts# = b.ts#) d,
          v$parameter e, v$parameter f,
          (SELECT MAX(undoblks/((end_time-begin_time)*3600*24)) undo_block_per_sec
           FROM v$undostat) g
     WHERE e.name = 'undo_retention' AND f.name = 'db_block_size'
     /

ACT_UNDO_SIZE [MB]   UNDO_RTN [Sec]   OPT_UNDO_RET [Sec]
------------------   --------------   ------------------
                50              900                24000
Calculate Needed UNDO Size :
If we are not limited by disk space, then it is better to choose the UNDO_RETENTION time that is best for us (for FLASHBACK, etc.) and allocate the appropriate size to the UNDO tablespace according to the database activity :
Formula : Undo Size = Optimal Undo Retention × DB_BLOCK_SIZE × UNDO_BLOCK_PER_SEC
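As a sanity check with the sample figures above (assuming an 8 KB block size): the measured rate works out to roughly 50 MB / (8192 × 24000) ≈ 0.27 undo blocks per second, so sizing for the optimal retention of 24,000 seconds gives 24000 × 8192 × 0.27 / (1024 × 1024) ≈ 50 MB, which matches the actual undo size reported.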
Here again we can find it in a single query :
SQL> SELECT d.undo_size/(1024*1024) "ACTUAL UNDO SIZE [MByte]",
            SUBSTR(e.value,1,25) "UNDO RETENTION [Sec]",
            (TO_NUMBER(e.value) * TO_NUMBER(f.value) * g.undo_block_per_sec) / (1024*1024) "NEEDED UNDO SIZE [MByte]"
     FROM (SELECT SUM(a.bytes) undo_size
           FROM v$datafile a, v$tablespace b, dba_tablespaces c
           WHERE c.contents = 'UNDO' AND c.status = 'ONLINE'
           AND b.name = c.tablespace_name AND a.ts# = b.ts#) d,
          v$parameter e, v$parameter f,
          (SELECT MAX(undoblks/((end_time-begin_time)*3600*24)) undo_block_per_sec
           FROM v$undostat) g
     WHERE e.name = 'undo_retention' AND f.name = 'db_block_size'
     /
We can avoid the ORA-01555 error as follows :
1.) Do not run discrete transactions while sensitive queries or transactions are running, unless we are confident that the data sets required are mutually exclusive.
2.) Schedule long-running queries and transactions out of hours, so that the consistent gets will not need to roll back changes made since the snapshot SCN. This also reduces the work done by the server, and thus improves performance.
3.) Code long-running processes as a series of restartable steps.
4.) Shrink all rollback segments back to their optimal size manually before running a sensitive query or transaction, to reduce the risk of consistent-get rollback failure due to extent deallocation.
5.) Use a large optimal value on all rollback segments, to delay extent reuse.
6.) Don't fetch across commits. That is, don't fetch on a cursor that was opened prior to the last commit, particularly if the data queried by the cursor is being changed in the current session.
7.) Commit less often in tasks that will run at the same time as the sensitive query, particularly in PL/SQL procedures, to reduce transaction slot reuse.

How to configure Shared Server?
A shared server process allows a single server process to service several clients, based on the premise that, usually in an OLTP environment, a user is more often than not reading and editing data on the screen rather than actually executing DML. What this means is that there will be chunks of time when the dedicated server process, dedicated to a particular client, would be sitting idle. It is this idleness that is exploited by the shared server process in servicing several clients together.
Shared server is enabled by setting the SHARED_SERVERS initialization parameter to a value greater than 0. The other shared server initialization parameters need not be set. Because shared server requires at least one dispatcher in order to work, a dispatcher is brought up even if no dispatcher has been configured. The SHARED_SERVERS initialization parameter specifies the minimum number of shared servers that we want created when the instance is started. After instance startup, Oracle Database can dynamically adjust the number of shared servers based on how busy existing shared servers are and the length of the request queue. In typical systems, the number of shared servers stabilizes at a ratio of one shared server for every ten connections. For OLTP applications, when the rate of requests is low, or when the ratio of server usage to requests is low, the connections-to-servers ratio could be higher. If we know the average load on our system, then we can set SHARED_SERVERS to an optimal value. The example below shows how we can use this parameter.
For Example : Assume a database is being used by a telemarketing center staffed by 1000 agents. On average, each agent spends 90% of the time talking to customers and only 10% of the time looking up and updating records. To keep the shared servers from being terminated as agents talk to customers and then spawned again as agents access the database, a DBA specifies that the optimal number of shared servers is 100.
However, not all work shifts are staffed at the same level. On the night shift, only 200 agents are needed. Since SHARED_SERVERS is a dynamic parameter, the DBA reduces the number of shared servers to 20 at night, thus allowing resources to be freed up for other tasks such as batch jobs.
Setting the Initial Number of Dispatchers :
We can specify multiple dispatcher configurations by setting DISPATCHERS to a comma-separated list of strings, or by specifying multiple DISPATCHERS parameters in the initialization file. If we specify DISPATCHERS multiple times, the lines must be adjacent to each other in the initialization parameter file. Internally, Oracle Database assigns an INDEX value (beginning with zero) to each DISPATCHERS parameter. We can later refer to that DISPATCHERS parameter in an ALTER SYSTEM statement by its index number. Some examples of setting the DISPATCHERS initialization parameter follow.
DISPATCHERS="(PROTOCOL=TCP)(DISPATCHERS=2)"
DISPATCHERS="(ADDRESS=(PROTOCOL=TCP)(HOST=144.25.16.201))(DISPATCHERS=2)"
To force the dispatchers to use a specific port as the listening endpoint, add the PORT attribute as follows:
DISPATCHERS="(ADDRESS=(PROTOCOL=TCP)(PORT=5000))"
DISPATCHERS="(ADDRESS=(PROTOCOL=TCP)(PORT=5001))"
Determining the Number of Dispatchers :
Once we know the number of possible connections for each process for the operating system, we calculate the initial number of dispatchers to create during instance startup, for each network protocol, using the following formula:
Number of dispatchers = CEIL ( max. concurrent sessions / connections for each dispatcher )
CEIL returns the result rounded up to the next whole integer. For example, assume a system that can support 970 connections for each process, and that has a maximum of 4000 sessions concurrently connected through TCP/IP and a maximum of 2500 sessions concurrently connected through TCP/IP with SSL. Then the DISPATCHERS attribute for TCP/IP should be set to a minimum of five dispatchers (4000 / 970), and for TCP/IP with SSL three dispatchers (2500 / 970):
DISPATCHERS='(PROT=tcp)(DISP=5)', '(PROT=tcps)(DISP=3)'
Depending on performance, we may need to adjust the number of dispatchers.
Steps to configure shared server :
To configure shared server we have to set the following parameters. All of the parameters below are dynamic. Below is a demo to configure the shared server.
1.) alter system set shared_servers=25;
2.) alter system set max_shared_servers=50;
3.) alter system set dispatchers='(PROT=tcp)(DISP=30)';
4.) Add (SERVER = SHARED) in the tnsnames.ora file. The tnsnames.ora entry looks like:
NOIDA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = XXXX)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = SHARED)
      (SERVICE_NAME = noida)
    )
  )
To check the status of the server, fire the below query :
SQL> select distinct server, username from v$session ;
SERVER      USERNAME
---------   --------
DEDICATED   SYS
DEDICATED

Once, I found that after configuring the shared server, the above query showed the server status as 'NONE'. What does it mean if SERVER = 'NONE' in v$session? On googling, I found that in a shared server configuration, when we see the value 'NONE', it means there is no task being processed by the shared server for that session. The server column will in fact show a status of 'SHARED' if there is some task being processed at that particular time by the shared server process for that session. Hence, to check the status, fire some big query and then check the server status.
Disabling Shared Server :
We can disable shared server by setting SHARED_SERVERS to 0. We can do this dynamically with the 'alter system' statement. When we disable shared server, no new clients can connect in shared mode. However, Oracle Database retains some shared servers until all shared server connections are closed. The number of shared servers retained is either the number specified by the preceding setting of shared_servers or the value of the max_shared_servers parameter, whichever is smaller. If both shared_servers and max_shared_servers are set to 0, then all shared servers will terminate, and requests from remaining shared server clients will be queued until the value of shared_servers or max_shared_servers is raised again. To terminate dispatchers once all shared server clients disconnect, enter this statement:
SQL> alter system set dispatchers='' ;

When To Use Database Resident Connection Pooling
Database resident connection pooling is useful when multiple clients access the database and when any of the following apply :
A large number of client connections need to be supported with minimum memory usage.
The client applications are similar and can share or reuse sessions.
Applications are similar if they connect with the same database credentials and use the same schema.
The client applications acquire a database connection, work on it for a relatively short duration, and then release it.
Session affinity is not required across client requests.
There are multiple processes and multiple hosts on the client side.
Advantages of Database Resident Connection Pooling : Using database resident connection pooling provides the following advantages : Enables resource sharing among multiple middle-tier client applications.
Improves scalability of databases and applications by reducing resource usage.
Provides pooling for architectures with multi-process, single-threaded application servers.
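On the server side, the pool is administered with the DBMS_CONNECTION_POOL package (available from 11g onwards). Below is a minimal sketch of configuring and starting the default pool, plus a tnsnames.ora alias that requests a pooled server; the sizing values and the NOIDA_POOLED alias name are illustrative assumptions, not recommendations :
SQL> exec dbms_connection_pool.configure_pool(pool_name => 'SYS_DEFAULT_CONNECTION_POOL', minsize => 4, maxsize => 40, inactivity_timeout => 300);
SQL> exec dbms_connection_pool.start_pool;
-- Client side: request a pooled server instead of SHARED or DEDICATED
NOIDA_POOLED =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = XXXX)(PORT = 1521))
    (CONNECT_DATA = (SERVER = POOLED)(SERVICE_NAME = noida))
  )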
Suspending and Resuming a Database
The ALTER SYSTEM SUSPEND statement halts all input and output (I/O) to datafiles (file header and file data) and control files. The suspended state lets us back up a database without I/O interference. When the database is suspended, all preexisting I/O operations are allowed to complete and any new database accesses are placed in a queued state. The suspend command is not specific to an instance. In an Oracle Real Application Clusters environment, when we issue the suspend command on one system, internal locking mechanisms propagate the halt request across instances, thereby quiescing all active instances in a given cluster. However, if someone starts a new instance while another instance is being suspended, the new instance will not be suspended.
Use the ALTER SYSTEM RESUME statement to resume normal database operations. The SUSPEND and RESUME commands can be issued from different instances. For example, if instances 1, 2, and 3 are running, and we issue an ALTER SYSTEM SUSPEND statement from instance 1, then we can issue a RESUME statement from instance 1, 2, or 3 with the same effect. The suspend/resume feature is useful in systems that allow us to mirror a disk or file and then split the mirror, providing an alternative backup and restore solution. If we use a system that is unable to split a mirrored disk from an existing database while writes are occurring, then we can use the suspend/resume feature to facilitate the split. The suspend/resume feature is not a suitable substitute for normal shutdown operations, because copies of a suspended database can contain uncommitted updates.
The following statements illustrate suspend and resume usage. The V$INSTANCE view is queried to confirm database status.
SQL> alter system suspend;
System altered
SQL> select database_status from v$instance;
DATABASE_STATUS
------------------------
SUSPENDED
SQL> alter system resume ;
System altered
SQL> select database_status from v$instance ;
DATABASE_STATUS
------------------------
ACTIVE
Flashback Data Archive (FBDA)
In Oracle 11g Flashback Data Archive (Oracle Total Recall) provides the ability to track and store all transactional changes to a table over its lifetime. It is no longer necessary to build this intelligence into our application. A Flashback Data Archive is useful for compliance with record retention policies and audit reports. Prior to Oracle 11g, flashback technology was to a large part based on the availability of undo data or flashback logs, and both the undo data and the flashback logs are subject to recycling when space pressure exists. The UNDO tablespace in Oracle was primarily meant for transaction consistency and not data archival. A Flashback Data Archive is configured with a retention time. Data archived in the Flashback Data Archive is retained for the retention time. Let's look at an example :
Creating a Flashback Data Archive :
SQL> create flashback archive near_term tablespace users retention 1 month ;
Flashback archive created.
The archive is created in the tablespace Users. Assume we have to record changes to a table called employees which is in the "HR" schema. All we need to do is enable the Flashback Data Archive status of the table to start recording the changes in that archive.
SQL> alter table hr.employees flashback archive near_term;
Table altered.
This puts the table into Flashback Data Archive mode. All the changes to the rows of the table will now be tracked permanently.
SQL> select salary,job_id from hr.employees where employee_id=121;
SALARY  JOB_ID
------  -------
8200    ST_MAN
SQL> update hr.employees set salary=50000 where employee_id=121;
1 row updated.
SQL> commit;
Commit complete.
Now, if we select the row, it will always display 50000 in this column. To find out the older value as of a certain time, we can use the Flashback query as shown below :
SQL> select salary from hr.employees as of timestamp
     to_timestamp('09/5/2011 10:55:00','mm/dd/yyyy hh24:mi:ss')
     where employee_id=121;
SALARY
---------
8200
Now, after some time, when the undo data has been purged out of the undo segments, query the flashback data again :
SQL> select salary from hr.employees as of timestamp
     to_timestamp('09/5/2011 10:55:00','mm/dd/yyyy hh24:mi:ss')
     where employee_id=121 ;
SALARY
---------
8200
It comes back with the result: 8200. The undo is gone, so where did the data come from? We can find out using autotrace and the execution plan :
SQL> set autotrace traceonly explain
SQL> select salary from hr.employees as of timestamp
     to_timestamp('09/5/2011 10:55:00','mm/dd/yyyy hh24:mi:ss')
     where employee_id=121;
Check the explain plan detail by clicking the below link :
http://www.4shared.com/document/WXMMFOS8/fda_explain_tab.html
This output answers the riddle "Where did the data come from?"; it came from the table SYS_FBA_HIST_68909, which is a location in the Flashback Archive we defined earlier for that table. We can check the table, but it's not supported by Oracle to directly peek at the data there. Anyway, I don't see a reason we would want to do that. The data inside the archive is retained, but for how long? This is where the retention period comes into play. It's retained up to that period. After that, when new data comes in, the older data will be purged. We can also purge it ourselves, e.g.
SQL> alter flashback archive near_term purge before scn xxxxxxxx;
Disable flashback :
Disable flashback archiving for the table employees :
SQL> ALTER TABLE hr.employees NO FLASHBACK ARCHIVE;
Remove the Flashback Data Archive and all its historical data, but not its tablespaces :
SQL> DROP FLASHBACK ARCHIVE near_term ;
Use Cases : Flashback Data Archive is handy for many purposes. Here are some ideas :
• To audit for recording how data changed
• To enable an application to undo changes (correct mistakes)
• To debug how data has been changed
• To comply with some regulations that require data must not be changed after some time. Flashback Data Archives are not regular tables so they can't be changed by typical users.
• Recording audit trails on cheaper storage thereby allowing more retention at less cost
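To see which archives exist and which tables are being tracked, the FBDA dictionary views can be queried. A minimal sketch using the 11g views (the output depends on the environment) :
SQL> select flashback_archive_name, retention_in_days, status from dba_flashback_archive;
SQL> select owner_name, table_name, flashback_archive_name from dba_flashback_archive_tables;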
Difference Between Upgradation and Migration in Oracle
Upgradation : An upgrade is the process of replacing our existing software with a newer version of the same product, for example, replacing an Oracle 9i release with an Oracle 10g release. Upgrading our applications usually does not require special tools. Our existing reports should look and behave the same in both products; however, sometimes minor changes may be seen in the product. Upgradation is done at the software level.
Migration : Migration is the process of replicating applications from one product in another product, for example, transforming existing Oracle 9i applications to Oracle 10g applications. A migration is any change that transforms our hardware and/or software architecture to a new state. Migration is done at the database level (say, migrating from DB2 to Oracle).
Tracking Rman Backup
In Oracle 10g it is possible to track changed blocks using a change tracking file. Enabling change tracking does produce a small overhead, but it greatly improves the performance of incremental backups. The current change tracking status can be displayed by the below query :
SQL> select status from v$block_change_tracking ;
Change tracking is enabled using the ALTER DATABASE command.
SQL> alter database enable block change tracking ;
The tracking file is created with a minimum size of 10M and grows in 10M increments. Its size is typically 1/30,000 the size of the data blocks to be tracked. Change tracking can be disabled using the following command.
SQL> alter database disable block change tracking ;
We can track how much of an RMAN job has been done so far at the session level :
SQL> select SID, START_TIME, TOTALWORK, sofar, (sofar/totalwork) * 100 done, sysdate + TIME_REMAINING/3600/24 end_at from v$session_longops where totalwork > sofar and opname NOT LIKE '%aggregate%' and opname like 'RMAN%' ;
SID  START_TIM  TOTALWORK  SOFAR  DONE        END_AT
---  ---------  ---------  -----  ----------  ---------
142  18-AUG-11  138240     47230  34.1652199  18-AUG-11
We can also track the RMAN job status in a running session by using the below query :
SQL> SELECT s.SID, p.SPID, s.CLIENT_INFO FROM V$PROCESS p, V$SESSION s WHERE p.ADDR = s.PADDR AND CLIENT_INFO LIKE 'rman%';
SID  SPID  CLIENT_INFO
---  ----  ------------------------
142  8924  rman channel=ORA_DISK_1
Rman job status at Database Active session :
SQL> SELECT /**** session active users ****/ s.sid sid, s.serial# serial_id, lpad(s.status,9) session_status, lpad(s.username,35) ora_user, lpad(s.osuser,12) os_user, lpad(p.spid,7) os_pid, s.program SES_PRGM, lpad(s.terminal,10) session_terminal, lpad(s.machine,19) session_machine FROM v$process p, v$session s WHERE p.addr (+) = s.paddr AND s.status = 'ACTIVE' AND s.username IS NOT NULL ORDER BY sid ;
SID  SERIAL_ID  SESSION_S  ORA_USER  OS_USER  OS_PID  SES_PRGM
---  ---------  ---------  --------  -------  ------  -----------
142  16         ACTIVE     SYS       Neerajs  8924    rman.exe
157  1          ACTIVE     SYS       Neerajs  10280   sqlplus.exe
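The change tracking file location can also be set explicitly instead of letting it default to the DB_CREATE_FILE_DEST location. A small sketch (the file path below is an assumed example; disable change tracking first if it is already enabled) :
SQL> alter database enable block change tracking using file 'D:\oracle\product\10.2.0\oradata\noida\change_trk.f' reuse;
SQL> select filename, status, bytes from v$block_change_tracking;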
Flashback Features in Oracle 10g
As I have covered the "Architecture Of Flashback" in Oracle 10g in my previous post, here I am going further to explain and demo some of the flashback features of Oracle 10g.
How to check Flashback Status :
The flashback status of a database can be checked from the below query and system parameters.
SQL> select NAME,FLASHBACK_ON from v$database ;
SQL> show parameter undo_retention
NAME            TYPE     VALUE
--------------- -------- ------
undo_retention  integer  900
SQL> show parameter db_flashback_retention
NAME                           TYPE     VALUE
------------------------------ -------- ------
db_flashback_retention_target  integer  1440
SQL> show parameter db_recovery_file_dest
NAME                        TYPE         VALUE
--------------------------- ------------ ---------------------------------------------
db_recovery_file_dest       string       D:\oracle\product\10.2.0\flash_recovery_area
db_recovery_file_dest_size  big integer  5G
If the database Flashback feature is off then follow the below steps : 1.) The Database must be started through SPFILE.
SQL> show parameter spfile
NAME    TYPE    VALUE
------- ------- -------------------------------------------------------
spfile  string  D:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\SPFILENOIDA.ORA
2.) The Database must be in Archive log mode.
SQL> shut immediate
SQL> startup mount
SQL> alter database archivelog ;
SQL> alter database open ;
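To confirm the mode change before moving on to the next step, the standard SQL*Plus check can be run (output trimmed; the log sequence numbers will vary per environment) :
SQL> archive log list
Database log mode    Archive Mode
Automatic archival   Enabled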
3.) Undo management should be AUTO
SQL> show parameter undo_management
NAME             TYPE    VALUE
---------------- ------- ------
undo_management  string  AUTO
4.) Set the recovery file destination or flashback area which will contain all flashback logs depending on the undo retention period.
SQL> alter system set db_recovery_file_dest='D:\oracle\product\10.2.0\flash_recovery_area' scope=both;
System altered.
5.) Set the recovery file destination size. This is the hard limit on the total space to be used by target database recovery files created in the flash recovery area.
SQL> alter system set db_recovery_file_dest_size=5G scope=both;
System altered.
6.) Set the flashback retention target. This is the upper limit (in minutes) on how far back in time the database may be flashed back. How far back one can flashback a database depends on how much flashback data Oracle has kept in the flash recovery area.
SQL> alter system set db_flashback_retention_target=1440 scope=both;
System altered.
7.) Convert the Database to FLASHBACK ON state.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.
Total System Global Area  830472192 bytes
Fixed Size                  2074760 bytes
Variable Size             213911416 bytes
Database Buffers          608174080 bytes
Redo Buffers                6311936 bytes
Database mounted.
SQL> ALTER DATABASE FLASHBACK ON ;
Database altered.
SQL> alter database open;
Database altered.
SQL> select NAME, FLASHBACK_ON from v$database ;
NAME    FLASHBACK_ON
------- ------------
NOIDA   YES
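With flashback now enabled, we can check at any moment how far back the database can actually go. A minimal sketch using the v$flashback_database_log view (also mentioned later in this post) :
SQL> select oldest_flashback_scn, oldest_flashback_time, retention_target, flashback_size from v$flashback_database_log;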
Flashback technology provides a set of features to view and rewind data back and forth in time. The flashback features offer the capability to query past versions of schema objects, query historical data, perform change analysis, and perform self-service repair to recover from logical corruption while the database is online. Here we will discuss some more features of Flashback. The Flashback features are :
1.) Flashback Query
2.) Flashback Version Query
3.) Flashback Transaction Query
4.) Flashback Table
5.) Flashback Drop (Recycle Bin)
6.) Flashback Database
7.) Flashback Query Functions
1.) Flashback Query : Flashback Query allows the contents of a table to be queried with reference to a specific point in time, using the AS OF clause. Essentially it is the same as the DBMS_FLASHBACK functionality, but in a more convenient form. Here is a demo of Flashback Query :
SQL> CREATE TABLE flashback_query_test (id NUMBER(10));
Table created.
SQL> SELECT current_scn, TO_CHAR(SYSTIMESTAMP, 'YYYY-MM-DD HH24:MI:SS') FROM v$database;
CURRENT_SCN  TO_CHAR(SYSTIMESTAM
-----------  -------------------
    1365842  2011-08-12 13:44:15
SQL> INSERT INTO flashback_query_test (id) VALUES (1);
1 row created.
SQL> commit;
Commit complete.
SQL> SELECT COUNT(*) FROM flashback_query_test;
COUNT(*)
----------
1
SQL> SELECT COUNT(*) FROM flashback_query_test AS OF TIMESTAMP TO_TIMESTAMP('2011-08-12 13:44:15', 'YYYY-MM-DD HH24:MI:SS');
COUNT(*)
----------
0
SQL> SELECT COUNT(*) FROM flashback_query_test AS OF SCN 1365842;
COUNT(*)
----------
0
2.) Flashback Version Query : Oracle Flashback Version Query is an extension to SQL that can be used to retrieve the versions of rows in a given table that existed in a specific time interval. Oracle Flashback Version Query returns a row for each version of the row that existed in the specified time interval. For any given table, a new row version is created each time the COMMIT statement is executed. Flashback version query allows the versions of a specific row to be tracked during a specified time period using the VERSIONS BETWEEN clause. Here is a demo of Flashback Version Query :
SQL> CREATE TABLE flashback_version_query_test (id NUMBER(10), description VARCHAR2(50));
Table created.
SQL> INSERT INTO flashback_version_query_test (id, description) VALUES (1, 'ONE'); 1 row created. SQL> COMMIT; Commit complete. SQL> SELECT current_scn, TO_CHAR(SYSTIMESTAMP, 'YYYY-MM-DD HH24:MI:SS') FROM v$database; CURRENT_SCN TO_CHAR(SYSTIMESTAMP) -------------------------------------------------1366200 2011-08-12 13:53:16 SQL> UPDATE flashback_version_query_test SET description = 'TWO' WHERE id = 1; 1 row updated. SQL> COMMIT; Commit complete. SQL> UPDATE flashback_version_query_test SET description = 'THREE' WHERE id = 1; 1 row updated. SQL> COMMIT; Commit complete. SQL> SELECT current_scn, TO_CHAR(SYSTIMESTAMP, 'YYYY-MM-DD HH24:MI:SS') FROM v$database; CURRENT_SCN TO_CHAR(SYSTIMESTAM --------------------------------------------------1366214 2011-08-12 13:54:38
SQL> SELECT versions_startscn, versions_starttime, versions_endscn, versions_endtime, versions_operation, description FROM flashback_version_query_test VERSIONS BETWEEN TIMESTAMP TO_TIMESTAMP('2011-08-12 13:53:11', 'YYYY-MM-DD HH24:MI:SS') AND TO_TIMESTAMP('2011-08-12 13:54:38', 'YYYY-MM-DD HH24:MI:SS') WHERE id = 1;
VERSIONS_STARTSCN  VERSIONS_STARTTIME     VERSIONS_ENDSCN  VERSIONS_ENDTIME       VERSIONS_OPERATION  DESCRIPTION
-----------------  ---------------------  ---------------  ---------------------  ------------------  -----------
          1366212  12.08.11 13:53:35.000                                          U                   THREE
          1366209  12.08.11 13:53:35.000          1366212  12.08.11 13:53:35.000  U                   TWO
                                                  1366209  12.08.11 13:53:35.000                      ONE
3 rows selected
The available pseudocolumn meanings are :
VERSIONS_STARTSCN or VERSIONS_STARTTIME - Starting SCN and TIMESTAMP when the row took on this value. NULL is returned if the row was created before the lower bound SCN or TIMESTAMP.
VERSIONS_ENDSCN or VERSIONS_ENDTIME - Ending SCN and TIMESTAMP when the row last contained this value. NULL is returned if the value of the row is still current at the upper bound SCN or TIMESTAMP.
VERSIONS_XID - ID of the transaction that created the row in its current state.
VERSIONS_OPERATION - Operation performed by the transaction ((I)nsert, (U)pdate or (D)elete).
3.) Flashback Transaction Query : Flashback transaction query can be used to get extra information about the transactions listed by flashback version queries. The VERSIONS_XID column values from a flashback version query can be used to query the FLASHBACK_TRANSACTION_QUERY view.
SQL> SELECT xid, operation, start_scn, commit_scn, logon_user, undo_sql FROM flashback_transaction_query WHERE xid = HEXTORAW('0600030021000000');
XID               OPERATION  START_SCN  COMMIT_SCN  LOGON_USER
----------------  ---------  ---------  ----------  ----------
0600030021000000  UPDATE     725208     725209      SCOTT
UNDO_SQL
--------------------------------------------------------------------------------
update "SCOTT"."FLASHBACK_VERSION_QUERY_TEST" set "DESCRIPTION" = 'ONE' where ROWID = 'AAAMP9AAEAAAAAYAAA' ;
1 rows selected.
4.) Flashback Table : There are two distinct table-related flashback features in oracle: flashback table, which relies on undo segments, and flashback drop, which relies on the recyclebin, not the undo segments.
Flashback table lets us recover a table to a previous point in time. We don't have to take the tablespace offline during a recovery; however, oracle acquires exclusive DML locks on the table or tables that we are recovering, but the table continues to be online. When using flashback table oracle does not preserve the ROWIDs when it restores the rows in the changed data blocks of the tables, since it uses DML operations to perform its work. We must have enabled row movement on the tables that we are going to flashback; only flashback table requires us to enable row movement. If the data is not in the undo segments then we cannot recover the table by using flashback table; however, we can use other means to recover the table.
Restrictions on flashback table recovery :
We cannot use flashback table on SYS objects.
We cannot flashback a table that has had preceding DDL operations on the table like table structure changes, dropping columns, etc.
The flashback must entirely succeed or it will fail; if flashing back multiple tables, all tables must be flashed back or none.
Any constraint violations will abort the flashback operation.
We cannot flashback a table that has had any shrink or storage changes to the table (pct-free, initrans and maxtrans).
The following example creates a table, inserts some data and flashbacks to a point prior to the data insertion. Finally it flashbacks to the time after the data insertion. Here is a demo of Flashback Table :
SQL> CREATE TABLE flashback_table_test (id NUMBER(10));
Table created.
SQL> ALTER TABLE flashback_table_test ENABLE ROW MOVEMENT;
Table altered.
SQL> SELECT current_scn FROM v$database;
CURRENT_SCN
--------------
1368791
SQL> INSERT INTO flashback_table_test (id) VALUES (1);
1 row created.
SQL> COMMIT;
Commit complete.
SQL> SELECT current_scn FROM v$database;
CURRENT_SCN
---------------
1368802
SQL> FLASHBACK TABLE flashback_table_test TO SCN 1368791;
Flashback complete.
SQL> SELECT COUNT(*) FROM flashback_table_test;
COUNT(*)
----------
0
SQL> FLASHBACK TABLE flashback_table_test TO SCN 1368802;
Flashback complete.
SQL> SELECT COUNT(*) FROM flashback_table_test;
COUNT(*)
------------
1
Flashback of tables can also be performed using timestamps.
SQL> FLASHBACK TABLE flashback_table_test TO TIMESTAMP TO_TIMESTAMP('2004-03-03 10:00:00', 'YYYY-MM-DD HH:MI:SS');
5.) Flashback Drop (Recycle Bin) : Prior to Oracle 10g, a DROP command permanently removed objects from the database. In Oracle 10g, a DROP command places the object in the recycle bin. The extents allocated to the segment are not reallocated until we purge the object. We can restore the object from the recycle bin at any time. This feature eliminates the need to perform a point-in-time recovery operation. Therefore, it has minimum impact on other database users. In Oracle 10g the default action of a DROP TABLE command is to move the table to the recycle bin (or rename it), rather than actually dropping it. The PURGE option can be used to permanently drop a table. The recycle bin is a logical collection of previously dropped objects, with access tied to the DROP privilege. The contents of the recycle bin can be shown using the SHOW RECYCLEBIN command and purged using the PURGE TABLE command. As a result, a previously dropped table can be recovered from the recycle bin.
Recycle Bin : A recycle bin contains all the dropped database objects until :
We permanently drop them with the PURGE command.
We recover the dropped objects with the UNDROP command.
There is no room in the tablespace for new rows or updates to existing rows.
The tablespace must be extended.
We can view the dropped objects in the recycle bin from two dictionary views :
user_recyclebin — lists all dropped user objects.
dba_recyclebin — lists all dropped system-wide objects.
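For example, a quick look at the current user's recycle bin through the dictionary view (a sketch; only a few of the available columns are selected) :
SQL> select object_name, original_name, type, droptime from user_recyclebin;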
Here is a demo of Flashback Drop :
SQL> CREATE TABLE flashback_drop_test (id NUMBER(10)) ;
Table created.
SQL> INSERT INTO flashback_drop_test (id) VALUES (1) ;
1 row created.
SQL> COMMIT ;
Commit complete.
SQL> DROP TABLE flashback_drop_test ;
Table dropped.
SQL> SHOW RECYCLEBIN ;
ORIGINAL NAME        RECYCLEBIN NAME                 OBJECT TYPE  DROP TIME
-------------------- ------------------------------- ------------ -------------------
flashback_drop_test  BIN$KEZB6YXdRfW1925mCoGOlg==$0  table        201108:15:58:31EST
SQL> FLASHBACK TABLE flashback_drop_test TO BEFORE DROP;
Flashback complete.
SQL> SELECT * FROM flashback_drop_test;
ID
----------
1
If an object is dropped and recreated multiple times, all dropped versions will be kept in the recycle bin, subject to space. Where multiple versions are present it's best to reference the tables via the RECYCLEBIN_NAME. For any references to the ORIGINAL_NAME it is assumed that the most recently dropped version is the one in question. During the flashback operation the table can be renamed :
FLASHBACK TABLE flashback_drop_test TO BEFORE DROP RENAME TO flashback_drop_test_old;
Several purge options exist :
PURGE TABLE tablename;                   -- Specific table.
PURGE INDEX indexname;                   -- Specific index.
PURGE TABLESPACE ts_name;                -- All tables in a specific tablespace.
PURGE TABLESPACE ts_name USER username;  -- All tables in a specific tablespace for a specific user.
PURGE RECYCLEBIN;                        -- The current user's entire recycle bin.
PURGE DBA_RECYCLEBIN;                    -- The whole recycle bin.
Several restrictions apply relating to the recycle bin : Only available for non-system, locally managed tablespaces. There is no fixed size for the recycle bin. The time an object remains in the recycle bin can vary. The objects in the recycle bin are restricted to query operations only (no DDL or DML).
Flashback query operations must reference the recycle bin name.
Tables and all dependent objects are placed into, recovered and purged from the recycle bin at the same time.
Tables with Fine Grained Access policies are not protected by the recycle bin.
Partitioned index-organized tables are not protected by the recycle bin.
The recycle bin does not preserve referential integrity .
6.) Flashback Database : The FLASHBACK DATABASE command is a fast alternative to performing an incomplete recovery. In order to flashback the database we must have the SYSDBA privilege and the flash recovery area must have been prepared in advance. The database can be taken back in time by reversing all work done sequentially. The database must be opened with resetlogs as if an incomplete recovery had happened. This is ideal if we have a database corruption (wrong transaction, etc) and require the database to be
rewound before the corruption occurred. If we have media or a physical problem a normal recovery is required. Flashback database is not enabled by default. When flashback database is enabled, a process (RVWR – Recovery Writer) copies modified blocks to the flashback buffer. This buffer is then flushed to disk (flashback logs). Remember, the flashback logging is not a log of changes but a log of the complete block images. Not every changed block is logged, as this would be too much for the database to cope with, so only as many blocks are copied such that performance is not impacted. Flashback database will construct a version of the data files that is just before the time we want. The data files probably will be in an inconsistent state as different blocks will be at different SCNs; to complete the flashback process, Oracle then uses the redo logs to recover all the blocks to the exact time requested, thus synchronizing all the data files to the same SCN. Archiving mode must be enabled to use flashback database. An important note to remember is that Flashback can only reverse changes; it can never redo them. The advantage in using flashback database is the speed and convenience with which we can take the database back in time. We can use rman, sql and Enterprise Manager to flashback a database. If the flash recovery area does not have enough room the database will continue to function but flashback operations may fail. It is not possible to flashback one tablespace; we must flashback the whole database. If performance is being affected by flashback data collection, turn flashback logging off for some tablespaces. We cannot undo a resized data file to a smaller size. When using 'backup recovery area' and 'backup recovery files', controlfiles, redo logs, permanent files and flashback logs will not be backed up.
SQL> CREATE TABLE flashback_database_test (id NUMBER(10));
Table created.
SQL> conn / as sysdba
SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount exclusive;
Database mounted.
SQL> FLASHBACK DATABASE TO TIMESTAMP SYSDATE-(1/24/12) ;   -- ( 5 min back)
Flashback complete.
SQL> alter database open resetlogs ;
Database altered.
SQL> conn neer/neer@noida
Connected.
SQL> desc flashback_database_test
ERROR : ORA-04043 : object flashback_database_test does not exist .
Some other variations of the flashback database command include :
FLASHBACK DATABASE TO TIMESTAMP my_date ;
FLASHBACK DATABASE TO BEFORE TIMESTAMP my_date;
FLASHBACK DATABASE TO SCN my_scn;
FLASHBACK DATABASE TO BEFORE SCN my_scn;
The window of time that is available for flashback is determined by the db_flashback_retention_target parameter. The maximum flashback window can be determined by querying the v$flashback_database_log view. It is only possible to flashback to a point in time after flashback was enabled on the database and since the last RESETLOGS command.
7.) Flashback Query Functions : The TIMESTAMP_TO_SCN and SCN_TO_TIMESTAMP functions have been added to SQL and PL/SQL to simplify flashback operations.
SQL> select * from emp as of scn timestamp_to_scn(systimestamp - 1/24) ;
SQL> select * from emp as of timestamp scn_to_timestamp(9945365);
SQL> declare
       l_scn number ;
       l_timestamp timestamp ;
     begin
       l_scn := timestamp_to_scn(systimestamp - 1/24);
       l_timestamp := scn_to_timestamp(l_scn);
     end ;
     /
Difference Between OBSOLETE AND EXPIRED Backup
RMAN considers backups of datafiles and control files as obsolete, that is, no longer needed for recovery, according to criteria that we specify in the CONFIGURE command. We can then use the REPORT OBSOLETE command to view obsolete files and DELETE OBSOLETE to delete them. For example, we set our retention policy to REDUNDANCY 2. This means we always want to keep at least 2 backups; after that, if we take another backup the oldest one becomes obsolete because there are now 3 backups and we only want to keep 2. If our flash recovery area is full then obsolete backups can be overwritten.
A status of "expired" means that the backup piece or backup set is not found in the backup destination, i.e. it is missing. Backup info is held in our controlfile and catalog, so the controlfile may think that there is a backup under a directory with a given name even though someone has deleted that file from the operating system. We can run the CROSSCHECK command to check whether these files exist; if RMAN finds that a file is missing, it marks that backup record as expired, which means it no longer exists.
Rman Retention Policy Based On Redundancy Policy
I have found that many of us use the RMAN recovery window retention policy; I have rarely seen the RMAN redundancy policy used. Here we will discuss the disadvantages of the redundancy policy.
The REDUNDANCY parameter of the CONFIGURE RETENTION POLICY command specifies how many backups of each datafile and control file RMAN should keep. In other words, if the number of backups for a specific datafile or control file exceeds the REDUNDANCY setting, then RMAN considers the extra backups as obsolete. Suppose we have a RMAN retention policy of "REDUNDANCY 2". This means that as long as we have at least two backups of the same datafile, controlfile/spfile or archivelog, the other older backups become obsolete and RMAN is allowed to safely remove them. Now, let's also suppose that every night we backup our database using the following script :
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> run {
        backup database plus archivelog;
        delete noprompt obsolete redundancy 2;
      }
The backup task is quite simple: first of all it ensures that we have the controlfile autobackup feature on, then it backs up the database and archivelogs and, at the end, it deletes all obsolete backups using the REDUNDANCY 2 retention policy. Using the above approach we might think that we can restore our database as it was two days ago, right? For example, if we have a backup taken on Monday and another one taken on Tuesday we may restore our database as it was within the (Monday_last_backup - Today) time interval. Well, that's wrong! Consider the following scenario :
1.) On Monday night we backup the database using the above script;
2.) On Tuesday, during the day, we drop a tablespace. Because this is a structural database change a controlfile autobackup will be triggered. So now we have a new controlfile backup.
3.) On Tuesday night we backup again the database... nothing unusual, right?
Well, the tricky part concerns the DELETE OBSOLETE command. When the backup script runs this command, RMAN finds three controlfile backups: one originating from the Monday backup, one from the structural change and the third from our just finished Tuesday backup database command. Now, according to the retention policy of "REDUNDANCY 2", RMAN will assume that it is safe to delete the controlfile backup taken on Monday night because it's out of our retention policy and because this backup is the oldest one. Uuups... this means we are going to have a big problem restoring the database as it was before our structural change, because we don't have a controlfile backup from that time. So, if we intend to incompletely recover our database to a previous time in the past, it's really a good idea to switch to a retention policy based on a "RECOVERY WINDOW" instead. In our case a RECOVERY WINDOW OF 2 DAYS would be more appropriate.
Flashback Architecture In Oracle
Oracle Flashback Technology is a group of Oracle Database features that let us view past states of database objects or return database objects to a previous state without using point-in-time media recovery. Flashback Database is a part of the backup & recovery enhancements in Oracle 10g Database that are called Flashback Features.
Flashback Database enables us to wind our entire database backward in time, reversing the effects of unwanted database changes within a given time window. It is similar to conventional point-in-time recovery in its effects, allowing us to return a database to its state at a time in the recent past, and it can be used to reverse most unwanted changes to a database, as long as the datafiles are intact. Oracle Flashback Database lets us quickly recover an Oracle database to a previous time to correct problems caused by logical data corruptions or user errors.
What are the Benefits ?
According to many studies and reports, human error accounts for 30-35% of data loss episodes. This makes human errors one of the biggest single causes of downtime. With the Flashback Database feature Oracle is trying to fight user and operator errors in an extremely fast and effective way. In most cases, a disastrous logical failure caused by human error can be solved by performing a Database Point-in-Time Recovery (DBPITR). Before 10g the only way to do a DBPITR was incomplete media recovery. Media recovery is a slow and time-consuming process that can take a lot of hours. On the other side, by using Flashback Database a DBPITR can be done in an extremely fast way: 25 to 105 times faster than usual incomplete media recovery and, as a result, it can minimize the downtime significantly.
Flashback Database provides :
Very effective way to recover from complex human errors.
Faster database point-in-time recovery.
Simplified management and administration .
Little performance overhead .
It provides a lot of benefits and almost no disadvantages.
The Flashback Database is not just our database "rewind" button. It is a "Time Machine" for our database data that is one single command away from us.
The Flashback Database Architecture :
Flashback Database uses its own type of log files, called Flashback Database Log Files. To support this mechanism, Oracle uses a new background process called RVWR (Recovery Writer) and a new buffer in the SGA, called the Flashback Buffer. The Oracle database periodically logs before-images of data blocks in the flashback buffer. The flashback buffer records images of all changed data blocks in the database. This means that every time a data block in the database is altered, the database writes a before-image of this block to the flashback buffer. This before-image can be used to reconstruct a datafile to the current point of time. The maximum allowed memory for the flashback buffer is 16 MB. We don't have direct control over its size. The flashback buffer size depends on the size of the current redo log buffer, which is controlled by Oracle. Starting at 10g R2, the log buffer size cannot be controlled manually by setting the initialization parameter LOG_BUFFER.
In 10g R2, Oracle combines the fixed SGA area and redo buffer together. If there is free space after Oracle puts the combined buffers into a granule, that space is added to the redo buffer. The sizing of the redo log buffer is fully controlled by Oracle. According to the SGA and its atomic sizing by granules, Oracle will automatically calculate the size of the log buffer depending on the current granule size. For smaller SGA sizes and 4 MB granules, it is possible for redo log buffer size + fixed SGA size to be a multiple of the granule size. For SGAs bigger than 128 MB, the granule size is 16 MB. We can see the current size of the redo log buffer, fixed SGA and granule by querying the V$SGAINFO view, and can query the V$SGASTAT view to display detailed information on the SGA and its structures. To find the current size of the flashback buffer, we can use the following query :
SQL> SELECT * FROM v$sgastat WHERE NAME = 'flashback generation buff';
There is no official information from Oracle that confirms the relation between the 'flashback generation buff' structure in the SGA and the real flashback buffer structure. This is only a suggestion. A similar message is written to the alertSID.log file during opening of the database :
Allocated 3981204 bytes in shared pool for flashback generation buffer
Starting background process RVWR
RVWR started with pid=16, OS id=5392
RVWR periodically writes the flashback buffer contents to flashback database logs. It is an asynchronous process and we don't have control over it. All available sources say that RVWR writes periodically to flashback logs. The explanation for this behavior is that Oracle is trying to reduce the I/O and CPU overhead that can be an issue in many production environments. Flashback log files can be created only under the Flash Recovery Area (which must be configured before enabling the Flashback Database functionality). RVWR creates flashback log files in a directory named "FLASHBACK" under the FRA. The size of every generated flashback log file is again under Oracle's control. In the current Oracle environment, during normal database activity flashback log files have a size of 8200192 bytes. It is very
close to the current redo log buffer size. The size of a generated flashback log file can differ during shutdown and startup database activities. Flashback log file sizes can differ during high-intensity write activity as well.
Flashback log files can be written only under the FRA (Flash Recovery Area). The FRA is closely related to, and is built on top of, Oracle Managed Files (OMF). OMF is a service that automates naming, location, creation and deletion of database files. By using OMF and FRA, Oracle easily manages flashback log files. They are created with automatically generated names with the extension .FLB. For instance, this is the name of one flashback log file: O1_MF_26ZYS69S_.FLB
By its nature flashback logs are similar to redo log files. LGWR writes the contents of the redo log buffer to online redo log files; RVWR writes the contents of the flashback buffer to flashback database log files. Redo log files contain all changes that are performed in the database; that data is needed in case of media or instance recovery. Flashback log files contain only the changes that are needed in case of a flashback operation. The main differences between redo log files and flashback log files are :
Flashback log files are never archived - they are reused in a circular manner.
Redo log files are used to roll forward changes in case of recovery, while flashback log files are used to roll back changes in case of a flashback operation.
Flashback log files can be compared with UNDO data (contained in UNDO tablespaces) as well. While UNDO data contains changes at the transaction level, flashback log files contain UNDO data at the data block level. While the UNDO tablespace doesn't record all operations performed on the database (for instance, DDL operations), flashback log files record that data as well. In a few words, flashback log files contain the UNDO data for our database.
To Summarize :
UNDO data doesn't contain all changes that are performed in the database, while flashback logs contain all altered blocks in the database.
UNDO data is used to roll back changes at the transaction level, while flashback logs are used to roll back changes at the database level.
We can query the V$FLASHBACK_DATABASE_LOGFILE view to find detailed info about our flashback log files. Although this view is not documented, it can be very useful to check and monitor generated flashback logs.
There is a new record section within the control file header that is named FLASHBACK LOGFILE RECORDS. It is similar to the LOG FILE RECORDS section and contains info about the lowest and highest SCN contained in every particular flashback database log file :
***************************************************************************
FLASHBACK LOGFILE RECORDS
***************************************************************************
(size = 84, compat size = 84, section max = 2048, section in-use = 136, last-recid= 0, old-recno = 0, last-recno = 0)
(extent = 1, blkno = 139, numrecs = 2048)
FLASHBACK LOG FILE #1:
(name #4) E:\ORACLE\FLASH_RECOVERY_AREA\ORCL102\FLASHBACK\O1_MF_26YR1CQ4_.FLB
Thread 1 flashback log links: forward: 2 backward: 26
size: 1000 seq: 1 bsz: 8192 nab: 0x3e9 flg: 0x0 magic: 3 dup: 1
Low scn: 0x0000.f5c5a505 05/20/2006 21:30:04
High scn: 0x0000.f5c5b325 05/20/2006 22:00:38
What does a Flashback Database operation do ?
When we perform a flashback operation, Oracle needs all flashback logs from now back to the desired time. They will be applied consecutively, starting from the newest to the oldest. For instance, if we want to flashback the database to SCN 4123376440, Oracle will read the flashback logfile section in the control file and will check the availability of all needed flashback log files. The last needed flashback log is the one whose Low scn and High scn values bracket the desired SCN 4123376440. In the current environment this is the file with name O1_MF_26YSTQ6S_.FLB and with values of:
Low SCN : 4123374373
High SCN : 4123376446
Note : If we want to perform a flashback operation successfully, we will always need to have available at least one archived (or online redo) log file. This is the particular file that contains redo log information about changes around the desired flashback point in time (SCN 4123376440). In this case, this is the archived redo log with name ARC00097_0587681349.001 that has values of:
First change#: 4123361850
Next change#: 4123380675
The flashback operation will not succeed without this particular archived redo log. The reason for this: flashback log files contain information about before-images of data blocks, related to some SCN (System Change Number). When we perform a flashback operation to SCN 4123376440, Oracle cannot complete the operation by applying flashback logs alone, because it is applying before-images of data. Oracle needs to restore each data block copy (by applying flashback log files) to its state at the closest possible point in time before SCN 4123376440. This guarantees that the subsequent "redo apply" operation will roll the database forward to SCN 4123376440 and the database will be in a consistent state. After applying flashback logs, Oracle will perform a forward operation by applying all needed archived log files (in this case, redo information from the file ARC00097_0587681349.001) that will bring the database state to the desired SCN. Oracle cannot start applying redo log files before being sure that all data blocks are returned to their state before the desired point in time. So, if the desired restore point of time is 10:00 AM and the oldest restored data block is from 09:47 AM, then we will need all archived log files that contain redo data for the time interval between 09:47 AM and 10:00 AM. Without that redo data, the flashback operation cannot succeed. When a database is restored to its state at some past target time using Flashback Database, each block changed since that time is restored from the copy of the block in the flashback logs most immediately prior to the desired target time. The redo log is then used to re-apply changes since the time that block was copied to the flashback logs.
Note : Redo logs must be available for the entire time period spanned by the flashback logs, whether on tape or on disk. (In practice, however, redo logs are generally needed much longer than the flashback retention target to support point-in-time recovery.) Flashback logs are not independent. They can be used only with the redo data that contains database changes around the desired SCN. This means that if we want to have a working flashback window (and to be able to restore the database to any point in time within this window), we need to ensure the availability of redo logs as well. If we are familiar with this information then we will be able to work with this feature in a better way and to ensure that it will help us to perform faster recovery without unexpected problems.
Rules for Retention and Deletion of Flashback Logs :
The following rules govern the flash recovery area's creation, retention, overwriting and deletion of flashback logs :
A flashback log is created whenever necessary to satisfy the flashback retention target, as long as there is enough space in the flash recovery area.
A flashback log can be reused, once it is old enough that it is no longer needed to satisfy the flashback retention target.
If the database needs to create a new flashback log and the flash recovery area is full or there is no disk space, then the oldest flashback log is reused instead.
If the flash recovery area is full, then an archived redo log may be automatically deleted by the flash recovery area to make space for other files. In such a case, any flashback logs that would require the use of that redo log file for the use of FLASHBACK DATABASE are also deleted.
Note : Re-using the oldest flashback log shortens the flashback database window. If enough flashback logs are reused due to a lack of disk space, the flashback retention target may not be satisfied.
Limitations of Flashback Database :
Since Flashback Database works by undoing changes to the datafiles that exist at the moment we run the command, it has the following limitations :
Flashback Database can only undo changes to a datafile made by an Oracle database. It cannot be used to repair media failures, or to recover from accidental deletion of datafiles.
we cannot use Flashback Database to undo a shrink datafile operation.
If the database control file is restored from backup or re-created, all accumulated flashback log information is discarded. We cannot use FLASHBACK DATABASE to return to a point in time before the restore or re-creation of a control file. When using Flashback Database with a target time at which a NOLOGGING operation was in progress, block corruption is likely in the database objects and datafiles affected by the NOLOGGING operation. For example, if we perform a direct-path INSERT operation in NOLOGGING mode, and that operation runs from 9:00 to 9:15 , and we later need to use Flashback Database to return to the target time 09:07 on that date, the objects and datafiles updated by the direct-path INSERT may be left with block corruption after the Flashback Database operation completes. If possible, avoid using Flashback Database with a target time or SCN that coincides with a NOLOGGING operation. Also, perform a full or incremental backup of the affected datafiles immediately after any NOLOGGING operation to ensure recoverability to points in time after the operation. If we expect to use Flashback Database to return to a point in time during an operation such as a direct-path INSERT, consider performing the operation in LOGGING mode.
Finally, a few important points to be noted :
The Flashback Database should be part of our Backup & Recovery strategy but it does not supersede the normal physical backup & recovery strategy. It is only an additional protection of our database data.
The Flashback Database can be used to flash back a database to its state at any point in time within the flashback window, only if all flashback logs and their related archived redo logs for the spanned time period are physically available and accessible. Always ensure that archived redo logs covering the flashback window are available on either tape or disk.
We cannot perform a flashback database operation if we have a media failure. In this case we must use the traditional database point-in-time media recovery method.
Always write down the current SCN or/and create a restore point (10g R2) before any significant change to our database: applying of patches, running of batch jobs that can corrupt the data, etc. As we know, the most common cause of downtime is change.
Always write down the current SCN or/and create a restore point (10g R2) before starting a flashback operation.
Flashback database is the only flashback operation that can be used to undo the result of a TRUNCATE command (FLASHBACK DROP, FLASHBACK TABLE, or FLASHBACK QUERY cannot be used for this).
Dropping of a tablespace cannot be reversed with Flashback Database. After such an operation, the flashback database window begins at the time immediately following that operation.
Shrinking a datafile cannot be reversed with Flashback Database. After such an operation, the flashback database window begins at the time immediately following that operation.
Resizing of a datafile cannot be reversed with Flashback Database. After such an operation, the flashback database window begins at the time immediately following that operation. If we need to perform a flashback operation into this time period, we must take the datafile offline before performing the flashback operation.
Recreating or restoring of a control file prevents using Flashback Database to go before that point in time.
We can flashback the database to a point in time before a RESETLOGS operation. This feature is available from 10g R2 because the flashback log files are not deleted after a RESETLOGS operation. We cannot do this in 10g R1 because old flashback logs are deleted immediately after a RESETLOGS operation.
Don't exclude the SYSTEM tablespace from flashback logging. Otherwise we will not be able to flashback the database.
The DB_FLASHBACK_RETENTION_TARGET parameter is a TARGET parameter. It doesn't guarantee the flashback database window. Our proper configuration of the Flashback Database should guarantee it.
Monitor regularly the size of the FRA and generated flashback logs to ensure that there is no space pressure and the flashback log data is within the desired flashback window.
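Since restore points come up twice in the list above, here is a minimal sketch of using one (the name before_patch is a hypothetical example; guaranteed restore points require 10g R2) :
SQL> create restore point before_patch guarantee flashback database;
-- ... the risky change happens here ...
SQL> shutdown immediate
SQL> startup mount
SQL> flashback database to restore point before_patch;
SQL> alter database open resetlogs;
SQL> drop restore point before_patch;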
Migrate ASM to NON-ASM in oracle 10g
Migrating a database back from ASM storage to non-ASM storage is similar to the original migration from non-ASM to ASM. Here are the steps to migrate from ASM to non-ASM.
Step 1 : Take an RMAN full backup of the database (not mandatory).
Step 2 : Start the database with ASM.
Step 3 : Create pfile from spfile
C:\>sqlplus sys/xxxx@noida as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Thu Jul 7 16:09:45 2011
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL> select name,open_mode from v$database;
NAME   OPEN_MODE
------ ----------
NOIDA  READ WRITE
SQL> sho parameter spfile;
NAME    TYPE    VALUE
------- ------- ------------------------------
spfile  string  +RATCAT/noida/spfilenoida.ora
SQL> create pfile='c:\qq.ora' from spfile;
File created.
Step 4 : Edit pfile parameters
Edit the pfile to point the controlfile names at file system locations and specify the file system locations of udump, cdump and bdump. In my case the changes are as follows :
control_files='D:\oracle\product\10.2.0\oradata\control01.ctl','D:\oracle\product\10.2.0\oradata\control02.ctl'
core_dump_dest='D:\oracle\product\10.2.0/admin/noida/cdump'
background_dump_dest='D:\oracle\product\10.2.0/admin/noida/bdump'
user_dump_dest='D:\oracle\product\10.2.0/admin/noida/udump'
Step 5 : Startup database at nomount
SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
C:\>sqlplus sys/xxxx@noida as sysdba
SQL*Plus: Release 10.2.0.1.0 - Production on Thu Jul 7 16:15:41 2011 Copyright (c) 1982, 2005, Oracle. All rights reserved. Connected to an idle instance. SQL> startup pfile='C:\qq.ora' nomount; ORACLE instance started. Total System Global Area 289406976 bytes Fixed Size 1248576 bytes Variable Size 88081088 bytes Database Buffers 192937984 bytes Redo Buffers 7139328 bytes Step 6 : Use RMAN to copy the control file from asm to non-asm SQL> host rman target sys/xxxx@noida Recovery Manager: Release 10.2.0.1.0 - Production on Thu Jul 7 16:23:53 2011 Copyright (c) 1982, 2005, Oracle. All rights reserved. connected to target database: noida (not mounted) RMAN> restore controlfile from '+RATCAT/noida/controlfile/current.260.755871509' ; Starting restore at 07-JUL-11 using target database control file instead of recovery catalog allocated channel: ORA_DISK_1 channel ORA_DISK_1: sid=157 devtype=DISK channel ORA_DISK_1: copied control file copy output filename=D:\ORACLE\PRODUCT\10.2.0\ORADATA\CONTROL01.CTL output filename=D:\ORACLE\PRODUCT\10.2.0\ORADATA\CONTROL02.CTL Finished restore at 07-JUL-11 Step 7 : Mount the Database RMAN> alter database mount ; database mounted released channel: ORA_DISK_1 Step 8 : Use RMAN to copy the database from ASM to NON-ASM. RMAN> backup as copy database format 'D:\oracle\product\10.2.0\oradata\%U' ; Starting backup at 07-JUL-11 allocated channel: ORA_DISK_1 channel ORA_DISK_1: sid=152 devtype=DISK channel ORA_DISK_1: starting datafile copy input datafile fno=00001 name=+RATCAT/noida/datafile/system.256.755871363 output filename=D:\ORACLE\PRODUCT\10.2.0\ORADATA\DATA_D-NOIDA_I1509813972_TS-SYSTEM_FNO-1_05MGRQAR tag=TAG20110707T162707 recid=2 stamp=755886461 channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:35 channel ORA_DISK_1: starting datafile copy input datafile fno=00003 name=+RATCAT/noida/datafile/sysaux.257.755871363
output filename=D:\ORACLE\PRODUCT\10.2.0\ORADATA\DATA_D-NOIDA_I1509813972_TS-SYSAUX_FNO-3_06MGRQBU tag=TAG20110707T162707 recid=3 stamp=755886480 channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25 channel ORA_DISK_1: starting datafile copy input datafile fno=00005 name=+RATCAT/noida/datafile/example.265.755871581 output filename=D:\ORACLE\PRODUCT\10.2.0\ORADATA\DATA_D-NOIDA_I1509813972_TS-EXAMPLE_FNO-5_07MGRQCN tag=TAG20110707T162707 recid=4 stamp=755886495 channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15 channel ORA_DISK_1: starting datafile copy input datafile fno=00002 name=+RATCAT/noida/datafile/undotbs1.258.755871365 output filename=D:\ORACLE\PRODUCT\10.2.0\ORADATA\DATA_D-NOIDA_I1509813972_TS-UNDOTBS1_FNO-2_08MGRQD7 tag=TAG20110707T162707 recid=5 stamp=755886506 channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03 channel ORA_DISK_1: starting datafile copy input datafile fno=00004 name=+RATCAT/noida/datafile/users.259.755871365 output filename=D:\ORACLE\PRODUCT\10.2.0\ORADATA\DATA_D-NOIDA_I1509813972_TS-USERS_FNO-4_09MGRQDA tag=TAG20110707T162707 recid=6 stamp=755886507 channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01 channel ORA_DISK_1: starting datafile copy copying current control file outputfilename=D:\ORACLE\PRODUCT\10.2.0\ORADATA\CF_D-NOIDA_ID1509813972_0AMGRQDB tag=TAG20110707T162707 recid=7 stamp=755886508 channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01 Finished backup at 07-JUL-11 Step 9 : Update the controlfile Switch Database to specify that a datafile copy is now the current datafile, that is, the datafile pointed to by the control file. RMAN> switch database to copy ; datafile 1 switched to datafile copy "D:\ORACLE\PRODUCT\10.2.0\ORADATA\DATA_D-NOIDA_I-1509813972_TS-SYSTEM_FNO1_05MGRQAR" datafile 2 switched to datafile copy "D:\ORACLE\PRODUCT\10.2.0\ORADATA\DATA_D-NOIDA_I-1509813972_TSUNDOTBS1_FNO-2_08MGRQD7" datafile 3 switched to datafile copy "D:\ORACLE\PRODUCT\10.2.0\ORADATA\DATA_D-NOIDA_I-1509813972_TS-SYSAUX_FNO3_06MGRQBU" datafile 4 switched to datafile copy
"D:\ORACLE\PRODUCT\10.2.0\ORADATA\DATA_D-NOIDA_I-1509813972_TS-USERS_FNO4_09MGRQDA" datafile 5 switched to datafile copy "D:\ORACLE\PRODUCT\10.2.0\ORADATA\DATA_D-NOIDA_I-1509813972_TSEXAMPLE_FNO-5_07MGRQCN" RMAN>exit SQL> select name,open_mode from v$database; NAME OPEN_MODE ------------------------NOIDA MOUNTED SQL> alter database open; Database altered. SQL> select name from v$controlfile; NAME -------------------------------------------------------------------------------D:\ORACLE\PRODUCT\10.2.0\ORADATA\CONTROL01.CTL D:\ORACLE\PRODUCT\10.2.0\ORADATA\CONTROL02.CTL Step 10 : Migrate redo-log file to file system location. SQL> select member from v$logfile; MEMBER -----------------------------------------------------+RATCAT/noida/onlinelog/group_3.263.755871523 +SATMAT/noida/onlinelog/group_3.259.755871525 +RATCAT/noida/onlinelog/group_2.262.755871519 +SATMAT/noida/onlinelog/group_2.258.755871521 +RATCAT/noida/onlinelog/group_1.261.755871513 +SATMAT/noida/onlinelog/group_1.257.755871515 6 rows selected. SQL> select group#,sequence#,members,archived,status from v$log; GROUP# SEQUENCE# MEMBERS ARC STATUS ---------- ---------- ---------- --- ---------------1 2 2 YES INACTIVE 2 3 2 YES INACTIVE 3 4 2 NO CURRENT SQL> alter database drop logfile group 1; Database altered. SQL> alter database add size 50M; Database altered.
logfile group 1 'D:\oracle\product\10.2.0\oradata\redo01.redo'
SQL> alter database drop logfile group 2;
109
ORACLE DATA BASE ADMINISTRATION
Database altered. SQL> alter database add logfile group 2 'D:\oracle\product\10.2.0\oradata\redo02.redo' size 50M; Database altered. SQL> select member from v$logfile; MEMBER -------------------+SATMAT/noida/onlinelog/group_3.259.755871525 D:\ORACLE\PRODUCT\10.2.0\ORADATA\REDO02.REDO D:\ORACLE\PRODUCT\10.2.0\ORADATA\REDO01.REDO SQL> select group#,sequence#,members,archived,status from v$log; GROUP# SEQUENCE# MEMBERS ARC STATUS ---------- ---------- ---------- --- ---------------1 5 1 NO CURRENT 2 0 1 YES UNUSED 3 4 2 YES ACTIVE SQL> alter system switch logfile ; System altered. SQL> alter database drop logfile group 3; ALTER DATABASE DROP LOGFILE GROUP 3 * ERROR at line 1: ORA-01624: log 3 needed for crash recovery of instance noida (thread 1) ORA-00312: online log 3 thread 1: '+RATCAT/noida/onlinelog/group_3.263.755871523' ORA-00312: online log 3 thread 1: '+SATMAT/noida/onlinelog/group_3.259.755871525' SQL> alter database clear unarchived logfile group 3; Database altered. SQL>alter database drop logfile group 3; Database altered. SQL> alter database add logfile group 3 size 50M; Database altered.
'D:\oracle\product\10.2.0\oradata\redo03.redo'
SQL> select member from v$logfile; MEMBER -------------------D:\ORACLE\PRODUCT\10.2.0\ORADATA\REDO03.REDO D:\ORACLE\PRODUCT\10.2.0\ORADATA\REDO02.REDO D:\ORACLE\PRODUCT\10.2.0\ORADATA\REDO01.REDO Step 11 : Recreate the tempfile SQL> select file_name,tablespace_name from dba_temp_files; FILE_NAME TABLESPACE_NAME -------------------------------------------------------
110
ORACLE DATA BASE ADMINISTRATION
+RATCAT/noida/tempfile/temp.264.755871559
TEMP
SQL> alter tablespace temp add tempfile 'D:\oracle\product\10.2.0\oradata\temp01.dbf' size 200m; Tablespace altered. SQL> alter tablespace temp '+RATCAT/noida/tempfile/temp.264.755871559'; Tablespace altered.
drop
tempfile
SQL> select file_name,tablespace_name from dba_temp_files; FILE_NAME TABLESPACE_NAME ------------------------------------D:\ORACLE\PRODUCT\10.2.0\ORADATA\TEMP01.DBF TEMP Step 12 : Recreate spfile SQL> sho parameter spfile NAME TYPE -----------------------spfile string
VALUE ------------------------------
SQL> create spfile from pfile='C:\qq.ora'; File created. SQL> shut immediate Database closed. Database dismounted. ORACLE instance shut down. SQL> startup ORACLE instance started. Total System Global Area 289406976 bytes Fixed Size 1248576 bytes Variable Size 88081088 bytes Database Buffers 192937984 bytes Redo Buffers 7139328 bytes Database mounted. Database opened. Step 13 : Check the database files SQL> sho parameter spfile NAME TYPE VALUE ----------------------------------------------------------------------------------------------spfile string D:\ORACLE\PRODUCT\10.2.0\DB_1\DATABASE\SPFILENOIDA.ORA SQL> select file_name from dba_data_files; FILE_NAME ----------------------
111
ORACLE DATA BASE ADMINISTRATION
D:\ORACLE\PRODUCT\10.2.0\ORADATA\DATA_D-NOIDA_I-1509813972_TS-USERS_FNO4_09MGRQDA D:\ORACLE\PRODUCT\10.2.0\ORADATA\DATA_D-NOIDA_I-1509813972_TS-SYSAUX_FNO3_06MGRQBU D:\ORACLE\PRODUCT\10.2.0\ORADATA\DATA_D-NOIDA_I-1509813972_TSUNDOTBS1_FNO-2_08MGRQD7 D:\ORACLE\PRODUCT\10.2.0\ORADATA\DATA_D-NOIDA_I-1509813972_TS-SYSTEM_FNO1_05MGRQAR D:\ORACLE\PRODUCT\10.2.0\ORADATA\DATA_D-NOIDA_I-1509813972_TS-EXAMPLE_FNO5_07MGRQCN SQL> select name from v$controlfile; NAME ------------------------D:\ORACLE\PRODUCT\10.2.0\ORADATA\CONTROL01.CTL D:\ORACLE\PRODUCT\10.2.0\ORADATA\CONTROL02.CTL SQL> select member from v$logfile; MEMBER ----------------------------D:\ORACLE\PRODUCT\10.2.0\ORADATA\REDO03.REDO D:\ORACLE\PRODUCT\10.2.0\ORADATA\REDO02.REDO D:\ORACLE\PRODUCT\10.2.0\ORADATA\REDO01.REDO
Migrating Databases from non-ASM to ASM and Vice-Versa by Jeff Hunter, Sr. Database Administrator
Contents 1. 2. 3. 4.
Overview Current Configuration Migrating Oracle Database from Local File System to ASM Migrating Oracle Database from ASM to Local File System
Overview Automatic Storage Management (ASM) was introduced in Oracle10g Release 1 and is used to alleviate the DBA from having to manage individual files and drives. ASM is built into the Oracle kernel and provides the DBA with a way to manage thousands of disk drives 24x7 for both single and clustered instances of Oracle. Essentially, ASM is a file system / volume manager for all Oracle physical database files (datafiles, online redo logs, controlfiles, archived redo logs, RMAN backupsets, and SPFILEs). All of the database files (and
112
ORACLE DATA BASE ADMINISTRATION
directories) to be used for Oracle will be contained in a disk group. ASM automatically performs load balancing in parallel across all available disk drives to prevent hot spots and maximize performance, even with rapidly changing data usage patterns. Configuring an ASM environment (an ASM instance) is a straightforward process and can be done through the Database Configuration Assistant (DBCA) or manually (see Manually Creating an ASM Instance). Once the ASM instance is configured on a node and an ASM Disk Group is created, any database that resides on that node can start taking advantage of it. For example, consider an ASM instance named +ASM with an ASM disk group named TESTDB_DATA1. Creating a tablespace where the datafile will reside in ASM is as easy as: SQL> CREATE TABLESPACE users2 DATAFILE '+TESTDB_DATA1' SIZE 100M; Tablespace created. SQL> SELECT tablespace_name, file_name FROM dba_data_files WHERE tablespace_name = 'USERS2'; TABLESPACE_NAME FILE_NAME --------------- -------------------------------------------------USERS2 +TESTDB_DATA1/testdb/datafile/users2.268.598475429 Given the SQL statement above, a new datafile will be created using Oracle Managed Files (OMF) in an ASM disk group named TESTDB_DATA1. But, what if you already have an existing Oracle database which stores its database files using the local file system on the node you just configured ASM on and now want to relocate the entire database to be stored in ASM? Well, as with most file management tasks that involve ASM, it's RMAN to the rescue! In this article, I will explain the steps necessary to migrate an existing Oracle database stored on the local file system to ASM. This will include all datafiles, tempfiles, online redo logfiles, controlfiles, and all flash recovery area files. I will then, within a follow-up section, explain how the process works in reverse - migrating a database stored in ASM to a local file system.
Current Configuration The testing environment that I will be using for this article is best described in the following illustration and table of values:
113
ORACLE DATA BASE ADMINISTRATION
Oracle ASM Configuration Machine Name:
linux3.idevelopment.info
Oracle SID:
TESTDB
Database Name:
TESTDB
Available ASM Disk Groups:
+TESTDB_DATA1 +TESTDB_DATA2 +FLASH_RECOVERY_AREA
Available File System for DB Files:
/u02/oradata
Available File System for Flash Recovery Area: /u02/flash_recovery_area Operating System:
Red Hat Linux 3 - (CentOS 3.4)
Oracle Release:
Oracle10g Release 2 - (10.2.0.2.0)
Please note that although I have two Oracle ASM disk groups defined for database files (+TESTDB_DATA1 and +TESTDB_DATA2), I will only be using+TESTDB_DATA1. This article assumes the database is open and in ARCHIVELOG mode: SQL> archive log list Database log mode Archive Mode Automatic archival Enabled Archive destination USE_DB_RECOVERY_FILE_DEST Oldest online log sequence 34 Next log sequence to archive 36 Current log sequence 36
114
ORACLE DATA BASE ADMINISTRATION
Migrating Oracle Database from Local File System to ASM The following query lists database files as they exist on the local file system for the TESTDB database. All of the files listed in this query will be relocated from the local file system to ASM: $ ORACLE_SID=TESTDB; export ORACLE_SID $ sqlplus "/ as sysdba" SQL> @dba_files_all Tablespace Name / File Class Filename File Size Auto Next Max --------------------- --------------------------------------------------------------- --------------- --- ----------- -------------APEX22 /u02/oradata/TESTDB/datafile/o1_mf_apex22_2ft4eswu_.dbf 104,857,600 NO 0 0 EXAMPLE /u02/oradata/TESTDB/datafile/o1_mf_example_2fb4ccw2_.dbf 157,286,400 YES 655,360 34,359,721,984 FLOW_1 /u02/oradata/TESTDB/datafile/o1_mf_flow_1_2fb4cegw_.dbf 52,494,336 NO 0 0 SYSAUX /u02/oradata/TESTDB/datafile/o1_mf_sysaux_2fb4cb7z_.dbf 419,430,400 YES 10,485,760 34,359,721,984 SYSTEM /u02/oradata/TESTDB/datafile/o1_mf_system_2fb4b8s2_.dbf 608,174,080 YES 10,485,760 34,359,721,984 TEMP /u02/oradata/TESTDB/datafile/o1_mf_temp_2g17lvcq_.tmp 536,870,912 YES 262,144,000 34,359,721,984 UNDOTBS1 /u02/oradata/TESTDB/datafile/o1_mf_undotbs1_2fb4c2wf_.dbf 209,715,200 YES 5,242,880 34,359,721,984 USERS /u02/oradata/TESTDB/datafile/o1_mf_users_2fb4cqf4_.dbf 2,382,888,960 YES 1,310,720 34,359,721,984 [ CONTROL FILE ] /u02/oradata/TESTDB/controlfile/o1_mf_8du3s3er_.ctl [ CONTROL FILE ] /u02/oradata/TESTDB/controlfile/o1_mf_y2is93je_.ctl [ ONLINE REDO LOG ] /u02/flash_recovery_area/TESTDB/onlinelog/o1_mf_1_2g1g6bq0_.log 262,144,000 [ ONLINE REDO LOG ] /u02/flash_recovery_area/TESTDB/onlinelog/o1_mf_2_2g1gdgn1_.log 262,144,000 [ ONLINE REDO LOG ] /u02/flash_recovery_area/TESTDB/onlinelog/o1_mf_3_2g1ghz8z_.log 262,144,000 [ ONLINE REDO LOG ] /u02/oradata/TESTDB/onlinelog/o1_mf_1_2g1g61bm_.log 262,144,000 [ ONLINE REDO LOG ] /u02/oradata/TESTDB/onlinelog/o1_mf_2_2g1gd4pr_.log 262,144,000 [ ONLINE REDO LOG ] /u02/oradata/TESTDB/onlinelog/o1_mf_3_2g1ghs0t_.log 262,144,000 --------------sum 6,044,581,888
115
ORACLE DATA BASE ADMINISTRATION
16 rows selected. Also note that the target database uses an SPFILE on the local file system: $ORACLE_HOME/dbs/spfileTESTDB.ora Use the following steps to fully migrate an existing Oracle database from a local file system to ASM: 1. With the target database open, edit the initialization parameter control_files and db_create_file_dest to point to the ASM disk group +TESTDB_DATA1. Also configuredb_recovery_file_dest to point to the ASM disk group +FLASH_RECOVERY_AREA: 2. SQL> ALTER SYSTEM SET control_files='+TESTDB_DATA1' SCOPE=spfile; 3. 4. System altered. 5. 6. SQL> ALTER SYSTEM SET db_create_file_dest='+TESTDB_DATA1' SCOPE=spfile; 7. 8. System altered. 9. 10. SQL> ALTER SYSTEM SET db_recovery_file_dest='+FLASH_RECOVERY_AREA' SCOPE=spfile; 11. System altered. 12. Startup the target database in NOMOUNT mode: 13. SQL> SHUTDOWN IMMEDIATE 14. Database closed. 15. Database dismounted. 16. ORACLE instance shut down. 17. 18. SQL> STARTUP NOMOUNT 19. ORACLE instance started. 20. 21. Total System Global Area 285212672 bytes 22. Fixed Size 1260420 bytes 23. Variable Size 171967612 bytes 24. Database Buffers 109051904 bytes Redo Buffers 2932736 bytes 25. From an RMAN session, copy one of your controlfiles from the local file system to its new location in ASM. The new controlfile will be copied to the value specified in the initialization parametercontrol_files: 26. RMAN> RESTORE CONTROLFILE FROM '/u02/oradata/TESTDB/controlfile/o1_mf_8du3s3er_.ctl'; 27. 28. Starting restore at 14-AUG-06 29. using channel ORA_DISK_1 30. 31. channel ORA_DISK_1: copied control file copy 32. output filename=+TESTDB_DATA1/testdb/controlfile/backup.268.598481391 Finished restore at 14-AUG-06
116
ORACLE DATA BASE ADMINISTRATION
33. From an RMAN or SQL*Plus session, mount the database. This will mount the database using the controlfile stored in ASM: 34. RMAN> ALTER DATABASE MOUNT; 35. 36. database mounted released channel: ORA_DISK_1 37. From an RMAN session, copy the database files from the local file system to ASM: 38. RMAN> BACKUP AS COPY DATABASE FORMAT '+TESTDB_DATA1'; 39. 40. Starting backup at 14-AUG-06 41. using target database control file instead of recovery catalog 42. allocated channel: ORA_DISK_1 43. channel ORA_DISK_1: sid=158 devtype=DISK 44. channel ORA_DISK_1: starting datafile copy 45. input datafile fno=00005 name=/u02/oradata/TESTDB/datafile/o1_mf_users_2fb4cqf4_.dbf 46. output filename=+TESTDB_DATA1/testdb/datafile/users.270.598481673 tag=TAG20060814T205432 recid=36 stamp=598482095 47. channel ORA_DISK_1: datafile copy complete, elapsed time: 00:07:06 48. channel ORA_DISK_1: starting datafile copy 49. input datafile fno=00001 name=/u02/oradata/TESTDB/datafile/o1_mf_system_2fb4b8s2_.dbf 50. output filename=+TESTDB_DATA1/testdb/datafile/system.269.598482099 tag=TAG20060814T205432 recid=37 stamp=598482206 51. channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:55 52. channel ORA_DISK_1: starting datafile copy 53. input datafile fno=00003 name=/u02/oradata/TESTDB/datafile/o1_mf_sysaux_2fb4cb7z_.dbf 54. output filename=+TESTDB_DATA1/testdb/datafile/sysaux.267.598482213 tag=TAG20060814T205432 recid=38 stamp=598482292 55. channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:25 56. channel ORA_DISK_1: starting datafile copy 57. input datafile fno=00002 name=/u02/oradata/TESTDB/datafile/o1_mf_undotbs1_2fb4c2wf_.dbf 58. output filename=+TESTDB_DATA1/testdb/datafile/undotbs1.256.598482299 tag=TAG20060814T205432 recid=39 stamp=598482340 59. channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:45 60. channel ORA_DISK_1: starting datafile copy 61. input datafile fno=00004 name=/u02/oradata/TESTDB/datafile/o1_mf_example_2fb4ccw2_.dbf 62. output filename=+TESTDB_DATA1/testdb/datafile/example.264.598482345 tag=TAG20060814T205432 recid=40 stamp=598482374 63. channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:35 64. channel ORA_DISK_1: starting datafile copy 65. input datafile fno=00006 name=/u02/oradata/TESTDB/datafile/o1_mf_apex22_2ft4eswu_.dbf 66. output filename=+TESTDB_DATA1/testdb/datafile/apex22.263.598482381 tag=TAG20060814T205432 recid=41 stamp=598482399 67. channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25 68. channel ORA_DISK_1: starting datafile copy
117
ORACLE DATA BASE ADMINISTRATION
69. input datafile fno=00007 name=/u02/oradata/TESTDB/datafile/o1_mf_flow_1_2fb4cegw_.dbf 70. output filename=+TESTDB_DATA1/testdb/datafile/flow_1.262.598482405 tag=TAG20060814T205432 recid=42 stamp=598482415 71. channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15 72. channel ORA_DISK_1: starting datafile copy 73. copying current control file 74. output filename=+TESTDB_DATA1/testdb/controlfile/backup.261.598482421 tag=TAG20060814T205432 recid=43 stamp=598482423 75. channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:03 76. channel ORA_DISK_1: starting full datafile backupset 77. channel ORA_DISK_1: specifying datafile(s) in backupset 78. including current SPFILE in backupset 79. channel ORA_DISK_1: starting piece 1 at 14-AUG-06 80. channel ORA_DISK_1: finished piece 1 at 14-AUG-06 81. piece handle=+TESTDB_DATA1/testdb/backupset/2006_08_14/nnsnf0_tag2006081 4t205432_0.260.598482425 tag=TAG20060814T205432 comment=NONE 82. channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02 Finished backup at 14-AUG-06 83. From an RMAN session, update the control file / data dictionary so that all database files point to the RMAN copy made in ASM: 84. RMAN> SWITCH DATABASE TO COPY; 85. 86. datafile 1 switched to datafile copy "+TESTDB_DATA1/testdb/datafile/system.269.598482099" 87. datafile 2 switched to datafile copy "+TESTDB_DATA1/testdb/datafile/undotbs1.256.598482299" 88. datafile 3 switched to datafile copy "+TESTDB_DATA1/testdb/datafile/sysaux.267.598482213" 89. datafile 4 switched to datafile copy "+TESTDB_DATA1/testdb/datafile/example.264.598482345" 90. datafile 5 switched to datafile copy "+TESTDB_DATA1/testdb/datafile/users.270.598481673" 91. datafile 6 switched to datafile copy "+TESTDB_DATA1/testdb/datafile/apex22.263.598482381" datafile 7 switched to datafile copy "+TESTDB_DATA1/testdb/datafile/flow_1.262.598482405" 92. From a SQL*Plus session, perform incomplete recovery and open the database using the RESETLOGS option: 93. SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL; 94. 95. ORA-00279: change 7937583 generated at 08/14/2006 20:33:55 needed for thread 1 96. ORA-00289: suggestion : +FLASH_RECOVERY_AREA 97. ORA-00280: change 7937583 for thread 1 is in sequence #36 98. 99. 100. Specify log: {=suggested | filename | AUTO | CANCEL} 101. CANCEL
118
ORACLE DATA BASE ADMINISTRATION
102. Media recovery cancelled. 103. 104. SQL> ALTER DATABASE OPEN RESETLOGS; 105. Database altered. 106. From a SQL*Plus session, re-create any tempfiles that are still currently on the local file system to ASM. This is done by simply dropping the tempfiles from the local file system and re-creating them in ASM. This example relies on the initialization parameter db_create_file_dest=+TESTDB_DATA1: 107. SQL> select tablespace_name, file_name, bytes from dba_temp_files; 108. 109. TABLESPACE_NAME FILE_NAME BYTES 110. --------------- ----------------------------------------------------- --------111. TEMP /u02/oradata/TESTDB/datafile/o1_mf_temp_2g17lvcq_.tmp 536870912 112. 113. SQL> alter database tempfile 114. 2 '/u02/oradata/TESTDB/datafile/o1_mf_temp_2g17lvcq_.tmp' 115. 3 drop including datafiles; 116. 117. Database altered. 118. 119. SQL> alter tablespace temp add tempfile size 512m 120. 2 autoextend on next 250m maxsize unlimited; 121. 122. Tablespace altered. 123. 124. SQL> select tablespace_name, file_name, bytes from dba_temp_files; 125. 126. TABLESPACE_NAME FILE_NAME BYTES 127. --------------- ------------------------------------------------ --------TEMP +TESTDB_DATA1/testdb/tempfile/temp.261.598485663 536870912 If users are currently accessing the tempfile(s) you are attempting to drop, you may receive the following error: SQL> alter database tempfile 2 '/u02/oradata/TESTDB/datafile/o1_mf_temp_2g17lvcq_.tmp' 3 drop including datafiles; ERROR at line 1: ORA-25152: TEMPFILE cannot be dropped at this time As for the poor users who were using the tempfile, their transaction will end and will be greeted with the following error message: SQL> @testTemp.sql join dba_extents c on (b.segment_name = c.segment_name) * ERROR at line 4:
119
ORACLE DATA BASE ADMINISTRATION
ORA-00372: file 601 cannot be modified at this time ORA-01110: data file 601: '/u02/oradata/TESTDB/datafile/o1_mf_temp_2g17lvcq_.tmp' ORA-00372: file 601 cannot be modified at this time ORA-01110: data file 601: '/u02/oradata/TESTDB/datafile/o1_mf_temp_2g17lvcq_.tmp' If this happens, you should attempt to drop the tempfile again so the operation is successful: SQL> alter database tempfile 2 '/u02/oradata/TESTDB/datafile/o1_mf_temp_2g17lvcq_.tmp' 3 drop including datafiles; Database altered. 128. From a SQL*Plus session, re-create any online redo logfiles that are still currently on the local file system to ASM. This is done by simply dropping the logfiles from the local file system and re-creating them in ASM. This example relies on the initialization parameters db_create_file_dest=+TESTDB_DATA1 and db_recovery_file_dest =+FLASH_RECOVERY_AREA: o Determine the current online redo logfiles to move to ASM by examining the file names (and sizes) from V$LOGFILE: o SQL> select a.group#, a.member, b.bytes o 2 from v$logfile a, v$log b where a.group# = b.group#; o o GROUP# MEMBER BYTES o ------ --------------------------------------------------------------- -------o 1 /u02/oradata/TESTDB/onlinelog/o1_mf_1_2g1g61bm_.log 262144000 o 2 /u02/oradata/TESTDB/onlinelog/o1_mf_2_2g1gd4pr_.log 262144000 o 3 /u02/oradata/TESTDB/onlinelog/o1_mf_3_2g1ghs0t_.log 262144000 o 1 /u02/flash_recovery_area/TESTDB/onlinelog/o1_mf_1_2g1g6bq0_.log 262144000 o 2 /u02/flash_recovery_area/TESTDB/onlinelog/o1_mf_2_2g1gdgn1_.log 262144000 o 3 /u02/flash_recovery_area/TESTDB/onlinelog/o1_mf_3_2g1ghz8z_.log 262144000 o 6 rows selected. o o o o o
Force a log switch until the last redo log is marked "CURRENT" by issuing the following command: SQL> select group#, status from v$log; GROUP# STATUS ---------- ----------------
120
ORACLE DATA BASE ADMINISTRATION
o o o o o o o o o o o o o o o o o o
1 CURRENT 2 INACTIVE 3 INACTIVE SQL> alter system switch logfile; SQL> alter system switch logfile; SQL> select group#, status from v$log; GROUP# STATUS ---------- ---------------1 INACTIVE 2 INACTIVE 3 CURRENT After making the last online redo log file the CURRENT one, drop the first online redo log: SQL> alter database drop logfile group 1; Database altered. As a DBA, you should already be aware that if you are going to drop a logfile group, it cannot be the current logfile group. I have run into instances; however, where attempting to drop the logfile group resulted in the following error as a result of the logfile group having an activestatus: SQL> ALTER DATABASE DROP LOGFILE GROUP 1; ALTER DATABASE DROP LOGFILE GROUP 1 * ERROR at line 1: ORA-01624: log 1 needed for crash recovery of instance TESTDB (thread 1) ORA-00312: online log 1 thread 1: '' Easy problem to resolve. Simply perform a checkpoint on the database: SQL> ALTER SYSTEM CHECKPOINT GLOBAL; System altered. SQL> ALTER DATABASE DROP LOGFILE GROUP 1; Database altered.
o o o
Re-create the dropped redo log group in ASM (and a different size if desired): SQL> alter database add logfile group 1 size 250m; Database altered.
o o o
After re-creating the online redo log group, force a log switch. The online redo log group just created should become the CURRENT one: SQL> select group#, status from v$log;
121
ORACLE DATA BASE ADMINISTRATION
o o o o o o o o o o o o o o
GROUP# STATUS ---------- ---------------1 UNUSED 2 INACTIVE 3 CURRENT
o
After re-creating the first online redo log group, loop back to drop / recreate the next online redo logfile until all logs are rebuilt in ASM. Verify all online redo logfiles have been created in ASM: SQL> select a.group#, a.member, b.bytes 2 from v$logfile a, v$log b where a.group# = b.group#;
o o o o o o o o o o
o
o
SQL> alter system switch logfile; SQL> select group#, status from v$log; GROUP# STATUS ---------- ---------------1 CURRENT 2 INACTIVE 3 ACTIVE
GROUP# MEMBER BYTES ------ ----------------------------------------------------------- --------1 +TESTDB_DATA1/testdb/onlinelog/group_1.259.598486831 262144000 2 +TESTDB_DATA1/testdb/onlinelog/group_2.260.598487179 262144000 3 +TESTDB_DATA1/testdb/onlinelog/group_3.258.598487365 262144000 1 +FLASH_RECOVERY_AREA/testdb/onlinelog/group_1.259.598486879 262144000 2 +FLASH_RECOVERY_AREA/testdb/onlinelog/group_2.257.598487225 262144000 3 +FLASH_RECOVERY_AREA/testdb/onlinelog/group_3.260.598487411 262144000
o 6 rows selected. 129. Perform the following steps to relocate the SPFILE from the local file system to an ASM disk group. o Create a text-based initialization parameter file from the current binary SPFILE located on the local file system: o SQL> CREATE PFILE='$ORACLE_HOME/dbs/initTESTDB.ora' o 2 FROM SPFILE='$ORACLE_HOME/dbs/spfileTESTDB.ora'; o File created. o
Create new SPFILE in an ASM disk group:
122
ORACLE DATA BASE ADMINISTRATION
o o o
SQL> CREATE SPFILE='+TESTDB_DATA1/TESTDB/spfileTESTDB.ora' 2 FROM PFILE='$ORACLE_HOME/dbs/initTESTDB.ora'; File created.
o o o o
Shutdown the Oracle database: SQL> SHUTDOWN IMMEDIATE Database closed. Database dismounted. ORACLE instance shut down.
o
Update the text-based init.ora file with the new location of the SPFILE in ASM: $ echo "SPFILE='+TESTDB_DATA1/TESTDB/spfileTESTDB.ora'" > $ORACLE_HOME/dbs/initTESTDB.ora
o
Remove (actually rename) the old SPFILE on the local file system so that the new text-based init.ora will be used: $ mv $ORACLE_HOME/dbs/spfileTESTDB.ora $ORACLE_HOME/dbs/BACKUP_ASM.spfileTESTDB.ora
o
Open the Oracle database using the new SPFILE: SQL> STARTUP
130. Verify that all database files have been created in ASM: 131. $ sqlplus "/ as sysdba" 132. 133. SQL> @dba_files_all 134. 135. Tablespace Name / 136. File Class Filename File Size Auto Next Max 137. --------------------- ----------------------------------------------------------- -------------- ---- ----------- --------------138. APEX22 +TESTDB_DATA1/testdb/datafile/apex22.263.598482381 104,857,600 NO 0 0 139. EXAMPLE +TESTDB_DATA1/testdb/datafile/example.264.598482345 157,286,400 YES 655,360 34,359,721,984 140. FLOW_1 +TESTDB_DATA1/testdb/datafile/flow_1.262.598482405 52,494,336 NO 0 0 141. SYSAUX +TESTDB_DATA1/testdb/datafile/sysaux.267.598482213 419,430,400 YES 10,485,760 34,359,721,984
123
ORACLE DATA BASE ADMINISTRATION
142. SYSTEM +TESTDB_DATA1/testdb/datafile/system.269.598482099 608,174,080 YES 10,485,760 34,359,721,984 143. TEMP +TESTDB_DATA1/testdb/tempfile/temp.261.598485663 536,870,912 YES 262,144,000 34,359,721,984 144. UNDOTBS1 +TESTDB_DATA1/testdb/datafile/undotbs1.256.598482299 209,715,200 YES 5,242,880 34,359,721,984 145. USERS +TESTDB_DATA1/testdb/datafile/users.270.598481673 2,382,888,960 YES 1,310,720 34,359,721,984 146. [ CONTROL FILE ] +TESTDB_DATA1/testdb/controlfile/backup.268.598481391 147. [ ONLINE REDO LOG ] +FLASH_RECOVERY_AREA/testdb/onlinelog/group_1.259.598486879 262,144,000 148. [ ONLINE REDO LOG ] +FLASH_RECOVERY_AREA/testdb/onlinelog/group_2.257.598487225 262,144,000 149. [ ONLINE REDO LOG ] +FLASH_RECOVERY_AREA/testdb/onlinelog/group_3.260.598487411 262,144,000 150. [ ONLINE REDO LOG ] +TESTDB_DATA1/testdb/onlinelog/group_1.259.598486831 262,144,000 151. [ ONLINE REDO LOG ] +TESTDB_DATA1/testdb/onlinelog/group_2.260.598487179 262,144,000 152. [ ONLINE REDO LOG ] +TESTDB_DATA1/testdb/onlinelog/group_3.258.598487365 262,144,000 153. --------------154. sum 6,044,581,888 155. 15 rows selected. 156. At this point, the target database is open with all of its datafiles, controlfiles, online redo logfiles, tempfiles, and SPFILE stored in ASM. If we wanted to remove the database files that were stored on the local file system (which are actually now RMAN copies), this could be done from an RMAN session. You could also then remove the old version of the controfile(s) that were stored on the local file system: If this is a production database, it would be best practice to first backup the database files on the local disk before removing them! 157. 158. 159. 160. 161. 162.
RMAN> DELETE NOPROMPT FORCE COPY; allocated channel: ORA_DISK_1 channel ORA_DISK_1: sid=139 devtype=DISK List of Datafile Copies
124
ORACLE DATA BASE ADMINISTRATION
163. Key File S Completion Time Ckp SCN Ckp Time Name 164. ------- ---- - --------------- ---------- --------------- ---165. 44 1 A 14-AUG-06 7937583 14-AUG-06 /u02/oradata/TESTDB/datafile/o1_mf_system_2fb4b8s2_.dbf 166. 45 2 A 14-AUG-06 7937583 14-AUG-06 /u02/oradata/TESTDB/datafile/o1_mf_undotbs1_2fb4c2wf_.dbf 167. 46 3 A 14-AUG-06 7937583 14-AUG-06 /u02/oradata/TESTDB/datafile/o1_mf_sysaux_2fb4cb7z_.dbf 168. 47 4 A 14-AUG-06 7937583 14-AUG-06 /u02/oradata/TESTDB/datafile/o1_mf_example_2fb4ccw2_.dbf 169. 48 5 A 14-AUG-06 7937583 14-AUG-06 /u02/oradata/TESTDB/datafile/o1_mf_users_2fb4cqf4_.dbf 170. 49 6 A 14-AUG-06 7937583 14-AUG-06 /u02/oradata/TESTDB/datafile/o1_mf_apex22_2ft4eswu_.dbf 171. 50 7 A 14-AUG-06 7937583 14-AUG-06 /u02/oradata/TESTDB/datafile/o1_mf_flow_1_2fb4cegw_.dbf 172. 173. List of Control File Copies 174. Key S Completion Time Ckp SCN Ckp Time Name 175. ------- - --------------- ---------- --------------- ---176. 43 A 14-AUG-06 7937583 14-AUG-06 +TESTDB_DATA1/testdb/controlfile/backup.261.598482421 177. 178. List of Archived Log Copies 179. Key Thrd Seq S Low Time Name 180. ------- ---- ------- - --------- ---181. 48 1 34 A 14-AUG-06 +FLASH_RECOVERY_AREA/testdb/archivelog/2006_08_14/thread_1_seq_34. 259.598482825 182. 49 1 35 A 14-AUG-06 +FLASH_RECOVERY_AREA/testdb/archivelog/2006_08_14/thread_1_seq_35. 258.598482825 183. 47 1 36 A 14-AUG-06 +FLASH_RECOVERY_AREA/testdb/archivelog/2006_08_14/thread_1_seq_36. 260.598482821 184. deleted datafile copy 185. datafile copy filename=/u02/oradata/TESTDB/datafile/o1_mf_system_2fb4b8s2_.dbf recid=44 stamp=598482495 186. deleted datafile copy 187. datafile copy filename=/u02/oradata/TESTDB/datafile/o1_mf_undotbs1_2fb4c2wf_.dbf recid=45 stamp=598482495 188. deleted datafile copy 189. datafile copy filename=/u02/oradata/TESTDB/datafile/o1_mf_sysaux_2fb4cb7z_.dbf recid=46 stamp=598482496 190. deleted datafile copy 191. datafile copy filename=/u02/oradata/TESTDB/datafile/o1_mf_example_2fb4ccw2_.dbf recid=47 stamp=598482496 192. deleted datafile copy
125
ORACLE DATA BASE ADMINISTRATION
193. datafile copy filename=/u02/oradata/TESTDB/datafile/o1_mf_users_2fb4cqf4_.dbf recid=48 stamp=598482496 194. deleted datafile copy 195. datafile copy filename=/u02/oradata/TESTDB/datafile/o1_mf_apex22_2ft4eswu_.dbf recid=49 stamp=598482496 196. deleted datafile copy 197. datafile copy filename=/u02/oradata/TESTDB/datafile/o1_mf_flow_1_2fb4cegw_.dbf recid=50 stamp=598482496 198. deleted control file copy 199. control file copy filename=+TESTDB_DATA1/testdb/controlfile/backup.261.598482421 recid=43 stamp=598482423 200. deleted archive log 201. archive log filename=+FLASH_RECOVERY_AREA/testdb/archivelog/2006_08_14/thread_ 1_seq_34.259.598482825 recid=48 stamp=598482824 202. deleted archive log 203. archive log filename=+FLASH_RECOVERY_AREA/testdb/archivelog/2006_08_14/thread_ 1_seq_35.258.598482825 recid=49 stamp=598482824 204. deleted archive log 205. archive log filename=+FLASH_RECOVERY_AREA/testdb/archivelog/2006_08_14/thread_ 1_seq_36.260.598482821 recid=47 stamp=598482824 206. Deleted 11 objects 207. 208. RMAN> exit 209. 210. $ rm /u02/oradata/TESTDB/controlfile/o1_mf_8du3s3er_.ctl 211. $ rm /u02/oradata/TESTDB/controlfile/o1_mf_y2is93je_.ctl
Migrating Oracle Database from ASM to Local File System The following query lists database files as they exist in ASM for the TESTDB database. All of the files listed in this query will be relocated from ASM to the local file system: $ ORACLE_SID=TESTDB; export ORACLE_SID $ sqlplus "/ as sysdba" SQL> @dba_files_all Tablespace Name / File Class Filename File Size Auto Next Max --------------------- ----------------------------------------------------------- --------------- -------------- ---------------
126
ORACLE DATA BASE ADMINISTRATION
APEX22 +TESTDB_DATA1/testdb/datafile/apex22.263.598482381 104,857,600 NO 0 0 EXAMPLE +TESTDB_DATA1/testdb/datafile/example.264.598482345 157,286,400 YES 655,360 34,359,721,984 FLOW_1 +TESTDB_DATA1/testdb/datafile/flow_1.262.598482405 52,494,336 NO 0 0 SYSAUX +TESTDB_DATA1/testdb/datafile/sysaux.267.598482213 419,430,400 YES 10,485,760 34,359,721,984 SYSTEM +TESTDB_DATA1/testdb/datafile/system.269.598482099 608,174,080 YES 10,485,760 34,359,721,984 TEMP +TESTDB_DATA1/testdb/tempfile/temp.261.598485663 536,870,912 YES 262,144,000 34,359,721,984 UNDOTBS1 +TESTDB_DATA1/testdb/datafile/undotbs1.256.598482299 209,715,200 YES 5,242,880 34,359,721,984 USERS +TESTDB_DATA1/testdb/datafile/users.270.598481673 2,382,888,960 YES 1,310,720 34,359,721,984 [ CONTROL FILE ] +TESTDB_DATA1/testdb/controlfile/backup.268.598481391 [ ONLINE REDO LOG ] +FLASH_RECOVERY_AREA/testdb/onlinelog/group_1.259.598486879 262,144,000 [ ONLINE REDO LOG ] +FLASH_RECOVERY_AREA/testdb/onlinelog/group_2.257.598487225 262,144,000 [ ONLINE REDO LOG ] +FLASH_RECOVERY_AREA/testdb/onlinelog/group_3.260.598487411 262,144,000 [ ONLINE REDO LOG ] +TESTDB_DATA1/testdb/onlinelog/group_1.259.598486831 262,144,000 [ ONLINE REDO LOG ] +TESTDB_DATA1/testdb/onlinelog/group_2.260.598487179 262,144,000 [ ONLINE REDO LOG ] +TESTDB_DATA1/testdb/onlinelog/group_3.258.598487365 262,144,000 --------------sum 6,044,581,888 15 rows selected. Also note that the target database uses an SPFILE which is stored in ASM. The target database starts using a text-based init.ora ($ORACLE_HOME/dbs/initTESTDB.ora) which defines the location of the SPFILE in ASM: SPFILE='+TESTDB_DATA1/TESTDB/spfileTESTDB.ora' Use the following steps to fully migrate an existing Oracle database from ASM to a local file system: 1. With the target database open, edit the initialization parameter control_files and db_create_file_dest to point to locations on the local file system (/u02/oradata). Also configuredb_recovery_file_dest to point to the Flash Recovery Area on the local file system (/u02/flash_recovery_area): 2. SQL> ALTER SYSTEM SET control_files='/u02/oradata/TESTDB/control01.ctl' SCOPE=spfile; 3. 4. System altered. 5. 6. SQL> ALTER SYSTEM SET db_create_file_dest='/u02/oradata' SCOPE=spfile; 7.
127
ORACLE DATA BASE ADMINISTRATION
8. System altered. 9. 10. SQL> ALTER SYSTEM SET db_recovery_file_dest='/u02/flash_recovery_area' SCOPE=spfile; 11. System altered. 12. Backup the current SPFILE (in ASM) to a text-based init.ora file on the local file system. Then convert the text-based init.ora file to a binary SPFILE on the local file system: 13. SQL> HOST mv $ORACLE_HOME/dbs/initTESTDB.ora $ORACLE_HOME/dbs/BACKUP_ASM.initTESTDB.ora 14. 15. SQL> CREATE PFILE='$ORACLE_HOME/dbs/initTESTDB.ora' FROM SPFILE='+TESTDB_DATA1/TESTDB/spfileTESTDB.ora'; 16. 17. File created. 18. 19. SQL> CREATE SPFILE='$ORACLE_HOME/dbs/spfileTESTDB.ora' FROM PFILE='$ORACLE_HOME/dbs/initTESTDB.ora'; 20. File created. 21. Startup the target database in NOMOUNT mode: 22. SQL> shutdown immediate 23. Database closed. 24. Database dismounted. 25. ORACLE instance shut down. 26. 27. SQL> startup nomount 28. ORACLE instance started. 29. 30. Total System Global Area 285212672 bytes 31. Fixed Size 1260420 bytes 32. Variable Size 150996092 bytes 33. Database Buffers 130023424 bytes Redo Buffers 2932736 bytes 34. From an RMAN session, copy one of your controlfiles from ASM to its new location on the local file system. The new controlfile will be copied to the value specified in the initialization parametercontrol_files: 35. RMAN> RESTORE CONTROLFILE FROM '+TESTDB_DATA1/TESTDB/CONTROLFILE/backup.268.598481391'; 36. 37. Starting restore at 15-AUG-06 38. using target database control file instead of recovery catalog 39. allocated channel: ORA_DISK_1 40. channel ORA_DISK_1: sid=156 devtype=DISK 41. 42. channel ORA_DISK_1: copied control file copy 43. output filename=/u02/oradata/TESTDB/control01.ctl Finished restore at 15-AUG-06
128
ORACLE DATA BASE ADMINISTRATION
44. From an RMAN or SQL*Plus session, mount the database. This will mount the database using the controlfile stored on the local file system: 45. RMAN> ALTER DATABASE MOUNT; 46. 47. using target database control file instead of recovery catalog database mounted 48. From an RMAN session, copy the database files from ASM to the local file system: 49. RMAN> BACKUP AS COPY DATABASE FORMAT '/u02/oradata/TESTDB/%U'; 50. 51. Starting backup at 15-AUG-06 52. allocated channel: ORA_DISK_1 53. channel ORA_DISK_1: sid=156 devtype=DISK 54. channel ORA_DISK_1: starting datafile copy 55. input datafile fno=00005 name=+TESTDB_DATA1/testdb/datafile/users.270.598481673 56. output filename=/u02/oradata/TESTDB/data_D-TESTDB_I-2370649665_TSUSERS_FNO-5_0vhqpltd tag=TAG20060815T101925 recid=51 stamp=598530181 57. channel ORA_DISK_1: datafile copy complete, elapsed time: 00:03:36 58. channel ORA_DISK_1: starting datafile copy 59. input datafile fno=00001 name=+TESTDB_DATA1/testdb/datafile/system.269.598482099 60. output filename=/u02/oradata/TESTDB/data_D-TESTDB_I-2370649665_TSSYSTEM_FNO-1_10hqpm45 tag=TAG20060815T101925 recid=52 stamp=598530235 61. channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:55 62. channel ORA_DISK_1: starting datafile copy 63. input datafile fno=00003 name=+TESTDB_DATA1/testdb/datafile/sysaux.267.598482213 64. output filename=/u02/oradata/TESTDB/data_D-TESTDB_I-2370649665_TSSYSAUX_FNO-3_11hqpm5s tag=TAG20060815T101925 recid=53 stamp=598530274 65. channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:46 66. channel ORA_DISK_1: starting datafile copy 67. input datafile fno=00002 name=+TESTDB_DATA1/testdb/datafile/undotbs1.256.598482299 68. output filename=/u02/oradata/TESTDB/data_D-TESTDB_I-2370649665_TSUNDOTBS1_FNO-2_12hqpm7a tag=TAG20060815T101925 recid=54 stamp=598530304 69. channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25 70. channel ORA_DISK_1: starting datafile copy 71. input datafile fno=00004 name=+TESTDB_DATA1/testdb/datafile/example.264.598482345 72. output filename=/u02/oradata/TESTDB/data_D-TESTDB_I-2370649665_TSEXAMPLE_FNO-4_13hqpm83 tag=TAG20060815T101925 recid=55 stamp=598530323 73. channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25 74. channel ORA_DISK_1: starting datafile copy 75. input datafile fno=00006 name=+TESTDB_DATA1/testdb/datafile/apex22.263.598482381
129
ORACLE DATA BASE ADMINISTRATION
76. output filename=/u02/oradata/TESTDB/data_D-TESTDB_I-2370649665_TSAPEX22_FNO-6_14hqpm8s tag=TAG20060815T101925 recid=56 stamp=598530343 77. channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:16 78. channel ORA_DISK_1: starting datafile copy 79. input datafile fno=00007 name=+TESTDB_DATA1/testdb/datafile/flow_1.262.598482405 80. output filename=/u02/oradata/TESTDB/data_D-TESTDB_I-2370649665_TSFLOW_1_FNO-7_15hqpm9c tag=TAG20060815T101925 recid=57 stamp=598530353 81. channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07 82. channel ORA_DISK_1: starting datafile copy 83. copying current control file 84. output filename=/u02/oradata/TESTDB/cf_D-TESTDB_id2370649665_16hqpm9j tag=TAG20060815T101925 recid=58 stamp=598530356 85. channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01 86. channel ORA_DISK_1: starting full datafile backupset 87. channel ORA_DISK_1: specifying datafile(s) in backupset 88. including current SPFILE in backupset 89. channel ORA_DISK_1: starting piece 1 at 15-AUG-06 90. channel ORA_DISK_1: finished piece 1 at 15-AUG-06 91. piece handle=/u02/oradata/TESTDB/17hqpm9k_1_1 tag=TAG20060815T101925 comment=NONE 92. channel ORA_DISK_1: backup set complete, elapsed time: 00:00:02 Finished backup at 15-AUG-06 93. From an RMAN session, update the control file / data dictionary so that all database files point to the RMAN copy made on the local file system: 94. RMAN> SWITCH DATABASE TO COPY; 95. 96. datafile 1 switched to datafile copy "/u02/oradata/TESTDB/data_D-TESTDB_I2370649665_TS-SYSTEM_FNO-1_10hqpm45" 97. datafile 2 switched to datafile copy "/u02/oradata/TESTDB/data_D-TESTDB_I2370649665_TS-UNDOTBS1_FNO-2_12hqpm7a" 98. datafile 3 switched to datafile copy "/u02/oradata/TESTDB/data_D-TESTDB_I2370649665_TS-SYSAUX_FNO-3_11hqpm5s" 99. datafile 4 switched to datafile copy "/u02/oradata/TESTDB/data_D-TESTDB_I2370649665_TS-EXAMPLE_FNO-4_13hqpm83" 100. datafile 5 switched to datafile copy "/u02/oradata/TESTDB/data_DTESTDB_I-2370649665_TS-USERS_FNO-5_0vhqpltd" 101. datafile 6 switched to datafile copy "/u02/oradata/TESTDB/data_DTESTDB_I-2370649665_TS-APEX22_FNO-6_14hqpm8s" datafile 7 switched to datafile copy "/u02/oradata/TESTDB/data_D-TESTDB_I2370649665_TS-FLOW_1_FNO-7_15hqpm9c" 102. From a SQL*Plus session, perform incomplete recovery and open the database using the RESETLOGS option: 103. SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL; 104. 105. ORA-00279: change 7970332 generated at 08/15/2006 10:12:34 needed for thread 1
130
ORACLE DATA BASE ADMINISTRATION
106. ORA-00289: suggestion : 107. /u02/flash_recovery_area/TESTDB/archivelog/2006_08_15/o1_mf_1_5_ %u_.arc 108. ORA-00280: change 7970332 for thread 1 is in sequence #5 109. 110. 111. Specify log: {=suggested | filename | AUTO | CANCEL} 112. CANCEL 113. Media recovery cancelled. 114. 115. SQL> ALTER DATABASE OPEN RESETLOGS; 116. Database altered. 117. From a SQL*Plus session, re-create any tempfiles that are still currently using ASM to the local file system. This is done by simply dropping the tempfiles from ASM and re-creating them in the local file system. This example relies on the initialization parameter db_create_file_dest=/u02/oradata: 118. SQL> select tablespace_name, file_name, bytes from dba_temp_files; 119. 120. TABLESPACE_NAME FILE_NAME BYTES 121. --------------- ----------------------------------------------------- --------122. TEMP +TESTDB_DATA1/testdb/tempfile/temp.261.598485663 536870912 123. 124. SQL> alter database tempfile 125. 2 '+TESTDB_DATA1/testdb/tempfile/temp.261.598485663' 126. 3 drop including datafiles; 127. 128. Database altered. 129. 130. SQL> alter tablespace temp add tempfile size 512m 131. 2 autoextend on next 250m maxsize unlimited; 132. 133. Tablespace altered. 134. 135. SQL> select tablespace_name, file_name, bytes from dba_temp_files; 136. 137. TABLESPACE_NAME FILE_NAME BYTES 138. --------------- ----------------------------------------------------- --------TEMP /u02/oradata/TESTDB/datafile/o1_mf_temp_2g3spvq5_.tmp 536870912 If users are currently accessing the tempfile(s) you are attempting to drop, you may receive the following error: SQL> alter database tempfile 2 '+TESTDB_DATA1/testdb/tempfile/temp.261.598485663' 3 drop including datafiles;
131
ORACLE DATA BASE ADMINISTRATION
ERROR at line 1: ORA-25152: TEMPFILE cannot be dropped at this time As for the poor users who were using the tempfile, their transaction will end and will be greeted with the following error message: SQL> @testTemp.sql join dba_extents c on (b.segment_name = c.segment_name) * ERROR at line 4: ORA-00372: file 601 cannot be modified at this time ORA-01110: data file 601: '+TESTDB_DATA1/testdb/tempfile/temp.261.598485663' ORA-00372: file 601 cannot be modified at this time ORA-01110: data file 601: '+TESTDB_DATA1/testdb/tempfile/temp.261.598485663' If this happens, you should attempt to drop the tempfile again so the operation is successful: SQL> alter database tempfile 2 '+TESTDB_DATA1/testdb/tempfile/temp.261.598485663' 3 drop including datafiles; Database altered. 139. From a SQL*Plus session, re-create any online redo logfiles that are still currently using ASM to the local file system. This is done by simply dropping the logfiles from ASM and re-creating them on the local file system. This example relies on the initialization parameters db_create_file_dest=/u02/oradata and db_recovery_file_dest=/u 02/flash_recovery_area: o Determine the current online redo logfiles to move to the local file system by examining the file names (and sizes) from V$LOGFILE: o SQL> select a.group#, a.member, b.bytes o 2 from v$logfile a, v$log b where a.group# = b.group#; o o GROUP# MEMBER BYTES o ------ ----------------------------------------------------------- ---------o 1 +TESTDB_DATA1/testdb/onlinelog/group_1.259.598486831 262144000 o 2 +TESTDB_DATA1/testdb/onlinelog/group_2.260.598487179 262144000 o 3 +TESTDB_DATA1/testdb/onlinelog/group_3.258.598487365 262144000 o 1 +FLASH_RECOVERY_AREA/testdb/onlinelog/group_1.259.598486879 262144000 o 2 +FLASH_RECOVERY_AREA/testdb/onlinelog/group_2.257.598487225 262144000 o 3 +FLASH_RECOVERY_AREA/testdb/onlinelog/group_3.260.598487411 262144000 o 6 rows selected.
132
ORACLE DATA BASE ADMINISTRATION
o o o o o o o o o o o o o o o o o o o o o o o
Force a log switch until the last redo log is marked "CURRENT" by issuing the following command: SQL> select group#, status from v$log; GROUP# STATUS ---------- ---------------1 CURRENT 2 INACTIVE 3 INACTIVE SQL> alter system switch logfile; SQL> alter system switch logfile; SQL> select group#, status from v$log; GROUP# STATUS ---------- ---------------1 INACTIVE 2 INACTIVE 3 CURRENT After making the last online redo log file the CURRENT one, drop the first online redo log: SQL> alter database drop logfile group 1; Database altered. As a DBA, you should already be aware that if you are going to drop a logfile group, it cannot be the current logfile group. I have run into instances; however, where attempting to drop the logfile group resulted in the following error as a result of the logfile group having an activestatus: SQL> ALTER DATABASE DROP LOGFILE GROUP 1; ALTER DATABASE DROP LOGFILE GROUP 1 * ERROR at line 1: ORA-01624: log 1 needed for crash recovery of instance TESTDB (thread 1) ORA-00312: online log 1 thread 1: '' Easy problem to resolve. Simply perform a checkpoint on the database: SQL> ALTER SYSTEM CHECKPOINT GLOBAL; System altered. SQL> ALTER DATABASE DROP LOGFILE GROUP 1; Database altered.
o o o
Re-create the dropped redo log group in the local file system (and a different size if desired): SQL> alter database add logfile group 1 size 250m;
133
ORACLE DATA BASE ADMINISTRATION
Database altered. o o o o o o o o o o o o o o o o o
o
o o o o o o o o o o
o
o
After re-creating the online redo log group, force a log switch. The online redo log group just created should become the CURRENT one: SQL> select group#, status from v$log; GROUP# STATUS ---------- ---------------1 UNUSED 2 INACTIVE 3 CURRENT SQL> alter system switch logfile; SQL> select group#, status from v$log; GROUP# STATUS ---------- ---------------1 CURRENT 2 INACTIVE 3 ACTIVE After re-creating the first online redo log group, loop back to drop / recreate the next online redo logfile until all logs are rebuilt in the local file system. Verify all online redo logfiles have been created in the local file system: SQL> select a.group#, a.member, b.bytes 2 from v$logfile a, v$log b where a.group# = b.group#; GROUP# MEMBER BYTES ------ --------------------------------------------------------------------- --------1 /u02/oradata/TESTDB/onlinelog/o1_mf_1_2g3tc008_.log 262144000 2 /u02/oradata/TESTDB/onlinelog/o1_mf_2_2g3tkbwn_.log 262144000 3 /u02/oradata/TESTDB/onlinelog/o1_mf_3_2g3tmwno_.log 262144000 1 /u02/flash_recovery_area/TESTDB/onlinelog/o1_mf_1_2g3tc763_.log 262144000 2 /u02/flash_recovery_area/TESTDB/onlinelog/o1_mf_2_2g3tkmr6_.log 262144000 3 /u02/flash_recovery_area/TESTDB/onlinelog/o1_mf_3_2g3tn2g5_.log 262144000
o 6 rows selected.
134
ORACLE DATA BASE ADMINISTRATION
140. Verify that all database files have been created in the local file system: 141. $ sqlplus "/ as sysdba" 142. 143. SQL> @dba_files_all 144. 145. Tablespace Name / 146. File Class Filename File Size Auto Next Max 147. -------------------- ------------------------------------------------------------------------ --------------- ---- ----------- --------------148. APEX22 /u02/oradata/TESTDB/data_D-TESTDB_I2370649665_TS-APEX22_FNO-6_14hqpm8s 104,857,600 NO 0 0 149. EXAMPLE /u02/oradata/TESTDB/data_D-TESTDB_I2370649665_TS-EXAMPLE_FNO-4_13hqpm83 157,286,400 YES 655,360 34,359,721,984 150. FLOW_1 /u02/oradata/TESTDB/data_D-TESTDB_I2370649665_TS-FLOW_1_FNO-7_15hqpm9c 52,494,336 NO 0 0 151. SYSAUX /u02/oradata/TESTDB/data_D-TESTDB_I2370649665_TS-SYSAUX_FNO-3_11hqpm5s 419,430,400 YES 10,485,760 34,359,721,984 152. SYSTEM /u02/oradata/TESTDB/data_D-TESTDB_I2370649665_TS-SYSTEM_FNO-1_10hqpm45 608,174,080 YES 10,485,760 34,359,721,984 153. TEMP /u02/oradata/TESTDB/datafile/o1_mf_temp_2g3spvq5_.tmp 536,870,912 YES 262,144,000 34,359,721,984 154. UNDOTBS1 /u02/oradata/TESTDB/data_D-TESTDB_I2370649665_TS-UNDOTBS1_FNO-2_12hqpm7a 209,715,200 YES 5,242,880 34,359,721,984 155. USERS /u02/oradata/TESTDB/data_D-TESTDB_I2370649665_TS-USERS_FNO-5_0vhqpltd 2,382,888,960 YES 1,310,720 34,359,721,984 156. [ CONTROL FILE ] /u02/oradata/TESTDB/control01.ctl 157. [ ONLINE REDO LOG ] /u02/flash_recovery_area/TESTDB/onlinelog/o1_mf_1_2g3tc763_.log 262,144,000 158. [ ONLINE REDO LOG ] /u02/flash_recovery_area/TESTDB/onlinelog/o1_mf_2_2g3tkmr6_.log 262,144,000 159. [ ONLINE REDO LOG ] /u02/flash_recovery_area/TESTDB/onlinelog/o1_mf_3_2g3tn2g5_.log 262,144,000 160. [ ONLINE REDO LOG ] /u02/oradata/TESTDB/onlinelog/o1_mf_1_2g3tc008_.log 262,144,000 161. [ ONLINE REDO LOG ] /u02/oradata/TESTDB/onlinelog/o1_mf_2_2g3tkbwn_.log 262,144,000 162. [ ONLINE REDO LOG ] /u02/oradata/TESTDB/onlinelog/o1_mf_3_2g3tmwno_.log 262,144,000
135
ORACLE DATA BASE ADMINISTRATION
163. -----164. sum 6,044,581,888 165. 15 rows selected.
---------
166. At this point, the target database is open with all of its datafiles, controlfiles, online redo logfiles, tempfiles, and SPFILE stored in the local file system. If we wanted to remove the database files that were stored using ASM (which are actually now RMAN copies), this could be done from an RMAN session. You could also then remove the old version of the controfile(s) and SPFILE that were stored using ASM using a SQL*Plus session logged in to the ASM instance: If this is a production database, it would be best practice to first backup the database files on the local disk before removing the copies stored using ASM! 167. RMAN> DELETE NOPROMPT FORCE COPY; 168. 169. using target database control file instead of recovery catalog 170. allocated channel: ORA_DISK_1 171. channel ORA_DISK_1: sid=155 devtype=DISK 172. 173. List of Datafile Copies 174. Key File S Completion Time Ckp SCN Ckp Time Name 175. ------- ---- - --------------- ---------- --------------- ---176. 59 1 A 15-AUG-06 7970332 15-AUG-06 +TESTDB_DATA1/testdb/datafile/system.269.598482099 177. 60 2 A 15-AUG-06 7970332 15-AUG-06 +TESTDB_DATA1/testdb/datafile/undotbs1.256.598482299 178. 61 3 A 15-AUG-06 7970332 15-AUG-06 +TESTDB_DATA1/testdb/datafile/sysaux.267.598482213 179. 62 4 A 15-AUG-06 7970332 15-AUG-06 +TESTDB_DATA1/testdb/datafile/example.264.598482345 180. 63 5 A 15-AUG-06 7970332 15-AUG-06 +TESTDB_DATA1/testdb/datafile/users.270.598481673 181. 64 6 A 15-AUG-06 7970332 15-AUG-06 +TESTDB_DATA1/testdb/datafile/apex22.263.598482381 182. 65 7 A 15-AUG-06 7970332 15-AUG-06 +TESTDB_DATA1/testdb/datafile/flow_1.262.598482405 183. 184. List of Control File Copies 185. Key S Completion Time Ckp SCN Ckp Time Name 186. ------- - --------------- ---------- --------------- ---187. 58 A 15-AUG-06 7970332 15-AUG-06 /u02/oradata/TESTDB/cf_D-TESTDB_id-2370649665_16hqpm9j 188. 189. List of Archived Log Copies 190. Key Thrd Seq S Low Time Name 191. ------- ---- ------- - --------- ----
136
ORACLE DATA BASE ADMINISTRATION
192. 54 1 4 A 14-AUG-06 /u02/flash_recovery_area/TESTDB/archivelog/2006_08_15/o1_mf_1_4_2g3q xnxt_.arc 193. 55 1 5 A 14-AUG-06 /u02/flash_recovery_area/TESTDB/archivelog/2006_08_15/o1_mf_1_5_2g3q xo2c_.arc 194. 56 1 1 A 15-AUG-06 /u02/flash_recovery_area/TESTDB/archivelog/2006_08_15/o1_mf_1_1_2g3t 3v8b_.arc 195. 57 1 2 A 15-AUG-06 /u02/flash_recovery_area/TESTDB/archivelog/2006_08_15/o1_mf_1_2_2g3t 4q1g_.arc 196. 58 1 3 A 15-AUG-06 /u02/flash_recovery_area/TESTDB/archivelog/2006_08_15/o1_mf_1_3_2g3t dm2s_.arc 197. 59 1 4 A 15-AUG-06 /u02/flash_recovery_area/TESTDB/archivelog/2006_08_15/o1_mf_1_4_2g3tl kfk_.arc 198. 60 1 5 A 15-AUG-06 /u02/flash_recovery_area/TESTDB/archivelog/2006_08_15/o1_mf_1_5_2g3t qb1h_.arc 199. deleted datafile copy 200. datafile copy filename=+TESTDB_DATA1/testdb/datafile/system.269.598482099 recid=59 stamp=598531780 201. deleted datafile copy 202. datafile copy filename=+TESTDB_DATA1/testdb/datafile/undotbs1.256.598482299 recid=60 stamp=598531780 203. deleted datafile copy 204. datafile copy filename=+TESTDB_DATA1/testdb/datafile/sysaux.267.598482213 recid=61 stamp=598531780 205. deleted datafile copy 206. datafile copy filename=+TESTDB_DATA1/testdb/datafile/example.264.598482345 recid=62 stamp=598531780 207. deleted datafile copy 208. datafile copy filename=+TESTDB_DATA1/testdb/datafile/users.270.598481673 recid=63 stamp=598531780 209. deleted datafile copy 210. datafile copy filename=+TESTDB_DATA1/testdb/datafile/apex22.263.598482381 recid=64 stamp=598531780 211. deleted datafile copy 212. datafile copy filename=+TESTDB_DATA1/testdb/datafile/flow_1.262.598482405 recid=65 stamp=598531780 213. deleted control file copy 214. control file copy filename=/u02/oradata/TESTDB/cf_D-TESTDB_id2370649665_16hqpm9j recid=58 stamp=598530356 215. deleted archive log
137
ORACLE DATA BASE ADMINISTRATION
216. archive log filename=/u02/flash_recovery_area/TESTDB/archivelog/2006_08_15/o1_mf_ 1_4_2g3qxnxt_.arc recid=54 stamp=598531956 217. deleted archive log 218. archive log filename=/u02/flash_recovery_area/TESTDB/archivelog/2006_08_15/o1_mf_ 1_5_2g3qxo2c_.arc recid=55 stamp=598531962 219. deleted archive log 220. archive log filename=/u02/flash_recovery_area/TESTDB/archivelog/2006_08_15/o1_mf_ 1_1_2g3t3v8b_.arc recid=56 stamp=598534203 221. deleted archive log 222. archive log filename=/u02/flash_recovery_area/TESTDB/archivelog/2006_08_15/o1_mf_ 1_2_2g3t4q1g_.arc recid=57 stamp=598534231 223. deleted archive log 224. archive log filename=/u02/flash_recovery_area/TESTDB/archivelog/2006_08_15/o1_mf_ 1_3_2g3tdm2s_.arc recid=58 stamp=598534483 225. deleted archive log 226. archive log filename=/u02/flash_recovery_area/TESTDB/archivelog/2006_08_15/o1_mf_ 1_4_2g3tlkfk_.arc recid=59 stamp=598534673 227. deleted archive log 228. archive log filename=/u02/flash_recovery_area/TESTDB/archivelog/2006_08_15/o1_mf_ 1_5_2g3tqb1h_.arc recid=60 stamp=598534826 229. Deleted 15 objects 230. 231. RMAN> exit 232. 233. $ ORACLE_SID=+ASM; export ORACLE_SID 234. $ sqlplus "/ as sysdba" 235. 236. SQL> ALTER DISKGROUP TESTDB_DATA1 DROP FILE '+TESTDB_DATA1/TESTDB/CONTROLFILE/backup.268.598481391'; 237. 238. Diskgroup altered. 239. 240. SQL> ALTER DISKGROUP TESTDB_DATA1 DROP FILE '+TESTDB_DATA1/TESTDB/spfileTESTDB.ora'; 241. 242. Diskgroup altered. 243. One final note. Throughout this article, you will have noticed that I relied on using Oracle Managed Files (OMF) whenever possible. However, you can see that after relocating the database files from ASM to the local file system that our datafiles and controlfile(s) are not OMF. In this final task, I will convert the current datafiles and controlfile(s) to OMF: o First, determine the non-OMF controlfiles (to be renamed) currently in use: o SQL> select name from v$controlfile; o o NAME
138
ORACLE DATA BASE ADMINISTRATION
o
--------------------------------/u02/oradata/TESTDB/control01.ctl
o
Next, determine the non-OMF datafiles (to be renamed) by examining the view DBA_DATA_FILES: SQL> SELECT tablespace_name, file_name 2 FROM dba_data_files;
o o o o o o o o o o o o
TABLESPACE_NAME FILE_NAME --------------- -----------------------------------------------------------------------SYSTEM /u02/oradata/TESTDB/data_D-TESTDB_I2370649665_TS-SYSTEM_FNO-1_10hqpm45 UNDOTBS1 /u02/oradata/TESTDB/data_D-TESTDB_I2370649665_TS-UNDOTBS1_FNO-2_12hqpm7a SYSAUX /u02/oradata/TESTDB/data_D-TESTDB_I2370649665_TS-SYSAUX_FNO-3_11hqpm5s EXAMPLE /u02/oradata/TESTDB/data_D-TESTDB_I2370649665_TS-EXAMPLE_FNO-4_13hqpm83 USERS /u02/oradata/TESTDB/data_D-TESTDB_I2370649665_TS-USERS_FNO-5_0vhqpltd APEX22 /u02/oradata/TESTDB/data_D-TESTDB_I2370649665_TS-APEX22_FNO-6_14hqpm8s FLOW_1 /u02/oradata/TESTDB/data_D-TESTDB_I2370649665_TS-FLOW_1_FNO-7_15hqpm9c
o 7 rows selected. o o o o
Shutdown the database: SQL> shutdown immediate Database closed. Database dismounted. ORACLE instance shut down.
o
Rename the non-OMF controlfile(s) in the file system to the newest OMF file name format: $ cd /u02/oradata/TESTDB $ mkdir -p controlfile $ cp control01.ctl controlfile/o1_mf_8du3s3er_.ctl $ cp control01.ctl controlfile/o1_mf_y2is93je_.ctl $ rm control01.ctl
o o o o
o o o o o o o o o
Modify the control_files initialization parameter in your init.ora or SPFILE to reference the new name(s): SQL> startup nomount ORACLE instance started. Total System Global Area 285212672 bytes Fixed Size 1260420 bytes Variable Size 150996092 bytes Database Buffers 130023424 bytes Redo Buffers 2932736 bytes
139
ORACLE DATA BASE ADMINISTRATION
o o o
o o o o o o o o o
SQL> alter system set 2 control_files='/u02/oradata/TESTDB/controlfile/o1_mf_8du3s 3er_.ctl', 3 '/u02/oradata/TESTDB/controlfile/o1_mf_y2is93je_.ctl' 4 scope=spfile; System altered. SQL> shutdown immediate ORA-01507: database not mounted ORACLE instance shut down.
o o o o o o o o o
o o o o o o o o o o o o o o
Rename the non-OMF datafiles in the file system to the newest OMF file name format: $ cd /u02/oradata/TESTDB $ mkdir -p datafile $ mv data_D-TESTDB_I-2370649665_TS-SYSTEM_FNO1_10hqpm45 datafile/o1_mf_system_2fb4b8s2_.dbf $ mv data_D-TESTDB_I-2370649665_TS-UNDOTBS1_FNO2_12hqpm7a datafile/o1_mf_undotbs1_2fb4c2wf_.dbf $ mv data_D-TESTDB_I-2370649665_TS-SYSAUX_FNO3_11hqpm5s datafile/o1_mf_sysaux_2fb4cb7z_.dbf $ mv data_D-TESTDB_I-2370649665_TS-EXAMPLE_FNO4_13hqpm83 datafile/o1_mf_example_2fb4ccw2_.dbf $ mv data_D-TESTDB_I-2370649665_TS-USERS_FNO5_0vhqpltd datafile/o1_mf_users_2fb4cqf4_.dbf $ mv data_D-TESTDB_I-2370649665_TS-APEX22_FNO6_14hqpm8s datafile/o1_mf_apex22_2ft4eswu_.dbf $ mv data_D-TESTDB_I-2370649665_TS-FLOW_1_FNO7_15hqpm9c datafile/o1_mf_flow_1_2fb4cegw_.dbf Rename the files in the controlfile: SQL> startup mount ORACLE instance started. Total System Global Area 285212672 bytes Fixed Size 1260420 bytes Variable Size 150996092 bytes Database Buffers 130023424 bytes Redo Buffers 2932736 bytes Database mounted. SQL> alter database rename file 2 '/u02/oradata/TESTDB/data_D-TESTDB_I2370649665_TS-SYSTEM_FNO-1_10hqpm45' 3 to '/u02/oradata/TESTDB/datafile/o1_mf_system_2fb4b8s2_.db f';
140
ORACLE DATA BASE ADMINISTRATION
o o o o o o
o o o o o o
o o o o o o
o o o o o o o o o o o o
o o o o o o
Database altered. SQL> alter database rename file 2 '/u02/oradata/TESTDB/data_D-TESTDB_I2370649665_TS-UNDOTBS1_FNO-2_12hqpm7a' 3 to '/u02/oradata/TESTDB/datafile/o1_mf_undotbs1_2fb4c2wf_. dbf'; Database altered. SQL> alter database rename file 2 '/u02/oradata/TESTDB/data_D-TESTDB_I2370649665_TS-SYSAUX_FNO-3_11hqpm5s' 3 to '/u02/oradata/TESTDB/datafile/o1_mf_sysaux_2fb4cb7z_.dbf '; Database altered. SQL> alter database rename file 2 '/u02/oradata/TESTDB/data_D-TESTDB_I2370649665_TS-EXAMPLE_FNO-4_13hqpm83' 3 to '/u02/oradata/TESTDB/datafile/o1_mf_example_2fb4ccw2_.d bf'; Database altered. SQL> alter database rename file 2 '/u02/oradata/TESTDB/data_D-TESTDB_I2370649665_TS-USERS_FNO-5_0vhqpltd' 3 to '/u02/oradata/TESTDB/datafile/o1_mf_users_2fb4cqf4_.dbf'; Database altered. SQL> alter database rename file 2 '/u02/oradata/TESTDB/data_D-TESTDB_I2370649665_TS-APEX22_FNO-6_14hqpm8s' 3 to '/u02/oradata/TESTDB/datafile/o1_mf_apex22_2ft4eswu_.db f'; Database altered. SQL> alter database rename file 2 '/u02/oradata/TESTDB/data_D-TESTDB_I2370649665_TS-FLOW_1_FNO-7_15hqpm9c' 3 to '/u02/oradata/TESTDB/datafile/o1_mf_flow_1_2fb4cegw_.db f';
Database altered.
Open the database and verify the new OMF files:
SQL> alter database open;
Database altered.
SQL> SELECT tablespace_name, file_name
  2  FROM dba_data_files;
TABLESPACE_NAME FILE_NAME
--------------- ---------------------------------------------------------
SYSTEM          /u02/oradata/TESTDB/datafile/o1_mf_system_2fb4b8s2_.dbf
UNDOTBS1        /u02/oradata/TESTDB/datafile/o1_mf_undotbs1_2fb4c2wf_.dbf
SYSAUX          /u02/oradata/TESTDB/datafile/o1_mf_sysaux_2fb4cb7z_.dbf
EXAMPLE         /u02/oradata/TESTDB/datafile/o1_mf_example_2fb4ccw2_.dbf
USERS           /u02/oradata/TESTDB/datafile/o1_mf_users_2fb4cqf4_.dbf
APEX22          /u02/oradata/TESTDB/datafile/o1_mf_apex22_2ft4eswu_.dbf
FLOW_1          /u02/oradata/TESTDB/datafile/o1_mf_flow_1_2fb4cegw_.dbf
7 rows selected.
ORA-28000, ORA-28001, ORA-28002 : The account is locked, has expired, or the password will expire within xx days
ORA-28000 indicates that the user's account is locked. The most common cause is that the account was locked internally based on a profile resource limit: the user entered a wrong password consecutively for the maximum number of times specified by the profile parameter FAILED_LOGIN_ATTEMPTS. To solve this error, either wait for the PASSWORD_LOCK_TIME to elapse or, as the DBA, fire the command below:
SQL> alter user abc identified by password account unlock ;
ORA-28001 indicates that the user's account has expired. This error commonly occurs when the expiry time is reached; by default the password of a newly created user expires after 180 days. To solve this issue, check the profile assigned to the user and increase (or remove) the password expiry limit.
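Before changing anything, it can help to see which password limits are actually in force. A minimal check against the standard DBA_PROFILES view (shown here for the DEFAULT profile; substitute the profile actually assigned to the user):
SQL> select resource_name, limit
  2  from dba_profiles
  3  where profile = 'DEFAULT'
  4  and resource_name in ('FAILED_LOGIN_ATTEMPTS','PASSWORD_LOCK_TIME','PASSWORD_LIFE_TIME');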
First find the profile assigned to the user, then raise the limit:
SQL> select username, profile from dba_users where username = 'TEST' ;
SQL> alter profile profile_name limit PASSWORD_LIFE_TIME UNLIMITED;
ORA-28002 indicates that the user's password is about to expire and needs to be changed. This can be solved either by changing the password or by changing the user's profile. If we do not want this expiry behavior, we need to do the following:
1.) Log on to the product database as the SYSTEM user (not the application administration user).
2.) Find the profile that has PASSWORD_LIFE_TIME set to anything but UNLIMITED:
SQL> select * from dba_profiles where resource_name = 'PASSWORD_LIFE_TIME';
If the user name is, say, "test" (with password "test"), check the profile assigned to the user:
SQL> select username, profile from dba_users where username = 'TEST' ;
Once we have the profile, we alter the profile and unlock the user.
3.) Alter the profile with the following statements:
SQL> alter user test identified by test account unlock ;
SQL> alter profile profile_name limit PASSWORD_LIFE_TIME UNLIMITED;
where profile_name is the name of the profile whose password life we need to set to UNLIMITED. This should remove the password life message.

Automatic Workload Repository (AWR) in Oracle
Oracle has provided many performance gathering and reporting tools over the years. Originally the UTLBSTAT/UTLESTAT scripts were used to monitor performance metrics. Oracle 8i introduced the Statspack functionality, which Oracle 9i extended. In Oracle 10g, Statspack has evolved into the Automatic Workload Repository (AWR). The AWR is a repository of performance information collected by the database to aid the tuning process for DBAs. AWR snapshots are collected automatically by the MMON background process; Oracle 10g also creates and enables the scheduled job GATHER_STATS_JOB, which gathers optimizer statistics, when a new database is created. We can disable this job by using the dbms_scheduler.disable procedure as below:
Exec dbms_scheduler.disable('GATHER_STATS_JOB');
And we can enable the job using the dbms_scheduler.enable procedure as below:
Exec dbms_scheduler.enable('GATHER_STATS_JOB');
AWR consists of a collection of performance statistics, including:
Wait events used to identify performance problems.
Time model statistics indicating the amount of DB time associated with a process, from the v$sess_time_model and v$sys_time_model views.
Active Session History (ASH) statistics from the v$active_session_history view.
Some system and session statistics from the v$sysstat and v$sesstat views.
Object usage statistics.
Resource intensive SQL and PL/SQL.
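These statistics are persisted in the DBA_HIST_* dictionary views; for instance, the snapshots that bound an AWR report can be listed from the standard DBA_HIST_SNAPSHOT view:
SQL> select snap_id, begin_interval_time, end_interval_time
  2  from dba_hist_snapshot
  3  order by snap_id;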
The resource intensive SQL and PL/SQL section of the report can be used to focus tuning efforts on those areas that will yield the greatest returns. The statements are ordered by several criteria including : SQL ordered by Elapsed Time
SQL ordered by CPU Time
SQL ordered by Gets
SQL ordered by Reads
SQL ordered by Executions
SQL ordered by Parse Calls
SQL ordered by Sharable Memory
Several of the automatic database tuning features require information from the AWR to function correctly, including: Automatic Database Diagnostic Monitor
SQL Tuning Advisor
Undo Advisor
Segment Advisor
How to generate an AWR report?
There are two scripts provided by Oracle to generate the AWR report. The scripts are available in the directory $ORACLE_HOME\rdbms\admin:
1.) awrrpt.sql : If we have only one Oracle instance, run the awrrpt.sql script.
2.) awrrpti.sql : If we have more than one Oracle instance (as in RAC), run the awrrpti.sql script so that we can specify a particular instance for AWR report creation.
By default, snapshots of the relevant data are taken every hour and retained for 7 days. The default values for these settings can be altered using the procedure below:
BEGIN
  DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(
    retention => 43200,  -- Minutes (= 30 Days). Current value retained if NULL.
    interval  => 15);    -- Minutes. Current value retained if NULL.
END;
/
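Besides the automatic hourly snapshots, a snapshot can also be taken on demand, and the current interval and retention can be verified; a minimal sketch using the standard DBMS_WORKLOAD_REPOSITORY package and the DBA_HIST_WR_CONTROL view:
SQL> exec dbms_workload_repository.create_snapshot();
PL/SQL procedure successfully completed.
SQL> select snap_interval, retention from dba_hist_wr_control;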
Here we have altered the snapshot interval to 15 minutes. An interval of 15 minutes between two snapshots is generally enough to analyze a performance bottleneck.
AWR using Enterprise Manager : The automated workload repository administration tasks have been included in Enterprise Manager. The "Automatic Workload Repository" page is accessed from the main page by clicking on the "Administration" link, then the "Workload Repository" link under the "Workload" section. The page allows us to modify AWR settings or manage snapshots without using the PL/SQL APIs.
Here is a demo of the AWR report:
C:\>sqlplus sys/xxxx@orcl as sysdba
SQL*Plus: Release 11.2.0.1.0 Production on Thu Jun 16 11:42:19 2011
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> @D:\app\Neerajs\product\11.2.0\dbhome_1\RDBMS\ADMIN\awrrpt.sql
Current Instance
~~~~~~~~~~~~~~~~
   DB Id    DB Name      Inst Num Instance
----------- ------------ -------- ------------
 1281052636 ORCL                1 orcl
Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
Would you like an HTML report, or a plain text report?
Enter 'html' for an HTML report, or 'text' for plain text
Defaults to 'html'
Enter value for report_type: HTML
Type Specified: html
Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   DB Id     Inst Num DB Name      Instance     Host
------------ -------- ------------ ------------ ------------
* 1281052636        1 ORCL         orcl         xxxx
Using 1281052636 for database Id
Using          1 for instance number
Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent (n) days of snapshots being listed. Pressing <return> without specifying a number lists all completed snapshots.
Enter value for num_days: (Press Enter to see all the snapshots)
Listing all Completed Snapshots
Instance     DB Name      Snap Id    Snap Started       Level
------------ ------------ ---------- ------------------ -----
orcl         ORCL                  1 08 Jun 2011 11:30      1
                                   3 08 Jun 2011 14:41      1
                                   4 08 Jun 2011 15:30      1
. Data is truncated .
                                 120 16 Jun 2011 05:30      1
                                 121 16 Jun 2011 06:30      1
                                 122 16 Jun 2011 07:30      1
                                 123 16 Jun 2011 08:30      1
                                 124 16 Jun 2011 09:30      1
                                 125 16 Jun 2011 10:30      1
                                 126 16 Jun 2011 11:30      1
Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 125
Begin Snapshot Id specified: 125
Enter value for end_snap: 126
End Snapshot Id specified: 126
Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is awrrpt_1_125_126.html. To use this name, press <return> to continue, otherwise enter an alternative.
Enter value for report_name: (Press Enter if you want to use the above name)
Using the report name awrrpt_1_125_126.html
.
. Report is truncated .
.
End of Report
Report written to awrrpt_1_125_126.html
SQL>
In the above session, the highlighted lines are the values entered at the prompts.
Handling Corrupt Datafile Blocks in RMAN Backup
We have two different kinds of block corruption:
Physical corruption (media corrupt) : Physical corruption can be caused by defective memory boards, controllers or broken sectors on a hard disk.
Logical corruption (soft corrupt) : Logical corruption can, among other reasons, be caused by an attempt to recover through a NOLOGGING action.
When RMAN encounters a corrupt datafile block during a backup, the behavior depends upon whether RMAN has encountered this corrupt block during a previous backup. If the block is already identified as corrupt, then it is included in the backup. If the block is not
previously identified as corrupt, then RMAN's default behavior is to stop the backup. We can override this behavior using the SET MAXCORRUPT command with BACKUP in a RUN block. Setting MAXCORRUPT allows a specified number of previously undetected block corruptions in datafiles during the execution of an RMAN BACKUP command. Here are examples of SET MAXCORRUPT.
Syntax : SET MAXCORRUPT FOR DATAFILE <datafile_list> TO <integer> ;
Example :
i.) RMAN> run {
      set maxcorrupt for datafile 3,4,5,6 to 1 ;
      backup check logical database ;
    }
In the above example, datafiles 3, 4, 5 and 6 may each contain no more than 1 corrupt block, otherwise the backup will fail.
ii.) RMAN> run {
      set maxcorrupt for datafile 1 to 10;
      backup database skip inaccessible skip readonly ;
    }
If RMAN detects more than this number of new corrupt blocks while taking the backup, then the backup job aborts, and no backup is created. As RMAN finds corrupt blocks during the backup process, it writes the corrupt blocks to the backup with a special header indicating that the block has media corruption. If the backup completes without exceeding the specified MAXCORRUPT limit, then the database records the address of the corrupt blocks and the type of corruption found (logical or physical) in the control file. We can access these records through the V$DATABASE_BLOCK_CORRUPTION view.
Detecting Physical Block Corruption With RMAN BACKUP : RMAN checks only for physically corrupt blocks with every backup it takes and every image copy it makes. RMAN depends upon database server sessions to perform backups, and the database server can detect many types of physically corrupt blocks during the backup process. Each new corrupt block not previously encountered in a backup is recorded in the control file and in the alert.log. By default, error checking for physical corruption is enabled. At the end of a backup, RMAN stores the corruption information in the recovery catalog and control file.
How to detect block corruption ?
1.) DBVERIFY utility : DBVERIFY is an external command-line utility that performs a physical data structure integrity check. It can be used on offline or online databases, as well as on backup files. We use DBVERIFY primarily when we need to ensure that a backup database (or datafile) is valid before it is restored.
2.) Block checking parameters : There are two initialization parameters for dealing with block corruption:
DB_BLOCK_CHECKSUM (calculates a checksum for each block before it is written to disk, every time) causes 1-2% performance overhead.
DB_BLOCK_CHECKING (the server process checks blocks for internal consistency after every DML) causes 1-10% performance overhead.
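As an illustration of running DBVERIFY against a single datafile (the datafile path and log file name here are only examples):
C:\> dbv FILE=D:\oracle\oradata\noida\users01.dbf BLOCKSIZE=8192 LOGFILE=D:\dbv_users01.log
The log file reports, among other things, the total pages examined and the pages marked corrupt.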
3.) ANALYZE TABLE : The SQL statement ANALYZE TABLE tablename VALIDATE STRUCTURE CASCADE validates the structure of an index or index partition, table or table partition, index-organized table, cluster, or object reference (REF).
4.) RMAN BACKUP command with VALIDATE option : We can use the VALIDATE option of the BACKUP command to verify that database files exist and are in the correct locations, and have no physical or logical corruptions that would prevent RMAN from creating backups of them. When performing a BACKUP ... VALIDATE, RMAN reads the files to be backed up in their entirety, as it would during a real backup. It does not, however, actually produce any backup sets or image copies. The RESTORE command supports VALIDATE in the same way:
RMAN> RESTORE DATABASE VALIDATE;
RMAN> RESTORE ARCHIVELOG ALL VALIDATE;
To check for logical corruptions in addition to physical corruptions, run the following variation of the preceding command:
RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE ARCHIVELOG ALL;
Detection of Logical Block Corruption : Besides testing for media corruption, the database can also test data and index blocks for logical corruption, such as corruption of a row piece or index entry. If RMAN finds logical corruption, then it logs the block in the alert.log. If CHECK LOGICAL was used, the block is also logged in the server session trace file. By default, error checking for logical corruption is disabled.
1.) If RMAN finds any block corruption in the database, the following data dictionary views are populated:
V$COPY_CORRUPTION
V$BACKUP_CORRUPTION
V$DATABASE_BLOCK_CORRUPTION
2.) EXPORT/IMPORT command line utility : A full database export with show=y is another method.
. about to export SCOTT's tables via Conventional Path ...
. . exporting table BONUS
EXP-00056: ORACLE error 1578 encountered
ORA-01578: ORACLE data block corrupted (file # 4, block # 43)
ORA-01110: data file 4: 'D:\app\Neerajs\oradata\orcl\USERS01.DBF'
3.) DBMS_REPAIR package : dbms_repair is a utility that can detect and repair block corruption within Oracle. It is provided by Oracle as part of the standard database installation.
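Once corrupt blocks have been recorded in the control file, we can list them and, provided suitable backups and archived logs exist, repair them with RMAN block media recovery. A minimal sketch using the standard view and the 11g RMAN command:
SQL> select file#, block#, blocks, corruption_type from v$database_block_corruption;
RMAN> recover corruption list;
After a successful block recovery, the repaired blocks are removed from V$DATABASE_BLOCK_CORRUPTION.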
ORA-04043 , ORA-00942 : object does not exist
Today I faced a rather unusual error. I had imported a table from MS Access into the "scott" schema. When I checked for the table in the scott schema, I found it was there, yet when I tried to access the table it threw the error ORA-00942. I was puzzled.
SQL> select * from tab;
TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
BONUS                          TABLE
DEPT                           TABLE
EMP                            TABLE
SALGRADE                       TABLE
Table11                        TABLE
SQL> select * from table11;
select * from table11
              *
ERROR at line 1:
ORA-00942: table or view does not exist
Then I decided to describe the table, and got the error ORA-04043:
SQL> desc Table11
ERROR:
ORA-04043: object Table11 does not exist
SQL> desc dept
 Name                          Null?     Type
 ----------------------------- --------- --------------------
 DEPTNO                        NOT NULL  NUMBER(2)
 DNAME                                   VARCHAR2(14)
 LOC                                     VARCHAR2(13)
While the table "dept" worked fine, I decided to rename the problem table, hoping that would solve it:
SQL> rename Table11 to emp1;
rename Table11 to emp1
*
ERROR at line 1:
ORA-04043: object TABLE11 does not exist
After some analysis I came to the conclusion that, because the table was exported from MS Access, it was created with a case-sensitive (quoted, mixed-case) identifier. Oracle folds unquoted names to uppercase, so TABLE11 does not match "Table11". To solve this issue, I put the table name in double quotes to access the table:
SQL> desc "Table11"
 Name                          Null?     Type
 ----------------------------- --------- ------------------------
 ID                                      VARCHAR2(20)
 ACCOUNTNO                               BINARY_DOUBLE
 TEMPLATENO                              BINARY_DOUBLE
 DEFAULTTEMPLATE                         BINARY_DOUBLE
ORA-04043 is a special error and can be caused by various reasons. A few possible causes are:
- An attempt was made to rename an index or a cluster, or some other object that cannot be renamed.
- An invalid name for a table, view, sequence, procedure, function, package, or package body was entered.

What is Checkpoint ?
A checkpoint is an operation that Oracle performs to ensure data file consistency. When a checkpoint occurs, Oracle ensures all modified buffers are written from the data buffer to disk files. Frequent checkpoints decrease the time necessary for recovery should the database crash, but may decrease overall database performance. A checkpoint performs the following three operations:
1.) Every dirty block in the buffer cache is written to the data files. That is, it synchronizes the data blocks in the buffer cache with the datafiles on disk. It is the DBWR process that writes all modified database blocks back to the datafiles.
2.) The latest SCN is written (updated) into the datafile header.
3.) The latest SCN is also written to the controlfiles.
The checkpoint process (CKPT) is responsible for writing checkpoints to the data file headers and control file. Checkpoints occur in a variety of situations. For example, Oracle Database uses the following types of checkpoints:
1.) Thread checkpoints : The database writes to disk all buffers modified by redo in a specific thread before a certain target. The set of thread checkpoints on all instances in a database is a database checkpoint. Thread checkpoints occur in the following situations:
Consistent database shutdown .
ALTER SYSTEM CHECKPOINT statement .
Online redo log switch .
ALTER DATABASE BEGIN BACKUP statement
2.) Tablespace and data file checkpoints : The database writes to disk all buffers modified by redo before a specific target. A tablespace checkpoint is a set of data file checkpoints, one for each data file in the tablespace. These checkpoints occur in a variety of situations, including making a tablespace read-only or taking it offline normal, shrinking a data file, or executing ALTER TABLESPACE BEGIN BACKUP. 3.) Incremental checkpoints : An incremental checkpoint is a type of thread checkpoint partly intended to avoid writing large numbers of blocks at online redo log switches. DBWn checks at least every three seconds to determine whether it has work to do. When DBWn writes dirty buffers, it advances the checkpoint position, causing CKPT to write the checkpoint position to the control file, but not to the data file headers. Other types of checkpoints include instance and media recovery checkpoints and checkpoints when schema objects are dropped or truncated. Importance of Checkpoints for Instance Recovery :
Instance recovery uses checkpoints to determine which changes must be applied to the data files. The checkpoint position guarantees that every committed change with an SCN lower than the checkpoint SCN is saved to the data files.
Checkpoint Position in the Online Redo Log File : During instance recovery, the database must apply the changes that occur between the checkpoint position and the end of the redo thread. Some changes may already have been written to the data files; however, only changes with SCNs lower than the checkpoint position are guaranteed to be on disk.
Time and SCN of last checkpoint : The date and time of the last checkpoint can be retrieved through checkpoint_time in the v$datafile_header view. The SCN of the last checkpoint can be found in v$database.
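A minimal check of both, using the standard dynamic performance views:
SQL> select file#, checkpoint_change#,
  2         to_char(checkpoint_time,'DD-MON-YYYY HH24:MI:SS') checkpoint_time
  3  from v$datafile_header;
SQL> select checkpoint_change# from v$database;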
Types of Checkpoint in Oracle
Checkpoint types can be divided into INCREMENTAL and COMPLETE. A COMPLETE checkpoint can be divided further into PARTIAL and FULL.
In an incremental checkpoint, checkpoint information is written to the controlfile, in the following cases:
1. Every three seconds.
2. At the time of a log switch - sometimes log switches may trigger a complete checkpoint, if the next log where the log switch is to take place is Active.
In a complete checkpoint, checkpoint information is written to the controlfile and datafile headers, and the dirty blocks are written by DBWR to the datafiles.
A Full Checkpoint happens in the following cases:
1. fast_start_mttr_target
2. Before a clean shutdown.
3. Some log switches may trigger a complete checkpoint, if the next log where the log switch is to take place is Active. This has more chance of happening when the redo log files are small in size and continuous transactions are taking place.
4. When the 'alter system checkpoint' command is issued.
A Partial Checkpoint happens in the following cases:
1. Before begin backup.
2. Before taking a tablespace offline.
3. Before placing a tablespace in read only mode.
4. Before dropping a tablespace.
5. Before taking a datafile offline.
6. When the checkpoint queue exceeds its threshold.
7. Before a segment is dropped.
8. Before adding and removing columns from a table.

What is a Checkpoint?
A synchronization event at a specific point in time
Causes some or all dirty block images to be written to the database, thereby guaranteeing that blocks dirtied prior to that point in time get written
Brings administration up to date
Several types of checkpoint exist
Types of Checkpoints?
Full Checkpoint
Thread Checkpoint
File Checkpoint
Object "Checkpoint"
Parallel Query Checkpoint
Incremental Checkpoint
Log Switch Checkpoint
Full Checkpoint
Writes block images to the database for all dirty buffers from all instances
• Statistics updated:
  – DBWR checkpoints
  – DBWR checkpoint buffers written
  – DBWR thread checkpoint buffers written
• Caused by:
  – Alter system checkpoint [global]
  – Alter database close
  – Shutdown
• Controlfile and datafile headers are updated
  – CHECKPOINT_CHANGE#
Thread Checkpoint
Writes block images to the database for all dirty buffers from one instance
• Statistics updated:
  – DBWR checkpoints
  – DBWR checkpoint buffers written
  – DBWR thread checkpoint buffers written
• Caused by:
  – Alter system checkpoint local
• Controlfile and datafile headers are updated
  – CHECKPOINT_CHANGE#
File Checkpoint
Writes block images to the database for all dirty buffers for all files of a tablespace from all instances
• Statistics updated:
  – DBWR tablespace checkpoint buffers written
  – DBWR checkpoint buffers written
  – DBWR checkpoints
• Caused by:
  – Alter tablespace XXX offline
  – Alter tablespace XXX begin backup
  – Alter tablespace XXX read only
• Controlfile and datafile headers are updated
  – CHECKPOINT_CHANGE#
Parallel Query Checkpoint
Writes block images to the database for all dirty buffers belonging to objects accessed by the query from all instances
• Statistics updated:
  – DBWR checkpoint buffers written
  – DBWR checkpoints
• Caused by:
  – Parallel Query
  – Parallel Query component of PDML or PDDL
• Mandatory for consistency
Object "Checkpoint"
Writes block images to the database for all dirty buffers belonging to an object from all instances
• Statistics updated:
  – DBWR object drop buffers written
  – DBWR checkpoints
• Caused by:
  – Drop table XXX
  – Drop table XXX purge
  – Truncate table XXX
• Mandatory for media recovery purposes
Incremental Checkpoint
Writes the contents of "some" dirty buffers to the database from the CKPT-Q
• Block images written in SCN order
• Checkpoint RBA updated in SGA
• Statistics updated:
  – DBWR checkpoint buffers written
• Controlfile is updated every 3 seconds by CKPT
  – Checkpoint progress record
Log Switch Checkpoint
Writes the contents of "some" dirty buffers to the database
• Statistics updated:
  – DBWR checkpoints
  – DBWR checkpoint buffers written
  – background checkpoints started
  – background checkpoints completed
• Controlfile and datafile headers are updated
  – CHECKPOINT_CHANGE#
What is "some" above?
Every 3 seconds CKPT calculates the checkpoint target RBA based on:
  – the most current RBA
  – log_checkpoint_timeout
  – log_checkpoint_interval
  – fast_start_mttr_target
  – fast_start_io_target
  – 90% of the size of the smallest online redo log file
• All buffers dirtied prior to the time corresponding to the target RBA are written to the database
Useful checkpoint administration views:
  – V$INSTANCE_RECOVERY
  – V$SYSSTAT
  – V$DATABASE
  – V$INSTANCE_LOG_GROUP
  – V$THREAD
  – V$DATAFILE
  – V$DATAFILE_HEADER
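Of these, V$INSTANCE_RECOVERY is the one most directly tied to checkpointing and instance recovery tuning; a minimal query of a few of its standard columns:
SQL> select recovery_estimated_ios, estimated_mttr, target_mttr
  2  from v$instance_recovery;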
Read-Only Tables in Oracle 11g
Sometimes it is necessary to make a particular table read only. Prior to 11g, a read-only table was achieved by using triggers, constraints and other methods to prevent the data from being changed; in many of those cases only INSERT, UPDATE, and DELETE operations were prevented while many DDL operations were not. In Oracle 11g, tables can be marked as read only, preventing DML operations against them. From a performance point of view, read-only table performance is quite good: because Oracle does not have the additional overhead of maintaining internal consistency, there may be a small but measurable reduction in resource consumption. When a table is in read-only mode, operations that attempt to modify table data are disallowed. The following operations are not permitted on a read-only table:
All DML operations on the table or any of its partitions.
TRUNCATE TABLE
SELECT FOR UPDATE
ALTER TABLE ADD/MODIFY/RENAME/DROP COLUMN
ALTER TABLE SET COLUMN UNUSED
ALTER TABLE DROP/TRUNCATE/EXCHANGE (SUB)PARTITION
ALTER TABLE UPGRADE INCLUDING DATA or ALTER TYPE CASCADE INCLUDING TABLE DATA for a type with read-only table dependents
FLASHBACK TABLE
The following operations are permitted on a read-only table : SELECT
CREATE/ALTER/DROP INDEX
ALTER TABLE ADD/MODIFY/DROP/ENABLE/DISABLE CONSTRAINT
ALTER TABLE for physical property changes
ALTER TABLE DROP UNUSED COLUMNS
ALTER TABLE ADD/COALESCE/MERGE/MODIFY/MOVE/RENAME
ALTER TABLE MOVE
ALTER TABLE ENABLE ROW MOVEMENT and ALTER TABLE SHRINK
RENAME TABLE and ALTER TABLE RENAME TO
DROP TABLE
ALTER TABLE DEALLOCATE UNUSED
ALTER TABLE ADD/DROP SUPPLEMENTAL LOG
Here is a demo of the read-only table:
SQL> create table test (id number, name varchar2(12));
Table created.
SQL> insert into test values (1,'joy');
1 row created.
SQL> insert into test values (2,'hope');
1 row created.
SQL> insert into test values (3,'peace');
1 row created.
SQL> insert into test values (4,'happy');
1 row created.
SQL> commit ;
Commit complete.
SQL> select * from test ;
        ID NAME
---------- ------------
         1 joy
         2 hope
         3 peace
         4 happy
SQL> select table_name, status, read_only from user_tables where table_name='TEST';
TABLE_NAME   STATUS   REA
------------ -------- ---
TEST         VALID    NO
Now placing the table "test" in read only mode:
SQL> alter table test read only;
Table altered.
SQL> insert into test values (5,'sunny');
insert into test values (5,'sunny')
*
ERROR at line 1:
ORA-12081: update operation not allowed on table "HR"."TEST"
SQL> delete from test;
delete from test
*
ERROR at line 1:
ORA-12081: update operation not allowed on table "HR"."TEST"
SQL> truncate table test;
truncate table test
*
ERROR at line 1:
ORA-12081: update operation not allowed on table "HR"."TEST"
Now bringing the table "test" back into read write mode:
SQL> alter table test read write;
Table altered.
SQL> insert into test values (5,'sunny');
1 row created.
SQL> commit ;
Commit complete.
SQL> select * from test;
        ID NAME
---------- ------------
         1 joy
         2 hope
         3 peace
         4 happy
         5 sunny

Automatic Diagnostic Repository (ADR) in Oracle 11g
A special repository named ADR (Automatic Diagnostic Repository) is automatically maintained by Oracle 11g to hold diagnostic information about critical error events. This repository enables database components to capture diagnostic data at first failure for critical errors. In Oracle 11g, init.ora parameters like user_dump_dest and background_dump_dest are deprecated. They have been replaced by the single parameter DIAGNOSTIC_DEST, which identifies the location of the ADR. ADR is a file-based repository for diagnostic data such as trace files, process dumps, data structure dumps, etc. The default location of DIAGNOSTIC_DEST is $ORACLE_HOME/log, and if ORACLE_BASE is set in the environment then DIAGNOSTIC_DEST is set to $ORACLE_BASE. The ADR can be managed via the 11g Enterprise Manager GUI (Database Control, not Grid Control) or via the ADR command line interpreter adrci. The new 11g initialization parameter DIAGNOSTIC_DEST decides the location of the ADR root.
The ADR directory structure is designed to use consistent diagnostic data formats across products and instances, and an integrated set of tools enables customers and Oracle Support to correlate and analyze diagnostic data across multiple instances. In 11g the alert log is saved in two locations: one in the alert directory (in XML format) and the old-style text alert file in the trace directory. Within the ADR base there can be many ADR homes, where each ADR home is the root directory for all diagnostic data for a particular instance. Both alert files can be viewed with EM and the ADRCI utility.
SQL> show parameter diag
NAME                 TYPE        VALUE
-------------------- ----------- -----------
diagnostic_dest      string      D:\ORACLE
The table below shows the new locations of the diagnostic trace files:
Data                       Old location              ADR location
-------------------------- ------------------------- ----------------
Core dump                  CORE_DUMP_DEST            $ADR_HOME/cdump
Alert log data             BACKGROUND_DUMP_DEST      $ADR_HOME/trace
Background process trace   BACKGROUND_DUMP_DEST      $ADR_HOME/trace
User process trace         USER_DUMP_DEST            $ADR_HOME/trace
We can use the V$DIAG_INFO view to list some important ADR locations such as ADR Base, ADR Home, Diagnostic Trace, Diagnostic Alert, Default Trace File, etc.
SQL> select * from v$diag_info;
   INST_ID NAME                   VALUE
---------- ---------------------- ------------------------------------------
         1 Diag Enabled           TRUE
         1 ADR Base               d:\oracle
         1 ADR Home               d:\oracle\diag\rdbms\noida\noida
         1 Diag Trace             d:\oracle\diag\rdbms\noida\noida\trace
         1 Diag Alert             d:\oracle\diag\rdbms\noida\noida\alert
         1 Diag Incident          d:\oracle\diag\rdbms\noida\noida\incident
         1 Diag Cdump             d:\oracle\diag\rdbms\noida\noida\cdump
         1 Health Monitor         d:\oracle\diag\rdbms\noida\noida\hm
         1 Active Problem Count   0
         1 Active Incident Count  0
10 rows selected.
ADRCI (Automatic Diagnostic Repository Command Interpreter) : The ADR Command Interpreter (ADRCI) is a command-line tool that we use to manage Oracle Database diagnostic data. ADRCI is part of the fault diagnosability infrastructure introduced in Oracle Database Release 11g. ADRCI enables:
Viewing diagnostic data within the Automatic Diagnostic Repository (ADR).
Viewing Health Monitor reports.
Packaging of incident and problem information into a zip file for transmission to Oracle Support.
Diagnostic data includes incident and problem descriptions, trace files, dumps, health monitor reports, alert log entries, and more. ADRCI has a rich command set, and can be used in interactive mode or within scripts. In addition, ADRCI can execute scripts of ADRCI commands in the same way that SQL*Plus executes scripts of SQL and PL/SQL commands.
To use ADRCI in interactive mode, enter the following command at the operating system command prompt:
C:\>adrci
ADRCI: Release 11.1.0.6.0 - Beta on Wed May 18 12:31:40 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
ADR base = "d:\oracle"
To get the list of adrci commands, type the help command as below:
adrci> help
HELP [topic]
Available Topics:
        CREATE REPORT
        ECHO
        EXIT
        HELP
        HOST
        IPS
        PURGE
        RUN
        SET BASE
        SET BROWSER
        SET CONTROL
        SET ECHO
        SET EDITOR
        SET HOMES | HOME | HOMEPATH
        SET TERMOUT
        SHOW ALERT
        SHOW BASE
        SHOW CONTROL
        SHOW HM_RUN
        SHOW HOMES | HOME | HOMEPATH
        SHOW INCDIR
        SHOW INCIDENT
        SHOW PROBLEM
        SHOW REPORT
        SHOW TRACEFILE
        SPOOL
There are other commands intended to be used directly by Oracle; type "HELP EXTENDED" to see the list.
Viewing the Alert Log : The alert log is written as both an XML-formatted file and as a text file. We can view either format of the file with any text editor, or we can run an ADRCI command to view the XML-formatted alert log with the XML tags stripped. By default, ADRCI displays the alert log in your default editor. The following are variations on the SHOW ALERT command:
adrci> SHOW ALERT -TAIL
This displays the last portion of the alert log (the last 10 entries) in your terminal session.
adrci> SHOW ALERT -TAIL 50
This displays the last 50 entries in the alert log in your terminal session.
adrci> SHOW ALERT -TAIL -F
This displays the last 10 entries in the alert log, and then waits for more messages to arrive in the alert log. As each message arrives, it is appended to the display. This command enables you to perform "live monitoring" of the alert log. Press CTRL-C to stop waiting and return to the ADRCI prompt. Here are a few examples:
adrci> show alert
Choose the alert log from the following homes to view:
1: diag\clients\user_neerajs\host_444208803_11
2: diag\clients\user_system\host_444208803_11
3: diag\clients\user_unknown\host_411310321_11
4: diag\rdbms\delhi\delhi
5: diag\rdbms\noida\noida
6: diag\tnslsnr\ramtech-199\listener
Q: to quit
Please select option: 4
Output the results to file: c:\docume~1\neeraj~1.ram\locals~1\temp\alert_932_4048_delhi_1.ado
'vi' is not recognized as an internal or external command, operable program or batch file.
Please select option: q
Since we are on a Windows platform we don't have the vi editor, so we set the editor for Windows to, say, notepad.
adrci> set editor notepad
adrci> SHOW ALERT
Choose the alert log from the following homes to view:
1: diag\clients\user_neerajs\host_444208803_11
2: diag\clients\user_system\host_444208803_11
3: diag\clients\user_unknown\host_411310321_11
4: diag\rdbms\delhi\delhi
5: diag\rdbms\noida\noida
6: diag\tnslsnr\ramtech-199\listener
Q: to quit
Please select option: 4
Output the results to file: c:\docume~1\neeraj~1.ram\locals~1\temp\alert_916_956_noida_7.ado
Here it will open the alert log file; check the file as per our need.
If we want to filter the alert log, we can do so as below:
adrci> show alert -P "message_text LIKE '%ORA-600%'"
This displays only alert log messages that contain the string 'ORA-600'.
Choose the alert log from the following homes to view:
1: diag\clients\user_neerajs\host_444208803_11
2: diag\clients\user_system\host_444208803_11
3: diag\clients\user_unknown\host_411310321_11
4: diag\rdbms\delhi\delhi
5: diag\rdbms\noida\noida
6: diag\tnslsnr\ramtech-199\listener
Q: to quit
Please select option: 5
Here the output is blank because there is no ORA-600 error in this alert log.
Finding Trace Files : ADRCI enables us to view the names of trace files that are currently in the automatic diagnostic repository (ADR). We can view the names of all trace files in the ADR, or we can apply filters to view a subset of names. For example, ADRCI has commands that enable us to:
Obtain a list of trace files whose file name matches a search string.
Obtain a list of trace files in a particular directory.
Obtain a list of trace files that pertain to a particular incident.
The following statement lists the name of every trace file that has the string 'mmon' in its file name. The percent sign (%) is used as a wildcard character, and the search string is case sensitive:
adrci> SHOW TRACEFILE %mmon%
This statement lists trace files in reverse order of timestamp (most recently modified first):
adrci> SHOW TRACEFILE -RT
This statement lists the names of all trace files related to incident number 1681:
adrci> SHOW TRACEFILE -I 1681
Viewing Incidents : The ADRCI SHOW INCIDENT command displays information about open incidents. For each incident, the incident ID, problem key, and incident creation time are shown. If the ADRCI homepath is set so that there are multiple current ADR homes, the report includes incidents from all of them.
adrci> SHOW INCIDENT
ADR Home = d:\oracle\diag\rdbms\noida\noida:
*******************************************************************
0 rows fetched
Purging Alert Log Content : The adrci command 'purge' can be used to purge entries from the alert log. Note that this purge only applies to the XML-based alert log, not to the text-based alert log, which still has to be maintained using OS commands. The purge command takes its input in minutes and specifies the number of minutes for which records should be retained. So, to purge all alert log entries older than 7 days, the following command is used:
adrci> purge -age 10080 -type ALERT
ADR retention can be controlled with ADRCI : There is a retention policy for the ADR that allows us to specify how long to keep the data. ADR incidents are controlled by two different policies:
The incident metadata retention policy (default is 1 year)
The incident files and dumps retention policy (default is one month)
We can change the retention policies using adrci; MMON automatically purges expired ADR data.
adrci> show control
The above command shows the SHORTP_POLICY and LONGP_POLICY, and these policies can be changed as below:
adrci> set control (SHORTP_POLICY = 360)
adrci> set control (LONGP_POLICY = 4380)

Cold Cloning Using Controlfile Backup in Oracle 11g
Cloning is the ability to copy and restore a database image and to set up the database image as a new instance. The new instance can reside on the same system as the original database, or on a different system. There are two methods of cloning:
1.) Cold Cloning : Cold cloning does not require recovery because the source database is shut down normally before the image is created.
2.) Hot Cloning : Hot cloning does not require the source database to be shut down. The clone is recovered from a hot backup of the database together with a backup controlfile and the archived logs.
Reasons for Cloning :
In every Oracle development and production environment there will come a need to transport an entire database from one physical machine to another. The copy may be used for development, production testing, beta testing, etc., but rest assured that this need will arise and management will ask us to perform this task quickly. There are various reasons for cloning an Oracle system, such as:
Creating a copy of the production system for testing updates.
Migrating an existing system to new hardware.
Creating a stage area to reduce patching downtime.
Relocating an Oracle database to another machine.
Renaming Oracle database.
Terms used to describe the method:
Production Database       ===>> "noida"
Database to be cloned     ===>> "delhi"
Platform Used             ===>> Oracle 11gR1
Here is the step-by-step method of cloning:
Step 1 : Create the directory structure for the clone database :
Create the directory structure for the Oracle database files. In my case all datafiles, controlfiles and redo logs will be stored in "D:\oracle\oradata\delhi", so make a folder named "delhi" inside the oradata folder; similarly, inside the admin folder make a new folder "delhi" and within it create the folders adump, pfile and dpdump.
Step 2 : Create the pfile for the clone database :
C:\>sqlplus sys/xxxx@noida as sysdba
SQL> create pfile='C:\initdelhi.ora' from spfile ;
File created.
The pfile of the "noida" database is:
noida.__db_cache_size=109051904
noida.__java_pool_size=12582912
noida.__large_pool_size=4194304
noida.__oracle_base='D:\oracle'#ORACLE_BASE set from environment
noida.__pga_aggregate_target=104857600
noida.__sga_target=322961408
noida.__shared_io_pool_size=0
noida.__shared_pool_size=188743680
noida.__streams_pool_size=0
*.audit_file_dest='D:\oracle\admin\noida\adump'
*.audit_trail='db'
*.compatible='11.1.0.0.0'
*.control_files='D:\ORACLE\ORADATA\NOIDA\CONTROL01.CTL','D:\ORACLE\ORADATA\NOIDA\CONTROL02.CTL','D:\ORACLE\ORADATA\NOIDA\CONTROL03.CTL'#Restore Controlfile
*.db_block_size=8192
*.db_domain=''
*.db_name='noida'
*.db_recovery_file_dest_size=2147483648
*.db_recovery_file_dest='D:\oracle\flash_recovery_area'
*.diagnostic_dest='D:\oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=noidaXDB)'
*.log_archive_dest=''
*.log_archive_dest_1='location=D:\archive\'
*.log_archive_format='noida_%s_%t_%r.arc'
*.memory_target=425721856
*.open_cursors=300
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS2'
Replace the text "noida" with "delhi" and save it. Hence we have the pfile for the clone database:
delhi.__db_cache_size=109051904
delhi.__java_pool_size=12582912
delhi.__large_pool_size=4194304
delhi.__oracle_base='D:\oracle'#ORACLE_BASE set from environment
delhi.__pga_aggregate_target=104857600
delhi.__sga_target=322961408
delhi.__shared_io_pool_size=0
delhi.__shared_pool_size=188743680
delhi.__streams_pool_size=0
*.audit_file_dest='D:\oracle\admin\delhi\adump'
*.audit_trail='db'
*.compatible='11.1.0.0.0'
*.control_files='D:\ORACLE\ORADATA\DELHI\CONTROL01.CTL','D:\ORACLE\ORADATA\DELHI\CONTROL02.CTL','D:\ORACLE\ORADATA\DELHI\CONTROL03.CTL'#Restore Controlfile
*.db_block_size=8192
*.db_domain=''
*.db_name='delhi'
*.db_recovery_file_dest_size=2147483648
*.db_recovery_file_dest='D:\oracle\flash_recovery_area'
*.diagnostic_dest='D:\oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=delhiXDB)'
*.log_archive_dest=''
*.log_archive_dest_1='location=D:\archive\'
*.log_archive_format='delhi_%s_%t_%r.arc'
*.memory_target=425721856
*.open_cursors=300
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS2'
Step 3 : Configure the listener and services for the clone database
Configure the listener using netmgr and configure the TNS entry using netca. Reload the listener and check its status.
C:\>lsnrctl
LSNRCTL for 32-bit Windows: Version 11.1.0.6.0 - Production on 13-MAY-2011 13:14:31
Copyright (c) 1991, 2007, Oracle. All rights reserved.
Welcome to LSNRCTL, type "help" for information.
LSNRCTL> reload
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=xxxx)(PORT=1521)))
The command completed successfully
LSNRCTL> status
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=xxxx)(PORT=1521)))
STATUS of the LISTENER
----------------------
Alias                     LISTENER
Version                   TNSLSNR for 32-bit Windows: Version 11.1.0.6.0 - Production
Start Date                13-MAY-2011 11:12:13
Uptime                    0 days 2 hr. 2 min. 36 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   D:\oracle\product\11.1.0\db_1\network\admin\listener.ora
Listener Log File         d:\oracle\diag\tnslsnr\ramtech-199\listener\alert\log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=xxxx)(PORT=1521)))
Services Summary...
Service "delhi" has 1 instance(s).
  Instance "delhi", status UNKNOWN, has 1 handler(s) for this service...
Service "noida" has 1 instance(s).
  Instance "noida", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
LSNRCTL> exit
Check the TNS entry:
C:\> tnsping delhi
TNS Ping Utility for 32-bit Windows: Version 11.1.0.6.0 - Production on 13-MAY-2011 13:17:06
Copyright (c) 1997, 2007, Oracle. All rights reserved.
Used parameter files:
D:\oracle\product\11.1.0\db_1\network\admin\sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = xxxx)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = delhi)))
OK (50 msec)
Step 4 : Create the instance for the clone database
C:\>oradim -new -sid delhi -intpwd delhi -startmode m
Instance created.
Step 5 : Start the clone database in nomount stage
C:\>sqlplus sys/delhi@delhi as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Fri May 13 13:23:11 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to an idle instance.
SQL> create spfile from pfile='C:\initdelhi.ora';
File created.
SQL> startup nomount
ORACLE instance started.
Total System Global Area  426852352 bytes
Fixed Size                  1333648 bytes
Variable Size             310380144 bytes
Database Buffers          109051904 bytes
Redo Buffers                6086656 bytes
Step 6 : Create the script for the controlfile
On the production database:
SQL> alter database backup controlfile to trace;
Database altered.
Now check the alert log file and find the name of the .trc file that contains the backup of the controlfile. In my case the trace file contains the following information:
Trace file d:\oracle\diag\rdbms\noida\noida\trace\noida_ora_3952.trc
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Windows XP Version V5.1 Service Pack 2
CPU : 2 - type 586
Process Affinity : 0x00000000
Memory (Avail/Total): Ph:52M/1015M, Ph+PgF:3270M/5518M, VA:1414M/2047M
Instance name: noida
Redo thread mounted by this instance: 1
Oracle process number: 18
Windows thread id: 3952, image: ORACLE.EXE (SHAD)
*** 2011-05-13 13:28:03.750
*** SESSION ID:(170.5) 2011-05-13 13:28:03.750
*** CLIENT ID:() 2011-05-13 13:28:03.750
*** SERVICE NAME:() 2011-05-13 13:28:03.750
*** MODULE NAME:(sqlplus.exe) 2011-05-13 13:28:03.750
*** ACTION NAME:() 2011-05-13 13:28:03.750
Successfully allocated 2 recovery slaves
*** 2011-05-13 13:28:04.078
Using 545 overflow buffers per recovery slave
Thread 1 checkpoint: logseq 11, block 2, scn 1582181
cache-low rba: logseq 11, block 87608
on-disk rba: logseq 11, block 89694, scn 1628819
start recovery at logseq 11, block 87608, scn 0
==== Redo read statistics for thread 1 ====
Total physical reads (from disk and memory): 4096Kb
-- Redo read_disk statistics --
Read rate (ASYNC): 1043Kb in 0.72s => 1.41 Mb/sec
Longest record: 13Kb, moves: 0/1112 (0%)
Change moves: 2/51 (3%), moved: 0Mb
Longest LWN: 404Kb, moves: 0/60 (0%), moved: 0Mb
Last redo scn: 0x0000.0018da92 (1628818)
----------------------------------------------
*** 2011-05-13 13:28:05.593
----- Recovery Hash Table Statistics ---------
Hash table buckets = 32768
Longest hash chain = 2
Average hash chain = 414/413 = 1.0
Max compares per lookup = 1
Avg compares per lookup = 1510/1998 = 0.8
----------------------------------------------
*** 2011-05-13 13:28:05.593
KCRA: start recovery claims for 414 data blocks
*** 2011-05-13 13:28:05.609
KCRA: blocks processed = 414/414, claimed = 414, eliminated = 0
*** 2011-05-13 13:28:07.281
Recovery of Online Redo Log: Thread 1 Group 2 Seq 11 Reading mem 0
*** 2011-05-13 13:28:07.703
Completed redo application
*** 2011-05-13 13:28:08.750
Completed recovery checkpoint
IR RIA: redo_size 1068032 bytes, time_taken 193 ms
*** 2011-05-13 13:28:09.406
----- Recovery Hash Table Statistics ---------
Hash table buckets = 32768
Longest hash chain = 2
Average hash chain = 414/413 = 1.0
Max compares per lookup = 2
Avg compares per lookup = 1739/1923 = 0.9
----------------------------------------------
*** 2011-05-13 13:28:26.921
kwqmnich: current time:: 7: 58: 26
kwqmnich: instance no 0 check_only flag 1
*** 2011-05-13 13:28:27.250
kwqmnich: initialized job cache structure
*** MODULE NAME:(Oracle Enterprise Manager.pin EM plsql) 2011-05-13 13:29:07.781
*** ACTION NAME:(start) 2011-05-13 13:29:07.781
*** 2011-05-13 13:29:07.781
-- The following are current System-scope REDO Log Archival related
-- parameters and can be included in the database initialization file.
-- LOG_ARCHIVE_DEST=''
-- LOG_ARCHIVE_DUPLEX_DEST=''
-- LOG_ARCHIVE_FORMAT=noida_%s_%t_%r.arc
-- DB_UNIQUE_NAME="noida"
-- LOG_ARCHIVE_CONFIG='SEND, RECEIVE, NODG_CONFIG'
-- LOG_ARCHIVE_MAX_PROCESSES=4
-- STANDBY_FILE_MANAGEMENT=MANUAL
-- STANDBY_ARCHIVE_DEST=%ORACLE_HOME%\RDBMS
-- FAL_CLIENT=''
-- FAL_SERVER=''
-- LOG_ARCHIVE_DEST_1='LOCATION=D:\archive\'
-- LOG_ARCHIVE_DEST_1='OPTIONAL REOPEN=300 NODELAY'
-- LOG_ARCHIVE_DEST_1='ARCH NOAFFIRM NOEXPEDITE NOVERIFY SYNC'
-- LOG_ARCHIVE_DEST_1='REGISTER NOALTERNATE NODEPENDENCY'
-- LOG_ARCHIVE_DEST_1='NOMAX_FAILURE NOQUOTA_SIZE NOQUOTA_USED NODB_UNIQUE_NAME'
-- LOG_ARCHIVE_DEST_1='VALID_FOR=(PRIMARY_ROLE,ONLINE_LOGFILES)'
-- LOG_ARCHIVE_DEST_STATE_1=ENABLE
-- Below are two sets of SQL statements, each of which creates a new
-- control file and uses it to open the database. The first set opens
-- the database with the NORESETLOGS option and should be used only if
-- the current versions of all online logs are available. The second
-- set opens the database with the RESETLOGS option and should be used
-- if online logs are unavailable.
-- The appropriate set of statements can be copied from the trace into
-- a script file, edited as necessary, and executed when there is a
-- need to re-create the control file.
--     Set #1. NORESETLOGS case
-- The following commands will create a new control file and use it
-- to open the database.
-- Data used by Recovery Manager will be lost.
-- Additional logs may be required for media recovery of offline
-- Use this only if the current versions of all online logs are
-- available.
-- After mounting the created controlfile, the following SQL
-- statement will place the database in the appropriate
-- protection mode:
--  ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "NOIDA" NORESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 'D:\ORACLE\ORADATA\NOIDA\REDO01.LOG' SIZE 50M,
  GROUP 2 'D:\ORACLE\ORADATA\NOIDA\REDO02.LOG' SIZE 50M,
  GROUP 3 'D:\ORACLE\ORADATA\NOIDA\REDO03.LOG' SIZE 50M
-- STANDBY LOGFILE
DATAFILE
  'D:\ORACLE\ORADATA\NOIDA\SYSTEM01.DBF',
  'D:\ORACLE\ORADATA\NOIDA\SYSAUX01.DBF',
  'D:\ORACLE\ORADATA\NOIDA\USERS01.DBF',
  'D:\ORACLE\ORADATA\NOIDA\EXAMPLE01.DBF',
  'D:\ORACLE\ORADATA\NOIDA\UNDOTBS02.DBF'
CHARACTER SET WE8MSWIN1252
;
-- Configure RMAN configuration record 1
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RETENTION POLICY','TO RECOVERY WINDOW OF 2 DAYS');
-- Configure RMAN configuration record 2
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('BACKUP OPTIMIZATION','ON');
-- Configure RMAN configuration record 3
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CONTROLFILE AUTOBACKUP','ON');
-- Configure RMAN configuration record 4
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE','DISK TO ''D:\rman_bkp\cf\%F''');
-- Configure RMAN configuration record 5
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('ENCRYPTION FOR DATABASE','OFF');
-- Configure RMAN configuration record 6
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CHANNEL','DEVICE TYPE DISK FORMAT ''D:\rman_bkp\%U''');
-- Commands to re-create incarnation table
-- Below log names MUST be changed to existing filenames on
-- disk. Any one log file from each branch can be used to
-- re-create incarnation records.
-- ALTER DATABASE REGISTER LOGFILE 'D:\ARCHIVE\NOIDA_1_1_636026939.ARC';
-- ALTER DATABASE REGISTER LOGFILE 'D:\ARCHIVE\NOIDA_1_1_749730106.ARC';
-- ALTER DATABASE REGISTER LOGFILE 'D:\ARCHIVE\NOIDA_1_1_750184743.ARC';
-- Recovery is required if any of the datafiles are restored backups,
-- or if the last shutdown was not normal or immediate.
RECOVER DATABASE
-- All logs need archiving and a log switch is needed.
ALTER SYSTEM ARCHIVE LOG ALL;
-- Database can now be opened normally.
ALTER DATABASE OPEN;
-- Commands to add tempfiles to temporary tablespaces.
-- Online tempfiles have complete space information.
-- Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE 'D:\ORACLE\ORADATA\NOIDA\TEMP01.DBF' SIZE 20971520 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
-- End of tempfile additions.
--     Set #2. RESETLOGS case
-- The following commands will create a new control file and use it
-- to open the database.
-- Data used by Recovery Manager will be lost.
-- The contents of online logs will be lost and all backups will
-- be invalidated. Use this only if online logs are damaged.
-- After mounting the created controlfile, the following SQL
-- statement will place the database in the appropriate
-- protection mode:
--  ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "NOIDA" RESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 'D:\ORACLE\ORADATA\NOIDA\REDO01.LOG' SIZE 50M,
  GROUP 2 'D:\ORACLE\ORADATA\NOIDA\REDO02.LOG' SIZE 50M,
  GROUP 3 'D:\ORACLE\ORADATA\NOIDA\REDO03.LOG' SIZE 50M
-- STANDBY LOGFILE
DATAFILE
  'D:\ORACLE\ORADATA\NOIDA\SYSTEM01.DBF',
  'D:\ORACLE\ORADATA\NOIDA\SYSAUX01.DBF',
  'D:\ORACLE\ORADATA\NOIDA\USERS01.DBF',
  'D:\ORACLE\ORADATA\NOIDA\EXAMPLE01.DBF',
  'D:\ORACLE\ORADATA\NOIDA\UNDOTBS02.DBF'
CHARACTER SET WE8MSWIN1252
;
-- Configure RMAN configuration record 1
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('RETENTION POLICY','TO RECOVERY WINDOW OF 2 DAYS');
-- Configure RMAN configuration record 2
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('BACKUP OPTIMIZATION','ON');
-- Configure RMAN configuration record 3
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CONTROLFILE AUTOBACKUP','ON');
-- Configure RMAN configuration record 4
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE','DISK TO ''D:\rman_bkp\cf\%F''');
-- Configure RMAN configuration record 5
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('ENCRYPTION FOR DATABASE','OFF');
-- Configure RMAN configuration record 6
VARIABLE RECNO NUMBER;
EXECUTE :RECNO := SYS.DBMS_BACKUP_RESTORE.SETCONFIG('CHANNEL','DEVICE TYPE DISK FORMAT ''D:\rman_bkp\%U''');
-- Commands to re-create incarnation table
-- Below log names MUST be changed to existing filenames on
-- disk. Any one log file from each branch can be used to
-- re-create incarnation records.
-- ALTER DATABASE REGISTER LOGFILE 'D:\ARCHIVE\NOIDA_1_1_636026939.ARC';
-- ALTER DATABASE REGISTER LOGFILE 'D:\ARCHIVE\NOIDA_1_1_749730106.ARC';
-- ALTER DATABASE REGISTER LOGFILE 'D:\ARCHIVE\NOIDA_1_1_750184743.ARC';
-- Recovery is required if any of the datafiles are restored backups,
-- or if the last shutdown was not normal or immediate.
RECOVER DATABASE USING BACKUP CONTROLFILE
-- Database can now be opened zeroing the online logs.
ALTER DATABASE OPEN RESETLOGS;
-- Commands to add tempfiles to temporary tablespaces.
-- Online tempfiles have complete space information.
-- Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE 'D:\ORACLE\ORADATA\NOIDA\TEMP01.DBF' SIZE 20971520 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
-- End of tempfile additions.
Edit the above file: replace "REUSE" with "SET", change the database name from "NOIDA" to "DELHI", and point the file names at the clone's directory (since the datafiles will be copied to 'D:\oracle\oradata\delhi\' in the next step). After editing, it looks as follows:
CREATE CONTROLFILE SET DATABASE "DELHI" RESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 'D:\ORACLE\ORADATA\DELHI\REDO01.LOG' SIZE 50M,
  GROUP 2 'D:\ORACLE\ORADATA\DELHI\REDO02.LOG' SIZE 50M,
  GROUP 3 'D:\ORACLE\ORADATA\DELHI\REDO03.LOG' SIZE 50M
DATAFILE
  'D:\ORACLE\ORADATA\DELHI\SYSTEM01.DBF',
  'D:\ORACLE\ORADATA\DELHI\SYSAUX01.DBF',
  'D:\ORACLE\ORADATA\DELHI\USERS01.DBF',
  'D:\ORACLE\ORADATA\DELHI\EXAMPLE01.DBF',
  'D:\ORACLE\ORADATA\DELHI\UNDOTBS02.DBF'
CHARACTER SET WE8MSWIN1252;
Save the above edited file as create_control.sql.
Step 7 : Restore the datafiles :
Shut down the production database "noida" and copy all the datafiles from the production to the clone database. In my case, I copied all the datafiles from 'D:\oracle\oradata\noida\' to 'D:\oracle\oradata\delhi\'.
Step 8 : Execute the controlfile script :
Since the clone database "delhi" is in nomount stage, execute the create_control.sql script:
SQL> @C:\create_control.sql
Control file created.
Hence the controlfile is created, and the database is in mount stage.
Step 9 : Finally, open the clone database with the resetlogs option :
SQL> alter database open resetlogs;
Database altered.
SQL> select name, open_mode from v$database;
NAME      OPEN_MODE
--------- ----------
DELHI     READ WRITE

How to Drop UNDO Tablespace
It is not an easy task to drop the undo tablespace. Once I had to delete an undo tablespace for some reason, and I found that it is not straightforward to do. I got the following error while dropping it:
SQL> select tablespace_name,file_name from dba_data_files;
TABLESPACE_NAME   FILE_NAME
----------------- ---------------------------------------------
USERS             D:\ORACLE\ORADATA\NOIDA\USERS01.DBF
UNDOTBS1          D:\ORACLE\ORADATA\NOIDA\UNDOTBS01.DBF
SYSAUX            D:\ORACLE\ORADATA\NOIDA\SYSAUX01.DBF
SYSTEM            D:\ORACLE\ORADATA\NOIDA\SYSTEM01.DBF
EXAMPLE           D:\ORACLE\ORADATA\NOIDA\EXAMPLE01.DBF
SQL> drop tablespace undotbs1;
drop tablespace undotbs1
*
ERROR at line 1:
ORA-30013: undo tablespace 'UNDOTBS1' is currently in use
As the error indicates that the undo tablespace is in use, I issued the following command:
SQL> alter tablespace undotbs1 offline;
alter tablespace undotbs1 offline
*
ERROR at line 1:
ORA-30042: Cannot offline the undo tablespace.
Therefore, to drop an undo tablespace, we have to perform the following steps:
1.) Create a new undo tablespace.
2.) Make it the default undo tablespace and set undo management to manual by editing the parameter file, then restart the instance.
3.) Check that all segments of the old undo tablespace are offline.
4.) Drop the old tablespace.
5.) Change undo management back to auto by editing the parameter file and restart the database.
Step 1 : Create undo tablespace UNDOTBS2
SQL> create undo tablespace UNDOTBS2 datafile 'D:\ORACLE\ORADATA\NOIDA\UNDOTBS02.DBF' size 100M;
Tablespace created.
Step 2 : Edit the parameter file
SQL> alter system set undo_tablespace=UNDOTBS2;
System altered.
SQL> alter system set undo_management=MANUAL scope=spfile;
System altered.
SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area  426852352 bytes
Fixed Size                  1333648 bytes
Variable Size             360711792 bytes
Database Buffers           58720256 bytes
Redo Buffers                6086656 bytes
Database mounted.
Database opened.
SQL> show parameter undo_tablespace
NAME                TYPE        VALUE
------------------- ----------- ---------
undo_tablespace     string      UNDOTBS2
Step 3 : Check that all segments of the old undo tablespace are offline
SQL> select owner, segment_name, tablespace_name, status from dba_rollback_segs order by 3;
OWNER  SEGMENT_NAME             TABLESPACE_NAME  STATUS
------ ------------------------ ---------------- --------
SYS    SYSTEM                   SYSTEM           ONLINE
PUBLIC _SYSSMU10_1192467665$    UNDOTBS1         OFFLINE
PUBLIC _SYSSMU1_1192467665$     UNDOTBS1         OFFLINE
PUBLIC _SYSSMU2_1192467665$     UNDOTBS1         OFFLINE
PUBLIC _SYSSMU3_1192467665$     UNDOTBS1         OFFLINE
PUBLIC _SYSSMU4_1192467665$     UNDOTBS1         OFFLINE
PUBLIC _SYSSMU5_1192467665$     UNDOTBS1         OFFLINE
PUBLIC _SYSSMU6_1192467665$     UNDOTBS1         OFFLINE
PUBLIC _SYSSMU7_1192467665$     UNDOTBS1         OFFLINE
PUBLIC _SYSSMU8_1192467665$     UNDOTBS1         OFFLINE
PUBLIC _SYSSMU9_1192467665$     UNDOTBS1         ONLINE
PUBLIC _SYSSMU12_1304934663$    UNDOTBS2         OFFLINE
PUBLIC _SYSSMU13_1304934663$    UNDOTBS2         OFFLINE
PUBLIC _SYSSMU14_1304934663$    UNDOTBS2         OFFLINE
PUBLIC _SYSSMU15_1304934663$    UNDOTBS2         OFFLINE
PUBLIC _SYSSMU11_1304934663$    UNDOTBS2         OFFLINE
PUBLIC _SYSSMU17_1304934663$    UNDOTBS2         OFFLINE
PUBLIC _SYSSMU18_1304934663$    UNDOTBS2         OFFLINE
PUBLIC _SYSSMU19_1304934663$    UNDOTBS2         OFFLINE
PUBLIC _SYSSMU20_1304934663$    UNDOTBS2         OFFLINE
PUBLIC _SYSSMU16_1304934663$    UNDOTBS2         OFFLINE
21 rows selected.
If any of the old tablespace's segments is still online, change its status to offline with the command below:
SQL> alter rollback segment "_SYSSMU9_1192467665$" offline;
Step 4 : Drop the old undo tablespace
SQL> drop tablespace UNDOTBS1 including contents and datafiles;
Tablespace dropped.
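A related check worth doing before the drop: confirm that no active transaction is still bound to a segment of the old undo tablespace. A minimal sketch using the standard V$TRANSACTION, V$ROLLNAME and V$SESSION views (any session it returns must commit, roll back, or be killed first):
SQL> select s.sid, s.serial#, s.username, r.name undo_segment
  2  from v$transaction t, v$rollname r, v$session s
  3  where t.xidusn = r.usn
  4  and t.ses_addr = s.saddr;
If this returns no rows for UNDOTBS1 segments, the tablespace can be taken offline and dropped safely.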
Step 5 : Change undo management to auto and restart the database
SQL> alter system set undo_management=auto scope=spfile;
System altered.
SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area  426852352 bytes
Fixed Size                  1333648 bytes
Variable Size             364906096 bytes
Database Buffers           54525952 bytes
Redo Buffers                6086656 bytes
Database mounted.
Database opened.
SQL> show parameter undo_tablespace
NAME                TYPE        VALUE
------------------- ----------- ---------
undo_tablespace     string      UNDOTBS2
Rman Data Recovery Advisor in Oracle 11g
The Data Recovery Advisor automatically diagnoses corruption or loss of persistent data on disk, determines the appropriate repair options, and executes repairs at the user's request. This reduces the complexity of the recovery process, thereby reducing the Mean Time To Recover (MTTR). The advisor comes in two flavors: command-line mode and as a screen in Oracle Enterprise Manager Database Control. The following commands are used in the Data Recovery Advisor:
1. LIST FAILURE
2. LIST FAILURE DETAILS
3. ADVISE FAILURE
4. REPAIR FAILURE
Before we can start identifying and repairing failures, we need to damage a datafile. In this scenario, I shut down the database, opened one of the datafiles (users01.dbf) with WordPad (an OS utility), edited two letters and saved the file, and then opened the database and got the following error message:
C:\>sqlplus sys/ramtech@noida as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Fri May 6 14:11:23 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup
ORACLE instance started.
Total System Global Area 426852352 bytes
Fixed Size                  1333648 bytes
Variable Size             318768752 bytes
Database Buffers          100663296 bytes
Redo Buffers                6086656 bytes
Database mounted.
ORA-01157 : cannot identify/lock data file 4 - see DBWR trace file
ORA-01110 : data file 4: 'D:\ORACLE\ORADATA\NOIDA\USERS01.DBF'
Since the error has occurred, we want to find out what happened, so we connect to RMAN and check the failure:
C:\>rman target sys/ramtech@noida
Recovery Manager: Release 11.1.0.6.0 - Production on Fri May 6 14:16:06 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
connected to target database: NOIDA (DBID=1503672566, not open)
LIST FAILURE : If there is no error, this command comes back with the message "no failures found that match specification", and if there is an error, a more explanatory message follows:
RMAN> list failure;
using target database control file instead of recovery catalog
List of Database Failures
=========================
Failure ID Priority Status    Time Detected Summary
---------- -------- --------- ------------- -------
382        HIGH     OPEN      06-MAY-11     One or more non-system datafiles are corrupt
This message shows that some datafiles are corrupt. As the datafiles belong to a tablespace other than SYSTEM, the database stays up with that tablespace offline. This error is fairly critical, so the priority is set to HIGH. Each failure gets a Failure ID, which makes it easier to identify and address individual failures. For instance, we can issue the following command to get the details of failure 382.
LIST FAILURE DETAILS : This command shows us the exact cause of the error, giving the details for failure ID 382:
RMAN> list failure 382 detail;
List of Database Failures
=========================
Failure ID Priority Status    Time Detected Summary
---------- -------- --------- ------------- -------
382        HIGH     OPEN      06-MAY-11     One or more non-system datafiles are corrupt
Impact: See impact for individual child failures
List of child failures for parent failure ID 382
Failure ID Priority Status    Time Detected Summary
---------- -------- --------- ------------- -------
385        HIGH     OPEN      06-MAY-11     Datafile 4: 'D:\ORACLE\ORADATA\NOIDA\USERS01.DBF' is corrupt
Impact: Some objects in tablespace USERS might be unavailable
ADVISE FAILURE : This command responds with a detailed explanation of the error and how to correct it:
RMAN> advise failure;
List of Database Failures
=========================
Failure ID Priority Status    Time Detected Summary
---------- -------- --------- ------------- -------
382        HIGH     OPEN      06-MAY-11     One or more non-system datafiles are corrupt
Impact: See impact for individual child failures
List of child failures for parent failure ID 382
Failure ID Priority Status    Time Detected Summary
---------- -------- --------- ------------- -------
385        HIGH     OPEN      06-MAY-11     Datafile 4: 'D:\ORACLE\ORADATA\NOIDA\USERS01.DBF' is corrupt
Impact: Some objects in tablespace USERS might be unavailable
analyzing automatic repair options; this may take some time
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=155 device type=DISK
analyzing automatic repair options complete
Mandatory Manual Actions
========================
no manual actions available
Optional Manual Actions
=======================
no manual actions available
Automated Repair Options
========================
Option Repair Description
------ ------------------
1      Restore and recover datafile 4
Strategy: The repair includes complete media recovery with no data loss
Repair script: d:\oracle\diag\rdbms\noida\noida\hm\reco_1928090031.hm
This output has several important parts. First, the advisor analyzes the error. In this case, it's pretty obvious: the datafile is corrupt. Next, it suggests a strategy. In this case, this is fairly simple as well: restore and recover the file. The dynamic performance view V$IR_MANUAL_CHECKLIST also shows this information. However, the most useful thing the Data Recovery Advisor does is shown in the very last line: it generates a script that can be used to repair the datafile or resolve the issue. The script does all the work; we don't have to write a single line of code. Sometimes the advisor doesn't have all the information it needs. For instance, in this case, it does not know if someone moved the file to a different location or renamed it. In that case, it advises moving the file back to the original location and name (under Optional Manual Actions). So the script is prepared for us, but I would verify what the script actually does first. So, I issue the following command to "preview" the actions the repair task will execute:
RMAN> repair failure preview;
Strategy: The repair includes complete media recovery with no data loss
Repair script: d:\oracle\diag\rdbms\noida\noida\hm\reco_1928090031.hm
contents of repair script:
# restore and recover datafile
restore datafile 4;
recover datafile 4;
This is good; the repair does the same thing I would have done myself using RMAN. Now I can execute the actual repair by issuing:
REPAIR FAILURE : This command executes the above script. After recovering the tablespace, it prompts to open the database.
RMAN> repair failure;
Strategy: The repair includes complete media recovery with no data loss
Repair script: d:\oracle\diag\rdbms\noida\noida\hm\reco_1928090031.hm
contents of repair script:
# restore and recover datafile
restore datafile 4;
recover datafile 4;
Do you really want to execute the above repair (enter YES or NO)? yes
executing repair script
Starting restore at 06-MAY-11
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00004 to D:\ORACLE\ORADATA\NOIDA\USERS01.DBF
channel ORA_DISK_1: reading from backup piece D:\RMAN_BKP\03MBDHJ7_1_1
channel ORA_DISK_1: piece handle=D:\RMAN_BKP\03MBDHJ7_1_1 tag=TAG20110503T141047
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
Finished restore at 06-MAY-11
Starting recover at 06-MAY-11
using channel ORA_DISK_1
starting media recovery
archived log for thread 1 with sequence 20 is already on disk as file D:\ARCHIVE\NOIDA_20_1_749730106.ARC
archived log for thread 1 with sequence 1 is already on disk as file D:\ARCHIVE\NOIDA_1_1_750184743.ARC
archived log for thread 1 with sequence 2 is already on disk as file D:\ARCHIVE\NOIDA_2_1_750184743.ARC
archived log for thread 1 with sequence 3 is already on disk as file D:\ARCHIVE\NOIDA_3_1_750184743.ARC
archived log for thread 1 with sequence 4 is already on disk as file D:\ARCHIVE\NOIDA_4_1_750184743.ARC
archived log for thread 1 with sequence 5 is already on disk as file D:\ARCHIVE\NOIDA_5_1_750184743.ARC
archived log file name=D:\ARCHIVE\NOIDA_20_1_749730106.ARC thread=1 sequence=20
archived log file name=D:\ARCHIVE\NOIDA_1_1_750184743.ARC thread=1 sequence=1
archived log file name=D:\ARCHIVE\NOIDA_2_1_750184743.ARC thread=1 sequence=2
archived log file name=D:\ARCHIVE\NOIDA_3_1_750184743.ARC thread=1 sequence=3
media recovery complete, elapsed time: 00:00:13
Finished recover at 06-MAY-11
repair failure complete
Do you want to open the database (enter YES or NO)? Y
database opened
Note how RMAN prompts us before attempting the repair. In a scripted run we may not want that; rather, we would want it to just go ahead and repair without an additional prompt. In such a case, use repair failure noprompt at the RMAN prompt. Several views have been added in Oracle 11g to support the Data Recovery Advisor:
V$IR_FAILURE - Provides information on the failure. Note that records in this view can be hierarchical.
V$IR_FAILURE_SET - Provides a list of the various advice records associated with the failure. We can use this view to join V$IR_FAILURE to the V$IR_MANUAL_CHECKLIST view.
V$IR_MANUAL_CHECKLIST - Provides detailed informational messages related to the failure. These messages describe how to manually correct the problem.
V$IR_REPAIR - When joined with V$IR_FAILURE and V$IR_FAILURE_SET, this view provides a pointer to the physical file created by Oracle that contains the repair steps required to correct a detected error.
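As a quick illustration of these views, a minimal sketch against V$IR_FAILURE (assuming the standard 11g columns) to list the open failures the advisor is tracking:
SQL> select failure_id, priority, status, time_detected, description
  2  from v$ir_failure
  3  where status = 'OPEN';
This returns the same failure records that RMAN's LIST FAILURE command displays.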
Complete loss of all oracle datafiles, redologs and controlfiles (Disaster Recovery)
In this post, we will cover the disaster recovery situation where the oracle database server has been destroyed and all the oracle database files (datafiles, controlfiles, redologs) are lost. In such a scenario the database can still be recovered if we have a valid backup, and it is possible to recover all the data up to the last full backup. Here, in this testing environment, we take an RMAN full backup and then delete the database (through DBCA), so that finally we are left with only the RMAN full backup. Let's have a look at the steps below:
1.) Create the directory structure for datafiles and diagnostic files
2.) Create oracle services
3.) Configure listener and tns (service)
4.) Restore spfile
5.) Restore controlfile
6.) Restore datafiles
7.) Recover the database and open it
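One prerequisite worth noting: the DBID of the database must be known before the disaster, because with every file gone there is nothing left on the server to tell us what it was, and we will need it to restore the spfile and controlfile from autobackup. Record it while the database is still alive, for example:
SQL> select dbid, name from v$database;
      DBID NAME
---------- ---------
1503672566 NOIDA
The DBID shown here is the one used with SET DBID in step 4 below.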
1.) Create directory structure : Create all directories required for datafiles, (online and archived) logs, control files and backups. All directory paths should match those on the original server, though this is not mandatory; even if we do not know where the database files were located, we can still recover the database (I will come back to this later).
2.) Create oracle services : On Windows we have to create an Oracle service; on Linux and Unix we create a password file instead.
C:\>oradim -new -sid noida -intpwd noida -startmode m
Instance created
For the password file:
$ orapwd file=filename password=noida entries=5 force=y
3.) Configure listener and tns (service name) : Configure the listener through Net Manager and reload it:
LSNRCTL> reload
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521)))
The command completed successfully
LSNRCTL> status
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for 32-bit Windows: Version 11.1.0.6.0 - Production
Start Date                03-MAY-2011 13:45:31
Uptime                    0 days 2 hr. 19 min. 34 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   D:\oracle\product\11.1.0\db_1\network\admin\listener.ora
Listener Log File         d:\oracle\diag\tnslsnr\xxxx\listener\alert\log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\EXTPROC1521ipc)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=xxxx)(PORT=1521)))
Services Summary...
Service "noida" has 1 instance(s).
Instance "noida", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
LSNRCTL> exit
Now configure the tns entry through NETCA and check it using tnsping:
C:\>tnsping noida
TNS Ping Utility for 32-bit Windows: Version 11.1.0.6.0 - Production on 03-MAY-2011 16:05:12
Copyright (c) 1997, 2007, Oracle. All rights reserved.
Used parameter files: D:\oracle\product\11.1.0\db_1\network\admin\sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = xxxx)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = noida)))
OK (40 msec)
4.) Restore spfile : Here we will restore the spfile from the RMAN backup. But before that we have to set the DBID and start the database in nomount stage with a dummy pfile.
C:\> rman target sys/noida@noida
Recovery Manager: Release 11.1.0.6.0 - Production on Tue May 3 16:05:23 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
connected to target database (not started)
RMAN> set dbid=1503672566
executing command: SET DBID
RMAN> startup force nomount
startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file 'D:\ORACLE\PRODUCT\11.1.0\DB_1\DATABASE\INITNOIDA.ORA'
starting Oracle instance without parameter file for retrieval of spfile
Oracle instance started
Total System Global Area  159019008 bytes
Fixed Size                  1331852 bytes
Variable Size              67112308 bytes
Database Buffers           83886080 bytes
Redo Buffers                6688768 bytes
RMAN> restore spfile from 'D:\rman_bkp\cf\C-1503672566-20110503-00';
Starting restore at 03-MAY-11
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=98 device type=DISK
channel ORA_DISK_1: restoring spfile from AUTOBACKUP D:\rman_bkp\cf\C-1503672566-20110503-00
channel ORA_DISK_1: SPFILE restore from AUTOBACKUP complete
Finished restore at 03-MAY-11
RMAN> exit
Recovery Manager complete.
5.) Restore controlfile : Now we shut the database down and start it with the spfile restored in the previous step. After startup with the spfile, we connect to RMAN and restore the controlfile.
C:\>sqlplus sys/noida@noida as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Tue May 3 16:07:04 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> shut immediate
ORA-01507: database not mounted
ORACLE instance shut down.
SQL> exit
C:\>sqlplus sys/noida@noida as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Tue May 3 16:07:25 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected.
SQL> startup nomount
ORACLE instance started.
Total System Global Area  426852352 bytes
Fixed Size                  1333648 bytes
Variable Size             369100400 bytes
Database Buffers           50331648 bytes
Redo Buffers                6086656 bytes
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
C:\>rman target sys/noida@noida
Recovery Manager: Release 11.1.0.6.0 - Production on Tue May 3 16:07:44 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
connected to target database: NOIDA (not mounted)
RMAN> restore controlfile from 'D:\rman_bkp\cf\C-1503672566-20110503-00';
Starting restore at 03-MAY-11
using channel ORA_DISK_1
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
output file name=D:\ORACLE\ORADATA\NOIDA\CONTROL01.CTL
output file name=D:\ORACLE\ORADATA\NOIDA\CONTROL02.CTL
output file name=D:\ORACLE\ORADATA\NOIDA\CONTROL03.CTL
Finished restore at 03-MAY-11
RMAN> alter database mount;
database mounted
released channel: ORA_DISK_1
6.) Restore datafiles : Now that the controlfile is restored and the database mounted, we restore all the datafiles.
RMAN> restore database;
Starting restore at 03-MAY-11
Starting implicit crosscheck backup at 03-MAY-11
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=153 device type=DISK
Crosschecked 3 objects
Finished implicit crosscheck backup at 03-MAY-11
Starting implicit crosscheck copy at 03-MAY-11
using channel ORA_DISK_1
Crosschecked 2 objects
Finished implicit crosscheck copy at 03-MAY-11
searching for all files in the recovery area
cataloging files...
no files cataloged
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to D:\ORACLE\ORADATA\NOIDA\SYSTEM01.DBF
channel ORA_DISK_1: restoring datafile 00002 to D:\ORACLE\ORADATA\NOIDA\SYSAUX01.DBF
channel ORA_DISK_1: restoring datafile 00003 to D:\ORACLE\ORADATA\NOIDA\UNDOTBS01.DBF
channel ORA_DISK_1: restoring datafile 00004 to D:\ORACLE\ORADATA\NOIDA\USERS01.DBF
channel ORA_DISK_1: restoring datafile 00005 to D:\ORACLE\ORADATA\NOIDA\EXAMPLE01.DBF
channel ORA_DISK_1: reading from backup piece D:\RMAN_BKP\03MBDHJ7_1_1
channel ORA_DISK_1: piece handle=D:\RMAN_BKP\03MBDHJ7_1_1 tag=TAG20110503T141047
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:01:35
Finished restore at 03-MAY-11
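Incidentally, before running such a restore we can ask RMAN which backups it intends to use. A small hedged aside using the standard PREVIEW option of the RESTORE command:
RMAN> restore database preview summary;
This lists the backup sets and archived logs RMAN would read, without restoring anything, which is a useful sanity check when all we have left is the backup itself.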
7.) Recover and open the database : While recovering the database, an error occurs for the next log sequence, which was never archived; so we find the last available log sequence, recover until that logseq, and open the database with the resetlogs option.
RMAN> recover database;
Starting recover at 03-MAY-11
using channel ORA_DISK_1
starting media recovery
archived log for thread 1 with sequence 20 is already on disk as file D:\ARCHIVE\NOIDA_20_1_749730106.ARC
archived log file name=D:\ARCHIVE\NOIDA_20_1_749730106.ARC thread=1 sequence=20
unable to find archived log
archived log thread=1 sequence=21
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 05/03/2011 16:38:03
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 21 and starting
RMAN> recover database until logseq 21;
Starting recover at 03-MAY-11
using channel ORA_DISK_1
starting media recovery
media recovery complete, elapsed time: 00:00:03
Finished recover at 03-MAY-11
RMAN> alter database open resetlogs;
database opened
Hence, we have restored the database successfully.
Format for LOG_ARCHIVE_FORMAT in Oracle
The LOG_ARCHIVE_FORMAT parameter controls the format of the archived log file name. This parameter is only meaningful when the database is in ARCHIVELOG mode. LOG_ARCHIVE_FORMAT is static in nature, so it comes into effect after a restart of the instance. If the format defined in log_archive_format is invalid, the database will start up, but the archiver will fail to archive logs, which will cause the database to hang, and the alert log will report:
"ORA-00294: invalid archivelog format specifier.."
So if we change this parameter, a quick test can be done by forcing a log switch to make sure Oracle is able to archive the redo log. The format for specifying the archived redo log filename is given below:
LOG_ARCHIVE_FORMAT = "LOG%s_%t_%r.arc"
The variables that can be used with the LOG_ARCHIVE_FORMAT parameter are given
below:
%s – log sequence number
%S – log sequence number, padded with zeros
%t – thread number
%T – thread number, padded with zeros
%a – activation id
%d – database id
%r – resetlogs id
Whenever uppercase is used for a variable, such as %S or %T, it forces the value of the variable to be of fixed length, padded on the left with zeros. Below is a demo of the log_archive_format parameter:
SQL> alter system set log_archive_dest_1='location=D:\archive\';
System altered.
SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            D:\archive\
Oldest online log sequence     11
Next log sequence to archive   13
Current log sequence           13
SQL> alter system set log_archive_format='noida_%s_%t_%r.arc' scope=spfile;
System altered.
SQL> shut immediate
SQL> startup
SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            D:\archive\
Oldest online log sequence     11
Next log sequence to archive   13
Current log sequence           13
SQL> alter system switch logfile;
System altered.
The new archived log file is now named like 'NOIDA_13_1_749730106.ARC'.
Automated Checkpoint Tuning (MTTR)
Determining the time to recover from an instance failure is a necessary component of meeting the required service level agreements. For example, if service levels dictate that when a node fails, instance recovery time can be no more than 3 minutes, FAST_START_MTTR_TARGET should be set to 180.
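For instance, a minimal sketch of setting that 3-minute target (FAST_START_MTTR_TARGET is a standard dynamic parameter, so no restart is needed):
SQL> alter system set fast_start_mttr_target=180 scope=both;
System altered.
Oracle will then adjust its incremental checkpoint writes so that crash recovery is expected to complete within roughly 180 seconds.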
Fast-start checkpointing refers to the periodic writes by the database writer (DBWn) processes that flush changed data blocks from the Oracle buffer cache to disk and advance the thread checkpoint. Setting the database parameter FAST_START_MTTR_TARGET to a value greater than zero enables the fast-start checkpointing feature. Fast-start checkpointing should always be enabled for the following reasons:
It reduces the time required for cache recovery, and makes instance recovery time-bounded and predictable. This is accomplished by limiting the number of dirty buffers (data blocks which have changes in memory that still need to be written to disk) and the number of redo records (changes in the database) generated between the most recent redo record and the last checkpoint.
Fast-start checkpointing eliminates the bulk writes and corresponding I/O spikes that occur traditionally with interval-based checkpoints, providing a smoother, more consistent I/O pattern that is more predictable and easier to manage. If the system is not already near or at its maximum I/O capacity, fast-start checkpointing will have a negligible impact on performance. Although fast-start checkpointing results in increased write activity, there is little reduction in database throughput, provided the system has sufficient I/O capacity.
Check-pointing : Check-pointing is an important Oracle activity which records the highest system change number (SCN) such that all data blocks with changes at or below that SCN are known to be written out to the data files. If there is a failure and then subsequent cache recovery, only the redo records containing changes at SCNs higher than the checkpoint need to be applied during recovery. As we are aware, instance and crash recovery occur in two steps - cache recovery followed by transaction recovery. During the cache recovery phase, also known as the rolling-forward stage, Oracle applies all committed and uncommitted changes in the redo log files to the affected data blocks. The work required for cache recovery processing is proportional to the rate of change to the database and the time between checkpoints.
Mean time to recover (MTTR) : Fast-start recovery can greatly reduce the mean time to recover (MTTR), with minimal effects on online application performance. Oracle continuously estimates the recovery time and automatically adjusts the check-pointing rate to meet the target recovery time. From 10g onwards, the Oracle database can self-tune check-pointing to achieve good recovery times with low impact on normal throughput; we no longer have to set any checkpoint-related parameters. This method reduces the time required for cache recovery and makes the recovery bounded and predictable by limiting the number of dirty buffers and the number of redo records generated between the most recent redo record and the last checkpoint. Administrators specify a target (bounded) time to complete the cache recovery phase of recovery with the FAST_START_MTTR_TARGET initialization parameter, and Oracle automatically varies the incremental checkpoint writes to meet that target. The TARGET_MTTR column of V$INSTANCE_RECOVERY contains the MTTR target in effect. The ESTIMATED_MTTR column of V$INSTANCE_RECOVERY contains the estimated MTTR should a crash happen right away.
Enable MTTR advisory :
Enabling the MTTR Advisory involves setting two parameters:
STATISTICS_LEVEL = TYPICAL
FAST_START_MTTR_TARGET > 0
Estimate the value for FAST_START_MTTR_TARGET as follows:
SELECT TARGET_MTTR, ESTIMATED_MTTR, CKPT_BLOCK_WRITES
FROM V$INSTANCE_RECOVERY;
TARGET_MTTR ESTIMATED_MTTR CKPT_BLOCK_WRITES
----------- -------------- -----------------
        214             12            269880
FAST_START_MTTR_TARGET = 214;
Whenever we set FAST_START_MTTR_TARGET to a nonzero value, set the following parameters to 0:
LOG_CHECKPOINT_TIMEOUT = 0
LOG_CHECKPOINT_INTERVAL = 0
FAST_START_IO_TARGET = 0
Disable MTTR advisory :
FAST_START_MTTR_TARGET = 0
LOG_CHECKPOINT_INTERVAL = 200000
Who is using which UNDO or TEMP segment ?
The undo tablespace is common to all users of an instance, while temporary tablespaces are either assigned per user or a single default temporary tablespace is shared by all users. To determine who is using a particular UNDO or rollback segment, use the query below:
SQL> SELECT TO_CHAR(s.sid)||','||TO_CHAR(s.serial#) sid_serial,
            NVL(s.username, 'None') orauser,
            s.program,
            r.name undoseg,
            t.used_ublk * TO_NUMBER(x.value)/1024||'K' "Undo"
     FROM   sys.v_$rollname r, sys.v_$session s, sys.v_$transaction t, sys.v_$parameter x
     WHERE  s.taddr = t.addr
     AND    r.usn = t.xidusn(+)
     AND    x.name = 'db_block_size';
Output :
SID_SERIAL ORAUSER  PROGRAM                  UNDOSEG    Undo
---------- -------- ------------------------ ---------- -----
260,7      SCOTT    [email protected]    _SYSSMU4$  8K
To determine which user is using a TEMP tablespace, fire the query below:
SQL> SELECT b.tablespace,
            ROUND(((b.blocks*p.value)/1024/1024),2)||'M' "SIZE",
            a.sid||','||a.serial# SID_SERIAL,
            a.username,
            a.program
     FROM   sys.v_$session a, sys.v_$sort_usage b, sys.v_$parameter p
     WHERE  p.name = 'db_block_size'
     AND    a.saddr = b.session_addr
     ORDER BY b.tablespace, b.blocks;
Output :
TABLESPACE SIZE  SID_SERIAL USERNAME PROGRAM
---------- ----- ---------- -------- ------------------------
TEMP       24M   260,7      SCOTT    [email protected]
Track Redo Generation per Hours and Days
Here are the scripts for tracking redo generation per hour and per day.
Track redo generation by day:
SQL> select trunc(completion_time) rundate,
            count(*) logswitch,
            round((sum(blocks*block_size)/1024/1024)) "REDO PER DAY (MB)"
     from   v$archived_log
     group  by trunc(completion_time)
     order  by 1;
Sample output :
RUNDATE    LOGSWITCH REDO PER DAY (MB)
---------- --------- -----------------
18-APR-11          2                 1
19-APR-11          5               230
20-APR-11         36              1659
21-APR-11         14               175
22-APR-11          5               147
Track the amount of redo generated per hour:
SQL> SELECT Start_Date, Start_Time, Num_Logs,
            Round(Num_Logs * (Vl.Bytes / (1024 * 1024)),2) AS Mbytes,
            Vdb.NAME AS Dbname
     FROM   (SELECT To_Char(Vlh.First_Time, 'YYYY-MM-DD') AS Start_Date,
                    To_Char(Vlh.First_Time, 'HH24') || ':00' AS Start_Time,
                    COUNT(Vlh.Thread#) Num_Logs
             FROM   V$log_History Vlh
             GROUP  BY To_Char(Vlh.First_Time, 'YYYY-MM-DD'),
                       To_Char(Vlh.First_Time, 'HH24') || ':00') Log_Hist,
            V$log Vl,
            V$database Vdb
     WHERE  Vl.Group# = 1
     ORDER  BY Log_Hist.Start_Date, Log_Hist.Start_Time;
Sample output :
START_DATE START NUM_LOGS MBYTES DBNAME
---------- ----- -------- ------ ------
2011-04-18 16:00        1     50 NOIDA
2011-04-18 17:00        2    100 NOIDA
2011-04-19 00:00        1     50 NOIDA
2011-04-19 09:00        1     50 NOIDA
2011-04-19 14:00        1     50 NOIDA
2011-04-19 20:00        1     50 NOIDA
2011-04-19 23:00        1     50 NOIDA
2011-04-20 06:00        1     50 NOIDA
2011-04-20 10:00        5    250 NOIDA
2011-04-20 11:00        8    400 NOIDA
2011-04-20 12:00       21   1050 NOIDA
2011-04-20 14:00        1     50 NOIDA
2011-04-21 09:00        1     50 NOIDA
2011-04-21 13:00        3    150 NOIDA
2011-04-21 15:00        1     50 NOIDA
2011-04-21 17:00        8     40 NOIDA
2011-04-21 22:00        1     50 NOIDA
2011-04-22 00:00        1     50 NOIDA
2011-04-22 05:00        1     50 NOIDA
2011-04-22 14:00        2    100 NOIDA
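As a quick cross-check on these scripts, the cumulative redo generated since instance startup can also be read from the standard 'redo size' statistic in V$SYSSTAT; a minimal sketch:
SQL> select name, round(value/1024/1024) "REDO SINCE STARTUP (MB)"
     from v$sysstat
     where name = 'redo size';
Sampling this value at intervals gives the redo rate without depending on log switches or the archived log history.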
Cloning A Database On The Same Server Using Rman
A nice feature of RMAN is the ability to duplicate, or clone, a database from a previous backup. It is possible to create a duplicate database on a remote server with the same file structure, on a remote server with a different file structure, or on the local server with a different file structure. In this article I'll demonstrate how to duplicate a database on the local server. This can prove useful when we want to recover selected objects from a backup, rather than roll back a whole database or tablespace.
Purpose of Database Duplication : The goal of database duplication is the creation of a duplicate database, which is a separate database that contains all or a subset of the data in the source database. A duplicate database is useful for a variety of purposes, most of which involve testing. We can perform the following tasks in a duplicate database:
1.) Test backup and recovery procedures.
2.) Test an upgrade to a new release of Oracle Database.
3.) Test the effect of applications on database performance.
4.) Generate reports.
5.) Export data, such as a table that was inadvertently dropped from the production database, and then import it back into the production database.
For example, we can duplicate the production database on host1 to host2, and then use the duplicate database on host2 to practice restoring and recovering this database while the production database on host1 operates as usual. Here, we will follow step by step the creation of the clone (duplicate) database. Terms used to describe the method:
Target: database to be cloned = "noida"
Duplicate or clone database = "clone"
Here the source and target database instances are on a Windows server.
1.) Prepare the init.ora file for the duplicate instance
Create the pfile from the spfile of the target database as below:
SQL> select name,open_mode from v$database;
NAME      OPEN_MODE
--------- ----------
NOIDA     READ WRITE
SQL> create pfile='c:\noidainit.ora' from spfile;
File created.
Now edit the noidainit file.
a.) Replace "noida" with "clone" b.) Add two below parameter as shown in clone parameter file. DB_FILE_NAME_CONVERT LOG_FILE_NAME_CONVERT a.) Below is the pfile of target database (noida) : noida.__db_cache_size=67108864 noida.__java_pool_size=12582912 noida.__large_pool_size=4194304 noida.__oracle_base='D:\oracle'#ORACLE_BASE set from environment noida.__pga_aggregate_target=83886080 noida.__sga_target=234881024 noida.__shared_io_pool_size=0 noida.__shared_pool_size=130023424 noida.__streams_pool_size=4194304 *.audit_trail='db' *.compatible='11.1.0.0.0' *.control_files='D:\ORACLE\ORADATA\NOIDA\CONTROL01.CTL','D:\ORACLE\ORADATA\NOI DA\CONTROL02.CTL','D:\ORACLE\ORADATA\NOIDA\CONTROL03.CTL'#Restore Controlfile *.db_block_size=8192 *.db_domain='' *.db_name='noida' *.db_recovery_file_dest='D:\oracle\flash_recovery_area' *.db_recovery_file_dest_size=2147483648 *.diagnostic_dest='D:\oracle' *.dispatchers='(PROTOCOL=TCP) (SERVICE=noidaXDB)' *.log_archive_dest_1='LOCATION=D:\archive\' *.log_archive_format='ARC%S_%R.%T' *.memory_target=315621376 *.open_cursors=300 *.processes=150 *.remote_login_passwordfile='EXCLUSIVE' *.undo_tablespace='UNDOTBS1' Now replace "noida" with "clone" and add two above parameter clone.__db_cache_size=67108864 clone.__java_pool_size=12582912 clone.__large_pool_size=4194304 clone.__oracle_base='D:\oracle'#ORACLE_BASE set from environment clone.__pga_aggregate_target=83886080 clone.__sga_target=234881024 clone.__shared_io_pool_size=0 clone.__shared_pool_size=130023424 clone.__streams_pool_size=4194304 *.audit_trail='db' *.compatible='11.1.0.0.0' *.control_files='D:\ORACLE\ORADATA\clone\CONTROL01.CTL','D:\ORACLE\ORADATA\clone\ CONTROL02.CTL','D:\ORACLE\ORADATA\clone\CONTROL03.CTL'#Restore Controlfile *.db_block_size=8192 *.db_domain='' *.db_name='clone' *.db_recovery_file_dest='D:\oracle\flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.diagnostic_dest='D:\oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=cloneXDB)'
*.log_archive_dest_1='LOCATION=D:\archive\'
*.log_archive_format='ARC%S_%R.%T'
*.memory_target=315621376
*.open_cursors=300
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.undo_tablespace='UNDOTBS1'
db_file_name_convert=('D:\oracle\oradata\noida\','D:\oracle\oradata\clone\')
log_file_name_convert=('D:\oracle\oradata\noida\','D:\oracle\oradata\clone\')
Step 2 : Create the password file (required for Oracle on Linux/Unix only):
orapwd file=/u01/app/oracle/product/9.2.0.1.0/dbs/orapwDUP password=password entries=10
Step 3 : Create the Oracle service (required for Oracle on Windows only):
C:\> oradim -new -sid clone -intpwd clone -startmode m
Step 4 : Create directories for database files
Create the required directories on the target server for datafiles, redo logs, control files, temporary files etc. This example assumes that all the database files will be stored under 'D:\oracle\oradata\clone' and 'D:\oracle\admin\clone\'.
Step 5 : Configure listener and service name (i.e., tnsnames.ora)
It is better to configure the listener through Net Manager, or we can add the details below to the listener.ora file:
(SID_DESC =
  (GLOBAL_DBNAME = clone)
  (ORACLE_HOME = D:\oracle\product\11.1.0\db_1)
  (SID_NAME = clone)
)
and perform the following steps:
C:\> lsnrctl
LSNRCTL> reload
or
LSNRCTL> stop
LSNRCTL> start
LSNRCTL> exit
Now add the following entries in tnsnames.ora:
CLONE =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = xxxx)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = clone)
    )
  )
Alternatively, we can use the netca command to configure the tns entry. Check the tns entry as below:
C:\> tnsping clone
TNS Ping Utility for 32-bit Windows: Version 11.1.0.6.0 - Production on 22-APR-2011 11:33:04
Copyright (c) 1997, 2007, Oracle. All rights reserved.
Used parameter files: D:\oracle\product\11.1.0\db_1\network\admin\sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = xxxx)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = clone)))
OK (30 msec)
Step 6 : Now connect to the duplicate database:
c:\> set ORACLE_SID=clone
c:\>sqlplus / as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Fri Apr 22 10:44:14 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup nomount pfile='C:\clone.ora';
ORACLE instance started.
Total System Global Area  318046208 bytes
Fixed Size                  1332920 bytes
Variable Size             234883400 bytes
Database Buffers           75497472 bytes
Redo Buffers                6332416 bytes
SQL> create spfile from pfile='C:\clone.ora';
SQL> shut immediate
ORA-01507: database not mounted
ORACLE instance shut down.
SQL> startup nomount
ORACLE instance started.
Total System Global Area  318046208 bytes
Fixed Size                  1332920 bytes
Variable Size             234883400 bytes
Database Buffers           75497472 bytes
Redo Buffers                6332416 bytes
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Step 7 : Now duplicate the target database:
C:\>rman target sys/ramtech@noida auxiliary sys/clone@clone
Recovery Manager: Release 11.1.0.6.0 - Production on Fri Apr 22 11:16:38 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
connected to target database: NOIDA (DBID=1502483083)
connected to auxiliary database: CLONE (not mounted)
RMAN> duplicate target database to "clone" nofilenamecheck;
Starting Duplicate Db at 22-APR-11
using target database control file instead of recovery catalog allocated channel: ORA_AUX_DISK_1 channel ORA_AUX_DISK_1: SID=170 device type=DISK contents of Memory Script: { set until scn 1955915; set newname for datafile 1 to "D:\ORACLE\ORADATA\CLONE\SYSTEM01.DBF"; set newname for datafile 2 to "D:\ORACLE\ORADATA\CLONE\SYSAUX01.DBF"; set newname for datafile 3 to "D:\ORACLE\ORADATA\CLONE\UNDOTBS01.DBF"; set newname for datafile 4 to "D:\ORACLE\ORADATA\CLONE\USERS01.DBF"; set newname for datafile 5 to "D:\ORACLE\ORADATA\CLONE\EXAMPLE01.DBF"; set newname for datafile 6 to "D:\ORACLE\ORADATA\CLONE\TRANS.DBF"; restore clone database ; } executing Memory Script executing command: SET until clause executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME executing command: SET NEWNAME Starting restore at 22-APR-11 using channel ORA_AUX_DISK_1 channel ORA_AUX_DISK_1: starting datafile backup set restore channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set channel ORA_AUX_DISK_1: restoring datafile 00001 to D:\ORACLE\ORADATA\CLONE\SYSTEM01.DBF channel ORA_AUX_DISK_1: restoring datafile 00002 to D:\ORACLE\ORADATA\CLONE\SYSAUX01.DBF channel ORA_AUX_DISK_1: restoring datafile 00003 to D:\ORACLE\ORADATA\CLONE\UNDOTBS01.DBF channel ORA_AUX_DISK_1: restoring datafile 00004 to D:\ORACLE\ORADATA\CLONE\USERS01.DBF channel ORA_AUX_DISK_1: restoring datafile 00005 to D:\ORACLE\ORADATA\CLONE\EXAMPLE01.DBF channel ORA_AUX_DISK_1: restoring datafile 00006 to D:\ORACLE\ORADATA\CLONE\TRANS.DBF channel ORA_AUX_DISK_1: reading from backup piece D:\ORACLE\FLASH_RECOVERY_AREA\NOIDA\BACKUPSET\2011_04_21\O1_MF_NNNDF_TA G20110421T134444_6TZSWBRW_.BKP channel ORA_AUX_DISK_1: piece handle=D:\ORACLE\FLASH_RECOVERY_AREA\NOIDA\BACKUPSET\2011_04_21\O1_MF_NN NDF_TAG20110421T134444_6TZSWBRW_.BKP tag=TAG20110421T134444 channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:03:35 Finished restore at 22-APR-11 sql statement: CREATE CONTROLFILE REUSE SET DATABASE "CLONE" RESETLOGS ARCHIVELOG MAXLOGFILES 37 MAXLOGMEMBERS 3 MAXDATAFILES 10 MAXINSTANCES 1 MAXLOGHISTORY 292 LOGFILE GROUP 1 ( 'D:\ORACLE\ORADATA\CLONE\REDO01.LOG' ) SIZE 50 M REUSE, GROUP 2 ( 'D:\ORACLE\ORADATA\CLONE\REDO02.LOG' ) SIZE 50 M REUSE, GROUP 3 ( 'D:\ORACLE\ORADATA\CLONE\REDO03.LOG' ) SIZE 50 M REUSE DATAFILE 'D:\ORACLE\ORADATA\CLONE\SYSTEM01.DBF' CHARACTER SET WE8MSWIN1252 contents of Memory Script: { switch clone datafile all; } executing Memory Script datafile 2 switched to datafile copy input datafile copy RECID=1 STAMP=749128842 file name=D:\ORACLE\ORADATA\CLONE\SYSAUX01.DBF datafile 3 switched to datafile copy input datafile copy RECID=2 STAMP=749128842 file name=D:\ORACLE\ORADATA\CLONE\UNDOTBS01.DBF datafile 4 switched to datafile copy input datafile copy RECID=3 STAMP=749128843 file name=D:\ORACLE\ORADATA\CLONE\USERS01.DBF datafile 5 switched to datafile copy input datafile copy RECID=4 STAMP=749128843 file name=D:\ORACLE\ORADATA\CLONE\EXAMPLE01.DBF datafile 6 switched to datafile copy input datafile copy RECID=5 STAMP=749128843 file name=D:\ORACLE\ORADATA\CLONE\TRANS.DBF contents of Memory Script: { set until scn 1955915; recover clone database delete archivelog ; } executing Memory Script executing command: SET until clause Starting recover at 22-APR-11 using channel ORA_AUX_DISK_1 starting media recovery archived log for thread 1 with sequence 47 is already on disk as file D:\ARCHIVE\ARC00047_0748802215.001 archived log for thread 1 with sequence 48 is already on disk as file D:\ARCHIVE\ARC00048_0748802215.001
archived log for thread 1 with sequence 49 is already on disk as file D:\ARCHIVE\ARC00049_0748802215.001
archived log for thread 1 with sequence 50 is already on disk as file D:\ARCHIVE\ARC00050_0748802215.001
archived log for thread 1 with sequence 51 is already on disk as file D:\ARCHIVE\ARC00051_0748802215.001
archived log for thread 1 with sequence 52 is already on disk as file D:\ARCHIVE\ARC00052_0748802215.001
archived log for thread 1 with sequence 53 is already on disk as file D:\ARCHIVE\ARC00053_0748802215.001
archived log for thread 1 with sequence 54 is already on disk as file D:\ARCHIVE\ARC00054_0748802215.001
archived log for thread 1 with sequence 55 is already on disk as file D:\ARCHIVE\ARC00055_0748802215.001
archived log for thread 1 with sequence 56 is already on disk as file D:\ARCHIVE\ARC00056_0748802215.001
archived log for thread 1 with sequence 57 is already on disk as file D:\ARCHIVE\ARC00057_0748802215.001
archived log for thread 1 with sequence 58 is already on disk as file D:\ARCHIVE\ARC00058_0748802215.001
archived log for thread 1 with sequence 59 is already on disk as file D:\ARCHIVE\ARC00059_0748802215.001
archived log file name=D:\ARCHIVE\ARC00047_0748802215.001 thread=1 sequence=47
archived log file name=D:\ARCHIVE\ARC00048_0748802215.001 thread=1 sequence=48
archived log file name=D:\ARCHIVE\ARC00049_0748802215.001 thread=1 sequence=49
archived log file name=D:\ARCHIVE\ARC00050_0748802215.001 thread=1 sequence=50
archived log file name=D:\ARCHIVE\ARC00051_0748802215.001 thread=1 sequence=51
archived log file name=D:\ARCHIVE\ARC00052_0748802215.001 thread=1 sequence=52
archived log file name=D:\ARCHIVE\ARC00053_0748802215.001 thread=1 sequence=53
archived log file name=D:\ARCHIVE\ARC00054_0748802215.001 thread=1 sequence=54
archived log file name=D:\ARCHIVE\ARC00055_0748802215.001 thread=1 sequence=55
archived log file name=D:\ARCHIVE\ARC00056_0748802215.001 thread=1 sequence=56
archived log file name=D:\ARCHIVE\ARC00057_0748802215.001 thread=1 sequence=57
archived log file name=D:\ARCHIVE\ARC00058_0748802215.001 thread=1 sequence=58
archived log file name=D:\ARCHIVE\ARC00059_0748802215.001 thread=1 sequence=59
media recovery complete, elapsed time: 00:01:35
Finished recover at 22-APR-11
contents of Memory Script:
{
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script
database dismounted
Oracle instance shut down
connected to auxiliary database (not started)
Oracle instance started
Total System Global Area  318046208 bytes
Fixed Size                  1332920 bytes
Variable Size             234883400 bytes
Database Buffers           75497472 bytes
Redo Buffers 6332416 bytes sql statement: CREATE CONTROLFILE REUSE SET DATABASE "CLONE" RESETLOGS ARCHIVELOG MAXLOGFILES 37 MAXLOGMEMBERS 3 MAXDATAFILES 10 MAXINSTANCES 1 MAXLOGHISTORY 292 LOGFILE GROUP 1 ( 'D:\ORACLE\ORADATA\CLONE\REDO01.LOG' ) SIZE 50 M REUSE, GROUP 2 ( 'D:\ORACLE\ORADATA\CLONE\REDO02.LOG' ) SIZE 50 M REUSE, GROUP 3 ( 'D:\ORACLE\ORADATA\CLONE\REDO03.LOG' ) SIZE 50 M REUSE DATAFILE 'D:\ORACLE\ORADATA\CLONE\SYSTEM01.DBF' CHARACTER SET WE8MSWIN1252 contents of Memory Script: { set newname for tempfile 1 to "D:\ORACLE\ORADATA\CLONE\TEMP02.DBF"; switch clone tempfile all; catalog clone datafilecopy "D:\ORACLE\ORADATA\CLONE\SYSAUX01.DBF"; catalog clone datafilecopy "D:\ORACLE\ORADATA\CLONE\UNDOTBS01.DBF"; catalog clone datafilecopy "D:\ORACLE\ORADATA\CLONE\USERS01.DBF"; catalog clone datafilecopy "D:\ORACLE\ORADATA\CLONE\EXAMPLE01.DBF"; catalog clone datafilecopy "D:\ORACLE\ORADATA\CLONE\TRANS.DBF"; switch clone datafile all; } executing Memory Script executing command: SET NEWNAME renamed tempfile 1 to D:\ORACLE\ORADATA\CLONE\TEMP02.DBF in control file cataloged datafile copy datafile copy file name=D:\ORACLE\ORADATA\CLONE\SYSAUX01.DBF RECID=1 STAMP=749128969 cataloged datafile copy datafile copy file name=D:\ORACLE\ORADATA\CLONE\UNDOTBS01.DBF RECID=2 STAMP=749128970 cataloged datafile copy datafile copy file name=D:\ORACLE\ORADATA\CLONE\USERS01.DBF RECID=3 STAMP=749128970 cataloged datafile copy datafile copy file name=D:\ORACLE\ORADATA\CLONE\EXAMPLE01.DBF RECID=4 STAMP=749128970 cataloged datafile copy datafile copy file name=D:\ORACLE\ORADATA\CLONE\TRANS.DBF RECID=5 STAMP=749128971 datafile 2 switched to datafile copy input datafile copy RECID=1 STAMP=749128969 file name=D:\ORACLE\ORADATA\CLONE\SYSAUX01.DBF datafile 3 switched to datafile copy input datafile copy RECID=2 STAMP=749128970 file name=D:\ORACLE\ORADATA\CLONE\UNDOTBS01.DBF datafile 4 switched to datafile copy input datafile copy RECID=3 STAMP=749128970 file
name=D:\ORACLE\ORADATA\CLONE\USERS01.DBF datafile 5 switched to datafile copy input datafile copy RECID=4 STAMP=749128970 file name=D:\ORACLE\ORADATA\CLONE\EXAMPLE01.DBF datafile 6 switched to datafile copy input datafile copy RECID=5 STAMP=749128971 file name=D:\ORACLE\ORADATA\CLONE\TRANS.DBF contents of Memory Script: { Alter clone database open resetlogs; } executing Memory Script database opened Finished Duplicate Db at 22-APR-11 RMAN> exit Recovery Manager complete. Step 8 : Check the Duplicate "clone" Database . C:\>sqlplus sys/clone@clone as sysdba SQL*Plus: Release 11.1.0.6.0 - Production on Fri Apr 22 11:43:05 2011 Copyright (c) 1982, 2007, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production With the Partitioning, OLAP, Data Mining and Real Application Testing options SQL> select name,open_mode from v$database; NAME OPEN_MODE --------- ---------CLONE READ WRITE
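One property worth verifying: RMAN's DUPLICATE command assigns the clone its own DBID (notice NOIDA was DBID=1502483083 in the duplicate session above), so both databases can later be registered in the same recovery catalog. A quick check against the standard v$database view:
SQL> select dbid, name from v$database;
The DBID reported in the clone should differ from the target's.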
Loss Of Controlfile in Various Scenarios
A control file is a small binary file that is part of an Oracle database. The control file is used to keep track of the database's status and physical structure, and it is absolutely crucial to database operation. Here, we will discuss various scenarios in which control files get lost or corrupted.
CASE 1 : One of the controlfiles is lost or corrupted while the database is shut down, and on startup we get the following error due to the missing controlfile:
C:\>sqlplus sys/xxxx@noida as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Mon Apr 18 15:41:33 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup
ORACLE instance started.
Total System Global Area  318046208 bytes
Fixed Size                  1332920 bytes
Variable Size             239077704 bytes
Database Buffers           71303168 bytes
Redo Buffers                6332416 bytes
ORA-00205: error in identifying control file, check alert log for more info
Checking the alert log file, we find the following information:
ALTER DATABASE MOUNT
Mon Apr 18 15:42:12 2011
ORA-00210: cannot open the specified control file
ORA-00202: control file: 'D:\ORACLE\ORADATA\NOIDA\CONTROL02.CTL'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
Mon Apr 18 15:42:14 2011
Checker run found 1 new persistent data failures
ORA-205 signalled during: ALTER DATABASE MOUNT
To solve this issue, copy one of the existing control files (say control01.ctl or control03.ctl) to the location where the missing control file resided, rename the copy to the missing file's name (in the above example, CONTROL02.CTL is missing), and then perform the following steps:
SQL> alter database mount;
Database altered.
SQL> alter database open;
Database altered.
SQL> select name,open_mode from v$database;
NAME      OPEN_MODE
--------- ----------
NOIDA     READ WRITE
CASE 2 : All the controlfiles are lost
If we have a valid backup and all the control files are lost, we can recover them from the autobackup of the controlfile, or by specifying the location of the controlfile autobackup.
C:\>sqlplus sys/xxxx@noida as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Mon Apr 18 16:21:55 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> startup nomount
ORACLE instance started.
Total System Global Area  318046208 bytes
Fixed Size                  1332920 bytes
Variable Size             272632136 bytes
Database Buffers           37748736 bytes
Redo Buffers                6332416 bytes
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
C:\>rman target sys/xxxx@noida
Recovery Manager: Release 11.1.0.6.0 - Production on Mon Apr 18 16:33:34 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
connected to target database: NOIDA (not mounted)
RMAN> restore controlfile from 'D:\orcl_bkp\cf\C-1502483083-20110418-01';    (location of the controlfile autobackup)
Starting restore at 18-APR-11
using channel ORA_DISK_1
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
output file name=D:\ORACLE\ORADATA\NOIDA\CONTROL01.CTL
output file name=D:\ORACLE\ORADATA\NOIDA\CONTROL02.CTL
output file name=D:\ORACLE\ORADATA\NOIDA\CONTROL03.CTL
Finished restore at 18-APR-11
RMAN> alter database mount;
database mounted
released channel: ORA_DISK_1
RMAN> recover database;
Starting recover at 18-APR-11
Starting implicit crosscheck backup at 18-APR-11
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=153 device type=DISK
Crosschecked 7 objects
Finished implicit crosscheck backup at 18-APR-11
Starting implicit crosscheck copy at 18-APR-11
using channel ORA_DISK_1
Finished implicit crosscheck copy at 18-APR-11
searching for all files in the recovery area
cataloging files...
no files cataloged
using channel ORA_DISK_1
starting media recovery
archived log for thread 1 with sequence 19 is already on disk as file D:\ORACLE\ORADATA\NOIDA\REDO01.LOG
archived log file name=D:\ORACLE\ORADATA\NOIDA\REDO01.LOG thread=1 sequence=19
media recovery complete, elapsed time: 00:00:05
Finished recover at 18-APR-11
RMAN> alter database open resetlogs;
database opened
CASE 3 : We do not have any backup and all control files are lost or corrupted
SQL> startup nomount
ORACLE instance started.
Total System Global Area  318046208 bytes
Fixed Size                  1332920 bytes
Variable Size             281020744 bytes
Database Buffers           29360128 bytes
Redo Buffers                6332416 bytes
Now we create the controlfile manually at the command prompt:
SQL> CREATE CONTROLFILE REUSE DATABASE "NOIDA" NORESETLOGS archivelog
     MAXLOGFILES 5
     MAXLOGMEMBERS 3
     MAXDATAFILES 10
     MAXINSTANCES 1
     MAXLOGHISTORY 113
     LOGFILE
       GROUP 1 'D:\oracle\oradata\noida\REDO01.LOG' SIZE 50M,
       GROUP 2 'D:\oracle\oradata\noida\REDO02.LOG' SIZE 50M,
       GROUP 3 'D:\oracle\oradata\noida\REDO03.LOG' SIZE 50M
     DATAFILE
       'D:\oracle\oradata\noida\SYSTEM01.DBF',
       'D:\oracle\oradata\noida\USERS01.DBF',
       'D:\oracle\oradata\noida\EXAMPLE01.DBF',
       'D:\oracle\oradata\noida\SYSAUX01.DBF',
       'D:\oracle\oradata\noida\TRANS.DBF',
       'D:\oracle\oradata\noida\UNDOTBS01.DBF';
Control file created.
SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Disabled
Archive destination            D:\archive\
Oldest online log sequence     1
Next log sequence to archive   1
Current log sequence           1
SQL> select first_change#, group# from v$log;
FIRST_CHANGE#     GROUP#
------------- ----------
      1313491          1
            0          3
            0          2
SQL> alter database open;
SQL> select name,open_mode from v$database;
NAME      OPEN_MODE
--------- ----------
NOIDA     READ WRITE
Create Control file Manually. When and How ?
The control files of a database store the status of the physical structure of the database. The control file is absolutely crucial to database operation. The control file contains:
> Database information (RESETLOGS SCN and its time stamp)
> Archive log history
> Tablespace and datafile records (filenames, datafile checkpoints, read/write status, offline or not)
> Redo logs (current online redo log)
> Database creation date
> Database name
> Current archivelog mode
> Log records (sequence numbers, SCN range in each log)
> RMAN catalog
> Database block corruption information
> Database ID, which is unique to each database
If the controlfile is lost, recovery is somewhat difficult because the database cannot be mounted for a recovery; the controlfile must be recreated. We can manually create a new control file for a database using the CREATE CONTROLFILE statement. The statement below creates a new control file for the database (for example, for a database that formerly used a different database name).
When to Create New Control Files : It is necessary to create new control files in the following situations:
1.) All control files for the database have been permanently damaged and we do not have a control file backup.
2.) We want to change the database name. For example, we would change a database name if it conflicted with another database name in a distributed environment.
3.) The compatibility level is set to a value that is earlier than 10g, and we must make a change to an area of database configuration that relates to any of the following parameters from the CREATE DATABASE or CREATE CONTROLFILE commands: MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, and MAXINSTANCES. If compatibility is 10g or later, we do not have to create new control files when we make such a change; the control files automatically expand, if necessary, to accommodate the new configuration information. For example, assume that when we created the database or recreated the control files, we set MAXLOGFILES to 3. Suppose that now we want to add a fourth redo log file group to the database with the ALTER DATABASE command. If compatibility is set to 10g or later, we can do so and the controlfiles automatically expand to accommodate the new logfile information. However, with compatibility set earlier than 10g, our ALTER DATABASE command would generate an error, and we would have to first create new control files.
Command to Create the Controlfile Manually
C:\>sqlplus sys/ramtech@noida as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Mon Apr 18 17:31:50 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> STARTUP NOMOUNT
SQL> CREATE CONTROLFILE REUSE DATABASE "NOIDA" NORESETLOGS archivelog
    MAXLOGFILES 5
    MAXLOGMEMBERS 3
    MAXDATAFILES 10
    MAXINSTANCES 1
    MAXLOGHISTORY 113
LOGFILE
    GROUP 1 'D:\oracle\oradata\noida\REDO01.LOG' SIZE 50M,
    GROUP 2 'D:\oracle\oradata\noida\REDO02.LOG' SIZE 50M,
    GROUP 3 'D:\oracle\oradata\noida\REDO03.LOG' SIZE 50M
DATAFILE
    'D:\oracle\oradata\noida\SYSTEM01.DBF' ,
    'D:\oracle\oradata\noida\USERS01.DBF' ,
    'D:\oracle\oradata\noida\EXAMPLE01.DBF' ,
    'D:\oracle\oradata\noida\SYSAUX01.DBF' ,
    'D:\oracle\oradata\noida\TRANS.DBF' ,
    'D:\oracle\oradata\noida\UNDOTBS01.DBF' ;

Specify RESETLOGS if we want Oracle to ignore the contents of the files listed in the LOGFILE clause. The log files do not have to exist, but each redo_log_file_spec in the LOGFILE clause must specify the SIZE parameter. Oracle will assign all online redo log file groups to thread 1 and will enable this thread for public use by any instance. We must then open the database using ALTER DATABASE RESETLOGS.
NORESETLOGS will use all files in the LOGFILE clause as they were when the database was last open. These files must exist and must be the current online redo log files rather than restored backups. Oracle will reassign the redo log file groups to re-enabled threads as previously assigned.

How to Determine the Name of the Trace File to be Generated
In many cases we need to find out the name of the latest trace file generated in the USER_DUMP_DEST directory. What we usually do is physically go to the USER_DUMP_DEST location with the operating system browser, sort all the files by date and look for the latest files. We can remove this hassle easily if we know in advance what the trace file name will be. Let's have a look ...
Demo 1 :
C:\>sqlplus sys/xxxx@noida as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Mon Apr 18 17:44:49 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> alter database backup controlfile to trace;
Database altered.

The above command will generate the trace file inside USER_DUMP_DEST. Let's check the location of USER_DUMP_DEST. If we are using SQL*Plus, then issue:

SQL> show parameter user_dump_dest

NAME                TYPE      VALUE
------------------  --------  ------------------------------------------
user_dump_dest      string    d:\oracle\diag\rdbms\noida\noida\trace
Here the latest files are the latest traces, but sometimes we may not pick the right trace file. The task becomes quite easy if we know the name of the trace file to be generated by the ALTER DATABASE command. We can get the trace file name in advance as:

SQL> SELECT s.sid, s.serial#, pa.value || '\' || LOWER(SYS_CONTEXT('userenv','instance_name')) || '_ora_' || p.spid || '.trc' AS trace_file
     FROM v$session s, v$process p, v$parameter pa
     WHERE pa.name = 'user_dump_dest'
     AND s.paddr = p.addr
     AND s.audsid = SYS_CONTEXT('USERENV', 'SESSIONID');

 SID    SERIAL#  TRACE_FILE
----  ---------  ---------------------------------------------------------
 110        312  d:\oracle\diag\rdbms\noida\noida\trace\noida_ora_3552.trc

The trace file to be generated now will be named noida_ora_3552.trc. So issuing "alter database backup controlfile to trace" will generate the file named d:\oracle\diag\rdbms\noida\noida\trace\noida_ora_3552.trc.

Demo 2 :
This method is much simpler and makes it easy to identify the trace file. Let's have a look at another demo.

C:\>sqlplus sys/xxxx@noida as sysdba
SQL*Plus: Release 11.1.0.6.0 - Production on Mon Apr 18 17:49:49 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> show parameter user_dump_dest

NAME                TYPE      VALUE
------------------  --------  ------------------------------------------
user_dump_dest      string    d:\oracle\diag\rdbms\noida\noida\trace

SQL> alter session set tracefile_identifier = 'mytracefile' ;
Session altered.

SQL> alter database backup controlfile to trace;
Database altered.

Now, go to the user_dump_dest location and find the trace file whose name contains "mytracefile". In my case the name is "noida_ora_3552_mytracefile.trc".
The difference between the two demos is that the first works at the system level, so it lists the trace files generated by different sessions, whereas the second shows the trace file for one particular session only. The other difference is that in the first demo we fire the command and then look up the trace file, whereas in the second demo we set the trace file identifier first so that we can easily recognize the correct trace file afterwards.
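On 11g, where the ADR is in place, there is also a shorter route to the same answer. A minimal sketch, assuming an 11g database (V$DIAG_INFO and its 'Default Trace File' key are standard there):

SQL> select value from v$diag_info where name = 'Default Trace File';

This returns the full path of the current session's trace file, the same value that the longer join above builds by hand.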
Questions On Oracle Data Pump
Here are some questions related to Data Pump which will help you clear your doubts regarding Data Pump.
1.) What is Oracle Data Pump?
Oracle Data Pump is a new feature of Oracle Database 10g that provides high-speed, parallel, bulk data and metadata movement of Oracle database contents. A new public interface package, DBMS_DATAPUMP, provides a server-side infrastructure for fast data and metadata movement. In Oracle Database 10g, new Export (expdp) and Import (impdp) clients that use this interface have been provided. Oracle recommends that customers use these new Data Pump Export and Import clients rather than the original Export and Import clients, since the new utilities have vastly improved performance and greatly enhanced functionality.
2.) Is Data Pump a feature or an option of Oracle 10g?
Data Pump is a fully integrated feature of Oracle Database 10g. Data Pump is installed automatically during database creation and database upgrade.
3.) What platforms is Data Pump provided on?
Data Pump is available in Oracle Database 10g Standard Edition, Enterprise Edition, and Personal Edition. However, the parallel capability is only available in Oracle 10g Enterprise Edition. Data Pump is included on all the same platforms supported by Oracle 10g, including Unix, Linux, Windows NT, Windows 2000, and Windows XP.
4.) What are the system requirements for Data Pump?
The Data Pump system requirements are the same as the standard Oracle Database 10g requirements. Data Pump doesn't need a lot of additional system or database resources, but the time to extract and treat the information will be dependent on the CPU and memory available on each machine. If system resource consumption becomes an issue while a Data Pump job is executing, the job can be dynamically throttled to reduce the number of execution threads.
5.) What is the performance gain of Data Pump Export versus Original Export?
Using the Direct Path method of unloading, a single stream of data unload is about 2 times faster than original Export because the Direct Path API has been modified to be even more efficient. Depending on the level of parallelism, the level of improvement can be much more.
6.) What is the performance gain of Data Pump Import versus Original Import?
A single stream of data load is 15-45 times faster than Original Import. The reason it is so much faster is that Conventional Import uses only conventional-mode inserts, whereas Data Pump Import uses the Direct Path method of loading. As with Export, the job can be parallelized for even more improvement.
7.) Does Data Pump require special tuning to attain performance gains?
No, Data Pump requires no special tuning. It runs optimally "out of the box". Original Export and (especially) Import require careful tuning to achieve optimum results.
8.) Why are directory objects needed?
They are needed to ensure data security and integrity. Otherwise, users would be able to read data that they should not have access to and perform unwarranted operations on the server.
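To illustrate question 8, a directory object maps a server-controlled name to an OS path, and access to it is granted explicitly. A minimal sketch; the directory name and path below are made up for the example:

SQL> create directory dp_dir as 'D:\oracle\dpdump';
SQL> grant read, write on directory dp_dir to scott;

With this in place, Data Pump jobs run by SCOTT can read and write files only under that path, referenced as DIRECTORY=dp_dir on the expdp/impdp command line.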
9.) What makes Data Pump faster than original Export and Import?
There are three main reasons that Data Pump is faster than original Export and Import. First, the Direct Path data access method (which permits the server to bypass SQL and go right to the data blocks on disk) has been rewritten to be much more efficient and now supports Data Pump Import and Export. Second, because Data Pump does its processing on the server rather than in the client, much less data has to be moved between client and server. Finally, Data Pump was designed from the ground up to take advantage of modern hardware and operating system architectures in ways that original Export and Import cannot. These factors combine to produce significant performance improvements for Data Pump over original Export and Import.
10.) How much faster is Data Pump than the original Export and Import utilities?
For a single stream, Data Pump Export is approximately 2 times faster than original Export and Data Pump Import is approximately 15 to 40 times faster than original Import. Speed can be dramatically improved using the PARALLEL parameter.
11.) Why is Data Pump slower on small jobs?
Data Pump was designed for big jobs with lots of data. Each Data Pump job has a master table that has all the information about the job and is needed for restartability. The overhead of creating this master table makes small jobs take longer, but the speed in processing large amounts of data gives Data Pump a significant advantage in medium and larger jobs.
12.) Are original Export and Import going away?
Original Export is being deprecated with the Oracle Database 11g release. Original Import will always be supported so that dump files from earlier releases (release 5.0 and later) will be able to be imported. Original and Data Pump dump file formats are not compatible.
13.) Are Data Pump dump files and original Export and Import dump files compatible?
No, the dump files are not compatible or interchangeable. If you have original Export dump files, you must use original Import to load them.
14.) How can I monitor my Data Pump jobs to see what is going on?
In interactive mode, you can get a lot of detail through the STATUS command. In SQL, you can query the following views (a sample query follows the list):
DBA_DATAPUMP_JOBS - all active Data Pump jobs and the state of each job
USER_DATAPUMP_JOBS – summary of the user's active Data Pump jobs
DBA_DATAPUMP_SESSIONS – all active user sessions that are attached to a Data Pump Job
V$SESSION_LONGOPS – shows all progress on each active Data Pump job
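A minimal sketch of such a monitoring query, using columns from the standard DBA_DATAPUMP_JOBS view:

SQL> select owner_name, job_name, operation, job_mode, state, attached_sessions
     from dba_datapump_jobs;

A job shown here in NOT RUNNING state is stopped but still restartable, since its master table still exists.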
15.) Can you adjust the level of parallelism dynamically for more or less resource consumption? Yes, you can dynamically throttle the number of threads of execution throughout the lifetime of the job. There is an interactive command mode where you can adjust the level of parallelism. So, for example, you can start up a job during the day with a PARALLEL=2, and then increase it at night to a higher level. 16.) Can I use gzip with Data Pump? Because Data Pump uses parallel operations to achieve its high performance, you cannot pipe the output of Data Pump export through gzip. Starting in Oracle Database 11g, the
COMPRESSION parameter can be used to compress a Data Pump dump file as it is being created. The COMPRESSION parameter is available as part of the Advanced Compression Option for Oracle Database 11g.
17.) Does Data Pump support all data types?
Yes, all the Oracle database data types are supported via Data Pump's two data movement mechanisms, Direct Path and External Tables.
18.) What kind of object selection capability is available with Data Pump?
With Data Pump, there is much more flexibility in selecting objects for unload and load operations. You can now unload any subset of database objects (such as functions, packages, and procedures) and reload them on the target platform. Almost all database object types can be excluded or included in an operation using the new EXCLUDE and INCLUDE parameters.
19.) Is it necessary to use the command-line interface, or is there a GUI that you can use?
You can use either the command-line interface or the Oracle Enterprise Manager web-based GUI interface.
20.) Can I move a dump file set across platforms, such as from Sun to HP?
Yes, Data Pump handles all the necessary compatibility issues between hardware platforms and operating systems.
21.) Can I take 1 dump file set from my source database and import it into multiple databases?
Yes, a single dump file set can be imported into multiple databases. You can also just import different subsets of the data out of that single dump file set.
22.) Is there a way to estimate the size of an export job before it gets underway?
Yes, you can use the ESTIMATE_ONLY parameter to see how much disk space is required for the job's dump file set before you start the operation.
23.) Can I monitor a Data Pump Export or Import job while the job is in progress?
Yes, jobs can be monitored from any location while they are running. Clients may also detach from an executing job without affecting it.
24.) If a job is stopped either voluntarily or involuntarily, can I restart it?
Yes, every Data Pump job creates a Master Table in which the entire record of the job is maintained. The Master Table is the directory to the job, so if a job is stopped for any reason, it can be restarted at a later point in time, without losing any data.
25.) Does Data Pump give me the ability to manipulate the Data Definition Language (DDL)?
Yes, with Data Pump, it is now possible to change the definition of some objects as they are created at import time. For example, you can remap the source datafile name to the target datafile name in all DDL statements where the source datafile is referenced. This is really useful if you are moving across platforms with different file system syntax.
26.) Is Network Mode supported on Data Pump?
Yes, Data Pump Export and Import both support a network mode in which the job's source is a remote Oracle instance. This is an overlap of unloading the data, using Export, and
loading the data, using Import, so those processes don't have to be serialized. A database link is used for the network. You don't have to worry about allocating file space because there are no intermediate dump files.
27.) Does Data Pump support Flashback?
Yes, Data Pump supports the Flashback infrastructure, so you can perform an export and get a dump file set that is consistent with a specified point in time or SCN.
28.) Can I still use Original Export? Do I have to convert to Data Pump Export?
An Oracle9i-compatible Export that operates against Oracle Database 10g will ship with Oracle 10g, but it does not export Oracle Database 10g features. Also, Data Pump Export has new syntax and a new client executable, so Original Export scripts will need to change. Oracle recommends that customers convert to use the Oracle Data Pump Export.
29.) How do I import an old dump file into Oracle 10g? Can I use Original Import or do I have to convert to Data Pump Import?
Original Import will be maintained and shipped forever, so that Oracle Version 5.0 through Oracle9i dump files will be able to be loaded into Oracle 10g and later. Data Pump Import can only read Oracle Database 10g (and later) Data Pump Export dump files. Data Pump Import has new syntax and a new client executable, so Original Import scripts will need to change.
30.) When would I use SQL*Loader instead of Data Pump Export and Import?
You would use SQL*Loader to load data from external files into tables of an Oracle database. Many customers use SQL*Loader on a daily basis to load files (e.g. financial feeds) into their databases. Data Pump Export and Import may be used less frequently, but for very important tasks, such as migrating between platforms, moving data between development, test, and production databases, logical database backup, and application deployment throughout a corporation.
31.) When would I use Transportable Tablespaces instead of Data Pump Export and Import?
You would use Transportable Tablespaces when you want to move an entire tablespace of data from one Oracle database to another. Transportable Tablespaces allows Oracle data files to be unplugged from a database, moved or copied to another location, and then plugged into another database. Moving data using Transportable Tablespaces can be much faster than performing either an export or import of the same data, because transporting a tablespace only requires the copying of datafiles and integrating the tablespace dictionary information. Even when transporting a tablespace, Data Pump Export and Import are still used to handle the extraction and recreation of the metadata for that tablespace.
Conclusion : Data Pump is fast and flexible. It replaces original Export and Import starting in Oracle Database 10g. Moving to Data Pump is easy, and it opens up a world of new options and features.

Automatic Archiving Stops when the Archive Destination Disk is Full
The database is running in archivelog mode with automatic archiving turned on. The log archive destination is in the FRA (Flash Recovery Area), and DB_RECOVERY_FILE_DEST_SIZE was set to 2G. Once, when I started up the database, it stopped at the mount stage and threw the following error:
ORA-1034 : Oracle not available
ORA-16014 : log 3 sequence# xx not archived

Then I checked my alert log file and found a space-related issue in the FRA, i.e. automatic archiving stops when there is no space on the disk. I fired the following commands to resolve the issue:

SQL> startup
ORACLE instance started.
Total System Global Area  313860096 bytes
Fixed Size                  1332892 bytes
Variable Size             281020772 bytes
Database Buffers           25165824 bytes
Redo Buffers                6340608 bytes
Database mounted.
ORA-1034 : Oracle not available

SQL> alter system set log_archive_dest_1 = 'location=<new location>' ;
(Here we may increase the FRA size or change the archive destination; I changed the archive destination.)
SQL> alter system archive log all to '<new location>' ;
SQL> shut immediate     (or sometimes shut abort if it hangs)
SQL> startup

Explanation : Once the archive destination becomes full, the location also becomes invalid. Normally Oracle does not recheck to see whether space has been made available.
1.) The command
SQL> alter system archive log all to '<new location>' ;
gives Oracle a valid location for the archive logs. Even after using this, the archive log destination parameter is still invalid and automatic archiving does not work. We can also use this to allow a shutdown immediate instead of a shutdown abort.
2.) Shutdown and restart of the database resets the archive log destination parameter to be valid. Do not forget to make disk space available before starting the database.
3.) Use the REOPEN attribute of the LOG_ARCHIVE_DEST_n parameter to determine whether and when ARCn attempts to re-archive to a failed destination following an error. REOPEN applies to all errors, not just OPEN errors. REOPEN=n sets the minimum number of seconds before ARCn should try to reopen a failed destination. The default value for n is 300 seconds. A value of 0 is the same as turning off the REOPEN option; in other words, ARCn will not attempt to archive after a failure. If we change the archive destination, then there is no need to specify the REOPEN option.
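A minimal sketch of point 3; the destination path below is a made-up example, while LOCATION and REOPEN are documented attributes of LOG_ARCHIVE_DEST_n:

SQL> alter system set log_archive_dest_1 = 'LOCATION=D:\archive2 REOPEN=120' scope=both ;

With this setting, ARCn waits 120 seconds after a failure before retrying the destination, instead of giving up permanently.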
Setting Up AUTOTRACE in SQL*Plus
AUTOTRACE is a facility within SQL*Plus that shows us the explain plan of the queries that we have executed and the resources they used. This topic makes extensive use of the AUTOTRACE facility. There is more than one way to get AUTOTRACE configured; this is what I like to do to get AUTOTRACE working:
1.) cd $ORACLE_HOME\rdbms\admin
2.) log into SQL*Plus as SYSTEM
3.) run @utlxplan
4.) run the below commands
SQL> create public synonym plan_table for plan_table ;
SQL> grant all on plan_table to public ;
We can automatically get a report on the execution path used by the SQL optimizer and the statement execution statistics. The report is generated after successful SQL DML (i.e. select, delete, update, merge and insert) statements. It is useful for monitoring and tuning the performance of these statements. We can control the report by setting the AUTOTRACE system variable.
1.) SET AUTOTRACE OFF : No AUTOTRACE report is generated. This is the default.
2.) SET AUTOTRACE ON : The AUTOTRACE report includes both the optimizer execution path and the SQL statement execution statistics. Here is a demo:

SQL> set autotrace on
SQL> select e.last_name, e.salary, j.job_title
     from employees e, jobs j
     where e.job_id = j.job_id and e.salary > 12000 ;

LAST_NAME            SALARY  JOB_TITLE
-----------------  --------  ------------------------------
King                  24000  President
Kochhar               17000  Administration Vice President
De Haan               17000  Administration Vice President
Russell               14000  Sales Manager
Partners              13500  Sales Manager
Hartstein             13000  Marketing Manager

6 rows selected.
The statement is automatically traced when it is run:

Execution Plan
----------------------------------------------------------
   0        SELECT STATEMENT Optimizer=CHOOSE
   1    0     TABLE ACCESS (BY INDEX ROWID) OF 'EMPLOYEES'
   2    1       NESTED LOOPS
   3    2         TABLE ACCESS (FULL) OF 'JOBS'
   4    2         INDEX (RANGE SCAN) OF 'EMP_JOB_IX' (NON-UNIQUE)

Statistics
----------------------------------------------------------
          0  recursive calls
          2  db block gets
         34  consistent gets
          0  physical reads
          0  redo size
        848  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          6  rows processed

3.) SET AUTOTRACE ON STATISTICS : The AUTOTRACE report shows only the SQL statement execution statistics.

SQL> set autotrace on statistics
SQL> select e.last_name, e.salary, j.job_title
     from employees e, jobs j
     where e.job_id = j.job_id and e.salary > 12000 ;

LAST_NAME            SALARY  JOB_TITLE
-----------------  --------  ------------------------------
King                  24000  President
Kochhar               17000  Administration Vice President
De Haan               17000  Administration Vice President
Russell               14000  Sales Manager
Partners              13500  Sales Manager
Hartstein             13000  Marketing Manager

6 rows selected.
Statistics
----------------------------------------------------------
          0  recursive calls
          2  db block gets
         34  consistent gets
          0  physical reads
          0  redo size
        848  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          6  rows processed

4.) SET AUTOTRACE ON EXPLAIN : The AUTOTRACE report shows only the optimizer execution path.

SQL> set autotrace on explain
SQL> select e.last_name, e.salary, j.job_title
     from employees e, jobs j
     where e.job_id = j.job_id and e.salary > 12000 ;
LAST_NAME            SALARY  JOB_TITLE
-----------------  --------  ------------------------------
King                  24000  President
Kochhar               17000  Administration Vice President
De Haan               17000  Administration Vice President
Russell               14000  Sales Manager
Partners              13500  Sales Manager
Hartstein             13000  Marketing Manager

6 rows selected.
Execution Plan :
----------------------------------------------------------
   0        SELECT STATEMENT Optimizer=CHOOSE
   1    0     TABLE ACCESS (BY INDEX ROWID) OF 'EMPLOYEES'
   2    1       NESTED LOOPS
   3    2         TABLE ACCESS (FULL) OF 'JOBS'
   4    2         INDEX (RANGE SCAN) OF 'EMP_JOB_IX' (NON-UNIQUE)
5.) SET AUTOTRACE TRACEONLY : This is like SET AUTOTRACE ON, but it suppresses the printing of the user's query output, if any.

SQL> set autotrace traceonly

Execution Plan
----------------------------------------------------------
   0        SELECT STATEMENT Optimizer=CHOOSE
   1    0     TABLE ACCESS (BY INDEX ROWID) OF 'EMPLOYEES'
   2    1       NESTED LOOPS
   3    2         TABLE ACCESS (FULL) OF 'JOBS'
   4    2         INDEX (RANGE SCAN) OF 'EMP_JOB_IX' (NON-UNIQUE)
Statistics :
----------------------------------------------------------
          0  recursive calls
          2  db block gets
         34  consistent gets
          0  physical reads
          0  redo size
        848  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          6  rows processed
What is Database Link ?
A database link is a pointer that defines a one-way communication path from an Oracle database server to another database server. The link pointer is actually defined as an entry in a data dictionary table. A database link is a schema object in one database that enables us to access objects on another database. To create a private database link, we must have the CREATE DATABASE LINK system privilege. To create a public database link, we must have the CREATE PUBLIC DATABASE LINK system privilege. Also, we must have the CREATE SESSION system privilege on the remote Oracle database.
Before creating it, we must collect the following information:
1.) A net service name that our local database instance can use to connect to the remote instance, and
2.) A valid username and password on the remote database.
The net service name is necessary for every database link. The username and password that we specify when defining a database link are used to establish the connection to the remote instance. The credentials for this database link demo are:
Primary server = NOIDA
Remote server  = DELHI
1.) Connect to the database as
C:\>sqlplus sys/xxxx@delhi as sysdba
2.) Create a user
SQL> create user abc identified by abc
  2  default tablespace users
  3  quota unlimited on users;
User created.
3.) Grant the privileges required for the database link
SQL> grant create public database link , create session, create table to abc;
Grant succeeded.
4.) Connect as the "ABC" user and create a table for testing purposes
SQL> conn abc/abc@delhi
Connected.
SQL> create table test1 (id number);
Table created.
SQL> insert into test1 values(&T);
Enter value for t: 23
old 1: insert into test1 values(&T)
new 1: insert into test1 values(23)
1 row created.
SQL> /
Enter value for t: 345
old 1: insert into test1 values(&T)
new 1: insert into test1 values(345)
1 row created.
SQL> /
Enter value for t: 32
old 1: insert into test1 values(&T)
new 1: insert into test1 values(32)
1 row created.
SQL> commit ;
Commit complete.

SQL> select * from test1 ;

        ID
----------
        23
       345
        32

SQL> exit
5.) Connect to the primary database as
c:\>sqlplus sys/XXXX@noida as sysdba
6.) Create a public database link and access the remote table (test1) as
SQL> create public database link d_link connect to abc identified by abc using 'DELHI' ;
Database link created.

SQL> select * from abc.test1@d_link ;

        ID
----------
        23
       345
        32

Hence, we can access the remote table by using the database link.

What is Alert Log File ?
The alert log file is a chronological log of messages and errors written out by an Oracle database. Typical messages found in this file are: database startup, shutdown, log switches, space errors, etc. This file should constantly be monitored to detect unexpected messages and corruptions. Oracle will automatically create a new alert log file whenever the old one is deleted.
When an internal error is detected by a process, it dumps information about the error to its trace file. Some of the information written to a trace file is intended for the database administrator, while other information is for Oracle Worldwide Support. Trace file information is also used to tune applications and instances. The alert log of a database includes the following information :
1.) All internal errors (ORA-00600), block corruption errors (ORA-01578), and deadlock errors (ORA-00060) that occur.
2.) Administrative operations, such as CREATE, ALTER, and DROP statements and STARTUP, SHUTDOWN, and ARCHIVELOG statements.
3.) Messages and errors relating to the functions of shared server and dispatcher processes.
4.) Errors occurring during the automatic refresh of a materialized view.
5.) The values of all initialization parameters that had non-default values at the time of database and instance startup.
Which process writes to the alert log file? Not "one" but all the background processes can and do write to it. The archiver writes to it. The log writer can write to it (if we have log_checkpoints_to_alert set). When a background process detects that another has died, the former writes to the alert log before panicking the instance and killing it. Similarly, an ALTER SYSTEM command issued by the server process for our database session will also write to the alert log.
To find the location of the alert log file, we can use either of the below commands:
SQL> select value from v$parameter where name = 'background_dump_dest' ;
OR
SQL> show parameter background
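On 11g the alert log lives under the ADR, so a hedged alternative is to query V$DIAG_INFO (the view and the name keys below are standard in 11g):

SQL> select name, value from v$diag_info where name in ('Diag Trace', 'Diag Alert');

'Diag Trace' is the directory holding the text alert log (alert_<SID>.log) and trace files, while 'Diag Alert' holds the XML version of the alert log.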
If the background_dump_dest parameter is not specified, Oracle will write the alert.log into the $ORACLE_HOME/RDBMS/trace directory.

Hot Backups, Extra Redo Generated and Fractured Blocks
Today I came across a good document; I have worked on it and modified it to explain things in detail. Hope you all appreciate it. Before discussing the topic, I would like to give an overview of user-managed hot backup, i.e. what happens during begin backup mode and end backup mode. When we begin backup, it freezes the header of the datafiles (meaning the SCN number will not increment any more until the backup is ended). It also instructs LGWR to write whole blocks to redo the first time a block is touched. Writes still occur to the datafile; just the SCN is frozen. This occurs so that at recovery time, Oracle will know that it must overwrite all blocks in the backup file with redo entries due to fracturing. The original datafiles remain up-to-date, but the backup files will not be, because they are being changed during the backup. When we end backup, it unfreezes the header of the datafiles and allows SCNs to be recorded properly during checkpoint.
Question: The Oracle documentation tells us that when we put a tablespace in backup mode, the first DML in the session logs the entire block in the redo log buffer and not just the changed vectors.
1.) Can we simulate an example to see this happening?
2.) What can be the purpose of logging the entire block the first time and not do the same subsequently?
Answer: Below, I've created a simulation. Pay attention to the "redo size" statistic in each. First, I have updated a single row of the employees table.

SQL> set autotrace trace stat
SQL> update employees set first_name = 'Stephen' where employee_id = 100;
1 row updated.

Statistics
----------------------------------------------------------
          0  recursive calls
          1  db block gets
          1  consistent gets
          0  physical reads
        292  redo size
        669  bytes sent via SQL*Net to client
        598  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL> rollback;
Rollback complete.

Notice the redo size was only 292 bytes, not a very large amount. Now, let's put the USERS tablespace into hot backup mode.

SQL> alter tablespace users begin backup;
Tablespace altered.

SQL> update employees set first_name = 'Stephen' where employee_id = 100;
1 row updated.

Statistics
----------------------------------------------------------
          0  recursive calls
          2  db block gets
          1  consistent gets
          0  physical reads
       8652  redo size
        670  bytes sent via SQL*Net to client
        598  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

Wow! Quite a bit of a difference. This time, we can see that at least an entire block was written to redo; 8,652 bytes total. Let's run it one more time, with the tablespace still in hot backup mode.
SQL> /
1 row updated.

Statistics
----------------------------------------------------------
          0  recursive calls
          1  db block gets
          1  consistent gets
          0  physical reads
        292  redo size
        671  bytes sent via SQL*Net to client
        598  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

This time, it only used 292 bytes, the same as the original amount. However, to address the second question, we're going to attempt changing a different block, by changing a record in the departments table instead of employees.

SQL> update departments set department_name = 'Test Dept.' where department_id = 270;
1 row updated.

Statistics
----------------------------------------------------------
         17  recursive calls
          1  db block gets
          5  consistent gets
          1  physical reads
       8572  redo size
        673  bytes sent via SQL*Net to client
        610  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
          1  rows processed

The result is that another entire block was written to redo. In the question, we stated: "The Oracle documentation tells us that when we put a tablespace in backup mode, the first DML in the session logs the entire block in the redo log buffer and not just the changed vectors." This is close, but not right on the mark. It is not the first DML of the session, but the first DML to a block, that is written to redo. However, when Oracle writes the first DML for the block, it ensures that the redo logs/archive trail contain at least one full representation of each block that is changed. Subsequent changes will therefore be safe. This process exists to resolve block fractures. A block fracture occurs when a block is being read by the backup, and being written to at the same time by DBWR. Because the OS (usually) reads blocks at a different rate than Oracle, the OS copy will pull pieces of an
Oracle block at a time. What if the OS copy pulls half a block, and while that is happening, the block is changed by DBWR? When the OS copy pulls the second half of the block, it will end up with mismatched halves, which Oracle would not know how to reconcile. This is also why the SCN of the datafile header does not change when a tablespace enters hot backup mode. The current SCNs are recorded in redo, but not in the datafile. This is to ensure that Oracle will always recover over the datafile contents with redo entries. When recovery occurs, the fractured datafile block will be replaced with a complete block from redo, making it whole again. Once Oracle is certain it has a complete block, all it needs are the change vectors.
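A quick way to see which datafiles are currently frozen in hot backup mode is the standard V$BACKUP view; a minimal sketch:

SQL> select file#, status from v$backup;

Files showing ACTIVE are in backup mode; NOT ACTIVE means normal operation. Checking this before and after "alter tablespace ... end backup" confirms nothing was left frozen.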
Database Monitoring and Checklist
What is monitoring ? Monitoring is the watching of predefined events that generate a message or warning when a certain threshold has been exceeded. This is done in an effort to ensure that an issue doesn't become a problem. Database monitoring is required for the following reasons :
Supporting production !!!
Keeping an eye on development, i.e. disabled PKs | FKs.
Database performance
In Support of an SLA (service level agreement)
Here are steps which are required to moniter. 1.) Daily Procedures A.) Verify all instances are up : Make sure the database is available. Log into each instance and run daily reports or test scripts. Use the below queries to check where database is up or not. SQL> select name,open_mode from v$database ; B.) Look for any new alert log entries : The alert log file is a best DBA's friend and could be a true lifesaver. Go to the background dump destination or diag (in oracle 11g) and check alert log file . If any ORA- errors have appeared since the previous time we looked note them investigate it and take the steps to resolve the errors . C.) Verify DBSNMP is running : Log on to each managed machine to check for the 'dbsnmp' process. For Unix: at the command line, type ps –ef | grep dbsnmp. There should be two dbsnmp processes running. If not, restart DBSNMP. (Some sites have this disabled on purpose; if this is the case, remove this item from our list, or change it to "verify that DBSNMP is NOT running".) D.) Verify success of database backup : Check the physical location of the database backup and ensure that the database backup is successful . E.) Verify enough resources for acceptable performance : Check the space status of the tablespace i.e, free spaces and database size.click the below monitoring link to check the tablespaces spaces .
F.) Check the instance status of the database : Check the memory components, i.e. buffer cache hit ratio, library cache hits, shared pool usage, and physical and logical reads. To check all of these, run your standard monitoring scripts.
G.) Processes to review contention for CPU, memory, network or disk resources : To check CPU utilization, go to x:\web\phase2\default.htm => system metrics => CPU utilization page. 400 is the maximum CPU utilization because there are 4 CPUs on the phxdev and phxprd machines. We need to investigate if CPU utilization stays above 350 for a while.
II. Nightly Procedures
Most production databases (and many development and test databases) will benefit from having certain nightly batch processes run.
A.) Collect volumetric data : This example collects table row counts. This can easily be extended to other objects such as indexes, and other data such as average row sizes. Analyze schemas and collect data. In Oracle 11g the automatic optimizer statistics collection job analyzes the whole database and gathers fresh statistics. In Oracle 9i, collect fresh statistics by running the below commands:
SQL> exec dbms_stats.gather_database_stats ;
SQL> exec dbms_stats.gather_schema_stats('SCOTT') ;
III. Weekly Procedures
A.) Check the invalid objects : Check for invalid objects and recompile them. Run the utlrp.sql script to recompile the invalid objects; this script is present in the "$ORACLE_HOME\rdbms\admin" folder.
B.) Check the growth of the tablespaces in the database : Check the growth of each tablespace and of the database as a whole. Run your tablespace growth scripts (the free-space query after this section is a good starting point).
IV. Monthly Procedures
A.) Look for Harmful Growth Rates : Review changes in segment growth when compared to previous reports to identify segments with a harmful growth rate.
B.) Review Tuning Opportunities : Review common Oracle tuning points such as cache hit ratio, latch contention, and other points dealing with memory management. Compare with past reports to identify harmful trends or determine the impact of recent tuning adjustments.
C.) Look for I/O Contention : Review database file activity. Compare to past output to identify trends that could lead to possible contention.
D.) Review Fragmentation : Investigate fragmentation (e.g. row chaining, etc.).
E.) Perform Tuning and Maintenance : Make the adjustments necessary to avoid contention for system resources. This may include scheduled down time or a request for additional resources.
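Several of the checks above (daily free space in item E of the daily procedures, weekly tablespace growth) come down to comparing allocated versus free space per tablespace. A minimal sketch using the standard DBA_DATA_FILES and DBA_FREE_SPACE views:

SQL> select df.tablespace_name,
            round(df.total_mb) total_mb,
            round(df.total_mb - nvl(fs.free_mb, 0)) used_mb,
            round(nvl(fs.free_mb, 0)) free_mb
     from (select tablespace_name, sum(bytes)/1024/1024 total_mb
           from dba_data_files group by tablespace_name) df,
          (select tablespace_name, sum(bytes)/1024/1024 free_mb
           from dba_free_space group by tablespace_name) fs
     where df.tablespace_name = fs.tablespace_name(+)
     order by free_mb;

Saving this output daily turns the weekly and monthly growth comparisons into a simple diff of two reports.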
What is Data Dictionary ?
The Data Dictionary is a repository of database metadata (data about data), about all the information inside the database. This repository is owned by "sys", and is stored principally in the "system" tablespace, though some components are stored in the "sysaux" tablespace (in Oracle 10g and 11g). The Oracle user "SYS" stores all base tables and user-accessible views of the data dictionary. No Oracle user should ever alter (update, delete or insert) any rows or schema objects contained in the SYS schema, because such activity can compromise data integrity. A data dictionary contains the following contents :
The definitions of all schema objects in the database (tables, views, indexes, clusters, synonyms, sequences, procedures, functions, packages, triggers, and so on)
How much space has been allocated for, and is currently used by, the schema objects
Default values for columns
Integrity constraint information
The names of Oracle users
Privileges and roles each user has been granted
Auditing information, such as who has accessed or updated various schema objects
Other general database information.
The data dictionary consists of the following : 1.) Base Table : The underlying tables that store information about the associated database. Only Oracle should write to and read these tables. Users rarely access them directly because they are normalized, and most of the data is stored in a cryptic format. 2.) User-Accessible Views : The views that summarize and display the information stored in the base tables of the data dictionary. These views decode the base table data into useful information, such as user or table names, using joins and where clauses to simplify the information. Most users are given access to the views rather than the base tables. The user-accessible views come in two primary forms : 1.) The static performance views : The static views are dba_xx , all_xx , user_xx views (eg. dba_users, user_tables, all_tables). These views are used to manage database structures. 2.) The v$dynamic performance views : The dynamics views are (eg v$process).These views are used to monitor real time database statistics.
v$xx views
Some of these tables are inside of the Oracle kernel, so we would never work directly with them unless we are working for Oracle support or performing a disaster recovery scenario . But instead we can access to the views in order to know the
"information about the information". For example, a possible usage of this data dictionary would be to know all the tables owned by a single user, or the list of relationships between all the tables of the database. The main view of the data dictionary is the view DICT (or DICTIONARY):

SQL> desc dict
 Name                    Null?    Type
 ----------------------- -------- ----------------
 TABLE_NAME                       VARCHAR2(30)
 COMMENTS                         VARCHAR2(4000)

Through the DICT view, we can access all the data dictionary views that could provide the information that we need. For example, if we are looking for information related to Data Pump, but we don't know where to look, then query the DICT view:

SQL> select table_name from dict where table_name like '%DATAPUMP%' ;

TABLE_NAME
------------------------------
DBA_DATAPUMP_JOBS
DBA_DATAPUMP_SESSIONS
USER_DATAPUMP_JOBS
GV$DATAPUMP_JOB
GV$DATAPUMP_SESSION
V$DATAPUMP_JOB
V$DATAPUMP_SESSION

7 rows selected.
Now we just have to query one of these views to find the data we are looking for. GV$ views are very useful when we are working with RAC, and V$ views are instance-related. Remember that the data dictionary provides critical information about the database, and access to it should be restricted. However, if a user really needs to query the data dictionary, we can grant the following privilege:
SQL> grant select_catalog_role to username ;
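As a small worked example of the earlier point about finding all the tables owned by a single user, the standard DBA_TABLES view does the job (SCOTT here is just the familiar demo schema):

SQL> select table_name from dba_tables where owner = 'SCOTT';

The same pattern works with ALL_TABLES (tables the current user can see) and USER_TABLES (tables the current user owns).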
As a DBA, we can see why the data dictionary is so important. Since we can't possibly remember everything about our database (like the names of all the tables and columns), Oracle remembers this for us. All we need is to learn how to find that information.
What are Patches and how to apply patches ?
Patching is one of the most common tasks performed by DBAs in day-to-day life. Here, we will discuss the various types of patches which are provided by Oracle. Oracle issues product fixes for its software called patches. When we apply a patch to our Oracle software installation, it updates the executable files, libraries, and object files in the software home directory. The patch application can also update configuration files and Oracle-supplied SQL schemas. Patches are applied by using OPatch (a utility supplied by Oracle), OUI, or Enterprise Manager Grid Control. Oracle patches are of various kinds; here, we broadly categorize them into two groups.
1.) Patchset
2.) Patchset Updates
1.) Patchset : A group of patches form a patch set. Patchsets are applied by invoking OUI (Oracle Universal Installer). Patchsets are generally applied for upgradation purposes. This results in a version change for our Oracle software, for example, from Oracle Database 11.2.0.1.0 to Oracle Database 11.2.0.3.0. We will cover this issue later.
2.) Patchset Updates : Patch Set Updates are proactive cumulative patches containing recommended bug fixes that are released on a regular and predictable schedule. Oracle categorizes them as:
i.) Critical Patch Update (CPU) now refers to the overall release of security fixes each quarter rather than the cumulative database security patch for the quarter. Think of the CPU as the overarching quarterly release and not as a single patch.
ii.) Patch Set Updates (PSU) are the same cumulative patches that include both the security fixes and priority fixes. The key with PSUs is that they are minor version upgrades (e.g., 11.2.0.1.1 to 11.2.0.1.2). Once a PSU is applied, only PSUs can be applied in future quarters until the database is upgraded to a new base version.
iii.) Security Patch Update (SPU) terminology was introduced in the October 2012 Critical Patch Update as the term for the quarterly security patch. SPU patches are the same as previous CPU patches, just a new name. For the database, SPUs cannot be applied once PSUs have been applied, until the database is upgraded to a new base version.
iv.) Bundle Patches are the quarterly patches for Windows and Exadata which include both the quarterly security patches as well as recommended fixes. PSUs (Patch Set Updates), CPUs (Critical Patch Updates) and SPUs are applied via the opatch utility.
How to get Oracle Patches : We obtain patches and patch sets from My Oracle Support (MOS). The ability to download a specific patch is based on the contracts associated with the support identifiers in our My Oracle Support account. All MOS users are able to search for and view all patches, but we will be prevented from downloading certain types of patches based on our contracts.
While applying a patchset or patchset update, there are basically two entities in the Oracle Database environment:
i.) Oracle Database Software
ii.) Oracle Database
Most database patching activities involve, in the following sequence:
1. Updating the "Oracle Database Software" using './runInstaller' or 'opatch apply', known as the "Installation" tasks.
2. Updating the "Oracle Database" (catupgrd.sql or catbundle.sql, etc.) to make it compatible with the newly patched "Oracle Database Software", known as the "Post Installation" tasks.
A patchset or CPU/PSU (or one-off) patch contains Post Installation tasks to be executed on all Oracle Database instances after completing the Installation tasks. If we are planning to apply a patchset along with required one-off patches (either CPU or PSU or any other one-off patch), then we can complete the Installation tasks of the patchset plus the CPU/PSU/one-off patches at once, and then execute the Post Installation tasks of the patchset plus the CPU/PSU/one-off patches in the same sequence as they were installed. This approach minimizes the number of database shutdowns across each patching activity and simplifies the patching mechanism to two tasks: software update and then database update.
Here, we will cover the OPatch utility in detail along with an example. OPatch is the recommended (Oracle-supplied) tool that customers are supposed to use in order to apply or roll back patches. OPatch is platform specific; its release is based on the Oracle Universal Installer version. OPatch resides in $ORACLE_HOME/OPatch. OPatch supports the following:
Applying an interim patch.
Rolling back the application of an interim patch.
Detecting conflict when applying an interim patch after previous interim patches have been applied. It also suggests the best options to resolve a conflict .
Reporting on installed products and interim patches.
The patch metadata exists in the inventory.xml and actions.xml files under <patch_location>/etc/config/ . The inventory.xml file has the following information:
Bug number
Unique Patch ID
Date of patch year
Required and Optional components
OS platforms ID
Instance shutdown is required or not
Patch can be applied online or not
The actions.xml file has the following information:
File name and the location to which it needs to be copied
Components need to be re-linked
Information about the optional and required components
Here are the steps for applying patches on the Linux platform:
1.) Download the required patches from My Oracle Support (MOS) :
Login to metalink.
Click "Patches & Updates" link on top menu.
On the patch search section enter patch number and select the platform of your database.
Click search.
On the search results page, download the zip file.
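Once downloaded, the patch zip is usually staged and extracted before OPatch is run. A minimal sketch; the file name and staging directory below are made-up examples following MOS naming:

$ mkdir -p /u01/stage
$ unzip p13343244_112020_Linux-x86-64.zip -d /u01/stage
$ cd /u01/stage/13343244

The extracted directory, named after the patch number, is the one we cd into before running opatch apply.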
2.) OPatch version : Oracle recommends that we use the latest released OPatch, which is available for download from My Oracle Support. OPatch is compatible only with the version of Oracle Universal Installer that is installed in the Oracle home. We can list all OPatch commands by using the 'opatch help' command.
3.) Stop all the Oracle services : Before applying the patch, make sure all the Oracle services are down. If they are not down, then stop the Oracle-related services. Let's crosscheck it:

$ ps -ef | grep pmon
oracle   15871 15484  0 11:20 pts/2    00:00:00 grep pmon
$ ps -ef | grep tns
oracle   15874 15484  0 11:20 pts/2    00:00:00 grep tns
4.) Take Cold Backup : It is highly recommended to back up the software directory which we are patching before performing any patch operation. This applies to Oracle Database or Oracle Grid Infrastructure software installation directories. Take the backup of the following:

Take the Oracle software directory backup:
$ tar -zcvf /u01/app/oracle/product/11.2.0/ohsw-bkp-b4-ptch.tar.gz /u01/app/oracle/product/11.2.0

Take the backup of the Oracle database (here all the database files are in the oradata directory):
$ tar -zcvf /u01/app/oracle/oradata/dbfl-b4-ptch.tar.gz /u01/app/oracle/oradata

Take the backup of the oraInventory:
$ tar -zcvf /u01/app/oraInventary/orinv-b4-ptch.tar.gz /u01/app/oraInventary
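Before proceeding, it is worth confirming that each archive is actually readable; a minimal check with standard tar options:

$ tar -tzvf /u01/app/oracle/product/11.2.0/ohsw-bkp-b4-ptch.tar.gz | head

A clean file listing (and a zero exit status) means the backup can really be restored from later.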
5.) Apply OPatch : Set our current directory to the directory where the patch is located and then run the OPatch utility by entering the following commands:

$ export PATH=$ORACLE_HOME/OPatch:$PATH
$ opatch apply
6.) Post Installation : Once the OPatch installation completes successfully, perform the post-installation steps. Start up the Oracle database with the newly patched software and run the catbundle.sql script, which is found in the $ORACLE_HOME/rdbms/admin directory. The catbundle.sql execution is reflected in the dba_registry_history view by a row associated with bundle series PSU.
7.) Finally, check the patch status : We can check the final status of the patches applied to the new Oracle home by using the below command.
SQL> select * from dba_registry_history order by action_time desc ;
Notes :
i.) If we are using a Data Guard physical standby database, we must install this patch on both the primary database and the physical standby database.
ii.) While applying the patch, take care of the mount point status; there should be sufficient space.

Applying CPUJan2012 Patch on 11.2.0.2/Linux (64 bit)
STEPS:
1. Database Version
2. OS version
3. Download CPUJan2012 patch for 11.2.0.2.0
4. Opatch Version
5. Sessions Status
6. Invalid objects
7. Status of Oracle Services
8. Backup
9. Apply Opatch
10. Post Installation
11. Check the status from registry$history
12. Recompiling Views in Database

1) Database Version

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production

SQL>

2) OS version

oracle-ckpt.com> file /bin/ls
/bin/ls: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared libs), for GNU/Linux 2.6.9, stripped
oracle-ckpt.com>

3) Download CPUJan2012 patch for 11.2.0.2
4) Opatch Version

To apply CPUJan2012, OPatch utility version 11.2.0.1.0 or later is required to apply this patch. Oracle recommends that you use the latest released OPatch 11.2, which is available for download from My Oracle Support patch 6880880 by selecting the 11.2.0.0.0 release.

oracle-ckpt.com> export PATH=/u00/app/oracle/product/11.2.0/OPatch:$PATH
oracle-ckpt.com> opatch lsinventory
Invoking OPatch 11.2.0.1.1

Oracle Interim Patch Installer version 11.2.0.1.1
Copyright (c) 2009, Oracle Corporation. All rights reserved.

Oracle Home       : /u00/app/oracle/product/11.2.0
Central Inventory : /u00/app/oraInventory
   from           : /etc/oraInst.loc
OPatch version    : 11.2.0.1.1
OUI version       : 11.2.0.2.0
OUI location      : /u00/app/oracle/product/11.2.0/oui
Log file location : /u00/app/oracle/product/11.2.0/cfgtoollogs/opatch/opatch2012-03-03_06-32-39AM.log
Patch history file: /u00/app/oracle/product/11.2.0/cfgtoollogs/opatch/opatch_history.txt
Lsinventory Output file location : /u00/app/oracle/product/11.2.0/cfgtoollogs/opatch/lsinv/lsinventory2012-03-03_06-32-39AM.txt
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Database 11g                                       11.2.0.2.0
There are 1 products installed in this Oracle Home.

5) Sessions Status

Check how many sessions are ACTIVE. If any are found, ask the application team to bring down all applications/processes.

SQL> select username, count(*) from v$session where username is not null group by username;

USERNAME                         COUNT(*)
------------------------------ ----------
SOTCADM                                26
SYS                                     6

SQL>

6) Invalid objects

SQL> select count(*), object_type from dba_objects where status <> 'VALID' and OWNER != 'PUBLIC' and OBJECT_TYPE != 'SYNONYM' group by object_type;

COUNT(*)  OBJECT_TYPE
--------  -------------------
      38  TRIGGER
       2  VIEW

SQL>

7) Status of Oracle Services

oracle-ckpt.com> ps -ef | grep pmon
oracle    8016 30235  0 02:17 pts/0    00:00:00 grep pmon
oracle-ckpt.com> ps -ef | grep tns
oracle    8019 30235  0 02:17 pts/0    00:00:00 grep tns
oracle-ckpt.com>

8) Backup

Take a cold backup of the database and a backup of (ORACLE_HOME & Inventory).
oracle-ckpt.com> tar -zcpvf 11.2.0_Home_Inventory_Backup_$(date +%Y%m%d).tar.gz /u00/app/oracle/product/11.2.0 /u00/app/oraInventory/ /u00/app/oracle/product/11.2.0/ /u00/app/oracle/product/11.2.0/jdev/ /u00/app/oracle/product/11.2.0/jdev/lib/ /u00/app/oracle/product/11.2.0/jdev/lib/jdev-rt.jar /u00/app/oracle/product/11.2.0/jdev/lib/javacore.jar /u00/app/oracle/product/11.2.0/jdev/doc/ /u00/app/oracle/product/11.2.0/jdev/doc/extension/ /u00/app/oracle/product/11.2.0/jdev/doc/extension/extension.xsd /u00/app/oracle/product/11.2.0/olap/ ---
All files related to ORACLE_HOME & Inventory ------
/u00/app/oraInventory/orainstRoot.sh /u00/app/oraInventory/ContentsXML/ /u00/app/oraInventory/ContentsXML/comps.xml /u00/app/oraInventory/ContentsXML/libs.xml /u00/app/oraInventory/ContentsXML/inventory.xml /u00/app/oraInventory/install.platform /u00/app/oraInventory/oui/ /u00/app/oraInventory/oui/srcs.lst oracle-ckpt.com> 9) Apply Opatch oracle-ckpt.com> export PATH=$ORACLE_HOME/OPatch:$PATH: oracle-ckpt.com> opatch napply -skip_subset -skip_duplicate Invoking OPatch 11.2.0.1.1 Oracle Interim Patch Installer version 11.2.0.1.1 Copyright (c) 2009, Oracle Corporation. All rights reserved.
UTIL session

Oracle Home       : /u00/app/oracle/product/11.2.0
Central Inventory : /u00/app/oraInventory
   from           : /etc/oraInst.loc
OPatch version    : 11.2.0.1.1
OUI version       : 11.2.0.2.0
OUI location      : /u00/app/oracle/product/11.2.0/oui
Log file location : /u00/app/oracle/product/11.2.0/cfgtoollogs/opatch/opatch2012-02-26_02-17-44AM.log
Patch history file: /u00/app/oracle/product/11.2.0/cfgtoollogs/opatch/opatch_history.txt

Invoking utility "napply"
Checking conflict among patches...
Checking if Oracle Home has components required by patches...
Checking skip_duplicate
Checking skip_subset
Checking conflicts against Oracle Home...
OPatch continues with these patches: 11830776 11830777 12586486 12586487 12586488 12586489 12586491 12586492 12586493 12586494 12586495 12586496 12846268 12846269 13343244 13386082 13468884

Do you want to proceed? [y|n] y
User Responded with: Y
Running prerequisite checks...
OPatch detected non-cluster Oracle Home from the inventory and will patch the local system only.
Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u00/app/oracle/product/11.2.0')
Is the local system ready for patching? [y|n] y
User Responded with: Y
Backing up files affected by the patch 'NApply' for restore. This might take a while...
Applying patch 11830776...
ApplySession applying interim patch '11830776' to OH '/u00/app/oracle/product/11.2.0'
Backing up files affected by the patch '11830776' for rollback. This might take a while...
Patching component oracle.sysman.console.db, 11.2.0.2.0...
Updating jar file "/u00/app/oracle/product/11.2.0/sysman/jlib/emCORE.jar" with "/sysman/jlib/emCORE.jar/oracle/sysman/eml/admin/rep/AdminResourceBundle.class"
Updating jar file "/u00/app/oracle/product/11.2.0/sysman/jlib/emCORE.jar" with "/sysman/jlib/emCORE.jar/oracle/sysman/eml/admin/rep/AdminResourceBundleID.class"
Updating jar file "/u00/app/oracle/product/11.2.0/sysman/jlib/emCORE.jar" with "/sysman/jlib/emCORE.jar/oracle/sysman/eml/admin/rep/UserData.class"
Copying file to "/u00/app/oracle/product/11.2.0/oc4j/j2ee/oc4j_applications/applications/em/em/admin/rep/editUserSummary.uix"
Patching component oracle.rdbms, 11.2.0.2.0...
Updating archive file "/u00/app/oracle/product/11.2.0/lib/libserver11.a" with "lib/libserver11.a/qerrm.o"
Updating archive file "/u00/app/oracle/product/11.2.0/lib/libserver11.a" with "lib/libserver11.a/kspt.o"
Updating archive file "/u00/app/oracle/product/11.2.0/lib/libserver11.a" with "lib/libserver11.a/qmix.o"
Updating archive file "/u00/app/oracle/product/11.2.0/lib/libserver11.a" with "lib/libserver11.a/qmxtk.o"
Updating archive file "/u00/app/oracle/product/11.2.0/rdbms/lib/libknlopt.a" with "rdbms/lib/libknlopt.a/kkxwtp.o"
Copying file to "/u00/app/oracle/product/11.2.0/rdbms/lib/kkxwtp.o"
ApplySession adding interim patch '13468884' to inventory
Verifying the update...
Inventory check OK: Patch ID 13468884 is registered in Oracle Home inventory with proper meta-data.
Files check OK: Files from Patch ID 13468884 are present in Oracle Home.
Running make for target client_sharedlib
Running make for target client_sharedlib
Running make for target ioracle
The local system has been patched and can be restarted.
UtilSession: N-Apply done.
OPatch succeeded.
oracle-ckpt.com>

10) Post Installation

With the database instance running on the Oracle home being patched, connect to the database using SQL*Plus as SYSDBA and run the catbundle.sql script as follows:
oracle-ckpt.com> sqlplus / as sysdba

SQL*Plus: Release 11.2.0.2.0 Production on Sun Feb 26 02:26:39 2012
Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> @?/rdbms/admin/catbundle.sql cpu apply

PL/SQL procedure successfully completed.
PL/SQL procedure successfully completed.
Generating apply and rollback scripts...
Check the following file for errors:
/u00/app/oracle/cfgtoollogs/catbundle/catbundle_CPU_PROD_GENERATE_2012Feb26_02_27_09.log
Apply script: /u00/app/oracle/product/11.2.0/rdbms/admin/catbundle_CPU_PROD_APPLY.sql
Rollback script: /u00/app/oracle/product/11.2.0/rdbms/admin/catbundle_CPU_PROD_ROLLBACK.sql
PL/SQL procedure successfully completed.
Executing script file...

SQL> COLUMN spool_file NEW_VALUE spool_file NOPRINT
SQL> SELECT '/u00/app/oracle/cfgtoollogs/catbundle/' || 'catbundle_CPU_' || name || '_APPLY_' || TO_CHAR(SYSDATE, 'YYYYMonDD_hh24_mi_ss', 'NLS_DATE_LANGUAGE=''AMERICAN''') || '.log' AS spool_file FROM v$database;
SQL> ALTER SESSION SET current_schema = SYS;
Session altered.
SQL> PROMPT Updating registry...
Updating registry...
SQL> INSERT INTO registry$history
  2    (action_time, action,
  3     namespace, version, id,
  4     bundle_series, comments)
  5  VALUES
  6    (SYSTIMESTAMP, 'APPLY',
  7     SYS_CONTEXT('REGISTRY$CTX','NAMESPACE'),
  8     '11.2.0.2',
  9     4,
 10     'CPU',
 11     'CPUJan2012');
1 row created.
SQL> COMMIT;
Commit complete.
SQL> SPOOL off
SQL> SET echo off
Check the following log file for errors:
/u00/app/oracle/cfgtoollogs/catbundle/catbundle_CPU_PROD_APPLY_2012Feb26_02_27_12.log

11) Check the status from registry$history
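The registry$history output itself is not reproduced here. A query along the following lines, using the same columns that the catbundle insert above populates, lists the applied bundle patches:

SQL> select action_time, action, version, id, comments from registry$history;

The CPUJan2012 row inserted by catbundle.sql should appear in the output.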
12) Compile Invalid objects by executing "utlrp.sql"

Before Patching:
SQL> select count(*), object_type from dba_objects where status != 'VALID' and owner != 'PUBLIC' and object_type != 'SYNONYM' group by object_type;
COUNT(*)   OBJECT_TYPE
---------- -------------------
        38 TRIGGER
         2 VIEW
SQL>

After Patching & Recompile (the utlrp.sql invocation is sketched after step 13):
SQL> select count(*), object_type from dba_objects where status != 'VALID' and owner != 'PUBLIC' and object_type != 'SYNONYM' group by object_type;

COUNT(*)   OBJECT_TYPE
---------- -------------------
         2 VIEW
SQL>

13) OPatch Status
oracle-ckpt.com> opatch lsinventory | grep 13343244
Patch 13343244 : applied on Sun Feb 26 02:21:14 EST 2012
12419321, 12828071, 13343244, 11724984
oracle-ckpt.com>
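Note that the recompile itself in step 12 is not shown in the transcript above. A typical invocation, run as SYSDBA from the same Oracle home, is sketched below; utlrp.sql recompiles invalid objects in dependency order and can safely be re-run:

SQL> @?/rdbms/admin/utlrp.sql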
Oracle Database 12c Release 1 Installation on Linux

Oracle 12c (Oracle 12.1.0.1) has been released and is available for download. The Oracle 12c installation steps are almost the same as those for Oracle 10g and 11g. Oracle 12c is available for 64-bit platforms. Here we will see a step-by-step installation of an Oracle 12c database.

Step 1 : Oracle S/W Installation
We can download the Oracle 12c software from e-delivery or from OTN. Below are the links:
http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html
https://edelivery.oracle.com/EPD/Download/get_form?egroup_aru_number=16496132
Step 2 : Hardware Requirements
Oracle recommends the following requirements for installation:
RAM           = 2GB of RAM or more
Swap          = 1.5 times the RAM if RAM is less than 2GB; equal to the size of RAM if RAM is more than 2GB
Disk Space    = More than 6.4GB for Enterprise Edition
Tmp directory = Minimum 1GB of free space

Step 3 : Hardware Verification
[root@server1 ~]# grep MemTotal /proc/meminfo
MemTotal:      3017140 kB
[root@server1 ~]# grep SwapTotal /proc/meminfo
SwapTotal:     4105420 kB
[root@server1 ~]# df -h /tmp
Filesystem   Size  Used  Avail  Use%  Mounted on
/dev/sda1     46G   19G    25G   44%  /
[root@server1 ~]# df -h
Filesystem   Size  Used  Avail  Use%  Mounted on
/dev/sda1     46G   19G    25G   44%  /
tmpfs        1.5G     0   1.5G    0%  /dev/shm
/dev/hdc     3.4G  3.4G      0  100%  /media/RHEL_5.3 x86_64 DVD
[root@server1 ~]# free
             total      used     free   shared  buffers   cached
Mem:       3017140    715376  2301764       0   109776   384096
-/+ buffers/cache:     221504  2795636
Swap:      4105420         0  4105420
[root@server1 ~]# uname -m
x86_64
[root@server1 ~]# uname -a
Linux server1.example.com 2.6.18-128.el5 #1 SMP Wed Dec 17 11:41:38 EST 2008 x86_64 x86_64 x86_64 GNU/Linux
Step 4 : Packages Verification
The following packages (x86_64) are required for the Oracle installation, so make sure all of them are installed:
make-3.81
binutils-2.17.50
gcc-4.1.2
gcc-c++-4.1.2
compat-libcap1
compat-libstdc++-33
glibc-2.5-58
glibc-devel-2.5
libgcc-4.1.2
libstdc++-4.1.2
libstdc++-devel-4
libaio-0.3.106
libaio-devel-0.3
ksh
sysstat
unixODBC
unixODBC-devel
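Rather than checking each rpm individually as described below, a small shell loop can query the whole list in one pass. This is only a sketch: the base package names are taken from the list above and may need adjusting to the exact naming on your distribution.

# for pkg in make binutils gcc gcc-c++ compat-libcap1 compat-libstdc++-33 \
      glibc glibc-devel libgcc libstdc++ libstdc++-devel libaio libaio-devel \
      ksh sysstat unixODBC unixODBC-devel; do
    rpm -q $pkg >/dev/null 2>&1 || echo "$pkg is NOT installed"
  done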
Execute the command below as root to make sure that we have all these rpms installed. If any are not installed, download them from an appropriate Linux repository, or find the packages on the Red Hat Enterprise Linux 5 DVD. For example:
# rpm -qa | grep glib*
The above command will display all the installed packages whose name starts with glib; similarly we can check for all the other packages. If any of the above packages is not installed, run the following command:
# rpm -ivh <package_name>.i386.rpm

Step 5 : Kernel Parameters
Add the below kernel parameters to the /etc/sysctl.conf file:
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
After adding these lines to /etc/sysctl.conf, run the below command as root to make them take effect:
# sysctl -p

Step 6 : Edit the /etc/security/limits.conf file
To improve the performance of the software on Linux systems, we must increase the following shell limits for the oracle user. Add the following lines to the /etc/security/limits.conf file:
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
Where "nproc" is the maximum number of processes "nofiles" is the number of open file descriptors.
available to
the user and
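To confirm the new values are in effect, they can be read back after a fresh login. This is a sketch; the ulimit values only reflect limits.conf on a new session where pam_limits is active:

# sysctl -n kernel.shmmax
4398046511104
# su - oracle
$ ulimit -u     # soft nproc
2047
$ ulimit -n     # soft nofile
1024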
Step 7 : Create Users and Groups
Starting with Oracle Database 12c, there are new administrative privileges that are more task-specific and less privileged than the OSDBA/SYSDBA system privilege, to support the specific administrative tasks required for everyday database operation. Users granted these system privileges are also authenticated through operating system group membership. We do not have to create these specific group names, but during installation we are prompted to provide operating system groups whose members are granted access to these system privileges. We can assign the same group to provide authentication for several of these privileges, but Oracle recommends providing a unique group for each privilege.

i.) The OSDBA group (typically, dba) : This group identifies operating system user accounts that have database administrative privileges (the SYSDBA privilege).
# groupadd -g 501 dba

ii.) The Oracle Inventory group (oinstall) : This group owns the Oracle inventory, which is a catalog of all Oracle software installed on the system. A single Oracle Inventory group is required for all installations of Oracle software on the system.
# groupadd -g 502 oinstall

iii.) The OSOPER group for Oracle Database (typically, oper) : This is an optional group. We create this group if we want a separate group of operating system users to have a limited set of database administrative privileges for starting up and shutting down the database (the SYSOPER privilege).
# groupadd -g 503 oper

iv.) The OSBACKUPDBA group for Oracle Database (typically, backupdba) : Create this group if we want a separate group of operating system users to have a limited set of backup and recovery related administrative privileges (the SYSBACKUP privilege).
# groupadd -g 504 backupdba

v.) The OSDGDBA group for Oracle Data Guard (typically, dgdba) : Create this group if we want a separate group of operating system users to have a limited set of privileges to administer and monitor Oracle Data Guard (the SYSDG privilege).
# groupadd -g 505 dgdba

vi.) The OSKMDBA group for encryption key management (typically, kmdba) : Create this group if we want a separate group of operating system users to have a limited set of privileges for encryption key management, such as Oracle Wallet Manager management (the SYSKM privilege).
# groupadd -g 506 kmdba

vii.) The OSDBA group for Oracle ASM (typically, asmdba) : The OSDBA group for Oracle ASM can be the same group used as the OSDBA group for the database, or we can create a separate OSDBA group for Oracle ASM to provide administrative access to Oracle ASM instances.
# groupadd -g 507 asmdba
viii.) The OSASM group for Oracle ASM administration (typically, asmadmin) : Create this as a separate group if we want separate administration privilege groups for Oracle ASM and Oracle Database administrators. Members of this group are granted the SYSASM system privilege to administer Oracle ASM.
# groupadd -g 508 asmadmin

ix.) The OSOPER group for Oracle ASM (typically, asmoper) : This is an optional group. Create this group if we want a separate group of operating system users to have a limited set of Oracle instance administrative privileges (the SYSOPER for ASM privilege), including starting up and stopping the Oracle ASM instance. By default, members of the OSASM group also have all privileges granted by the SYSOPER for ASM privilege.
# groupadd -g 509 asmoper

x.) Create the Oracle user :
# useradd -u 54321 -g oinstall -G dba,asmdba,backupdba,dgdba,kmdba oracle
# passwd oracle
The -u option specifies the user ID. Using this flag is optional because the system can provide an automatically generated user ID number. However, Oracle recommends that we specify a number. We must note the user ID number because we need it during preinstallation.

Step 8 : Creating Oracle directories
As per OFA, the Oracle base directory has the path /mount_point/app/oracle_sw_owner, where mount_point is the mount point directory for the file system that will contain the Oracle software. I have used /u01 for the mount point directory. However, we could choose another mount point directory, such as /oracle or /opt/soft.
# mkdir -p /u01/oracle/product/12.1.0/db_1
# chown -R oracle:oinstall /u01
# chmod -R 777 /u01

Step 9 : Setting the Oracle Environment
Edit the /home/oracle/.bash_profile file and add the following lines:
# su - oracle
$ vi .bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_BASE=/u01/oracle
export ORACLE_HOME=$ORACLE_BASE/product/12.1.0/db_1
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
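As a quick sanity check (a sketch; the IDs shown simply echo the groupadd/useradd values used above), verify the new user's group membership and environment on a fresh login:

# id oracle
uid=54321(oracle) gid=502(oinstall) groups=502(oinstall),501(dba),507(asmdba),504(backupdba),505(dgdba),506(kmdba)
# su - oracle
$ echo $ORACLE_HOME
/u01/oracle/product/12.1.0/db_1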
Step 10 : Check firewall and SELinux
Make sure SELinux is either disabled or permissive. Check the /etc/selinux/config file and make the following change:
SELINUX=permissive
Once the SELinux value is set, restart the server, or run the below command:
# setenforce Permissive
If the firewall is enabled, we need to disable it. We can disable it by using the commands below:
# service iptables stop
# chkconfig iptables off

Step 11 : Finally run the runInstaller for the installation of Oracle 12c Release 1
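Assuming the downloaded zip files have been extracted into a staging directory (the paths and file names below are placeholders; use whatever names your download carries), the installer is launched as the oracle user:

$ unzip linuxamd64_12c_database_1of2.zip -d /stage
$ unzip linuxamd64_12c_database_2of2.zip -d /stage
$ cd /stage/database
$ ./runInstaller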
Once runInstaller is initiated, OUI is invoked and the rest is an interactive graphical console.
Click Next and proceed forward.
Click on "Yes" button and proceed .
Select "Skip Software Updates" option and click on next button .
Select "Create and configure a database" option and click on next button
Here I selected the "Desktop Class" option. Click the Next button.
Enter the administrative password and click Next.
Click on "Yes" option and proceed forward
Click the Next button.
Make sure all the prerequisite checks are successful and passed.
The summary page displays all the locations and database information. Click Next.
Oracle Database installation in progress.
Execute the configuration scripts as root.
Run the scripts as root.
Oracle Database installation in progress.
Database creation in progress.
Database creation in progress.
Database creation completed.
Installation of the Oracle database was successful.
Finally, we are connected to the Oracle 12c database.
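As a final check (a sketch; run as the oracle user with the environment set in Step 9), connect from the command line and confirm the release:

$ sqlplus / as sysdba
SQL> select banner from v$version;

The banner should report Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production, or similar, depending on the edition installed.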