Oracle DBA - Backup and Recovery Scripts


Date: Dec 27, 2002. By Rajendra Gutta. Sample chapter provided courtesy of Sams.

Having the right backup and recovery procedures is the lifeblood of any database. Companies live on data, and if that data is not available, the whole company collapses. As a result, it is the responsibility of the database administrator to protect the database from system faults, crashes, and natural calamities resulting from a variety of circumstances. This chapter shows how to choose the best backup and recovery mechanism for your Oracle system.

The choice of a backup and recovery mechanism depends mainly on the following factors:

• Database mode (ARCHIVELOG, NOARCHIVELOG)
• Size of the database
• Backup and recovery time
• Uptime requirements
• Type of data (OLTP, DSS, data warehouse)

The types of backup are

• Offline backup (cold or closed database backup)
• Online backup (hot or open database backup)
• Logical export

Logical exports create an export file that contains the SQL statements needed to recreate the database. Export is performed while the database is open and does not affect users' work. Offline backups can only be performed when the database has been shut down cleanly, and the database is unavailable to users while the offline backup is being made. Online backups are performed while the database is open and do not affect users' work; the database must be running in ARCHIVELOG mode to perform online backups.

The database can run in either ARCHIVELOG mode or NOARCHIVELOG mode. In ARCHIVELOG mode, the archiver (ARCH) process archives the redo log files to the archive destination directory; these archive files can be used to recover the database in case of a failure. In NOARCHIVELOG mode, the redo log files are not archived. When the database is running in ARCHIVELOG mode, the choice can be one or more of the following:

• Export
• Hot backup
• Cold backup

When the database is running in NOARCHIVELOG mode, the choice of backup is as follows:

• Export
• Cold backup
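You can check which mode a database is running in, and switch it to ARCHIVELOG mode, with a short SQL*Plus session. The following is a generic sketch rather than part of the chapter's scripts; on Oracle 8i you would also set log_archive_start=true and log_archive_dest in the Init.ora file so that the ARCH process archives automatically.

SQL>select log_mode from v$database;
SQL>shutdown immediate
SQL>startup mount
SQL>alter database archivelog;
SQL>alter database open;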

Cold Backup

Offline or cold backups are performed when the database is completely shut down. The disadvantage of an offline backup is that it cannot be done if the database needs to run 24/7. Additionally, you can only recover the database up to the point of the last backup unless the database is running in ARCHIVELOG mode.

The general steps involved in performing a cold backup are shown in Figure 3.1. These general steps are used in writing cold backup scripts for Unix and Windows NT.

Figure 3.1 Steps for cold backup.

The steps in Figure 3.1 are explained as follows.

Step 1—Generating File List

An offline backup consists of physically copying the following files:

• Data files
• Control files
• Init.ora and config.ora files

CAUTION

Backing up online redo log files is not advised, except when performing a cold backup with the database running in NOARCHIVELOG mode. If you make a cold backup in ARCHIVELOG mode, do not back up the redo log files: during a restore, there is a chance you may accidentally overwrite your real online redo logs, preventing you from doing a complete recovery. If your database is running in ARCHIVELOG mode, a cold backup should also include the archived logs that exist.

Before performing a cold backup, you need to know the location of the files to be backed up. Because the database structure changes from day to day as files are added or moved between directories, it is always better to query the database for its physical structure just before making a cold backup. To get the structure of the database, query the following dynamic data dictionary views:

• V$DATAFILE—Lists all the data files used in the database:

SQL>select name from v$datafile;

• Control file—Back up the control file and generate a trace of it using

SQL>alter database backup controlfile to '/u10/backup/control.ctl';
SQL>alter database backup controlfile to trace;

• Init.ora and config.ora—Located under the $ORACLE_HOME/dbs directory.
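Rather than typing the copy commands by hand, the file list can be turned into a ready-to-run script. The following SQL*Plus sketch is illustrative only (it is not one of the chapter's listings); the spool path and the /u10/backup target directory are assumptions:

set heading off feedback off pagesize 0 linesize 200
spool /u10/backup/coldbackup_list.sh
select 'cp ' || name || ' /u10/backup' from v$datafile;
select 'cp ' || name || ' /u10/backup' from v$controlfile;
spool off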

Step 2—Shut Down the Database

You can shut down a database with the following commands:

$su - oracle
$sqlplus "/ as sysdba"
SQL>shutdown

Step 3—Perform a Backup

In the first step, you generated a list of files to be backed up. To back up the files, you can use the Unix copy command (cp) to copy each one to a backup location, as shown in the following code. You have to copy all of the files listed in Step 1.

$cp /u01/oracle/users01.dbf /u10/backup

You can back up the Init.ora and config.ora files as follows:

$cp $ORACLE_HOME/dbs/init.ora /u10/backup
$cp $ORACLE_HOME/dbs/config.ora /u10/backup

Step 4—Start the Database

After the backup is complete, you can start the database as follows:

$su - oracle
$sqlplus "/ as sysdba"
SQL> startup
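The four steps can be tied together in a small shell script. The sketch below is a simplified illustration, not the coldbackup_ux program shown later in this chapter; the instance name ORCL, the file list produced in Step 1, and the /u10/backup target are assumptions.

#!/bin/sh
# simplified cold backup flow: shutdown, copy, startup
ORACLE_SID=ORCL; export ORACLE_SID
sqlplus -s "/ as sysdba" <<EOF
shutdown immediate
exit
EOF
# copy every file named in the list generated in Step 1
while read f
do
  cp -p "$f" /u10/backup || echo "copy failed: $f"
done < /u10/backup/file_list.txt
sqlplus -s "/ as sysdba" <<EOF
startup
exit
EOF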

Hot Backup

An online backup, or hot backup, is also referred to as an ARCHIVELOG backup. An online backup can only be done when the database is running in ARCHIVELOG mode and is open. When the database is running in ARCHIVELOG mode, the archiver (ARCH) background process copies each filled online redo log file to the archive destination. An online backup consists of backing up the following files. Because the database is open while the backup is performed, you have to follow the procedure shown in Figure 3.2 to back up the files:

• Data files of each tablespace
• Archived redo log files
• Control file
• Init.ora and config.ora files

Figure 3.2 Steps for hot backup.

The general steps involved in performing a hot backup are shown in Figure 3.2. These general steps are used in writing hot backup scripts for Unix and Windows NT. The steps in Figure 3.2 are explained as follows.

Step 1—Put the Tablespace in Backup Mode and Copy the Data Files

Assume that your database has two tablespaces, USERS and TOOLS. To back up the files for these two tablespaces, first put the tablespace in backup mode by using the ALTER statement as follows:

SQL>alter tablespace USERS begin backup;

After the tablespace is in backup mode, you can use a SELECT statement to list the data files for the USERS tablespace, and the copy (cp) command to copy the files to the backup location. Assume that the USERS tablespace has two data files—users01.dbf and users02.dbf.

SQL>select file_name from dba_data_files where tablespace_name='USERS';
$cp /u01/oracle/users01.dbf /u10/backup
$cp /u01/oracle/users02.dbf /u10/backup

The following command ends the backup process and puts the tablespace back in normal mode:

SQL>alter tablespace USERS end backup;

You have to repeat this process for all tablespaces. You can get the list of tablespaces by using the following SQL statement:

SQL>select tablespace_name from dba_tablespaces;
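The begin backup/copy/end backup cycle can be scripted for every tablespace. The following is a minimal sketch under stated assumptions (SYSDBA access, a /u10/backup target, and no temporary or read-only tablespaces); error checking and logging are omitted, and it is not the hot backup program presented later in this chapter.

#!/bin/sh
# loop over all tablespaces: begin backup, copy data files, end backup
TABLESPACES=`sqlplus -s "/ as sysdba" <<EOF
set heading off feedback off pagesize 0
select tablespace_name from dba_tablespaces;
exit
EOF`
for TS in $TABLESPACES
do
  echo "alter tablespace $TS begin backup;" | sqlplus -s "/ as sysdba"
  FILES=`sqlplus -s "/ as sysdba" <<EOF
set heading off feedback off pagesize 0
select file_name from dba_data_files where tablespace_name='$TS';
exit
EOF`
  for F in $FILES
  do
    cp -p $F /u10/backup
  done
  echo "alter tablespace $TS end backup;" | sqlplus -s "/ as sysdba"
done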

Step 2—Back Up the Control and Init.ora Files

To back up the control file:

SQL>alter database backup controlfile to '/u10/backup/control.ctl';

You can copy the Init.ora file to a backup location using

$cp $ORACLE_HOME/dbs/initorcl.ora /u10/backup

Step 3—Stop Archiving

Archiving is a continuous process and, without stopping the archiver, you might unintentionally copy the file that the archiver is currently writing. To avoid this, first stop the archiver and then copy the archive files to the backup location. You can stop the archiver as follows:

SQL>alter system switch logfile;
SQL>alter system archive log stop;

The first command switches the redo log file, and the second command stops the archiver process.

Step 4—Back Up the Archive Files

To avoid backing up the archive file that is currently being written, find the lowest log sequence number that has not yet been archived from the V$LOG view, and then back up all the archive files with sequence numbers below it. The archive file location is defined by the LOG_ARCHIVE_DEST_n parameter in the Init.ora file.

SQL>select min(sequence#) from v$log where archived='NO';
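The archive-handling steps can be combined into one small sketch. The archive destination /u01/arch, the backup target /u10/backup/arch, and the file naming pattern arch_<sequence>.arc are assumptions; adjust them to your LOG_ARCHIVE_DEST and LOG_ARCHIVE_FORMAT settings.

#!/bin/sh
# stop the archiver, copy fully archived logs, then restart the archiver
sqlplus -s "/ as sysdba" <<'EOF'
alter system switch logfile;
alter system archive log stop;
set heading off feedback off pagesize 0
spool /tmp/minseq.lst
select min(sequence#) from v$log where archived='NO';
spool off
exit
EOF
MINSEQ=`sed -e '/^ *$/d' -e 's/ //g' /tmp/minseq.lst | tail -1`
for f in /u01/arch/arch_*.arc
do
  SEQ=`basename $f .arc | sed 's/arch_//'`
  if [ "$SEQ" -lt "$MINSEQ" ]; then
    cp -p $f /u10/backup/arch
  fi
done
sqlplus -s "/ as sysdba" <<'EOF'
alter system archive log start;
exit
EOF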

Step 5—Restart the Archive Process

The following command restarts the archiver process:

SQL>alter system archive log start;

Now you have completed the hot backup of the database. An online backup keeps the database open and functional for 24/7 operations. It is advisable to schedule online backups when there is the least user activity on the database, because backing up the database is very I/O intensive and users can see slow response during the backup period. Additionally, if user activity is very high, the archive destination might fill up very fast.

Database Crashes During Hot Backup

There can be many reasons for the database to crash during a hot backup—a power outage or a reboot of the server, for example. If one of these happens during a hot backup, chances are that a tablespace will be left in backup mode. In that case you must manually recover the files involved, and the recovery operation ends the backup of the tablespace. It is important to check the status of the files as soon as you restart the instance and to end the backup for any tablespace that is still in backup mode.

select a.name,b.status from v$datafile a, v$backup b where a.file#=b.file# and b.status='ACTIVE';

or

select a.tablespace_name,a.file_name,b.status from dba_data_files a, v$backup b
where a.file_id=b.file# and b.status='ACTIVE';

These statements list files with ACTIVE status. If a file is in the ACTIVE state, the corresponding tablespace is in backup mode. The second statement also gives the tablespace name, but it cannot be used unless the database is open. You need to end the backup mode of the tablespace with the following command:

alter tablespace tablespace_name end backup;
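One way to automate this check after a crash is to generate the END BACKUP commands from V$BACKUP itself. The sketch below uses the datafile-level ALTER DATABASE DATAFILE ... END BACKUP form, which can be run while the database is only mounted; the spool file name is arbitrary, and this is an illustration rather than one of the chapter's scripts.

set heading off feedback off pagesize 0 linesize 200
spool end_backup.sql
select 'alter database datafile ''' || a.name || ''' end backup;'
from v$datafile a, v$backup b
where a.file# = b.file# and b.status = 'ACTIVE';
spool off
@end_backup.sql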

Logical Export

Export is the single most versatile utility available to perform a backup of the database, defragment the database, and port the database or individual objects from one operating system to another.

Export backups detect block corruption: although you perform other types of backup regularly, it is good to perform a full export of the database at regular intervals, because export detects any data or block corruption in the database. By using the export file, it is also possible to recover individual objects, whereas other backup methods do not support individual object recovery.

Export can be used to export the database at different levels of functionality:

• Full export (full database export) (FULL=Y)
• User-level export (exports objects of specified users) (OWNER=userlist)
• Table-level export (exports specified tables and partitions) (TABLES=tablelist)
• Transportable tablespaces (TABLESPACES=tools, TRANSPORT_TABLESPACE=y)

There are two methods of Export:

• Conventional path (default)—Uses the SQL layer to create the export file. The SQL layer introduces CPU overhead for character set conversion and for converting numbers, dates, and so on, which makes it slower.
• Direct path (DIRECT=Y)—Skips the SQL layer and reads directly from the database buffers or private buffers, so it is much faster than the conventional path.
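A few example invocations of the exp utility for these levels and methods follow; the system/manager connect string, the file names, and the schema names are placeholders:

$exp system/manager full=y file=full.dmp log=full.log
$exp system/manager owner=scott file=scott.dmp log=scott.log
$exp system/manager tables=(scott.emp,scott.dept) file=tables.dmp log=tables.log
$exp system/manager full=y direct=y file=full_direct.dmp log=full_direct.log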

We will discuss scripts to perform full, user-level, and table-level exports of the database. The scripts also show you how to compress and split the export file while performing the export. This is especially useful if the underlying operating system has a 2GB maximum file size limit.

Understand Scripting

This chapter requires an understanding of the basic Unix shell and DOS batch programming techniques described in Chapter 2, "Building Blocks." That chapter explained some of the common routines used across most of the scripts presented here. This book could have provided much simpler scripts; but, for the sake of standardization across all scripts and the reusability of individual sections in your own scripts, the focus is on providing comprehensive scripts rather than temporary fixes. After you understand one script, it is easy to follow the flow of the rest.

Backup and Recovery under Unix

The backup and recovery scripts discussed here have been tested under Sun Solaris 2.x, HP-UX 11.x, and AIX 4.x. The use of a particular command is discussed where there is a difference between these operating systems. They might also work in higher versions of the same operating systems. These scripts are written based on the common ground among these three Unix flavors. However, I advise that you test the scripts in your environment, for both backup and recovery, before using them as regular backup scripts. This testing not only gives you confidence in the scripts, it also gives you an understanding of how to use them in case a recovery is needed, and peace of mind when a crisis hits.

Backup Scripts for HP-UX, Sun Solaris, and AIX

The backup scripts provided here work for HP-UX, Sun Solaris, and AIX with one slight modification. The scripts use v$parameter and v$controlfile to get the user dump destination and control file information. Because the dollar sign ($) is a special character to the Unix shell, you have to precede it with a backslash (\) to tell the shell to treat it as a regular character. However, the number of backslashes needed differs between Unix flavors: AIX and HP-UX need one backslash, and Sun OS needs two.

Sun OS 5.x needs two \\
AIX 4.x needs one \
HP-UX 11.x needs one \

These scripts are presented in a modular fashion. Each script consists of a number of small functions and a main section. Each function is designed to meet a specific objective so that it is easy to understand and modify. These small functions are reusable and can be used in the design of your own scripts. If you want to change a script to fit your unique needs, you can do so easily in the function where you want the change, without affecting the whole script. After the backup is complete, check the backup status by reviewing the log and error files generated by the scripts.
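As an illustration of the backslash escaping described above, a v$controlfile query embedded in a backtick command substitution might look like the following; use one backslash on AIX 4.x and HP-UX 11.x and two on Sun OS 5.x, and treat the exact form as something to verify on your own platform.

# AIX / HP-UX: one backslash before the dollar sign
CTLFILES=`sqlplus -s "/ as sysdba" <<EOF
set heading off feedback off pagesize 0
select name from v\$controlfile;
exit
EOF`

# Sun OS 5.x: two backslashes before the dollar sign
CTLFILES=`sqlplus -s "/ as sysdba" <<EOF
set heading off feedback off pagesize 0
select name from v\\$controlfile;
exit
EOF`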

Cold Backup

The cold backup program (see Listing 3.1) performs a cold backup of the database under the Unix environment. The script takes two input parameters—SID and OWNER. SID is the instance to be backed up, and OWNER is the Unix account under which Oracle is running. Figure 3.3 describes the functionality of the cold backup program; each box represents a corresponding function in the program.

Figure 3.3 Functions in cold backup script for Unix.
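For example, to back up an instance named ORCL that runs under the oracle Unix account (both names are placeholders), the script would be invoked as:

$coldbackup_ux ORCL oracle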

Listing 3.1 coldbackup_ux

#####################################################################
# PROGRAM NAME: coldbackup_ux
# PURPOSE: Performs cold backup of the database. The database
#          should be online when you start the script. It will shut
#          down, take a cold backup, and bring the database up again.

# USAGE: $coldbackup_ux SID OWNER
# INPUT PARAMETERS: SID (Instance name), OWNER (Owner of instance)
#####################################################################

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_verify(): Verify that database is online
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_verify(){
STATUS=`ps -fu ${ORA_OWNER} | grep -v grep | grep ora_pmon_${ORA_SID}`
funct_chk_ux_cmd_stat "Database is down for given SID($ORA_SID), Owner($ORA_OWNER). Can't generate files to be backed up"
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_verify_shutdown(): Verify that database is down
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_verify_shutdown(){
STATUS=`ps -fu ${ORA_OWNER} | grep -v grep | grep ora_pmon_${ORA_SID}`
if [ $? = 0 ]; then
   echo "`date`" >> $LOGFILE
   echo "COLDBACKUP_FAIL: ${ORA_SID}, Database is up, can't make coldbackup if the database is online." | tee -a ${BACKUPLOGFILE} >> $LOGFILE
   exit 1
fi
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_shutdown_i(): Shutdown database in Immediate mode
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_shutdown_i(){
${ORACLE_HOME}/bin/sqlplus -s "/ as sysdba" <<EOF >> ${BACKUPLOGFILE}
shutdown immediate
EOF
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_backup_controlfile(): Start the restore file and note the control files
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_backup_controlfile(){
echo "################ Control Files " > ${RESTOREFILE}
echo "# Use your own discretion to copy control file, not advised unless required..." >> ${RESTOREFILE}
echo " End of backup of control file" >> ${BACKUPLOGFILE}
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_cold_backup(): Perform cold backup
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_cold_backup(){
#Copy datafiles to backup location
echo "############### Data Files " >> ${RESTOREFILE}
for datafile in `echo $datafile_list`
do
   echo "Copying datafile ${datafile} ..." >> ${BACKUPLOGFILE}
   #Prepare a restore file to restore coldbackup in case a restore is necessary
   echo cp -p ${DATAFILE_DIR}/`echo $datafile | awk -F"/" '{print $NF}'` $datafile >> ${RESTOREFILE}
   cp -p ${datafile} ${DATAFILE_DIR}
   funct_chk_ux_cmd_stat "Failed to copy datafile file to backup location"
done
#Copy current init.ora file to backup directory
echo " Copying current init.ora file" >> ${BACKUPLOGFILE}
cp -p ${init_file} ${INITFILE_DIR}/init${ORA_SID}.ora
funct_chk_ux_cmd_stat "Failed to copy init.ora file to backup location"
echo "################ Init.ora File " >> ${RESTOREFILE}
echo cp -p ${INITFILE_DIR}/init${ORA_SID}.ora ${init_file} >> ${RESTOREFILE}
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_parm(): Check for input parameters
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_parm() {
if [ ${NARG} -ne 2 ]; then
   echo "COLDBACKUP_FAIL: ${ORA_SID}, Not enough arguments passed"
   exit 1
fi
}

#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
# funct_chk_bkup_dir(): Create backup directories if not already existing
#::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
funct_chk_bkup_dir() {
RESTOREFILE_DIR="${BACKUPDIR}/restorefile_dir"
BACKUPLOG_DIR="${BACKUPDIR}/backuplog_dir"
DATAFILE_DIR="${BACKUPDIR}/datafile_dir"
CONTROLFILE_DIR="${BACKUPDIR}/controlfile_dir"
REDOLOG_DIR="${BACKUPDIR}/redolog_dir"
ARCLOG_DIR="${BACKUPDIR}/arclog_dir"
INITFILE_DIR="${BACKUPDIR}/initfile_dir"
BACKUPLOGFILE="${BACKUPLOG_DIR}/backup_log_${ORA_SID}"
RESTOREFILE="${RESTOREFILE_DIR}/restorefile_${ORA_SID}"
LOGFILE="${LOGDIR}/${ORA_SID}.log"
if [ ! -d ${RESTOREFILE_DIR} ]; then mkdir -p ${RESTOREFILE_DIR}; fi
if [ ! -d ${BACKUPLOG_DIR} ]; then mkdir -p ${BACKUPLOG_DIR}; fi
if [ ! -d ${DATAFILE_DIR} ]; then mkdir -p ${DATAFILE_DIR}; fi
if [ ! -d ${CONTROLFILE_DIR} ]; then mkdir -p ${CONTROLFILE_DIR}; fi
if [ ! -d ${REDOLOG_DIR} ]; then mkdir -p ${REDOLOG_DIR}; fi
if [ ! -d ${ARCLOG_DIR} ]; then mkdir -p ${ARCLOG_DIR}; fi
if [ ! -d ${INITFILE_DIR} ]; then mkdir -p ${INITFILE_DIR}; fi
if [ ! -d ${DYN_DIR} ]; then mkdir -p ${DYN_DIR}; fi
if [ ! -d ${LOGDIR} ]; then mkdir -p ${LOGDIR}; fi
# Remove old backup
rm -f ${RESTOREFILE_DIR}/*
rm -f ${BACKUPLOG_DIR}/*
rm -f ${DATAFILE_DIR}/*
rm -f ${CONTROLFILE_DIR}/*
rm -f ${REDOLOG_DIR}/*
rm -f ${ARCLOG_DIR}/*
rm -f ${INITFILE_DIR}/*
echo "${JOBNAME}: coldbackup of ${ORA_SID} begun on `date +\"%c\"`" > ${BACKUPLOGFILE}
}

#:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: # funct_get_vars(): Get environment variables #:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: funct_get_vars(){ ORA_HOME=´sed /#/d ${ORATABDIR}|grep -i ${ORA_SID}|nawk -F ":" '{print $2}'´ ORA_BASE=´echo ${ORA_HOME}|nawk -F "/" '{for (i=2; i %ERR_FILE% echo. > %LOG_FILE% (echo Cold Backup started & date/T & time/T) >> %LOG_FILE% echo Parameter Checking Completed >> %LOG_FILE% REM :::::::::::::::::::: End Parameter Checking Section REM :::::::::::::::::::: Begin Create Dynamic files Section echo. >%CFILE% echo set termout off heading off feedback off >>%CFILE% echo set linesize 300 pagesize 0 >>%CFILE% echo set serveroutput on size 1000000 >>%CFILE% echo. >>%CFILE% echo spool %BACKUP_DIR%\log\coldbackup_list.bat >>%CFILE% echo. >>%CFILE% echo exec dbms_output.put_line('@echo off' ); >>%CFILE% echo. >>%CFILE% echo exec dbms_output.put_line('REM ******Data files' ); >>%CFILE% echo select 'copy '^|^| file_name^|^| ' %BKP_DIR%\data ' >>%CFILE% echo from dba_data_files order by tablespace_name; >>%CFILE% echo. >>%CFILE% echo exec dbms_output.put_line('REM ******Control files' ); >>%CFILE% echo select 'copy '^|^| name^|^| ' %BKP_DIR%\control ' >>%CFILE% echo from v$controlfile order by name; >>%CFILE% echo. >>%CFILE% echo exec dbms_output.put_line('REM ******Init.ora file ' ); >>%CFILE% echo select ' copy %INIT_FILE% %BKP_DIR%\control ' >>%CFILE% echo from dual; >>%CFILE% echo exec dbms_output.put_line('exit;'); >>%CFILE% echo spool off >>%CFILE% echo exit >>%CFILE% echo Dynamic files Section Completed >> %LOG_FILE% REM :::::::::::::::::::: End Create Dynamic files Section

REM :::::::::::::::::::: Begin ColdBackup Section
%ORA_HOME%\sqlplus -s %CONNECT_USER% @%CFILE%
%ORA_HOME%\sqlplus -s %CONNECT_USER% @shutdown_i_nt.sql
%ORA_HOME%\sqlplus -s %CONNECT_USER% @startup_r_nt.sql
%ORA_HOME%\sqlplus -s %CONNECT_USER% @shutdown_n_nt.sql

REM Copy the files to backup location
start/b %BACKUP_DIR%\log\coldbackup_list.bat 1>> %LOG_FILE% 2>> %ERR_FILE%
%ORA_HOME%\sqlplus -s %CONNECT_USER% @startup_n_nt.sql
(echo ColdBackup Completed Successfully & date/T & time/T) >> %LOG_FILE%

(echo ColdBackup Completed Successfully & date/T & time/T) >> %LOGFILE%
goto end
REM :::::::::::::::::::: End ColdBackup Section

REM :::::::::::::::::::: Begin Error handling section
:usage
echo Error, Usage: coldbackup_nt.bat SID
goto end
:backupdir
echo Error creating Backup directory structure >> %ERR_FILE%
(echo COLDBACKUP_FAIL:Error creating Backup directory structure & date/T & time/T) >> %LOGFILE%
REM :::::::::::::::::::: End Error handling section

REM :::::::::::::::::::: Cleanup Section
:end
set ORA_HOME=
set ORACLE_SID=
set CONNECT_USER=
set BACKUP_DIR=
set INIT_FILE=
set CFILE=
set ERR_FILE=
set LOG_FILE=
set BKP_DIR=

Cold Backup Script for Windows NT Checklist

• Check that ORA_HOME, BACKUP_DIR, and TOOLS are set to correct values according to your directory structure. These variables are highlighted in the script.
• Verify that CONNECT_USER is set to the correct username and password.
• Define the INIT_FILE variable to the location of the Init.ora file.
• Be sure that the user running the program has Write access to the backup directories.
• When you run the program, pass SID as a parameter.

Cold Backup under Windows NT Troubleshooting and Status Check

The backup log file defined by LOG_FILE contains detailed information about each step of the backup process. This is a very good place to start investigating why a backup has failed or to look for related errors. This file also records the start and end times of the backup. ERR_FILE has error information. A single line about the success or failure of the backup is appended to the SID.log file every time a backup is performed. This file is located under the directory defined by the LOGDIR variable. The messages for a cold backup are 'COLDBACKUP_FAIL', if the cold backup failed, and 'ColdBackup Completed Successfully', if the backup completes successfully. You can schedule automatic backups using the 'at' command, as shown in the following:

at 23:00 "c:\backup\coldbackup_nt.bat ORCL"

This command runs the backup at 23:00 hours on the current date.

at 23:00 /every:M,T,W,Th,F "c:\backup\coldbackup_nt.bat ORCL"

This command runs the backup at 23:00 hours every Monday, Tuesday, Wednesday, Thursday, and Friday.

The "Create Dynamic Files" section in the coldbackup_nt.bat program creates the coldbackup.sql file (see Listing 3.10) under the log directory. coldbackup.sql is called from coldbackup_nt.bat and generates a list of data, control, and redo log files to be backed up from the database. A sample coldbackup.sql is shown in Listing 3.10 for your understanding. The contents of this file are derived based on the structure of the database.

Listing 3.10 coldbackup.sql

set termout off heading off feedback off
set linesize 300 pagesize 0
set serveroutput on size 1000000
spool c:\backup\orcl\cold\log\coldbackup_list.bat
exec dbms_output.put_line('@echo off' );
exec dbms_output.put_line('REM ******Data files' );
select 'copy '|| file_name|| ' c:\backup\orcl\cold\data '
from dba_data_files order by tablespace_name;
exec dbms_output.put_line('REM ******Control files' );
select 'copy '|| name|| ' c:\backup\orcl\cold\control '
from v$controlfile order by name;
exec dbms_output.put_line('REM ******Init.ora file ' );
select ' copy c:\oracle\admin\orcl\pfile\init.ora c:\backup\orcl\cold\control '
from dual;
exec dbms_output.put_line('exit;');
spool off
exit

When the coldbackup.sql file is called from the coldbackup_nt.bat program, it spools output to the coldbackup_list.bat DOS batch file (see Listing 3.11). This file has the commands necessary for performing the cold backup. This is only a sample file. Note that in the file the data, control, and Init.ora files are copied to their respective backup directories.

Listing 3.11 coldbackup_list.bat

@echo off
REM ******Data files
copy C:\ORADATA\DSGN01.DBF c:\backup\orcl\cold\data
copy C:\ORADATA\INDX01.DBF c:\backup\orcl\cold\data
copy C:\ORADATA\OEM01.DBF c:\backup\orcl\cold\data
copy C:\ORADATA\RBS01.DBF c:\backup\orcl\cold\data
copy C:\ORADATA\SYSTEM01.DBF c:\backup\orcl\cold\data
copy C:\ORADATA\TEMP01.DBF c:\backup\orcl\cold\data
copy C:\ORADATA\USERS01.DBF c:\backup\orcl\cold\data

REM ******Control files
copy C:\ORADATA\CONTROL01.CTL c:\backup\orcl\cold\control
copy C:\ORADATA\CONTROL02.CTL c:\backup\orcl\cold\control

REM ******Init.ora file
copy c:\oracle\admin\orcl\pfile\init.ora c:\backup\orcl\cold\control
exit;

Hot Backup

The hot backup program (see Listing 3.12) performs a hot backup of a database under the Windows NT environment. The hot backup script takes SID, the instance to be backed up, as the input parameter.

Listing 3.12 hotbackup_nt.bat

@echo off
REM #####################################################################
REM PROGRAM NAME: hotbackup_nt.bat
REM PURPOSE: This utility performs hot backup of
REM          the database on Windows NT
REM USAGE: c:\>hotbackup_nt.bat SID
REM INPUT PARAMETERS: SID (Instance name)
REM #####################################################################
REM :::::::::::::::::::: Begin Declare Variables Section
set ORA_HOME=c:\oracle\ora81\bin
set CONNECT_USER="/ as sysdba"
set ORACLE_SID=%1
set BACKUP_DIR=c:\backup\%ORACLE_SID%\hot
set INIT_FILE=c:\oracle\admin\orcl\pfile\init.ora
set ARC_DEST=c:\oracle\oradata\orcl\archive
set TOOLS=c:\oracomn\admin\my_dba
set LOGDIR=%TOOLS%\localog
set LOGFILE=%LOGDIR%\%ORACLE_SID%.log
set HFILE=%BACKUP_DIR%\log\hotbackup.sql
set ERR_FILE=%BACKUP_DIR%\log\herrors.log
set LOG_FILE=%BACKUP_DIR%\log\hbackup.log
set BKP_DIR=%BACKUP_DIR%
REM :::::::::::::::::::: End Declare Variables Section

REM :::::::::::::::::::: Begin Parameter Checking Section
if "%1" == "" goto usage

REM Create backup directories if they do not already exist
if not exist %BACKUP_DIR%\data mkdir %BACKUP_DIR%\data
if not exist %BACKUP_DIR%\control mkdir %BACKUP_DIR%\control
if not exist %BACKUP_DIR%\arch mkdir %BACKUP_DIR%\arch
if not exist %BACKUP_DIR%\log mkdir %BACKUP_DIR%\log
if not exist %LOGDIR% mkdir %LOGDIR%

REM Check to see that there were no create errors
if not exist %BACKUP_DIR%\data goto backupdir
if not exist %BACKUP_DIR%\control goto backupdir
if not exist %BACKUP_DIR%\arch goto backupdir
if not exist %BACKUP_DIR%\log goto backupdir

REM Deletes previous backup. Make sure you have it on tape.
del/q %BACKUP_DIR%\data\*
del/q %BACKUP_DIR%\control\*
del/q %BACKUP_DIR%\arch\*
del/q %BACKUP_DIR%\log\*

echo. > %ERR_FILE%
echo. > %LOG_FILE%
(echo Hot Backup started & date/T & time/T) >> %LOG_FILE%
echo Parameter Checking Completed >> %LOG_FILE%
REM :::::::::::::::::::: End Parameter Checking Section

REM :::::::::::::::::::: Begin Create Dynamic files Section
echo. >%HFILE%
echo set termout off heading off feedback off >>%HFILE%
echo set linesize 300 pagesize 0 >>%HFILE%
echo set serveroutput on size 1000000 >>%HFILE%
echo spool %BACKUP_DIR%\log\hotbackup_list.sql >>%HFILE%
echo Declare >>%HFILE%
echo cursor c1 is select distinct tablespace_name from dba_data_files order by tablespace_name; >>%HFILE%
echo cursor c2( ptbs varchar2) is select file_name from dba_data_files where tablespace_name = ptbs order by file_name; >>%HFILE%
echo Begin >>%HFILE%
echo dbms_output.put_line('set termout off heading off feedback off'); >>%HFILE%
echo dbms_output.put_line(chr(10) ); >>%HFILE%
echo dbms_output.put_line('host REM ******Data files' ); >>%HFILE%
echo for tbs in c1 loop >>%HFILE%
echo dbms_output.put_line(' alter tablespace '^|^| tbs.tablespace_name ^|^|' begin backup;'); >>%HFILE%
echo for dbf in c2(tbs.tablespace_name) loop >>%HFILE%
echo dbms_output.put_line(' host copy '^|^|dbf.file_name^|^|' %BKP_DIR%\data 1^>^> %LOG_FILE% 2^>^> %ERR_FILE%'); >>%HFILE%
echo end loop; >>%HFILE%
echo dbms_output.put_line(' alter tablespace '^|^|tbs.tablespace_name ^|^|' end backup;'); >>%HFILE%
echo end loop; >>%HFILE%
echo dbms_output.put_line(chr(10) ); >>%HFILE%
echo dbms_output.put_line('host REM ******Control files ' ); >>%HFILE%
echo dbms_output.put_line(' alter database backup controlfile to '^|^| ''^|^|'%BKP_DIR%\control\coltrol_file.ctl'^|^|''^|^|';'); >>%HFILE%
echo dbms_output.put_line(' alter database backup controlfile to trace;'); >>%HFILE%
echo dbms_output.put_line(chr(10) ); >>%HFILE%
echo dbms_output.put_line('host REM ******Init.ora file ' ); >>%HFILE%
echo dbms_output.put_line(' host copy %INIT_FILE% %BKP_DIR%\control 1^>^> %LOG_FILE% 2^>^> %ERR_FILE%'); >>%HFILE%
echo dbms_output.put_line(chr(10) ); >>%HFILE%
echo dbms_output.put_line('host REM ******Archivelog files' ); >>%HFILE%
echo dbms_output.put_line(' alter system switch logfile;'); >>%HFILE%
echo dbms_output.put_line(' alter system archive log stop;'); >>%HFILE%
echo dbms_output.put_line('host move %ARC_DEST%\* %BKP_DIR%\arch 1^>^> %LOG_FILE% 2^>^> %ERR_FILE%' ); >>%HFILE%
echo dbms_output.put_line(' alter system archive log start;'); >>%HFILE%
echo dbms_output.put_line('exit;'); >>%HFILE%
echo End; >>%HFILE%
echo / >>%HFILE%
echo spool off >>%HFILE%
echo exit; >>%HFILE%

echo Dynamic files Section Completed >> %LOG_FILE%
REM :::::::::::::::::::: End Create Dynamic files Section

REM :::::::::::::::::::: Begin HotBackup Section
%ORA_HOME%\sqlplus -s %CONNECT_USER% @%HFILE%
REM Copy the files to backup location
%ORA_HOME%\sqlplus -s %CONNECT_USER% @%BACKUP_DIR%\log\hotbackup_list.sql
(echo HotBackup Completed Successfully & date/T & time/T) >> %LOG_FILE%
(echo HotBackup Completed Successfully & date/T & time/T) >> %LOGFILE%
goto end
REM :::::::::::::::::::: End HotBackup Section

REM :::::::::::::::::::: Begin Error handling section
:usage
echo Error, Usage: hotbackup_nt.bat SID
goto end
:backupdir
echo Error creating Backup directory structure >> %ERR_FILE%
(echo HOTBACKUP_FAIL:Error creating Backup directory structure & date/T & time/T) >> %LOGFILE%
REM :::::::::::::::::::: End Error handling section

REM :::::::::::::::::::: Cleanup Section
:end
set ORA_HOME=
set ORACLE_SID=
set CONNECT_USER=
set BACKUP_DIR=
set INIT_FILE=
set ARC_DEST=
set HFILE=
set ERR_FILE=
set LOG_FILE=
set BKP_DIR=

The hot backup program's functionality can be shown with a diagram similar to the one for the cold backup. The sections and their purposes in the program are the same as for the cold backup.

Hot Backup Script under Windows NT Checklist

• Check that ORA_HOME, BACKUP_DIR, and TOOLS are set to the correct values according to your directory structure. These variables are highlighted in the script.
• Verify that CONNECT_USER is set to the correct username and password.
• Define the INIT_FILE variable to the location of the Init.ora file.
• Define the ARC_DEST variable to the location of the archive destination.
• Be sure that the user running the program has Write access to the backup directories.
• When you run the program, pass SID as a parameter.

Hot Backup under Windows NT Troubleshooting and Status Check

The backup log file defined by LOG_FILE contains detailed information about each step of the backup process. This is a very good place to start investigating why a backup has failed or to look for related errors. This file also records the start and end times of the backup. ERR_FILE has error information. A single line about the success or failure of the backup is appended to the SID.log file every time a backup is performed. This file is located under the directory defined by the LOGDIR variable. The messages for a hot backup are 'HOTBACKUP_FAIL', if the hot backup failed, and 'HotBackup Completed Successfully', if the backup completes successfully.

The "Create Dynamic Files" section in hotbackup_nt.bat creates the hotbackup.sql file (see Listing 3.13) under the log directory. This script generates the list of tablespaces and of the data files, control files, and archive log files to be backed up from the database. It is called from the hotbackup_nt.bat program.

Listing 3.13 hotbackup.sql

set termout off heading off feedback off
set linesize 300 pagesize 0
set serveroutput on size 1000000
spool c:\backup\orcl\hot\log\hotbackup_list.sql
Declare
cursor c1 is select distinct tablespace_name from dba_data_files order by tablespace_name;
cursor c2( ptbs varchar2) is select file_name from dba_data_files where tablespace_name = ptbs order by file_name;
Begin
dbms_output.put_line('set termout off heading off feedback off');
dbms_output.put_line(chr(10) );
dbms_output.put_line('host REM ******Data files' );
for tbs in c1 loop
dbms_output.put_line(' alter tablespace '|| tbs.tablespace_name ||' begin backup;');
for dbf in c2(tbs.tablespace_name) loop
dbms_output.put_line(' host copy '||dbf.file_name||' c:\backup\orcl\hot\data 1>> hbackup.log 2>> herrors.log');
end loop;
dbms_output.put_line(' alter tablespace '||tbs.tablespace_name ||' end backup;');
end loop;
dbms_output.put_line(chr(10) );
dbms_output.put_line('host REM ******Control files ' );
dbms_output.put_line(' alter database backup controlfile to '||''''||'c:\backup\orcl\hot\control\coltrol_file.ctl'||''''||';');
dbms_output.put_line(' alter database backup controlfile to trace;');
dbms_output.put_line(chr(10) );
dbms_output.put_line('host REM ******Init.ora file ' );
dbms_output.put_line('host copy c:\oracle\admin\orcl\pfile\init.ora c:\backup\orcl\hot\control 1>> hbackup.log 2>> herrors.log');
dbms_output.put_line(chr(10) );
dbms_output.put_line('host REM ******Archivelog files' );
dbms_output.put_line(' alter system switch logfile;');
dbms_output.put_line(' alter system archive log stop;');
dbms_output.put_line('host move c:\oracle\oradata\orcl\archive\* c:\backup\orcl\hot\arch 1>> hbackup.log 2>> herrors.log' );
dbms_output.put_line(' alter system archive log start;');
dbms_output.put_line('exit;');
End;
/
spool off
exit;

The hotbackup.sql file is called from hotbackup_nt.bat and spools its output to the hotbackup_list.sql SQL file (see Listing 3.14). This file has the commands necessary for performing a hot backup. This is only a sample file. Note in the file that the data, control, archive log, and Init.ora files are copied to their respective backup directories. First, it puts a tablespace into backup mode, copies the corresponding files to the backup location, and then turns off backup mode for that tablespace. This process is repeated for each tablespace, and each copy command writes the status of the copy operation to hbackup.log and reports any errors to the herrors.log file. Listing 3.14 is generated based on the structure of the database. In a real environment, the database structure changes as more data files or tablespaces are added. Because of this, it is important to generate the backup commands dynamically, as shown in hotbackup_list.sql. It performs the actual backup and is called from hotbackup_nt.bat.

Listing 3.14 hotbackup_list.sql

set termout off heading off feedback off
host REM ******Data files
alter tablespace DESIGNER begin backup;
host copy C:\ORADATA\DSGN01.DBF c:\backup\orcl\hot\data 1>> hbackup.log 2>> herrors.log
alter tablespace DESIGNER end backup;
alter tablespace DESIGNER_INDX begin backup;
host copy C:\ORADATA\DSGN_INDX01.DBF c:\backup\orcl\hot\data 1>> hbackup.log 2>> herrors.log
alter tablespace DESIGNER_INDX end backup;
alter tablespace INDX begin backup;
host copy C:\ORADATA\INDX01.DBF c:\backup\orcl\hot\data 1>> hbackup.log 2>> herrors.log
alter tablespace INDX end backup;
alter tablespace OEM_REPOSITORY begin backup;
host copy C:\ORADATA\OEMREP01.DBF c:\backup\orcl\hot\data 1>> hbackup.log 2>> herrors.log
alter tablespace OEM_REPOSITORY end backup;
host REM ******Control files
alter database backup controlfile to 'c:\backup\orcl\hot\control\coltrol_file.ctl';
alter database backup controlfile to trace;
host REM ******Init.ora file
host copy c:\oracle\admin\orcl\pfile\init.ora c:\backup\orcl\hot\control 1>> hbackup.log 2>> herrors.log
host REM ******Archivelog files
alter system switch logfile;
alter system archive log stop;
host move c:\oracle\oradata\orcl\archive\* c:\backup\orcl\hot\arch 1>> hbackup.log 2>> herrors.log
alter system archive log start;
exit;

Export

The export program (see Listing 3.15) performs a full export of the database under a Windows NT environment. The export script takes SID, the instance to be backed up, as the input parameter.

Listing 3.15 export_nt.bat

@echo off
REM #####################################################################
REM PROGRAM NAME: export_nt.bat
REM PURPOSE: This utility performs a full export of
REM          the database on Windows NT
REM USAGE: c:\>export_nt.bat SID
REM INPUT PARAMETERS: SID (Instance name)
REM #####################################################################
REM :::::::::::::::::::: Begin Declare Variables Section
set ORA_HOME=c:\oracle\ora81\bin
set ORACLE_SID=%1
set CONNECT_USER=system/manager
set BACKUP_DIR=c:\backup\%ORACLE_SID%\export
set TOOLS=c:\oracomn\admin\my_dba
set LOGDIR=%TOOLS%\localog
set LOGFILE=%LOGDIR%\%ORACLE_SID%.log
REM :::::::::::::::::::: End Declare Variables Section

REM :::::::::::::::::::: Begin Parameter Checking Section
if "%1" == "" goto usage

REM Create backup directories if they do not already exist
if not exist %BACKUP_DIR% mkdir %BACKUP_DIR%
if not exist %LOGDIR% mkdir %LOGDIR%

REM Check to see that there were no create errors
if not exist %BACKUP_DIR% goto backupdir

REM Deletes previous backup. Make sure you have it on tape.
del/q %BACKUP_DIR%\*
REM :::::::::::::::::::: End Parameter Checking Section

REM :::::::::::::::::::: Begin Export Section
%ORA_HOME%\exp %CONNECT_USER% parfile=export_par.txt
(echo Export Completed Successfully & date/T & time/T) >> %LOGFILE%
goto end
REM :::::::::::::::::::: End Export Section

REM :::::::::::::::::::: Begin Error handling section
:usage
echo Error, Usage: export_nt.bat SID
goto end
:backupdir
echo Error creating Backup directory structure
(echo EXPORT_FAIL:Error creating Backup directory structure & date/T & time/T) >> %LOGFILE%
REM :::::::::::::::::::: End Error handling section

REM :::::::::::::::::::: Cleanup Section
:end
set ORA_HOME=
set ORACLE_SID=
set CONNECT_USER=
set BACKUP_DIR=

This program performs an export of the database by using the parameter file specified by export_par.txt. Listing 3.16 shows a sample parameter file that performs a full export of the database. You can modify the parameter file to suit your requirements.

Listing 3.16 export_par.txt

file=%BACKUP_DIR%\export.dmp
log=%BACKUP_DIR%\export.log
full=y
compress=n
consistent=y

Export Script under Windows NT Checklist

• Check that ORA_HOME, BACKUP_DIR, and TOOLS are set to correct values according to your directory structure. These variables are highlighted in the program.
• Verify that CONNECT_USER is set to the correct username and password.
• Be sure that the user running the program has Write access to the backup directories.
• Edit the parameter file to your specific requirements. Specify the full path to your parameter file in the program.
• When you run the program, pass SID as a parameter.

Export under Windows NT Troubleshooting and Status Check

The log file specified in the parameter file contains detailed information about each step of the export process. This is a very good place to start investigating why an export has failed or to look for related errors. A single line about the success or failure of the export is appended to the SID.log file every time an export is performed. This file is located under the directory defined by the LOGDIR variable. The messages for an export are 'EXPORT_FAIL', if the export failed, and 'Export Completed Successfully', if the export completes successfully.

Recovery Principles

Recovery principles are the same regardless of whether you are in a Unix or Windows NT environment. The following are general guidelines for recovery using a cold backup, hot backup, and export.

Definitions

• Control File—The control file contains records that describe and maintain information about the physical structure of a database. The control file is updated continuously during database use and must be available for writing whenever the database is open. If the control file is not accessible, the database will not open.
• System Change Number (SCN)—The system change number is a clock value for the database that describes a committed version of the database. The SCN functions as a sequence generator for the database and controls concurrency and redo record ordering. Think of the SCN as a timestamp that helps ensure transaction consistency.
• Checkpoint—A checkpoint is a data structure in the control file that defines a consistent point of the database across all threads of a redo log. Checkpoints are similar to SCNs, and they also describe which threads exist at that SCN. Checkpoints are used by recovery to ensure that Oracle starts reading the log threads for the redo application at the correct point. For a parallel server, each checkpoint has its own redo information.
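If you want to see these values in your own database, the checkpoint SCN recorded in the control file and in each data file header can be queried; this is a small illustrative query, not part of the chapter's scripts.

SQL>select checkpoint_change# from v$database;
SQL>select name, checkpoint_change# from v$datafile;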

Media Recovery Commands

To perform either a complete media recovery or an incomplete media recovery, you need to be familiar with the following three media recovery commands.

• RECOVER DATABASE—This command performs a media recovery on all the data files that require the application of redo. It can be used only when the database is mounted but not open. This command is generally used when a system data file is lost.
• RECOVER TABLESPACE tablespace_name—This command performs a media recovery on all the data files in the tablespaces listed. The database must be mounted and open, and the tablespace in question must be offline to perform the media recovery. To recover the tablespace, mount the database first, take the damaged data file offline, open the database, and take the tablespace offline. Then issue the RECOVER TABLESPACE tablespace_name command and put the tablespace online when the recovery is complete.
• RECOVER DATAFILE 'filename'—This command performs a recovery on the listed data files. The database can be open or closed. If the database is open, data file recovery can only recover offline files. To recover the data file in question, mount the database and take the troubled data file offline, open the database, issue the RECOVER DATAFILE 'filename' command, and put the data file online. This command is generally used when a non-system data file is lost.

Performing Recovery, Where to Start?

You are a new DBA, and you get a call from the project manager saying that the users are not able to connect to the database. As a first step, try to establish a connection yourself as a DBA, as shown below. If the connection succeeds, try to connect as a regular user and see whether you receive any errors during connection, because some errors that are seen by regular users do not show up when you connect as INTERNAL or SYSDBA (such as maximum sessions reached).

$sqlplus user/pwd

Now assume you have determined that you are not able to connect to the database. As a second step, see whether the background processes are running by using the following command:

$ps -ef | grep -i ORCL

This should list the processes that are running. If it does not list any processes, you know the database is down. As a third step, check the alert log file for any errors. The alert log file is located under the directory defined by BACKGROUND_DUMP_DEST in the Init.ora file, and it lists any errors encountered by the database. If you see any errors, note the time of the error, the error number, and the error message. If you do not see any errors, start up the database (sometimes it reports an error only when you try to start it). If the database starts, that is wonderful! If it doesn't start, it will generally report the error onscreen and also write it to the alert log file; check the alert log again for more information.

Now suppose you determined from the error that the database cannot find one of its data files. As a fourth step, inform the project manager that somebody has caused a problem in the database, and try to find out what happened (a hard disk problem, or perhaps somebody deleted the file). Limit this research based on the time available. As a fifth step, determine what kinds of backups you have taken recently and see which one is most beneficial for recovering as much data as possible. This depends on the types of backups your site employs to protect against database crashes. If you have a hot backup mechanism in place, you can be confident of recovering all or most of the data. If you have only an export or cold backup mechanism in place, the data changes since the time of the last backup will be lost. As a sixth step, follow the instructions in this chapter for your recovery scenario.
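The first three checks can be wrapped in a tiny diagnostic sketch. The instance name ORCL and the alert log path are placeholders; check BACKGROUND_DUMP_DEST in your Init.ora file for the real location.

#!/bin/sh
# quick health check: connection, background processes, recent alert log entries
ORACLE_SID=ORCL; export ORACLE_SID
echo "select 'connected ok' from dual;" | sqlplus -s "/ as sysdba"
ps -ef | grep ora_pmon_$ORACLE_SID | grep -v grep
tail -50 /u01/app/oracle/admin/$ORACLE_SID/bdump/alert_$ORACLE_SID.log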

Recovery Using Cold Backup

To restore a full database, do the following:

1. Shut down the database.

2. Copy all data files, control files, and redo log files from the backup location to the original location. Verify the owner and permissions of the files (Unix only).

3. Start up the database.
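A bare-bones restore sketch following these steps is shown below; the restore file written by the cold backup script, the data file locations, and the oracle:dba ownership are assumptions to adapt to your site.

#!/bin/sh
# restore a cold backup: database must be shut down first
sqlplus -s "/ as sysdba" <<EOF
shutdown immediate
exit
EOF
# the restore file contains one cp command per backed-up file
sh /u10/backup/restorefile_dir/restorefile_ORCL
# verify owner and permissions (Unix only)
chown oracle:dba /u01/oracle/*.dbf
sqlplus -s "/ as sysdba" <<EOF
startup
exit
EOF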

Recovery When a Data File Is Lost

To recover a database using a cold backup, just restore all the files from the backup location to their original locations and open the database. You can find the original physical locations in the trace file you generated as part of the backup. You cannot recover the transactions that occurred between the last backup and the point of failure—that information is lost.

Recovery When a Redo Log File Is Lost

To recover the database when a redo log file is lost or corrupted, clear the affected log group:

alter database clear logfile group 1;

where group 1 is the corrupted log group number. Alternatively, you can create a new control file and open the database in resetlogs mode (alter database open resetlogs). For this, the database needs to be started in NOMOUNT state (startup nomount) before the control file is recreated. The resetlogs option resets the redo log sequence numbering and recreates any missing log files. To create the new control file, you need to know the full structure of the database; we took a trace of the control file by using ALTER DATABASE BACKUP CONTROLFILE TO TRACE as part of the backup. Follow the steps explained in Chapter 10, "Database Maintenance and Reorganization," for creating a new control file.

Recovery When a Control File Is Lost

To recover the database in case of a lost control file, you simply recreate the control file (knowing the structure of the database from the control file trace) and open the database with resetlogs. Follow the steps explained in Chapter 10 for creating a new control file.

Recovery Using Hot Backup

When the database is running in ARCHIVELOG mode and online backups are being used, there are a variety of options for recovering the database, up to the point of failure, that provide maximum protection for your data. Recovery can be classified as follows:

• Complete media recovery
  • Closed database recovery
  • Open database/offline tablespace recovery
  • Open database/offline tablespace/individual data file recovery
• Incomplete media recovery
  • Cancel-based recovery
  • Time-based recovery
  • Change-based recovery

Incomplete Media Recovery

Incomplete media recovery is very useful as well. For example, if a user drops a table accidentally and comes to you for help, and you know the time the drop occurred, you can restore the database from a backup and, using the latest control file, roll forward by applying redo log files up to the point just before the accidental drop (time-based recovery).

Point in Time Recovery

There was a database corruption at 5 p.m. and the database crashed. When I tried to bring up the database, it opened and immediately died as soon as I executed any SQL statement. This crippled my ability to troubleshoot the problem. I restored the database from a backup and applied the archived redo log files up to just before the time of the crash, and the database came up fine. Remember, you have to use the latest control file to roll forward with the archived redo log files, so that Oracle knows which archived redo log files to apply.

Closed Database Recovery Steps

1. Restore the damaged files from the backup.

2. With the following command, mount the database but do not open it:

startup mount

3. Start media recovery as follows:

recover database

At this point, you will be prompted for the location of the archived redo log files, if necessary.

4. Open the database:

alter database open

Verify that the recovery worked.

Offline Tablespace Recovery Steps

1. Restore the damaged files from the backup.

2. With the following command, mount the database but do not open it:

startup mount

3. Take the corrupted data file offline:

alter database datafile '/u01/oradata/users01.dbf' offline;

4. Open the database as follows:

alter database open;

5. After the database is open, take the tablespace offline. For example, if the corrupted data file belongs to the USERS tablespace, use the following command:

alter tablespace users offline;

Here, the tablespace can be taken offline with a normal, temporary, or immediate priority. If possible, take the damaged tablespace offline with a normal or temporary priority to minimize the amount of recovery.

6. Start the recovery on the tablespace:

recover tablespace users;

At this point, you will be prompted for the location of the archived redo log files, if necessary.

7. Bring the tablespace online:

alter tablespace users online;

8. Verify that the recovery worked.

Offline Datafile Recovery Steps

1. Restore the damaged files from the backup.

2. Using the following command, mount the database but do not open it:

startup mount

3. Take the corrupted data file offline:

alter database datafile '/u01/oradata/users01.dbf' offline;

4. Open the database:

alter database open;

5. After the database is open, take the tablespace offline. For example, if the corrupted data file belongs to the USERS tablespace, use the following command:

alter tablespace users offline;

Here, the tablespace can be taken offline with a normal, temporary, or immediate priority. If possible, take the damaged tablespace offline with a normal or temporary priority to minimize the amount of recovery.

6. Start the recovery on the data file:

recover datafile '/u01/oradata/users01.dbf';

At this point, you will be prompted for the location of the archived redo log files, if necessary.

7. Bring the tablespace online:

alter tablespace users online;

8. Verify that the recovery worked.

Cancel-Based Recovery Steps

1. Restore the damaged files from the backup.

2. Using the following command, mount the database but do not open it:

startup mount

3. Start the recovery:

recover database until cancel [using backup controlfile]

At this point, you will be prompted for the location of the archived redo log files, if necessary. Enter cancel to cancel recovery after Oracle has applied the archived redo log file just prior to the point of corruption. If a backup control file or a recreated control file is being used with incomplete recovery, you should specify the using backup controlfile option. In cancel-based recovery, you cannot stop in the middle of applying a redo log file; you either apply a redo log file completely or you don't apply it at all. In time-based recovery, you can recover to a specific point in time, regardless of the archived redo log number.

4. Open the database:

alter database open resetlogs

Whenever an incomplete media recovery is performed or a backup control file is used for recovery, the database should be opened with the resetlogs option. The resetlogs option resets the redo log files.

5. Perform a full backup of the database. If you open the database with resetlogs, a full backup of the database should be performed immediately after recovery. Otherwise, you will not be able to recover changes made after you reset the logs.

6. Verify that the recovery worked.

Time-Based Recovery Steps

1. Restore the damaged files from the backup.

2. Using the following command, mount the database but do not open it:

startup mount

3. Start the recovery:

recover database until time [using backup controlfile]

For example:

recover database until time '1999-01-01:12:00:00' using backup controlfile

At this point, you will be prompted for the location of the archived redo log files, if necessary. Oracle automatically terminates the recovery when it reaches the specified time. If a backup control file or a recreated control file is being used with incomplete recovery, you should specify the using backup controlfile option.

4. Open the database:

alter database open resetlogs

Whenever an incomplete media recovery is performed or a backup control file is used, the database should be opened with the resetlogs option, so that it resets the log numbering.

5. Perform a full backup of the database. If the database is opened with resetlogs, a full backup of the database should be performed immediately after recovery. Otherwise, you will not be able to recover the changes made after you reset the logs.

6. Verify that the recovery worked.

Change-Based Recovery Steps

1. Restore the damaged files from the backup.

2. Using the following command, mount the database but do not open it:

startup mount

3. Start the recovery:

recover database until change [using backup controlfile]

For example:

recover database until change 2315 using backup controlfile

At this point, you will be prompted for the location of the archived redo log files, if necessary. Oracle automatically terminates the recovery when it reaches the specified system change number (SCN). If a backup control file or a recreated control file is being used with an incomplete recovery, you should specify the using backup controlfile option.

4. Open the database:

alter database open resetlogs

5. Perform a full backup of the database. If the database is opened with resetlogs, a full backup of the database should be performed immediately after recovery. Otherwise, you will not be able to recover the changes made after you reset the logs.

6. Verify that the recovery worked.

System Tablespace Versus a Non-System Tablespace Recovery When a system data file is lost or damaged, the only way to recover the database is by doing a closed database recovery using RECOVER DATABASE command. Checking for Files Needing Recovery The following command can be used to check the data file status. This command works when the database is mounted or open.

select name, status from v$datafile;

Before you actually start recovering the database, you can obtain information about the files that need recovery by executing the following query. The database must be mounted to execute it; the query also reports the associated error.

select b.name, a.error from v$recover_file a, v$datafile b where a.file# = b.file#;
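As a concrete contrast to the system-tablespace case described above, a non-system data file can often be recovered while the rest of the database stays open, provided the database is running in ARCHIVELOG mode. The following is a minimal sketch; the file name is purely illustrative.

SQL> alter database datafile '/u02/oracle/DEV/data/users.dbf' offline;
-- Restore the damaged file from backup at the operating-system level, then:
SQL> recover datafile '/u02/oracle/DEV/data/users.dbf';
SQL> alter database datafile '/u02/oracle/DEV/data/users.dbf' online;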

Recovery Using Import

The import utility imports the database from a dump file generated by the export utility. It is very useful for transferring data across platforms and for importing only specific objects or users, and it works whether archiving is turned on or off. Full database import performance can be improved by turning off archiving during the import. There are three levels of import:

• Full
• User-level
• Table-level

Full Import

A full import can be used to restore the database after a database crash. For example, suppose you have a full export of the database from yesterday and your database crashed this afternoon. You can use the import command to restore the database from the previous day's export (the export command itself is sketched after these steps). The restore steps are as follows.

1. Create a blank database—Refer to Chapter 10 for instructions on how to create a database.

2. Import the database—The following command performs a full database import, assuming that your export dump file name is export.dmp. The IGNORE=Y option ignores any create errors, and the DESTROY=N option does not overwrite the existing data files.

C:\>imp system/manager file=export.dmp log=import.log full=y ignore=y destroy=n

3. Verify the import log for any errors—With this import, the data changes made between your previous backup and the crash will be lost.
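The full export assumed in step 2 would typically have been produced beforehand with a command along these lines (file names are illustrative):

C:\>exp system/manager full=y file=export.dmp log=export.log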

Table-Level Import

A table-level import allows you to import specific objects without importing the whole database.

Example 1: Suppose one of the developers asks you to transfer the EMP and DEPT tables of user SCOTT from database ORCL to database TEST. You can use the following steps to transfer these two tables.

1. Set your ORACLE_SID to ORCL.

C:\>set ORACLE_SID=ORCL

This step sets the correct database to which to connect.

2. Perform an export of EMP and DEPT.

C:\>exp system/manager tables=(scott.emp,scott.dept) file=export.dmp log=export.log

This command exports the table data, constraints, and any indexes on the tables. Because the tables belong to owner scott, we need to precede them with the owner name in the export command. Verify the export.log file to make sure there are no errors in the export.

3. Connect to the TEST database.

SQL>Connect system/manager@TEST

4. Drop the tables if they already exist. If the TEST database already has EMP and DEPT tables, you can truncate or drop them as shown.

SQL>Truncate table EMP;
SQL>Truncate table DEPT;

Or

SQL>Drop table EMP;
SQL>Drop table DEPT;

5. Import the tables into TEST.

C:\>set ORACLE_SID=TEST
C:\>imp system/manager fromuser=scott touser=scott tables=(EMP,DEPT) file=export.dmp log=import.log ignore=Y

Check for any errors in the import log file.

Example 2: Suppose you walk into the office in the morning and a developer meets you in the hallway, saying that he accidentally dropped the SALES table. He wants to know whether you can do anything to restore it. You can, provided you have an export dump file from a previous backup. The steps to restore the table are as follows (assuming this happened in the TEST database):

1. Set your ORACLE_SID to the TEST database.

C:\>set ORACLE_SID=TEST

2. Import the table from the previous backup.

C:\>imp system/manager tables=(SCOTT.SALES) file=export.dmp log=import.log ignore=Y

This command imports the SALES table from the previous backup. After the import, check the import log file for any errors.
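Of the three import levels listed earlier, only the full and table levels have been illustrated. A user-level export and import, which copies everything owned by a single schema, might look like the following sketch (file names are illustrative):

C:\>exp system/manager owner=scott file=scott.dmp log=scott_exp.log
C:\>imp system/manager fromuser=scott touser=scott file=scott.dmp log=scott_imp.log ignore=y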

Backup and Recovery Tools

Recovery Manager (RMAN)

RMAN is an Oracle-provided tool that allows you to perform backup and recovery operations on the database. Using RMAN you can back up and restore data files, control files, and archived redo log files. RMAN uses a recovery catalog to store metadata about backup and recovery operations; typically the recovery catalog is kept in a separate database. If you do not want to use a recovery catalog, RMAN can use the target database's control file instead, because most of the information in the recovery catalog is also available there. The disadvantage of using the control file is that RMAN cannot restore or recover the database when the control file itself is lost, so you should make frequent backups of the control file. Using the control file is especially appropriate for small databases where installing and administering another database for the sole purpose of maintaining the recovery catalog would be burdensome.

A single recovery catalog can store information for multiple target databases. Consequently, loss of the recovery catalog can be disastrous, and you should back up the recovery catalog frequently. If the recovery catalog is destroyed and no backups of it are available, you can partially reconstruct the catalog from the current control file or control file backups.

When you perform a backup using RMAN, information about the backup is stored in the catalog, and the actual backups (physical files) are stored on disk or tape (tape requires media management software). When you use RMAN with a recovery catalog, the RMAN environment comprises the following components (a brief usage sketch follows the list):

• RMAN executable
• Recovery catalog database (the database that holds the catalog)
• Recovery catalog schema in the recovery catalog database (the schema that holds the metadata)
• Optional media management software (for tape backups)
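As a brief usage sketch, a full database backup taken through a recovery catalog might look like this; the catalog service name and owner, and the channel name, are hypothetical.

rman target / catalog rman/rman@RCAT

RMAN> run {
        allocate channel c1 type disk;
        backup database;
        backup archivelog all;
      }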

Sample Files

Sample oratab File

The oratab file shown in Listing 3.17 is created by the Oracle installer when you install the Oracle database under the Unix operating system. The installer adds the instance name, Oracle home directory, and auto-startup flag (Y/N) for each database in the format SID:ORACLE_HOME:FLAG. The auto-startup flag indicates whether the Oracle database should be started automatically when the system is rebooted; a short sketch of reading this file from a script follows the listing.

Listing 3.17 oratab

# All the entries in the oratab file follow the
# following syntax. Each instance is listed on a separate line.
# SID:ORACLE_HOME:Y/N
DEV:/u02/oracle/DEV/oracle/8.1.7:N
TEST:/u05/oracle/TEST/oracle/8.1.7:N
#PREPROD:/u06/oracle/PREPROD/oracle/8.1.7:N
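The auto-startup flag is what boot-time scripts such as dbstart consult. A minimal sketch of reading the file the same way is shown below; the oratab location varies by platform (for example, /etc/oratab or /var/opt/oracle/oratab).

#!/bin/sh
# List the instances flagged for automatic startup in oratab (sketch only)
ORATAB=/etc/oratab
grep -v '^#' $ORATAB | while IFS=: read SID OHOME FLAG; do
  if [ "$FLAG" = "Y" ]; then
    echo "Instance $SID (ORACLE_HOME=$OHOME) is flagged for auto startup"
  fi
done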

Sample Trace of Control File

Listing 3.18 shows the structure of the database. It lists the data files and redo log files and their locations, which is useful if you need to recreate the control file. A trace of the control file can be generated with the command alter database backup controlfile to trace; a short sketch of generating and locating such a trace follows the listing.

Listing 3.18 trace of control file

/u02/oracle/DEV/common/admin/udump/DEV_ora_11817.trc
Oracle8i Enterprise Edition Release 8.1.7.1.0 - Production
With the Partitioning option
JServer Release 8.1.7.1.0 - Production
ORACLE_HOME = /u02/oracle/DEV/oracle/8.1.7
System name:    SunOS
Node name:      mking07
Release:        5.6
Version:        Generic_105181-25
Machine:        sun4u
Instance name: DEV
Redo thread mounted by this instance: 1
Oracle process number: 10
Unix process pid: 11817, image: oracle@mking07 (TNS V1-V3)
*** SESSION ID:(9.13) 2001-05-17 21:15:28.730
*** 2001-05-17 21:15:28.730
# The following commands will create a new control file and use it
# to open the database.
# Data used by the recovery manager will be lost. Additional logs may
# be required for media recovery of offline data files. Use this
# only if the current version of all online logs are available.
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "DEV" NORESETLOGS NOARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 4
    MAXDATAFILES 1022
    MAXINSTANCES 1
    MAXLOGHISTORY 453
LOGFILE
  GROUP 1 (
    '/u03/oracle/DEV/data/log01a.dbf',
    '/u03/oracle/DEV/data/log01b.dbf'
  ) SIZE 400M,
  GROUP 2 (
    '/u03/oracle/DEV/data/log02a.dbf',
    '/u03/oracle/DEV/data/log02b.dbf'
  ) SIZE 400M
DATAFILE
  '/u02/oracle/DEV/data/system01.dbf',
  '/u02/oracle/DEV/data/indx01.dbf',
  '/u02/oracle/DEV/data/rbs01.dbf',
  '/u02/oracle/DEV/data/temp01.dbf',
  '/u02/oracle/DEV/data/users.dbf'
CHARACTER SET WE8ISO8859P1
;
# Recovery is required if any of the datafiles are restored backups,
# or if the last shutdown was not normal or immediate.
RECOVER DATABASE
# Database can now be opened normally.
ALTER DATABASE OPEN;
# Commands to add tempfiles to temporary tablespaces.
# Online tempfiles have complete space information.
# Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE '/u03/oracle/DEV/data/temp04.dbf' REUSE;
ALTER TABLESPACE TEMP ADD TEMPFILE '/u03/oracle/DEV/data/temp03.dbf' REUSE;
# End of tempfile additions.
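To produce a trace like Listing 3.18 on your own database, the following minimal sketch shows the command and how to find the directory the trace file is written to; the file name includes the session's process ID, so it differs every time.

SQL> alter database backup controlfile to trace;
SQL> show parameter user_dump_dest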
