TRADITIONAL EXPORT/IMPORT Vs EXPDP/IMPDP
REFRESHING DATA
A data refresh means moving a portion of data from one database to another: taking a recent copy of the production database and restoring it to a target environment, i.e. applying the changes (updates) from the production database to a dev/clone database that has already been cloned. Refreshes are normally done for TEST/DEV databases during the development and test phases. You can refresh a particular table, a group of tables, a schema or a tablespace using traditional export/import, transportable tablespaces or Data Pump methods.

TABLE REFRESH
Sometimes the requirement comes to us to refresh tables from the development database server to the production database server: tables tested in the development database need to be moved to the production database. The steps are:
- Capture the complete information of the table(s) in the source database.
- Take an export of the required table(s) at the source database.
- SCP the export .dmp file to the target database server.
- Take an export of the required table(s) at the target database (recommended).
- Drop or truncate the required table(s) at the destination database (recommended).
- Import the .dmp file into the destination database.
- The target table row counts should be the same as the source table row counts.

PRODUCTION DATABASE  : CRMS  (192.168.1.130)
DEVELOPMENT DATABASE : DEVDB (192.168.1.131)
TABLES               : QRT_PRD_AP, QRT_PRD_AR, QRT_PRD_TARGET
TAKE AN EXPORT OF THE TABLES IN DEVDB
$ exp system/manager file=sham_tables.dmp log=sham_tables.log direct=y \
  tables=(sham.qrt_prd_ap,sham.qrt_prd_ar,sham.qrt_prd_target) statistics=none

Export: Release 11.2.0.1.0 - Production on Thu Jun 4 00:12:28 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8MSWIN1252 character set (possible charset conversion)

About to export specified tables via Direct Path ...
Current user changed to SHAM
. . exporting table                     QRT_PRD_AP       1868 rows exported
. . exporting table                     QRT_PRD_AR      88622 rows exported
. . exporting table                 QRT_PRD_TARGET      68261 rows exported
Export terminated successfully without warnings.

TAKE A BACKUP OF THE EXISTING TABLES IN PRODUCTION SERVER
SHAM> create table QRT_PRD_AP_BKP as select * from QRT_PRD_AP;
SHAM> create table QRT_PRD_AR_BKP as select * from QRT_PRD_AR;
SHAM> create table QRT_PRD_TARGET_BKP as select * from QRT_PRD_TARGET;
VERIFY THE BACKUPS IN PRODUCTION SERVER (CRMS)
SYS> select owner, table_name from dba_tables where table_name like '%BKP';
.. ...

SCP THE EXPORT FILE TO PRODUCTION SERVER
$ gzip sham_tables.dmp
$ scp sham_tables.dmp.gz [email protected]:$HOME
[email protected]'s password:
sham_tables.dmp.gz                                        100%

GUNZIP THE EXPORT FILE IN THE PRODUCTION SERVER
$ gunzip sham_tables.dmp.gz
$ ls
sham_tables.dmp
DROP THE TABLES FROM SHAM SCHEMA
SYS> drop table SHAM.QRT_PRD_AP;
SYS> drop table SHAM.QRT_PRD_AR;
SYS> drop table SHAM.QRT_PRD_TARGET;

IMPORT THE EXPORT FILE
$ imp system/manager file=sham_tables.dmp log=sham_tables.log fromuser=sham touser=sham

Import: Release 11.2.0.1.0 - Production on Fri Jun 5 10:41:09 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Export file created by EXPORT:V11.02.00 via direct path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses WE8MSWIN1252 character set (possible charset conversion)
. importing SHAM's objects into SHAM
. . importing table                   "QRT_PRD_AP"       1868 rows imported
. . importing table                   "QRT_PRD_AR"      88622 rows imported
. . importing table               "QRT_PRD_TARGET"      68261 rows imported
About to enable constraints...
Import terminated successfully without warnings.

SQL COMMAND TO ENABLE INDEX MONITORING
$ cat ind_monitoring.sql
select 'alter index '||index_name||' monitoring usage;' from user_indexes;

After the import we have to enable index monitoring manually; you can spool the output of the above query into a script and run it.
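A minimal sketch of the complete generate-and-run cycle, assuming it is run as SHAM (the SET commands merely keep SQL*Plus headers out of the generated script, and V$OBJECT_USAGE verifies the result):

SHAM> set heading off feedback off pagesize 0
SHAM> spool enable_ind_monitoring.sql
SHAM> @ind_monitoring.sql
SHAM> spool off
SHAM> @enable_ind_monitoring.sql
SHAM> select index_name, monitoring from v$object_usage;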
It is advisable to drop indexes before the import to speed up the import process; after the import you can recreate them. The INDEXFILE parameter writes the index creation statements to a file, which you can use to rebuild the indexes after the import has completed (e.g. indexfile.txt).
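For example, a quick hedged illustration reusing the dump file from above (file names assumed): running import with INDEXFILE writes the index DDL to the named file without importing any rows (table statements appear commented out with REM).

$ imp system/manager file=sham_tables.dmp fromuser=sham touser=sham indexfile=indexfile.txt
$ cat indexfile.txt
.. ...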
RECOMPILE INVALID OBJECTS
SYS> @?/rdbms/admin/utlrp.sql
.. ...
POINTS TO REMEMBER
Oracle's EXP/IMP utilities are used to perform logical database backup and recovery. To perform an export or import the database must be up and running. The exp and imp executables are located in the $ORACLE_HOME/bin directory:

$ which exp imp
/u02/app/oracle/product/11.2.0/dbhome_1/bin/exp
/u02/app/oracle/product/11.2.0/dbhome_1/bin/imp

To use export and import, you must have the CREATE SESSION privilege. To export the entire database a user needs the EXP_FULL_DATABASE role; to import a full database, the user needs IMP_FULL_DATABASE. You can run export and import in two modes: command-line mode and interactive mode. To get interactive mode just type exp or imp at the OS prompt and the tool will ask for all necessary input. Logical backups are most useful for schema-level refreshes; they are also useful for database, schema and table-level reorganizations. Using the EXP/IMP tools we can transfer data from one database to another: the EXP tool exports data from the source database and the IMP tool loads the data into the target database. EXP generates a .dmp file, which is partly binary and partly text; this file can be read only by the IMP utility, and manually editing .dmp files is NOT recommended. During import operations we may sometimes get undo-related error messages; to overcome them use COMMIT=Y, which loads an array of records and issues commits frequently. To suppress DDL-related error messages (such as "object already exists"), specify IGNORE=Y. Export can be done by two methods: conventional and direct. By default a traditional logical backup is conventional (it goes through the SQL command-processing layer); we can bypass that layer by specifying DIRECT=Y.
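For example, a hedged illustration of those two parameters together (dump file name assumed):

# commit=y commits after each array insert, keeping undo usage small;
# ignore=y skips "object already exists" errors and loads rows into the existing table
$ imp system/manager file=big_table.dmp fromuser=sham touser=sham commit=y ignore=y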
CONVENTIONAL PATH Vs DIRECT PATH
CONVENTIONAL PATH
Export uses the SQL SELECT statement to extract data from tables. Data is read from disk into the buffer cache, then rows are transferred to the evaluation buffer; after passing expression evaluation (an equivalent insert statement is generated and validated) the data is transported to the export client, which writes it into the export .dmp file.
DIRECT PATH
Direct path export is much faster than conventional path export because the data is read from disk into the buffer cache and rows are transferred directly to the export client; the evaluation buffer (that is, the SQL command-processing layer) is bypassed. The data is already in the format that export expects, avoiding unnecessary data conversion, and is then written into the export .dmp file. The QUERY and BUFFER parameters cannot be used together with DIRECT=Y in the export command.

CONSISTENT=Y Vs CONSISTENT=N
CONSISTENT=N is the default; in this case each table is usually exported in a single transaction. CONSISTENT=Y internally issues SET TRANSACTION READ ONLY and ensures data consistency: the export utility exports a consistent image of the tables, i.e. changes made while the export runs will NOT be exported. If Y is set, confirm there is sufficient undo segment space, otherwise the export may fail with "ORA-01555: snapshot too old". When CONSISTENT=Y is used and the volume of updates is large, rollback segment usage will be large and the export will be slower, because the rollback segments must be scanned for uncommitted transactions.
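For example, a hedged illustration (schema and file names assumed):

# consistent=y gives a single read-consistent image across all exported tables
$ exp system/manager file=sham_schema.dmp log=sham_schema.log owner=sham consistent=y statistics=none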
COMPRESS=Y Vs COMPRESS=N
COMPRESS=Y is the default. COMPRESS specifies how export and import manage the initial extent for table data. COMPRESS=Y does not mean export compresses the contents of the exported data; instead, the INITIAL storage parameter is set to the total size of all extents allocated for the object. If extent sizes are large, the allocated space will be larger than the space actually required to hold the data.
SCHEMA      : USR1
TABLE_NAME  : T1
OPTION      : COMPRESS=Y
ACTIVITY 1  : EXPORT T1 (from USR1)
ACTIVITY 2  : IMPORT T1 (into USR2)

USR1> select table_name, initial_extent, next_extent
      from user_tables where table_name='T1';

TABLE_NAME                     INITIAL_EXTENT NEXT_EXTENT
------------------------------ -------------- -----------
T1                                      65536     1048576

USR1> select extents, bytes from user_segments where segment_name='T1';

   EXTENTS      BYTES
---------- ----------
        52   38797312

During export I specified COMPRESS=Y; after the import we can check the values from the USR2 schema.

USR2> select table_name, initial_extent, next_extent
      from user_tables where table_name='T1';

TABLE_NAME                     INITIAL_EXTENT NEXT_EXTENT
------------------------------ -------------- -----------
T1                                   38797312     1048576

The total number of allocated extents is 9 after importing object T1, because the INITIAL extent now covers the full size that was previously spread over 52 extents. I would suggest you do NOT use COMPRESS=Y.
COMPRESS=N
If you set COMPRESS=N during export, export records the current storage parameters, including the INITIAL and NEXT extent sizes. At import time the table is created with the same INITIAL extent value as the original, i.e. the values specified in the original CREATE TABLE or ALTER TABLE statement.

SCHEMA      : USR1
TABLE_NAME  : T1
OPTION      : COMPRESS=N
ACTIVITY 1  : EXPORT T1 (from USR1)
ACTIVITY 2  : IMPORT T1 (into USR3)

USR1> select table_name, initial_extent, next_extent
      from user_tables where table_name='T1';

TABLE_NAME                     INITIAL_EXTENT NEXT_EXTENT
------------------------------ -------------- -----------
T1                                      65536     1048576

USR1> select extents, bytes from user_segments where segment_name='T1';

   EXTENTS      BYTES
---------- ----------
        52   38797312

During export I specified COMPRESS=N; after the import we can check the values from the USR3 schema.

USR3> select table_name, initial_extent, next_extent
      from user_tables where table_name='T1';

TABLE_NAME                     INITIAL_EXTENT NEXT_EXTENT
------------------------------ -------------- -----------
T1                                      65536     1048576

USR3> select extents, bytes from user_segments where segment_name='T1';

   EXTENTS      BYTES
---------- ----------
        47   33554432
EXPORTING QUESTIONABLE STATISTICS
STATISTICS=ESTIMATE is the default; the options are ESTIMATE, COMPUTE and NONE. Exported statistics may not be usable, so mostly NONE is specified. When exporting database objects you may encounter the warning "EXP-00091: Exporting questionable statistics". EXP-00091 is not an error, but you might want to gather more up-to-date statistics afterwards, preferably with DBMS_STATS. We can avoid the warning by setting STATISTICS=none or by setting the client character set.

..
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8MSWIN1252 character set (possible charset conversion)
...

The key to resolving the warning message is here: the client character set differs from the server's. Change the character set before running exp:
$ export NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252
PARFILE
The parameter file contains a list of export or import parameters. Instead of typing the parameters on the command line, you can store them in a parameter file:
$ exp username/password parfile=<parameter_file>
$ imp username/password parfile=<parameter_file>

$ cat sham_exp.txt
file=exp_sham_tables.dmp
log=exp_sham_tables.log
tables=sham.emp,sham.project,sham.dept,sham.payroll
direct=y
statistics=none

$ cat sham_imp.txt
file=exp_sham_tables.dmp
log=sham_imp_tables.log
fromuser=sham
touser=maya
indexes=no

$ exp system/manager parfile=sham_exp.txt
.. ...
$ imp system/manager parfile=sham_imp.txt
.. ...
FILESIZE
The export utility supports writing to multiple files, and the import utility can read from multiple files. Once you specify a value (in bytes) for the FILESIZE parameter, export writes at most that number of bytes to each dump file in the set.
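A hedged example (file names assumed): with FILESIZE=100m export fills part1.dmp up to 100 MB, then continues into part2.dmp, and so on; the import must be given the complete file list.

$ exp system/manager file=part1.dmp,part2.dmp,part3.dmp filesize=100m owner=sham
$ imp system/manager file=part1.dmp,part2.dmp,part3.dmp fromuser=sham touser=sham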
QUERY
In a table-mode export, the QUERY parameter allows you to select a subset of rows from a set of tables based on a WHERE clause. QUERY cannot be specified in a direct path export, and cannot be used in full, user or tablespace mode exports. Note the operating-system escape characters:
query=\"where gender=\'female\'\"
query=\"where dept_id=10\"
FEEDBACK
FEEDBACK determines how often a progress indicator is displayed: with FEEDBACK=n, a dot (.) is printed for every n rows processed. This is useful for tracking the progress of large export and import operations.
USERID
Specifies the username/password of the user performing the export or import.
IGNORE
Specifies how object creation errors should be handled. If the object already exists and IGNORE=Y, the rows are imported into the existing table; otherwise the error is reported and no rows are loaded into the table.
HELP
A basic help screen appears for export or import. At the OS prompt: $ exp help=y or $ imp help=y
FILE
Default: expdat.dmp, which stands for EXPORT DATA DUMP; the default extension is .dmp. FILE specifies the name of the export dump file.
LOG
Specifies a log file name (e.g. export.log) that captures export/import progress information, errors and warning messages.
SHOW
Lists the contents of the export dump file without importing anything: run the imp command with show=y.
FULL
The entire database is exported. You need the EXP_FULL_DATABASE role to export in this mode.
TABLES
Specifies the name of the table or tables to be exported, as a comma-separated list. The TABLES parameter can be qualified with the owner:
TABLES = OWNER.TABLE_NAME1, OWNER.TABLE_NAME2, ...
OWNER
The named owner's objects are exported.
ROWS
Whether the rows of table data are exported.
FROMUSER | TOUSER
FROMUSER specifies the source schema for the import; TOUSER specifies the target schema. Both parameters are used together during an import operation. The IMP_FULL_DATABASE role is required to use FROMUSER and TOUSER.
INDEXES | CONSTRAINTS | GRANTS
INDEXES: whether the export utility exports indexes. Default Y.
CONSTRAINTS: whether the export utility exports table constraints. Default Y.
GRANTS: whether the export utility exports object grants. System privilege grants are always exported; how object grants are exported depends on whether you use full database mode or user mode. In FULL database mode, all grants on a table are exported; in USER mode, only those grants made by the object's owner are exported.
BUFFER
In export, BUFFER specifies the size (in bytes) of the buffer (array) used to fetch rows; it determines the maximum number of rows in the array fetched by export. If you specify 0, export fetches only one row at a time. Use the formula below to calculate the buffer size:
BUFFER_SIZE = ROWS_IN_ARRAY * MAXIMUM_ROW_SIZE
The BUFFER parameter applies only to conventional path export and has no effect on direct path export; for direct path exports, use the RECORDLENGTH parameter to specify the size of the buffer.
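For example (values assumed): to fetch 100 rows per array call from a table whose maximum row size is 2,000 bytes, set BUFFER = 100 * 2000 = 200000.

$ exp system/manager file=emp.dmp tables=sham.emp buffer=200000 statistics=none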
RECORDLENGTH
Operating-system dependent. Specifies the length, in bytes, of the file record. You can use this parameter to specify the size of the export I/O buffer (up to a maximum value of 64 KB). It is necessary when you transfer the export file to another operating system; if you do not set it, it defaults to a platform-dependent value.
COMMIT
Default: N. Specifies whether import should commit after each array insert. By default a commit occurs only after loading each table, and when an error occurs import performs a rollback; if you import a large table the undo segments may grow very large. To improve this, use COMMIT=Y.
OBJECT_CONSISTENT
Default: N. Specifies whether export uses SET TRANSACTION READ ONLY to ensure that the data being exported is consistent.
TABLESPACES
Indicates tablespace-mode export: all the tables in the listed tablespaces are exported and imported. You must have the EXP_FULL_DATABASE role to export all tables in a tablespace.
TTS_FULL_CHECK
Default: N. Specifies whether to verify a tablespace set for dependencies. This parameter applies to transportable tablespace mode exports; with TTS_FULL_CHECK=Y, export verifies that the recovery set (the set of tablespaces to be transported) has no dependencies on objects outside the set.
TRANSPORT_TABLESPACE
Enables export of transportable tablespace metadata, and import of that metadata from an export file. For a transportable tablespace import, the target database must be the same or a higher version than the source database.

FLASHBACK_SCN
Specifies the System Change Number (SCN) that export will use to enable Flashback; the export operation is performed with the data consistent as of the specified SCN.
$ exp system/password file=exp.dmp flashback_scn=10908900
FLASHBACK_TIME
FLASHBACK_TIME enables you to specify a timestamp; export finds the SCN that most closely matches the specified timestamp.
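A hedged illustration of the syntax (timestamp value assumed; quoting and escaping rules vary by shell):

$ exp system/password file=exp.dmp flashback_time=\"TO_TIMESTAMP('19-06-2015 12:00:00','DD-MM-YYYY HH24:MI:SS')\"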
FREQUENTLY USED EXP/IMP PARAMETERS

EXPORT        IMPORT
------        ------
BUFFER        BUFFER
COMPRESS      COMMIT
CONSISTENT    FILE
DIRECT        FROMUSER
FILE          FULL
FILESIZE      IGNORE
FULL          INDEXFILE
OWNER         LOG
PARFILE       PARFILE
QUERY         QUERY
ROWS          TABLESPACES
TABLES        TOUSER
TABLESPACES   SHOW
PERFORMANCE PARAMETERS
The performance parameters in export are DIRECT and BUFFER; the performance parameters in import are BUFFER and COMMIT.

INVOKING EXPORT AND IMPORT
Command-line mode, interactive mode and parameter file. The most commonly preferred method is command-line mode.

EXPORT AND IMPORT MODES
A user who has the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles can use the following modes.
FULL: exports and imports the full database.
USER: a user can export and import their own objects (tables, triggers, packages, indexes, ...).
TABLE: a specific table and its associated partitions.
TABLESPACE: to move a set of tablespaces from one database to another.

ORDER OF IMPORT
Schema objects are imported as import reads them from the export file. The export file contains objects in the following order: type definitions, table definitions, table data, table indexes, integrity constraints, views, procedures, triggers, then bitmap, function-based and domain indexes. The order of import is: new tables are created, data is imported and indexes are built; triggers are imported; integrity constraints are enabled on the new tables; and any bitmap, function-based and domain indexes are built.
CAN WE EXPORT SYS SCHEMA?
No. The SYS schema cannot be exported; accounts whose names match %SYS% cannot be exported. The following schemas also cannot be processed by export: ORDSYS, MDSYS, CTXSYS, LBACSYS, ORDPLUGINS.

LOGICAL BACKUP AND ITS ADVANTAGES
Table refresh and schema refresh. Move data from one owner to another. Move data from one tablespace to another. Transport tablespaces between databases. Reduce database fragmentation to save disk space.

IMPROVE EXPORT/IMPORT PERFORMANCE
If you use conventional path export, use the BUFFER parameter and set a high value. Use DIRECT=Y to bypass the SQL command-processing layer. Stop unnecessary applications to free up system resources for your job. If you run multiple export sessions, ensure they write to different physical disks. Before importing, back up and drop the target objects if they already exist. It is not advisable to create indexes while data is being loaded during the import; if there are constraints on the target table, disable them during the import and re-enable them afterwards. Use INDEXES=N together with the INDEXFILE parameter during import to create an index file, so that you can create the indexes after you have imported the data. If you import a large table, use COMMIT=Y to overcome undo-related errors. Before the import, check the size of the undo tablespace and the archive destination.
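A hedged illustration combining several of these tips (dump file name assumed):

# load data without indexes, committing per array insert
$ imp system/manager file=big.dmp fromuser=sham touser=sham indexes=n commit=y buffer=2097152
# capture the index DDL separately and run it after the load completes
$ imp system/manager file=big.dmp fromuser=sham touser=sham indexfile=rebuild_indexes.sql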
HOW TO IMPORT A TABLE INTO A DIFFERENT TABLESPACE
By default there is NO parameter to specify a different tablespace on import: objects are re-created in the tablespace they were exported from. But we can alter this behaviour:
- Recreate the table(s) in the other tablespace (the tables will be empty).
- Run the import of the dump file with the INDEXFILE parameter.
- Edit the indexfile script to create the indexes in the tablespace you want, and remove the REM keywords.
- Import the table(s) using IGNORE=Y (because the tables already exist).
- Execute the indexfile to recreate the indexes.
The INDEXFILE contains the index creation statements; you can use this file to rebuild the indexes after the import has completed.

SYS> select username, default_tablespace from dba_users where username='SHAM';

USERNAME                       DEFAULT_TABLESPACE
------------------------------ ------------------------------
SHAM                           TBS1

SYS> select username, default_tablespace from dba_users where username='MAYA';

USERNAME                       DEFAULT_TABLESPACE
------------------------------ ------------------------------
MAYA                           USERS
Sham's schema objects and their associated tablespaces: I am going to import the following objects of SHAM (emp, dept, project, emp_audit, grade) into MAYA.

TABLE DETAILS
SYS> select table_name, tablespace_name from dba_tables
     where table_name in ('EMP','PROJECT','DEPT','EMP_AUDIT','GRADE') and owner='SHAM';

TABLE_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
PROJECT                        TBS1
GRADE                          USERS
EMP_AUDIT                      TBS1
EMP                            TBS1
DEPT                           TBS1

INDEX DETAILS
SYS> select index_name, table_name, tablespace_name from dba_indexes
     where table_name in ('EMP','PROJECT','DEPT','EMP_AUDIT','GRADE') and owner='SHAM';

INDEX_NAME                     TABLE_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------ ------------------------------
DEPT_DEPTID_C3_PK              DEPT                           TBS1
EMP_EMPID_C1_PK                EMP                            TBS1
EMP_DPID_IN1                   EMP                            USERS
GRD_EMPGRD_C12_PK              GRADE                          TBS1
PROJ_PRJID_C4_PK               PROJECT                        TBS1
The output above shows that some of SHAM's tables and indexes are in the TBS1 tablespace, while the GRADE table and its associated index are in the USERS tablespace. I am going to import all of the above tables into the TBS2 tablespace and their associated indexes into the TBS3 tablespace, under the MAYA user. Let's start the process.

TAKE AN EXPORT
$ exp system/manager file=sham_tables.dmp log=sham_tables.log \
  tables=sham.emp,sham.grade,sham.dept,sham.project,sham.emp_audit statistics=none

Export: Release 11.2.0.1.0 - Production on Fri Jun 19 12:19:31 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export specified tables via Conventional Path ...
Current user changed to SHAM
. . exporting table                          GRADE          6 rows exported
. . exporting table                            EMP         68 rows exported
. . exporting table                           DEPT         12 rows exported
. . exporting table                        PROJECT         22 rows exported
. . exporting table                      EMP_AUDIT         24 rows exported
Export terminated successfully without warnings.
$ imp system/manager file=sham_tables.dmp log=sham_tables.log fromuser=sham touser=maya indexfile=indx_file.sql

Import: Release 11.2.0.1.0 - Production on Fri Jun 19 12:29:35 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Export file created by EXPORT:V11.02.00 via conventional path
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. . skipping table "GRADE"
. . skipping table "EMP"
. . skipping table "DEPT"
. . skipping table "PROJECT"
. . skipping table "EMP_AUDIT"
Import terminated successfully without warnings.

Edit the indexfile script (indx_file.sql) as per the requirement, then run it:

MAYA> @indx_file.sql
Enter password: ****
Connected.
.. ...
Table structures and indexes created.

MAYA> select table_name, tablespace_name from user_tables;

TABLE_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
EMP                            TBS2
GRADE                          TBS2
DEPT                           TBS2
PROJECT                        TBS2
EMP_AUDIT                      TBS2

MAYA> select index_name, table_name, tablespace_name from user_indexes;

INDEX_NAME                     TABLE_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------ ------------------------------
PROJ_PRJID_C4_PK               PROJECT                        TBS3
GRD_EMPGRD_C12_PK              GRADE                          TBS3
EMP_DPID_IN1                   EMP                            TBS3
EMP_EMPID_C1_PK                EMP                            TBS3
DEPT_DEPTID_C3_PK              DEPT                           TBS3
TABLES AND INDEXES AND THEIR ASSOCIATED TABLESPACES
TABLES AND CONSTRAINTS
ORA-02291: integrity constraint violated - parent key not found. For an insert statement, the ORA-02291 error is common when you try to insert a child row without a matching parent row, as defined by a foreign key constraint. To avoid this error you need to insert values into the parent table first, and then insert values into the child table.

IMPORT THE DUMP FILE
$ imp system/manager file=sham_tables.dmp log=sham_tables.log fromuser=sham touser=maya ignore=y

Import: Release 11.2.0.1.0 - Production on Fri Jun 19 15:50:44 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Export file created by EXPORT:V11.02.00 via conventional path
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. importing SHAM's objects into MAYA
. . importing table                        "GRADE"          6 rows imported
. . importing table                          "EMP"         68 rows imported
. . importing table                         "DEPT"         12 rows imported
. . importing table                      "PROJECT"         22 rows imported
. . importing table                    "EMP_AUDIT"         24 rows imported
About to enable constraints...
Import terminated successfully without warnings.

FULL DATABASE EXPORT/IMPORT
In order to perform a full database export and import, the user must have the DBA role, or the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles. The entire database is exported to a binary dump file; you have to use the parameter FULL=Y.
$ exp system/ file=fulldb.dmp log=fulldb1.log full=y direct=y
$ imp system/ file=fulldb.dmp log=fulldb2.log full=y
SPECIFIC USER’S DATA EXPORT/IMPORT
When you import a dump file into another database, it is most important to first create the users and their associated tablespaces, with the necessary privileges as per the source database, such as quotas on the users' tablespaces, session privileges, etc.
$ exp system/ file=schema.dmp log=schema1.log owner=(sham,rose) direct=y
$ imp system/ file=schema.dmp log=schema2.log fromuser=(sham,rose)
If a schema has received grants from another schema, then when you export both schemas at the same time Oracle exports all of the received privileges (provided the grants were made between those two schemas), and the import re-creates those privileges. Here I export/import the SHAM and ROSE schemas, but not SCOTT and JHIL:

SHAM> grant select on emp to rose;     -- will be exported and imported
SHAM> grant update on emp to scott;    -- will raise an error during import
ROSE> grant select on dept to sham;    -- will be exported and imported
JHIL> grant select on payroll to sham; -- no error (received privilege, see the points below)

The SCOTT grant fails because the SCOTT user does not exist in the target database; grants made by an exported schema are always exported, so you cannot avoid ORA-01917 in this case.

Points to note: privileges a schema has RECEIVED from others are not exported when you export that schema, so they are not imported either. Privileges a schema has MADE to others are exported, and when you import the file those privileges are granted to the specific users; if a user does NOT exist, import throws a warning. Here user X was dropped, so the import reports:
IMP-00017: following statement failed with ORACLE error 1917:
"GRANT SELECT ON "EMP" TO "X""
IMP-00003: ORACLE error 1917 encountered
ORA-01917: user or role 'X' does not exist
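Before the import you can check in the source database which object grants will travel with the dump; a simple hedged check (schema names from this example):

SYS> select grantor, grantee, privilege, table_name
     from dba_tab_privs where owner in ('SHAM','ROSE');
-- any grantee that does not exist in the target database will raise ORA-01917 on import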
SPECIFIC USER TABLES EXPORT/IMPORT
SHAM> select * from emp where emp_level='d' and dept_id=60;

EMP_ID EMP_NAME        GENDER DEPT_ID EMP_DESG     ISACTIVE EMP_HIRE_ EMP_TERM_ EMP_LEVEL
------ --------------- ------ ------- ------------ -------- --------- --------- ---------
    18 Thomas Reeve    male        60 Proj-manager active   27-FEB-87           d
    19 Andy Campbell   male        60 Proj-manager active   23-MAR-87           d
    23 Bailey Bond     male        60 Proj-manager active   30-JUN-88           d
    26 Bret Steyn      male        60 Proj-manager active   28-SEP-88           d
    29 Carlos Liu      male        60 Proj-manager active   01-FEB-89           d
    33 Brian Scalzo    male        60 Proj-manager active   08-MAY-90           d
    17 Donovan Thomson male        60 Proj-manager active   22-JAN-87           d
    35 Ricky Martin    male        60 Proj-manager active   22-JUN-90           d

8 rows selected.

When using the QUERY clause, please note the use of escape characters (\) and double quotes ("); also note that QUERY cannot be combined with DIRECT=Y.

$ exp system/ file=emp_rows.dmp log=rows1.log tables=sham.emp query=\"where emp_level=\'d\' and dept_id=60\" statistics=none
$ imp system/ file=emp_rows.dmp log=rows2.log fromuser=sham touser=scott

TABLESPACE LEVEL EXPORT/IMPORT
$ exp system/ file=tbs.dmp log=tbs1.log tablespaces=crms direct=y
$ imp system/ file=tbs.dmp log=tbs2.log tablespaces=crms full=y

MULTIPLE DUMP FILES EXPORT/IMPORT
$ exp system/ file=file1.dmp,file2.dmp,file3.dmp owner=scott filesize=100m
$ ls -l file*.dmp
..
Each dump file is filled up to 100 MB as the export is taken.
FEEDBACK
As mentioned already, a value of FEEDBACK=n displays a dot for every n rows processed.

SHAM> create table tab1(no number, string_val varchar2(15));
Table created.
SHAM> insert into tab1 select rownum, 'ORACLE' from dual connect by level <= 400000;
400000 rows created.
SHAM> commit;
Commit complete.

$ exp system/ file=tab1.dmp log=tab1.log tables=sham.tab1 feedback=50000 direct=y
.. ...
About to export specified tables via Direct Path ...
Current user changed to SHAM
. . exporting table                           TAB1
........ 400000 rows exported

Here, each dot represents 50000 rows.

PATTERN MATCHING OBJECTS EXPORT/IMPORT
Suppose you want to export/import objects whose names match a particular pattern; to do so, use the "%" wildcard character in the TABLES option. For example, the following commands export only the tables whose names start with "e", "d" or "p", and import only the tables whose names start with "e" or "g".
$ exp sham/sham file=sham_tables.dmp log=sham_tab1.log tables=(e%,d%,p%) direct=y
$ imp sham/sham file=sham_tables.dmp log=sham_tab2.log fromuser=sham touser=jhil tables=(e%,g%)

SOME COMMON ERRORS DURING EXPORT/IMPORT
IMP-00001: unique constraint ... violated
The object already exists; when you import you might get this error because of duplicate rows. Drop the existing object and continue your import.

IMP-00015: following statement failed because the object already exists
Use the IGNORE=Y parameter to ignore this error, but be aware you might end up with duplicate rows.

ORA-30036: unable to extend segment in undo tablespace ...
When you import a large table, use COMMIT=Y or make the undo tablespace larger, for example by adding additional datafiles to it. Always check the size of the undo tablespace and the archive destination before performing an export/import. Oracle 9i used rollback segments, but managing rollback segments is complex and Oracle strongly recommends automatic undo management; from 10g and 11g onwards Oracle uses automatic undo management mode, with undo segments in the undo tablespace.
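For the ORA-30036 case, a hedged example of growing the undo tablespace (tablespace and file names assumed):

SYS> alter tablespace undotbs1 add datafile '/u02/oradata/crms/undotbs02.dbf' size 2g;
-- or let an existing undo datafile grow automatically
SYS> alter database datafile '/u02/oradata/crms/undotbs01.dbf' autoextend on maxsize 8g;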
DATA PUMP
Data Pump is available from Oracle version 10g onwards; it is the replacement for the traditional export and import utilities. Data Pump enables high-speed movement of data and metadata from one database to another. Why is Data Pump so much faster than export/import? EXP/IMP is just a single, client-based process, which means queries are executed from the client against the server. Data Pump is a server-based process rather than a client one (this is a major part of the improvement): dump files are read and written directly by the server and do NOT require data movement to the client. EXPDP/IMPDP can also run multiple processes, each doing part of the job, i.e. parallel processing is available.

EXP/IMP Vs EXPDP/IMPDP
EXP/IMP is a client/server tool that runs outside of the database: EXP pulls data over the network and writes it to disk, and EXP/IMP can access files on both the client and the server. EXP represents database metadata as DDL in the .dmp file; when you look at an export dump file you will see it is full of SQL statements. EXP uses OCI (Oracle Call Interface) to connect to the Oracle database and run ordinary SQL statements. OCI is an application programming interface, a set of library functions written in C, that serves as an interface to the Oracle database and can be used to manipulate data and schemas; an OCI application program can process the SQL statements needed to perform its tasks. Data Pump, by contrast, is a server-side technology (it works in the server): it accesses files on the server using Oracle directory objects, and it represents database metadata as XML, using transformations to regenerate the SQL. Data Pump uses the Direct Path API, bypassing the client and writing the dumps directly from the database. (Direct path itself is available to both exp/imp and Data Pump, and is independent of the other features.)

DATA PUMP COMPONENTS
Data Pump is made up of three components: the DBMS_DATAPUMP PL/SQL package (the Data Pump API), the DBMS_METADATA PL/SQL package (the Metadata API), and the command-line clients (expdp, impdp). Data Pump export uses the EXPDP utility and import uses the IMPDP utility. The DBMS_DATAPUMP package is used to move all, or part of, a database, including both data and metadata; the DBMS_METADATA package is used to retrieve metadata from the database dictionary as XML and to submit that XML to re-create the object. The expdp and impdp clients use the procedures in the DBMS_DATAPUMP package to execute export and import commands with the parameters you enter at the command line; similarly, when metadata is moved, Data Pump uses the DBMS_METADATA package. Both PL/SQL packages can also be used independently of the Data Pump clients.
The EXPDP/IMPDP executables are located in the $ORACLE_HOME/bin directory.
$ which expdp impdp
/u01/app/oracle/product/11.2.0/dbhome_1/bin/expdp
/u01/app/oracle/product/11.2.0/dbhome_1/bin/impdp

Get quick summaries of the Data Pump parameters:
$ expdp help=y   - EXPDP for unloading
$ impdp help=y   - IMPDP for loading

Data Pump export and import read and write dump files on the server. For NON-privileged users to use Data Pump, the database administrator must create a directory object for the Data Pump files that are read and written on the server, and grant privileges on that directory to those users. To create a directory object, log on to the database as the SYS user with the SYSDBA privilege. Let us create the directory and grant the necessary privileges:

SYS> create or replace directory dpdir as '/u03/datapump/';
Directory created.

Here a directory object named dpdir is mapped to a directory located at '/u03/datapump/'. Once the directory is created, you need to grant read and write privileges on it to other users:

SYS> grant read, write on directory dpdir to scott;
Grant succeeded.

READ and WRITE permission on a directory object means the Oracle database requires permission from the operating system to read/write files in that directory. As a DBA, you must ensure that only approved users have access to the directory object associated with the directory path; a directory object helps to ensure data security and integrity. Data Pump requires the directory path to be mapped by a directory object, because the process files (dump files and log files) are created inside that directory path. Note that the CREATE DIRECTORY SQL statement does NOT create the directory at the operating system level; you need to create it manually:
$ mkdir /u03/datapump/

For privileged users a default directory object, DATA_PUMP_DIR, is available; by default the SYSTEM user has read/write access to it. You can check using the following SQL command:
SYS> select * from dba_directories;
..
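To confirm who can use the directory, a small hedged check (directory name from this example; directory grants are listed in DBA_TAB_PRIVS under the directory's name):

SYS> select grantee, privilege from dba_tab_privs where table_name='DPDIR';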
HOW DOES DATA PUMP ACCESS DATA?
Data Pump moves data in and out of databases using the methods below: copying data files, direct path, external tables, conventional path, and network links. The fastest method of moving data is to copy the database data files to the target database; in that case Data Pump is used only to unload the structural information (metadata) into the dump file. This is the transportable tablespace mode of operation, and you need to specify the TRANSPORT_TABLESPACES parameter.
DATA ACCESS METHODS - DIRECT PATH Vs EXTERNAL TABLE
Generally Data Pump supports two data access methods to load and unload data, and one of them is selected automatically: 1) direct path, and 2) external tables, via a driver called ORACLE_DATAPUMP. It does not really matter to us which method is used; as a DBA you do NOT have control over it, because Data Pump always chooses the best method to load/unload the data. Data Pump uses the external tables path for complex objects; otherwise it automatically uses the direct path method, in which the SQL layer of the database is bypassed and rows are moved to and from the dump file with only minimal interpretation. For example, if a table contains a BFILE column, direct path cannot be used to load the table data, so Data Pump uses external tables.

EXTERNAL TABLES
Data Pump provides an external tables access driver (ORACLE_DATAPUMP) that reads and writes files; Oracle 10g introduced this driver, enabling you to create external tables. Data Pump uses external tables rather than direct path to load/unload a table's data if any of the following conditions exist for the table (some examples): clustered tables; a unique index on a pre-existing table; a partitioned pre-existing table; active triggers on the pre-existing table; a single partition in a table with a global index; a referential integrity constraint on a pre-existing table; encrypted columns; BFILE columns; domain indexes on LOB columns; supplemental logging enabled with at least one LOB column; fine-grained access control in insert mode (load) or select mode (unload).

A table being loaded with active referential constraints or global indexes cannot be loaded using the direct path access method; a table with a LONG column cannot be loaded with the external tables access method. Data Pump always chooses the best method it can. When is the conventional path used by Data Pump? When Data Pump is NOT able to load a table using either direct path or external tables. For example, a table containing both an encrypted column and a LONG column can only be imported via the conventional path, since direct path cannot load encrypted columns and external tables cannot load LONG columns, leaving Data Pump no other choice. The conventional path is much slower than the direct path and external tables paths.
DATA PUMP PROCESS STRUCTURE
A number of processes are involved in a Data Pump job. A Data Pump job is started from a user process (SQL*Plus, the command-line clients or Enterprise Manager), but all the work is done by server processes, which is why Data Pump improves performance over the traditional EXP/IMP utilities. The processes are: client process, shadow process, worker process, master control process, and parallel query (PQ) processes.

CLIENT PROCESS
This process is initiated by the client utility (EXPDP or IMPDP) making calls to the Data Pump API. Since Data Pump is completely integrated into the database, once the Data Pump job is established the client process is NOT required to keep the job running; multiple clients can attach to and detach from a job for the purposes of monitoring and control.

SHADOW PROCESS
When a client logs into the Oracle database, a foreground (shadow) process is created that services the client's Data Pump API requests. Upon receipt of a DBMS_DATAPUMP.OPEN request, the shadow process creates the job, which primarily consists of creating the master table, the master control process, and the Advanced Queuing (AQ) queues used for communication among the various processes. Once the job is running, the shadow process's main task is to report the job status to the client process. If the client detaches, the shadow process goes away; however, the remaining Data Pump job processes are still active, and another client process can create a new shadow process and attach to the existing job. The diagram shows how the shadow process creates all the required components.
Two queues are created for each Data Pump job: a control queue and a status queue.
MASTER CONTROL PROCESS (MCP)
A master control process (MCP) is created for every Data Pump export and import job; it is the primary process for managing the job because it controls the job's execution. The master process performs the following tasks: creates jobs and controls them; creates and manages the worker processes; monitors the job and logs its progress; maintains the job state, dump file and restart information; divides the loading and unloading of data and metadata into tasks; manages the job information in the master table; and communicates with the clients while recording ongoing export/import activity in the log file. Once a Data Pump job is launched, the master process creates the master table in the user's schema, and at least two processes are started: a Data Pump master process (DMnn) and one or more worker processes (DWnn). There is one MCP per job. These processes can be seen when you start an export/import job:
$ ps -ef | grep dm0    # master background process
$ ps -ef | grep dw0    # worker process

FROM THE SQL PROMPT
SYS> select to_char(sysdate,'YYYY-MM-DD HH24:MI:SS') "DATE", s.program, s.sid,
            s.status, s.username, d.job_name, p.spid, s.serial#, p.pid
     from v$session s, v$process p, dba_datapump_sessions d
     where p.addr = s.paddr and s.saddr = d.saddr;
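You can also check the job state directly; a small hedged companion query:

SYS> select owner_name, job_name, operation, job_mode, state, degree
     from dba_datapump_jobs;
-- lists every Data Pump job that is running (or stopped but not yet cleaned up) and its state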
MASTER TABLE
As we know, when a job is submitted the MCP creates the master table, which is the heart of every Data Pump export and import job. While data and metadata are being transferred, the master table is used to track the progress of the job. During the operation the master table is maintained in the schema of the user who initiated the Data Pump job; it contains the details of the current Data Pump operation being performed and maintains one row per object with status information. What kind of information is maintained in the master table? The running job, the location of objects, the parameters of the job, the status of the job, the worker processes, etc. For an export operation, the master table records the location of the database objects within the dump file set; at the end of the export the content of the master table is written into the dump file set, and finally the master table is removed from the user's schema. For an import operation, the master table is loaded from the dump file set and controls the sequence of operations and the objects being imported. The master table is either retained or dropped depending on the circumstances: it is dropped when the job completes successfully, and also if the job is killed using the KILL_JOB interactive command; it is retained if the job is stopped using the STOP_JOB interactive command, or if the job terminates unexpectedly. In the event of a failure (the job terminating unexpectedly), Data Pump uses the information in the master table to restart the job; for a suspended job too, Data Pump uses the master table to resume it.
The master table has the same name as the job. The default job name has the form <USERNAME>_<OPERATION>_<MODE>_NN, for example SYS_EXPORT_FULL_01. The master table is dropped (by default) when the Data Pump job finishes successfully.
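Because a stopped job keeps its name (and its master table), you can re-attach to it from a new session; a brief hedged illustration using the job name from the example above:

$ expdp system/ attach=SYS_EXPORT_FULL_01
# at the interactive prompt: CONTINUE_CLIENT resumes the job, STOP_JOB/KILL_JOB stop or remove it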
INTERPROCESS COMMUNICATION (IPC)
Advanced Queuing (AQ) is used for communication among the various Data Pump processes. Two queues are created for each Data Pump job: a command and control queue, and a status queue.

COMMAND AND CONTROL QUEUE
This queue provides the path of communication between the master control process and the worker processes. All work requests created by the master process, and the associated responses, pass through the command and control queue: the DMnn divides up the work to be done and places the individual tasks that make up the job on this queue, and the worker processes pick up these tasks and execute them. It is used by the MCP to command and control the worker processes. The queue is owned by SYS, with a name like KUPC$C_...

STATUS QUEUE
The status queue is for monitoring purposes: the MCP is the only writer to this queue. The Data Pump MCP writes work progress and error messages to the status queue, i.e. the DMnn places messages on it describing the state of the job. It is populated by the master control process (DMnn) and consumed by the clients' shadow processes to retrieve status information about the running job and any errors encountered. The queue is owned by SYS, with a name like KUPC$S_...
WORKER PROCESS
The worker processes handle the requests assigned by the MCP, which creates them based on the value of the PARALLEL parameter. The worker processes perform the loading and unloading of data and metadata, and they maintain, in the master table, the object rows with the type of object (tables, indexes, views, and so on) and its current status (pending, completed or failed), information that can be used to restart a failed job.
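A hedged example of driving multiple workers (file names assumed; the %U substitution variable generates numbered dump files so that parallel workers do not contend for a single file):

$ expdp system/ full=y dumpfile=full_%U.dmp logfile=full.log parallel=4 directory=dpdir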
PARALLEL QUERY PROCESS
The worker processes can initiate parallel query (PQ) processes when Data Pump uses the external tables API as the data access method for loading and unloading data; in that case the worker process, using the external tables API, creates multiple parallel query processes for the data movement.

DATA PUMP FILES
Data Pump can generate three types of files: SQL files, dump files and log files. SQL files are full of DDL statements describing the objects; LOG files record the messages of Data Pump operations; DUMP files are created by EXPDP, can be read only by IMPDP, and are used as input by the IMPDP utility. Dump files contain the exported data (data and metadata in binary format); a collection of dump files makes up a dump file set, which contains table data, database object metadata and control information in binary format.

INVOKING EXPORT AND IMPORT
The expdp/impdp commands can be executed in three ways:
Command-line interface        - EXPDP/IMPDP parameters given on the command line
Parameter-file interface      - EXPDP/IMPDP parameters kept in a file at the OS level
Interactive-command interface - EXPDP/IMPDP prompts for the parameters required to perform the task
DATA PUMP EXPORT MODES
You can use the EXPDP utility to export, or unload, data and metadata from the Oracle database. There are distinct modes of unloading with expdp: Full mode (the entire database is unloaded), Schema mode (specific schemas are unloaded), Table mode (a table or set of tables and their associated objects are unloaded), and Tablespace mode (all objects in the tablespace are unloaded).

DIRECTORY OBJECT AND ACCESS PERMISSIONS
As we know, in order to use Data Pump the database administrator must create a directory object and grant privileges on it to the user. The directory object is only a pointer to a physical directory: creating it does NOT actually create the physical directory on the file system.
SYS> CREATE OR REPLACE DIRECTORY dpdir AS '/u01/datapump/';
SYS> GRANT WRITE ON DIRECTORY dpdir TO scott;   -- export (writes the dump file)
SYS> GRANT READ ON DIRECTORY dpdir TO scott;    -- import (reads the dump file)
SPECIFIC USER TABLES EXPORT/IMPORT
$ expdp scott/ dumpfile=table.dmp logfile=tab1.log tables=emp,dept directory=dpdir
$ impdp scott/ dumpfile=table.dmp logfile=tab2.log directory=dpdir
IMPORT INTO ANOTHER SCHEMA
The import parameter REMAP_SCHEMA allows us to import objects from one schema into another. Suppose we export tables from the SCOTT schema and want to import these objects into the HR schema; this is achieved with the REMAP_SCHEMA parameter.
$ expdp scott/ dumpfile=table.dmp logfile=tab1.log tables=emp,dept directory=dpdir
$ impdp scott/ dumpfile=table.dmp logfile=tab2.log remap_schema=scott:hr directory=dpdir
In Data Pump the FROMUSER/TOUSER syntax is replaced by the REMAP_SCHEMA option. You may get an error if you do not have the required privilege:
ERROR:
ORA-39122: Unprivileged users may not perform REMAP_SCHEMA remappings.
Action: Retry the job from a schema that owns the IMPORT_FULL_DATABASE privilege.

TO PERFORM EXPORT FROM ANOTHER USER'S ACCOUNT
SYS> grant exp_full_database to scott;
or
SYS> grant datapump_exp_full_database to scott;

TO PERFORM IMPORT FOR ANOTHER USER'S ACCOUNT
SYS> grant imp_full_database to scott;
or
SYS> grant datapump_imp_full_database to scott;

You can perform an import operation even if you did NOT create the export dump file, but then you must have the IMP_FULL_DATABASE or DATAPUMP_IMP_FULL_DATABASE role. These roles are granted to database users by database administrators (DBAs).
SPECIFIC TABLES FROM DIFFERENT SCHEMAS
$ expdp system/ dumpfile=tables.dmp logfile=tab1.log directory=dpdir tables=sham.emp,scott.dept
$ impdp system/ dumpfile=tables.dmp logfile=tab2.log directory=dpdir remap_schema=sham:hr,scott:hr
METADATA – TABLE STRUCTURE EXPORT/IMPORT
$ expdp system/ dumpfile=meta.dmp logfile=meta1.log tables=scott.payroll directory=dpdir content=metadata_only
$ impdp system/ dumpfile=meta.dmp logfile=meta2.log directory=dpdir remap_schema=scott:hr
METADATA EXPORT/IMPORT INTO SAME USER
Before importing the table structure back into SCOTT, drop the payroll table from SCOTT's account.
$ expdp system/ dumpfile=meta.dmp logfile=meta1.log tables=scott.payroll directory=dpdir content=metadata_only
$ impdp system/ dumpfile=meta.dmp logfile=meta2.log directory=dpdir
SPECIFIC RECORDS FOR EXPORT/IMPORT
I want to export/import the following records, which belong to emp_level='d' and dept_id=60.

SHAM> select * from emp where dept_id=60 and emp_level='d';

EMP_ID EMP_NAME        GENDER DEPT_ID EMP_DESG     ISACTIVE EMP_HIRE_ EMP_TERM_ EMP_LEVEL
------ --------------- ------ ------- ------------ -------- --------- --------- ---------
    18 Thomas Reeve    male        60 Proj-manager active   27-FEB-87           d
    19 Andy Campbell   male        60 Proj-manager active   23-MAR-87           d
    23 Bailey Bond     male        60 Proj-manager active   30-JUN-88           d
    26 Bret Steyn      male        60 Proj-manager active   28-SEP-88           d
    29 Carlos Liu      male        60 Proj-manager active   01-FEB-89           d
    33 Brian Scalzo    male        60 Proj-manager active   08-MAY-90           d
    17 Donovan Thomson male        60 Proj-manager active   22-JAN-87           d
    35 Ricky Martin    male        60 Proj-manager active   22-JUN-90           d

8 rows selected.

$ expdp system/ dumpfile=rows.dmp logfile=rows1.log tables=sham.emp query=\"where emp_level=\'d\' and dept_id=60\" directory=dpdir
$ impdp system/ dumpfile=rows.dmp logfile=rows2.log directory=dpdir remap_schema=sham:sony

When using the QUERY clause, please note the use of escape characters (\) and double quotes (").

SCHEMA LEVEL EXPORT/IMPORT
If we want to export specific schemas in the database, we can use the SCHEMAS parameter; the EXPDP command will create a dump file containing the data and metadata for those schemas.
$ expdp system/ dumpfile=sham.dmp logfile=sham1.log directory=dpdir schemas=sham
$ impdp system/ dumpfile=sham.dmp logfile=sham2.log directory=dpdir remap_schema=sham:rose

To import the SCOTT and SONY schemas into a database, we can use the following IMPDP command; drop the two schemas before starting the import process.
$ expdp system/ dumpfile=schema.dmp logfile=schema1.log schemas=scott,sony directory=dpdir
$ impdp system/ dumpfile=schema.dmp logfile=schema2.log schemas=scott,sony directory=dpdir
GENERATE SQL FOR IMPORT OBJECTS
Instead of importing the data and objects, it is also possible to generate a file containing the DDL (SQL) statements for the objects, saved at the OS level. This is achieved using the SQLFILE parameter.
$ expdp system/ dumpfile=schemas.dmp logfile=schema1.log schemas=scott,sony directory=dpdir
$ impdp system/ dumpfile=schemas.dmp logfile=schema2.log sqlfile=sqlinfo.sql directory=dpdir
At the OS level you can verify the SQL file; you can give the file whatever name you want.

TABLESPACE LEVEL EXPORT/IMPORT
Before you import the dump file, you need to drop and recreate the tablespace in the database.
SQL> drop tablespace tbs1 including contents and datafiles;
SQL> create tablespace tbs1 datafile...
$ expdp system/ dumpfile=tbs_data.dmp logfile=tbs_data1.log tablespaces=TBS1 directory=dpdir
$ impdp system/ dumpfile=tbs_data.dmp logfile=tbs_data2.log tablespaces=TBS1 directory=dpdir

FULL DATABASE EXPORT/IMPORT
$ expdp system/ dumpfile=fulldb.dmp logfile=fulldb1.log directory=dpdir full=y
$ impdp system/ dumpfile=fulldb.dmp logfile=fulldb2.log directory=dpdir full=y

MULTIPLE DUMP FILES EXPORT/IMPORT
$ expdp system/ dumpfile=schema1.dmp,schema2.dmp,schema3.dmp logfile=schema1.log schemas=sony filesize=1g directory=dpdir
$ impdp system/ dumpfile=schema1.dmp,schema2.dmp,schema3.dmp logfile=schema2.log remap_schema=sony:scott directory=dpdir
In this case we used three dump files for the export operation. You need to provide all of the files to the impdp utility, otherwise Data Pump will throw the following error:
ORA-39059: dump file set is incomplete.
PARAMETER FILE FOR EXPORT/IMPORT
We can use a parameter file (PARFILE) containing the EXPDP/IMPDP parameters; when we run an expdp/impdp job, instead of writing the parameters on the command line we can call the file at the OS level. Suppose I have parfiles named exp_schema_par.txt and imp_schema_par.txt with some Data Pump parameters.

$ cat exp_schema_par.txt
# Parfile for schema refresh - Export #
dumpfile=schema_par.%u.dmp
logfile=schema_par.log
schemas=sony
directory=dpdir
parallel=3

$ expdp system/ parfile=exp_schema_par.txt

$ cat imp_schema_par.txt
# Parfile for schema refresh - Import #
dumpfile=schema_par.%u.dmp
logfile=imp_schema.log
remap_schema=sony:maya
directory=dpdir

$ impdp system/ parfile=imp_schema_par.txt

REMAP FUNCTION
This is a most important Data Pump feature: the REMAP functions allow the user to easily redefine how an object will be stored in the database.
REMAP_DATA       - 11g feature
REMAP_TABLE      - 11g feature
REMAP_SCHEMA
REMAP_DATAFILE
REMAP_TABLESPACE
TRANSFORM

REMAP_DATA
REMAP_DATA provides a method to mask data, for example when moving a production database to a test database. REMAP_DATA uses a user-defined package function to alter the data: it allows you to specify a remap function that transforms the original data of the designated column and returns some other value (into the dump file) during export or import. To perform a data remap operation, you need to create a stored package function and supply the PACKAGE.FUNCTION_NAME to the REMAP_DATA parameter.
SYNTAX : REMAP_DATA=SCHEMA.TABLE_NAME.COLUMN_NAME:SCHEMA.PKG.FUNCTION
EXAMPLE: REMAP_DATA=SCOTT.EMP.SAL:SCOTT.PACKAGE_NAME.FUNCTION_NAME
You can use this feature to protect (mask/convert) sensitive data such as credit/debit card numbers, customer financial account balances, customer salaries, etc.
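A minimal hedged sketch of such a masking function (the package, function and file names here are made up for illustration; the function must accept and return the column's datatype):

SCOTT> create or replace package mask_pkg as
         function mask_sal(p_sal number) return number;
       end mask_pkg;
       /
SCOTT> create or replace package body mask_pkg as
         function mask_sal(p_sal number) return number is
         begin
           -- write a random value to the dump file instead of the real salary
           return round(dbms_random.value(1000, 9999));
         end mask_sal;
       end mask_pkg;
       /
$ expdp system/ dumpfile=emp_masked.dmp tables=scott.emp directory=dpdir remap_data=scott.emp.sal:scott.mask_pkg.mask_sal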
REMAP_TABLE
REMAP_TABLE allows us to rename tables during an import operation - this is an 11g feature.
SYNTAX  : REMAP_TABLE=SCHEMA.TABLE_NAME:NEWTABLE_NAME
EXAMPLE : REMAP_TABLE=SCOTT.SALGRADE:SALGRADE_TEST
$ expdp system/ dumpfile=salgrade.dmp tables=scott.salgrade directory=dpdir
$ impdp system/ dumpfile=salgrade.dmp remap_table=scott.salgrade:salgrade_new directory=dpdir
REMAP_SCHEMA
Loads objects from one schema into another schema, i.e. changing object ownership. If the target schema does NOT exist, the import utility will create it. The FROMUSER/TOUSER syntax is replaced by REMAP_SCHEMA.
SYNTAX  : REMAP_SCHEMA=SOURCE_SCHEMA:TARGET_SCHEMA
EXAMPLE : REMAP_SCHEMA=SCOTT:HR
$ expdp system/ dumpfile=scott_schema.dmp schemas=scott directory=dpdir
$ impdp system/ dumpfile=scott_schema.dmp remap_schema=scott:hr directory=dpdir
REMAP_DATAFILE
REMAP_DATAFILE changes the name or path of a datafile. When you move a database from one server to another and the source and target servers have different mount point names, you can rename the datafiles on import using REMAP_DATAFILE.
Source datafile filesystem is '/u01/ora11g/crms/oradata/tbs01.dbf'
Target datafile filesystem is '/u02/ora11g/crms/oradata/tbs01.dbf'
When you move a database between platforms that have different file naming conventions, the REMAP_DATAFILE parameter comes in to change the file system names. This parameter changes the source datafile name to the target datafile name in all SQL statements where the source datafile is referenced. This is useful when performing a database migration to another system with a different file naming convention.
SYNTAX  : REMAP_DATAFILE=SOURCE_DATAFILE:TARGET_DATAFILE
EXAMPLE : REMAP_DATAFILE='/u01/ora11g/crms/tbs01.dbf':'C:\ora11g\crms\tbs01.dbf'
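A sketch of the parameter in an actual command, reusing the mount points shown above (the dumpfile name fulldb.dmp is an assumption):
$ impdp system/ dumpfile=fulldb.dmp directory=dpdir full=y remap_datafile='/u01/ora11g/crms/oradata/tbs01.dbf':'/u02/ora11g/crms/oradata/tbs01.dbf'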
REMAP_TABLESPACE
Tablespace objects are remapped to another tablespace, changing the tablespace definition as well. All objects are placed into the specified target, i.e. objects are moved from one tablespace into another tablespace during an import operation. You can easily import a table into a different tablespace from the one it was originally exported from.
SYNTAX  : REMAP_TABLESPACE=SOURCE_TABLESPACE:TARGET_TABLESPACE
EXAMPLE : REMAP_TABLESPACE=USERS:EXAMPLE
$ expdp system/ dumpfile=scott.dmp schemas=scott directory=dpdir
$ impdp system/ dumpfile=scott.dmp remap_tablespace=USERS:EXAMPLE directory=dpdir
Before importing the schema (SCOTT) into the database, you need to drop it. If your remap tablespace does NOT exist in the database, you need to create it and assign the necessary privileges and quotas on the tablespace to the SCOTT user. Let's start the process.
SYS> select username, default_tablespace from dba_users where username='SCOTT';

USERNAME                       DEFAULT_TABLESPACE
------------------------------ ------------------------------
SCOTT                          USERS

SCOTT> select tablespace_name, table_name from user_tables;

TABLESPACE_NAME                TABLE_NAME
------------------------------ ------------------------------
USERS                          BONUS
USERS                          SALGRADE
USERS                          JOBS
USERS                          JOBS_HISTORY
USERS                          LOCATIONS
USERS                          EMP_CONT_INFO
USERS                          REGIONS
USERS                          DEPT
USERS                          EMP
.. ...
EXPORT SCOTT SCHEMA
$ expdp system/ dumpfile=scott_remap.dmp logfile=scott_remap.log schemas=scott directory=dpdir
.. ... [Trimmed]
DROP & RECREATE SCOTT USER
SYS> drop user scott cascade;
User dropped.
SYS> create user scott identified by tiger default tablespace users;
User created.
SYS> alter user scott temporary tablespace temp;
User altered.
SYS> alter user scott quota unlimited on users;
User altered.
CREATE NEW TABLESPACE & ASSIGN QUOTA
SYS> create tablespace remap_tbs datafile '/u01/app/oracle/oradata/crms/remap_tbs01.dbf' size 1000m;
Tablespace created.
SYS> alter user scott quota unlimited on remap_tbs;
User altered.
SYS> grant connect, create session to scott;
Grant succeeded.
IMPORT THE DUMPFILE WITH REMAP_TABLESPACE OPTION
$ impdp system/ dumpfile=scott_remap.dmp logfile=scott_remap.log directory=dpdir remap_tablespace=users:remap_tbs
.. ... [Trimmed]
SCOTT> select tablespace_name, table_name from user_tables;

TABLESPACE_NAME                TABLE_NAME
------------------------------ ------------------------------
REMAP_TBS                      BONUS
REMAP_TBS                      SALGRADE
REMAP_TBS                      JOBS
REMAP_TBS                      JOBS_HISTORY
REMAP_TBS                      LOCATIONS
REMAP_TBS                      EMP_CONT_INFO
REMAP_TBS                      REGIONS
REMAP_TBS                      DEPT
REMAP_TBS                      EMP
.. ...
SYS> select username, default_tablespace from dba_users where username='SCOTT';

USERNAME                       DEFAULT_TABLESPACE
------------------------------ ------------------------------
SCOTT                          USERS
DATAPUMP IMPORT : TRANSFORMATIONS
Enables you to alter the object-creation DDL for a specific object, or for all objects being loaded.
DDL TRANSFORMATIONS
Because object metadata is stored as XML in the dumpfile set, it is easy to apply transformations when the DDL is being formed during import. Data Pump import has many options to transform the metadata during the import operation: REMAP_DATAFILE, REMAP_SCHEMA, REMAP_TABLESPACE and TRANSFORM.
SYNTAX AND DESCRIPTION
TRANSFORM=TRANSFORM_NAME:BOOLEAN_VALUE[:OBJECT_TYPE]
BOOLEAN VALUE IS: Y or N. Transform name specifies the name of the transform.
In case you do NOT want the object storage attributes but just the table content (data), you can use the TRANSFORM parameter, which instructs IMPDP to modify the storage attributes of the DDL that creates the object during the import job. The applicable transform names are given below. The two basic attributes are SEGMENT_ATTRIBUTES & STORAGE.
SEGMENT_ATTRIBUTES : If the value is specified as Y, then segment attributes (physical attributes, storage attributes, tablespaces, and logging) are included. Default is Y.
STORAGE : If the value is specified as Y, the storage clauses are included. Default is Y. Ex: INITIAL, NEXT, MINEXTENTS, MAXEXTENTS, PCTINCREASE, FREELIST, etc...
OID (OBJECT ID) : Determines whether the object ID of abstract data types is reused or created as new. If the value is specified as N, the generation of the export OID clause for object types is suppressed. This is useful when you need to duplicate schemas across databases by using export and import, but you cannot guarantee that the object types will have identical OID values in those databases. Default is Y. (See the sketch after this list.)
PCTSPACE : The value supplied for this transformation must be greater than zero. It represents the percentage multiplier that is used to alter extent allocations and the size of data files.
OBJECT_TYPE : This is optional. If you supply it, the transformation applies only to the specified object_type. If no object_type is specified, then the transform applies to all object types. Ex: CLUSTER, CONSTRAINT, INDEX, SEGMENT, TABLE, TYPE, etc...
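A sketch of the OID transform in use, following the syntax above (the dumpfile name and schemas are assumptions):
$ impdp system/ dumpfile=dpdir:scott_schema.dmp remap_schema=scott:scott_copy transform=oid:n
Here TRANSFORM=OID:N suppresses the reuse of the exported OIDs, so the duplicated schema's object types receive newly generated OIDs.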
The test database may be very small compared to the production volume. When you perform the import, it may fail because the initial extents are defined too large to fit into the test database.
Instead of creating the tables manually, you can remove the storage clauses of the tables using the TRANSFORM parameter at import time. If the source and target database sizes are different, you can specify that the DDL of the storage attributes should not be generated during import. STORAGE removes the storage clause from the CREATE statement of the DDL. SEGMENT_ATTRIBUTES removes physical attributes, storage attributes, logging and tablespaces.
SEGMENT_CREATION :
If it is set to Y (the default), then this transform causes the SQL SEGMENT CREATION clause to be added to the CREATE TABLE statement, i.e. the CREATE TABLE statement will explicitly say either SEGMENT CREATION DEFERRED or SEGMENT CREATION IMMEDIATE. If the value is N, then the SEGMENT CREATION clause is omitted from the CREATE TABLE statement. Set this parameter to N to use the default segment creation attributes for the table(s) being loaded. This functionality is available starting with Oracle Database 11g R2 (11.2.0.2).
EXPORT OF SHAM.EMP
$ expdp system/ dumpfile=sham_emp.dmp tables=sham.emp directory=dpdir
.. ...
$ impdp system/ dumpfile=sham_emp.dmp sqlfile=sham_emp.sql directory=dpdir
.. ...
SQLFILE FOR SHAM_EMP.DMP
$ vi sham_emp.sql
.. ...
-- new object type path: TABLE_EXPORT/TABLE/TABLE
CREATE TABLE "SHAM"."EMP"
  ( "EMP_ID" NUMBER,
    "EMP_NAME" VARCHAR2(30 BYTE),
    "GENDER" VARCHAR2(6 BYTE),
    "DEPT_ID" NUMBER CONSTRAINT "EMP_DEPTID_C2_NTNL" NOT NULL ENABLE,
    "EMP_DESG" VARCHAR2(16 BYTE),
    "ISACTIVE" VARCHAR2(6 BYTE),
    "EMP_HIRE_DATE" DATE,
    "EMP_TERM_DATE" DATE,
    "EMP_LEVEL" VARCHAR2(8 BYTE)
  ) SEGMENT CREATION IMMEDIATE
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
  FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "TBS1" ;
-- new object type path: TABLE_EXPORT/TABLE/INDEX/INDEX
-- CONNECT SHAM
CREATE UNIQUE INDEX "SHAM"."EMP_EMPID_C1_PK" ON "SHAM"."EMP" ("EMP_ID")
  PCTFREE 10 INITRANS 2 MAXTRANS 255
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
  FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "TBS1" PARALLEL 1 ;
ALTER INDEX "SHAM"."EMP_EMPID_C1_PK" NOPARALLEL;
CREATE INDEX "SHAM"."EMP_DPID_IN1" ON "SHAM"."EMP" ("DEPT_ID")
  PCTFREE 10 INITRANS 2 MAXTRANS 255
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
  FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "USERS" PARALLEL 1 ;
ALTER INDEX "SHAM"."EMP_DPID_IN1" NOPARALLEL;
-- new object type path: TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
-- CONNECT SYSTEM
ALTER TABLE "SHAM"."EMP" ADD CONSTRAINT "EMP_EMPID_C1_PK" PRIMARY KEY ("EMP_ID")
  USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
  FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "TBS1"
  ENABLE;
-- new object type path: TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
.. ...
If you do NOT want similar metadata, you can change the above definitions using the TRANSFORM option. Let's see some examples using the TRANSFORM parameter.
EXAMPLE I - SEGMENT_ATTRIBUTES = NO
$ impdp system/ dumpfile=sham_emp.dmp sqlfile=segmnt_attrbt.sql directory=dpdir transform=segment_attributes:n
.. ...
The SEGMENT_ATTRIBUTES:N option ignores all segment attribute clauses for the objects. Notice that those segment attribute lines no longer appear in the sqlfile; everything now uses the defaults.
SQL FILE - SEGMNT_ATTRBT.SQL
$ vi segmnt_attrbt.sql
.. ...
-- new object type path: TABLE_EXPORT/TABLE/TABLE
CREATE TABLE "SHAM"."EMP"
  ( "EMP_ID" NUMBER,
    "EMP_NAME" VARCHAR2(30 BYTE),
    "GENDER" VARCHAR2(6 BYTE),
    "DEPT_ID" NUMBER CONSTRAINT "EMP_DEPTID_C2_NTNL" NOT NULL ENABLE,
    "EMP_DESG" VARCHAR2(16 BYTE),
    "ISACTIVE" VARCHAR2(6 BYTE),
    "EMP_HIRE_DATE" DATE,
    "EMP_TERM_DATE" DATE,
    "EMP_LEVEL" VARCHAR2(8 BYTE)
  ) ;
-- new object type path: TABLE_EXPORT/TABLE/INDEX/INDEX
-- CONNECT SHAM
CREATE UNIQUE INDEX "SHAM"."EMP_EMPID_C1_PK" ON "SHAM"."EMP" ("EMP_ID");
ALTER INDEX "SHAM"."EMP_EMPID_C1_PK" NOPARALLEL;
CREATE INDEX "SHAM"."EMP_DPID_IN1" ON "SHAM"."EMP" ("DEPT_ID");
ALTER INDEX "SHAM"."EMP_DPID_IN1" NOPARALLEL;
-- new object type path: TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
-- CONNECT SYSTEM
ALTER TABLE "SHAM"."EMP" ADD CONSTRAINT "EMP_EMPID_C1_PK" PRIMARY KEY ("EMP_ID") ENABLE;
-- new object type path: TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
.. ...
EXAMPLE II - SEGMENT ATTRIBUTES FOR TABLE = NO
$ impdp system/ dumpfile=sham_emp.dmp directory=dpdir sqlfile=segmnt_attrbt_tab.sql transform=segment_attributes:n:table
.. ...
SQL FILE - SEGMNT_ATTRBT_TAB.SQL
$ vi segmnt_attrbt_tab.sql
.. ...
-- new object type path: TABLE_EXPORT/TABLE/TABLE
CREATE TABLE "SHAM"."EMP"
  ( "EMP_ID" NUMBER,
    "EMP_NAME" VARCHAR2(30 BYTE),
    "GENDER" VARCHAR2(6 BYTE),
    "DEPT_ID" NUMBER CONSTRAINT "EMP_DEPTID_C2_NTNL" NOT NULL ENABLE,
    "EMP_DESG" VARCHAR2(16 BYTE),
    "ISACTIVE" VARCHAR2(6 BYTE),
    "EMP_HIRE_DATE" DATE,
    "EMP_TERM_DATE" DATE,
    "EMP_LEVEL" VARCHAR2(8 BYTE)
  ) ;
-- new object type path: TABLE_EXPORT/TABLE/INDEX/INDEX
-- CONNECT SHAM
CREATE UNIQUE INDEX "SHAM"."EMP_EMPID_C1_PK" ON "SHAM"."EMP" ("EMP_ID")
  PCTFREE 10 INITRANS 2 MAXTRANS 255
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
  FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "TBS1" PARALLEL 1 ;
ALTER INDEX "SHAM"."EMP_EMPID_C1_PK" NOPARALLEL;
CREATE INDEX "SHAM"."EMP_DPID_IN1" ON "SHAM"."EMP" ("DEPT_ID")
  PCTFREE 10 INITRANS 2 MAXTRANS 255
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
  FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "USERS" PARALLEL 1 ;
ALTER INDEX "SHAM"."EMP_DPID_IN1" NOPARALLEL;
-- new object type path: TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
-- CONNECT SYSTEM
ALTER TABLE "SHAM"."EMP" ADD CONSTRAINT "EMP_EMPID_C1_PK" PRIMARY KEY ("EMP_ID")
  USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
  FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "TBS1"
  ENABLE;
-- new object type path: TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
.. ...
Now the attributes are ignored for the table only.
EXAMPLE III - SEGMENT ATTRIBUTES FOR INDEX = NO
$ impdp system/ dumpfile=sham_emp.dmp sqlfile=segmnt_attrbt_ind.sql directory=dpdir transform=segment_attributes:n:index
.. ...
SQLFILE - SEGMNT_ATTRBT_IND.SQL
Now the attributes are ignored for the indexes only.
EXAMPLE IV - STORAGE = NO
By default objects will get STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
$ impdp system/ dumpfile=sham_emp.dmp directory=dpdir sqlfile=storage.sql transform=storage:n
SQLFILE - STORAGE.SQL
$ vi storage.sql
.. ...
-- new object type path: TABLE_EXPORT/TABLE/TABLE
CREATE TABLE "SHAM"."EMP"
  ( "EMP_ID" NUMBER,
    "EMP_NAME" VARCHAR2(30 BYTE),
    "GENDER" VARCHAR2(6 BYTE),
    "DEPT_ID" NUMBER CONSTRAINT "EMP_DEPTID_C2_NTNL" NOT NULL ENABLE,
    "EMP_DESG" VARCHAR2(16 BYTE),
    "ISACTIVE" VARCHAR2(6 BYTE),
    "EMP_HIRE_DATE" DATE,
    "EMP_TERM_DATE" DATE,
    "EMP_LEVEL" VARCHAR2(8 BYTE)
  ) SEGMENT CREATION IMMEDIATE
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
  TABLESPACE "TBS1" ;
-- new object type path: TABLE_EXPORT/TABLE/INDEX/INDEX
-- CONNECT SHAM
CREATE UNIQUE INDEX "SHAM"."EMP_EMPID_C1_PK" ON "SHAM"."EMP" ("EMP_ID")
  PCTFREE 10 INITRANS 2 MAXTRANS 255 TABLESPACE "TBS1" PARALLEL 1 ;
ALTER INDEX "SHAM"."EMP_EMPID_C1_PK" NOPARALLEL;
CREATE INDEX "SHAM"."EMP_DPID_IN1" ON "SHAM"."EMP" ("DEPT_ID")
  PCTFREE 10 INITRANS 2 MAXTRANS 255 TABLESPACE "USERS" PARALLEL 1 ;
ALTER INDEX "SHAM"."EMP_DPID_IN1" NOPARALLEL;
-- new object type path: TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
-- CONNECT SYSTEM
ALTER TABLE "SHAM"."EMP" ADD CONSTRAINT "EMP_EMPID_C1_PK" PRIMARY KEY ("EMP_ID")
  USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 TABLESPACE "TBS1"
  ENABLE;
-- new object type path: TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
..
Now the storage clause is ignored.
EXAMPLE V - CHANGE THE PCTSPACE
The PCTSPACE transform is helpful to either reduce or increase the storage space. Whatever percentage value you supply, the storage values in the metadata are resized by that multiplier.
SHAM.EMP TABLE STORAGE CLAUSE IS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "TBS1" ;
Now I specify pctspace:10, so while importing the data it will allocate 10% of the original initial extent, and likewise for the next extent.
$ impdp system/manager dumpfile=sham_emp.dmp directory=dp sqlfile=emp_pct.sql transform=pctspace:10
.. ...
SQLFILE - EMP_PCT.SQL
$ vi emp_pct.sql
.. ...
-- new object type path: TABLE_EXPORT/TABLE/TABLE
CREATE TABLE "SHAM"."EMP"
  ( "EMP_ID" NUMBER,
    "EMP_NAME" VARCHAR2(30 BYTE),
    "GENDER" VARCHAR2(6 BYTE),
    "DEPT_ID" NUMBER CONSTRAINT "EMP_DEPTID_C2_NTNL" NOT NULL ENABLE,
    "EMP_DESG" VARCHAR2(16 BYTE),
    "ISACTIVE" VARCHAR2(6 BYTE),
    "EMP_HIRE_DATE" DATE,
    "EMP_TERM_DATE" DATE,
    "EMP_LEVEL" VARCHAR2(8 BYTE)
  ) SEGMENT CREATION IMMEDIATE
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
  STORAGE(INITIAL 6554 NEXT 104858 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
  FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "TBS1" ;
-- new object type path: TABLE_EXPORT/TABLE/INDEX/INDEX
-- CONNECT SHAM
CREATE UNIQUE INDEX "SHAM"."EMP_EMPID_C1_PK" ON "SHAM"."EMP" ("EMP_ID")
  PCTFREE 10 INITRANS 2 MAXTRANS 255
  STORAGE(INITIAL 6554 NEXT 104858 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
  FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "TBS1" PARALLEL 1;
ALTER INDEX "SHAM"."EMP_EMPID_C1_PK" NOPARALLEL;
CREATE INDEX "SHAM"."EMP_DPID_IN1" ON "SHAM"."EMP" ("DEPT_ID")
  PCTFREE 10 INITRANS 2 MAXTRANS 255
  STORAGE(INITIAL 6554 NEXT 104858 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
  FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "USERS" PARALLEL 1 ;
ALTER INDEX "SHAM"."EMP_DPID_IN1" NOPARALLEL;
-- new object type path: TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
-- CONNECT SYSTEM
ALTER TABLE "SHAM"."EMP" ADD CONSTRAINT "EMP_EMPID_C1_PK" PRIMARY KEY ("EMP_ID")
  USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255
  STORAGE(INITIAL 6554 NEXT 104858 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT
  FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "TBS1"
  ENABLE;
-- new object type path: TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
.. ...
This reduces the original initial extents of the tables. You can now compare these values against a run without the pctspace option.
EXPDP & IMPDP PARAMETERS
EXPDP PARAMETERS
ACCESS_METHOD               NETWORK_LINK
ATTACH                      NOLOGFILE
COMPRESSION                 PARALLEL
CONTENT                     PARFILE
DATA_OPTIONS                QUERY
DIRECTORY                   REMAP_DATA
DUMPFILE                    REUSE_DUMPFILES
ENCRYPTION                  SAMPLE
ENCRYPTION_ALGORITHM        SCHEMAS
ENCRYPTION_MODE             SOURCE_EDITION
ENCRYPTION_PASSWORD         STATUS
ESTIMATE                    TABLES
ESTIMATE_ONLY               TABLESPACES
EXCLUDE                     TRANSPORTABLE
FILESIZE                    TRANSPORT_FULL_CHECK
FLASHBACK_SCN               TRANSPORT_TABLESPACES
FLASHBACK_TIME              VERSION
FULL
HELP
INCLUDE
JOB_NAME
LOGFILE

IMPDP PARAMETERS
ACCESS_METHOD               REMAP_DATA
ATTACH                      REMAP_DATAFILE
CONTENT                     REMAP_SCHEMA
DATA_OPTIONS                REMAP_TABLE
DIRECTORY                   REMAP_TABLESPACE
DUMPFILE                    REUSE_DUMPFILES
ENCRYPTION_PASSWORD         SCHEMAS
ESTIMATE                    SKIP_UNUSABLE_INDEXES
EXCLUDE                     SOURCE_EDITION
FLASHBACK_SCN               SQLFILE
FLASHBACK_TIME              STATUS
FULL                        STREAMS_CONFIGURATION
HELP                        TABLE_EXISTS_ACTION
INCLUDE                     TABLES
JOB_NAME                    TABLESPACES
LOGFILE                     TARGET_EDITION
NETWORK_LINK                TRANSFORM
NOLOGFILE                   TRANSPORTABLE
PARALLEL                    TRANSPORT_DATAFILES
PARFILE                     TRANSPORT_FULL_CHECK
PARTITION_OPTIONS           TRANSPORT_TABLESPACES
QUERY                       VERSION
USERID is implicit; it is the first parameter for all export/import jobs. The user must have read and write permissions on the database directory object which points to the server directory.
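For example, a sketch of creating the directory object and granting access to it (the path and the SCOTT user are assumptions):
SYS> create directory dpdir as '/u01/datapump';
SYS> grant read, write on directory dpdir to scott;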
ACCESS METHOD
Data access method. The default is AUTOMATIC; Data Pump knows the best method to unload/load your data. This is an undocumented parameter; use it only if requested by Oracle support.
$ expdp system/ ... access_method=direct_path
$ expdp system/ ... access_method=external_table
$ impdp system/ ... access_method=direct_path
$ impdp system/ ... access_method=external_table
ATTACH
Attaches the client session to an existing job and automatically places you in the interactive command interface. To attach to a stopped job, you must supply the job name. You can find the job name by querying DBA_DATAPUMP_JOBS or USER_DATAPUMP_JOBS.
SYNTAX  : ATTACH=JOBNAME
EXAMPLE : ATTACH=SYS_EXPORT_SCHEMA_01
$ expdp system/ attach=sys_export_schema_01
export> ...
When you specify the ATTACH parameter, you cannot specify any other parameters except userid/password. Once you have attached to the job, export displays a description of the job. You can change the degree of parallelism while the export job runs.
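A quick way to find the job name to attach to (a sketch; these columns exist in DBA_DATAPUMP_JOBS):
SQL> select owner_name, job_name, operation, state from dba_datapump_jobs;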
COMPRESSION
Reduces the size of the dumpfile (set) while taking the export. Default: METADATA_ONLY.
DATA_ONLY     : Only data is compressed.
METADATA_ONLY : Only metadata is compressed.
ALL           : Both data and metadata are compressed.
NONE          : No compression is performed for the entire export.
Compression of data using ALL or DATA_ONLY is valid only in the Enterprise Edition of Oracle Database 11g and also requires that the Oracle Advanced Compression option be enabled.
SYNTAX  : COMPRESSION={ALL | DATA_ONLY | METADATA_ONLY | NONE}
EXAMPLE : COMPRESSION=ALL
$ expdp system/ dumpfile=dpdir:maya.dmp schemas=maya
$ expdp system/ dumpfile=dpdir:maya.dmp schemas=maya compression=all
Dumpfile size with compression    : 805078272  = 548MB
Dumpfile size without compression : 2057162752 = 2.0GB
CONTENT
You can choose what to unload into the dump file. EXPDP/IMPDP can process DATA_ONLY, METADATA_ONLY, or metadata with data (ALL). You can easily create a skeleton structure of the source by using CONTENT=METADATA_ONLY.
METADATA_ONLY - Unloads only database object definitions; no table row data is unloaded.
DATA_ONLY - Unloads only table row data; no database object definitions are unloaded.
ALL - Unloads both data and metadata. This is the default.
SYNTAX  : CONTENT={ALL | DATA_ONLY | METADATA_ONLY}
EXAMPLE : CONTENT=METADATA_ONLY
$ expdp system/ dumpfile=dpdir:scott.dmp schemas=scott content=metadata_only
$ expdp system/ dumpfile=dpdir:scott.dmp schemas=scott content=data_only
DATA_OPTIONS
DATA_OPTIONS is mainly used with the IMPDP utility. By default this parameter is disabled during the import job. We can invoke it specifically to handle special kinds of data during import operations. Two options are available for this parameter.
SYNTAX  : DATA_OPTIONS={DISABLE_APPEND_HINT | SKIP_CONSTRAINT_ERRORS}
EXAMPLE : DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS
$ expdp system/ dumpfile=dpdir:scott_emp.dmp ..
$ impdp system/ dumpfile=dpdir:scott_emp.dmp .. data_options=skip_constraint_errors
DIRECTORY
DIRECTORY=directory_object - points to the server directory. Specifies the location where your export writes the dumpfile set and logfile. The directory_object is the name of the database directory object previously created by the DBA using the CREATE DIRECTORY SQL statement (not the name of the actual OS directory). Default location: DATA_PUMP_DIR.
SYNTAX  : DIRECTORY=directory_object
EXAMPLE : DIRECTORY=dpdir
$ expdp system/ dumpfile=scott.dmp schemas=scott directory=dpdir
The dumpfile will be written to the path that is associated with the directory object dpdir.
DUMPFILE
Specifies the name of the dumpfile created by the EXPDP utility. The file_name is the name of a file in the dumpfile set. Default: expdat.dmp
SYNTAX  : DUMPFILE=DIRECTORY_OBJECT:FILE_NAME
EXAMPLE : DUMPFILE=DPDIR:FULLDB.DMP
EXAMPLE : DUMPFILE=DPDIR:FULLDB%u.DMP
$ expdp system/ dumpfile=dpdir:fulldb.dmp
$ expdp system/ dumpfile=dpdir:fulldb%u.dmp parallel=4
Here the filename can contain a substitution variable (%u), which implies that multiple files may be generated. The %u argument allows Data Pump to create multiple dump files, one for each parallel process. It is recommended to split .dmp files so that the EXPDP utility can write to multiple files at the same time.
When you export using the PARALLEL clause with a single dumpfile, or with fewer dumpfiles than the parallelism value, the export will fail with the following error.
$ expdp system/ dumpfile=dpdir:fulldb.dmp parallel=4 full=y
.. ...
ORA-39095: Dump file space has been exhausted: Unable to allocate 8192 bytes
When using a single .dmp file, or a number of files less than the parallelism value, several slave processes wait for the file locked by another process in order to write, so we do not benefit from the parallelism. The parallelism value you specify should be less than, or equal to, the number of files in the dump file set. Because each active worker process or I/O server process writes exclusively to one file at a time, specifying an insufficient number of files means worker processes sit idle while waiting for files, degrading the overall performance of the job. To resolve this within the Data Pump utility, add more files using the ADD_FILE parameter in interactive mode.
Solution for ORA-39095: use a number of dump files equal to, or greater than, the parallelism value.
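For example, a sketch of recovering from ORA-39095 without restarting the job (press Ctrl+C in the client to reach the interactive prompt; the file names are assumptions):
EXPORT> ADD_FILE=dpdir:fulldb2.dmp
EXPORT> ADD_FILE=dpdir:fulldb3.dmp
EXPORT> CONTINUE_CLIENT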
ENCRYPTION
Specifies whether to encrypt data before writing it to the dumpfile set. To enable encryption, either ENCRYPTION or ENCRYPTION_PASSWORD or both must be specified. If only ENCRYPTION_PASSWORD is specified, then the ENCRYPTION parameter defaults to ALL. If neither ENCRYPTION nor ENCRYPTION_PASSWORD is specified, then ENCRYPTION defaults to NONE.
ALL : Encryption for all data and metadata.
DATA_ONLY : Data is written to the dumpfile set in encrypted format.
METADATA_ONLY : Metadata is written to the dumpfile set in encrypted format.
ENCRYPTED_COLUMNS_ONLY : Encrypted columns are written to the dumpfile set in encrypted format. To use this option, you must have Oracle Advanced Security transparent data encryption enabled.
NONE : No data is written to the dumpfile set in encrypted format.
SYNTAX  : ENCRYPTION={ALL | DATA_ONLY | ENCRYPTED_COLUMNS_ONLY | METADATA_ONLY | NONE}
EXAMPLE : ENCRYPTION=ALL
$ expdp system/ dumpfile=dpdir:sham.dmp ... encryption=all
Oracle recommends encrypting the entire dumpfile set; encryption of the entire dump file set is the only way to achieve security for SecureFiles data. ENCRYPTION_PASSWORD was introduced in the 10g version of Oracle. Oracle 11g introduced three additional Data Pump parameters: ENCRYPTION, ENCRYPTION_ALGORITHM and ENCRYPTION_MODE.
ENCRYPTION_ALGORITHM
Specifies which cryptographic algorithm should be used to perform the encryption. The ENCRYPTION_ALGORITHM parameter requires ENCRYPTION or ENCRYPTION_PASSWORD to be specified. The default algorithm is AES128 (Advanced Encryption Standard). AES is a symmetric key algorithm that uses the same encryption key for both encryption and decryption.
Exploring the Oracle DBA Technology by Gunasekaran ,Thiyagu
TRADITIONAL EXPORT/IMPORT Vs EXPDP/IMPDP
SYNTAX  : ENCRYPTION_ALGORITHM={AES128 | AES192 | AES256}
EXAMPLE : ENCRYPTION_ALGORITHM=AES256
$ expdp system/ dumpfile=dpdir:sham.dmp ... encryption_algorithm=AES256
ENCRYPTION_MODE
This parameter specifies the type of security used during export and import operations.
SYNTAX  : ENCRYPTION_MODE={DUAL | PASSWORD | TRANSPARENT}
EXAMPLE : ENCRYPTION_MODE=PASSWORD
$ expdp system/ dumpfile=dpdir:sham.dmp ... encryption_mode=password
PASSWORD: This mode requires you to provide a password when creating encrypted dumpfile sets, and the same password when you import the dumpfile set. This mode requires the ENCRYPTION_PASSWORD parameter.
TRANSPARENT: Requires a wallet. This mode creates an encrypted dump file set using an open Oracle Encryption Wallet. ENCRYPTION_PASSWORD is NOT required.
DUAL: This mode creates a dumpfile set that can be imported using either an Oracle Encryption Wallet or the ENCRYPTION_PASSWORD specified during the export operation.
POINTS TO NOTE
If only ENCRYPTION is specified and the Oracle Encryption Wallet is open, then the default mode is TRANSPARENT.
If only ENCRYPTION is specified and the Oracle Encryption Wallet is closed, then an error is returned.
If the ENCRYPTION_PASSWORD parameter is specified and the wallet is open, then the default is DUAL.
If the ENCRYPTION_PASSWORD parameter is specified and the wallet is NOT open, then the default is PASSWORD.
To use DUAL or TRANSPARENT mode, the COMPATIBLE initialization parameter must be at least 11.0.0.
If you use the ENCRYPTION_MODE parameter, you must also use either ENCRYPTION or ENCRYPTION_PASSWORD.
When you use ENCRYPTION=ENCRYPTED_COLUMNS_ONLY, you cannot use the ENCRYPTION_MODE parameter.
ENCRYPTION_PASSWORD
This parameter is valid in the Enterprise Edition of Oracle Database 11g. It prevents unauthorized access to an encrypted dumpfile set. For an export job, this parameter is required if ENCRYPTION_MODE is set to either PASSWORD or DUAL.
SYNTAX  : ENCRYPTION_PASSWORD=PASSWORD
EXAMPLE : ENCRYPTION_PASSWORD=anystring --> # User-defined.
$ expdp system/ dumpfile=dpdir:sham.dmp ... encryption_password=********
POINTS TO NOTE
The ENCRYPTION_PASSWORD parameter is NOT valid if the encryption mode is TRANSPARENT. If ENCRYPTION_PASSWORD is specified but ENCRYPTION_MODE is NOT specified, then ENCRYPTION_MODE defaults to PASSWORD.
ESTIMATE
Default: BLOCKS. Specifies the method export will use to estimate how much disk space (in bytes) each table in the export job will consume. The estimate is for table row data only; it does NOT include metadata.
SYNTAX  : ESTIMATE={BLOCKS | STATISTICS}
EXAMPLE : ESTIMATE=BLOCKS
$ expdp system/ dumpfile=dpdir:schema_hr.dmp ... estimate=blocks
$ expdp system/ dumpfile=dpdir:schema_hr.dmp ... estimate=statistics
BLOCKS - The estimate is calculated from the number of database blocks used by the source objects.
STATISTICS - The estimate is calculated using statistics. This method can be very accurate, but all the tables must have been analyzed recently. Analysis can be done with either the SQL ANALYZE statement or the DBMS_STATS PL/SQL package.
ESTIMATE does NOT consider LOB size; if tables have LOBs the dumpfile size may vary. The estimate can also be inaccurate if you use the QUERY or REMAP_DATA parameter. Oracle recommends using ESTIMATE=STATISTICS to get an accurate size for compressed tables.
ESTIMATE_ONLY
Default is NO. Instructs export to estimate the disk space without performing the export operation; it will NOT export data into a dumpfile.
SYNTAX  : ESTIMATE_ONLY={YES | NO}
EXAMPLE : ESTIMATE_ONLY=YES
$ expdp system/ schemas=scott estimate_only=y
If you also specify a dumpfile, you will get:
ORA-39201: Dump files are not supported for estimate only jobs.
ESTIMATE_ONLY cannot be used in conjunction with the QUERY parameter.
METADATA FILTERING - EXCLUDE/INCLUDE
We can filter which objects are loaded/unloaded - this is called METADATA FILTERING. Metadata filtering is implemented through the EXCLUDE & INCLUDE parameters. The EXCLUDE & INCLUDE parameters are mutually exclusive, i.e. it is NOT possible to specify both the INCLUDE parameter and the EXCLUDE parameter in the same job. Dependent objects of an identified object are processed along with the identified object. Example: if a filter specifies that an index is to be included in an operation, then statistics from that index will be included.
If a table is excluded by a filter, then all associated objects of the table (INDEXES, CONSTRAINTS, GRANTS, TRIGGERS, PROCEDURES) will be excluded by the filter. To check the list of valid objects, you can query the following views.
DATABASE_EXPORT_OBJECTS : FULL MODE
SCHEMA_EXPORT_OBJECTS   : SCHEMA MODE
TABLE_EXPORT_OBJECTS    : TABLE AND TABLESPACE MODE
SQL> select object_path, named, comments from schema_export_objects;
...
SQL> select object_path, named, comments from database_export_objects where object_path like 'SCHEMA%';
...
EXCLUDE / INCLUDE
EXCLUDE: List of objects to be excluded. INCLUDE: List of objects to be included. The purpose is fine-grained filtering of objects during export or import. There is no default. We can use the EXCLUDE and INCLUDE parameters with both EXPDP and IMPDP. EXCLUDE excludes the objects and object types that you do NOT want in the export or import operation; INCLUDE includes the objects and object types that you DO want.
SYNTAX : EXCLUDE=OBJECT_TYPE[:NAME_CLAUSE][,...]
SYNTAX : INCLUDE=OBJECT_TYPE[:NAME_CLAUSE][,...]
OBJECT_TYPE - Specifies the type of object to be filtered.
OPERATORS - IN, NOT IN, LIKE, =
NAME_CLAUSE - Separated from the object_type by a colon. The name clause must be enclosed in double quotation marks, and single quotation marks are required to delimit the name strings. You can use expressions with the operators in the name clause to filter the objects according to your requirement. See the examples given below.
EXAMPLES FOR EXCLUDE & INCLUDE
EXCLUDE=PROCEDURE:"LIKE 'MY_PROC%'"
EXCLUDE=INDEX:"='INDX1'"
EXCLUDE=INDEX:"LIKE 'INDX%'"
EXCLUDE=SCHEMA:"='SCOTT'"
EXCLUDE=SCHEMA:"IN\('EMP'\,'DEPT'\)"
EXCLUDE=SEQUENCE,TABLE:"IN('EMP','DEPT')"
EXCLUDE=STATISTICS
EXCLUDE=TABLE:"='DEPT'"
EXCLUDE=TABLE:"LIKE 'EMP'"
EXCLUDE=TABLE:"IN('EMP','DEPT')"
EXCLUDE=TABLE:"IN ('CONTRACT_TYPES','TIME_CODES','LAB_BOOK_ACCESS')",GRANT,TABLE_DATA
EXCLUDE=TRIGGER:"IN('TRIG1','TRIG2')",INDEX:"='INDX1'",REF_CONSTRAINT
EXCLUDE=VIEW,PACKAGE,PROCEDURE,FUNCTION
EXCLUDE=VIEW,MATERIALIZED_VIEW,MATERIALIZED_VIEW_LOG
INCLUDE=FUNCTION,TABLE:"='EMP'"
INCLUDE=INDEX:\"LIKE 'PK%'\"
INCLUDE=PROCEDURE:"LIKE 'MY_PROC%'"
INCLUDE=PROCEDURE:\"=\'PROC1\'\",FUNCTION:\"=\'FUNC1\'\"
INCLUDE=TABLE:"IN('EMP','DEPT')"
INCLUDE=TABLE:">'E'"
INCLUDE=VIEW,PACKAGE:"LIKE '%API'"
Once you specify the EXCLUDE/INCLUDE parameter with a Data Pump utility, all the objects mentioned in the EXCLUDE/INCLUDE clause will be considered for the operation. Oracle recommends using the EXCLUDE or INCLUDE parameters in a parameter file (see the sketch after the points below).
POINTS TO NOTE
All constraints are included by default; there is no need to specify them explicitly. EXCLUDE=CONSTRAINT will exclude all NON-REFERENTIAL constraints. EXCLUDE=REF_CONSTRAINT will exclude referential integrity constraints. NOT NULL constraints cannot be explicitly excluded. EXCLUDE=GRANT excludes object grants on all object types as well as system privilege grants. EXCLUDE=USER excludes only the definitions of the users, not the objects contained within the users' schemas.
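A sketch of the recommended parfile approach (the file name and filters are assumptions); keeping EXCLUDE inside a parfile avoids shell-quoting problems with the double and single quotes:
$ cat exp_exclude.par
dumpfile=dpdir:hr_filtered.dmp
logfile=hr_filtered.log
schemas=hr
exclude=STATISTICS
exclude=TABLE:"LIKE 'TMP%'"
$ expdp system/ parfile=exp_exclude.par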
FILESIZE
The maximum dumpfile size when EXPDP is executed; mostly used when splitting dumpfiles. Whenever we use the substitution variable (%u), the required dump files are added to the job. Notice the %u flag, which appends a two-digit suffix to each Data Pump file.
SYNTAX  : FILESIZE=INTEGER_VALUE {BYTES | KILOBYTES | MEGABYTES | GIGABYTES}
EXAMPLE : FILESIZE=500M
$ expdp system/ dumpfile=dpdir:sham%u.dmp filesize=500m ...
.. ...
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_06 is:
/u01/datapump/sham01.dmp
/u01/datapump/sham02.dmp
/u01/datapump/sham03.dmp
/u01/datapump/sham04.dmp
We got the dumpfile names sham01.dmp, sham02.dmp, ... If the FILESIZE is reached for any member of the dump file set, then that file is closed and Oracle attempts to create a new dumpfile for the current operation. When you perform the import job, you have to mention the dumpfile along with the substitution variable (%u).
FLASHBACK_SCN / FLASHBACK_TIME
No default. FLASHBACK_SCN specifies the SCN that export will use to enable the Flashback Query utility.
SYNTAX  : FLASHBACK_SCN=SCN_VALUE
EXAMPLE : FLASHBACK_SCN=2010025
$ expdp system/ dumpfile=dpdir:hr_fbscn.dmp ... flashback_scn=2010025
The export operation is performed with data that is consistent as of the specified SCN.
SYNTAX  : FLASHBACK_TIME="TO_TIMESTAMP(TIME-VALUE)"
EXAMPLE : FLASHBACK_TIME="TO_TIMESTAMP('08-01-2015 00:40:55', 'DD-MM-YYYY HH24:MI:SS')"
$ expdp system/ dumpfile=dpdir:hr_fb_time.dmp ... flashback_time="to_timestamp('08-01-2015 00:40:55', 'dd-mm-yyyy hh24:mi:ss')"
FLASHBACK_TIME: You can specify any time that falls within the flashback capabilities (undo retention and size) using the to_timestamp argument. Oracle finds the SCN that most closely matches the specified time, and this SCN is used to enable the flashback utility. The export operation is performed with data that is consistent up to this SCN.
POINTS TO NOTE
For FLASHBACK_SCN, you need to use an SCN as the argument; for FLASHBACK_TIME, a timestamp value. FLASHBACK_SCN and FLASHBACK_TIME are mutually exclusive. Both affect only the flashback query capability; they are NOT applicable to flashback database, flashback drop or flashback data archive.
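To pick a value for FLASHBACK_SCN, you can read the current SCN from the source database first (a minimal sketch; substitute the returned value for the placeholder):
SQL> select current_scn from v$database;
$ expdp system/ dumpfile=dpdir:hr_fbscn.dmp schemas=hr flashback_scn=<scn_from_query>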
FULL
Default is NO. Specifies that you want to perform a full database export or import. If FULL=y, all data and metadata are exported.
ROLES REQUIRED TO PERFORM FULL EXPORT OR IMPORT
EXPORT : EXP_FULL_DATABASE or DATAPUMP_EXP_FULL_DATABASE
IMPORT : IMP_FULL_DATABASE or DATAPUMP_IMP_FULL_DATABASE
SYNTAX  : FULL={Y | N}
EXAMPLE : FULL=Y
$ expdp system/ dumpfile=dpdir:fulldb.dmp ... full=y
$ impdp system/ dumpfile=dpdir:fulldb.dmp ... full=y
POINTS TO NOTE
A full export does NOT export system schemas that contain Oracle-managed data and metadata. System schemas include SYS, ORDSYS, CTXSYS, ORDPLUGINS, WMSYS, DBSNMP, DIP, etc. Grants on objects owned by the SYS schema are never exported.
HELP=Y
Default is NO. Displays all parameters with descriptions for the export and import utilities.
$ expdp help=y
$ impdp help=y
JOB_NAME
Data Pump creates a job name for each and every export and import process. This parameter specifies a name for the job. The default system-generated job name is of the form SYS_<OPERATION>_<MODE>_NN, where:
OPERATION : EXPORT | IMPORT
MODE      : FULL | TABLESPACE | SCHEMA | TABLE
NN        : a 2-digit incrementing integer starting at 01
DEFAULT JOB_NAME : SYS_EXPORT_TABLESPACE_01
If you do not want the default job_name, you can assign a job_name explicitly.
SYNTAX  : JOB_NAME=JOBNAME_STRING
EXAMPLE : JOB_NAME=TBS_USERS_EXP
$ expdp system/ dumpfile=dpdir:tbs_exp_users.dmp ... job_name=TBS_USERS_EXP
POINTS TO NOTE
The master table has the same name as the job. It is good practice to have the job_name explicitly defined, so the user can reattach to a stopped job at any time and related objects such as the MASTER TABLE can be found easily.
KEEP_MASTER & METRICS
Default is NO for both; both are undocumented parameters. KEEP_MASTER indicates whether the master table should be deleted or retained at the end of a Data Pump job. The master table is automatically deleted when a job completes successfully, and automatically retained for jobs that do NOT complete successfully.
SYNTAX  : KEEP_MASTER={YES | NO}
EXAMPLE : KEEP_MASTER=YES
$ expdp system/ dumpfile=dpdir:expdat.dmp ... keep_master=yes
The METRICS parameter does NOT create any additional files for your job. Default is NO. When METRICS=Y, additional logging information about the number of objects and the time it took to process them is written to the logfile.
SYNTAX  : METRICS={Y | N}
EXAMPLE : METRICS=Y
$ expdp system/ dumpfile=dpdir:expdat.dmp ... schemas=hr metrics=y
Manually dropping the master table does NOT lead to any data dictionary corruption.
LOGFILE/NOLOGFILE
LOGFILE: You can specify the LOGFILE parameter to create a logfile for an export job.
Default log filename for an export job: export.log
Default log filename for an import job: import.log
If you do not specify the LOGFILE parameter, Oracle will create a logfile named export.log for export and import.log for import; moreover, subsequent export jobs will overwrite the export.log file and import jobs will overwrite the import.log file.
$ expdp system/ dumpfile=scott.dmp logfile=scott.log schemas=scott directory=dpdir
$ expdp system/ dumpfile=scott.dmp schemas=scott directory=dpdir
NOLOGFILE: If you do NOT want a logfile created for your export job, specify the NOLOGFILE parameter.
$ expdp system/ dumpfile=scott.dmp nologfile=y schemas=scott
NETWORK_LINK
Default is none. EXPORT: Enables an export from a source database via a valid database link. IMPORT: Enables an import from a source database via a valid database link.
EXPDP: The NETWORK_LINK parameter initiates an export using a database link - the connected EXPDP client contacts the source database referenced by the source_database_link, retrieves data from it, and writes the data to a dumpfile set back on the connected system.
IMPDP: The NETWORK_LINK parameter initiates an import via a database link - the connected IMPDP client contacts the source database referenced by the source_database_link, retrieves data from it, and writes the data directly to the database on the connected instance. NO DUMPFILES ARE INVOLVED.
SYNTAX  : NETWORK_LINK=SOURCE_DATABASE_LINK
EXAMPLE : NETWORK_LINK=DBLINK_CRMS
$ expdp system/ dumpfile=hr.dmp ... network_link=crms_dblink
$ impdp system/ directory=exp_dir ... network_link=crms_dblink
Roles required to import data using NETWORK_LINK (otherwise you will see the following errors):
ORA-31631: privileges are required
ORA-39149: cannot link privileged user to non-privileged user
The user who is executing the import job must have the DATAPUMP_IMP_FULL_DATABASE role on the target database. The source database schema must have the DATAPUMP_EXP_FULL_DATABASE role.

SOURCE DATABASE : CRMS  SERVER1  192.168.1.130  PRODUCTION
TARGET DATABASE : HRMS  SERVER2  192.168.1.131  DEVELOPMENT

Create a database link in the target database (a sketch follows below). Now you can transfer data from the source database to the target database over the network (via the database link) without creating any dumpfile set(s).
$ impdp system/ logfile=dp_dir:netwrk_imp.log network_link=crms_link schemas=hr remap_schema=hr:scott
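A minimal sketch of the database link itself, created on the target database (the TNS alias 'CRMS' and the credentials are assumptions):
SYS> create database link crms_link connect to system identified by manager using 'CRMS';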
For imports, the NETWORK_LINK parameter identifies the database link pointing to the source server. Schema objects are imported directly from the source to the development server without a dump file. Although there is no need for a DUMPFILE parameter, a directory object is still required for the logs associated with the operation.
OPTION 2:
$ expdp system/manager dumpfile=dp_dir:scott.dmp logfile=dp_dir:scott.log network_link=crms_link
For exports, the NETWORK_LINK parameter identifies a database link to be used as the source of a network export. The objects are exported from the source server in the normal manner, but written to a directory object on the local server rather than one on the source server.
PARALLEL
Default is 1. Specifies the number of worker processes. PARALLEL is the only tuning parameter specific to Data Pump; Data Pump performance can be improved by using it. PARALLEL allows you to launch multiple worker processes; by setting PARALLEL=n (an integer value), you can increase or decrease the number of active worker processes and/or PQ slaves for the current job. It is possible to increase or decrease the parallel count during job execution using interactive mode; if you wish to change the degree of parallelism on the fly, use the ATTACH parameter. An increase takes effect immediately if there is work that can be performed in parallel. A decrease does not take effect until an existing process finishes its current task; if the integer value is decreased, workers sit idle but are NOT deleted until the job exits. Oracle recommends that the value of the PARALLEL parameter be no more than two times the number of CPUs in the database server for optimum performance. When you specify PARALLEL=n, the dumpfile should be used with the %u wildcard, which allows multiple dump files to be created for the job.
Do we need exactly the same number of dump files as the parallel value? The parallel value should be less than or equal to the number of files in the dump file set. Each worker or parallel execution process requires exclusive access to its dump file. If we have fewer dump files than the degree of parallelism, some workers or PX processes will be unable to write the information they are exporting. If this occurs, those worker processes go into an idle state and do no work until more files are added to the job.
SYNTAX  : PARALLEL=INTEGER_VALUE
EXAMPLE : PARALLEL=4
$ expdp system/ full=y parallel=4 filesize=5g dumpfile=dpdir1:file1_%U.dmp,dpdir2:file2_%U.dmp,dpdir3:file3_%U.dmp,dpdir4:file4_%U.dmp,dpdir5:file5_%U.dmp logfile=export.log
$ expdp system/ dumpfile=dpdir:fulldb%u.dmp full=y ... parallel=4
$ impdp system/ dumpfile=dpdir:file%U.dmp ... parallel=4
GENERAL GUIDELINES FOR USING THE PARALLEL PARAMETER
PARALLEL is most useful for big jobs with a lot of data. For export, the PARALLEL parameter value should be less than or equal to the number of dump files. For import, the PARALLEL=n value should not be larger than the number of files in the dump set. Data Pump uses Streams functionality to communicate between processes; if you set the SGA_TARGET initialization parameter, the STREAMS_POOL_SIZE initialization parameter will be sized as required. As per Oracle, tables smaller than 250 MB are not considered for parallel export, and one worker is dedicated to exporting metadata.
PARFILE
No default. Specifies the name of the parameter file, which contains the export or import parameters. Highly recommended when using parameters whose values require the use of quotation marks.
SYNTAX  : PARFILE=[DIRECTORY_PATH]FILE_NAME
EXAMPLE : PARFILE=PARAM.INI
$ vi param.ini
dumpfile=dpdir:schema_bkp.dmp
logfile=dpdir:schema_imp.log
schemas=hr
parallel=2
$ expdp system/ parfile=param.ini
PARTITION_OPTIONS
Default is NONE. Specifies how table partitions should be created during an import operation.
NONE: Imports partitioned tables as they exist in the source database.
MERGE: Imports all partitions and sub-partitions into one table.
DEPARTITION: All partitions and sub-partitions are imported into individual tables.
SYNTAX  : PARTITION_OPTIONS={NONE | DEPARTITION | MERGE}
EXAMPLE : PARTITION_OPTIONS=MERGE
$ impdp system/ dumpfile=dpdir:hr.dmp ... partition_options=merge
If the export dump was taken using the transportable option, you cannot use the MERGE option. If the export operation used the transportable method (and a partition/subpartition was specified), then the import must use the DEPARTITION option. A value of DEPARTITION promotes each partition/subpartition to a new individual table. The default name of the new table is the concatenation of the table and partition name, or the table and subpartition name.
QUERY
Default is none. Allows you to specify a query clause that filters the data that gets exported or imported. When the QUERY parameter is used, the external tables method is used for data access.
SYNTAX  : QUERY=SCHEMA.TABLE_NAME:QUERY_CLAUSE
EXAMPLE : QUERY='SALES:"WHERE EXISTS (SELECT CUST_ID FROM SCOTT.CUSTOMERS C WHERE CUST_CREDIT_LIMIT > 10000 AND KU$.CUST_ID=C.CUST_ID)"'
$ impdp system/ dumpfile=dpdir:scott_query.dmp query='sales:"where exists (select cust_id from scott.customers c where cust_credit_limit > 10000 and ku$.cust_id=c.cust_id)"'
If KU$ is not used as the table alias, the result will be that all rows are loaded. The QUERY parameter cannot be used with CONTENT=METADATA_ONLY, SQLFILE, or TRANSPORT_DATAFILES.
REMAP
Already discussed in the REMAP FUNCTION chapter.
REUSE_DUMPFILES
Default is NO. Specifies whether or not to overwrite pre-existing export dump files. When set to Y, any existing dump files will be overwritten.
SYNTAX  : REUSE_DUMPFILES={YES | NO}
EXAMPLE : REUSE_DUMPFILES=YES
$ expdp system/ dumpfile=scott.dmp ... reuse_dumpfiles=yes
SAMPLE
There is NO default. This parameter is used to export a sample subset of rows; the value given is treated as the sample percentage.
SYNTAX  : SAMPLE=SCHEMA.TABLE_NAME:SAMPLE_PERCENT
EXAMPLE : SAMPLE=SCOTT.EMP:60
$ expdp system/ dumpfile=dpdir:file%U.dmp sample=60
The SAMPLE parameter is NOT valid for network exports.
SCHEMAS
Default is the current user's schema. Specifies a schema-level export. The DATAPUMP_EXP_FULL_DATABASE role allows you to export other schemas; the DATAPUMP_IMP_FULL_DATABASE role allows you to import a list of schemas.
SYNTAX  : SCHEMAS=SCHEMA_NAME[,...]
EXAMPLE : SCHEMAS=SCOTT,HR
$ expdp system/ dumpfile=dpdir:scott_exp.dmp ... schemas=scott
$ expdp system/ dumpfile=dpdir:schema_exp.dmp ... schemas=scott,hr,maya
$ impdp system/ dumpfile=dpdir:schema_exp.dmp ... schemas=scott,hr,maya
SOURCE_EDITION
Default is the default database edition. Specifies the edition to be used for extracting metadata (from 11g R2). The default edition, ORA$BASE, is the parent or first edition for all objects.
SYS> select sys_context('userenv', 'current_edition_name') from dual;
SYNTAX  : SOURCE_EDITION=EDITION_NAME
EXAMPLE : SOURCE_EDITION=EDITION_NAME
$ expdp system/ dumpfile=dpdir:expdat.dmp ... source_edition=<edition_name>
If the SOURCE_EDITION parameter is specified, then objects from that edition are exported. If SOURCE_EDITION is not specified, then the default edition is used.
STATUS
Default is 0. Used to display the status of the job. If you supply an integer value, in seconds, the job status will be displayed in logging mode at that interval.
SYNTAX  : STATUS=INTEGER_VALUE
EXAMPLE : STATUS=300
$ expdp system/ dumpfile=dpdir:tbs.dmp ... status=300
Displays the status of the export job every 5 minutes (60 seconds * 5 = 300 seconds).
SKIP_UNUSABLE_INDEXES
The default value of this parameter is Y. If SKIP_UNUSABLE_INDEXES=Y and a table or partition with an index in the unusable state is encountered, the load of the table or partition proceeds anyway. If SKIP_UNUSABLE_INDEXES=N and a table or partition with an index in the unusable state is encountered, the table or partition is not loaded.
SYNTAX  : SKIP_UNUSABLE_INDEXES={Y | N}
EXAMPLE : SKIP_UNUSABLE_INDEXES=Y
$ impdp system/ dumpfile=dpdir:exp.dmp ... skip_unusable_indexes=Y
This parameter is useful only when importing data into an existing table. It has no effect when a table is created as part of an import; in that case the table and indexes are newly created and will not be marked unusable. If the SKIP_UNUSABLE_INDEXES parameter is not specified, its default value (Y) is used to determine how to handle unusable indexes.
STREAMS_CONFIGURATION
Default is Y. Specifies whether or not to import any Streams metadata that may be present in the export dumpfile.
SYNTAX  : STREAMS_CONFIGURATION={Y | N}
EXAMPLE : STREAMS_CONFIGURATION=Y
$ impdp system/ dumpfile=dpdir:expfull.dmp ... streams_configuration=n
SQLFILE
There is no default. Used to extract the DDL from an export dumpfile. When we execute IMPDP with SQLFILE, it won't import the data into the tables/schemas; it only writes the DDL to the named file.
SYNTAX  : SQLFILE=DIRECTORY_OBJECT:FILE_NAME
EXAMPLE : SQLFILE=DPDIR:SCOTT_SCRIPT.SQL
$ impdp system/ dumpfile=dpdir:scott_exp.dmp ... sqlfile=scott_script.sql
$ impdp system/ dumpfile=dpdir:scott_exp.dmp ... sqlfile=dpdir:scott_script.sql
TABLES
There is no default. Specifies a table-mode export. The table name that you specify can be preceded by a schema name. To specify a schema other than your own, you must have the DATAPUMP_EXP_FULL_DATABASE role.
SYNTAX  : TABLES=[SCHEMA_NAME.]TABLE_NAME[:PARTITION_NAME] [,...]
EXAMPLE : TABLES=SCOTT.EMP
$ expdp system/ dumpfile=dpdir:tables_emp.dmp ... tables=hr.emp,scott.dept
$ expdp system/ dumpfile=dpdir:tables_part.dmp tables=scott.sales:sales_q1,scott.sales:sales_q2,scott.sales:sales_q3
If an entire partitioned table is exported, then it will be imported entirely as a partitioned table, unless PARTITION_OPTIONS=DEPARTITION is specified during the import.
TABLE_EXISTS_ACTION
Default is SKIP. Tells import what to do when it tries to create a table that already exists. The possible values are SKIP, APPEND, TRUNCATE and REPLACE. (If CONTENT=DATA_ONLY is specified, the default is APPEND, not SKIP.)
SYNTAX  : TABLE_EXISTS_ACTION={SKIP | APPEND | TRUNCATE | REPLACE}
EXAMPLE : TABLE_EXISTS_ACTION=APPEND
$ impdp system/ dumpfile=dpdir:file%U.dmp ... table_exists_action=replace
TRANSPORTABLE
Default is NEVER. This is similar to the TRANSPORT_TABLESPACES parameter. Specifies whether the transportable option should be used during a table-mode export.
ALWAYS: Instructs the export job to use the transportable option. If transportable is NOT possible, then the job will fail. This option exports only metadata for the tables, partitions, or sub-partitions specified by the TABLES parameter.
NEVER: Default. Instructs the export job to use either the direct path or external table method to unload data, rather than the transportable option.
SYNTAX  : TRANSPORTABLE={ALWAYS | NEVER}
EXAMPLE : TRANSPORTABLE=ALWAYS
$ expdp system/ dumpfile=dpdir:tbs_exp.dmp ... transportable=always
Transportable export requires the DATAPUMP_EXP_FULL_DATABASE privilege. The TRANSPORTABLE parameter is only valid in table-mode exports. Transportable mode does not export any data; data is copied when the tablespace datafiles are copied from the source system to the target system.
TABLESPACES
Default is none. EXPORT: Specifies a list of tablespace names to be exported in tablespace mode. IMPORT: Specifies that you want to perform a tablespace-mode import.
SYNTAX  : TABLESPACES=TABLESPACE_NAME[,...]
EXAMPLE : TABLESPACES=USERS,CRMS,HRMS
$ expdp system/ dumpfile=dpdir:tbs.dmp ... tablespaces=users,crms,hrms
$ impdp system/ dumpfile=dpdir:tbs.dmp ... tablespaces=users,crms,hrms
Before importing the tablespace, we need to create the tablespace in the database.
TRANSPORT_TABLESPACES AND TRANSPORT_DATAFILES
Oracle allows you to transport tablespaces from one database to another, across different OS platforms. This is the most efficient way of bulk data movement.
TRANSPORT_TABLESPACES: There is no default. EXPORT: Specifies that you want to perform an export in transportable-tablespace mode. IMPORT: Specifies that you want to perform an import in transportable-tablespace mode. The TRANSPORT_TABLESPACES parameter exports metadata for all objects within the specified tablespaces. We can specify a list of tablespace names for which object metadata will be exported from the source database into the target database. The SYSTEM and SYSAUX tablespaces are NOT transportable. Transportable tablespace mode requires the DATAPUMP_EXP_FULL_DATABASE role. The corresponding datafiles should be copied to the target database prior to starting the import.
SYNTAX  : TRANSPORT_TABLESPACES=TABLESPACE_NAME
EXAMPLE : TRANSPORT_TABLESPACES=example
$ expdp system/ dumpfile=tts.dmp directory=dpdir logfile=tts.log transport_tablespaces=tools transport_full_check=yes
$ impdp system/ dumpfile=tts.dmp directory=dpdir logfile=tts.log transport_datafiles='/file_path/'
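One step the example above assumes: the tablespace must be placed in READ ONLY mode while the transportable export runs and the datafiles are copied. A minimal sketch of the full sequence, using the same tablespace name:
SQL> alter tablespace tools read only;
$ expdp system/ dumpfile=tts.dmp directory=dpdir logfile=tts.log transport_tablespaces=tools transport_full_check=yes
(copy the datafiles and tts.dmp to the target server)
SQL> alter tablespace tools read write;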
TRANSPORT_DATAFILES: There is no default. Specifies a list of datafiles to be imported into the target database; the files must already have been copied from the source database. This parameter is used to identify the datafiles holding the table data.
SYNTAX  : TRANSPORT_DATAFILES=DATAFILE_NAME
EXAMPLE : TRANSPORT_DATAFILES='/u01/app/oracle/oradata/crms01.dbf'
$ impdp system/ dumpfile=dpdir:tbs_crms.dmp ... transport_datafiles='/u01/app/oracle/oradata/crms01.dbf'
TRANSPORT_FULL_CHECK
There is no default. This parameter is applicable only to a transportable-tablespace mode export. It specifies whether or not to check that the specified transportable set has no dependencies.
SYNTAX  : TRANSPORT_FULL_CHECK={ YES | NO }
EXAMPLE : TRANSPORT_FULL_CHECK=YES
$ expdp system/ dumpfile=dpdir:tbs_exp.dmp ... transport_tablespaces=bpms transport_full_check=yes
With the TRANSPORT_FULL_CHECK parameter, Data Pump checks tablespace dependencies. Suppose a table (tab1) in the BPMS tablespace has an index (indx1) in the USERS tablespace. Transporting the USERS tablespace without the BPMS tablespace would present a problem: in the target database there would be an index pointing to a table that does not exist. With TRANSPORT_FULL_CHECK=YES the export detects this and fails with an ORA-39907 error:
ORA-39907: Index HR.INDX1 in tablespace USERS points to table HR.TAB1 in tablespace BPMS
Transporting BPMS without USERS is flagged in the same way, since the full check verifies dependencies in both directions.
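The same self-containment check can be run manually before the export; a sketch (the tablespace list is assumed from the example above):
SYS> exec dbms_tts.transport_set_check('BPMS,USERS', TRUE, TRUE);
SYS> select * from transport_set_violations;
No rows returned means the transportable set is self-contained.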
VERSION
The default is COMPATIBLE. EXPORT: Specifies the version of database objects to be exported. IMPORT: Specifies the version of database objects to be imported. The purpose is to move data between Oracle versions, from a higher version to a lower version.
COMPATIBLE     : The version of the metadata corresponds to the database compatibility level.
LATEST         : The version of the metadata corresponds to the database release.
VERSION_STRING : A specific database release.
SYNTAX  : VERSION={ COMPATIBLE | LATEST | VERSION_STRING }
EXAMPLE : VERSION=LATEST
$ expdp system/ dumpfile=dpdir:hr.dmp ... tables=hr.emp version=latest
Data Pump did not exist before 10g, so we cannot specify versions earlier than Oracle 10g. To import the dump file into a 10.2.0.5 database, export from 11.2.0.1 with VERSION=10.2:
$ expdp system/ dumpfile=dpdir:hr.dmp ... tables=hr.emp version=10.2
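Before choosing a VERSION value, it can help to confirm the target database's compatibility setting; a minimal sketch:
SYS> select name, value from v$parameter where name = 'compatible';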
INTERACTIVE COMMAND MODE PARAMETERS
While an export or import job is running, press CTRL+C at the client to enter interactive mode.
EXPDP PARAMETERS : ADD_FILE, CONTINUE_CLIENT, EXIT_CLIENT, FILESIZE, HELP, KILL_JOB, PARALLEL, REUSE_DUMPFILES, START_JOB, STATUS, STOP_JOB
IMPDP PARAMETERS : CONTINUE_CLIENT, EXIT_CLIENT, HELP, KILL_JOB, PARALLEL, START_JOB, STATUS, STOP_JOB
You can use the following interactive mode parameters during an EXPORT or IMPORT process.
ADD_FILE        : Add additional dump files (export).
CONTINUE_CLIENT : Exit from interactive mode and enter logging mode.
EXIT_CLIENT     : Stop the client session, but leave the job running.
FILESIZE        : Redefine the file size.
HELP            : Display a summary of available commands.
KILL_JOB        : Detach currently attached client sessions and kill the current job.
PARALLEL        : Increase/decrease the number of active worker processes for the current job.
START_JOB       : Restart a stopped job to which you are attached.
STATUS          : Display detailed status for the current job.
STOP_JOB        : Stop the current job for restart later.
SYNTAX  : ADD_FILE=[DIRECTORY_OBJECT:]FILE_NAME[,...]
EXAMPLE : ADD_FILE=dpdir:fulldb2.dmp
EXPORT> ADD_FILE=dpdir:fulldb2.dmp
or
EXPORT> ADD_FILE=fulldb2.dmp
SYNTAX  : FILESIZE=NUMBER
EXAMPLE : FILESIZE=100M
EXPORT> FILESIZE=100M
EXPORT> ADD_FILE=dp:fulldb2.dmp filesize=100m
EXPORT> HELP
EXPORT> STOP_JOB=IMMEDIATE
EXPORT> START_JOB
EXPORT> STATUS
EXPORT> PARALLEL=4
EXPORT> CONTINUE_CLIENT
EXPORT> EXIT_CLIENT
EXPORT> KILL_JOB
DATA PUMP EXPDP/IMPDP JOBS - HELP, STATUS, STOP_JOB, START_JOB, CONTINUE_CLIENT, EXIT_CLIENT, KILL_JOB
Every export/import job has a separate job name; we can see job details using DBA_DATAPUMP_JOBS. Whenever a job is submitted, a master table is created. This master table contains info about the current Data Pump operation being performed. By default, the master table name is the same as the job name.
TAKE EXPORT
$ expdp system/manager dumpfile=dpdir:file1.dmp schemas=scott reuse_dumpfiles=yes
.. ...
EXPORT> HELP
The following commands are valid while in interactive mode. Note: abbreviations are allowed.
ADD_FILE          Add dumpfile to dumpfile set.
CONTINUE_CLIENT   Return to logging mode. Job will be restarted if idle.
EXIT_CLIENT       Quit client session and leave job running.
FILESIZE          Default filesize (bytes) for subsequent ADD_FILE commands.
HELP              Summarize interactive commands.
KILL_JOB          Detach and delete job.
PARALLEL          Change the number of active workers for current job.
REUSE_DUMPFILES   Overwrite destination dump file if it exists [N].
START_JOB         Start or resume current job. Valid keyword values are: SKIP_CURRENT.
STATUS            Frequency (secs) job status is to be monitored where the default [0] will show new status when available.
STOP_JOB          Orderly shutdown of job execution and exits the client. Valid keyword values are: IMMEDIATE.
SYSTEM> select owner_name, job_name, operation, job_mode, state from dba_datapump_jobs;
OWNER_NAME   JOB_NAME               OPERATION   JOB_MODE   STATE
----------   --------------------   ---------   --------   ---------
SYSTEM       SYS_EXPORT_SCHEMA_01   EXPORT      SCHEMA     EXECUTING
By hitting CTRL+C on the export/import client, we can monitor the status of the running job.
Export> status
Job: SYS_EXPORT_SCHEMA_01
  Operation: EXPORT
  Mode: SCHEMA
  State: EXECUTING
  Bytes Processed: 0
  Current Parallelism: 1
  Job Error Count: 0
  Dump File: /u01/datapump/file1.dmp
    bytes written: 4,096
Worker 1 Status:
  Process Name: DW00
  State: EXECUTING
  Object Schema: SCOTT
  Object Type: SCHEMA_EXPORT/DEFAULT_ROLE
  Completed Objects: 1
  Total Objects: 1
  Worker Parallelism: 1
EXPORT> continue_client
..
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "SCOTT"."SALES_Q1_TARGET"             899.6 MB   48000 rows
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."SALES_Q2_TARGET"             899.6 MB   48000 rows
Export> stop_job=immediate
Are you sure you wish to stop this job ([yes]/no): yes
SYSTEM> select owner_name, job_name, operation, job_mode, state from dba_datapump_jobs;
OWNER_NAME   JOB_NAME               OPERATION   JOB_MODE   STATE
----------   --------------------   ---------   --------   -----------
SYSTEM       SYS_EXPORT_SCHEMA_01   EXPORT      SCHEMA     NOT RUNNING
HOW TO RESUME A STOPPED JOB
$ expdp system/manager attach=SYS_EXPORT_SCHEMA_01
.. ...
Job: SYS_EXPORT_SCHEMA_01
  Owner: SYSTEM
  Operation: EXPORT
  Creator Privs: TRUE
  GUID: 1D4BA801E304447AE050A8C0820166B1
  Start Time: Saturday, 15 August, 2015 2:34:15
  Mode: SCHEMA
  Instance: crms
  Max Parallelism: 2
  EXPORT Job Parameters:
  Parameter Name     Parameter Value:
    CLIENT_COMMAND   system/******** dumpfile=dpdir:file1%u.dmp schemas=scott reuse_dumpfiles=yes parallel=2 job_name=SYS_EXPORT_SCHEMA_01
  State: IDLING
  Bytes Processed: 0
  Current Parallelism: 2
  Job Error Count: 0
  Dump File: /u01/datapump/file1%u.dmp
  Dump File: /u01/datapump/file101.dmp
    bytes written: 517,476,352
  Dump File: /u01/datapump/file102.dmp
    bytes written: 24,576
Worker 1 Status:
  Process Name: DW00
  State: UNDEFINED
  Object Schema: SCOTT
  Object Type: SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
  Completed Objects: 1
  Worker Parallelism: 1
Worker 2 Status:
  Process Name: DW01
  State: UNDEFINED
EXPORT> START_JOB
EXPORT> EXIT_CLIENT
$ expdp system/ attach=SYS_EXPORT_SCHEMA_01
.. ...
HIT "CTRL+C" TO GET THE EXPDP CLIENT INTERACTIVE MODE PROMPT
EXPORT> CONTINUE_CLIENT
..
HIT "CTRL+C" Export> kill_job Are you sure you wish to stop this job ([yes]/no): yes SYSTEM> select owner_name, job_name, operation, job_mode, state from dba_datapump_jobs; no rows selected.
DATA PUMP EXPDP / IMPDP JOBS - ADDFILE , FILESIZE , PARALLEL
$ expdp system/ dumpfile=dpdir:hr_schema01.dmp logfile=dpdir:hr_schema.log .. ...
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
..
SYSTEM> select owner_name, job_name, operation, job_mode, state from dba_datapump_jobs;
OWNER_NAME   JOB_NAME               OPERATION   JOB_MODE   STATE
----------   --------------------   ---------   --------   ---------
SYSTEM       SYS_EXPORT_SCHEMA_01   EXPORT      SCHEMA     EXECUTING
EXPORT> parallel=3
EXPORT> add_file=dpdir:hr_schema02.dmp filesize=2000m
EXPORT> add_file=dpdir:hr_schema03.dmp filesize=2000m
EXPORT> add_file=dpdir:hr_schema04.dmp filesize=2000m
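As an alternative to adding files one by one, the same effect is commonly achieved up front with the %U wildcard; a sketch (schemas=hr is an assumption for illustration):
$ expdp system/ dumpfile=dpdir:hr_schema%u.dmp logfile=dpdir:hr_schema.log schemas=hr filesize=2000m parallel=3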
CLEANUP ORPHANED DATA PUMP JOBS
SYSTEM> select owner_name, job_name, operation, job_mode, state from dba_datapump_jobs;
OWNER_NAME   JOB_NAME               OPERATION   JOB_MODE   STATE
----------   --------------------   ---------   --------   -----------
SYSTEM       SYS_EXPORT_SCHEMA_01   EXPORT      SCHEMA     NOT RUNNING
SYSTEM       SYS_EXPORT_FULL_01     EXPORT      FULL       NOT RUNNING
SYSTEM       SYS_EXPORT_SCHEMA_02   EXPORT      SCHEMA     NOT RUNNING
.. ...
The jobs above are already stopped, are not running, and will not be started anymore, so drop their master tables. Orphaned Data Pump jobs do not have an impact on new Data Pump jobs.
SYSTEM> drop table <master_table_name>;
SYSTEM> drop table SYS_EXPORT_SCHEMA_01;
SYSTEM> drop table SYS_EXPORT_FULL_01;
SYSTEM> drop table SYS_EXPORT_SCHEMA_02;
.. ...
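If you are unsure which objects are the master tables, a query along these lines (a sketch joining DBA_OBJECTS to DBA_DATAPUMP_JOBS) can confirm them before dropping:
SYSTEM> select o.owner, o.object_name, o.object_type, o.status
        from dba_objects o, dba_datapump_jobs j
        where o.owner = j.owner_name and o.object_name = j.job_name;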
EXP/IMP PARAMETERS Vs DATA PUMP PARAMETERS
Some parameter names changed in Data Pump compared to traditional export/import.
EXPORT/IMPORT        DATAPUMP
-------------------  -------------------------------
FILE                 DUMPFILE
LOG                  LOGFILE/NOLOGFILE
GRANTS               EXCLUDE AND INCLUDE
INDEXES              EXCLUDE AND INCLUDE
CONSTRAINTS          EXCLUDE AND INCLUDE
FEEDBACK             STATUS
OWNER                SCHEMAS
ROWS=Y               CONTENT=ALL
ROWS=N               CONTENT=METADATA_ONLY
INDEXFILE            SQLFILE
CONSISTENT           FLASHBACK_SCN or FLASHBACK_TIME
IGNORE               TABLE_EXISTS_ACTION
FROMUSER, TOUSER     REMAP_SCHEMA
Some parameters are no longer required and were completely removed in Data Pump, e.g. recordlength, volsize, statistics, buffer, direct, commit, resumable, resumable_name, etc.
DATAPUMP LEGACY MODE
Whenever we use traditional EXPORT/IMPORT related parameters on the EXPDP/IMPDP command line, Data Pump enters legacy mode automatically.
$ expdp system/ file=scott.dmp log=scott.log buffer=100000 rows=n indexes=n constraints=n grants=n owner=scott
.. ...
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "constraints=FALSE" Location: Command Line, Replaced with: "exclude=constraint"
Legacy Mode Parameter: "direct=TRUE" Location: Command Line, ignored.
Legacy Mode Parameter: "file=scott.dmp" Location: Command Line, Replaced with: "dumpfile=scott.dmp"
Legacy Mode Parameter: "grants=FALSE" Location: Command Line, Replaced with: "exclude=grant"
Legacy Mode Parameter: "indexes=FALSE" Location: Command Line, Replaced with: "exclude=indexes"
Legacy Mode Parameter: "log=scott.log" Location: Command Line, Replaced with: "logfile=scott.log"
Legacy Mode Parameter: "owner=scott" Location: Command Line, Replaced with: "schemas=scott"
Legacy Mode Parameter: "rows=FALSE" Location: Command Line, Replaced with: "content=metadata_only"
Legacy Mode has set reuse_dumpfiles=true parameter.
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_02": system/******** dumpfile=scott.dmp logfile=scott.log content=metadata_only exclude=indexes exclude=grant exclude=constraint schemas=scott reuse_dumpfiles=true
.. ...
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_02 is:
  /u02/app/oracle/admin/crms/dpdump/scott.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_02" successfully completed at 20:52:54
We can notice that Data Pump converts the export-specific parameters to their Data Pump equivalents on the fly and then completes the export as normal; moreover, the dumpfile and logfile are created in the default Data Pump directory rather than where we ran the export command. Legacy mode has also been implemented for the IMPDP utility.
$ impdp system/ file=scott.dmp log=scott.log fromuser=scott touser=maya show=y
.. ...
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "file=scott.dmp" Location: Command Line, Replaced with: "dumpfile=scott.dmp"
Legacy Mode Parameter: "fromuser=scott" Location: Command Line, Replaced with: "remap_schema"
Legacy Mode Parameter: "log=scott.log" Location: Command Line, Replaced with: "logfile=scott.log"
Legacy Mode Parameter: "show=TRUE" Location: Command Line, Replaced with: "sqlfile=scott.sql"
Master table "SYSTEM"."SYS_SQL_FILE_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_SQL_FILE_FULL_01": system/******** dumpfile=scott.dmp logfile=scott.log remap_schema=scott:maya sqlfile=scott.sql
.. ...
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYSTEM"."SYS_SQL_FILE_FULL_01" successfully completed at 20:54:08
ADVANTAGES OF DATAPUMP
Data Pump supports parallel execution. Jobs can be monitored. Failed jobs can be restarted; you can stop and restart a job at any time. Job estimation can be done: ESTIMATE gives an idea of how much dump file space will be consumed, and Data Pump can also estimate job times. Dump files can be compressed using the COMPRESSION parameter. Dump files can be encrypted. Data Pump supports character set conversion. XML schemas and XMLType columns are supported in Data Pump. You can track time estimates for Data Pump jobs using V$SESSION_LONGOPS. A dumpfile is NOT required when importing through a network link. You can remap datafiles and/or tablespaces during import; Data Pump has remap capabilities (remap_schema, remap_tablespace, remap_table, etc.). Data Pump has its own performance tuning features, i.e. it needs no tuning parameters, unlike original export/import (direct=y, buffer). It offers fine-grained object selection via the EXCLUDE/INCLUDE parameters: EXCLUDE allows you to specify which objects (and their dependent objects) to keep out of the job, and INCLUDE allows you to specify which objects (and their dependent objects) to keep in the job. Traditional EXPORT/IMPORT was deprecated in Oracle 10g and is no longer supported as of the 11g version of Oracle. Data Pump is flexible and mainly designed for big jobs with lots of data; for high data volumes, Data Pump provides a 15 to 40% performance improvement over traditional EXPORT/IMPORT. Directory objects ensure data security and integrity, meaning only privileged users can access the files.
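For example, job progress can be tracked with a query like the following sketch against V$SESSION_LONGOPS (the OPNAME filter assumes the default SYS_EXPORT job naming):
SYS> select sid, serial#, opname, sofar, totalwork, round(sofar/totalwork*100,2) "PCT_DONE"
     from v$session_longops
     where opname like 'SYS_EXPORT%' and sofar <> totalwork;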
RELATED VIEWS
DBA_DIRECTORIES
SCHEMA_EXPORT_OBJECTS
DBA_DATAPUMP_JOBS
TABLE_EXPORT_OBJECTS
DBA_DATAPUMP_SESSIONS
USER_DATAPUMP_JOBS
DATABASE_EXPORT_OBJECTS
V$SESSION_LONGOPS
V$DATAPUMP_JOB
V$DATAPUMP_SESSION
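The *_EXPORT_OBJECTS views list the object paths that are valid with the EXCLUDE/INCLUDE parameters discussed below; a minimal sketch:
SQL> select object_path, comments from schema_export_objects where object_path like '%TABLE%';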
DATA_OPTIONS WITH TABLE_EXISTS_ACTION
In some cases the IMPDP utility will roll back an entire table import if any constraint error is encountered on that particular table. If you use SKIP_CONSTRAINT_ERRORS as the DATA_OPTIONS value, the import operation proceeds even when there are constraint errors for some records. (TABLE_EXISTS_ACTION itself can be SKIP (the default), APPEND, TRUNCATE, or REPLACE.)
MAYA> create table tab1(no number, string_val varchar2(15));
Table created.
MAYA> alter table tab1 add constraint TAB1_CONS1_NO primary key(no);
Table altered.
MAYA> insert into tab1 select rownum, 'DATABASE' from dual connect by level <= 5;
5 rows created.
MAYA> select * from tab1;
 NO   STRING_VAL
----  ----------
  1   DATABASE
  2   DATABASE
  3   DATABASE
  4   DATABASE
  5   DATABASE
$ expdp system/ dumpfile=dpdir:tab1.dmp tables=maya.tab1 nologfile=y
.. ...
MAYA> delete from tab1 where no=4 or no=5;
2 rows deleted.
IMPORTING THE DUMP WITH TABLE_EXISTS_ACTION=APPEND
$ impdp system/ dumpfile=dpdir:tab1.dmp table_exists_action=append
Import: Release 11.2.0.1.0 - Production on Wed Jul 29 12:22:02 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_04" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_04": system/******** dumpfile=dpdir:tab1.dmp table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "MAYA"."TAB1" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "MAYA"."TAB1" failed to load/unload and is being skipped due to error:
ORA-00001: unique constraint (MAYA.TAB1_CONS1_NO) violated
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Job "SYSTEM"."SYS_IMPORT_FULL_04" completed with 2 error(s) at 12:22:05
Why did the load process fail? IMPDP could not insert any rows from the export dump, so the table import rolled back. The table already has 3 records; when I tried to load the dumpfile (tab1.dmp), the load failed because these three rows violate the primary key of the table. When loading records into an existing table, if any row violates an active constraint, the load process fails. We can override this behavior by specifying DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS on the import command line. Now I am importing with DATA_OPTIONS=skip_constraint_errors, which will import the two missing rows.
$ impdp system/manager dumpfile=dpdir:tab1.dmp table_exists_action=append data_options=skip_constraint_errors
Import: Release 11.2.0.1.0 - Production on Wed Jul 29 12:25:26 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_04" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_04": system/******** dumpfile=dpdir:tab1.dmp table_exists_action=append data_options=skip_constraint_errors
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "MAYA"."TAB1" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "MAYA"."TAB1"                         5.492 KB   2 out of 5 rows
3 row(s) were rejected with the following error:
ORA-00001: unique constraint (MAYA.TAB1_CONS1_NO) violated
Rejected rows with the primary keys are:
 Rejected row #1: column NO: 1
 Rejected row #2: column NO: 2
 Rejected row #3: column NO: 3
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Job "SYSTEM"."SYS_IMPORT_FULL_04" completed with 1 error(s) at 12:25:29
MAYA> select count(*) from tab1;
  COUNT(*)
----------
         5
ENCRYPTION
$ expdp system/ dumpfile=dpdir:sham.dmp logfile=dpdir:sham.log schemas=sham encryption=all encryption_password=shambkp encryption_algorithm=AES256 encryption_mode=password
. .. ...
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_07" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_07 is:
  /u01/datapump/sham.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_07" successfully completed at 22:09:35
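Passing ENCRYPTION_PASSWORD on the command line exposes it in the OS process list; a sketch of the safer parfile approach (the parfile name is an assumption):
$ expdp system/ parfile=enc_exp.ini
$ cat enc_exp.ini
DUMPFILE=dpdir:sham.dmp
LOGFILE=dpdir:sham.log
SCHEMAS=SHAM
ENCRYPTION=ALL
ENCRYPTION_PASSWORD=shambkp
ENCRYPTION_ALGORITHM=AES256
ENCRYPTION_MODE=PASSWORD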
IMPORT THE DUMPFILE INTO THE SONY USER
$ impdp system/ dumpfile=dp:sham.dmp remap_schema=sham:sony
Import: Release 11.2.0.1.0 - Production on Wed Jul 29 22:12:25 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39002: invalid operation
ORA-39174: Encryption password must be supplied.
IMPORT THE DUMPFILE WITH THE ENCRYPTION PASSWORD
$ impdp system/ dumpfile=dp:sham.dmp logfile=dp:sham_imp.log remap_schema=sham:sony encryption_password=shambkp
Import: Release 11.2.0.1.0 - Production on Wed Jul 29 22:12:59 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_04" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_04": system/******** dumpfile=dp:sham.dmp ...
. .. ...
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/EVENT/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYSTEM"."SYS_IMPORT_FULL_04" successfully completed at 22:15:20
ESTIMATE
Specifies the method used to estimate how much disk space the export job will consume. You can see the estimated size in the logfile or in the expdp client session. The estimation covers table row data only; it does NOT include metadata. The ESTIMATE parameter takes one of two values: BLOCKS (default) and STATISTICS.
BLOCKS: The estimate is calculated by multiplying the number of database blocks used by the target objects by the appropriate block size.
STATISTICS: The estimate is calculated using statistics for each table. For an accurate result, the tables should have been analyzed recently.
POINTS TO NOTE
The ESTIMATE=BLOCKS method can be inaccurate when the table was created with a bigger initial extent (compare COMPRESS=Y in traditional EXP/IMP), or when many rows have been deleted from the table so each block holds less data. The ESTIMATE=STATISTICS method can be inaccurate when statistics were never calculated for the schema or were not collected recently. If all tables were analyzed recently, ESTIMATE=STATISTICS is usually closest to the actual dump file size.
ESTIMATE=BLOCKS
$ expdp system/ dumpfile=dpdir:hr_estimate.dmp schemas=hr estimate=blocks
Export: Release 11.2.0.1.0 - Production on Wed Aug 5 11:44:45 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_10": system/******** dumpfile=dpdir:hr_estimate.dmp schemas=hr estimate=blocks
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
.  estimated "HR"."TABB1"                          1.062 GB
.  estimated "HR"."TABB2"                          1.062 GB
.  estimated "HR"."TABB3"                          1.062 GB
Total estimation using BLOCKS method: 3.187 GB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "HR"."TABB1"                          899.6 MB   42255682 rows
. . exported "HR"."TABB2"                          899.6 MB   42255682 rows
. . exported "HR"."TABB3"                          899.6 MB   42255682 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_10" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_10 is:
  /u01/datapump/hr_estimate.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_10" successfully completed at 11:47:58
ESTIMATE=STATISTICS
SYS> EXECUTE dbms_stats.gather_schema_stats('HR');
PL/SQL procedure successfully completed.
$ expdp system/ dumpfile=dpdir:hr_estimate.dmp schemas=hr estimate=statistics
Export: Release 11.2.0.1.0 - Production on Wed Aug 5 12:05:27 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_10": system/******** dumpfile=dpdir:hr_estimate.dmp schemas=hr estimate=statistics reuse_dumpfiles=yes
Estimate in progress using STATISTICS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
.  estimated "HR"."TABB1"                          732.4 MB
.  estimated "HR"."TABB2"                          732.4 MB
.  estimated "HR"."TABB3"                          732.4 MB
Total estimation using STATISTICS method: 2.145 GB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "HR"."TABB1"                          899.6 MB   48000000 rows
. . exported "HR"."TABB2"                          899.6 MB   48000000 rows
. . exported "HR"."TABB3"                          899.6 MB   48000000 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_10" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_10 is:
  /u01/datapump/hr_estimate.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_10" successfully completed at 12:09:50
If a table has LOBs, the dumpfile size may vary, as ESTIMATE does not take LOB size into consideration. To get a more accurate size for compressed tables, use ESTIMATE=STATISTICS.
ESTIMATE_ONLY
With the ESTIMATE_ONLY parameter we can estimate the space in bytes without performing the export. No dump file is generated; a log file is generated if we specify one.
$ expdp system/ logfile=dpdir:estimate_maya.log schemas=maya estimate_only=yes
Export: Release 11.2.0.1.0 - Production on Sun Aug 2 12:52:06 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_07": system/******** logfile=dp:estimate_maya.log schemas=maya estimate_only=yes
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
.  estimated "MAYA"."TABB1"                        1.062 GB
.  estimated "MAYA"."TABB2"                        1.062 GB
Total estimation using BLOCKS method: 2.125 GB
Job "SYSTEM"."SYS_EXPORT_SCHEMA_07" successfully completed at 12:52:20
$ expdp system/manager logfile=dp:estimate_maya.log schemas=maya estimate_only=yes compression=all
Export: Release 11.2.0.1.0 - Production on Sun Aug 2 12:55:03 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_07": system/******** logfile=dp:estimate_maya.log schemas=maya estimate_only=yes compression=all
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
.  estimated "MAYA"."TABB1"                        1.062 GB
.  estimated "MAYA"."TABB2"                        1.062 GB
Total estimation using BLOCKS method: 2.125 GB
Job "SYSTEM"."SYS_EXPORT_SCHEMA_07" successfully completed at 12:55:17
Notice that the total is unchanged even with COMPRESSION=ALL: the BLOCKS estimate is based on block usage and does not account for compression of the dump file.
EXCLUDE AND INCLUDE
Metadata filtering is implemented through the EXCLUDE/INCLUDE parameters. Both apply to database objects such as tables, indexes, triggers, procedures, grants, functions, etc. The EXCLUDE and INCLUDE parameters are mutually exclusive: an EXPDP command can have either EXCLUDE or INCLUDE at one time, but you can specify multiple EXCLUDE (or INCLUDE) filters in a single EXPDP command. We can see some examples.
EX - 1 : EXCLUDE=TABLE - WHEN EXPORTING A SINGLE SCHEMA
$ expdp system/ dumpfile=dpdir:expmeta.dmp ... schemas=sham EXCLUDE=TABLE
EX - 2 : EXCLUDE=TRIGGER, FUNCTION, VIEW, SYNONYM, SEQUENCE, ... WHEN EXPORTING A SINGLE SCHEMA
$ expdp system/ dumpfile=dpdir:sham_exp.dmp logfile=dpdir:sham_exp.log schemas=sham EXCLUDE=INDEX,VIEW,SYNONYM,SEQUENCE,TRIGGER,FUNCTION,PACKAGE,PROCEDURE,PACKAGE_BODY
$ expdp system/ dumpfile=dpdir:hr_exp.dmp logfile=dpdir:hr_exp.log schemas=hr EXCLUDE=VIEW,SYNONYM,SEQUENCE,TRIGGER EXCLUDE=FUNCTION,PACKAGE,PROCEDURE,PACKAGE_BODY
EX - 3 : EXCLUDE SCHEMA - WHEN EXPORTING THE FULL DATABASE
$ expdp system/ dumpfile=dpdir:fulldb.dmp ... full=y exclude=schema:"in('SCOTT')"
$ expdp system/ dumpfile=dpdir:fulldb.dmp ... full=y exclude=schema:"in\('SCOTT'\)"
$ expdp system/ dumpfile=dpdir:fulldb.dmp ... full=y exclude=schema:"='SCOTT'"
LRM-00116: syntax error at 'SCHEMA:' following '='
Double quotes ("), single quotes (') and parentheses () are treated as special characters on UNIX systems. To use special characters on a UNIX command line, you have to escape them.
SYNTAX: EXCLUDE=SCHEMA:\"=\'SCHEMA_NAME\'\"
$ expdp system/ dumpfile=dpdir:fulldb.dmp ... full=y exclude=schema:\"=\'SCOTT\'\"
When using the EXCLUDE/INCLUDE parameters, it is a good idea to use a parameter file, which avoids the OS escaping issues. You can include all your parameters in the parfile.
$ expdp system/ parfile=param1.ini
$ cat param1.ini
# TO EXCLUDE A SINGLE SCHEMA
DUMPFILE=dpdir:fulldb%U.dmp
LOGFILE=dpdir:fulldb.log
FULL=Y
PARALLEL=2
EXCLUDE=SCHEMA:"='SCOTT'"
EX - 4 : EXCLUDE MORE THAN ONE SCHEMA - WHEN EXPORTING THE FULL DATABASE
If you want to exclude more than one schema, use the following syntax.
SYNTAX: EXCLUDE=SCHEMA:"IN\('SCHEMA_NAME'\,'SCHEMA_NAME'\)"
$ expdp system/ dumpfile=dpdir:fulldb.dmp ... exclude=schema:"IN\('SCOTT'\,'HR'\)"
$ expdp system/ parfile=param2.ini
$ cat param2.ini
# TO EXCLUDE MORE THAN ONE SCHEMA
DUMPFILE=dpdir:fulldb%U.dmp
LOGFILE=dpdir:fulldb.log
FULL=Y
PARALLEL=2
EXCLUDE=SCHEMA:"IN('SCOTT','SONI','HR')"
# EXCLUDE=SCHEMA:"IN\('SCOTT'\,'HR'\,'SONI'\)"
EX - 5 : EXCLUDE SCHEMAS USING THE LIKE OPERATOR
$ expdp system/ parfile=param3.ini
$ vi param3.ini
# EXCLUDE SCHEMAS WHOSE SCHEMA_NAME STARTS WITH SHA%
DUMPFILE=dpdir:fulldb%U.dmp
LOGFILE=dpdir:fulldb.log
FULL=Y
PARALLEL=3
EXCLUDE=SCHEMA:"LIKE 'SHA%'"
EX - 5 : EXCLUDE TABLE(S) - WHEN EXPORTING A SCHEMA
$ expdp sham/ dumpfile=dpdir:sham.dmp ... schemas=sham exclude=table:"in\('TAB1'\)"
$ expdp dumpfile=dpdir:sham1.dmp schemas=sham ... exclude=table:"in\('TAB1'\,'TAB2'\)"
$ expdp dumpfile=dpdir:sham2.dmp schemas=sham ... exclude=table:\"=\'DEPT\'\"
$ expdp dumpfile=dpdir:sham3.dmp schemas=sham ... exclude=table:">'E'"
$ expdp sham/ parfile=param4.ini
$ vi param4.ini
# EXCLUDE TABLES WHOSE NAMES START WITH EMP%
DUMPFILE=dp:sham4.dmp
LOGFILE=dp:sham4.log
SCHEMAS=SHAM
EXCLUDE=TABLE:"LIKE 'EMP%'"
$ expdp sham/ parfile=param5.ini
$ vi param5.ini
# EXCLUDE MULTIPLE TABLES
DUMPFILE=dpdir:sham_tab.dmp
LOGFILE=dpdir:sham_tab.log
EXCLUDE=TABLE:"IN('EMP','DEPT','PAYROLL')"
EX - 6 : EXCLUDE INDEX(ES) - WHEN EXPORTING AN INDIVIDUAL TABLE
$ expdp dumpfile=dpdir:schema.dmp ... tables=sham.tab1 exclude=index:\"=\'INDX1\'\"
$ expdp dumpfile=dpdir:schema.dmp ... tables=sham.tab2 exclude=index:"in\('IND1'\,'IND2'\)"
$ expdp system/ parfile=param5.ini
$ vi param5.ini
dumpfile=dp:sham_indx.dmp
logfile=dp:sham_indx.log
tables=sham.tab3
EXCLUDE=INDEX:"='IN1'"            # For a single index
EXCLUDE=INDEX:"IN('IN1','IN2')"   # For more than one index
Exclude all indexes on a table using the LIKE operator.
$ expdp system/ parfile=param6.ini
$ vi param6.ini
dumpfile=dp:sham_indx1.dmp
logfile=dp:sham_indx1.log
tables=sham.tab3
EXCLUDE=INDEX:"LIKE 'IN%'"
EXCLUDE ALL INDEXES – WHEN EXPORTING A SCHEMA
In this example, EXCLUDE=INDEX excludes all indexes in the hr schema.
$ expdp system/ dumpfile=dpdir:hr.dmp ... schemas=hr exclude=INDEX
By default, Oracle attempts to create a UNIQUE INDEX to police a PK/UK constraint, i.e. Oracle always creates a UNIQUE INDEX when we create a PRIMARY KEY on a table. So you cannot explicitly exclude the indexes associated with primary key and unique key constraints.
EX – 7 : EXCLUDE CONSTRAINTS – WHEN EXPORTING A SCHEMA
NOT NULL constraints cannot be excluded. EXCLUDE=REF_CONSTRAINT excludes REFERENTIAL INTEGRITY (FOREIGN KEY) constraints. EXCLUDE=CONSTRAINT excludes all non-referential constraints except NOT NULL constraints.
$ expdp hr/ dumpfile=dpdir:hr_export.dmp ... schemas=hr exclude=constraint
$ expdp hr/ dumpfile=dpdir:hr_export.dmp ... schemas=hr exclude=ref_constraint
$ expdp hr/ dumpfile=dpdir:hr_expdat.dmp ... schemas=hr exclude=constraint,index
Before the EXPORT, check the constraint details in the source (from_user) account. After IMPORTing the dumpfile, check the constraint details in the account where you imported (to_user).
SQL> select constraint_name, constraint_type, table_name from user_constraints;
EX – 8 : EXCLUDE GRANTS – WHEN EXPORTING A TABLE
$ expdp system/ dumpfile=dpdir:sham_exp.dmp ... schemas=sham exclude=grant
EX – 8 : EXCLUDE TRIGGER(S)
SHAM> select trigger_name, trigger_type, table_name, status from user_triggers;
TRIGGER_NAME   TRIGGER_TYPE      TABLE_NAME   STATUS
------------   ---------------   ----------   -------
TRI_EMP        AFTER STATEMENT   EMP          ENABLED
USRLOG         AFTER EVENT                    ENABLED
$ expdp system/ dumpfile=dpdir:sham_exp.dmp ... schemas=sham exclude=trigger
Exclude both triggers at the command line:
$ expdp system dumpfile=dpdir:sham_exp.dmp logfile=dpdir:sham_exp.log schemas=sham EXCLUDE=TRIGGER:"IN\('TRI_EMP'\,'USRLOG'\)"
Unload the EMP table but not its associated trigger TRI_EMP:
$ expdp system dumpfile=dpdir:sham_emp.dmp logfile=dpdir:sham_exp.log tables=sham.emp EXCLUDE=TRIGGER:"IN\('TRI_EMP'\)"
EX - 9 : EXCLUDE SCHEMA OBJECTS – WHEN EXPORTING A SCHEMA
$ expdp system parfile=param7.ini
$ vi param7.ini
# EXCLUDE SCHEMA OBJECTS
DUMPFILE=dpdir:sham_exp.dmp
LOGFILE=dpdir:sham_exp.log
SCHEMAS=SHAM
EXCLUDE=TRIGGER:"IN('TRI_EMP','USRLOG')", INDEX:"='EMP_DPID_IN1'"
EXCLUDE=INDEX:"IN('INDX1','INDX2')",GRANT,REF_CONSTRAINT
$ expdp system/ parfile=param8.ini reuse_dumpfiles=yes
$ vi param8.ini
# EXCLUDE SCHEMA OBJECTS
DUMPFILE=dpdir:schema_exp.dmp
LOGFILE=dpdir:schema_exp.log
SCHEMAS=SHAM
EXCLUDE=PROCEDURE:"LIKE 'PROC%'"
EXCLUDE=SEQUENCE, TABLE:"IN('TAB1','TAB2')"
EXCLUDE=INDEX:"LIKE 'IN%'"
EXCLUDE=TABLE:"='TAB3'",VIEW,PACKAGE,FUNCTION
EX – 8 : EXCLUDE=USER
Specifying EXCLUDE=USER excludes only the definitions of users, not the objects contained within the users' schemas. If you exclude a user by using a statement such as EXCLUDE=USER:"='HR'", then only the CREATE USER hr DDL statement is excluded.
$ expdp system/ parfile=param9.ini
$ vi param9.ini
# EXCLUDE THE CREATE USER HR DDL STATEMENT
DUMPFILE=dpdir:hr.dmp
LOGFILE=dp:hr.log
SCHEMAS=HR
EXCLUDE=USER:"='HR'"
EX – 9 : EXCLUDE MULTIPLE SCHEMAS USING THE LIKE & IN OPERATORS
$ expdp system/ parfile=param10.ini
$ vi param10.ini
# EXCLUDE SCHEMAS USING THE LIKE OPERATOR
DUMPFILE=dp:fulldb.dmp
LOGFILE=dp:fulldb.log
FULL=Y
PARALLEL=2
EXCLUDE=SCHEMA:"LIKE 'SYS%'"
$ vi param11.ini
# EXCLUDE SCHEMAS USING THE IN OPERATOR
DUMPFILE=dp:full%u.dmp
LOGFILE=dp:full.log
FULL=Y
PARALLEL=2
COMPRESSION=ALL
EXCLUDE=SCHEMA:"IN ('OUTLN','SYSTEM','SYSMAN','FLOWS_FILES','APEX_030200','APEX_PUBLIC_USER','ANONYMOUS')"
EX – 10 : EXCLUDE SPECIFIC SCHEMAS – WHEN IMPORTING
Source database: crms. Target database: hrms.
EXPORT SCHEMAS  - HR, MAYA, ROSE, SCOTT, SHAM from the source database.
EXCLUDE SCHEMAS - MAYA, SHAM in the target database.
1) Find the tablespaces associated with the schemas in the source database.
2) If those tablespaces do not exist in the target database, recreate them.
3) Check user profiles and roles in the source database.
4) Recreate those profiles and roles as per the source database.
5) Start your import process.
GATHER ALL SCHEMA INFO IN THE SOURCE DATABASE
SQL> select * from dba_ts_quotas where username='USER_NAME';
SQL> select * from dba_ts_quotas where username='SCOTT';
SQL> select username, profile from dba_users where username='USER_NAME';
SQL> select username, profile from dba_users where username='SCOTT';
SQL> select * from dba_profiles where profile='PROFILE_NAME';
SQL> select * from dba_profiles where profile='P1';
SQL> select * from dba_role_privs where grantee='USER_NAME';    # SEE THE GRANTED_ROLE COLUMN
SQL> select * from dba_role_privs where grantee='SCOTT';
SYS> select * from role_sys_privs where role='ROLE_NAME';
SYS> select * from role_sys_privs where role='R1';
After the export, create a sqlfile from the dumpfile so you can verify all the SQL DDL statements. Start the export, then import the dumpfile with the EXCLUDE option.
$ expdp system/ dumpfile=dpdir:schema.dmp ... schemas=sham,rose,maya,scott,hr
$ scp schema.dmp [email protected]:/u03/datapump
[email protected]'s password:******
$ impdp system/ dumpfile=dp:schema.dmp ... EXCLUDE=SCHEMA:"IN\('JHIL'\)"
$ impdp system/ dumpfile=dp:schema.dmp ... EXCLUDE=SCHEMA:"IN\('MAYA'\,'SHAM'\)"
I am using the dumpfile originally taken from the CRMS database. For the following examples, EXCLUDE with the IMPDP utility is tested in the HRMS database.
EX – 11 : EXCLUDE SCHEMA OBJECTS – WHEN IMPORTING A SCHEMA
$ expdp system/ dumpfile=dpdir:schema.dmp ... schemas=sham,rose,maya,scott,hr
$ impdp system/ dumpfile=dp:schema.dmp ... remap_schema=sham:sony EXCLUDE=INDEX,TRIGGER,FUNCTION EXCLUDE=SCHEMA:"IN\('MAYA'\,'SCOTT'\,'HR'\,'ROSE'\)"
EX – 12 : EXCLUDE TABLE – WHEN IMPORTING A SCHEMA
$ expdp system/ dumpfile=dpdir:schema.dmp ... schemas=sham,rose,maya,scott,hr
$ impdp system/ dumpfile=dp:schema.dmp ... remap_schema=sham:sony EXCLUDE=TABLE EXCLUDE=SCHEMA:"IN\('MAYA'\,'SCOTT'\,'HR'\,'ROSE'\)"
...
ORA-39082: Object type ALTER_PACKAGE_SPEC:"SONY"."PKGSAL" created with compilation warnings
ORA-39082: Object type PACKAGE_BODY:"SONY"."PKGSAL" created with compilation warnings
...
When you load PL/SQL code without its dependent tables, you will encounter compilation warnings like these.
EX – 13 : EXCLUDE INDEX , GRANT – WHEN IMPORTING A SCHEMA
$ expdp system/ dumpfile=dpdir:schema.dmp ... schemas=sham,rose,maya,scott,hr
$ impdp system/ dumpfile=dp:schema.dmp ... remap_schema=sham:sony EXCLUDE=INDEX,GRANT EXCLUDE=SCHEMA:"IN\('MAYA'\,'SCOTT'\,'HR'\,'ROSE'\)"
We cannot explicitly exclude the indexes associated with primary key and unique key constraints. By default, Oracle attempts to create a UNIQUE INDEX to police a PK/UK constraint, i.e. Oracle always creates a UNIQUE INDEX when we create a PRIMARY KEY on a table.
EX – 14 : EXCLUDE REF_CONSTRAINT – WHEN IMPORTING A SCHEMA
$ expdp system/ dumpfile=dpdir:schema.dmp ... schemas=sham,rose,maya,scott,hr
$ impdp system/ dumpfile=dp:schema.dmp ... remap_schema=sham:sony EXCLUDE=REF_CONSTRAINT EXCLUDE=SCHEMA:"IN\('MAYA'\,'SCOTT'\,'HR'\,'ROSE'\)"
After the import, you can verify using the following SQL statement.
SQL> select constraint_name, constraint_type from user_constraints where constraint_type='R';
EXCLUDE=CONSTRAINT
$ expdp system/ dumpfile=dpdir:schema.dmp ... schemas=sham,rose,maya,scott,hr
$ impdp system/ dumpfile=dp:schema.dmp ... remap_schema=sham:sony EXCLUDE=CONSTRAINT EXCLUDE=SCHEMA:"IN\('MAYA'\,'SCOTT'\,'HR'\,'ROSE'\)"
SQL> select constraint_name, constraint_type from user_constraints where constraint_type='C';
NOT NULL constraints cannot be explicitly excluded; you can verify this with the SQL command above.
EX – 14 : EXCLUDE TRIGGER(S) – WHEN IMPORTING A SCHEMA
$ expdp system/ dumpfile=dpdir:schema.dmp ... schemas=sham,rose,maya,scott,hr
$ impdp system/ dumpfile=dp:schema.dmp ... remap_schema=sham:sony EXCLUDE=TRIGGER:"IN\('TRI_EMP'\,'USRLOG'\)" EXCLUDE=SCHEMA:"IN\('MAYA'\,'SCOTT'\,'HR'\,'ROSE'\)"
$ impdp parfile=param12.ini
$ vi param12.ini
# EXCLUDE TRIGGER(S) USING THE LIKE & IN OPERATORS (no escapes needed in a parfile)
USERID=SYSTEM/MANAGER
DUMPFILE=dp:schema.dmp
LOGFILE=dp:schema.log
REMAP_SCHEMA=SHAM:SONY
EXCLUDE=TRIGGER:"LIKE 'TRI%'"     # Using the LIKE operator
EXCLUDE=TRIGGER:"IN('USRLOG')"    # Using the IN operator
EXCLUDE=SCHEMA:"IN('MAYA','SCOTT','HR','ROSE')"
$ vi param13.ini
# EXCLUDE TRIGGER(S)
USERID=SYSTEM/MANAGER
DUMPFILE=dp:schema.dmp
LOGFILE=dp:schema.log
REMAP_SCHEMA=SHAM:SONY
EXCLUDE=TRIGGER:"IN('TRI_EMP','USRLOG','TRI_SAL')"
EXCLUDE=SCHEMA:"IN('MAYA','SCOTT','HR','ROSE')"
EX – 14 : EXCLUDE INDEX(ES) USING THE LIKE OPERATOR – WHEN IMPORTING A SCHEMA
$ expdp system/ dumpfile=dpdir:schema.dmp ... schemas=sham,rose,maya,scott,hr
$ impdp system/ dumpfile=dp:schema.dmp ... remap_schema=sham:sony EXCLUDE=INDEX:\"LIKE\'INDX\%\'\" EXCLUDE=SCHEMA:"IN\('MAYA'\,'SCOTT'\,'HR'\,'ROSE'\)"
EX – 14 : EXCLUDE A SPECIFIC TABLE – WHEN IMPORTING A SCHEMA
$ expdp system/ dumpfile=dpdir:schema.dmp ... schemas=sham,rose,maya,scott,hr
$ impdp system/ dumpfile=dp:schema.dmp ... remap_schema=sham:sony EXCLUDE=TABLE:"IN\('TAB1'\)" EXCLUDE=SCHEMA:"IN\('MAYA'\,'SCOTT'\,'HR'\,'ROSE'\)"
$ impdp system/ dumpfile=dp:schema.dmp ... remap_schema=sham:sony EXCLUDE=TABLE:\"=\'TAB1\'\" EXCLUDE=SCHEMA:"IN\('MAYA'\,'SCOTT'\,'HR'\,'ROSE'\)"
EX – 15 : EXCLUDE MORE THAN ONE TABLE – WHEN IMPORTING A SCHEMA
$ impdp system/ dumpfile=dp:schema.dmp ... remap_schema=sham:sony EXCLUDE=TABLE:"IN\('TAB1'\,'TAB2'\)" EXCLUDE=SCHEMA:"IN\('MAYA'\,'SCOTT'\,'HR'\,'ROSE'\)"
EX – 15 : EXCLUDE SCOTT OBJECTS – WHEN IMPORTING A SCHEMA
$ expdp scott/password dumpfile=scott.dmp directory=data_pump_dir logfile=scott_exp.log
$ impdp scott/password dumpfile=scott.dmp directory=data_pump_dir logfile=scott_imp.log EXCLUDE=TABLE:\"IN\('BONUS'\,'CUSTOMERS'\)\"
EX - 16 : EXCLUDE STATISTICS – WHEN EXPORTING AND IMPORTING
$ expdp hr/hr dumpfile=hr.dmp directory=data_pump_dir logfile=hr.log EXCLUDE=STATISTICS
$ impdp hr/hr dumpfile=hr.dmp directory=data_pump_dir remap_schema=hr:hr1 EXCLUDE=STATISTICS
.. ...
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39002: invalid operation
ORA-39168: Object path STATISTICS was not found.
I had already used EXCLUDE=STATISTICS with EXPDP; when I set EXCLUDE=STATISTICS again with IMPDP, Oracle throws the error ORA-39168: Object path STATISTICS was not found.
EX - 16 : WHY IMPDP DOES NOT SKIP INDEX STATISTICS COLLECTION
Does IMPDP skip statistics collection with EXCLUDE=STATISTICS? We would expect Data Pump IMPDP to exclude both table and index statistics; in fact, when you import with the EXCLUDE=STATISTICS option, IMPDP skips statistics only for tables, not for indexes. Why so? Let us check.
SOURCE SCHEMA : USR1
TARGET SCHEMA : USR2
USR1> select * from cat;
TABLE_NAME                      TABLE_TYPE
------------------------------  ----------
EMP                             TABLE
SOURCE SCHEMA OBJECTS
USR1> select object_name, object_type, status from user_objects;
OBJECT_NAME       OBJECT_TYPE   STATUS
----------------  ------------  ------
EMP               TABLE         VALID
IN2_EMP_LEVL      INDEX         VALID
IN1_EMP_DID       INDEX         VALID
EMP_EMPID_C1_PK   INDEX         VALID
SOURCE SCHEMA OBJECT STATISTICS
USR1> select table_name, num_rows, last_analyzed, stale_stats from user_tab_statistics;
TABLE_NAME     NUM_ROWS  LAST_ANAL  STA
------------  ---------  ---------  ---
EMP               10000  21-AUG-15  NO
USR1> select index_name, distinct_keys, num_rows, last_analyzed, stale_stats from user_ind_statistics;
INDEX_NAME       DISTINCT_KEYS   NUM_ROWS  LAST_ANAL  STA
---------------  -------------  ---------  ---------  ---
IN2_EMP_LEVL                 4       9999  21-AUG-15  NO
IN1_EMP_DID                 12      10000  21-AUG-15  NO
EMP_EMPID_C1_PK          10000      10000  21-AUG-15  NO
START EXPORT OF THE USR1 SCHEMA
$ expdp system/manager dumpfile=dp:schema.dmp logfile=schema.log schemas=usr1
Export: Release 11.2.0.1.0 - Production on Fri Aug 21 20:26:14 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** dumpfile=dp:schema.dmp logfile=dp:schema.log schemas=usr1
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 640 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "USR1"."EMP"                          496.9 KB   10000 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  /u03/datapump/schema.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 20:26:35
IMPORT THE DUMP WITH EXCLUDE=STATISTICS
$ impdp system/manager dumpfile=dp:schema.dmp ... remap_schema=usr1:usr2 exclude=statistics
Import: Release 11.2.0.1.0 - Production on Fri Aug 21 20:30:43 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_01": system/******** dumpfile=dp:schema.dmp logfile=dp:schema.log schemas=usr1 remap_schema=usr1:usr2 exclude=statistics
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "USR2"."EMP"                          496.9 KB   10000 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully completed at 20:30:46
Good: I do not see any lines related to statistics collection; the IMPDP client shows that neither table statistics nor index statistics were processed. So I would hope no statistics were collected for the schema objects. Can we check at the database level? Let us check it out.
CHECK OBJECT DETAILS IN THE TARGET SCHEMA
USR2> select * from cat;
TABLE_NAME                      TABLE_TYPE
------------------------------  ----------
EMP                             TABLE
USR2> select object_name, object_type, status from user_objects;
OBJECT_NAME       OBJECT_TYPE   STATUS
----------------  ------------  ------
IN2_EMP_LEVL      INDEX         VALID
IN1_EMP_DID       INDEX         VALID
EMP_EMPID_C1_PK   INDEX         VALID
EMP               TABLE         VALID
CHECK STATISTICS DETAILS FOR THE TABLE AND INDEXES
USR2> select table_name, num_rows, last_analyzed, stale_stats from user_tab_statistics;
TABLE_NAME     NUM_ROWS  LAST_ANAL  STA
------------  ---------  ---------  ---
EMP
USR2> select index_name, distinct_keys, num_rows, last_analyzed, stale_stats from user_ind_statistics;
INDEX_NAME       DISTINCT_KEYS   NUM_ROWS  LAST_ANAL
---------------  -------------  ---------  ---------
IN2_EMP_LEVL                 4       9999  21-AUG-15
IN1_EMP_DID                 12      10000  21-AUG-15
EMP_EMPID_C1_PK          10000      10000  21-AUG-15
At the database level, we can see that index statistics were collected anyway. By default, Oracle automates index-level statistics collection during index creation/rebuild. Statistics collection for indexes is controlled by the hidden parameter _optimizer_compute_index_stats.
SYS> show parameter _optimizer_compute_index_stats;
NAME                              TYPE     VALUE
--------------------------------  -------  -----
_optimizer_compute_index_stats    boolean  TRUE
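Since EXCLUDE=STATISTICS leaves the imported tables unanalyzed, you can regather statistics after the import; a minimal sketch (schema name as in this example):
SYS> exec dbms_stats.gather_schema_stats('USR2');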
To speed up an import job, you can use EXCLUDE=STATISTICS with IMPDP; but note that when you perform the import, EXCLUDE=STATISTICS excludes only table statistics, not index-level statistics.
EX – 16 : INCLUDE SPECIFIC TABLE(S) WHEN EXPORTING A SCHEMA
$ expdp system/ directory=data_pump_dir dumpfile=rose.dmp logfile=rose.log SCHEMAS=ROSE INCLUDE=TABLE:"IN('PSPRUFDEL')"
$ expdp system/ directory=data_pump_dir dumpfile=rose.dmp logfile=rose.log SCHEMAS=ROSE INCLUDE=TABLE:\"=\'PSPRUFDEL\'\"
$ expdp system/ dumpfile=dp:rose.dmp logfile=dp:rose.log SCHEMAS=ROSE INCLUDE=TABLE:"IN\('PSPRUFDEL'\,'PSQUEUEDEFN'\)"
$ expdp system/ directory=data_pump_dir parfile=param14.ini
$ vi param14.ini
DUMPFILE=SAMPLE.DMP
LOGFILE=SAMPLE.LOG
SCHEMAS=ROSE
INCLUDE=TABLE:"='DEPT'"
# INCLUDE=TABLE:"IN('EMP','PAYROLL')"
# INCLUDE=TABLE:"LIKE 'PST_HR%'"
EX – 16 : EXPORT A LIST OF TABLES USING THE LIKE OPERATOR WITH A QUERY CLAUSE
This command extracts all tables whose table_name starts with PST from the rose schema. I have used a subquery clause with the LIKE operator.
$ expdp system/ directory=data_pump_dir dumpfile=rose.dmp logfile=rose.log schemas=rose include=TABLE:\"IN\(select table_name FROM dba_tables WHERE owner=\'ROSE\' and table_name like \'PST\%\'\)\"
.. ...
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
.. ...
Estimate in progress using BLOCKS method...
. . exported "ROSE"."PST_GP_RTO_TRGR_VW"           79.87 MB   286364 rows
. . exported "ROSE"."PST_GP_ABSSS_V_XREF"          31.52 MB   227408 rows
. . exported "ROSE"."PST_GP_ELIG_GRPGP"            19.66 MB   112528 rows
. . exported "ROSE"."PST_HR_SSTEXT_TEXT"           19.69 MB   102096 rows
. . exported "ROSE"."PST_HR_SSTEXT_EFFDT"          18.99 MB   396468 rows
. . exported "ROSE"."PST_TL_RPTD_TIME"             11.92 MB   248736 rows
. . exported "ROSE"."PST_GP_PAYEE_DATA"            10.37 MB   216528 rows
. . exported "ROSE"."PST_GP_OPR_RULE_PRF"          7.417 MB   154568 rows
. . exported "ROSE"."PST_GP_ELIG_GRP"              4.223 MB    87992 rows
. . exported "ROSE"."PST_GP_ABS_SS"                564.7 KB    11332 rows
. . exported "ROSE"."PST_GP_ABS_TKTPE_VW"          954.7 KB    19287 rows
. . exported "ROSE"."PST_GP_ABS_TYPE"              899.0 KB    18065 rows
. . exported "ROSE"."PST_PTAFAW_STEP_INST"         496.9 KB     8468 rows
. . exported "ROSE"."PST_PTAFAW_STEP_VW"           682.2 KB    15225 rows
. . exported "ROSE"."PST_PTAFAW_USER_INST"         922.4 KB    19899 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  /u02/app/oracle/admin/crms/dpdump/rose.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 14:28:44
EX – 16 : EXPORT A LIST OF TABLES USING THE > OPERATOR
$ expdp system/ dumpfile=dp:sham.dmp ... schemas=sham INCLUDE=TABLE:">'E'"
EXAMPLE FOR INCLUDE=TABLE - WHEN EXPORTING A SCHEMA
As we know, metadata filtering is implemented through the EXCLUDE/INCLUDE parameters. If a filter specifies that a table is to be included, the dependent objects of the identified table are also processed by the Data Pump utility. You cannot use the EXCLUDE and INCLUDE parameters at the same time. Once you specify INCLUDE=TABLE on an export job, all dependent objects of the tables (indexes, constraints, grants, triggers) are processed by the Data Pump utility, i.e. Data Pump includes the objects and their associated dependent objects in the export/import job.
CHECK SOURCE SCHEMA (SHAM) OBJECTS
SHAM> select count(*) from cat;
  COUNT(*)
----------
        20
SHAM> select object_name, object_type from user_objects where object_type='DATABASE LINK';
OBJECT_NAME       OBJECT_TYPE
----------------  -------------
TESTLINK          DATABASE LINK
SHAM> select object_name, object_type from user_objects where object_type='TRIGGER';
OBJECT_NAME       OBJECT_TYPE
----------------  -------------
TRI_EMP           TRIGGER
USRLOG            TRIGGER
SHAM> select count(*) from user_indexes;
  COUNT(*)
----------
        42
SCHEMA EXPORT - INCLUDE=TABLE
$ expdp system/ dumpfile=dp:sham.dmp logfile=dp:sham.log schemas=sham INCLUDE=TABLE
$ impdp system/ dumpfile=dp:sham.dmp logfile=dp:sham.log remap_schema=sham:sony
CHECK TARGET SCHEMA (SONY) OBJECTS
The database link and the trigger USRLOG are missing, because they are not dependent on any of the sham schema tables; the Data Pump utility therefore did not process USRLOG and TESTLINK.
SONY> select object_name, object_type from user_objects where object_type='TRIGGER';
OBJECT_NAME       OBJECT_TYPE
----------------  -------------
TRI_EMP           TRIGGER
SONY> select object_name, object_type from user_objects where object_type='DATABASE LINK';
no rows selected
SONY> select count(*) from user_indexes;
  COUNT(*)
----------
        42
SONY> select count(*) from cat;
  COUNT(*)
----------
        16
In the sony schema 4 objects are missing. What are they? We can trace those objects in the sham schema.
SHAM> select * from cat where table_type!='TABLE';
TABLE_NAME                      TABLE_TYPE
------------------------------  ----------
PAYROLL_INFO                    SYNONYM
SEQ_EMP_INFO                    SEQUENCE
SEQ_EMP_INFO1                   SEQUENCE
VEMP_INFO                       VIEW
FLASHBACK_SCN & FLASHBACK_TIME
The traditional export utility (EXP) had the CONSISTENT=Y parameter, which has been replaced by FLASHBACK_SCN and FLASHBACK_TIME; the two are mutually exclusive. These are important Data Pump features in 11g. The EXPDP utility can be used to take consistent exports (of SCHEMA(S)/TABLE(S)/FULL). Using the FLASHBACK_SCN parameter you can perform the export operation as of that SCN.
FLASHBACK_SCN  : You need to pass an SCN as the argument.
FLASHBACK_TIME : You need to pass a timestamp value.
These parameters use the flashback query capability of the Oracle Database and are NOT supported for Flashback Database, Flashback Drop, or Flashback Data Archive.
FLASHBACK_SCN -
SYS> select current_scn from v$database;
CURRENT_SCN
-----------
   36758399
SYS> create restore point good_data;
Restore point created.
SYS> select current_scn from v$database;
CURRENT_SCN
-----------
   36758405
SYS> select scn, name from v$restore_point;
      SCN   NAME
---------   ---------
 36758400   GOOD_DATA
The following queries may prove useful for converting between timestamps and SCNs.
SYS> SELECT SCN_TO_TIMESTAMP(SCN) FROM dual;
SYS> SELECT TIMESTAMP_TO_SCN(SYSTIMESTAMP) FROM dual;
SYS> SELECT SCN_TO_TIMESTAMP(36758400) FROM dual;
SCN_TO_TIMESTAMP(36758400)
---------------------------------
24-AUG-15 01.40.05.000000000 AM
I have created a RESTORE POINT so that we can easily find the exact timestamp. Now I am going to update the salary column of the payroll table.
SHAM> select sum(salary) from payroll;
SUM(SALARY)
-----------
    4317769
SYS> update sham.payroll set salary=salary*1.2;
68 rows updated.
SYS> commit;
Commit complete.
SHAM> select sum(salary) from payroll;
SUM(SALARY)
-----------
 5224500.49
The FLASHBACK_SCN parameter states the point in time from which the table data is to be retrieved.
$ expdp system/manager dumpfile=dpdir:sham_payroll.dmp logfile=sham_payroll.log TABLES=SHAM.PAYROLL FLASHBACK_SCN=36758400
.. ...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 160 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SHAM"."PAYROLL"                      15 KB      68 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  /u01/datapump/sham_payroll.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 09:41:09
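The same consistent export can be taken with FLASHBACK_TIME instead of FLASHBACK_SCN; a sketch using the timestamp derived from the restore point above:
$ expdp system/manager dumpfile=dpdir:sham_payroll.dmp logfile=sham_payroll.log TABLES=SHAM.PAYROLL FLASHBACK_TIME=\"TO_TIMESTAMP('24-08-2015 01:40:05', 'DD-MM-YYYY HH24:MI:SS')\"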
Once the backup is done, drop the PAYROLL table. When the import runs, it rebuilds the table with the data as it was at the export SCN. Once the import job finishes, querying the PAYROLL table shows the data 'as it was' before the update statement.

$ impdp system/manager dumpfile=dpdir:sham_payroll.dmp logfile=payroll.log
..
...

SHAM> select sum(salary) from payroll;

SUM(SALARY)
-----------
    4317769

EXAMPLE - 2 : FLASHBACK_SCN
SYS> SELECT CURRENT_SCN, DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM V$DATABASE;

CURRENT_SCN GET_SYSTEM_CHANGE_NUMBER
----------- ------------------------
   36786969                 36786969
MAYA> create table emp ( emp_id number NOT NULL, emp_name varchar2(15), gender varchar2(6), dept_id number, emp_desg varchar2(15), isactive varchar2(6), emp_hire_date date, emp_level varchar2(2));

Table created.

SYS> insert into maya.emp values(1,'MAYA','FEMALE',10,'SW_ENGG','ACTIVE', to_date('22-MAR-2000','DD-MON-YYYY'),'B');

1 row created.

SYS> commit;

Commit complete.

SYS> select CURRENT_SCN, dbms_flashback.get_system_change_number from v$database;

CURRENT_SCN GET_SYSTEM_CHANGE_NUMBER
----------- ------------------------
   36787635                 36787635
SYS> insert into maya.emp values(2,'SONY','MALE',10,'SW_ENGG','ACTIVE', to_date('20-DEC-2000','DD-MON-YYYY'),'B');

1 row created.

SYS> commit;

Commit complete.

SYS> select CURRENT_SCN, dbms_flashback.get_system_change_number from v$database;

CURRENT_SCN GET_SYSTEM_CHANGE_NUMBER
----------- ------------------------
   36787687                 36787687
SYS> select * from maya.emp;

EMP_ID EMP_NAME GENDER DEPT_ID EMP_DESG ISACTIVE EMP_HIRE_ EMP_LEVEL
------ -------- ------ ------- -------- -------- --------- ---------
     1 MAYA     FEMALE      10 SW_ENGG  ACTIVE   22-MAR-00 B
     2 SONY     MALE        10 SW_ENGG  ACTIVE   20-DEC-00 B
We can export the maya.emp table as of different SCNs.

EX - 1 : FLASHBACK_SCN=36786969
$ expdp system/manager dumpfile=dpdir:maya_emp.dmp tables=maya.emp logfile=maya.emp.log flashback_scn=36786969

Export: Release 11.2.0.1.0 - Production on Mon Aug 24 13:24:24 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** dumpfile=dpdir:maya_emp.dmp tables=maya.emp logfile=maya.emp.log flashback_scn=36786969
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-31693: Table data object "MAYA"."EMP" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-01466: unable to read data - table definition has changed
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  /u03/datapump/maya_emp.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 13:24:43

The export fails with ORA-01466 because SCN 36786969 predates the creation of the MAYA.EMP table.
EX - 2 : FLASHBACK_SCN=36787635

$ expdp system/manager dumpfile=dpdir:maya_emp.dmp ... tables=maya.emp flashback_scn=36787635

Export: Release 11.2.0.1.0 - Production on Mon Aug 24 13:35:06 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** dumpfile=dpdir:maya_emp.dmp tables=maya.emp logfile=maya.emp.log flashback_scn=36787635
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "MAYA"."EMP"    7.851 KB    1 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  /u03/datapump/maya_emp.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 13:35:23

Only one row is exported because at SCN 36787635 only the first insert had been committed.

EX - 3 : FLASHBACK_SCN=36787687
$ expdp dumpfile=dpdir:maya_emp.dmp tables=maya.emp ... flashback_scn=36787687

Export: Release 11.2.0.1.0 - Production on Mon Aug 24 13:35:41 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** dumpfile=dpdir:maya_emp.dmp tables=maya.emp logfile=maya.emp.log flashback_scn=36787687 reuse_dumpfiles=yes
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "MAYA"."EMP"    7.898 KB    2 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  /u03/datapump/maya_emp.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 13:35:58

At SCN 36787687 both inserts had been committed, so both rows are exported.
FLASHBACK_TIME=SYSTIMESTAMP (CURRENT_TIME)
The FLASHBACK_TIME parameter value is converted to the SCN that approximately matches the specified time. FLASHBACK_TIME=SYSTIMESTAMP uses the current system time.

SYS> select systimestamp from dual;

SYSTIMESTAMP
------------------------------------
24-AUG-15 07.17.52
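The conversion can also be checked manually with TIMESTAMP_TO_SCN; a minimal sketch (the timestamp literal and format mask here are illustrative, not from the original run):

SYS> select timestamp_to_scn(to_timestamp('24-AUG-2015 19:17:52','DD-MON-YYYY HH24:MI:SS')) from dual;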
MAYA> create table tab1(no number, staring_val varchar2(15));

Table created.

MAYA> insert into tab1 select rownum, 'ORACLE' from dual connect by level <= 100;

100 rows created.

MAYA> commit;

Commit complete.
$ expdp system/ dumpfile=dpdir:maya.dmp tables=maya.tab1 logfile=dpdir:maya.log FLASHBACK_TIME=SYSTIMESTAMP
..
...
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "MAYA"."TAB1"    6.773 KB    100 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  /u01/datapump/maya.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 19:27:27

FLASHBACK_TIME AT SPECIFIC TIME
SYS> drop table maya.tab1;

Table dropped.

$ cat flash_time.ini
DUMPFILE=dpdir:flash_time.dmp
LOGFILE=dpdir:flash_time.log
TABLES=MAYA.TAB1
FLASHBACK_TIME="TO_TIMESTAMP('24-AUG-2015 19:17:52','DD-MON-YYYY HH24:MI:SS')"

$ expdp system/ parfile=flash_time.ini
...
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "MAYA"."TAB1"    6.773 KB    100 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  /u01/datapump/flash_time.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 20:37:00

$ impdp system/manager dumpfile=dpdir:flash_time.dmp
..
...
Master table "SYSTEM"."SYS_IMPORT_FULL_04" successfully loaded/unloaded
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "MAYA"."TAB1"    6.773 KB    100 rows
Job "SYSTEM"."SYS_IMPORT_FULL_04" successfully completed at 20:38:20
TABLE_EXISTS_ACTION
Sometimes a ticket asks us to load some tables into a user's schema, but the tables already exist in that schema and the data they contain is not the most recent; we need the old data as well as the updated data. When we try to import the dump file into the schema, it immediately throws an error about the existing tables. How can we import then? Oracle provides the TABLE_EXISTS_ACTION parameter of Data Pump, which accepts the following values:

SKIP     : Leaves the table as it is and moves on to the next object (default).
APPEND   : Loads rows from the source; existing rows are never touched.
TRUNCATE : Deletes existing rows, then loads rows from the source.
REPLACE  : Drops the existing table, then re-creates it and loads it from the source.

REPLACE and SKIP are NOT valid options for CONTENT=DATA_ONLY.
TUSER> select * from t1;

 ID NAME   SSN    MOBILE
--- ------ ------ ----------
  1 SONY   200100 9999912345
  2 SONI   200101 9999923456
  3 SAAM   200102 9999934567
  4 SHAM   200103 9999945678
  5 JESI   200104 9999956789
EXPORT OF TUSER
$ expdp system/ dumpfile=tuser.dmp schemas=tuser

Export: Release 11.2.0.1.0 - Production on Wed Aug 5 15:23:25 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_09":  system/******** dumpfile=tuser.dmp schemas=tuser
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
. . exported "TUSER"."T1"    6.304 KB    5 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_09" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_09 is:
  /u02/app/oracle/admin/crms/dpdump/tuser.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_09" successfully completed at 15:24:44
TABLE_EXISTS_ACTION=SKIP [DEFAULT]
$ impdp system/ dumpfile=tuser.dmp schemas=tuser

Import: Release 11.2.0.1.0 - Production on Wed Aug 5 15:27:38 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_01":  system/******** dumpfile=tuser.dmp schemas=tuser
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"TUSER" already exists
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
ORA-39151: Table "TUSER"."T1" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" completed with 2 error(s) at 15:27:41

TABLE_EXISTS_ACTION=APPEND
Before importing the dump file, I am adding some records and constraints to the table.

TUSER> TRUNCATE TABLE T1;

Table truncated.

TUSER> insert into t1 values(6,'BILL',200105,9999967891);

1 row created.

TUSER> insert into t1 values(7,'GREEN',200106,9999978912);

1 row created.

TUSER> insert into t1 values(8,'DAVID',200107,9999989123);

1 row created.

TUSER> ALTER TABLE T1 ADD CONSTRAINT T1_CONS1_ID_PK PRIMARY KEY(ID);

Table altered.

TUSER> ALTER TABLE T1 ADD CONSTRAINT T1_CONS2_SSN_UNIQ UNIQUE(SSN);

Table altered.
CONSTRAINT FOR TABLE T1
TUSER> SELECT CONSTRAINT_NAME FROM USER_CONSTRAINTS;

CONSTRAINT_NAME
------------------------------
T1_CONS1_ID_PK
T1_CONS2_SSN_UNIQ
IMPORT THE DUMP - TABLE_EXISTS_ACTION= APPEND
$ impdp system/ dumpfile=tuser.dmp schemas=tuser table_exists_action=append

Import: Release 11.2.0.1.0 - Production on Wed Aug 5 15:51:35 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_01":  system/******** dumpfile=tuser.dmp schemas=tuser table_exists_action=append
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"TUSER" already exists
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
ORA-39152: Table "TUSER"."T1" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "TUSER"."T1"    6.304 KB    5 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" completed with 2 error(s) at 15:51:42

TUSER> SELECT * FROM T1;

 ID NAME   SSN    MOBILE
--- ------ ------ ----------
  6 BILL   200105 9999967891
  7 GREEN  200106 9999978912
  8 DAVID  200107 9999989123
  1 SONY   200100 9999912345
  2 SONI   200101 9999923456
  3 SAAM   200102 9999934567
  4 SHAM   200103 9999945678
  5 JESI   200104 9999956789

8 rows selected.
TUSER> SELECT CONSTRAINT_NAME FROM USER_CONSTRAINTS;

CONSTRAINT_NAME
------------------------------
T1_CONS1_ID_PK
T1_CONS2_SSN_UNIQ

TABLE_EXISTS_ACTION=TRUNCATE
$ impdp system/ dumpfile=tuser.dmp schemas=tuser table_exists_action=truncate

Import: Release 11.2.0.1.0 - Production on Wed Aug 5 15:57:08 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_01":  system/******** dumpfile=tuser.dmp schemas=tuser table_exists_action=truncate
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"TUSER" already exists
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
ORA-39153: Table "TUSER"."T1" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "TUSER"."T1"    6.304 KB    5 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" completed with 2 error(s) at 15:57:12

TUSER> SELECT CONSTRAINT_NAME FROM USER_CONSTRAINTS;

CONSTRAINT_NAME
------------------
T1_CONS1_ID_PK
T1_CONS2_SSN_UNIQ
TUSER> SELECT * FROM T1;

 ID NAME   SSN    MOBILE
--- ------ ------ ----------
  1 SONY   200100 9999912345
  2 SONI   200101 9999923456
  3 SAAM   200102 9999934567
  4 SHAM   200103 9999945678
  5 JESI   200104 9999956789
TRUNCATE removed only the newly added records (id 6, 7, 8) from the table (T1), but the newly created constraints still exist on the table in the TUSER schema.
TUSER> ALTER TABLE T1 ADD CONSTRAINT T1_CONS3_MOBILE_UNIQ UNIQUE(MOBILE);

Table altered.

TUSER> insert into t1 values(009,'ROSE',200108,9999991234);

1 row created.

TUSER> insert into t1 values(010,'SANDY',200109,9999992345);

1 row created.

TUSER> COMMIT;

Commit complete.
TUSER> SELECT * FROM T1;

 ID NAME   SSN    MOBILE
--- ------ ------ ----------
  1 SONY   200100 9999912345
  2 SONI   200101 9999923456
  3 SAAM   200102 9999934567
  4 SHAM   200103 9999945678
  5 JESI   200104 5567891235
  9 ROSE   200108 9999991234
 10 SANDY  200109 9999992345

7 rows selected.

TUSER> SELECT CONSTRAINT_NAME FROM USER_CONSTRAINTS;

CONSTRAINT_NAME
------------------------------
T1_CONS3_MOBILE_UNIQ
T1_CONS1_ID_PK
T1_CONS2_SSN_UNIQ
TABLE_EXISTS_ACTION=REPLACE
$ impdp system/ dumpfile=tuser.dmp schemas=tuser table_exists_action=REPLACE

Import: Release 11.2.0.1.0 - Production on Wed Aug 5 16:34:52 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_01":  system/******** dumpfile=tuser.dmp schemas=tuser table_exists_action=REPLACE
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"TUSER" already exists
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "TUSER"."T1"    6.304 KB    5 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" completed with 1 error(s) at 16:34:57
TUSER> SELECT CONSTRAINT_NAME FROM USER_CONSTRAINTS;

no rows selected

TUSER> SELECT * FROM T1;

 ID NAME   SSN    MOBILE
--- ------ ------ ----------
  1 SONY   200100 9999912345
  2 SONI   200101 9999923456
  3 SAAM   200102 9999934567
  4 SHAM   200103 9999945678
  5 JESI   200104 9999956789
POINTS TO NOTE
TRUNCATE deletes existing rows; REPLACE drops the existing table, then re-creates it and loads data from the source. When you use SKIP, APPEND, or TRUNCATE, the existing table's dependent objects in the target (indexes, grants, triggers, constraints) are NOT modified. With REPLACE, the dependent objects are dropped and re-created from the source. If the existing table has active constraints or triggers and a row violates an active constraint during the load, the load of that table fails; you can override this behaviour by specifying DATA_OPTIONS=SKIP_CONSTRAINT_ERRORS on the impdp command line.
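A minimal sketch of that override, reusing the dump file from this section (rows that violate an active constraint are rejected and logged while the remaining rows are loaded):

$ impdp system/ dumpfile=tuser.dmp schemas=tuser table_exists_action=append data_options=skip_constraint_errors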
NETWORK_LINK

Network mode is one of the most interesting Data Pump features. It allows a user to perform an import operation on the fly, without any dump file: the target database receives data directly from the source database, so no intermediate dump file is generated. This is convenient and saves space; the two databases, located on different servers, communicate directly. DBAs work with different environments (PROD/DEV/TEST) and often need to keep the same production data in TEST/DEV; for this we need to move data from the production database to the dev database. I have two different databases on two different servers. (In the dump file variant shown later, once I run the EXPDP job against SERVER1, the dump file is created on SERVER2.)
SOURCE DATABASE : PSTPROD -- 192.168.1.130 -- SERVER1 -- PRODUCTION  -- 10.2.0.5
TARGET DATABASE : HRMS    -- 192.168.1.131 -- SERVER2 -- DEVELOPMENT -- 11.2.0.1
HOW DATA IS TRANSFERRED
Network import mode is started when the NETWORK_LINK parameter is added to the IMPDP command. This parameter references a valid database link that points to the source database; that database link is used to make the connection to the source. The IMPDP client initiates the import request against the local (target) database; the target database contacts the source database referenced by the NETWORK_LINK database link, retrieves the data, and writes it directly into the target database. No dump files are involved.
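A minimal sketch of the command shape (the link name, schema, and directory object here are placeholders, not from this example; the full worked example follows below):

$ impdp system/manager directory=dp logfile=net_imp.log network_link=source_db_link schemas=hr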
CONFIGURE NETWORK CONFIGURATION FILES

In server2, add a TNS entry to the local database tnsnames.ora file for the remote database.

EX: Add a TNS entry (file location: /u02/app/oracle/product/11.2.0/dbhome_1/network/admin) for the pstprod database in server2.

PSTPROD =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.130)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = pstprod)
      (SID = pstprod)
    )
  )

In server1, add entries in listener.ora for the pstprod database.

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.130)(PORT = 1521))
    )
  )

ADR_BASE_LISTENER = /u02/app/oracle

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/10.2.0/dbhome_1)
      (SID_NAME = pstprod)
    )
  )

$ lsnrctl start

The listener must be up and running on the source database server. On server2 the listener is also running.
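To confirm the listener state on either server before testing name resolution, the standard lsnrctl status check can be used; the services it lists should include the local instance:

$ lsnrctl status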
In SERVER2, test connectivity using the tnsping utility.

$ tnsping PSTPROD

TNS Ping Utility for Linux: Version 11.2.0.1.0 - Production on 25-AUG-2015 12:12:24
Copyright (c) 1997, 2009, Oracle.  All rights reserved.

Used parameter files:

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.130)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = pstprod)))
OK (70 msec)
In server2 the listener is also configured, so I am using the net service name to connect to the database.

$ rlsqlplus rose@HRMSDB

SQL*Plus: Release 11.2.0.1.0 Production on Tue Aug 25 12:42:44 2015
Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Enter password: *****

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
CREATE DATABASE LINK
Before importing data over the network, you need to create a database link in the DESTINATION database first. At the target side, a new database link is created.

ROSE> create database link remotelink connect to hr identified by hr using 'PSTPROD';

Database link created.

ROSE> select * from tab@remotelink;

TNAME                          TABTYPE CLUSTERID
------------------------------ ------- ---------
REGIONS                        TABLE
COUNTRIES                      TABLE
LOCATIONS                      TABLE
DEPARTMENTS                    TABLE
JOBS                           TABLE
EMPLOYEES                      TABLE
JOB_HISTORY                    TABLE
EMP_DETAILS_VIEW               VIEW

8 rows selected.

The database link is working and ready, from database hrms to pstprod.
The source database (pstprod) version is 10.2.0.5 and the destination database (hrms) version is 11.2.0.1. There is a database link named remotelink on the hrms database. In network mode the source database must be an equal or lower version than the destination database.

$ echo $ORACLE_HOME
/u02/app/oracle/product/11.2.0/dbhome_1
$ export ORACLE_HOME=/u02/app/oracle/product/11.2.0/dbhome_1
$ export ORACLE_SID=hrms

Let's start the import on the TARGET (import the hr schema of pstprod into the hrms database without a dump file). As we know, data is transferred across the database link from one database to the other, but Data Pump still requires a directory object on the server to store the log file.

$ impdp rose/rose directory=dp logfile=hr.log network_link=remotelink remap_schema=hr:rose

Import: Release 11.2.0.1.0 - Production on Tue Aug 25 16:15:06 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.
All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production With the Partitioning, OLAP, Data Mining and Real Application Testing options Starting "ROSE"."SYS_IMPORT_SCHEMA_01":
rose/******** directory=dp logfile=hr.log
network_link=remotelink remap_schema=hr:rose
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 448 KB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . imported "ROSE"."COUNTRIES"       25 rows
. . imported "ROSE"."DEPARTMENTS"     27 rows
. . imported "ROSE"."EMPLOYEES"      107 rows
. . imported "ROSE"."JOBS"            19 rows
. . imported "ROSE"."JOB_HISTORY"     10 rows
. . imported "ROSE"."LOCATIONS"       23 rows
. . imported "ROSE"."REGIONS"          4 rows
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "ROSE"."SYS_IMPORT_SCHEMA_01" successfully completed at 16:16:36

The import succeeded without a dump file; the HR schema has been loaded into the ROSE schema.
SCHEMA EXPORT USING NETWORK_LINK WITH DUMPFILE
We can take a schema export of the source database from the target server and store the dump files there as well. The NETWORK_LINK parameter initiates an export using a database link: the EXPDP client contacts the source database referenced by the database link, retrieves the data from the source database, and writes it to a dump file on the target database server.

$ expdp rose/ dumpfile=dp:hr_bkp.dmp logfile=dp:hr_bkp.log network_link=remotelink
Export: Release 11.2.0.1.0 - Production on Tue Aug 25 16:56:45 2015 Copyright (c) 1982, 2009, Oracle and/or its affiliates.
All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production With the Partitioning, OLAP, Data Mining and Real Application Testing options Starting "ROSE"."SYS_EXPORT_SCHEMA_01":
rose/******** dumpfile=dp:hr_bkp.dmp
logfile=dp:hr_bkp.log network_link=remotelink
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 448 KB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "HR"."COUNTRIES"      6.765 KB      25 rows
. . exported "HR"."DEPARTMENTS"    7.710 KB      27 rows
. . exported "HR"."EMPLOYEES"      18.39 KB     107 rows
. . exported "HR"."JOBS"           7.656 KB      19 rows
. . exported "HR"."JOB_HISTORY"    7.843 KB      10 rows
. . exported "HR"."LOCATIONS"      9.148 KB      23 rows
. . exported "HR"."REGIONS"        5.851 KB       4 rows
Master table "ROSE"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded ****************************************************************************** Dump file set for ROSE.SYS_EXPORT_SCHEMA_01 is: /u03/datapump/hr_bkp.dmp Job "ROSE"."SYS_EXPORT_SCHEMA_01" successfully completed at 16:56:56 $ ls -l /u03/datapump/hr* -rw-r-----
1 oracle oinstall 380928 Aug 25 16:56 /u03/datapump/hr_bkp.dmp
-rw-r--r--
1 oracle oinstall
2276 Aug 25 16:56 /u03/datapump/hr_bkp.log
EXPORT SCHEMA USING NETWORK_LINK WITH FLASHBACK_SCN

For IMPDP, FLASHBACK_SCN is valid only when the NETWORK_LINK parameter is also specified, because the value is passed to the source database in order to extract data consistent up to the specified SCN.

SOURCE_SCHEMA   : SONI
TARGET_SCHEMA   : SCOTT
SOURCE DATABASE : CRMS -- 192.168.1.130 -- SERVER1 -- 11.2.0.1
TARGET DATABASE : HRMS -- 192.168.1.131 -- SERVER2 -- 11.2.0.1
In server2, add a TNS entry to the local database tnsnames.ora file for the remote database. Add the TNS entry (file location: /u02/app/.../dbhome_1/network/admin) for the crms database in server2.

CRMSDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.130)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = crms.server1.com)
    )
  )

In server1, add entries in listener.ora for the crms database.

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.130)(PORT = 1521))
    )
  )

ADR_BASE_LISTENER = /u02/app/oracle

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u02/app/oracle/product/11.2.0/dbhome_1)
      (SID_NAME = crms)
    )
  )

In SERVER2, test connectivity using the tnsping utility.

$ tnsping CRMSDB
..
Used parameter files:

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.130)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = crms.server1.com)))
OK (10 msec)
CREATE DATABASE LINK AT TARGET DATABASE:
USR2> create database link remotelink_crms connect to usr1 identified by usr1 using 'CRMSDB';

Database link created.

USR2> select count(*) from tab1@remotelink_crms;

  COUNT(*)
----------
    144688
CREATE DIRECTORY AT TARGET
The directory already exists, so I am just granting user USR2 permission to access the database directory object dp (at the database level).

SYS> grant read, write on directory dp to usr2;

Grant succeeded.

In server1 (crms), I am going to delete records from USR1.TAB1.

SYS> SELECT CURRENT_SCN, DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM V$DATABASE;

CURRENT_SCN GET_SYSTEM_CHANGE_NUMBER
----------- ------------------------
   37513251                 37513251

23:29:37@SYS> delete from usr1.tab1;

144688 rows deleted.

23:30:03@SYS> commit;

Commit complete.

23:30:06@SYS> SELECT CURRENT_SCN, DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM V$DATABASE;

CURRENT_SCN GET_SYSTEM_CHANGE_NUMBER
----------- ------------------------
   37516210                 37516210
IMPORT SCN= 37516210
$ impdp usr2/ directory=dp logfile=netwk.log network_link=remotelink_crms FLASHBACK_SCN=37516210 REMAP_SCHEMA=USR1:USR2

Import: Release 11.2.0.1.0 - Production on Wed Aug 26 00:51:29 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.
All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "USR2"."SYS_IMPORT_SCHEMA_01":
usr2/******** directory=dp2 logfile=usr1.log
network_link=remotelink_crms remap_schema=usr1:usr2 flashback_scn=37516210 remap_tablespace=tbs1:users Estimate in progress using BLOCKS method... Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA Total estimation using BLOCKS method: 17 MB Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA Processing object type SCHEMA_EXPORT/TABLE/TABLE . . imported "USR2"."TAB1"
0 rows
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS Job "USR2"."SYS_IMPORT_SCHEMA_01" successfully completed at 00:52:09
IMPORT SCN= 37513251
$ impdp usr2/ directory=dp logfile=netwk.log network_link=remotelink_crms FLASHBACK_SCN=37513251 REMAP_SCHEMA=USR1:USR2

Import: Release 11.2.0.1.0 - Production on Wed Aug 26 01:12:01 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.
All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production With the Partitioning, OLAP, Data Mining and Real Application Testing options Starting "USR2"."SYS_IMPORT_SCHEMA_01":
usr2/******** directory=dp logfile=netwk.log network_link=remotelink_crms remap_schema=usr1:usr2 flashback_scn=37513251 remap_tablespace=tbs1:users
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 17 MB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . imported "USR2"."TAB1"    144688 rows
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "USR2"."SYS_IMPORT_SCHEMA_01" successfully completed at 01:12:44

I have recovered 144688 rows using the FLASHBACK_SCN parameter.

IMPORT SCHEMAS USING NETWORK_LINK WITH FLASHBACK_TIME
$ vi network_param.ini
directory=dp
logfile=dp:schema_import.log
schemas=schema1,schema2
NETWORK_LINK=remotelink_crms
FLASHBACK_TIME=SYSTIMESTAMP
exclude=statistics
parallel=2

$ nohup impdp usr2/ parfile=network_param.ini
REQUIRED PRIVILEGES

A first attempt without extra privileges fails with the errors shown below; granting the Data Pump full-database roles, to USR2 on the target and to USR1 on the source, resolves them.

ORA-31631: privileges are required
ORA-39109: Unprivileged users may not operate upon other users' schema

SYS> grant datapump_imp_full_database to usr2;

Grant succeeded.

ORA-31631: privileges are required
ORA-39149: cannot link privileged user to non-privileged user

SYS> grant datapump_exp_full_database to usr1;

Grant succeeded.
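To confirm the role grants took effect, the standard DBA_ROLE_PRIVS dictionary view can be queried on each database as a privileged user (a quick sanity check, not part of the original run):

SYS> select grantee, granted_role from dba_role_privs where granted_role like 'DATAPUMP%';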
$ impdp usr2/ parfile=network_param.ini

Import: Release 11.2.0.1.0 - Production on Wed Aug 26 02:09:15 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "USR2"."SYS_IMPORT_SCHEMA_01":  usr2/******** parfile=network_param.ini
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 12 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . imported "USR1"."TAB1"    100000 rows
. . imported "USR1"."TAB2"    100000 rows
Job "USR2"."SYS_IMPORT_SCHEMA_01" successfully completed at 02:09:48

SAMPLE
SAMPLE=value is treated as a sample percentage. The MAYA.TAB1 table contains 10000000 records; I am going to export 10% of the data of the maya.tab1 table.
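As a side note, the percentage can also be scoped to a single table with the SAMPLE=[[schema_name.]table_name:]sample_percent form; a minimal sketch reusing the names from this example:

$ expdp system/ dumpfile=dpdir:maya_tab1.dmp logfile=dpdir:maya_tab1.log sample=maya.tab1:10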
$ expdp system/ dumpfile=dpdir:maya_tab1.dmp ... tables=maya.tab1 sample=10

Export: Release 11.2.0.1.0 - Production on Wed Aug 26 21:34:57 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.
All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production With the Partitioning, OLAP, Data Mining and Real Application Testing options Starting "SYSTEM"."SYS_EXPORT_TABLE_01":
system/******** dumpfile=dpdir:maya_tab1.dmp
logfile=dpdir:maya_tab1.log tables=maya.tab1 sample=10
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 320 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "MAYA"."TAB1"    26.36 MB    999210 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  /u01/datapump/maya_tab1.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 21:35:41

SKIP_UNUSABLE_INDEXES
The SKIP_UNUSABLE_INDEXES parameter is useful when importing data into an existing table. With SKIP_UNUSABLE_INDEXES=Y (the default), the data load simply skips any index that is in an unusable state, so no time is wasted maintaining it during the import job. With SKIP_UNUSABLE_INDEXES=N, loading a table (or table partition) that has an unusable index fails and that table is skipped.
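For reference, a minimal sketch of how an index typically ends up unusable and how to repair it after the import (the index name is taken from the listing below; both statements are standard SQL):

SYS> alter index usr1.indx2 unusable;

Index altered.

SYS> alter index usr1.indx2 rebuild;

Index altered.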
SYS> select index_name, status from dba_indexes where owner='USR1';

INDEX_NAME                     STATUS
------------------------------ --------
INDX1                          VALID
INDX2                          UNUSABLE
EMP_EMPID_CONS1_PK             VALID
$ expdp system/ dumpfile=dpdir:usr1_emp.dmp ... tables=usr1.emp

Export: Release 11.2.0.1.0 - Production on Wed Aug 26 23:13:55 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.
All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** dumpfile=dpdir:usr1_emp.dmp logfile=dpdir:usr_emp.log tables=usr1.emp
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
. . exported "USR1"."EMP"    12.10 KB    68 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded ****************************************************************************** Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is: /u01/datapump/usr1_emp.dmp Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 23:14:12
EX – I : SKIP_UNUSABLE_INDEXES=N
$ impdp system/ dumpfile=dpdir:usr1_emp.dmp logfile=dpdir:usr1_imp.log skip_unusable_indexes=N table_exists_action=append

Import: Release 11.2.0.1.0 - Production on Wed Aug 26 23:14:24 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.
All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production With the Partitioning, OLAP, Data Mining and Real Application Testing options Master table "SYSTEM"."SYS_IMPORT_FULL_04" successfully loaded/unloaded Starting "SYSTEM"."SYS_IMPORT_FULL_04":
system/******** dumpfile=dpdir:usr1_emp.dmp
logfile=dpdir:usr1_imp.log skip_unusable_indexes=N table_exists_action=append
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39152: Table "USR1"."EMP" exists. Data will be appended to existing table but all dependent metadata will be skipped due to table_exists_action of append
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
ORA-31693: Table data object "USR1"."EMP" failed to load/unload and is being skipped due to error:
ORA-26028: index USR1.INDX2 initially in unusable state
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Job "SYSTEM"."SYS_IMPORT_FULL_04" completed with 2 error(s) at 23:14:27

EX – II : SKIP_UNUSABLE_INDEXES=N
It has no effect when a table is created as part of an import. In that case, the table and indexes are newly created and will not be marked unusable.

SYS> drop table usr1.emp;

Table dropped.

$ impdp system/ dumpfile=dpdir:usr1_emp.dmp skip_unusable_indexes=N

Import: Release 11.2.0.1.0 - Production on Thu Aug 27 00:48:40 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.
All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production With the Partitioning, OLAP, Data Mining and Real Application Testing options Master table "SYSTEM"."SYS_IMPORT_FULL_04" successfully loaded/unloaded Starting "SYSTEM"."SYS_IMPORT_FULL_04":
system/******** dumpfile=dpdir:usr1_emp.dmp
logfile=dpdir:usr1_emp.log skip_unusable_indexes=N
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "USR1"."EMP"    12.10 KB    68 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Job "SYSTEM"."SYS_IMPORT_FULL_04" successfully completed at 00:48:08
VERSION
The VERSION parameter lets us move data between Oracle versions, here from a higher version to a lower one: 1) export the table (maya.tab1) from 11.2.0.1, then 2) import it into 10.2.0.5.
$ expdp system/ dumpfile=dpdir:tab1.dmp tables=maya.tab1 reuse_dumpfiles=yes

Export: Release 11.2.0.1.0 - Production on Fri Aug 28 10:31:07 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.
All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production With the Partitioning, OLAP, Data Mining and Real Application Testing options Starting "SYSTEM"."SYS_EXPORT_TABLE_01":
system/******** dumpfile=dpdir:tab1.dmp
tables=maya.tab1 reuse_dumpfiles=yes
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 320 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "MAYA"."TAB1"    263.8 MB    1000500 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded ****************************************************************************** Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is: /u01/datapump/tab1.dmp Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 10:31:26
SYS> create or replace directory dp_dir as '/u03/datapump/';

Directory created.

SYS> grant read, write on directory dp_dir to system;

Grant succeeded.

Copy tab1.dmp to the target database directory:

$ cp /u01/datapump/tab1.dmp /u03/datapump/tab1.dmp
$ impdp system/ dumpfile=dp_dir:tab1.dmp

Import: Release 10.2.0.5.0 - Production on Friday, 28 August, 2015 10:05:37
Copyright (c) 2003, 2007, Oracle.
All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-39142: incompatible version number 3.1 in dump file "/u01/datapump/tab1.dmp"
POINTS TO NOTE
If we export from a higher Oracle version (11gR2) and import into a lower version (10gR2) we get the error ORA-39142: incompatible version number 3.1 in dump file. Any solution? Export again from the higher version, this time specifying the VERSION=version_string parameter on the export job; the resulting dump file can then be imported into the lower version without any issues.
$ expdp system/ dumpfile=dpdir:tab1.dmp tables=maya.tab1 version=10.2.0.5

Export: Release 11.2.0.1.0 - Production on Fri Aug 28 12:08:44 2015
Copyright (c) 1982, 2009, Oracle and/or its affiliates.
All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production With the Partitioning, OLAP, Data Mining and Real Application Testing options Starting "SYSTEM"."SYS_EXPORT_TABLE_01":
system/******** dumpfile=dpdir:tab1.dmp tables=maya.tab1 reuse_dumpfiles=yes version=10.2.0.5
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 320 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "MAYA"."TAB1"    263.8 MB    1000500 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded ****************************************************************************** Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is: /u01/datapump/tab1.dmp Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 12:08:52 $ cp /u01/datapump/tab1.dmp
/u03/datapump/tab1.dmp
$ impdp system/ dumpfile=dp_dir:tab1.dmp remap_tablespace=crms:users

Import: Release 10.2.0.5.0 - Production on Friday, 28 August, 2015 12:51:04
Copyright (c) 2003, 2007, Oracle.
All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - Production With the Partitioning, OLAP, Data Mining and Real Application Testing options Master table "SYSTEM"."SYS_IMPORT_FULL_03" successfully loaded/unloaded Starting "SYSTEM"."SYS_IMPORT_FULL_03":
system/******** dumpfile=dp_dir:tab1.dmp
remap_tablespace=crms:users
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "MAYA"."TAB1"    263.8 MB    1000500 rows
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYSTEM"."SYS_IMPORT_FULL_03" successfully completed at 12:44:16
POINTS TO NOTE
Some features are NOT available when exporting from a higher version for import into a lower version.

$ expdp system/ dumpfile=dpdir:tab1.dmp logfile=dpdir:tab1.log tables=maya.tab1 version=10.2.0.5 compression=all
Export: Release 11.2.0.1.0 - Production on Fri Aug 28 13:09:00 2015 Copyright (c) 1982, 2009, Oracle and/or its affiliates.
All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39005: inconsistent arguments
ORA-39055: The COMPRESSION feature is not supported in version 10.2.0.5.
MONITORING DATAPUMP JOBS
We can find details of Data Pump jobs using the following views: DBA_DATAPUMP_JOBS, DBA_DATAPUMP_SESSIONS and V$SESSION_LONGOPS.

SQL> select * from dba_datapump_jobs;

SQL> select * from dba_datapump_sessions;

SQL> select sid, serial#, username, opname, sofar, totalwork, start_time, sysdate, time_remaining, message
     from v$session_longops
     where opname like '%EXPORT%';

SQL> select to_char(sysdate,'YYYY-MM-DD HH24:MI:SS') "DATE", s.program, s.sid, s.status, s.username, d.job_name, p.spid, s.serial#, p.pid
     from v$session s, v$process p, dba_datapump_sessions d
     where p.addr = s.paddr and s.saddr = d.saddr;
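A running job can also be monitored and controlled interactively by attaching to it; a minimal sketch (the job name is whatever DBA_DATAPUMP_JOBS reports, e.g. SYS_EXPORT_TABLE_01):

$ expdp system/ attach=SYS_EXPORT_TABLE_01

Export> status
Export> stop_job=immediate

A stopped job can later be resumed by attaching again and issuing START_JOB.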