Migrating Oracle 10g/11g Environments from Windows Server Enterprise to Red Hat Enterprise Linux 5

Oracle 10g/11g on Windows Server Enterprise (Intel Xeon X5570 Nehalem server)

Migrated to:

Red Hat Enterprise Linux 5 (Intel Xeon X5570 Nehalem server)

Version 1.0
January 2010

Migrating Oracle 10g/11g Environments from Windows Server Enterprise to Red Hat Enterprise Linux 5

1801 Varsity Drive
Raleigh NC 27606-2072 USA
Phone: +1 919 754 3700
Phone: 888 733 4281
Fax: +1 919 754 3701
PO Box 13588
Research Triangle Park NC 27709 USA

Linux is a registered trademark of Linus Torvalds. Red Hat, Red Hat Enterprise Linux and the Red Hat "Shadowman" logo are registered trademarks of Red Hat, Inc. in the United States and other countries. Microsoft, Windows, Windows Server and SQL Server are registered trademarks of Microsoft Corporation. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Intel, the Intel logo, and Xeon are registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. All other trademarks referenced herein are the property of their respective owners.

© 2010 by Red Hat, Inc. This material may be distributed only subject to the terms and conditions set forth in the Open Publication License, V1.0 or later (the latest version is presently available at http://www.opencontent.org/openpub/). The information contained herein is subject to change without notice. Red Hat, Inc. shall not be liable for technical or editorial errors or omissions contained herein. Distribution of modified versions of this document is prohibited without the explicit permission of Red Hat Inc. Distribution of this work or derivative of this work in any standard (paper) book form for commercial purposes is prohibited unless prior permission is obtained from Red Hat Inc.

The GPG fingerprint of the security@redhat.com key is:
CA 20 86 86 2B D6 9D FC 65 F6 EC C4 21 91 80 CD DB 42 A6 0E


Table of Contents

1 Executive Summary
  1.1 11g Lateral Migration
  1.2 Diagonal Migration
  1.3 Storage Considerations
  1.4 Which Method to Choose
2 Overview
  2.1 Diagonal Migration and Upgrade
  2.2 Lateral Migration
  2.3 ASM and File System Transparency
3 Migration or Migration/Upgrade
  3.1 Diagonal Migration and Upgrade Using Data Pump
  3.2 Configuring Data Pump
  3.3 Using File Based Data Pump
  3.4 Data Pump Network Import
  3.5 Lateral Migration Using Transportable Database
  3.6 Transportable Database Migration
  3.7 Transportable Database Migration ASM Based
  3.8 Database Upgrade Using Database Upgrade Assistant
4 Conclusions
Appendix A: Configuring Oracle 11gR2 ASM Based Storage


1 Executive Summary

This paper discusses the preparation and tasks required to migrate Oracle version 10g/11g databases from Microsoft Windows Server 2003/2008 Enterprise to Red Hat Enterprise Linux version 5.4. A previous publication in the Red Hat Reference Architecture Series documented the out-of-the-box performance gain users can realize by switching from Windows Server 2008 Enterprise to Red Hat Enterprise Linux 5.4.

This paper shows several ways to perform the migration of an Oracle RDBMS from Windows Server 2008 Enterprise to Red Hat Enterprise Linux 5.4. Various Oracle 10g/11g versions running on Windows Server 2003/2008 Enterprise are used as the starting point. The use cases differ in the underlying storage configurations and target RDBMS versions. Different migration and upgrade strategies are discussed, and utilities native to the Oracle 10g/11g RDBMS are discussed and demonstrated. All scenarios assume a quiesced source database. High availability options such as Data Guard, RAC and third party technology such as SAN based remote snapshot capabilities are not within the scope of this paper.

Several alternative methods to switch database host platforms from Windows Server 2003/2008 to Red Hat Enterprise Linux 5.4 are available. This paper focuses on two Oracle utilities and feature sets that can be used to accomplish inter-platform migration and upgrade between Windows Server platforms and Red Hat Enterprise Linux. One is Oracle Data Pump. The other is Oracle Transportable Database, a new 11g feature that extends Transportable Tablespace capabilities to the entire database.

Oracle database storage on Windows platforms can be based on file systems (typically NTFS) or Oracle's ASM based storage strategy. This paper illustrates moving NTFS and ASM based database storage from Windows Server 2008 Enterprise to Red Hat Enterprise Linux native file systems such as ext3 or to Oracle 11g ASM storage configurations.

1.1 11g Lateral Migration

Existing Oracle 11g databases running on Microsoft Windows Server (2003/2008) platforms can be migrated to Red Hat Enterprise Linux without changing the version of the database. In fact, in all cases of Transportable Database migrations, the new target's database binaries must be the same Oracle 11g version as the source database's. The underlying storage technology can remain file system or ASM based, or be switched from one to the


other. While the Transportable Tablespace feature of the Oracle RDBMS has been available for some time, Transportable Database technology is specific to versions of Oracle 11g only.
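Before planning a lateral migration, it can be useful to confirm the source platform and the set of platforms Oracle considers valid Transportable Database targets. A minimal SQL*Plus sketch, run on the source database; output varies by environment:

SQL> SELECT platform_name FROM v$database;
SQL> -- Platforms eligible as Transportable Database targets for this database
SQL> SELECT platform_id, platform_name
  2  FROM v$db_transportable_platform
  3  ORDER BY platform_id;

'Linux x86 64-bit' should appear in the list when the intended target is Red Hat Enterprise Linux on x86_64.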

Illustration 1: Oracle 11g Platform Migration using Oracle Transportable Database 

If an upgrade of the Oracle ASM/RDBMS binaries and the database is desired, it must be done as a separate follow-on process after the migration is completed.


Illustration 2: Oracle Transportable Database Followed by DBUA RDBMS Upgrade

1.2 Diagonal Migration

Oracle Data Pump can be used to perform simultaneous migration and database upgrades from Windows Server Enterprise (2003/2008) to Red Hat Enterprise Linux 5.4. Data Pump can be used to upgrade an Oracle 10gR1/R2 database to Oracle 11gR1/R2 as part of the migration process. The advantage of Oracle Data Pump is that the import process will transparently handle the datafile header block conversion from Windows to Linux and upgrade the RDBMS version in a single operation.


Illustration 3: Oracle 10g/11g Database Migration and Upgrade Using Data Pump


1.3 Storage Considerations

Whether the existing Windows/Oracle RDBMS installation is based on OS specific file systems or Oracle managed ASM matters little. Both the Data Pump and Transportable Database migration facilities allow the option of ASM or file system storage on the target server. Typically on Red Hat Enterprise Linux servers the ext3 file system is used for general purpose storage. Note that even if ASM is chosen as the primary database storage technology, ext3 file systems will be required for the Oracle binaries, environment scripts and configuration files such as listener.ora.

While Oracle utilities will manage the transition of datafiles from Windows Server 2008 Enterprise, a file transfer utility that can handle Windows to Linux conversion is required for text based scripts and parameter files. The file specification formats differ between Windows and Red Hat Enterprise Linux, so scripts created on Windows Server platforms will need modification to run in a Linux environment.
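The line ending conversion can also be handled with standard Linux tools after the files arrive. A hedged sketch; dos2unix and sed are assumed to be installed, and the file names and paths are illustrative only:

$ # Convert CRLF line endings on transferred text files
$ dos2unix initTST11GDB.ora listener.ora *.sql
$ # Example only: rewrite a Windows style path inside a parameter file
$ sed -i 's|c:\\oracle\\oradata|/u01/oradata|g' initTST11GDB.ora

Binary files such as Data Pump dump files must be transferred in binary mode and never passed through such conversions.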

1.4 Which Method to Choose

Varying factors such as the migration/maintenance window, database sizing and storage configurations all need to be considered when determining the best method of migration and/or upgrade for an individual database environment. This paper attempts to categorize and list these requirements. It offers several approaches based on varied scenarios that apply to typical Oracle environments. The illustration below shows the possible migration and upgrade paths available when using Oracle Data Pump to migrate and upgrade Oracle 10g/11g databases from a Windows Server platform to Red Hat Enterprise Linux.


Illustration 4: Oracle Data Pump Upgrade/Migration Storage Options


Because Oracle Transportable Database technology is limited to 11g versions of the Oracle RDBMS and only supports migration, the options available are a bit more limited than when using Data Pump. Note however that underlying storage technology changes, i.e. NTFS to ASM or ext3, are fully supported.


Illustration 5: Oracle 11g Transportable Database Migration and Storage Options


The single largest factor in the success of any Oracle migration/upgrade will be the degree to which the database team prepares and rehearses the operation in a test environment before attempting the operation in production. While test hardware and resources may be far more limited than the production environment, it is imperative to rehearse with at least partial data sets. This helps identify possible migration glitches such as corrupt data blocks in the pre-upgrade database. The DBA team needs to profile and estimate time frames for file conversions and transfers. The more preparation work done before the actual migration/upgrade, the better the chance of turning the system back over to production users on time and without glitches.

Beyond the investments in time and money, RDBMS technology upgrades and database migrations are each in their own right highly disruptive events. Any production upgrade or migration requires the full support and cooperation of the end user community, various IT entities, and management's full support and approval. Merely negotiating down time to do migration and upgrade work can become a serious and time consuming task in and of itself. Doing both the migration and an RDBMS upgrade in one maintenance cycle may seem risky, but availability and up time requirements may require this combined operation. So whenever the upgrade option is available, the process described will be a migration and upgrade procedure rather than simply a migration.
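One inexpensive rehearsal step is to scan the source database for corrupt blocks well before the maintenance window opens. A hedged sketch using RMAN against the source database; available options vary by Oracle version:

RMAN> BACKUP VALIDATE CHECK LOGICAL DATABASE;

SQL> -- Any corruption found by the validation is recorded here
SQL> SELECT * FROM v$database_block_corruption;

Finding and repairing corrupt blocks weeks in advance is far cheaper than discovering them while expdp is running during the production cutover.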


2 Overview

2.1 Diagonal Migration and Upgrade

With several of the Data Pump techniques listed, the database is upgraded and migrated in a single operation. This is referred to as a 'diagonal' migration/upgrade. This can save the need to negotiate and schedule additional down time for RDBMS upgrade work after migration. Performing both the migration and upgrade as a single maintenance task introduces greater complexity, but those risks can be mitigated by preparation and practice. Each organization is different and the best choices presented for migration and/or upgrade vary by circumstance.

2.2 Lateral Migration

Reflecting this, another option is a platform only migration, or 'lateral' migration, with no simultaneous database binary upgrade. In that scenario, after completion of the lateral migration, a database upgrade using Oracle's DBUA utility can be performed, but the upgrade could also be postponed to a later date. The migration moves an existing RDBMS environment to a new server platform without upgrading either the binaries or the database itself. Note that in order to use the Oracle Transportable Database feature to accomplish this, the RDBMS binary versions must match exactly, including CPU patches and minor revision levels. Transportable Database is an 11g specific feature.

Transportable Tablespaces (the ability to move one or more tablespaces of data from one database instance to another) has been around in one form or another since Oracle 8i. But this does not provide a means to easily move an entire set of application schemas. This is because an Oracle user's non data and index objects such as triggers, packages and other stored logic are stored in the database data dictionary and not in the tablespaces that store rows of information or indexes based on those rows. There are also significant restrictions regarding data and referential integrity when using Transportable Tablespaces. Many of these restrictions, such as tablespace self containment, are lessened with Transportable Database technology.

Database and application availability requirements, coupled with the physical size of a database, help determine the migration method. While a 50 GB production database is a manageable size for a Data Pump or Transportable Database platform migration/upgrade, given enough of a maintenance window, a 50 TB database may not be able to be migrated and upgraded via these methods during a normal maintenance down time window of 8 hours. Pre-upgrade testing will help determine this.
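Because the binary versions must match exactly for Transportable Database, it is worth capturing and comparing the full version and patch inventory on both hosts before the migration is scheduled. A hedged sketch; opatch ships with each Oracle Home and its output format differs between releases:

SQL> SELECT * FROM v$version;

$ # On each host, list installed interim patches for the Oracle Home
$ $ORACLE_HOME/OPatch/opatch lsinventory

The two inventories should agree on the base release (e.g., 11.2.0.1.0) and on any CPU/PSU patches before a Transportable Database migration is attempted.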


As size and availability requirements rise for database installations, Oracle and many SAN vendors now provide additional tools and software to minimize downtime. However, these are beyond the scope of this paper. This paper will concentrate on demonstrating only the tools and technologies available natively from Red Hat Enterprise Linux and the Oracle RDBMS. Such additional functionality, as exemplified by Oracle's Data Guard or EMC's remote snapshot clone technology, will not be discussed.

2.3 ASM and File System Transparency

For Windows to Linux Oracle database migrations, ASM and file system based database storage configurations are mostly transparent to the process. In both storage technology cases, additional operating system scratch space is needed. Ideally this is non volatile storage shared between the source and target systems. Additional infrastructure such as shared temporary storage and increased network bandwidth can also expedite a Windows to Linux database server migration.

As noted in Section 1.3, whether the existing Windows/Oracle RDBMS installation is file system or ASM based matters little: both Data Pump and Transportable Database allow the choice of ASM or file system storage on the target server, ext3 file systems will still be required for the Oracle binaries, environment scripts and configuration files such as listener.ora, and text based scripts and parameter files will need Windows to Linux conversion and editing before they can be run in a Linux environment.

Migration of Oracle from Windows Server platforms to Red Hat Enterprise Linux is somewhat more straightforward than other cases because Windows x86_64 and Red Hat Enterprise Linux are both little endian CPU architectures. This simplifies database and tablespace conversion processing when using RMAN and Transportable Database, but it does not mean such a migration is trivial. Header file conversion for a Transportable Database based migration is required, and very often NLS_LANG settings, character sets and even flat file formats for parameter files will need to be converted.

The single greatest key to successful migration is extensive planning and testing before the actual conversion takes place. While rehearsing migration procedures, be sure to also identify completion milestones and fall back positions. For instance, if there are delays and the migration includes an RDBMS upgrade, would it be acceptable to go into production with the database migrated to a new server platform without upgrading the RDBMS? Such a scenario may occur due to unforeseen difficulties but can be a manageable event if some application testing is done beforehand, such as validation that the application software can run on a migrated but not upgraded version of the RDBMS.

Do not underestimate the impact that corrupt or 'special' data may have on migration plans. A very common example of this is orphaned rows. Poor application design may have allowed data to be incorporated into the database that does not comply with the referential integrity currently in place. Perhaps the database was originally created with no referential integrity constraints declared. Subsequently, referential integrity based on triggers could have been put in place that enforces certain business and data quality rules for new insertions but not for pre-existing rows. Unloading the data and then attempting to reinsert the rows and rebuild the referential integrity may not be possible, yet another situation no one wishes to discover in the midst of a production database migration/upgrade (a query for finding such rows is sketched at the end of this section).

Oracle extended data types and storage such as CLOBs and BLOBs and external tables also need careful pre-migration/upgrade evaluation and testing. Oracle interMedia Text and XML based objects can also complicate the migration. An inventory of all these objects (and a plan for migrating them) should be created and tested. Migration/upgrade planning is also a time to consider introducing internationalization or additional supported languages. Be sure to research and evaluate which of the expanded NLS_LANG options your organization could require and determine whether this introduces other unexpected complexities or issues.

Finally, remember that the database server migration is only part of the change being introduced. Be sure the SAN, firewalls, application servers and anything else that may be considered part of the migrating database environment are included in the migration plan and that all members of the migration team are aware of the dependencies. Know who is responsible for each component.
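As an example of the orphaned row problem described above, the following hedged sketch finds child rows with no matching parent. The table and column names are illustrative only; substitute the actual parent/child pairs from the application schema:

SQL> -- Child rows in ORDER_ITEMS whose ORDER_ID has no parent row in ORDERS
SQL> SELECT oi.order_id, COUNT(*) AS orphan_rows
  2  FROM soe.order_items oi
  3  WHERE NOT EXISTS
  4    (SELECT 1 FROM soe.orders o WHERE o.order_id = oi.order_id)
  5  GROUP BY oi.order_id;

Running such queries against the source database for each parent/child relationship, well before the migration window, identifies rows that would cause constraint creation to fail during an import.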


3 Migration or Migration/Upgrade

During the initial planning process a decision must be made as to whether the migration will include an RDBMS version upgrade component. Part of this decision will concern the application's compatibility with the Oracle version, while part of it is determined by the length of the maintenance window allowed and the amount of complexity tolerable. If a migration/upgrade is the path chosen, there are then two general paths to choose. These can best be described as:

• lateral: migration, then upgrade

• diagonal: upgrade simultaneous with the migration

One of the advantages provided by Oracle Data Pump Export/Import and original Export/Import is their lack of impact on the existing database, enabling the original database to remain available throughout the upgrade process. Data Pump Export uses Oracle Flashback Technology to get a consistent view of the data. However, neither Data Pump Export nor the original Export provides consistent snapshots by default; for this, a flag must be explicitly set.

Note that Data Pump Export/Import support started in Oracle Database 10g. When upgrading an Oracle database prior to 10g, original Export and Import (exp/imp) must be used. Also note that the original exp/imp file format is not compatible with the file format of the new utilities (expdp/impdp). Discussion of the deprecated exp and the legacy imp utilities is not within the scope of this paper.

Several other Data Pump based migration/upgrade options are also available. This includes a network based Data Pump migration and upgrade for the special case of upgrading an 11gR1 database on one platform to an 11gR2 database on another. Though there are specific version related limitations to this capability, it is one of the fastest and most efficient migration and upgrade techniques available.
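As an example of the consistency flag mentioned above, Data Pump Export accepts FLASHBACK_TIME (or FLASHBACK_SCN) to export a transactionally consistent image. A hedged sketch; names and paths are illustrative, and on older releases the value may need to be an explicit TO_TIMESTAMP expression:

C:\> expdp system SCHEMAS=soe DIRECTORY=dir_obj1 DUMPFILE=DP_DUMP1.DMP FLASHBACK_TIME=SYSTIMESTAMP

With FLASHBACK_TIME=SYSTIMESTAMP, all table data is exported as of the moment the job started, at the cost of additional undo retention on a busy database. For a quiesced source database, as assumed throughout this paper, the flag is harmless but not strictly necessary.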

3.1 Diagonal Migration and Upgrade Using Data Pump

Oracle's Data Pump utility offers a number of different paths to migrate Oracle RDBMS 10gR1 or higher. Data Pump has a GUI based interface in Oracle Enterprise Manager, an API for PL/SQL based operations, and command line executables called expdp and impdp. Dump files produced by whatever method are usable only by Data Pump. Note that the deprecated utilities imp and exp can not produce or read files usable by Data Pump. In 11gR2, the exp utility is entirely gone while the imp executable is still retained for upgrading pre 10g databases via exp generated dump files. This paper will focus on the Data Pump utilities as invoked from the operating system command line.

One of Data Pump's largest benefits is that it affords the database administrator the option of migrating and upgrading the RDBMS environment with a single procedure. In the following scenario, an approximately 50 GB Oracle 10gR2 database on Windows Server 2008 Enterprise R2 will be migrated to a Red Hat Enterprise Linux server running Oracle 11gR2.

3.2 Configuring Data Pump

To use the Oracle Data Pump feature with file based I/O, an Oracle Directory Object needs to be defined. In the case of the expdp utility this is where the dump and log files will be placed. For impdp, this is where that utility will look to find a dump file for import and to log events. An Oracle Directory Object is created by issuing the SQL*Plus command CREATE DIRECTORY:

SQL> create directory dir_obj1 as 'c:\dir_obj1';

Directory created.

SQL> grant read, write on directory dir_obj1 to system;

Grant succeeded.

Whether before or after the SQL*Plus command is issued, the actual operating system directory must also be created before the directory object can be used. Also make sure the operating system user that will run expdp or impdp has read and write permissions on that directory.

C:\>mkdir dir_obj1
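The same setup is needed on the Red Hat Enterprise Linux target before the import. A hedged sketch; the path and group name are illustrative:

$ mkdir -p /u01/dp_dir
$ chown oracle:oinstall /u01/dp_dir

SQL> create directory dmp_dir as '/u01/dp_dir';
SQL> grant read, write on directory dmp_dir to system;

The DMP_DIR name here matches the directory object referenced by the impdp examples later in this section.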

3.3 Using File Based Data Pump

Data Pump can be used in a variety of ways. It can dump/reload an entire database including the Oracle internal users and application schemas. It can also dump/reload one or more tablespaces, one or more specific tables, or one or more specific users and all their objects. In most migration scenarios it makes no sense to dump the entire database and move administrative users and objects such as sys and system. Since the database is moving to another operating system and a new Oracle Home, it is typically better to pre-create a 'skeleton database' on the targeted server and then dump and reload only the critical users' schemas and data elements.

Here, a skeleton database is defined as an Oracle database created with Oracle DBCA or custom scripts. It includes all non-application specific data dictionaries and schemas, undo tablespace, temporary tablespace, redo logs and a valid listener configuration. The skeleton database is fully functional but has no user data defined in it. The skeleton database is the Data Pump target. User accounts may or may not be pre-created. Depending on the migration requirements, Data Pump will either automatically re-create the tablespaces per the Data Pump export DDL or schema re-mapping can be configured. Data Pump provides a relatively easy way to reassign a data set from one schema to another via this type of re-mapping.

The expdp command listed below will log onto the source database as the system user and extract all the DDL and data for the user soe. It will place this data in a binary file called DP_DUMP1.DMP in the directory mapped by the EXPORT_DIR directory object. Regardless of whether the database storage is NTFS or ASM, the dump file will be stored on a file system, so sufficient file system space must be available. A log file for the job will also be placed there. The parameter parallel=16 instructs the expdp executable to spawn 16 worker threads to unload data. This parameter can greatly influence how much time the operation will take. The higher the parallel setting, the greater the load on the system and, to a degree, the faster the process will execute to completion. Eventually, too much parallelism will slow down the operation.

C:\dir_obj1>C:\oracle\product\10.2.0\db_1\bin\EXPDP SCHEMAS=soe DIRECTORY=EXPORT_DIR LOGFILE=DP.LOG DUMPFILE=DP_DUMP1.DMP parallel=16

Export: Release 10.2.0.4.0 - 64bit Production on Monday, 14 December, 2009 13:25:46

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Username: system
Password:

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** SCHEMAS=soe DIRECTORY=EXPORT_DIR LOGFILE=DP.LOG DUMPFILE=DP_DUMP1.DMP parallel=16
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 3.722 GB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE

www.redhat.com

18

Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SOE"."ORDER_ITEMS":"SYS_P39"  83.66 MB  3884906 rows
. . exported "SOE"."ORDER_ITEMS":"SYS_P41"  83.74 MB  3888632 rows
. . exported "SOE"."ORDER_ITEMS":"SYS_P44"  83.74 MB  3888242 rows
. . exported "SOE"."ORDER_ITEMS":"SYS_P42"  83.68 MB  3886060 rows
. . exported "SOE"."ORDER_ITEMS":"SYS_P51"  83.69 MB  3886097 rows
. . exported "SOE"."ORDER_ITEMS":"SYS_P43"  83.61 MB  3882522 rows
. . exported "SOE"."ORDER_ITEMS":"SYS_P45"  83.55 MB  3879617 rows
. . exported "SOE"."ORDER_ITEMS":"SYS_P47"  83.58 MB  3880938 rows
. . exported "SOE"."ORDER_ITEMS":"SYS_P37"  83.52 MB  3878161 rows
. . exported "SOE"."ORDER_ITEMS":"SYS_P38"  83.54 MB  3879173 rows
. . exported "SOE"."ORDER_ITEMS":"SYS_P40"  83.53 MB  3878529 rows
. . exported "SOE"."ORDER_ITEMS":"SYS_P46"  83.53 MB  3878739 rows
. . exported "SOE"."ORDER_ITEMS":"SYS_P48"  83.51 MB  3877853 rows
. . exported "SOE"."ORDER_ITEMS":"SYS_P49"  83.48 MB  3876542 rows
. . exported "SOE"."ORDER_ITEMS":"SYS_P50"  83.54 MB  3879171 rows
. . exported "SOE"."ORDER_ITEMS":"SYS_P52"  83.40 MB  3872506 rows
. . exported "SOE"."CUSTOMERS":"SYS_P25"    79.59 MB  1231898 rows
. . exported "SOE"."CUSTOMERS":"SYS_P28"    79.60 MB  1232241 rows
. . exported "SOE"."CUSTOMERS":"SYS_P35"    79.56 MB  1231608 rows
. . exported "SOE"."CUSTOMERS":"SYS_P23"    79.52 MB  1230908 rows
. . exported "SOE"."CUSTOMERS":"SYS_P26"    79.53 MB  1231057 rows
. . exported "SOE"."CUSTOMERS":"SYS_P29"    79.47 MB  1230165 rows
. . exported "SOE"."CUSTOMERS":"SYS_P21"    79.42 MB  1229515 rows
. . exported "SOE"."CUSTOMERS":"SYS_P22"    79.45 MB  1230006 rows
. . exported "SOE"."CUSTOMERS":"SYS_P24"    79.43 MB  1229509 rows
. . exported "SOE"."CUSTOMERS":"SYS_P27"    79.44 MB  1229726 rows
. . exported "SOE"."CUSTOMERS":"SYS_P30"    79.39 MB  1229121 rows
. . exported "SOE"."CUSTOMERS":"SYS_P31"    79.42 MB  1229576 rows
. . exported "SOE"."CUSTOMERS":"SYS_P32"    79.39 MB  1229144 rows
. . exported "SOE"."CUSTOMERS":"SYS_P34"    79.42 MB  1229449 rows
. . exported "SOE"."CUSTOMERS":"SYS_P33"    79.29 MB  1227481 rows
. . exported "SOE"."CUSTOMERS":"SYS_P36"    79.29 MB  1227458 rows


. . exported "SOE"."ORDERS":"SYS_P55"          34.78 MB  1109634 rows
. . exported "SOE"."ORDERS":"SYS_P57"          34.83 MB  1111038 rows
. . exported "SOE"."ORDERS":"SYS_P58"          34.80 MB  1110293 rows
. . exported "SOE"."ORDERS":"SYS_P60"          34.83 MB  1110967 rows
. . exported "SOE"."ORDERS":"SYS_P67"          34.81 MB  1110387 rows
. . exported "SOE"."ORDERS":"SYS_P53"          34.75 MB  1108451 rows
. . exported "SOE"."ORDERS":"SYS_P54"          34.76 MB  1108858 rows
. . exported "SOE"."ORDERS":"SYS_P56"          34.74 MB  1108136 rows
. . exported "SOE"."ORDERS":"SYS_P59"          34.76 MB  1108882 rows
. . exported "SOE"."ORDERS":"SYS_P61"          34.75 MB  1108457 rows
. . exported "SOE"."ORDERS":"SYS_P62"          34.74 MB  1108368 rows
. . exported "SOE"."ORDERS":"SYS_P63"          34.75 MB  1108639 rows
. . exported "SOE"."ORDERS":"SYS_P64"          34.73 MB  1107910 rows
. . exported "SOE"."ORDERS":"SYS_P65"          34.71 MB  1107353 rows
. . exported "SOE"."ORDERS":"SYS_P66"          34.74 MB  1108136 rows
. . exported "SOE"."ORDERS":"SYS_P68"          34.68 MB  1106385 rows
. . exported "SOE"."INVENTORIES":"SYS_P78"     81.67 KB     5150 rows
. . exported "SOE"."INVENTORIES":"SYS_P75"     76.24 KB     4738 rows
. . exported "SOE"."INVENTORIES":"SYS_P74"     73.11 KB     4532 rows
. . exported "SOE"."INVENTORIES":"SYS_P76"     69.79 KB     4326 rows
. . exported "SOE"."INVENTORIES":"SYS_P79"     70.20 KB     4326 rows
. . exported "SOE"."INVENTORIES":"SYS_P84"     66.07 KB     4120 rows
. . exported "SOE"."INVENTORIES":"SYS_P73"     63.96 KB     3914 rows
. . exported "SOE"."INVENTORIES":"SYS_P82"     63.55 KB     3914 rows
. . exported "SOE"."INVENTORIES":"SYS_P77"     57.32 KB     3502 rows
. . exported "SOE"."INVENTORIES":"SYS_P81"     57.71 KB     3502 rows
. . exported "SOE"."INVENTORIES":"SYS_P70"     54.60 KB     3296 rows
. . exported "SOE"."INVENTORIES":"SYS_P71"     53.99 KB     3296 rows
. . exported "SOE"."INVENTORIES":"SYS_P72"     51.47 KB     3090 rows
. . exported "SOE"."INVENTORIES":"SYS_P83"     48.15 KB     2884 rows
. . exported "SOE"."INVENTORIES":"SYS_P80"     45.24 KB     2678 rows
. . exported "SOE"."INVENTORIES":"SYS_P69"     36.07 KB     2060 rows
. . exported "SOE"."PRODUCT_DESCRIPTIONS"      86.30 KB      288 rows
. . exported "SOE"."PRODUCT_INFORMATION"       71.64 KB      288 rows
. . exported "SOE"."WAREHOUSES"                11.97 KB      206 rows
. . exported "SOE"."LOGON":"SYS_P100"              0 KB        0 rows
. . exported "SOE"."LOGON":"SYS_P85"               0 KB        0 rows
. . exported "SOE"."LOGON":"SYS_P86"               0 KB        0 rows
. . exported "SOE"."LOGON":"SYS_P87"               0 KB        0 rows
. . exported "SOE"."LOGON":"SYS_P88"               0 KB        0 rows
. . exported "SOE"."LOGON":"SYS_P89"               0 KB        0 rows
. . exported "SOE"."LOGON":"SYS_P90"               0 KB        0 rows
. . exported "SOE"."LOGON":"SYS_P91"               0 KB        0 rows
. . exported "SOE"."LOGON":"SYS_P92"               0 KB        0 rows
. . exported "SOE"."LOGON":"SYS_P93"               0 KB        0 rows
. . exported "SOE"."LOGON":"SYS_P94"               0 KB        0 rows
. . exported "SOE"."LOGON":"SYS_P95"               0 KB        0 rows
. . exported "SOE"."LOGON":"SYS_P96"               0 KB        0 rows
. . exported "SOE"."LOGON":"SYS_P97"               0 KB        0 rows
. . exported "SOE"."LOGON":"SYS_P98"               0 KB        0 rows
. . exported "SOE"."LOGON":"SYS_P99"               0 KB        0 rows

Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
  F:\ORACLE\EXPORT_DIR\DP_DUMP1.DMP
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 15:23:01

The above listing shows the entire output of a schema based data dump. A number of the tables within the soe schema happen to be partitioned. The output shows the degree of partitioning of those objects, and coincidentally the number of worker threads expdp uses for each object, by listing each partition's size and number of rows. This Data Pump job also extracted soe's entire suite of stored logic, indexes and constraints as well as table data and DDL.

If partitioned tables are part of the database, it is very worthwhile to experiment with the PARALLEL setting. Start with a value based on the number of processors or hyperthreads the source system possesses. Try one half that value and then double it. Record the actual elapsed times for these runs until the system's ideal Data Pump PARALLEL setting is determined. The graph below charts the effect of the PARALLEL parameter setting on the amount of time required to extract the contents of the soe schema. Throughout all these test cases the only variable changed was the value of PARALLEL. The results show a rather dramatic variance in the elapsed time caused only by varying the PARALLEL parameter of expdp.
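When running expdp with PARALLEL greater than 1, the dump file specification should normally include the %U substitution variable so that each worker can write to its own file. A hedged sketch; file names are illustrative:

C:\> expdp system SCHEMAS=soe DIRECTORY=EXPORT_DIR DUMPFILE=DP_DUMP%U.DMP LOGFILE=DP.LOG parallel=16

With DUMPFILE=DP_DUMP%U.DMP, Data Pump creates DP_DUMP01.DMP, DP_DUMP02.DMP and so on as needed; a single fixed dump file can otherwise become a serialization point for the worker processes.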


[Chart: Oracle 11gR1 Data Pump Export Parallel Performance. Dump time in minutes plotted against degree of parallelism (4, 8, 12, 16, 32).]

By default, Data Pump dumps the contents of the specified schema along with the DDL to recreate the schema itself. Data Pump provides the CONTENT parameter to allow a choice between the data and the metadata. This is useful if a user's stored logic and DDL must be extracted and migrated but not the associated data.

Once the data has been extracted from its source database, the dump file must be transferred to the new server. This is often a very time consuming task. Network bandwidth and the I/O abilities of both the source and target servers come into play. Certainly anything that can be done to optimize this portion of the process will earn big savings in the total elapsed time of the migration/upgrade. Ensure the destination disk resources are high performance and that network bandwidth is ample and unrestricted during the movement of the dump file. If SAN storage is part of the configuration, taking advantage of SAN replication technology is also strongly recommended. This is a good area for beforehand experimentation and optimization. Can additional disk resources or additional network bandwidth be added temporarily to help expedite the transfer?
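A hedged sketch of one common transfer approach from the Windows source to the Linux target; host names and paths are illustrative, the source host is assumed to run an SSH server (or a tool such as pscp/WinSCP is used on the Windows side), and the dump file must be moved in binary mode:

$ # Pull the dump file from the source host over SSH
$ scp oracle@winsrc:/export_dir/DP_DUMP1.DMP /u01/dp_dir/

$ # Verify the copy arrived intact before starting the import
$ md5sum /u01/dp_dir/DP_DUMP1.DMP

Comparing a checksum computed on both hosts before starting a multi-hour import is cheap insurance against a truncated or corrupted transfer.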


Once the database dump file has been transferred to the new target server, the Data Pump utility impdp is used to load the metadata and data into the new target database. Like the expdp utility, impdp also requires an Oracle directory object specification to work.

SQL> grant read, write on directory DMP_DIR to soe;

where DMP_DIR is the directory object mapping the location of the expdp dump file. Ensure that the Oracle processes have permission to read the dump file.

In this scenario, the entire application and its data are based on one schema called SOE. While the Data Pump utility will automatically create the user and underlying storage objects before creating and populating other schema objects, it is a good idea to pre-create the user and the tablespaces to be utilized. This will generate a few warnings at impdp start-up, but it ensures data is going to the proper schema and underlying storage.

During the impdp load a great deal of redo log activity will be generated. One way to minimize the performance impact of repeated log switches and checkpoints is to temporarily disable archive log mode. This does not reduce the amount of redo generated, but it does save the overhead of having to archive the logs. During a very large data load the number and size of the archive logs can present both disk space and management problems. If the Fast Recovery Area (FRA) is used to store archive logs, the database will hang if the FRA fills up. If the archive log destination is file system based and that file system fills up, the database will also hang until space is made available. Be sure to put the database back into archive log mode before going into production. Listed below are the SQL commands used to temporarily disable archive log mode.

# sqlplus /nolog

SQL*Plus: Release 11.2.0.1.0 Production on Wed Dec 16 11:46:03 2009

Copyright (c) 1982, 2009, Oracle. All rights reserved.

SQL> connect / as sysdba
Connected.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 3.5543E+10 bytes
Fixed Size                  2215264 bytes


Variable Size            2.7112E+10 bytes
Database Buffers         8321499136 bytes
Redo Buffers              106823680 bytes
Database mounted.
SQL> alter database noarchivelog;
Database altered.
SQL> alter database open;
Database altered.
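Once the import has finished, the reverse sequence puts the database back into archive log mode. A sketch for completeness, following the same pattern as above:

SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list

The final command confirms the database is again archiving redo before it is handed back to production.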

After the import is completed, the database must be shut down, started up in a mount state and archive log mode re-enabled, as sketched above. Performance gains from tuning parallelism settings on the import can have an even more dramatic positive effect than for export. Experimentation with PARALLEL settings during trial runs will help determine the optimal setting. The following graph shows the effect the PARALLEL parameter has on impdp loads executed with the database both in and out of archive log mode.


[Chart: Oracle Data Pump Import DOP and Archive/NoArchive Performance. Load time in minutes plotted against degree of parallelism (1, 2, 8, 16, 32, 64), with one series for archive log mode and one for noarchive mode.]

Below is the output from a Data Pump import that brought all of the 10g database's schemas to a Red Hat Enterprise Linux environment while simultaneously upgrading the database version to 11gR2. The benefit of simultaneous migration and upgrade is evident in the elapsed time required for the operation.

$ impdp system/ directory=DMP_DIR dumpfile=DP_DUMP1.DMP logfile=imp.log parallel=64

Import: Release 11.2.0.1.0 - Production on Tue Dec 15 13:20:10 2009

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01": system/******** directory=DMP_DIR dumpfile=DP_DUMP1.DMP logfile=imp.log parallel=64


Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "SOE"."ORDER_ITEMS":"SYS_P44"  83.74 MB  3888242 rows
. . imported "SOE"."ORDER_ITEMS":"SYS_P39"  83.66 MB  3884906 rows
. . imported "SOE"."ORDER_ITEMS":"SYS_P41"  83.74 MB  3888632 rows
. . imported "SOE"."ORDER_ITEMS":"SYS_P51"  83.69 MB  3886097 rows
. . imported "SOE"."ORDER_ITEMS":"SYS_P43"  83.61 MB  3882522 rows
. . imported "SOE"."ORDER_ITEMS":"SYS_P47"  83.58 MB  3880938 rows
. . imported "SOE"."INVENTORIES":"SYS_P78"  81.67 KB     5150 rows
. . imported "SOE"."INVENTORIES":"SYS_P75"  76.24 KB     4738 rows
. . imported "SOE"."INVENTORIES":"SYS_P74"  73.11 KB     4532 rows
. . imported "SOE"."INVENTORIES":"SYS_P76"  69.79 KB     4326 rows
. . imported "SOE"."INVENTORIES":"SYS_P79"  70.20 KB     4326 rows
. . imported "SOE"."INVENTORIES":"SYS_P84"  66.07 KB     4120 rows
. . imported "SOE"."INVENTORIES":"SYS_P73"  63.96 KB     3914 rows
. . imported "SOE"."INVENTORIES":"SYS_P82"  63.55 KB     3914 rows
. . imported "SOE"."INVENTORIES":"SYS_P77"  57.32 KB     3502 rows
. . imported "SOE"."INVENTORIES":"SYS_P81"  57.71 KB     3502 rows
. . imported "SOE"."INVENTORIES":"SYS_P70"  54.60 KB     3296 rows
. . imported "SOE"."INVENTORIES":"SYS_P71"  53.99 KB     3296 rows
. . imported "SOE"."INVENTORIES":"SYS_P72"  51.47 KB     3090 rows
. . imported "SOE"."INVENTORIES":"SYS_P83"  48.15 KB     2884 rows
. . imported "SOE"."INVENTORIES":"SYS_P80"  45.24 KB     2678 rows
. . imported "SOE"."INVENTORIES":"SYS_P69"  36.07 KB     2060 rows
. . imported "SOE"."PRODUCT_DESCRIPTIONS"   86.30 KB      288 rows
. . imported "SOE"."PRODUCT_INFORMATION"    71.64 KB      288 rows
. . imported "SOE"."WAREHOUSES"             11.97 KB      206 rows
. . imported "SOE"."LOGON":"SYS_P100"           0 KB        0 rows
. . imported "SOE"."LOGON":"SYS_P85"            0 KB        0 rows
. . imported "SOE"."LOGON":"SYS_P86"            0 KB        0 rows
. . imported "SOE"."LOGON":"SYS_P87"            0 KB        0 rows
. . imported "SOE"."LOGON":"SYS_P88"            0 KB        0 rows
. . imported "SOE"."LOGON":"SYS_P89"            0 KB        0 rows
. . imported "SOE"."LOGON":"SYS_P90"            0 KB        0 rows
. . imported "SOE"."LOGON":"SYS_P91"            0 KB        0 rows
. . imported "SOE"."LOGON":"SYS_P92"            0 KB        0 rows
. . imported "SOE"."LOGON":"SYS_P93"            0 KB        0 rows
. . imported "SOE"."LOGON":"SYS_P94"            0 KB        0 rows
. . imported "SOE"."LOGON":"SYS_P95"            0 KB        0 rows
. . imported "SOE"."LOGON":"SYS_P96"            0 KB        0 rows
. . imported "SOE"."LOGON":"SYS_P97"            0 KB        0 rows
. . imported "SOE"."LOGON":"SYS_P98"            0 KB        0 rows
. . imported "SOE"."LOGON":"SYS_P99"            0 KB        0 rows


. . imported "SOE"."ORDER_ITEMS":"SYS_P37"  83.52 MB  3878161 rows
. . imported "SOE"."ORDER_ITEMS":"SYS_P38"  83.54 MB  3879173 rows
. . imported "SOE"."ORDER_ITEMS":"SYS_P42"  83.68 MB  3886060 rows
. . imported "SOE"."ORDER_ITEMS":"SYS_P40"  83.53 MB  3878529 rows
. . imported "SOE"."ORDERS":"SYS_P55"       34.78 MB  1109634 rows
. . imported "SOE"."ORDERS":"SYS_P58"       34.80 MB  1110293 rows
. . imported "SOE"."ORDER_ITEMS":"SYS_P49"  83.48 MB  3876542 rows
. . imported "SOE"."ORDERS":"SYS_P67"       34.81 MB  1110387 rows
. . imported "SOE"."ORDER_ITEMS":"SYS_P48"  83.51 MB  3877853 rows
. . imported "SOE"."ORDERS":"SYS_P53"       34.75 MB  1108451 rows
. . imported "SOE"."ORDERS":"SYS_P54"       34.76 MB  1108858 rows
. . imported "SOE"."ORDERS":"SYS_P56"       34.74 MB  1108136 rows
. . imported "SOE"."CUSTOMERS":"SYS_P25"    79.59 MB  1231898 rows
. . imported "SOE"."ORDERS":"SYS_P59"       34.76 MB  1108882 rows
. . imported "SOE"."ORDER_ITEMS":"SYS_P50"  83.54 MB  3879171 rows
. . imported "SOE"."CUSTOMERS":"SYS_P28"    79.60 MB  1232241 rows
. . imported "SOE"."ORDERS":"SYS_P61"       34.75 MB  1108457 rows
. . imported "SOE"."ORDERS":"SYS_P62"       34.74 MB  1108368 rows
. . imported "SOE"."ORDERS":"SYS_P64"       34.73 MB  1107910 rows
. . imported "SOE"."ORDERS":"SYS_P63"       34.75 MB  1108639 rows
. . imported "SOE"."ORDERS":"SYS_P65"       34.71 MB  1107353 rows
. . imported "SOE"."ORDERS":"SYS_P66"       34.74 MB  1108136 rows
. . imported "SOE"."CUSTOMERS":"SYS_P35"    79.56 MB  1231608 rows
. . imported "SOE"."ORDERS":"SYS_P68"       34.68 MB  1106385 rows
. . imported "SOE"."CUSTOMERS":"SYS_P23"    79.52 MB  1230908 rows
. . imported "SOE"."CUSTOMERS":"SYS_P29"    79.47 MB  1230165 rows
. . imported "SOE"."CUSTOMERS":"SYS_P26"    79.53 MB  1231057 rows
. . imported "SOE"."CUSTOMERS":"SYS_P22"    79.45 MB  1230006 rows
. . imported "SOE"."CUSTOMERS":"SYS_P21"    79.42 MB  1229515 rows
. . imported "SOE"."ORDERS":"SYS_P57"       34.83 MB  1111038 rows
. . imported "SOE"."CUSTOMERS":"SYS_P24"    79.43 MB  1229509 rows
. . imported "SOE"."ORDERS":"SYS_P60"       34.83 MB  1110967 rows
. . imported "SOE"."CUSTOMERS":"SYS_P30"    79.39 MB  1229121 rows
. . imported "SOE"."CUSTOMERS":"SYS_P31"    79.42 MB  1229576 rows
. . imported "SOE"."ORDER_ITEMS":"SYS_P45"  83.55 MB  3879617 rows
. . imported "SOE"."CUSTOMERS":"SYS_P32"    79.39 MB  1229144 rows
. . imported "SOE"."CUSTOMERS":"SYS_P33"    79.29 MB  1227481 rows
. . imported "SOE"."CUSTOMERS":"SYS_P34"    79.42 MB  1229449 rows
. . imported "SOE"."CUSTOMERS":"SYS_P36"    79.29 MB  1227458 rows
. . imported "SOE"."ORDER_ITEMS":"SYS_P46"  83.53 MB  3878739 rows
. . imported "SOE"."ORDER_ITEMS":"SYS_P52"  83.40 MB  3872506 rows
. . imported "SOE"."CUSTOMERS":"SYS_P27"    79.44 MB  1229726 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/VIEW/VIEW
ORA-39082: Object type PACKAGE_BODY:"SOE"."ORDERENTRY" created with compilation warnings


Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 13:39:44

Note that in the above output listing there is a single compilation error reported by Data Pump: the package body SOE.ORDERENTRY was created with compilation warnings. This is very typical of the sort of errors encountered during migration and upgrade actions. Before dumping data and schema DDL, query the source database's data dictionary for schema objects that are invalid. After the migration/upgrade is completed, reconcile the lists of invalid objects. This case illustrates why pre-testing and rehearsal are so important. Tracing the compilation error on soe's orderentry.sql by manually attempting to recompile, it becomes apparent that an Oracle object, dbms_lock, called by the migrating schema's package body, is not available.

SQL> set feedback on
SQL> show error
No errors.
SQL> @soepackage.sql
Type created.
...
Package created.
...
Warning: Package Body created with compilation errors.
SQL> show error
Errors for PACKAGE BODY ORDERENTRY:

LINE/COL ERROR
-------- -----------------------------------------------------------------
67/9     PL/SQL: Statement ignored
67/9     PLS-00201: identifier 'DBMS_LOCK' must be declared

The error message from the failed compilation attempt points to line 67 of the source and indicates that an Oracle supplied package called dbms_lock appears to be missing. The SQL script to create it was found under $ORACLE_HOME/rdbms/admin. After verifying the system package existed and was valid, it became evident that the schema holder did not have the execute privilege on this package, which was therefore being reported as missing. After granting the execute privilege on dbms_lock to the soe schema, all packages successfully compiled. These are the types of glitches that are best discovered in advance. Again, preparation before the migration will prevent issues like these from becoming an 11th hour crisis.

As with Data Pump expdp, Data Pump impdp performance can also be improved by the PARALLEL parameter. The PARALLEL parameter is only available in Oracle RDBMS Enterprise Edition. A sweet spot must again be determined by experimentation. Too high a degree of parallelism (DOP) increases the elapsed time of the load. Too low a DOP also increases the elapsed time of the load.
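A hedged sketch of the fix described above, together with the pre- and post-migration invalid object check; the schema name follows this paper's example:

SQL> -- Fix for the PLS-00201 error on DBMS_LOCK
SQL> grant execute on dbms_lock to soe;

SQL> -- Run on both source and target; reconcile the two result sets
SQL> SELECT owner, object_type, object_name
  2  FROM dba_objects
  3  WHERE status = 'INVALID'
  4  ORDER BY owner, object_type, object_name;

Any object invalid on the target but valid on the source warrants investigation before the migration is declared complete.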

3.4 Data Pump Network Import

In this scenario, the database to be migrated/upgraded is version 11gR1. The target database is Oracle Enterprise RDBMS for Red Hat Enterprise Linux version 11.2.0.1.0 (11gR2). This fits within the compatibility matrix for using the Data Pump option to migrate/upgrade over the network. Below is a compatibility/functionality matrix for networked Oracle Data Pump in 10g/11g environments.

Source DB Version    Target DB Version    File load/unload    Network Load
10gR1                10gR2                yes                 no
10gR2                11gR2                yes                 no
11gR1                11gR2                yes                 yes

Table 1: Data Pump Import Network Compatibility Matrix

In the migration/upgrade cases described in this paper, both servers are identical in memory and processing power, so no consideration need be given regarding where best to perform resource intensive procedures. However, if the source platform is significantly slower than the target server, the network based Data Pump migration/upgrade option may be the optimal choice. This is because the bulk of the processing will take place on the target server and few resources are tapped on the source server.

With a network based Data Pump import procedure, a database link is established between the source and target servers. Only the import Data Pump executable is used. The network link becomes the argument provided in place of impdp's dump file. Data Pump can then extract the schemas and data from the source database and load them directly into the target database. This saves the overhead of creating a dump file, populating it, transferring it to the new server and then reading and loading that dump file into the new RDBMS environment. The dump file transfer and subsequent reload is often the largest time block for migrations, and optimizing it by using the network option of Data Pump import can significantly reduce the total time for a migration/upgrade.

As mentioned above, a database link must be configured in order to perform a network Data Pump import. A database link relies upon an existing tnsnames.ora entry naming the service name, or SID, of the remote database. In this case the remote source database's service name is TST11GDB. Confirm this configuration exists with a tnsping test.

$ tnsping TST11GDB

TNS Ping Utility for Linux: Version 11.2.0.1.0 - Production on 17-DEC-2009 09:57:21

Copyright (c) 1997, 2009, Oracle. All rights reserved.

Used parameter files:
/oracle/app/oracle/product/11.2.0/grid_1/network/admin/sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = lkg.lab.bos.redhat.com)(PORT = 1521))) (CONNECT_DATA = (SERVICE_NAME = tst11gdb)))
OK (0 msec)

This confirms the pre-existing SQL*Net connectivity required for a database link. Next, log in as the user that will import the data from the source system. A database link will be defined for that user to the remote source's schema so that Data Pump can communicate with the remote server during the network import. All work is performed on the target system; the only reference to the source system is via the database link. To create the database link on the target, log onto the target server as the target schema user and issue the following command:

SQL> CREATE DATABASE LINK soe_win1 CONNECT TO soe IDENTIFIED BY soe USING 'TST11GDB';

SQL> SELECT count(*) FROM warehouses@soe_win1;

  COUNT(*)
----------
       124

The database link soe_win1 now allows the target database user soe to access the source database user soe's entire schema. In this migration scenario the schema holder name is not being changed, but cross mapping from one user to another is also possible. With network based Data Pump import, the new destination schema must be created in the target database in advance. Additionally, the schema must be granted read and write privileges on the Directory Object and permission to create database links.

SQL> create user soe identified by soe;

User created.

SQL> grant connect, resource, dba to soe;

Grant succeeded.

SQL> create directory TRGDIR as '/oracle';

Directory created.

SQL> grant read, write on directory TRGDIR to soe;

Grant succeeded.

The command line below initiates a full schema transfer from the Windows based source database to the Linux based target server. A degree of parallelism of 16 is used. Note that even though there will be no dump file, a directory object had to be created and specified for the log file. Data is loaded directly into the target database via a network connection to the source database. No dump file I/O need be performed on either system, saving enormous overhead. Network bandwidth, coupled with the I/O and processing capabilities of the target system, is the limiting factor for this type of migration and upgrade. Almost all of the processing is performed on the target system.

$ impdp soe/soe directory=TRGDIR network_link=soe_win1 logfile=soe1.log parallel=16

Import: Release 11.2.0.1.0 - Production on Thu Dec 17 12:59:38 2009

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining and Real Application Testing options
Starting "SOE"."SYS_IMPORT_SCHEMA_01": soe/******** directory=TRGDIR network_link=soe_win1 logfile=soe1.log parallel=16
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 2.352 GB


Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"SOE" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . imported "SOE"."INVENTORIES"             35712 rows
. . imported "SOE"."PRODUCT_DESCRIPTIONS"      288 rows
. . imported "SOE"."PRODUCT_INFORMATION"       288 rows
. . imported "SOE"."WAREHOUSES"                124 rows
. . imported "SOE"."ORDERS"               11748975 rows
. . imported "SOE"."CUSTOMERS"            11428783 rows
. . imported "SOE"."ORDER_ITEMS"          41132792 rows
. . imported "SOE"."LOGON"                       0 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
ORA-39082: Object type PACKAGE_BODY:"SOE"."ORDERENTRY" created with compilation warnings
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
Job "SOE"."SYS_IMPORT_SCHEMA_01" completed with 2 error(s) at 13:10:40

This process transferred all of the data and stored logic for the user soe on the source database to the target database. Simultaneously, the database environment was changed from 11.1.0.6 to 11.2.0.1. Again note that a network based Data Pump import like this one can only occur between servers sharing the exact same Oracle version, or where the target server is one release level higher (11gR1 to 11gR2). Despite this limitation, networked Data Pump is arguably the quickest and most efficient method to simultaneously migrate and upgrade. There is no need to provide space for a dump file for the export or import. With large databases, not having to provision large amounts of scratch space is a significant benefit.

Be conscious of network bandwidth. As the PARALLEL parameter is increased, more bandwidth will be consumed and can become a potential bottleneck. The network traffic involved with this type of migration/upgrade should be accounted for so that sufficient capacity can be made available and other non related processing is not severely impacted.


While network based Data Pump imports are limited to 11g migrations and upgrades only, the performance gains are significant. It presents an ideal method of migrating small to medium sized Windows Server based Oracle database installations to Red Hat Enterprise Linux. The table below shows the time taken to perform a Data Pump export to a dump file, the file transfer to the new host, and the Data Pump import from the dump file into the 11gR2 target database. Compare this to the elapsed time of the single network Data Pump import. (These operations were performed on an identical database, with all other Data Pump parameters the same for both methods.)

Task                    Data Pump Network    Data Pump Dumpfile/Transfer
Export                  n/a                  19
File Transfer           n/a                  24
Import                  36                   18
Total Required Time     36 minutes           61 minutes

Table 2: Data Pump File based versus Data Pump Network (times in minutes)

3.5 Lateral Migration Using Transportable Database

So far, discussion has concentrated on migrating application schemas from one Oracle environment to another via Data Pump. In the scenarios described, Data Pump based its exports on schemas rather than moving the entire Oracle database structure (i.e., user data and accounts are moved from one supporting RDBMS infrastructure to another). These scenarios rely on Data Pump's ability to extract data and Data Definition Language from one version of Oracle and transfer it to another. This works for many if not most database environments. However, when there are hundreds or perhaps thousands of user accounts and application schemas to migrate, Transportable Database may be a better option. When a database has a great many interrelated schemas, referential integrity can become entangled and the number of objects to manage can become unwieldy. Transportable Tablespace/Database technology allows the DBA to move the whole database without reloading data. The Transportable Database feature is a migration-only feature; if an RDBMS upgrade is also desired, it must take place as a subsequent step. This is why it is referred to as a 'lateral' migration.


Transportable Database (TDB) is an extension of Oracle's Transportable Tablespace feature. It can move an entire database to another operating system platform without extracting and reloading the data. The physical datafiles of one Oracle database are prepared and sent to another platform; after some special Oracle/RMAN manipulation on either the source or the target server, the datafiles can be transferred to and accessed by another Oracle instance on another operating system. Oracle versions must be the same (11gR1 to 11gR1 or 11gR2 to 11gR2 only). Since Transportable Database is a migration technology only, another tool must be used in conjunction with it if an upgrade is desired. In the case demonstrated, the Oracle database on the Windows Server 2008 Enterprise platform is 11gR1 and the target system running on Red Hat Enterprise Linux is 11gR2. The datafiles are first migrated to Linux via Transportable Database and then upgraded with Oracle's Database Upgrade Assistant (DBUA).

3.6 Transportable Database Migration

A special procedure called DBMS_TDB.CHECK_EXTERNAL queries the data dictionary of the source database looking for external objects such as Oracle external tables or directory objects. This check is performed as the sys user. It flags any objects that will not be migrated by the Transportable Database operation.

SQL> set serveroutput on
SQL> declare
  2    x boolean;
  3  begin
  4    x := dbms_tdb.check_external;
  5  end;
  6  /
The following directories exist in the database:
SYS.IDR_DIR, SYS.AUDIT_DIR, SYS.DATA_PUMP_DIR, SYS.ORACLE_OCM_CONFIG_DIR

PL/SQL procedure successfully completed.

The output from the above commands lists no external tables but several directory objects. These must be recreated on the target server if they do not already exist. The directory objects listed above, however, are all default objects and should already exist in the target database. If they do not, query the DBA_DIRECTORIES view on the source server to get the information needed to create them. The exact file specification does not need to be the same, but the directory object name must be identical and it must point to an existing, valid file specification.


SQL> select directory_path from dba_directories;

DIRECTORY_PATH
--------------------------------------------------------------------------------
c:\oracle\app\diag\rdbms\tst11gdb\tst11gdb\ir
/tmp/
c:\oracle\app/admin/tst11gdb/dpdump/
c:\oracle\app\product\11.1.0\db_1/ccr/state
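Should a required directory object be missing on the target, it can be recreated there. A minimal sketch, with an illustrative Linux path, follows:

SQL> CREATE OR REPLACE DIRECTORY DATA_PUMP_DIR AS '/oracle11R1/admin/lin11gdb/dpdump';

Directory created.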

If the source database has any Oracle On-Line Analytical Processing (OLAP) workspaces, they must be exported in a special manner. The system procedure DBMS_AW.EXECUTE is used to export OLAP Analytic Workspaces on the source and then to re-import them on the target server.

To initiate the Transportable Database migration operation, the source database must be placed in read-only mode. This requires a shutdown and restart in read-only mode.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.
Total System Global Area 1.1024E+10 bytes
Fixed Size                  2124288 bytes
Variable Size            4373190144 bytes
Database Buffers         6643777536 bytes
Redo Buffers                4427776 bytes
Database mounted.
SQL> alter database open read only;
Database altered.
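As a rough sketch only (the workspace name soe_aw and the directory alias dmpdir are hypothetical, and the exact OLAP DML should be verified against the OLAP DML Reference for the release in use), the export and re-import calls take this general form:

SQL> exec dbms_aw.execute('AW ATTACH soe_aw RO');
SQL> exec dbms_aw.execute('EXPORT ALL TO EOF ''dmpdir/soe_aw.eif''');

-- and on the target server, with the workspace attached read/write:
SQL> exec dbms_aw.execute('IMPORT ALL FROM EOF ''dmpdir/soe_aw.eif'' DATA DFNS');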

Perform one last check to make sure the source database is ready for transport to the target platform. Using the exact character string selected from querying V$DB_TRANSPORTABLE_PLATFORM as an argument, call DBMS_TDB.CHECK_DB to ensure the database is ready and that the target is a supported platform.


SQL> SELECT PLATFORM_NAME FROM V$DB_TRANSPORTABLE_PLATFORM;

PLATFORM_NAME
--------------------------------------------------------------------------------
Microsoft Windows IA (32-bit)
Linux IA (32-bit)
HP Tru64 UNIX
Linux IA (64-bit)
HP Open VMS
Microsoft Windows IA (64-bit)
Linux 64-bit for AMD
Microsoft Windows 64-bit for AMD
Solaris Operating System (x86)
HP IA Open VMS
Solaris Operating System (AMD64)

11 rows selected.

SQL> SET SERVEROUTPUT ON
SQL> declare
  2    retcode boolean;
  3  begin
  4    retcode := dbms_tdb.check_db('Linux IA (64-bit)', dbms_tdb.skip_none);
  5  end;
  6  /

PL/SQL procedure successfully completed.

The next step is performed with the RMAN utility on the source server. Two options for conversion are available: the conversion can take place on either the source or the target platform. The I/O and CPU requirements for this process are intensive, so the system with the greater I/O and CPU performance should be used. The RMAN CONVERT command is also capable of parallelism, which can be used to reduce conversion time. In this example the Degree of Parallelism (DOP) was set to four. The existing datafile file specification was supplied to locate the files, and the file specification for the converted files was also indicated. Checking that directory afterward, the converted Oracle datafiles are ready for transport to the target server. In addition to the datafile generation, a conversion script and an init.ora file are also created; a sketch of the CONVERT DATABASE invocation and the generated script follow.
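The exact RMAN command line used was not captured in this paper, but a plausible reconstruction, using only documented CONVERT DATABASE options (NEW DATABASE, TRANSPORT SCRIPT, TO PLATFORM, PARALLELISM, DB_FILE_NAME_CONVERT), would be:

RMAN> CONVERT DATABASE
        NEW DATABASE 'LIN11GDB'
        TRANSPORT SCRIPT 'F:\TEMP\TRANSPORT_TST11GDB.SQL'
        TO PLATFORM 'Linux 64-bit for AMD'
        PARALLELISM 4
        DB_FILE_NAME_CONVERT 'F:\ORACLE\ORADATA','F:\TEMP';

The conversion script generated by this command contains the following commands: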


STARTUP NOMOUNT PFILE = 'F:\TEMP\INIT_LIN11GDB.ORA';
RUN {
  CONVERT FROM PLATFORM 'Microsoft Windows 64-bit for AMD'
  PARALLELISM 4
  DATAFILE 'F:\ORACLE\ORADATA\TST11GDB\SYSTEM01.DBF'  FORMAT 'F:\TEMP\TST11GDB\SYSTEM01.DBF'
  DATAFILE 'F:\ORACLE\ORADATA\TST11GDB\USERS01.DBF'   FORMAT 'F:\TEMP\TST11GDB\USERS01.DBF'
  DATAFILE 'F:\ORACLE\ORADATA\SOE.DBF'                FORMAT 'F:\TEMP\SOE.DBF'
  DATAFILE 'F:\ORACLE\ORADATA\TST11GDB\UNDOTBS01.DBF' FORMAT 'F:\TEMP\TST11GDB\UNDOTBS01.DBF'
  DATAFILE 'F:\ORACLE\ORADATA\TST11GDB\SYSAUX01.DBF'  FORMAT 'F:\TEMP\TST11GDB\SYSAUX01.DBF'
;}

The STARTUP NOMOUNT at the head of the script is the startup command that will be run on the target server once the datafiles are transferred there. The RMAN CONVERT command itself does several things. First, it creates converted copies of the source database's datafiles and places them under the F:\TEMP file system. It also creates the script to be run on the target server, as well as the init.ora file necessary for the initial startup and completion of the Transportable Database migration.

Starting conversion at source at 22-DEC-09
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4
allocated channel: ORA_DISK_5
channel ORA_DISK_5: SID=250 device type=DISK
allocated channel: ORA_DISK_6
channel ORA_DISK_6: SID=247 device type=DISK
allocated channel: ORA_DISK_7
channel ORA_DISK_7: SID=263 device type=DISK
allocated channel: ORA_DISK_8
channel ORA_DISK_8: SID=251 device type=DISK


Directory SYS.IDR_DIR found in the database
Directory SYS.AUDIT_DIR found in the database
Directory SYS.DATA_PUMP_DIR found in the database
Directory SYS.ORACLE_OCM_CONFIG_DIR found in the database

User SYS with SYSDBA and SYSOPER privilege found in password file
channel ORA_DISK_2: starting datafile conversion
input datafile file number=00005 name=F:\ORACLE\ORADATA\SOE.DBF
channel ORA_DISK_3: starting datafile conversion
input datafile file number=00003 name=F:\ORACLE\ORADATA\TST11GDB\UNDOTBS01.DBF
channel ORA_DISK_4: starting datafile conversion
input datafile file number=00002 name=F:\ORACLE\ORADATA\TST11GDB\SYSAUX01.DBF
channel ORA_DISK_5: starting datafile conversion
input datafile file number=00001 name=F:\ORACLE\ORADATA\TST11GDB\SYSTEM01.DBF
channel ORA_DISK_6: starting datafile conversion
input datafile file number=00004 name=F:\ORACLE\ORADATA\TST11GDB\USERS01.DBF
converted datafile=F:\TEMP\TST11GDB\USERS01.DBF
channel ORA_DISK_6: datafile conversion complete, elapsed time: 00:00:16
converted datafile=F:\TEMP\TST11GDB\SYSAUX01.DBF
channel ORA_DISK_4: datafile conversion complete, elapsed time: 00:09:58
converted datafile=F:\TEMP\TST11GDB\SYSTEM01.DBF
channel ORA_DISK_5: datafile conversion complete, elapsed time: 00:10:43
converted datafile=F:\TEMP\TST11GDB\UNDOTBS01.DBF
channel ORA_DISK_3: datafile conversion complete, elapsed time: 00:15:26
converted datafile=F:\TEMP\SOE.DBF
channel ORA_DISK_2: datafile conversion complete, elapsed time: 02:04:42
Edit init.ora file F:\TEMP\INIT_LIN11GDB.ORA. This PFILE will be used to
create the database on the target platform
Run SQL script F:\TEMP\TRANSPORT_TST11GDB.SQL on the target platform to
create database
To recompile all PL/SQL modules, run utlirp.sql and utlrp.sql on the target
platform
To change the internal database identifier, use DBNEWID Utility
Finished conversion at source at 22-DEC-09

Both the init.ora file and the final conversion/startup script must be edited: the file system specifications found in the FORMAT clauses of the above command are not Linux compliant. Before the script can successfully execute on the Linux server, the FORMAT file specifications must be converted to the Linux file specification format. After editing the generated script TRANSPORT_TST11GDB.SQL to accommodate the Linux file layout, the script is ready:

STARTUP NOMOUNT PFILE='F:\TEMP\INIT_LIN11GDB.ORA'
CREATE CONTROLFILE REUSE SET DATABASE "LIN11GDB" RESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100


    MAXINSTANCES 8
    MAXLOGHISTORY 2336
LOGFILE
    GROUP 1 '/oracle11R1/oradata/redo1.log' SIZE 50M,
    GROUP 2 '/oracle11R1/oradata/redo2.log' SIZE 50M,
    GROUP 3 '/oracle11R1/oradata/redo3.log' SIZE 50M
DATAFILE
    '/oracle11R1/oradata/SYSTEM01.DBF',
    '/oracle11R1/oradata/SYSAUX01.DBF',
    '/oracle11R1/oradata/UNDOTBS01.DBF',
    '/oracle11R1/oradata/USERS01.DBF',
    '/oracle11R1/oradata/SOE.DBF'
CHARACTER SET WE8MSWIN1252

The init.ora file generated by the RMAN CONVERT command flags several entries that must be edited before the database can be started:

# Please change the values of the following parameters:
control_files              = "F:\TEMP\LIN11GDB"
db_recovery_file_dest      = "F:\TEMP\oradata"
db_recovery_file_dest_size = 2147483648
audit_file_dest            = "F:\TEMP\ADUMP"
db_name                    = "LIN11GDB"

# Please review the values of the following parameters:
# __oracle_base            = "c:\oracle\app"
__shared_pool_size         = 1207959552
__large_pool_size          = 67108864
__java_pool_size           = 268435456
__streams_pool_size        = 67108864
__sga_target               = 8321499136
__db_cache_size            = 6643777536
__shared_io_pool_size      = 0
remote_login_passwordfile  = "EXCLUSIVE"
db_domain                  = ""
dispatchers                = "(PROTOCOL=TCP) (SERVICEbXDB)"
__pga_aggregate_target     = 2751463424

# The values of the following parameters are from source database:


processes                  = 500
sessions                   = 555
memory_target              = 11072962560
db_block_size              = 8192
compatible                 = "11.1.0.0.0"
log_archive_format         = "ARC%S_%R.%T"
undo_tablespace            = "UNDOTBS1"
audit_trail                = "NONE"
open_cursors               = 300
# diagnostic_dest          = "C:\ORACLE\APP"

Now the startup script can be run. The converted files are in place and the $ORACLE_SID and $ORACLE_HOME environment variables are set. The transport script will start and stop the database a total of three times. The first instance startup creates the control files, temp file, and redo logs for the newly transported database on the new platform.

SQL> @TRANSPORT_TST11GDB.SQL
ORACLE instance started.
Total System Global Area 1.1024E+10 bytes
Fixed Size                  2147592 bytes
Variable Size            4362078968 bytes
Database Buffers         6643777536 bytes
Redo Buffers               15515648 bytes
Control file created.
Database altered.
Tablespace altered.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Your database has been created successfully!
* There are many things to think about for the new database. Here
* is a checklist to help you stay on track:
* 1. You may want to redefine the location of the directory objects.
* 2. You may want to change the internal database identifier (DBID)
*    or the global database name for this database. Use the
*    NEWDBID Utility (nid).
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Database closed.
Database dismounted.
ORACLE instance shut down.


The newly instantiated database is shut down and restarted in UPGRADE mode. The database then recompiles all of its objects under the new operating system using the Oracle administration script utlirp.sql.

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP UPGRADE PFILE='/oracle11R1/oradata/INIT_LIN11GDB.ORA'
SQL> @?/rdbms/admin/utlirp.sql

The output from this script will be extensive. Once utlirp.sql has completed, the database is shut down again to take it out of upgrade mode. The database instance is next started and opened normally. Once opened, the utlrp.sql Oracle administration script is run. This recompiles all non-SYS and non-SYSTEM objects that may be invalid. Potentially this can take many hours; the more objects in the database, the longer it will take.

SQL> STARTUP PFILE='/oracle11R1/oradata/INIT_LIN11GDB.ORA'
ORACLE instance started.
Total System Global Area 1.1024E+10 bytes
Fixed Size                  2147592 bytes
Variable Size            4362078968 bytes
Database Buffers         6643777536 bytes
Redo Buffers               15515648 bytes
Database mounted.
Database opened.
SQL> -- The following step will recompile all PL/SQL modules.
SQL> -- It may take several hours to complete.
SQL> @@?/rdbms/admin/utlrp.sql

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN  2009-12-23 10:57:43

DOC> The following PL/SQL block invokes UTL_RECOMP to recompile invalid
DOC> objects in the database. Recompilation time is proportional to the
DOC> number of invalid objects in the database, so this command may take
DOC> a long time to execute on a database with a large number of invalid
DOC> objects.

PL/SQL procedure successfully completed.

DOC> The following query reports the number of objects that have compiled
DOC> with errors (objects that compile with errors have status set to 3 in


DOC> obj$). If the number is higher than expected, please examine the error
DOC> messages reported with each object (using SHOW ERRORS) to see if they
DOC> point to system misconfiguration or resource constraints that must be
DOC> fixed before attempting to recompile these objects.
DOC>#

OBJECTS WITH ERRORS
-------------------
                  0

DOC> The following query reports the number of errors caught during
DOC> recompilation. If this number is non-zero, please query the error
DOC> messages in the table UTL_RECOMP_ERRORS to see if any of these errors
DOC> are due to misconfiguration or resource constraints that must be
DOC> fixed before objects can compile successfully.
DOC>#

ERRORS DURING RECOMPILATION
---------------------------
                          0

PL/SQL procedure successfully completed.

Invoking Ultra Search Install/Upgrade validation procedure VALIDATE_WK
Ultra Search VALIDATE_WK done with no error

PL/SQL procedure successfully completed.

In this example, an Oracle 11gR1 database running on Windows Server 2008 Enterprise has been successfully migrated to a Red Hat Enterprise Linux environment running Oracle 11gR1 binaries. The entire database, including the Oracle internal data dictionary and all user objects, was moved intact within its tablespaces to the new platform. No unload or load operations were performed. As database size grows, this may be a more viable option than a Data Pump based migration; in cases where data quality issues complicate or even prevent load/unload operations due to constraint violations, it may be the only option. Once again, extensive preparation and rehearsal is the key to a successful, speedy database migration. There are multiple options available with Transportable Database: the RMAN file conversion can be performed on either the source or the target server, so it is well advised to test which platform runs the CONVERT faster. The file transfer is also an area where optimization offers big returns; use SAN replication technology or shared storage devices whenever possible.
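Where shared storage is not available, a straightforward network copy can be used instead. As an illustrative sketch only (the hostname winhost is hypothetical, and an SSH service, e.g. via Cygwin, is assumed to be running on the Windows host):

$ scp -r oracle@winhost:/cygdrive/f/TEMP/TST11GDB /oracle11R1/oradata/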


3.7 Transportable Database Migration ASM Based

The example of Transportable Database migration described above was performed on a Windows Server hosted Oracle database using NTFS based storage, migrated to Red Hat Enterprise Linux ext3 file systems. The following Transportable Database migration moves an ASM based, Windows Server 2008 Enterprise hosted Oracle database to a Red Hat Enterprise Linux 5.4 server using ASM based storage. This is a sqlplus listing of the ASM based datafiles on the Windows Server 2008 Enterprise source host:

SQL> select file_name from dba_data_files;

FILE_NAME
--------------------------------------------------------------------------------
+DATA/testasm1/datafile/system.264.708008961
+DATA/testasm1/datafile/sysaux.265.708008973
+DATA/testasm1/datafile/undotbs1.266.708008977
+DATA/testasm1/datafile/users.268.708008989
+DATA/testasm1/datafile/tab1.dbf
+DATA/testasm1/datafile/tab2.dbf

6 rows selected.

This database is located on a single ASM disk group called +DATA. ASM has automatically created the subdirectory structure based on the instance name and ASM's default file specification and naming conventions. Assuming the previously described procedures to check the 'transportability' of the database have been performed, the database is ready to be started in read-only mode. RMAN is once again utilized to extract copies of these datafiles from the ASM environment onto a file system based scratch area.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.
Total System Global Area 2.0577E+10 bytes
Fixed Size                  2135264 bytes
Variable Size            1.4397E+10 bytes
Database Buffers         6174015488 bytes


Redo Buffers                4427776 bytes
Database mounted.
SQL> alter database open read only;
Database altered.
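As a sketch of the extraction step (the staging path is illustrative), RMAN image copies move the datafiles from the ASM disk group onto a regular file system:

RMAN> BACKUP AS COPY DATABASE FORMAT 'C:\STAGE\%U';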

The database conversion can occur on either the target or the source server. The first example converted on the source server; in this example the conversion occurs on the target server, and the syntax of the RMAN CONVERT command reflects this.
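A sketch of the source-side command for target-side conversion follows; the script names and staging paths are illustrative. Run against the read-only source database, CONVERT DATABASE ON TARGET PLATFORM converts nothing locally; instead it generates a convert script to be executed by RMAN on the Linux server, along with the transport script and PFILE described earlier:

RMAN> CONVERT DATABASE ON TARGET PLATFORM
        CONVERT SCRIPT 'C:\STAGE\CONVERT_LIN11GDB.RMAN'
        TRANSPORT SCRIPT 'C:\STAGE\TRANSPORT_LIN11GDB.SQL'
        NEW DATABASE 'LIN11GDB'
        FORMAT 'C:\STAGE\%U';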

3.8 Database Upgrade Using Database Upgrade Assistant

In both of the Transportable Database scenarios described, the database can be migrated only to an $ORACLE_HOME on the target server that is identical to the source, down to the CPU patch level. Now the database needs an upgrade from 11gR1 to 11gR2. The new Oracle binaries are first installed in a separate $ORACLE_HOME but under the same $ORACLE_BASE. It is a time saving best practice to apply any required patches, especially security related Critical Patch Updates (CPUs), before upgrading the database with those binaries; this saves another iteration of patching and recompilation. Test any patches before using them in a production environment.

Oracle provides a GUI utility called Database Upgrade Assistant (DBUA). It is executed from the directory containing the latest Oracle binaries, that is, the new target's $ORACLE_HOME/bin. Before running Database Upgrade Assistant some preparation is required. Under $ORACLE_HOME/rdbms/admin Oracle provides a pre-upgrade evaluation script that identifies these steps for a particular database. The script's name changes with the version of the binaries it ships with; for Oracle Enterprise RDBMS 11gR2 it is called utlu112i.sql. Check the header information of the script to confirm that you have the correct file.

Rem
Rem $Header: rdbms/admin/utlu112i.sql /st_rdbms_11.2.0.1.0/1 2009/07/23 14:09:03 cdilling Exp $
Rem


Rem utlu112i.sql
Rem
Rem Copyright (c) 2006, 2009, Oracle and/or its affiliates.
Rem All rights reserved.
Rem
Rem NAME
Rem   utlu112i.sql - UTiLity Upgrade Information
Rem
Rem DESCRIPTION
Rem   This script provides information about databases to be
Rem   upgraded to 11.2.
Rem
Rem   Supported releases: 9.2.0, 10.1.0, 10.2.0 and 11.1.0
Rem
Rem NOTES
Rem   Run connected AS SYSDBA to the database to be upgraded
Rem

Copy this file to a directory accessible by the sqlplus utility of the pre-upgrade database environment and run it as sysdba. The script examines the pre-upgrade database and identifies any issues that must be resolved prior to upgrade. In this case, the utility discovered a number of issues that must be resolved before Database Upgrade Assistant can be run successfully.

SQL> @/tmp/utlu112i.sql
Oracle Database 11.2 Pre-Upgrade Information Tool 01-07-2010 15:01:36
.
**********************************************************************
Database:
**********************************************************************
--> name:          DB11GR1
--> version:       11.1.0.6.0
--> compatible:    11.1.0.0.0
--> blocksize:     8192
--> platform:      Linux 64-bit for AMD
--> timezone file: V4
.
**********************************************************************
Tablespaces: [make adjustments in the current environment]
**********************************************************************
--> SYSTEM tablespace is adequate for the upgrade.
.... minimum required size: 1034 MB
.... AUTOEXTEND additional space required: 344 MB
--> SYSAUX tablespace is adequate for the upgrade.


.... minimum required size: 877 MB
.... AUTOEXTEND additional space required: 310 MB
--> UNDOTBS1 tablespace is adequate for the upgrade.
.... minimum required size: 674 MB
--> TEMP tablespace is adequate for the upgrade.
.... minimum required size: 61 MB
.... AUTOEXTEND additional space required: 22 MB
.
**********************************************************************
Flashback: OFF
**********************************************************************
**********************************************************************
Update Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
WARNING: --> "java_pool_size" needs to be increased to at least 128 MB
.
**********************************************************************
Renamed Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
-- No renamed parameters found. No changes are required.
.
**********************************************************************
Obsolete/Deprecated Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
-- No obsolete parameters found. No changes are required
.
**********************************************************************
Components: [The following database components will be upgraded or installed]
**********************************************************************
--> Oracle Catalog Views         [upgrade]  VALID
--> Oracle Packages and Types    [upgrade]  VALID
--> JServer JAVA Virtual Machine [upgrade]  VALID
--> Oracle XDK for Java          [upgrade]  VALID
--> Oracle Workspace Manager     [upgrade]  VALID
--> OLAP Analytic Workspace      [upgrade]  VALID
--> OLAP Catalog                 [upgrade]  VALID
--> EM Repository                [upgrade]  VALID
--> Oracle Text                  [upgrade]  VALID
--> Oracle XML Database          [upgrade]  VALID
--> Oracle Java Packages         [upgrade]  VALID
--> Oracle interMedia            [upgrade]  VALID
--> Spatial                      [upgrade]  VALID
--> Oracle Ultra Search          [upgrade]  VALID
--> Expression Filter            [upgrade]  VALID
--> Rule Manager                 [upgrade]  VALID
--> Oracle Application Express   [upgrade]  VALID
--> Oracle OLAP API              [upgrade]  VALID
.
**********************************************************************
Miscellaneous Warnings
**********************************************************************


WARNING: --> Database is using a timezone file older than version 11.
.... After the release migration, it is recommended that DBMS_DST package
.... be used to upgrade the 11.1.0.6.0 database timezone version
.... to the latest version which comes with the new release.
WARNING: --> Database contains schemas with stale optimizer statistics.
.... Refer to the Upgrade Guide for instructions to update
.... schema statistics prior to upgrading the database.
.... Component Schemas with stale statistics:
....   SYS
....   OLAPSYS
....   CTXSYS
....   XDB
....   ORDSYS
....   WKSYS
WARNING: --> EM Database Control Repository exists in the database.
.... Direct downgrade of EM Database Control is not supported. Refer to the
.... Upgrade Guide for instructions to save the EM data prior to upgrade.
WARNING: --> recycle bin in use.
.... Your recycle bin is turned on.
.... It is REQUIRED that the recycle bin is empty prior to upgrading
.... your database.
.... The command: PURGE DBA_RECYCLEBIN
.... must be executed immediately prior to executing your upgrade.
.
PL/SQL procedure successfully completed.

Most of the above output is informational, but in this case the script has indicated that:
• the java_pool_size is inadequate,
• some objects' optimizer statistics are stale, and
• the recycle bin needs to be purged just prior to invoking DBUA.

To address these concerns, increase the size of the java_pool_size:

SQL> alter system set java_pool_size = 128m scope = both;
System altered.

Collect fresh optimizer statistics:


SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
PL/SQL procedure successfully completed.

Purge the recycle bin:

SQL> PURGE DBA_RECYCLEBIN;
DBA Recyclebin purged.

The Database Upgrade Assistant can now be invoked. The environment variable $ORACLE_SID should be set to the target database's SID value and $ORACLE_HOME set to reflect the location of the 11gR2 software; dbua is then run from the newer Oracle binary location, $ORACLE_HOME/bin:

$ pwd
/oracle/app/oracle/product/11.2.0/db_1/bin
[oracle@zko bin]$ ./dbua &

In a few seconds an Oracle splash screen will appear followed by the DBUA welcome screen.


The next screen appears listing databases that are valid upgrade candidates. The utility scans the file /etc/oratab for database instance names.

Select the database that is to be upgraded. DBUA will run for a moment while it examines the database environment. If the database is ready for upgrade the following screen is displayed.


Several options for performance enhancement are available on this screen. By default, the assistant will recompile all invalid objects at the end of the upgrade. The assistant suggests a Degree of Parallelism of 15, which can be adjusted up or down depending on how CPU intensive this recompilation is allowed to be. The option of temporarily disabling archiving while the upgrade proceeds is also available. While this saves considerable I/O, do not select it unless a full backup has been performed prior to starting the upgrade (a minimal backup sketch appears after the next screen description); such a backup should be taken regardless. Do not attempt to disable archiving on a production database unless prepared to restart the upgrade and restore the database from a backup.

The next screen is used to migrate datafiles from file based storage to ASM storage. This database is already ASM based, so the options are grayed out.


Note that this is not a platform migration utility. It is only for migrating a file system based group of datafiles to management under an ASM instance.
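Returning to the backup prerequisite noted above, a minimal RMAN sketch of a full pre-upgrade backup (assuming archivelog mode and default backup destinations) is:

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;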


The third DBUA screen prompts for information to configure the Flash Recovery Area (FRA). This database has already been configured with an FRA, so all the defaults are prepopulated. DBUA next prompts for configuration information to have the database managed by Enterprise Manager. This step is optional.


The next screen summarizes the selections made and provides a variety of information about the database environment to be upgraded. It indicates the current version of the database as well as the destination version after upgrade. The Database Upgrade Assistant also displays several warnings about the environment and suggests corrective action. In this case, the displayed messages are all innocuous and can be safely ignored.


The DBUA progress is displayed.


After DBUA completes, the database has been successfully upgraded.


4 Conclusions

This paper demonstrates that migrating Oracle 10g and 11g database environments from Windows Server platforms to Red Hat Enterprise Linux can be done in several ways, and that the process can be significantly optimized and simplified by using functionality and features native to Red Hat Enterprise Linux and the Oracle 11gR2 RDBMS environment. Endian compatibility and multiple migration and upgrade options make the transition from Windows Server 200X to Red Hat Enterprise Linux servers one that is manageable in terms of both risk exposure and administrative effort. Migrating to Red Hat Enterprise Linux from Windows Server 200X platforms can be accomplished with minimal downtime and end user disruption if performed with proper planning and preparation.

The Data Pump network feature, though available only for 11gR1 to 11gR2 migrations/upgrades, offers current Windows Server / Oracle users a very efficient, direct path to the superior hosting capabilities of Red Hat Enterprise Linux. File based Data Pump operations are also effective and efficient, especially if the migration is coupled with an Oracle version upgrade. Using file based Data Pump operations provides the greatest flexibility and upgrade capability: Oracle database versions starting with 10gR1 can be readily imported into Red Hat Enterprise Linux servers running Oracle 11gR2.

Transportable Database technology allows the transfer of an entire database rather than just the database application schemas. Currently, Transportable Database operations can be performed only between database installations that have the same revision level on both the source and target platforms. There is considerably more work to be done when manually upgrading the database after a lateral migration; in contrast, this work is performed seamlessly by the 'diagonal' Data Pump based migration/upgrade procedures described.

The type of datafile storage on the source system, whether NTFS or ASM based, has little impact on the migration procedure itself. For Data Pump based migrations/upgrades it is entirely transparent, and RMAN manages ASM based datafiles the same as file system based ones. However, if the source database is ASM based, additional intermediate scratch space may have to be configured. With Transportable Database, scratch storage equal to the sum of all of the datafiles' sizes must be provided; it is optimal if that storage can be shared between the source and target systems.

The reliability and out of the box performance of Red Hat Enterprise Linux make it an


attractive option for hosting Oracle RDBMS environments. This paper has attempted to show that there are a number of migration and upgrade paths available for the Windows Server to Red Hat Enterprise Linux platform exchange. As with any migration or upgrade, proper planning and testing are critical to success.


Appendix A: Configuring Oracle 11gR2 ASM Based Storage

Oracle's preferred storage technology solution is based on Oracle ASM. Oracle ASM requires its own special ASM instance and non-OS-mounted devices as ASM device candidates. Under Oracle RDBMS 10g, ASM was bundled with Oracle's RDBMS binaries; starting with Oracle 11gR2 the ASM binaries are located on the Cluster and High Availability media pack. ASMLib is still configured before ASM to facilitate OS/ASM integration and performance.

ASMLib: Oracle provides these additional RPMs to help support ASM on Linux platforms. They can be downloaded from the Oracle support site at:

http://www.oracle.com/technology/software/tech/linux/asmlib/rhel5.html


ASMLib is kernel specific. To determine the kernel version, enter the command:

# uname -r
2.6.18-164.el5

Match your ASMLib RPM download to the kernel version. Next, install the packages with the RPM installation utility:

# rpm -i oracleasm-support-2.1.3-1.el5.x86_64.rpm
# rpm -i oracleasm-2.6.18-164.6.1.el5-2.0.5-1.el5.x86_64.rpm
# rpm -i oracleasmlib-2.0.4-1.el5.x86_64.rpm


With the ASM software installed, configure the ASMLib driver (as root):

# /etc/init.d/oracleasm configure
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface [oracle]:

Once completed, identify the devices to place under ASM control. Use fdisk -l to display the storage volumes previously configured at the SAN or Host Bus Adapter level; its output also shows how these devices are presented by the operating system.

# fdisk -l

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14       60801   488279610   8e  Linux LVM

WARNING: The size of this disk is 2.3 TB (2346434166784 bytes). DOS
partition table format can not be used on drives for volumes larger than
2.2 TB (2199023255040 bytes). Use parted(1) and GUID partition table
format (GPT).

Disk /dev/sdb: 2346.4 GB, 2346434166784 bytes
255 heads, 63 sectors/track, 285271 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System

Disk /dev/sdc: 1173.2 GB, 1173217083392 bytes
255 heads, 63 sectors/track, 142635 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System

WARNING: The size of this disk is 2.3 TB (2346434166784 bytes). DOS
partition table format can not be used on drives for volumes larger than
2.2 TB (2199023255040 bytes). Use parted(1) and GUID partition table
format (GPT).

Disk /dev/sdd: 2346.4 GB, 2346434166784 bytes
255 heads, 63 sectors/track, 285271 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System

Disk /dev/sde: 1173.2 GB, 1173217083392 bytes
255 heads, 63 sectors/track, 142635 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System

Disk /dev/dm-2: 2346.4 GB, 2346434166784 bytes
255 heads, 63 sectors/track, 285271 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-2 doesn't contain a valid partition table

Disk /dev/dm-3: 1173.2 GB, 1173217083392 bytes
255 heads, 63 sectors/track, 142635 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

      Device Boot      Start         End      Blocks   Id  System
/dev/dm-3p1                1      142635  1145715606   83  Linux

Once the devices to be controlled by ASM are identified, the following command, run as root, "stamps" each one as an ASM ready device:

# /etc/init.d/oracleasm createdisk VOL2 /dev/dm-3

After creating the devices, list them and verify their status with the following commands:

# /etc/init.d/oracleasm listdisks
VOL1
VOL2
# /etc/init.d/oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
# /etc/init.d/oracleasm querydisk VOL1
Disk "VOL1" is a valid ASM disk
# /etc/init.d/oracleasm querydisk VOL2
Disk "VOL2" is a valid ASM disk


Now that the ASM devices are prepared by ASMLib, the Oracle ASM binaries can be installed and configured. As the oracle user, from the software download directory, find runInstaller and execute it in an X-Windows ready shell. This is a standalone environment, so the option "Install and configure Grid Infrastructure for a Stand Alone Server" is chosen.
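For example (the staging directory is illustrative):

$ cd /stage/11gR2_grid
$ ./runInstaller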

English is the default language choice. Other language support can be added at this juncture. Note that this has nothing to do with NLS_LANG variable settings of any databases that may be created or migrated to this installation. This primarily provides error and informational messages in the language(s) chosen for the RDBMS environment.


The Create ASM Disk Group dialog box displays the previously configured ASM devices as candidate disks and proposes the default diskgroup name DATA. Since the devices are actually SAN based LUNs with integral fault tolerance, external redundancy is chosen.


Password setup is completed on the next screen of the ASM installer. Oracle encourages strong passwords in 11gR2; weak passwords are accepted, but Oracle will warn if complexity and length standards are not met by the password entered.


The next screen prompts for the operating system groups that will map to Oracle roles for administration and security. The new Oracle role OSASM allows an OS user assigned to this group/role to administer the underlying ASM instance but not the database instances that may be running atop it, providing greater granularity for security.


The next screen prompts for the Oracle Base location and the $ORACLE_HOME location. Typically Oracle Home is placed under a directory within $ORACLE_BASE. OraInventory is also found within the $ORACLE_BASE directory structure.


The next screen confirms the location of OraInventory.


The summary screen displays the previous selections and prompts for confirmation. Clicking the finish button initiates the installation process.


A status screen is displayed.


The installation progresses and eventually prompts the installer to run several scripts as root. These check system swap and temp space and then start a group of new Oracle daemons that are part of the high availability/grid infrastructure:

/oracle/app/oracle/product/11.2.0/db_1/bin/ohasd.bin reboot
/oracle/app/oracle/product/11.2.0/db_1/bin/cssdagent
/oracle/app/oracle/product/11.2.0/db_1/bin/orarootagent.bin
/oracle/app/oracle/product/11.2.0/db_1/bin/diskmon.bin -d -f
/oracle/app/oracle/product/11.2.0/db_1/bin/ocssd.bin

Be sure to enable the oracleasm service before rebooting; not doing so will corrupt the configuration upon first reboot.

# chkconfig oracleasm on


These daemons function separately from the RDBMS software and reside in a different $ORACLE_HOME location. Once this infrastructure is in place and running, an Oracle database instance can be installed using ASM for its storage.
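As a sketch of that final step (the tablespace name soe_ts is hypothetical; the +DATA disk group is the one created above), Oracle Managed Files make datafile placement on ASM automatic:

SQL> alter system set db_create_file_dest = '+DATA' scope = both;
SQL> create tablespace soe_ts;  -- datafile is created automatically in +DATA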

