ODI doc
Creating an Excel Data Server
Posted on July 6, 2011
PREPARE YOUR EXCEL SPREADSHEET
First, open a Microsoft Excel spreadsheet; we will need to define a named range.
Step: 1) Open the spreadsheet
Step: 2) Select Formulas–>Define Name
Step: 3) Select the range for the data
DEFINE AN ODBC DATA SOURCE
Step: 1) ODI will use an ODBC connection to natively access Microsoft Excel. Select Start–>Administrative Tools–>Data Sources (ODBC).
Step: 2) Click Add, select the Excel driver, and press "Finish". Specify the Data Source Name and Description. Click "Select Workbook".
Step: 3) Specify the Excel file path and press OK. You should now see your new ODBC Data Source listed in the ODBC Data Source Administrator.
CREATE A DATA SERVER IN TOPOLOGY MANAGER
Step: 1) Open Topology Manager and go to Physical Architecture. Right-click the Microsoft Excel technology and select New Data Server. In the Data Server window, enter a name in the Name field.
Step: 2) Go to the JDBC tab. Select the Sun JDBC-ODBC Bridge in the JDBC Driver list.
Step: 3) In the JDBC URL template, replace the <odbc_dsn_alias> placeholder with the name of the ODBC Data Source you specified earlier. I used Excel Data Server in this example.
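For reference, with the Sun JDBC-ODBC Bridge the filled-in values typically look like this (assuming the DSN was named Excel Data Server as above):
JDBC Driver: sun.jdbc.odbc.JdbcOdbcDriver
JDBC URL: jdbc:odbc:Excel Data Server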
Step: 4) Click on Test and make sure you get a successful connection.
Click OK.
Step: 5) Go to the newly created data server, right-click, and select New Physical Schema.
Step: 6) Verify the newly created Physical Schema.
Create New Logical Schema:
Step: 1) Select Microsoft Excel in the Logical Architecture tab, right-click, and select New Logical Schema. Specify the name and map it to the physical schema in the appropriate Contexts.
Create Model for Excel
Step: 1) Go to the Designer Navigator, select the Models tab, and create a new model for Excel.
Step: 2) Specify the Name, Technology, and Logical Schema.
Step: 3) Select the Reverse Engineering tab. Select the System Table check box and go to the Selective Reverse-Engineering tab.
Step: 4) Here, select the Selective Reverse-Engineering and Objects to Reverse Engineer check boxes.
Step: 5) Save it and perform a reverse-engineer of this model.
Step: 6) Right-click the datastore and select View Data.
Done!
Creating an XML Data Server
Posted on July 6, 2011
An XML Data Server corresponds to one XML file that is accessible through your local network. XML files are accessed through the Oracle Data Integrator Driver for XML. This JDBC driver loads the hierarchical structure of the XML file into a relational structure in a schema stored in memory, to enable SQL queries through JDBC. It is also able to unload the relational structure back into the XML file. You must have the following information:
· The location of the DTD file associated with your XML file
· The location of the XML file
· The name of the Root element of your XML file
Creation of the Data Server
Step: 1) Select XML from the Technology list view. Right click and select New Data Server.
Step: 2) Fill in the following fields in the Definition tab:
· Name: Name of the Data Server as it will appear in Oracle Data Integrator.
· User/Password: Not used here.
Step: 3) Fill in the following fields in the JDBC tab:
JDBC Driver: com.sunopsis.jdbc.driver.xml.SnpsXmlDriver
JDBC URL: jdbc:snps:xml?[property=value&property=value...]
JDBC Driver Properties:
Example: jdbc:snps:xml?f=../xml/department.xml&re=Department_DIM&ro=false&case_sens=true&s=Depart
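For reference, the properties used in this example URL are (per the ODI driver for XML; the long forms in parentheses are the equivalent full property names):
f (file): path to the XML file (../xml/department.xml)
re (root_elt): name of the root element (Department_DIM)
ro (read_only): false, so the relational structure can be unloaded back to the file
case_sens: whether table and column names are case-sensitive
s (schema): name of the in-memory relational schema (Depart)
A d (dtd) property can also be added when the DTD file is not co-located with the XML file.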
Step: 4) Click the Test Connection button. It will ask you to save; press "OK". It will then ask to create a physical schema; press "OK" again.
Creating a Physical Schema for XML
The Physical Schema will be a storage location for the tables associated with the XML file.
Step: 1) Select the appropriate XML Data Server, then right-click and select New Physical Schema. The Physical Schema window will appear.
Step: 2) Name the Schema and Work Schema. Note that if you have named the schema with the s= property in the JDBC URL of the XML Data Server, you must use the same schema name here (Depart in the example URL above).
Step: 3) Save it.
Creating a Logical Schema for XML
A Logical Schema can be associated with only one Physical Schema in a given Context.
Creating a Model for XML:
Step: 1) Select the Models tab in the Designer Navigator and select New Model.
Step: 2) In the Definition tab, specify the following details.
Step: 3) Perform the reverse-engineering.
Introduction to Knowledge Modules
Posted on June 5, 2011
1) What is a Knowledge Module?
Knowledge Modules (KMs) are code templates. Each KM is dedicated to an individual task in the overall data integration process. The code in the KMs appears in nearly the form that it will be executed, except that it includes Oracle Data Integrator (ODI) substitution methods enabling it to be used generically by many different integration jobs. The code that is generated and executed is derived from the declarative rules and metadata defined in the ODI Designer module.
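For illustration (a sketch, not from the original post), a fragment of an IKM step template might look like the following; at run time ODI replaces the substitution methods with the actual work table and column names of the interface:
insert into <%=odiRef.getTable("L", "INT_NAME", "W")%>
( <%=odiRef.getColList("", "[COL_NAME]", ", ", "", "")%> )
select <%=odiRef.getColList("", "[EXPRESSION]", ", ", "", "")%>
from (...)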
2) Types of Knowledge Modules:
2.1) Reverse-Engineering Knowledge Modules (RKM)
The RKM role is to perform customized reverse engineering for a model. The RKM is in charge of connecting to the application or metadata provider, then transforming and writing the resulting metadata into Oracle Data Integrator's repository. The metadata is written temporarily into the SNP_REV_xx tables. The RKM then calls the Oracle Data Integrator API to read from these tables and write to Oracle Data Integrator's metadata tables of the work repository in incremental update mode. This is illustrated below.
A typical RKM follows these steps:
a) Cleans up the SNP_REV_xx tables from previous executions using the OdiReverseResetTable tool.
b) Retrieves sub models, datastores, columns, unique keys, foreign keys and conditions from the metadata provider to the SNP_REV_SUB_MODEL, SNP_REV_TABLE, SNP_REV_COL, SNP_REV_KEY, SNP_REV_KEY_COL, SNP_REV_JOIN, SNP_REV_JOIN_COL and SNP_REV_COND tables.
c) Updates the model in the work repository by calling the OdiReverseSetMetaData tool.
Available RKM in ODI 11.1.1
1) RKM DB2/400
2) RKM File (FROM EXCEL)
3) RKM Hyperion Essbase
4) RKM Hyperion Financial Management
5) RKM Hyperion Planning
6) RKM Informix
7) RKM Informix SE
8) RKM MSSQL
9) RKM Oracle
10) RKM Oracle (JYTHON)
11) RKM Oracle Data Quality
12) RKM Oracle OLAP (JYTHON)
13) RKM SQL (JYTHON)
14) RKM Teradata
2.2) Check Knowledge Modules (CKM)
The CKM is in charge of checking that records of a data set are consistent with defined constraints. The CKM is used to maintain data integrity and participates in the overall data quality initiative. The CKM can be used in 2 ways:
a) To check the consistency of existing data. This can be done on any datastore or within interfaces, by setting the STATIC_CONTROL option to "Yes". In the first case, the data checked is the data currently in the datastore. In the second case, data in the target datastore is checked after it is loaded.
b) To check the consistency of the incoming data before loading the records to a target datastore. This is done by using the FLOW_CONTROL option. In this case, the CKM simulates the constraints of the target datastore on the resulting flow prior to writing to the target.
The CKM accepts a set of constraints and the name of the table to check. It creates an "E$" error table to which it writes all the rejected records. The CKM can also remove the erroneous records from the checked result set. The following figures show how a CKM operates in both STATIC_CONTROL and FLOW_CONTROL modes.
(STATIC_CONTROL) In STATIC_CONTROL mode, the CKM reads the constraints of the table and checks them against the data of the table. Records that don't match the constraints are written to the "E$" error table in the staging area.
(FLOW_CONTROL) In FLOW_CONTROL mode, the CKM reads the constraints of the target table of the Interface. It checks these constraints against the data contained in the "I$" flow table of the staging area. Records that violate these constraints are written to the "E$" table of the staging area.
In both cases, a CKM usually performs the following tasks:
a) Create the "E$" error table on the staging area. The error table should contain the same columns as the datastore, as well as additional columns to trace error messages, check origin, check date, etc.
b) Isolate the erroneous records in the "E$" table for each primary key, alternate key, foreign key, condition or mandatory column that needs to be checked.
c) If required, remove erroneous records from the table that has been checked.
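To make this concrete, here is a simplified sketch of the kind of statement a CKM generates for task b) when checking a mandatory column (the table and column names are illustrative, not taken from an actual shipped KM):
insert into E$_CUSTOMER (CUST_ID, CUST_NAME, ERR_TYPE, ERR_MESS, CHECK_DATE)
select CUST_ID, CUST_NAME, 'F', 'CUST_NAME cannot be null', sysdate
from CUSTOMER
where CUST_NAME is null;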
Available CKM in ODI 11.1.1
a) CKM HSQL
b) CKM Netezza
c) CKM Oracle
d) CKM SQL
e) CKM Sybase IQ
f) CKM Teradata
2.3) Loading Knowledge Modules (LKM)
An LKM is in charge of loading source data from a remote server to the staging area. It is used by interfaces when some of the source datastores are not on the same data server as the staging area. The LKM implements the declarative rules that need to be executed on the source server and retrieves a single result set that it stores in a "C$" table in the staging area, as illustrated below.
An LKM usually performs the following tasks:
a) The LKM creates the "C$" temporary table in the staging area. This table will hold records loaded from the source server.
b) The LKM obtains a set of pre-transformed records from the source server by executing the appropriate transformations on the source. Usually, this is done by a single SQL SELECT query when the source server is an RDBMS. When the source doesn't have SQL capacities (such as flat files or applications), the LKM simply reads the source data with the appropriate method (read file or execute API).
c) The LKM loads the records into the "C$" table of the staging area.
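As an illustration (a sketch with invented names, not the code of a shipped LKM), steps a) and b) might translate to the following, with the SELECT running on the source server and its rows inserted into the staging table:
-- on the staging area
create table C$_0CUSTOMER (CUST_ID number, CUST_NAME varchar2(100));
-- on the source server; each returned row is loaded into C$_0CUSTOMER
select C.CUST_ID, upper(C.CUST_NAME) as CUST_NAME
from CUSTOMER C
where C.ACTIVE_FLAG = 1;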
An interface may require several LKMs when it uses datastores from different sources. When all source datastores are on the same data server as the staging area, no LKM is required.
Available LKM in ODI 11.1.1
1) LKM Attunity to SQL
2) LKM DB2 400 Journal to SQL
3) LKM DB2 400 to DB2 400
4) LKM DB2 UDB Journal to SQL
5) LKM DB2 UDB to DB2 UDB (EXPORT_IMPORT)
6) LKM File to DB2 UDB (LOAD)
7) LKM File to MSSQL (BULK)
8) LKM File to Netezza (EXTERNAL TABLE)
9) LKM File to Netezza (NZLOAD)
10) LKM File to Oracle (EXTERNAL TABLE)
11) LKM File to Oracle (SQLLDR)
12) LKM File to SQL
13) LKM File to Sybase IQ (LOAD TABLE)
14) LKM File to Teradata (TTU)
15) LKM Hyperion Essbase DATA to SQL
16) LKM Hyperion Essbase METADATA to SQL
17) LKM Hyperion Financial Management Data to SQL
18) LKM Hyperion Financial Management Members To SQL
19) LKM Informix to Informix (SAME SERVER)
20) LKM JMS to SQL
21) LKM JMS XML to SQL
22) LKM MSSQL to MSSQL (BCP)
23) LKM MSSQL to MSSQL (LINKED SERVERS)
24) LKM MSSQL to Oracle (BCP SQLLDR)
25) LKM MSSQL to SQL (ESB XREF)
26) LKM Oracle BI to Oracle (DBLINK)
27) LKM Oracle BI to SQL
28) LKM Oracle to Oracle (DBLINK)
29) LKM Oracle to Oracle (datapump)
30) LKM SQL to DB2 400 (CPYFRMIMPF)
31) LKM SQL to DB2 UDB (LOAD)
32) LKM SQL to MSSQL
33) LKM SQL to MSSQL (BULK)
34) LKM SQL to Oracle
35) LKM SQL to SQL
36) LKM SQL to SQL (ESB XREF)
37) LKM SQL to SQL (JYTHON)
38) LKM SQL to SQL (row by row)
39) LKM SQL to Sybase ASE
40) LKM SQL to Sybase ASE (BCP)
41) LKM SQL to Sybase IQ (LOAD TABLE)
42) LKM SQL to Teradata (TTU)
43) LKM Sybase ASE to Sybase ASE (BCP)
2.4) Integration Knowledge Modules (IKM)
The IKM is in charge of writing the final, transformed data to the target table. Every interface uses a single IKM. When the IKM is started, it assumes that all loading phases for the remote servers have already carried out their tasks. This means that all remote source data sets have been loaded by LKMs into "C$" temporary tables in the staging area, or the source datastores are on the same data server as the staging area. Therefore, the IKM simply needs to execute the "Staging and Target" transformations, joins and filters on the "C$" tables, and tables located on the same data server as the staging area. The resulting set is usually processed by the IKM and written into the "I$" temporary table before loading it to the target.
These final transformed records can be written in several ways depending on the IKM selected in your interface. They may be simply appended to the target, or compared for incremental updates or for slowly changing dimensions. There are 2 types of IKMs: those that assume that the staging area is on the same server as the target datastore, and those that can be used when it is not. These are illustrated below:
(Staging Area on Target)
When the staging area is on the target server, the IKM usually follows these steps: a) The IKM executes a single set-oriented SELECT statement to carry out staging area and target declarative rules on all “C$” tables and local tables (such as D in the figure). This generates a result set.
b) Simple “append” IKMs directly write this result set into the target table. More complex IKMs create an “I$” table to store this result set.
c) If the data flow needs to be checked against target constraints, the IKM calls a CKM to isolate erroneous records and cleanse the “I$” table.
d) The IKM writes records from the “I$” table to the target following the defined strategy (incremental update, slowly changing dimension, etc.).
e) The IKM drops the “I$” temporary table.
f) Optionally, the IKM can call the CKM again to check the consistency of the target datastore.
These types of KMs do not manipulate data outside of the target server. Data processing is set-oriented for maximum efficiency when performing jobs on large volumes.
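For example, on Oracle an incremental-update IKM can implement step d) with a single MERGE statement; the sketch below uses illustrative names and mirrors the pattern of IKM Oracle Incremental Update (MERGE):
merge into TRG_SALES T
using I$_TRG_SALES I
on (T.SALE_ID = I.SALE_ID)
when matched then
  update set T.AMOUNT = I.AMOUNT, T.SALE_DATE = I.SALE_DATE
when not matched then
  insert (SALE_ID, AMOUNT, SALE_DATE)
  values (I.SALE_ID, I.AMOUNT, I.SALE_DATE);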
(Staging Area Different from Target)
When the staging area is different from the target server, as shown in the figure, the IKM usually follows these steps:
a) The IKM executes a single set-oriented SELECT statement to carry out declarative rules on all "C$" tables and tables located on the staging area (such as D in the figure). This generates a result set.
b) The IKM loads this result set into the target datastore, following the defined strategy (append or incremental update).
This architecture has certain limitations, such as:
· A CKM cannot be used to perform a data integrity audit on the data being processed.
· Data needs to be extracted from the staging area before being loaded to the target, which may lead to performance issues.
Available IKM in ODI 11.1.1
1) IKM Access Incremental Update
2) IKM DB2 400 Incremental Update
3) IKM DB2 400 Incremental Update (CPYF)
4) IKM DB2 400 Slowly Changing Dimension
5) IKM DB2 UDB Incremental Update
6) IKM DB2 UDB Slowly Changing Dimension
7) IKM File to Teradata (TTU)
8) IKM Informix Incremental Update
9) IKM MSSQL Incremental Update
10) IKM MSSQL Slowly Changing Dimension
11) IKM Netezza Control Append
12) IKM Netezza Incremental Update
13) IKM Netezza To File (EXTERNAL TABLE)
14) IKM Oracle AW Incremental Update
15) IKM Oracle BI to SQL Append
16) IKM Oracle Incremental Update
17) IKM Oracle Incremental Update (MERGE)
18) IKM Oracle Incremental Update (PL SQL)
19) IKM Oracle Multi Table Insert
20) IKM Oracle Slowly Changing Dimension
21) IKM Oracle Spatial Incremental Update
22) IKM SQL Control Append
23) IKM SQL Control Append (ESB XREF)
24) IKM SQL Incremental Update
25) IKM SQL Incremental Update (row by row)
26) IKM SQL to File Append
27) IKM SQL to Hyperion Essbase (DATA)
28) IKM SQL to Hyperion Essbase (METADATA)
29) IKM SQL to Hyperion Financial Management Data
30) IKM SQL to Hyperion Financial Management Dimension
31) IKM SQL to Hyperion Planning
32) IKM SQL to JMS Append
33) IKM SQL to JMS XML Append
34) IKM SQL to SQL Append
35) IKM SQL to Teradata (TTU)
36) IKM Sybase ASE Incremental Update
37) IKM Sybase ASE Slowly Changing Dimension
38) IKM Sybase IQ Incremental Update
39) IKM Sybase IQ Slowly Changing Dimension
40) IKM Teradata Control Append
41) IKM Teradata Incremental Update
42) IKM Teradata Multi Statement
43) IKM Teradata Slowly Changing Dimension
44) IKM Teradata to File (TTU)
45) IKM XML Control Append
2.5) Journalizing Knowledge Modules (JKM)
JKMs create the infrastructure for Change Data Capture on a model, a sub model or a datastore. JKMs are not used in interfaces, but rather within a model to define how the CDC infrastructure is initialized. This infrastructure is composed of a subscribers table, a table of changes, views on this table and one or more triggers or log capture programs as illustrated below.
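As a rough sketch of the infrastructure a trigger-based JKM creates (simplified; the names follow ODI's J$/T$ conventions, but the columns and the default SUNOPSIS subscriber are used here for illustration only, and real JKMs also manage subscriber tables and views):
create table J$EMPLOYEE (
  JRN_SUBSCRIBER varchar2(400), -- subscriber the change is recorded for
  JRN_FLAG       char(1),       -- 'I' = insert/update, 'D' = delete
  JRN_DATE       date,          -- when the change was captured
  EMPLOYEE_KEY   number         -- primary key of the changed record
);
create or replace trigger T$EMPLOYEE
after insert or update or delete on EMPLOYEE
for each row
begin
  insert into J$EMPLOYEE (JRN_SUBSCRIBER, JRN_FLAG, JRN_DATE, EMPLOYEE_KEY)
  values ('SUNOPSIS',
          case when deleting then 'D' else 'I' end,
          sysdate,
          case when deleting then :old.EMPLOYEE_KEY else :new.EMPLOYEE_KEY end);
end;
/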
Available JKM in ODI 11.1.1
1) JKM DB2 400 Consistent
2) JKM DB2 400 Simple
3) JKM DB2 400 Simple (Journal)
4) JKM DB2 UDB Consistent
5) JKM DB2 UDB Simple
6) JKM HSQL Consistent
7) JKM HSQL Simple
8) JKM Informix Consistent
9) JKM Informix Simple
10) JKM MSSQL Consistent
11) JKM MSSQL Simple
12) JKM Oracle 10g Consistent (Streams)
13) JKM Oracle 11g Consistent (Streams)
14) JKM Oracle Consistent
15) JKM Oracle Consistent (Update Date)
16) JKM Oracle Simple
17) JKM Oracle to Oracle Consistent (OGG)
18) JKM Sybase ASE Consistent
19) JKM Sybase ASE Simple
2.6) Service Knowledge Modules (SKM)
SKMs are in charge of creating and deploying data manipulation Web Services to your Service Oriented Architecture (SOA) infrastructure. SKMs are set on a Model. They define the different operations to generate for each datastore's web service. Unlike other KMs, SKMs do not generate executable code but rather the Web Services deployment archive files. SKMs are designed to generate Java code using Oracle Data Integrator's framework for Web Services. The code is then compiled and eventually deployed on the Application Server's containers.
Available SKM in ODI 11.1.1
1) SKM HSQL
2) SKM IBM UDB
3) SKM Informix
4) SKM Oracle
Working With Change Data Capture
Posted on April 7, 2011
Changed Data Capture
The purpose of CDC is to enable applications to process changed data only. CDC enables ODI to track changes in source data caused by other applications. When running integration interfaces, ODI can avoid processing unchanged data in the flow. Loads will process only the changes since the last load, so the volume of data to be processed is dramatically reduced.
Reducing the source data flow to only changed data is useful in many contexts, such as data synchronization and replication. It is essential when setting up an event-oriented architecture for integration. In such architecture, applications make changes in the data (“Customer Deletion”, “New Purchase Order”) during a business process. These changes are captured by Oracle Data Integrator and transformed into events that are propagated throughout the information system.
CDC Techniques
1) Trigger based: ODI will create and maintain triggers to keep track of the changes.
2) Logs based: ODI retrieves changes from the database logs (Oracle, AS/400).
3) Time stamp based: Processes written with ODI can filter the data by comparing the time stamp value with the last load time (cannot process deletes).
4) Sequence number: If the records are numbered in sequence, ODI can filter the data based on the last value loaded (cannot process updates and deletes).
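For example, the timestamp-based technique boils down to a filter such as the following (illustrative names; LAST_LOAD_DATE would be recorded by the previous load):
select *
from SRC_ORDERS
where LAST_UPDATE_DATE > (select LAST_LOAD_DATE
                          from LOAD_CONTROL
                          where JOB_NAME = 'ORDERS_LOAD');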
Changed Data Capture is performed by journalizing models. Journalizing a model consists of setting up the infrastructure to capture the changes (inserts, updates and deletes) made to the records of this model's datastores. Oracle Data Integrator supports two journalizing modes:
•Simple Journalizing tracks changes in individual datastores in a model.
•Consistent Set Journalizing tracks changes to a group of the model's datastores, taking into account the referential integrity between these datastores. The group of datastores journalized in this mode is called a Consistent Set.
The Journalizing Components
The journalizing components are: •Journals: Where changes are recorded. Journals only contain references to the changed records along with the type of changes (insert/update, delete).
•Capture processes: Journalizing captures the changes in the source datastores either by creating triggers on the data tables, or by using database-specific programs to retrieve log data from data server log files.
•Subscribers: CDC uses a publish/subscribe model. Subscribers are entities (applications, integration processes, etc.) that use the changes tracked on a datastore or on a consistent set. They subscribe to a model's CDC to have the changes tracked for them. Changes are captured only if there is at least one subscriber to the changes. When all subscribers have consumed the captured changes, these changes are discarded from the journals.
•Journalizing views: Provide access to the changes and the changed data captured. They are used by the user to view the changes captured, and by integration processes to retrieve the changed data. These components are implemented in the journalizing infrastructure.
Setting up Journalizing:
This is the basic process for setting up CDC on an Oracle Data Integrator data model. Each of these steps is described in more detail below.
1. Set the CDC parameters
2. Add the datastores to the CDC
3. For consistent set journalizing, arrange the datastores in order
4. Add subscribers
5. Start the journals
Journalizing Tools:
Oracle Data Integrator provides a set of tools that can be used in journalizing to refresh information on the captured changes or trigger other processes:
•SnpsWaitForData waits for a number of rows in a table or a set of tables.
•SnpsWaitForLogData waits for a certain number of modifications to occur on a journalized table or a list of journalized tables. This tool calls SnpsRefreshJournalCount to perform the count of new changes captured.
•SnpsWaitForTable waits for a table to be created and populated with a pre-determined number of rows.
•SnpsRetrieveJournalData retrieves the journalized events for a given table list or CDC set for a specified journalizing subscriber. Calling this tool is required if using Database-Specific Processes to load journalizing tables. This tool needs to be used with specific knowledge modules.
•SnpsRefreshJournalCount refreshes the number of rows to consume for a given table list or CDC set for a specified journalizing subscriber.
Implementing Changed Data Capture:
Step:1) Import the appropriate JKM in the project. Click the Projects tab. Expand the Procedure-Demo > Knowledge Modules node, right-click Journalization (JKM), and select Import Knowledge Modules.
Step:2) In the Models tab, create a new model named Oracle_relational_01. For Technology, enter: Oracle. Select the logical schema Sales_Order. Click the Reverse Engineer tab and set Context to development. Verify the setting, as shown in the following screen. Click the Journalizing tab.
Step: 3) In the Knowledge Module menu, select JKM Oracle Simple.Procedure-Demo, as shown in the following screen. Click Save to save your model and then close the tab.
Step: 4) Reverse-engineer the model Oracle_Relational_01. Expand this model and verify its structure as follows.
Step: 5) Set up the CDC Infrastructure. You will start the CDC on the EMPLOYEE table in the Oracle_Relational_01 model.
To add the table to CDC, expand the Oracle_Relational_01 model, right-click the EMPLOYEE table, and select Changed Data Capture > Add to CDC. Click Yes to confirm.
Step: 6) Click the Refresh icon. The small yellow clock icon is added to the table.
Step: 7) Right-click the EMPLOYEE table again and select Changed Data Capture > Start Journal.
Step: 8) You use the default subscriber SUNOPSIS. For that reason, you do not have to add another subscriber. Click OK to confirm that your subscriber is SUNOPSIS. In the Information window, click OK again. Wait several seconds, then click Refresh and verify that the tiny clock icon at the EMPLOYEE table is green now. This means that your journal has started properly.
Step: 9) Click the ODI Operator icon to open the Operator. Click Refresh. Select All Executions and verify that the EMPLOYEE session executed successfully.
Step: 10) View the data and the changed data. In the Designer window, open the Models tab. Right-click the EMPLOYEE datastore and select Data.
Step: 11) Select the row with Employee_Key = 10. Change the value of the NAME2 column to “Symond”. Similarly, select the row with Employee_Key = 15, and then change the value to “jacob”. Save your changes and close the tab.
Step: 12) Right-click the table again and select View Data. Scroll down, and verify that the rows are modified. Close the tab.
To verify that your changed data is captured, right-click EMPLOYEE, and select Changed Data Capture > Journal Data. Find the captured changed records in the journal data. Close the tab.
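If you prefer SQL to the GUI, the captured changes can also be inspected by querying the journal table directly (assuming the J$ naming convention sketched in the Knowledge Modules post above):
select JRN_SUBSCRIBER, JRN_FLAG, JRN_DATE, EMPLOYEE_KEY
from J$EMPLOYEE
order by JRN_DATE;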
Done!
Oracle Data Integrator 11g Knowledge Modules Description
Posted on March 26, 2011
RKM Oracle
Reverse-engineers tables, views, columns, primary keys, non-unique indexes and foreign keys.
JKM Oracle 10g Consistent (Streams)
Creates the journalizing infrastructure for consistent set journalizing on Oracle 10g tables, using Oracle Streams.
JKM Oracle 11g Consistent (Streams)
Creates the journalizing infrastructure for consistent set journalizing on Oracle 11g tables, using Oracle Streams.
JKM Oracle Consistent
Creates the journalizing infrastructure for consistent set journalizing on Oracle tables using triggers.
JKM Oracle Consistent (Update Date)
Creates the journalizing infrastructure for consistent set journalizing on Oracle tables using triggers based on a Last Update Date column on the source tables.
JKM Oracle Simple
Creates the journalizing infrastructure for simple journalizing on Oracle tables using triggers.
JKM Oracle to Oracle Consistent (OGG)
Creates and manages the ODI CDC framework infrastructure when using Oracle GoldenGate for CDC. See Chapter 26, “Oracle GoldenGate” for more information.
CKM Oracle
Checks data integrity against constraints defined on an Oracle table.
LKM File to Oracle (EXTERNAL TABLE)
Loads data from a file to an Oracle staging area using the EXTERNAL TABLE SQL Command.
LKM File to Oracle (SQLLDR)
Loads data from a file to an Oracle staging area using the SQL*Loader command line utility.
LKM MSSQL to Oracle (BCP SQLLDR)
Loads data from a Microsoft SQL Server to Oracle database (staging area) using the BCP and SQL*Loader utilities.
LKM Oracle BI to Oracle (DBLINK)
Loads data from any Oracle BI physical layer to an Oracle target database using database links. See Chapter 17, “Oracle Business Intelligence Enterprise Edition” for more information.
LKM Oracle to Oracle (DBLINK)
Loads data from an Oracle source database to an Oracle staging area database using database links.
LKM Oracle to Oracle (datapump)
Loads data from an Oracle source database to an Oracle staging area database using external tables in the datapump format.
LKM SQL to Oracle
Loads data from any ANSI SQL-92 source database to an Oracle staging area.
LKM SAP BW to Oracle (SQLLDR)
Loads data from SAP BW systems to an Oracle staging using SQL*Loader utilities. See the Oracle Fusion Middleware Application Adapters Guide for Oracle Data Integrator for more information.
LKM SAP ERP to Oracle (SQLLDR)
Loads data from SAP ERP systems to an Oracle staging using SQL*Loader utilities. See the Oracle Fusion Middleware Application Adapters Guide for Oracle Data Integrator for more information.
IKM Oracle AW Incremental Update
Integrates data in an Oracle target table in incremental update mode and is able to refresh a Cube in an Analytical Workspace. See Chapter 21, “Oracle OLAP” for more information.
IKM Oracle Incremental Update
Integrates data in an Oracle target table in incremental update mode.
IKM Oracle Incremental Update (MERGE)
Integrates data in an Oracle target table in incremental update mode, using a MERGE statement.
IKM Oracle Incremental Update (PL SQL)
Integrates data in an Oracle target table in incremental update mode using PL/SQL.
IKM Oracle Multi Table Insert
Integrates data from one source into one or many Oracle target tables in append mode, using a multi-table insert statement (MTI).
IKM Oracle Slowly Changing Dimension
Integrates data in an Oracle target table used as a Type II Slowly Changing Dimension.
IKM Oracle Spatial Incremental Update
Integrates data into an Oracle (9i or above) target table in incremental update mode using the MERGE DML statement. This module supports the SDO_GEOMETRY datatype.
IKM Oracle to Oracle Control Append (DBLINK)
Integrates data from one Oracle instance into an Oracle target table on another Oracle instance in control append mode.
This IKM is typically used for ETL configurations: source and target tables are on different Oracle instances and the interface's staging area is set to the logical schema of the source tables or a third schema.
SKM Oracle
Generates data access Web services for Oracle databases. See "Working with Data Services" in the Oracle Fusion Middleware Developer's Guide for Oracle Data Integrator for information about how to use this SKM.
Extracting Hyperion Financial Management Metadata to Oracle Data Integrator
Posted on March 26, 2011
Step: 1) Go to Start—>Program Files—>Oracle—>Oracle Data Integrator, launch ODI, and log in to the work repository.
Step: 2) After successfully logging in to the work repository, go to the Topology Navigator, select the Physical Architecture tab, and expand Technologies. Right-click Hyperion Financial Management and select "New Data Server".
Step: 3) In the Definition tab, enter the following details as shown in the picture below:
Name: An appropriate name for the data server
Cluster (Data Server): System name
User: User name
Password: Password
Step: 4) Click Save; it will ask you to create a physical schema, press "OK".
Step: 5) In the Physical Architecture tab, expand Hyperion Financial Management, select the newly created data server, right-click, and select "New Physical Schema".
Step: 6) In the definition tab enter the HFM Application name to which you want to connect.
Step: 7) Click Save; it will ask you to specify the context, press "OK".
Step: 8) Go to the Logical Architecture tab, expand Technologies, select Hyperion Financial Management, right-click, and select "New Logical Schema".
Step: 9) Give an appropriate name to the logical schema and, in each Context, map it to the Hyperion Financial Management physical schema.
Step: 10) Click "Save".
Step: 11) Go to the ODI Designer and create a new project with an appropriate name.
Step: 12) Click "Save". You can check it in the Projects tab.
Step: 13) Expand the project, go to Knowledge Modules, right-click, and import the following knowledge modules:
RKM Hyperion Financial Management
LKM File to SQL
LKM Hyperion Financial Management Members To SQL
LKM Hyperion Financial Management Data To SQL
IKM SQL to Hyperion Financial Management Data
IKM SQL to Hyperion Financial Management Dimension
IKM SQL to File Append
Step: 14) Press Ctrl and select all the knowledge modules to import.
Step: 15) Click "Close".
Step: 16) Now go to the model tab and create new model folder.
Step: 17) Go to the model tab and create new model.
Step: 18) Go to the Reverse Engineer tab, select the Customized check box, and enter the details as follows:
Step: 19) Click "Save".
Step: 20) The newly created model will now be available for reverse-engineering. Go to the Models tab, select the Hyperion Financial Management model, right-click, and select Reverse Engineer. Before reverse-engineering the model, make sure your agent services are running; if not, start the agent with the following command:
C:\oracle\Middleware\Oracle_ODI1\oracledi\agent\bin>agent -NAME=localagent
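If the agent should listen on a non-default port, the port can be passed along with the name (20910 is the ODI default; adjust to your topology):
C:\oracle\Middleware\Oracle_ODI1\oracledi\agent\bin>agent "-NAME=localagent" "-PORT=20910"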
Step: 21) Now you can reverse-engineer it. Press "OK".
Step: 22) Go to the Operator tab and check whether the execution ran successfully.
If all the checks are green, your model reversed successfully.
Step: 23) Go to the Designer, open the Models tab, and expand the Hyperion Financial Management model again. You can check that all the dimensions from your HFM application are visible there.
Done!
Configuring ODI for integrating with Hyperion Financial Management Application
Posted on March 26, 2011
I will be using a single-machine architecture: one machine hosts ODI as well as HFM and EPM Foundation (Workspace and Shared Services). I have created an HFM application named SAMPLE that is based on the "Simple Demo" information that can be found in "Hyperion\products\Financial Management\Sample Apps\Simple Demo\" of the HFM installation.
The machine is Windows based, as that is a prerequisite for HFM and also for the HFM client that has to be installed on the ODI machine. The EPM version installed is 11.1.1.3. We need to perform some workarounds for integrating ODI 11g with EPMA 11.1.1.3.
Step: 1) Go to the directory C:\oracle\Middleware\Oracle_ODI1\oracledi\agent\lib, copy the HFMDriver.dll file, and paste it into the directory C:\oracle\Middleware\Oracle_ODI1\oracledi\agent\drivers.
Step: 2) If you will use the "Local Agent", you will need to add the driver location to the Windows Path environment variable: C:\oracle\Middleware\Oracle_ODI1\oracledi\agent\drivers
Once you have added the location, make sure you restart all the ODI components to pick up the driver, or you will continue to receive the error message.
Creating and Connecting to ODI Master and Work Repositories
Posted on March 25, 2011
The first steps in setting up Oracle Data Integrator are to create the Master repository, connect to the Master repository, create the Work repository, and then connect to the Work repository.
Step: 1) Connecting to the RDBMS (Oracle 10g XE)
Step: 2) Creating the RDBMS Schema/User (Oracle 10g XE) for the Master Repository
i. On the Oracle Database Express Edition Home screen, select SQL Commands > Enter Command from the SQL drop-down list.
Create the schemas by executing the following SQL commands:
create user <user> identified by <password>;
grant dba, connect, create session to <user>;
SQL> create user bispmaster identified by password;
SQL> grant dba,connect,create session to bispmaster;
Step: 3) Creating the ODI Master Repository
In the URL field, enter the following URL: jdbc:oracle:thin:@bispcognos:1521:xe, and then enter the User as bispmaster and Password as password (the values created above). In the Technology field, select Oracle. Click the Test Connection button and verify a successful connection. Click Next. Click OK on the Master Repository Creation Wizard screen.
The SQL command is executed in the command window. If the Master repository creation is successful, you will see the following message. Click OK. The ODI Master repository is now created.
Step: 4) Connecting to the ODI Master Repository
Configure Repository Connections with the parameters provided in the following screen. Click the Test button. Verify successful connection and click OK. Click OK to save the connection.
Select the newly created repository connection (Master Repository) from the drop-down list. Click OK. The ODI Topology Manager starts.
A relational schema must be created for each repository. The following steps walk you through creating the relational schema for the ODI Work repository.
Step: 1) Connecting to the RDBMS
If not opened, open the Oracle XE Database Home page: Start > All Programs > Oracle Database 10g Express Edition > Go to Database Home page.
The Login screen appears. Log in to Oracle XE Database as system. The default password is password. Click Login.
Step:2) Creating the RDBMS Schema/User (Oracle 10g XE) for the Work Repository
On the Oracle Database Express Edition Home screen, select SQL Commands > Enter Command from the SQL drop-down list.
Create the schemas by executing the following SQL commands:
create user <user> identified by <password>;
grant dba, connect, create session to <user>;
SQL> create user bispwork identified by password;
SQL> grant dba,connect,create session to bispwork;
Step:3) Creating the ODI Work Repository
In ODI, click the Topology Navigator tab and then click to the Repositories panel. Right-click the Work Repositories node and select New Work Repository. The Create Work Repository Wizard opens.
In the screen that follows, enter the parameters shown in the following screen. Click the Test button. Verify successful connection and click OK. Click Next.
In the Create Work Repository Login window, click Yes. Enter the Login name: WORK_REPO as shown in the screenshot. Click OK. Verify that the newly created Work repository is now in the work repositories tree view.
Now you disconnect from the Master repository and connect to the Work repository. Click ODI menu and select Disconnect “Master Repository”.
Click Connect to Repository. Select “WORK_REPO” from the Login Name drop-down list. Enter Password: SUNOPSIS. Click OK. Click the Designer tab. The following ODI Designer screen appears.
Done!