
July 21, 2017 | Author: Prashant Sharma


Short Description: a self-help document for Informatica developers.

Performance Tips for ETL/Informatica Mappings


Author: Randeep Kumar ([email protected])
Owner: Randeep Kumar
Version: 1.1
Reviewed by: Venkateshwara rao Kondepudi ([email protected])

 Wipro Technologies

Page 1 of 13

Performance Tips for ETL/Informatica Mappings

Table of Contents
1. Introduction
2. Lookup Transformation Optimizing Tips
3. Memory Allocation by PowerCenter
4. Database Performance Tuning Tips
5. How parallelism works when there are Aggregator, Sorter, and Rank transformations in the mapping
6. Do we need to prepare the source file for partition parallelism?
7. What is the difference between partition parallelism and database partitioning?
8. How many sessions can run concurrently?
9. Is there any correlation between the number of CPUs and the number of concurrent running sessions?
10. How do you prevent duplicate data with pipeline partitioning?
11. Basic Performance Tips for ETL/Informatica Mappings
12. Other Trivia Stuff: Challenge
13. Conclusion
References
About Wipro Technologies


1. Introduction
Performance tuning is the improvement of system performance. The goal of performance tuning is to optimize session performance so that sessions run within the available load window for the Informatica Server.

2. Lookup Transformation Optimizing Tips

1. When your source is large, cache lookup table columns for lookup tables of 500,000 rows or fewer. This typically improves performance by 10 to 20 percent.
2. The rule of thumb is not to cache any table over 500,000 rows. This holds only if the standard row byte count is 1,024 or less; if it is higher, the 500K row limit must be adjusted down as the byte count increases (e.g., a 2,048-byte row can drop the cache row count to between 250K and 300K, so such a lookup table should not be cached). This is just a general rule, though: try running the session with the large lookup cached and uncached, since caching is often still faster on very large lookup tables.
3. When using a Lookup transformation, improve lookup performance by placing all conditions that use the equality operator (=) first in the list of conditions under the Condition tab.
4. Cache lookup tables only if the number of lookup calls is more than 10 to 20 percent of the lookup table rows. For fewer lookup calls, do not cache if the lookup table is large. For small lookup tables (fewer than 5,000 rows), cache when there are more than 5 to 10 lookup calls.
5. Replace a lookup with DECODE or IIF for small sets of values.
6. If lookups are cached and performance is still poor, consider replacing them with an unconnected, uncached lookup.
7. For very large lookup tables, use dynamic caching along with a persistent cache. Cache the entire table to a persistent file on the first run and enable the "update else insert" option on the dynamic cache; the engine then never has to go back to the database to read data from this table. You can also partition this persistent cache at run time for further performance gains.
8. Review complex expressions.
9. Examine mappings via Repository Reporting and Dependency Reporting within the mapping.
10. Minimize aggregate function calls.
11. Replace an Aggregator transformation with an Expression transformation and an Update Strategy transformation for certain types of aggregation.
12. Numeric operations are faster than string operations.
13. Optimize CHAR-VARCHAR comparisons (i.e., trim spaces before comparing).
14. Operators are faster than functions (i.e., || vs. CONCAT).
15. Optimize IIF expressions.
16. Avoid date comparisons in lookups; replace them with string comparisons.
17. Test expression timing by replacing the expression with a constant.
18. Use flat files.
19. Flat files located on the server machine load faster than a database located on the server machine.
20. Fixed-width files are faster to load than delimited files because delimited files require extra parsing.


21. If processing intricate transformations, consider first loading the source flat file into a relational database, which allows PowerCenter mappings to access the data in an optimized fashion using filters and custom SQL SELECTs where appropriate.
22. If working with data sources that cannot return sorted data (e.g., web logs), consider using the Sorter Advanced External Procedure.
23. Use a Router transformation to separate data flows instead of multiple Filter transformations.
24. Use a Sorter transformation or hash auto-keys partitioning before an Aggregator transformation to optimize the aggregation. With a Sorter transformation, the Sorted Ports option can be used even if the original source cannot be ordered.
25. Use a Normalizer transformation to pivot rows rather than multiple instances of the same target.
26. Rejected rows from an Update Strategy are logged to the bad file. If retaining these rows is not critical, consider filtering before the Update Strategy, because logging causes extra overhead on the engine; choose the option in the Update Strategy to discard rejected rows.
27. When using a Joiner transformation, be sure to make the source with the smaller amount of data the Master source.
28. If an update override is necessary in a load, consider using a Lookup transformation just in front of the target to retrieve the primary key; the primary-key update will be much faster than the non-indexed lookup override.
29. Suggestions for using Mapplets:
30. A Mapplet is a reusable object that represents a set of transformations. It allows you to reuse transformation logic and can contain as many transformations as necessary. Use the Mapplet Designer to create Mapplets.
31. Create a Mapplet when you want to use a standardized set of transformation logic in several mappings. For example, if you have several fact tables that require a series of dimension keys, you can create a Mapplet containing a series of Lookup transformations to find each dimension key; you can then use the Mapplet in each fact-table mapping rather than recreating the same lookup logic in each one.
32. To create a Mapplet, add, connect, and configure transformations to complete the desired transformation logic. After you save a Mapplet, you can use it in a mapping to represent the transformations within it. When you use a Mapplet in a mapping, you use an instance of the Mapplet; all instances are tied to the parent Mapplet, so all changes made to the parent Mapplet's logic are inherited by every child instance. When the server runs a session using a Mapplet, it expands the Mapplet and then runs the session as it would any other, passing data through each transformation in the Mapplet as designed.
33. A Mapplet can be active or passive depending on the transformations it contains: active Mapplets contain at least one active transformation, while passive Mapplets contain only passive transformations. Being aware of this property when using Mapplets can save time when debugging invalid mappings.
34. Unsupported objects that should not be used in a Mapplet include: COBOL source definitions, Normalizer transformations, non-reusable Sequence Generators, pre- or post-session stored procedures, target definitions, and PowerMart 3.5-style lookup functions.
35. Do not reuse a Mapplet if you only need one or two of its transformations while all its other calculated ports and transformations go unused.
36. Source data for a Mapplet can originate from one of two places:
37. Sources within the Mapplet: use one or more source definitions connected to a Source Qualifier or ERP Source Qualifier transformation. When you use the Mapplet in a mapping, the Mapplet provides source data for the mapping and is the first object in the mapping data flow.
38. Sources outside the Mapplet: use a Mapplet Input transformation to define input ports. When you use the Mapplet in a mapping, data passes through the Mapplet as part of the mapping data flow.
39. To pass data out of a Mapplet, create Mapplet output ports. Each port in an Output transformation connected to another transformation in the Mapplet becomes a Mapplet output port.


40. Active Mapplets with more than one Output transformation: you need one target in the mapping for each Output transformation in the Mapplet. You cannot use only one data flow of the Mapplet in a mapping.
41. Passive Mapplets with more than one Output transformation: reduce to one Output transformation; otherwise you need one target in the mapping for each Output transformation, which again means you cannot use only one data flow of the Mapplet in a mapping.
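Tip 5 above suggests replacing a lookup over a small, fixed set of values with DECODE or IIF. As a rough sketch of the idea (the status codes and their meanings below are invented for illustration; a Python dict plays the role of the DECODE expression):

```python
# Hypothetical status codes -- not from the document -- standing in for a
# small lookup table that would otherwise be cached.
STATUS_DECODE = {"A": "Active", "I": "Inactive", "P": "Pending"}

def decode_status(code, default="Unknown"):
    # Equivalent to DECODE(code, 'A', 'Active', 'I', 'Inactive',
    #                      'P', 'Pending', 'Unknown') in an expression port.
    return STATUS_DECODE.get(code, default)

print(decode_status("A"), decode_status("X"))  # Active Unknown
```

The win is that no lookup cache has to be built or queried at all; the decode is evaluated inline for each row.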

3. Memory Allocation by PowerCenter
PowerCenter allocates memory for a session using the DTM buffer size and Default buffer block size session properties. These memory allocations are independent of the actual precision of the source columns (row size). The Default buffer block size property determines how many rows the PowerCenter server can process at a time, while the DTM buffer size determines how many such blocks are available. PowerCenter does not allocate memory dynamically according to the input row length; instead it processes input rows in blocks, and the number of rows processed at a time is decided in advance, based on the highest total precision across all source and target instances in the mapping.

Example: when you import a table with 6 columns in the source, the source instance has 6 columns of precision 256. Suppose the buffer block size is set to 64,000 bytes. The total precision of the source table columns is 256 * 6 = 1,536 bytes, so the number of rows accommodated in one buffer block is 64,000 / 1,536 = 41. Hence 41 rows are processed at a time, irrespective of the actual length of the input rows (it does not matter whether the input row length is more or less than 1,536).
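The arithmetic above can be captured in a small helper (an illustration of the stated rule, not an official Informatica formula):

```python
# Rows per buffer block = buffer block size // total column precision,
# as described in the example above.

def rows_per_block(column_precisions, buffer_block_size=64000):
    """Estimate how many rows fit in one PowerCenter buffer block."""
    total_precision = sum(column_precisions)  # bytes reserved per row
    return buffer_block_size // total_precision

# Six imported columns, each with default precision 256:
print(rows_per_block([256] * 6))  # 64000 // 1536 -> 41
```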

4. Database Performance Tuning Tips

1. Select an appropriate driving table when using joins. This is required only if rule-based optimization is being used; Oracle chooses rule-based optimization if the objects have not been analyzed using the ANALYZE command. The table with the smaller number of rows should be the driving table. For example, assume that table1 has 10 records and table2 has 5,000 records:

/* This SQL takes more time to execute */
SELECT ... FROM table1, table2 WHERE ...;

/* This takes less time */
SELECT ... FROM table2, table1 WHERE ...;

2. Use EXISTS/NOT EXISTS instead of IN/NOT IN.

/* This SQL executes slower than the next one */
SELECT ... FROM ... WHERE ... IN (...);

/* This is a better way */
SELECT ... FROM ... WHERE EXISTS (...);

3. Try >= instead of >. If there is an index on the column, try 'select * from table where column >= 4' instead of 'select * from table where column > 3'. Instead of looking in the index for the first row with column = 3 and then scanning forward for the first value greater than 3, the DBMS may jump directly to the first entry equal to 4.

4. Avoid correlated subselects. A correlated subselect is a nested select that refers to a column from the outer select. Here is an example that uses product.id as a correlation column to find all products that have no sales orders:

select product.id
from product
where not exists
    ( select sales_order_items.id
      from sales_order_items
      where sales_order_items.prod_id = product.id );

Correlated subselects can be very slow if the inner result set is re-selected for each and every candidate row in the outer result set. Alternative SQL can sometimes look rather bizarre, but it is usually worth the effort: in Watcom SQL, the select above ran almost 4 times faster when rewritten to use an outer join instead of a correlated subselect.

5. Use explicit cursors. When implicit cursors are used, two calls are made to the database: one to fetch the record and another to check for the TOO_MANY_ROWS exception. Explicit cursors prevent the second call.


6. Use hints if necessary. A hint is a directive enclosed within a comment of a SQL statement that instructs Oracle to execute the statement using a particular approach. For optimization purposes, the use of hints can be a good idea too. Use the first_rows hint for the best response time:

select --+ first_rows
...rest of query...;

This returns the first set of records to the user while Oracle fetches the whole result. Hints are very useful when a query takes longer than expected.
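The correlated-subselect rewrite from tip 4 can be sanity-checked with an in-memory SQLite database. The schema mirrors the product/sales_order_items example above (the sample rows are invented), and the rewrite shown is a portable LEFT JOIN rather than Watcom-specific syntax:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE product (id INTEGER PRIMARY KEY);
    CREATE TABLE sales_order_items (id INTEGER PRIMARY KEY, prod_id INTEGER);
    INSERT INTO product VALUES (1), (2), (3);
    INSERT INTO sales_order_items VALUES (10, 1), (11, 1), (12, 3);
""")

# Correlated subselect: products with no sales orders.
correlated = conn.execute("""
    SELECT product.id FROM product
    WHERE NOT EXISTS (SELECT 1 FROM sales_order_items
                      WHERE sales_order_items.prod_id = product.id)
""").fetchall()

# Equivalent outer-join form: unmatched products surface as NULL join rows.
joined = conn.execute("""
    SELECT product.id FROM product
    LEFT JOIN sales_order_items ON sales_order_items.prod_id = product.id
    WHERE sales_order_items.id IS NULL
""").fetchall()

print(correlated, joined)  # both queries return only product 2
```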

5. How parallelism works when there are Aggregator, Sorter, and Rank transformations in the mapping
Regardless of the transformations in the mapping, the same basic parallelism principles apply. Pipeline parallelism splits the session into three threads: extract, transform, and load. Transformation parallelism can be applied to provide extra threads, i.e., processing power, to particular transformations in which you have identified bottlenecks. Partition parallelism applies across the pipeline: for example, if you had only pipeline parallelism giving you the default 3 threads, adding a partition provides a total of 6 threads. Data smart parallelism comes into play when transformations such as those above are included in the mapping and require data to be shared between partitions; data smart features are applied under the covers to exchange data, so there is no extra work needed to ensure data integrity.

6. Do we need to prepare the source file for partition parallelism?
This depends on the data in the source file. For complex files such as EBCDIC COBOL data files there is little you can do; you could use a DD command or split, but this would depend on the COBOL program creating the file. PowerCenter provides flat-file partitioning. However, if you want to use partitioning for delimited and fixed-width files in earlier versions of PowerCenter, you can use split or csplit on UNIX to break the source into multiple sessions. You can also perform pre-processing with an additional session that routes the data to multiple output files using a hash function, which is similar to a modulus function. Relational sources need no such pre-processing.

When you add more partitions to a transformation such as a Lookup transformation, which involves a cache, it needs more memory resources. However, single-mapping memory is limited to 2 GB. What are the ways to mitigate this?

If you use cache partitioning, the PowerCenter Server only requires a portion of the total memory for each partition.


Otherwise, you are encountering 32-bit computing limits. 64-bit processing provides far more capability with respect to memory usage: if you run PowerCenter as a 64-bit application, it takes full advantage of the large addressable memory, which mitigates these memory limitations.
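The hash-function pre-processing described in section 6 can be sketched as follows. This splits a delimited source's rows into N partitions using a modulus over a hashed key, so rows with the same key always land in the same partition (the row layout and key position are assumptions for illustration):

```python
import zlib

def partition_rows(rows, num_partitions, key_index=0):
    """Route delimited rows into partition buckets by hashed key."""
    partitions = [[] for _ in range(num_partitions)]
    for row in rows:
        key = row.split(",")[key_index].encode()
        # crc32 % N plays the role of the hash/modulus function.
        partitions[zlib.crc32(key) % num_partitions].append(row)
    return partitions

rows = ["1001,alice", "1002,bob", "1001,carol", "1003,dave"]
parts = partition_rows(rows, 2)
# Both "1001" rows are guaranteed to land in the same partition bucket.
```

In practice each bucket would be written to its own output file and fed to its own session or partition.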

7. What is the difference between partition parallelism and database partitioning?
Database partitioning applies to DB2 multi-node targets. When this type of partitioning is selected, the PowerCenter Server queries the DB2 system and loads partitioned data into the corresponding nodes. Partition parallelism, by contrast, is applied by the PowerCenter server itself across the session pipeline, independent of how the target database is partitioned.

How do you determine the optimal degree of partitioning/parallelism based on your hardware?

Consider the mapping that will be partitioned. Basic tuning methodology suggests running the session a couple of times without partitioning to get a baseline number, then assessing whether session performance meets expectations. If not, determine where the bottleneck exists by using the performance statistics gathered during the session (turn on Collect Performance Information in the session properties) as well as the thread statistics provided in the log file (the closer to 100% busy a thread is, the more likely it is a bottleneck).

You must also consider what else runs in parallel (jobs outside of PowerCenter, other PowerCenter sessions, etc.) to know how much of the machine's resources will be available. For example, if you have a 4-CPU box but other applications, sessions, or database jobs run at the same time, you might only have 2 CPUs available. Know how much memory is available and where it might be appropriate to add it, either to a particular transformation or to the entire session (DTM buffer size in the session properties).

Once you have determined a baseline performance number as well as a reasonable target (don't try to partition a session that already runs in a couple of minutes!), start by adding partition points where they are most applicable and run again. If that doesn't provide a benefit, try adding a partition to see if there is one. If these attempts fail to improve performance, contact Informatica Technical Support for additional assistance.

8. How many sessions can run concurrently? This depends on your server capacity as well as your server configuration. There is a property in the server configuration called Max No. of Concurrent Sessions. This defaults to 10, but you can change it to meet your needs or to scale to your hardware capacity. If you change this number, you should also change the shared memory property in the server configuration. A general guideline is: For every 10 sessions in the Max No. of Concurrent Sessions property, add 2 MB to shared memory.
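The shared-memory guideline above can be turned into a quick back-of-the-envelope calculation (this is just the text's rule of thumb, not an official Informatica formula):

```python
# Add 2 MB of shared memory for every 10 sessions configured in the
# Max No. of Concurrent Sessions server property.

def extra_shared_memory_mb(max_concurrent_sessions):
    return (max_concurrent_sessions // 10) * 2

print(extra_shared_memory_mb(30))  # 30 sessions -> 6 MB of shared memory
```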

9. Is there any correlation between the number of CPUs and the number of concurrent running sessions?
Any non-partitioned session should consume between roughly 75% and 150% of a single CPU. This depends directly on the CPU, the memory bus, motherboard clocking, and memory clocking; the impact is lower for faster, newer CPUs and higher for older, slower systems. As you add partition points, the burden on the CPU increases, but the amount of the increase depends directly on the logic of the transformation, and the PowerCenter server cannot estimate the specific load for any given CPU. For planning purposes, Informatica recommends 1.25 CPUs per session, i.e., 3 sessions for a 4-CPU machine with nothing else running on it. If databases or other software run on the machine, their CPU usage must also be considered for load planning and load balancing. Also keep in mind that additional PowerCenter sessions loading to a database on the same machine as the server engine increase the database's CPU requirements.

If we can add more resources to maximize performance, then reading from the source and loading to the target becomes the bottleneck. How does Informatica handle that bottleneck? If the bottleneck truly is the source or target (e.g., the database), then you must resolve the bottleneck at the source or target; you may need to configure and tune the database.
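The 1.25-CPUs-per-session recommendation above can be expressed as a quick planning calculation (a capacity-planning heuristic from the text, not a hard limit):

```python
import math

def max_planned_sessions(cpus, cpus_per_session=1.25):
    """Sessions to plan for on a dedicated machine, per the rule of thumb."""
    return math.floor(cpus / cpus_per_session)

print(max_planned_sessions(4))  # -> 3 sessions on a dedicated 4-CPU machine
```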

10. How do you prevent duplicate data with pipeline partitioning?
Pipeline partitioning wouldn't cause duplicate data. It simply breaks the source, transformation, and load process into three threads. The threads are "shared nothing", so the PowerCenter Server passes the data from the source thread to the transformation and load threads without adding anything extra. The question is more applicable to partition parallelism; there, the data smart features operate under the covers to ensure that data integrity is preserved.

11. Basic Performance Tips for ETL/Informatica Mappings

1. Always use sorted data for very large data aggregations, or use the 64-bit PowerCenter server and allocate a large amount of memory. Sorted aggregations run much faster than unsorted aggregations.
2. Aggregator transformations do not sort data. The Aggregator uses a clustering algorithm, not a sort algorithm; when there are duplicate rows it may emit data in a seemingly sorted order, but it does not guarantee it.
3. Keep mappings as simple as possible; the smaller, the better in terms of performance and tuning. Divide and conquer is the best strategy for fast mapping performance. Sometimes multi-staging the work, or splitting the workload between the database and stages, can release upstream dependencies and increase parallelism.
4. Allocate as much memory as possible for mapping objects that cache.
5. Aggregator transformations can be used to pivot (de-normalize) data.
6. When replacing PERL code, break the code into units of work and use each unit as a design step in the mapping architecture. Develop the overall complex mapping, then break it apart into smaller, manageable steps.


7. Keep mapping objects as streamlined as possible. Run the data through the transforms, not around them; this helps the partitioning options at the session level as well as the parallelism capabilities of the mapping.
8. When using a Sorter, Aggregator, Joiner, or Lookup transformation, keep the keys as "small" as possible (measured in precision). Much of the same mathematics behind relational database indexes also applies to the "indexed" fields these operations use.
9. Keep filter conditions simple and move complex condition expressions into Expression objects; this keeps the filter fast. When a filter runs slowly, it is usually because of a complex condition.
10. Break complex conditions down into smaller parts. Use variables within an expression to build complex expression logic; this keeps mappings more maintainable.
11. Never have more than five (5) targets per mapping; each additional target degrades performance. Complex maps usually demand multiple targets, but the more targets you have, the poorer the performance.
12. Complex architectures usually require update strategies within the mapping. Update strategies can cause a performance hit in your session, sometimes a significant one; minimize the use of Update Strategy transformations for optimal performance.
13. Minimize crossing port lines. Any time these "fields" move from object to object, they are shuffled in memory; by keeping the field lines as straight as possible, you give the server internals a chance to copy chunks of memory rather than move data field by field.
14. For very large and complex mappings running large volumes of data (~50 million+ rows), use the 64-bit PowerCenter server for optimal performance; it provides access to plenty of memory and high-speed performance for large mappings.
15. Any mapping with 50+ objects is simply too large and must be broken down into multiple mappings.
16. To create complex output (say, a mainframe ASCII file), use a single flat file with a single string field (4K if necessary) and format the string in one or more export "expressions". Use the LPAD and RPAD functions to re-format data, and put record indicator columns on the output side.
17. Always set the "master" in a Joiner to the smaller of the two tables (except when using a detail outer join or full outer join); this keeps the caching of the two sets to the minimum number of rows. Replace a Lookup with a Joiner whenever you are faced with extremely large data sets.
18. Use reusable Lookups instead of the same Lookup multiple times; this helps reuse the lookup caches and improves performance.
19. If you are going to use a Sequence Generator and share it across mappings, or make the session run in parallel, set it to cache a minimum of 10,000 values.
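The fixed-width formatting idea in tip 16 can be sketched in a few lines. Python's ljust/rjust stand in for Informatica's RPAD/LPAD, and the field names and widths below are invented for illustration:

```python
def fixed_width_record(record_type, account_id, amount):
    """Build one fixed-width output line in a single string expression."""
    return (
        record_type.ljust(2)            # record indicator column, like RPAD
        + account_id.rjust(10, "0")     # zero-padded key, like LPAD(id, 10, '0')
        + f"{amount:.2f}".rjust(12)     # right-aligned amount field
    )

line = fixed_width_record("DT", "4711", 1234.5)
print(repr(line), len(line))  # every record is exactly 2 + 10 + 12 = 24 chars
```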


12. Other Trivia Stuff: Challenge Optimizing PowerCenter to create an efficient execution environment.

Description Although PowerCenter environments vary widely, most sessions and/or mappings can benefit from the implementation of common objects and optimization procedures. Follow these procedures and rules of thumb when creating mappings to help ensure optimization.

General Suggestions for Optimizing

1. Reduce the number of transformations. There is always overhead involved in moving data between transformations.
2. Consider more shared memory for a large number of transformations. Session shared memory between 12 MB and 40 MB should suffice.
3. Calculate once, use many times.
o Avoid calculating or testing the same value over and over.
o Calculate it once in an expression and set a True/False flag.
o Within an expression, use variable ports to calculate a value that can be used multiple times within that transformation.
4. Only connect what is used.
o Delete unnecessary links between transformations to minimize the amount of data moved, particularly in the Source Qualifier.
o This is also helpful for maintenance: if a transformation needs to be reconnected, it is best to have only the necessary ports set as input and output.
o In Lookup transformations, change unused ports to be neither input nor output. This makes the transformations cleaner looking and keeps the generated SQL override as small as possible, which cuts down on the amount of cache necessary and thereby improves performance.
5. Watch the data types.
o The engine automatically converts compatible types.
o Sometimes data conversion is excessive: data types are automatically converted when types differ between connected ports. Minimize data type changes between transformations by planning the data flow before developing the mapping.
6. Facilitate reuse.
o Plan for reusable transformations up front.
o Use variables: both mapping variables and variable ports. Variable ports are especially beneficial when they can be used to calculate a complex expression or perform an unconnected lookup call only once instead of multiple times.
o Use Mapplets to encapsulate multiple reusable transformations.
o Use Mapplets to leverage the work of critical developers and minimize mistakes when performing similar functions.
7. Only manipulate data that needs to be moved and transformed.
o Reduce the number of non-essential records passed through the entire mapping.
o Use active transformations that reduce the number of records as early in the mapping as possible (i.e., place filters and aggregators as close to the source as possible).
o Select an appropriate driving/master table when using joins; the table with the smaller number of rows should be the driving/master table for a faster join.


8. Utilize single-pass reads.
o Redesign mappings to utilize one Source Qualifier to populate multiple targets, so the server reads the source only once. If you have different Source Qualifiers for the same source (e.g., one for delete and one for update/insert), the server reads the source once per Source Qualifier.
o Remove or reduce field-level stored procedures. If you use field-level stored procedures, the PowerCenter server has to call the stored procedure for every row, slowing performance.

13. Conclusion
The points above are high-level indications of where to perform "tuning" in Informatica's products. They are in no way permanent problem solvers, nor an end-all solution; they are simply items which, if tuned first, might make a difference. The level of skill available for particular items will cause results to vary.

References:
https://community.informatica.com
http://datawarehouse.ittoolbox.com

About Wipro Technologies Wipro is the first PCMM Level 5 and SEI CMMi Level 5 certified IT Services Company globally. Wipro provides comprehensive IT solutions and services (including systems integration, IS outsourcing, package implementation, software application development and maintenance) and Research & Development services (hardware and software design, development and implementation) to corporations globally. Wipro's unique value proposition is further delivered through our pioneering Offshore Outsourcing Model and stringent Quality Processes of SEI and Six Sigma.

Wipro in MHLS Wipro Technologies offers world class software and technology solutions for the insurance industry. Wipro has successfully executed several projects spanning Life, P&C, Re-insurance Companies and Insurance Brokers. We address Sales and Distribution, Underwriting, Policy Administration, Accounting, Claims Processing and Backoffice. Wipro’s unique value proposition is delivered through our pioneering Offshore Development Model and stringent Quality Processes including ISO 9000, SEI CMM Level 5 and Six Sigma.


© Copyright 2002. Wipro Technologies. All rights reserved. No part of this document may be reproduced, stored in a retrieval system, transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without express written permission from Wipro Technologies. Specifications subject to change without notice. All other trademarks mentioned herein are the property of their respective owners.
