Developer Ref Guide

January 22, 2017 | Author: Amit Sharma | Category: N/A

Amit Sharma [email protected] Contact for Hyperion Training and consultancies

Some Important Considerations While Working with Essbase (Developer Reference Guide)


Essbase Calculation Performance Tuning
1. After parallel calculation is enabled, Essbase by default uses the last sparse dimension in the outline to identify tasks that can be performed concurrently. The distribution of data, however, may cause one or more tasks to be empty; that is, there are no blocks to be calculated in the part of the database identified by a task. This leads to uneven load balancing and reduces the effectiveness of parallel calculation.
2. To resolve this, you can have Essbase use additional sparse dimensions when identifying tasks for parallel calculation. For example, if you have a FIX statement on a member of the last sparse dimension, you can include the next-to-last sparse dimension from the outline as well. Because each unique member combination of these two dimensions is identified as a potential task, more and smaller tasks are created, increasing the opportunities for parallel processing and improving load balancing.
3. Add or modify CALCTASKDIMS in the essbase.cfg file on the server, or use the calculation script command SET CALCTASKDIMS at the top of the script.
Sample code:
SET CALCTASKDIMS 2
This includes the last two sparse dimensions in task identification and can significantly improve calculation performance.
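The effect of CALCTASKDIMS on task identification can be illustrated with a back-of-envelope calculation. The member counts below are hypothetical, not taken from any real outline:

```python
def potential_tasks(sparse_member_counts, calc_task_dims):
    """Tasks are identified per unique member combination of the last
    `calc_task_dims` sparse dimensions in the outline."""
    tasks = 1
    for count in sparse_member_counts[-calc_task_dims:]:
        tasks *= count
    return tasks

# Sparse dimensions in outline order, with hypothetical member counts:
# Product = 100 members, Market = 25 members (Market is last).
sparse_counts = [100, 25]

print(potential_tasks(sparse_counts, 1))  # 25 potential tasks (last dimension only)
print(potential_tasks(sparse_counts, 2))  # 2500 potential tasks (last two dimensions)
```

More, smaller tasks give the scheduler a better chance of keeping all threads busy even when many member combinations hold no blocks.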

Parallel Calculation and Tuning
1. Parallel calculation can be enabled at the system level in the essbase.cfg file, or at the calculation script level.
Sample code:
SET CALCPARALLEL
SET CALCTASKDIMS
2. Parallel calculation works only with uncommitted access.
3. There is a risk that parallel calculation may freeze the machine.
4. Use FIX commands so that only the required data blocks are calculated; avoid the cross-dimensional operator in most cases.

Intelligent Calculation and Performance Tuning
Switch on Intelligent Calculation. If Intelligent Calculation is switched off, the calculation result will still be correct, because every data block is recalculated regardless of whether it is marked dirty or clean; the price is that far too many data blocks are included in the calculation. Intelligent Calculation can be switched on and off inside a calculation script as needed, so that only the necessary data blocks are recalculated.

Use the commands SET CLEARUPDATESTATUS ONLY|AFTER|OFF and SET UPDATECALC ON|OFF.
Sample:
SET UPDATECALC OFF;
SET CLEARUPDATESTATUS AFTER;
CALC TWOPASS;

Time Dimension - Calculation Performance
By default, the time dimension is dense. But if you load data incrementally from a MaxL script, with data loaded at the end of every month, you can make the time dimension sparse; then, with Intelligent Calculation enabled, only the data blocks marked as dirty are recalculated. This can significantly improve data loading performance.

Incremental Data Loading
Many companies load data incrementally. For example, a company may load data each month for that month. To optimize calculation performance when you load data incrementally, make the dimension tagged as time a sparse dimension. If the time dimension is sparse, the database contains a data block for each time period. When you load data by time period, Essbase accesses fewer data blocks because fewer blocks contain the relevant time period. Thus, if you have Intelligent Calculation enabled, only the data blocks marked as dirty are recalculated. For example, if you load data for March, only the data blocks for March and the dependent parents of March are updated. However, making the time dimension sparse when it is naturally dense may significantly increase the size of the index, creating possibly slower performance due to more physical I/O activity to accommodate the large index. If the dimension tagged as time is dense, you still receive some benefit from Intelligent Calculation when you do a partial data load for a sparse dimension. For example, if Product is sparse and you load data for one product, Essbase recalculates only the blocks affected by the partial load, although time is dense and Intelligent Calculation is enabled. Note: This method applies only to block storage (BSO) databases, where Intelligent Calculation is available.

Data Block - Calculation Performance
1. Use FIX instead of the cross-dimensional operator. Compare the following two statements:
FIX(Jan)
Sales = Sales * 1.05;
ENDFIX
Sales(Sales -> Jan = Sales -> Jan * 1.05;);

The second is inefficient: it scans the entire time dimension even though only Jan is calculated. The first calculates only Jan for the Sales block, which is more efficient.
2. Data block size. It should be between 10 KB and 100 KB. If the block is too large (>100 KB), Intelligent Calculation does not work well. If the block is too small (below about 10 KB), the index may become too large, which slows calculation.
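The block-size guidance above can be checked with a quick calculation, using the standard BSO sizing rule that block size in bytes is the product of the stored member counts across the dense dimensions times 8 (each stored cell is an 8-byte double). The dense-dimension member counts here are hypothetical:

```python
def block_size_bytes(dense_stored_members):
    """Product of stored members across dense dimensions * 8 bytes per cell."""
    size = 8
    for count in dense_stored_members:
        size *= count
    return size

# e.g. Time with 17 stored members and Measures with 40 stored members
size = block_size_bytes([17, 40])
print(size)                       # 5440 bytes
print(10_000 <= size <= 100_000)  # False -> below the suggested 10 KB floor
```

A result outside the 10 KB to 100 KB window suggests reconsidering which dimensions are tagged dense.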

Essbase Committed Setting
Under uncommitted access, Essbase locks blocks for write access until it finishes updating the block. Under committed access, Essbase holds locks until the transaction completes. With uncommitted access, blocks are therefore released more frequently than with committed access. Essbase performance is better with uncommitted access; in addition, parallel calculation works only with uncommitted access.
Database performance: Uncommitted access always yields better database performance than committed access. With uncommitted access, Essbase does not create locks that are held for the duration of a transaction but commits data based on short-term write locks.
Data consistency: Committed access provides a higher level of data consistency than uncommitted access. Retrievals from the database are more consistent. Also, only one transaction at a time can update data blocks when the isolation level is set to committed access. This factor is important in databases where multiple transactions attempt to update the database simultaneously.
Data concurrency: Uncommitted access provides better data concurrency than committed access, because blocks are released more frequently than during committed access. With committed access, deadlocks can occur.
Database rollbacks: If a server crash or other interruption occurs during active transactions, the Essbase kernel rolls back the transactions when the server is restarted. With committed access, rollbacks return the database to its state before the transactions began. With uncommitted access, rollbacks may leave some data committed and some data not committed.

Essbase Restructure
There are three kinds of restructure: dense restructure, sparse restructure, and outline-only restructure. If a member of a dense dimension is changed, the restructure is a dense restructure. A dense restructure takes a long time, because data blocks must be re-created. A sparse restructure happens when only a sparse member is changed; it restructures only the index, so it should not take much time. An outline-only restructure changes neither data blocks nor the index, so it takes almost no time.




Dense restructure: If a member of a dense dimension is moved, deleted, or added, Essbase restructures the blocks in the data files and creates new data files. When Essbase restructures the data blocks, it regenerates the index automatically so that index entries point to the new data blocks. Empty blocks are not removed. Essbase marks all restructured blocks as dirty, so after a dense restructure you must recalculate the database. Dense restructuring, the most time-consuming of the restructures, can take a long time to complete for large databases.



Sparse restructure: If a member of a sparse dimension is moved, deleted, or added, Essbase restructures the index and creates new index files. Restructuring the index is relatively fast; the time required depends on the index size.



Outline-only restructure: If a change affects only the database outline, Essbase does not restructure the index or data files. Member name changes, creation of aliases, and dynamic calculation formula changes are examples of changes that affect only the database outline.

Validate Essbase Structure
Using VALIDATE to Check Integrity. The VALIDATE command performs many structural and data integrity checks:

• Verifies the structural integrity of free space information in the index.

• Compares the data block key in the index page with the data block key in the corresponding data block. The Essbase index contains an entry for every data block. For every read operation, VALIDATE automatically compares the index key in the index page with the index key in the corresponding data block and checks other header information in the block. If it encounters a mismatch, VALIDATE displays an error message and continues processing until it has checked the entire database.

• Restructures data blocks whose restructure was deferred with incremental restructuring.

• Checks every block in the database to make sure each value is a valid floating-point number.

• Verifies the structural integrity of the LRO catalog.

Note: When you issue the VALIDATE command, we recommend placing the database in read-only mode. As Essbase encounters mismatches, it records error messages in the VALIDATE error log. You can specify a file name for error logging; Essbase prompts you for this information if you do not provide it. The VALIDATE utility runs until it has checked the entire database. You can use the VALIDATE command in ESSCMD to perform these structural integrity checks. During index free space validation, the VALIDATE command verifies the structural integrity of free space information in the index. If integrity errors exist, Essbase records them in the VALIDATE log. The file that you specified on the VALIDATE command holds the error log.

If VALIDATE detects integrity errors regarding the index free space information, the database must be rebuilt. You can rebuild in the following ways:

• Restore the database from a recent system backup.

• Restore the data by exporting data from the database, creating an empty database, and loading the exported data into the new database.

Principles for Essbase Outline Design
Point 1: Do not use too many nested levels; it is fine to have many members in one level, but it is unwise to have deeply nested levels where each level has only a few members. This is a very important principle when designing a cube. The recommended outline sequence is: time, accounts, the other dense dimensions, the sparse dimension with the fewest members, the other sparse dimensions, then the attribute dimensions.
Point 2: Calculation performance may be affected if a database outline has multiple flat dimensions. A flat dimension has very few parents, and each parent has many thousands of children; in other words, flat dimensions have many members and few levels. You can improve performance for outlines with multiple flat dimensions by adding intermediate levels to the database outline.
The above two points come from different sources, and they can look different, even contradictory. My understanding: keep the number of levels small in any case, but spread the large member counts across the parent level; that is, have many parents, each with relatively few children, rather than a few parents with many thousands of children.


Simulated Calculation You can simulate a calculation using SET MSG ONLY in a calculation script. A simulated calculation produces results that help you analyze the performance of a real calculation that is based on the same data and outline. By running a simulated calculation with a command such as SET NOTICE HIGH, you can mark the relative amount of time each sparse dimension takes to complete. Then, by performing a real calculation on one or more dimensions, you can estimate how long the full calculation will take, because the time a simulated calculation takes to run is proportional to the time that the actual calculation takes to run. For example, if the calculation starts at 9:50:00 AM, and the first notice is time-stamped at 09:50:10 AM and the second is time-stamped at 09:50:20 AM, you know that each part of the calculation took 10 seconds. If you then run a real calculation on only the first portion and note that it took 30 seconds to run, you know that the other portion also will take 30 seconds. If there were two messages total, then you would know that the real calculation will take approximately 60 seconds (20 / 10 * 30 = 60 seconds). Use the following topics to learn how to perform a simulated calculation and how to use a simulated calculation to estimate calculation time.
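The estimation arithmetic described above can be sketched as a small helper: the simulated calculation's notice intervals give the relative share of each portion, and timing one real portion scales the whole estimate.

```python
def estimate_total_seconds(simulated_portion_secs, real_first_portion_secs):
    """Scale the total simulated time by the ratio of real to simulated
    time observed for the first portion."""
    total_simulated = sum(simulated_portion_secs)
    scale = real_first_portion_secs / simulated_portion_secs[0]
    return total_simulated * scale

# Two notices, 10 seconds apart each; the real run of the first portion
# took 30 seconds -> 20 / 10 * 30 = 60 seconds total.
print(estimate_total_seconds([10, 10], 30))  # 60.0
```

The proportionality assumption holds only to the extent that the simulated and real calculations touch the same data and outline, as the text notes.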

Performing a Simulated Calculation Before you can estimate calculation time, you must perform a simulated calculation on a data model that is based on your actual database. To perform a simulated calculation: 1. Create a data model that uses all dimensions and all levels of detail about which you want information. 2. Load all data. This procedure calculates only data loaded in the database.

3. Create a calculation script with these entries: SET MSG ONLY; SET NOTICE HIGH; CALC ALL; If you are using dynamic calculations on dense dimensions, substitute the CALC ALL command with the specific dimensions that you need to calculate; for example, CALC DIM EAST. Note: If you try to validate the script, Essbase reports an error. Disregard the error. 4. Run the script. 5. Find the first sparse calculation message in the application log and note the time in the message. 6. Note the time for each subsequent message. 7. Calculate the dense dimensions of the model that are not being dynamically calculated: CALC DIM (DENSE_DIM1, DENSE_DIM2, …);

8. Calculate the sparse dimensions of the model: CALC DIM (SPARSEDIM1, SPARSEDIM2, …); 9. Project the intervals at which notices will occur, and then verify against sparse calculation results. You can then estimate calculation time.

Select the Essbase Compression Method
1. If the database is not very large, you do not need to change the compression setting. By default, Essbase uses bitmap compression, which is the best choice in most cases.
2. If your database is about 90% dense, you may use zlib as the compression method.
3. If your database is sparse and has many repeated non-missing data values, use RLE as the compression method.
By the way, index value pair compression is selected automatically by Essbase. Index value pair addresses compression on databases with larger block sizes, where the blocks are highly sparse. This compression algorithm is not selectable but is automatically used whenever appropriate by the database. The user must still choose between the compression types None, bitmap, RLE, and zlib through Administration Services.
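The selection rules above can be expressed as a small decision helper. This is a hypothetical sketch: the 90% threshold comes from the text, but the density measure and the boolean flag are assumptions for illustration, and the actual setting is made through Administration Services:

```python
def choose_compression(block_density, many_repeated_nonmissing_values):
    """Pick a compression method following the rules of thumb above.

    block_density: fraction of non-missing cells per block (0.0 to 1.0),
    an assumed input for this sketch.
    """
    if block_density >= 0.90:
        return "zlib"   # very dense data compresses well with zlib
    if many_repeated_nonmissing_values:
        return "RLE"    # long runs of repeated values favor run-length encoding
    return "bitmap"     # the default, and the best choice in most cases

print(choose_compression(0.95, False))  # zlib
print(choose_compression(0.10, True))   # RLE
print(choose_compression(0.40, False))  # bitmap
```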

Recovery from the Spreadsheet Log
The Essbase Spreadsheet Add-in can update Essbase at the data-cell level. In case of an Essbase disaster, Essbase can be restored from the last backup. Suppose the last backup was last night, and the disaster happens at lunchtime today; the data from this morning is not restored by default. How do you restore all of the detail data, including updates made right up to the moment of the disaster? Here is the method:
1. Set SSAUDIT or SSAUDITR in the essbase.cfg file.
Sample code:
SSAUDITR Sample Basic C:\logfoldername
The statement above enables the spreadsheet log for the Basic database of the Sample application. If logfoldername is not specified, the default log folder is used.
SSAUDIT Sample
The statement above enables the spreadsheet log for all databases of the Sample application.
SSAudit xxxxx xxxxx c:\sslog
The statement above enables the spreadsheet log for all databases of all applications.
2. After adding the above settings to essbase.cfg, restart the Essbase Server. You will see the following in C:\Hyperion\logs\essbase\app\Sample\Sample.log:
[Sun Nov 15 22:22:50 2009]Local/Sample///Info(1002088)
Starting Spreadsheet Log [C:\Hyperion\products\Essbase\EssbaseServer\APP\Sample\Basic\Basic.alg] For Database [Basic]
This means the setting is successful.


3. You can now continue making updates to Sample.Basic using the Excel Spreadsheet Add-in; everything is logged.
4. Now, suppose your Essbase has just been restored from a backup; furthermore, you can recover transactions from the update log. To do so, use the Essbase command-line facility, ESSCMD, from the server console. The following ESSCMD command sequence loads the update log:
LOGIN hostnode username password
SELECT appname dbname    //Example: SELECT Sample Basic
LOADDATA 3 filepath:appname.ATX    //Example: LOADDATA 3 C:\Hyperion\products\Essbase\EssbaseServer\APP\Sample\Basic\Basic.atx
EXIT
5. The difference between SSAUDIT and SSAUDITR: SSAUDIT appends log data to the existing logs after archiving; SSAUDITR clears the logs at the end of the archiving process.
6. Note that you should manually back up and clear the Basic.atx and Basic.alg log files so that they do not grow too large.

Hot Backup - OpenLDAP & Shared Services
Steps:
1. Back up any related components, including the Shared Services relational database and the OpenLDAP database.
Note: The Shared Services relational database and the OpenLDAP database must be backed up at the same time. Ensure that the administrator does not register a product application or create an application group at backup time.
2. Run this command to create a hot backup of OpenLDAP:
Windows: c:/Hyperion/products/Foundation/server/scripts/backup.bat HSS_backup
UNIX: /home/username/Hyperion/products/Foundation/server/scripts/backup.sh /home/username/backups/HSS_backup
To recover Shared Services from a hot backup:
1. Stop OpenLDAP and Shared Services.
2. Recover the Shared Services relational database with RDBMS tools, using the backup with the same date as the OpenLDAP backup.
3. If you use OpenLDAP as Native Directory, recover the OpenLDAP database by running, for example:
Windows (noncatastrophic recovery): C:/Hyperion/products/Foundation/server/scripts/recover.bat c:/HSS_backup
UNIX (catastrophic recovery): /home/username/Hyperion/products/Foundation/server/scripts/recover.sh /home/username/HSS_backup catRecovery


Note: physical backup vs. logical backup. A physical backup can be hot or cold:
• Hot backup - Users can make changes to the database during a hot backup. Log files of changes made during the backup are saved, and the logged changes are applied to synchronize the database and the backup copy. A hot backup is used when a full backup is needed and the service level does not allow system downtime for a cold backup.
• Cold backup - Users cannot make changes to the database during a cold backup, so the database and the backup copy are always synchronized. A cold backup is used only when the service level allows for the required system downtime.
Note: A cold full physical backup is recommended.
• Full - Creates a copy of data that can include parts of a database such as the control file, transaction files (redo logs), archive files, and data files. This backup type protects data from application error and safeguards against unexpected loss by providing a way to restore the original data. Perform this backup weekly or biweekly, depending on how often your data changes. Making full backups cold, so that users cannot make changes during the backup, is recommended.
Note: The database must be in archive log mode for a full physical backup.
• Incremental - Captures only the changes made after the last full physical backup. The files differ between databases, but the principle is that only transaction log files created since the last backup are archived. An incremental backup can be done hot, while the database is in use, but it slows database performance. In addition to backups, consider the use of clustering or log shipping to secure database content.
Logical Backup
A logical backup copies data, but not physical files, from one location to another. A logical backup is used for moving or archiving a database, tables, or schemas, and for verifying the structures in a database.

Cold Backup - OpenLDAP & Shared Services
1. Stop OpenLDAP and Shared Services.
2. Back up the Shared Services directory from the file system. Shared Services files are in HYPERION_HOME/deployments and HYPERION_HOME/products/Foundation.
3. Optional:
• Windows - Back up these Windows registry entries using REGEDIT and export: HKLM/SOFTWARE/OPENLDAP and HKLM/SOFTWARE/Hyperion Solutions
• UNIX - Back up the .hyperion.* files in the home directory of the user name used for configuring the product, and the user profile (.profile or equivalent) file for that user name.
4. Shut down the Shared Services relational database and perform a cold backup using RDBMS tools.

To recover Shared Services from a cold backup:
1. Restore the operating system.
2. Using Oracle Hyperion Enterprise Performance Management System Installer, Fusion Edition, install the Shared Services binaries.
Note: Do not configure the installation. The OpenLDAP service is created during installation.
3. Restore the Shared Services cold-backup directory from the file system.
4. Restore the cold backup of the Shared Services relational database using database tools.
5. Optional: Restore the Windows registry entries from the cold backup.
6. (Windows) If the Shared Services Web application service must be re-created, run HYPERION_HOME/deployments/AppServer/bin/installServiceSharedServices9.bat.
7. Start the OpenLDAP service and Oracle's Hyperion Shared Services.

Essbase Backup and Recovery
Note: Essbase outline changes will NOT be logged!
Note: In the essbase.cfg file, set the SPLITARCHIVEFILE configuration to TRUE. This splits the archive into smaller files.
A calculation script fragment:
... NA_Product * ("Sales Volume"->IDR->Marketing / ("Sales Volume"->IDR->Marketing->"All Products"->"Total Customer" - "Sales Volume"->IDR->Marketing->"All Products"->"MICHELIN NORTH AMERICA, INC." - "Sales Volume"->IDR->"21000"->"Reclaimed Rubber"->"C-0005-01"));
SET CREATENONMISSINGBLK OFF;
ENDFIX
... and more aggregate commands similarly.
Methods to Increase Calculation Speed:
1. When Intelligent Calculation is turned off and CREATENONMISSINGBLK is ON, all blocks within the scope of the calculation script are calculated, regardless of whether they are marked clean or dirty.
2. Cross-dimensional operators can be reduced: include more members inside FIX instead of using "->".

Bulk User Creation in Essbase
How can we create bulk users in Essbase? Say 500 users need to be created at one time. What is the technique? Suppose you have the 500 users listed in c:\user.csv; you can generate a batch of MaxL commands using the following JScript code. Copy the code below, save it as gm.js, and execute gm.js on Windows. (The loop body is a sketch: the generated MaxL "create user" statement is illustrative, so adjust it to your own security model.)
var fso = new ActiveXObject("Scripting.FileSystemObject");
var rs = fso.OpenTextFile("C:\\user.csv");
var ws = fso.CreateTextFile("C:\\MaxL.txt");
// Read one user name per line and emit a MaxL "create user" statement.
while (!rs.AtEndOfStream) {
    var name = rs.ReadLine();
    ws.WriteLine("create user '" + name + "' identified by 'password';");
}
rs.Close();
ws.Close();
You can then run the generated MaxL.txt through the MaxL Shell (essmsh).