Value Dimension and Aggregation in HFM
For folks who don’t understand the Value dimension in HFM: this is a visualization of the Value dimension. If you don’t have a reference like this, you should.
Aggregation Details

A quick review of HFM's aggregation process for the Account, ICP, and Custom dimensions. Hopefully, it will help you optimize your metadata.

Definition: A sub-cube refers to all data for a fixed Scenario, Year, Entity, Parent, and Value. In other words, a sub-cube covers every combination of Account, ICP, Custom1, Custom2, Custom3, and Custom4 for that fixed point.

When a user requests a single number, all of the data for the sub-cube is loaded into RAM on the application server. We start with data for base members only (i.e., for each number, all six dimensions are base-level). While loading the sub-cube, we calculate numbers for all parent Accounts (using only base members for the other five dimensions). The performance of this step depends mostly on the number of base numbers in the sub-cube and on the number of ancestors in the Account dimension (i.e., its depth). In the worst case, a new number is calculated for every parent account of every base number. For example, if the sub-cube has 1,000 base numbers and every base account is 10 levels deep, we could end up with 10,000 derived numbers in RAM, one per base number per ancestor, on top of the 1,000 base numbers. (A sketch of this rollup appears below.)

Later, when a user asks for a number for a parent ICP member and/or parent Custom members, the system calculates only what is necessary to determine the requested number. This requires inspecting all of the numbers for the specified account and determining whether each one contributes to the total. In general, this is fast, since fewer numbers need to be inspected (all of the numbers for just one account) and the end result is a single number. Therefore, the depth of the Custom dimensions really doesn't matter too much. (A second sketch below illustrates this on-demand aggregation.)

In all cases, calculations are data-driven, not metadata-driven. That is, we never calculate by asking a member for all of its children and then adding up thousands of NODATA results. We always look at the base data that exists and push the results up to the parents. This approach works best for sparse sub-cubes.

Summary: In general, the best aggregation performance will be achieved by limiting the number of base numbers in a sub-cube and by limiting the depth of the Account dimension. Also, data grids should minimize the number of sub-cubes required; a data grid involving several Entities will require HFM to cache and process several sub-cubes.
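To make the load-time rollup concrete, here is a minimal Python sketch of a data-driven Account rollup over a sub-cube. Everything here (ACCOUNT_PARENT, load_subcube, the sample hierarchy and cells) is an illustrative assumption, not HFM's actual internals; the point is only that the work scales with the number of base cells times the Account depth.

from collections import defaultdict

# Account hierarchy: child -> parent. Base accounts sit at the bottom.
# This sample hierarchy is invented for illustration.
ACCOUNT_PARENT = {
    "Cash": "CurrentAssets",
    "Receivables": "CurrentAssets",
    "CurrentAssets": "TotalAssets",
    "TotalAssets": None,
}

def ancestors(account):
    """Yield every ancestor of an account, walking up the hierarchy."""
    parent = ACCOUNT_PARENT.get(account)
    while parent is not None:
        yield parent
        parent = ACCOUNT_PARENT.get(parent)

def load_subcube(base_cells):
    """Roll base cells up the Account dimension only.

    base_cells maps (account, icp, c1, c2, c3, c4) -> value, with every
    member base-level. Each base cell creates one derived cell per Account
    ancestor, so cell count scales with base cells times Account depth
    (1,000 base cells x 10 levels -> up to 10,000 derived cells).
    """
    cells = defaultdict(float)
    cells.update(base_cells)
    for (account, icp, c1, c2, c3, c4), value in base_cells.items():
        for parent_account in ancestors(account):
            # Data-driven: only combinations that actually hold data are
            # touched; no enumerating children and summing NODATA results.
            cells[(parent_account, icp, c1, c2, c3, c4)] += value
    return cells

base = {
    ("Cash", "[ICP None]", "C1a", "C2a", "C3a", "C4a"): 100.0,
    ("Receivables", "[ICP None]", "C1a", "C2a", "C3a", "C4a"): 50.0,
}
cube = load_subcube(base)
print(cube[("TotalAssets", "[ICP None]", "C1a", "C2a", "C3a", "C4a")])  # 150.0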
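And a companion sketch, under the same assumptions, of the on-demand aggregation for a parent ICP member: only the cells for the one requested account are inspected, which is why ICP and Custom depth matter far less than Account depth. ICP_PARENT, descends_from, and query are hypothetical names.

# ICP hierarchy: child -> parent. Again an invented sample.
ICP_PARENT = {
    "EntityA": "[ICP Entities]",
    "EntityB": "[ICP Entities]",
    "[ICP Entities]": None,
}

def descends_from(member, parent, hierarchy):
    """True if member equals parent or sits anywhere below it."""
    while member is not None:
        if member == parent:
            return True
        member = hierarchy.get(member)
    return False

def query(cells, account, icp_parent):
    """Aggregate on demand for one account and a parent ICP member.

    Only the cells for the single requested account are inspected, so the
    depth of the ICP (or Custom) hierarchy barely affects the cost.
    """
    total = 0.0
    for (acct, icp, *customs), value in cells.items():
        if acct == account and descends_from(icp, icp_parent, ICP_PARENT):
            total += value
    return total

cells = {
    ("Sales", "EntityA", "C1a", "C2a", "C3a", "C4a"): 100.0,
    ("Sales", "EntityB", "C1a", "C2a", "C3a", "C4a"): 25.0,
    ("Cash", "EntityA", "C1a", "C2a", "C3a", "C4a"): 999.0,  # other account, skipped
}
print(query(cells, "Sales", "[ICP Entities]"))  # 125.0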
[Figure: Value dimension hierarchy, showing [Contribution Total], [Contribution], [Contribution Adjs], [Proportion], [Elimination], [Parent Total], [Parent], and [Parent Adjs].]
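For reference, the arithmetic that links these Value members can be sketched in a few lines, following the standard HFM consolidation flow as commonly documented. The function name and the sample figures are illustrative assumptions only.

def contribution_total(parent, parent_adjs, pct_consol, elimination,
                       contribution_adjs):
    """Illustrative arithmetic linking the Value members shown above.

    Standard flow: [Parent Total] is scaled by the consolidation
    percentage into [Proportion], eliminations are layered in, and
    contribution adjustments are added last.
    """
    parent_total = parent + parent_adjs        # [Parent Total]
    proportion = parent_total * pct_consol     # [Proportion]
    contribution = proportion + elimination    # [Contribution]
    return contribution + contribution_adjs    # [Contribution Total]

# E.g., an 80%-owned subsidiary contributing 1,000 with a -50 elimination:
print(contribution_total(1000.0, 0.0, 0.80, -50.0, 0.0))  # 750.0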