
BW362
SAP NetWeaver BW, powered by SAP HANA
SAP NetWeaver - Business Intelligence

Participant Handbook
Course Version: 98
Course Duration: 3 Day(s)
Material Number: 50116137

An SAP course - use it to learn, reference it for work

Copyright Copyright © 2013 SAP AG or an SAP affiliate company. All rights reserved. No part of this publication may be reproduced or transmitted in any form or for any purpose without the express permission of SAP AG. The information contained herein may be changed without prior notice. Some software products marketed by SAP AG and its distributors contain proprietary software components of other software vendors.

Trademarks Adobe, the Adobe logo, Acrobat, PostScript, and Reader are trademarks or registered trademarks of Adobe Systems Incorporated in the United States and other countries. Apple, App Store, FaceTime, iBooks, iPad, iPhone, iPhoto, iPod, iTunes, Multi-Touch, Objective-C, Retina, Safari, Siri, and Xcode are trademarks or registered trademarks of Apple Inc. Bluetooth is a registered trademark of Bluetooth SIG Inc. Citrix, ICA, Program Neighborhood, MetaFrame now XenApp, WinFrame, VideoFrame, and MultiWin are trademarks or registered trademarks of Citrix Systems Inc. Computop is a registered trademark of Computop Wirtschaftsinformatik GmbH. Edgar Online is a registered trademark of EDGAR Online Inc., an R.R. Donnelley & Sons Company. Facebook, the Facebook and F logo, FB, Face, Poke, Wall, and 32665 are trademarks of Facebook. Google App Engine, Google Apps, Google Checkout, Google Data API, Google Maps, Google Mobile Ads, Google Mobile Updater, Google Mobile, Google Store, Google Sync, Google Updater, Google Voice, Google Mail, Gmail, YouTube, Dalvik, and Android are trademarks or registered trademarks of Google Inc. HP is a registered trademark of the Hewlett-Packard Development Company L.P. HTML, XML, XHTML, and W3C are trademarks, registered trademarks, or claimed as generic terms by the Massachusetts Institute of Technology (MIT), European Research Consortium for Informatics and Mathematics (ERCIM), or Keio University. IBM, DB2, DB2 Universal Database, System i, System i5, System p, System p5, System x, System z, System z10, z10, z/VM, z/OS, OS/390, zEnterprise, PowerVM, Power Architecture, Power Systems, POWER7, POWER6+, POWER6, POWER, PowerHA, pureScale, PowerPC, BladeCenter, System Storage, Storwize, XIV, GPFS, HACMP, RETAIN, DB2 Connect, RACF, Redbooks, OS/2, AIX, Intelligent Miner, WebSphere, Tivoli, Informix, and Smarter Planet are trademarks or registered trademarks of IBM Corporation. 
Microsoft, Windows, Excel, Outlook, PowerPoint, Silverlight, and Visual Studio are registered trademarks of Microsoft Corporation. INTERMEC is a registered trademark of Intermec Technologies Corporation. IOS is a registered trademark of Cisco Systems Inc. The Klout name and logos are trademarks of Klout Inc. Linux is the registered trademark of Linus Torvalds in the United States and other countries. Motorola is a registered trademark of Motorola Trademark Holdings LLC. Mozilla and Firefox and their logos are registered trademarks of the Mozilla Foundation. Novell and SUSE Linux Enterprise Server are registered trademarks of Novell Inc.


OpenText is a registered trademark of OpenText Corporation. Oracle and Java are registered trademarks of Oracle and its affiliates. QR Code is a registered trademark of Denso Wave Incorporated. RIM, BlackBerry, BBM, BlackBerry Curve, BlackBerry Bold, BlackBerry Pearl, BlackBerry Torch, BlackBerry Storm, BlackBerry Storm2, BlackBerry PlayBook, and BlackBerry AppWorld are trademarks or registered trademarks of Research in Motion Limited. SAVO is a registered trademark of The Savo Group Ltd. The Skype name is a trademark of Skype or related entities. Twitter and Tweet are trademarks or registered trademarks of Twitter. UNIX, X/Open, OSF/1, and Motif are registered trademarks of the Open Group. Wi-Fi is a registered trademark of Wi-Fi Alliance. SAP, R/3, ABAP, BAPI, SAP NetWeaver, Duet, PartnerEdge, ByDesign, SAP BusinessObjects Explorer, StreamWork, SAP HANA, the Business Objects logo, BusinessObjects, Crystal Reports, Crystal Decisions, Web Intelligence, Xcelsius, Sybase, Adaptive Server, Adaptive Server Enterprise, iAnywhere, Sybase 365, SQL Anywhere, Crossgate, B2B 360° and B2B 360° Services, m@gic EDDY, Ariba, the Ariba logo, Quadrem, b-process, Ariba Discovery, SuccessFactors, Execution is the Difference, BizX Mobile Touchbase, It’s time to love work again, SuccessFactors Jam and BadAss SaaS, and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP AG in Germany or an SAP affiliate company. All other product and service names mentioned are the trademarks of their respective companies. Data contained in this document serves informational purposes only. National product specifications may vary.

Disclaimer These materials are subject to change without notice. These materials are provided by SAP AG and its affiliated companies (“SAP Group”) for informational purposes only, without representation or warranty of any kind, and SAP Group shall not be liable for errors or omissions with respect to the materials. The only warranties for SAP Group products and services are those that are set forth in the express warranty statements accompanying such products and services, if any. Nothing herein should be construed as constituting an additional warranty.


About This Handbook

This handbook is intended to complement the instructor-led presentation of this course and serve as a source of reference. It is not suitable for self-study.

Typographic Conventions

American English is the standard used in this handbook. The following typographic conventions are also used.

Type Style       Description

Example text     Words or characters that appear on the screen. These include field names, screen titles, and pushbuttons, as well as menu names, paths, and options. Also used for cross-references to other documentation, both internal and external.

Example text     Emphasized words or phrases in body text, and titles of graphics and tables.

EXAMPLE TEXT     Names of elements in the system. These include report names, program names, transaction codes, table names, and individual key words of a programming language when surrounded by body text, for example SELECT and INCLUDE.

Example text     Screen output. This includes file and directory names and their paths, messages, names of variables and parameters, and passages of the source text of a program.

Example text     Exact user entry. These are words and characters that you enter in the system exactly as they appear in the documentation.

<Example text>   Variable user entry. Pointed brackets indicate that you replace these words and characters with appropriate entries.

© 2013 SAP AG or an SAP affiliate company. All rights reserved.


Icons in Body Text

The following icons are used in this handbook.

Icon    Meaning
[icon]  For more information, tips, or background
[icon]  Note or further explanation of previous point
[icon]  Exception or caution
[icon]  Procedures
[icon]  Indicates that the item is displayed in the instructor's presentation.


Contents

Course Overview .......................................................... ix
  Course Goals ........................................................... ix
  Course Objectives ...................................................... ix

Unit 1: Introduction ....................................................... 1
  Evolution of HANA Landscapes .......................................... 2
  SAP HANA Basics ........................................................ 9
  SAP NetWeaver BW 7.3 .................................................. 20

Unit 2: HANA-optimized DataStore Object and InfoCube .................... 31
  HANA-Optimized DataStore Object ...................................... 32
  HANA-Optimized InfoCube ............................................... 52
  Semantically Partitioned Object ....................................... 70

Unit 3: Consuming HANA Models in BW ..................................... 75
  VirtualProvider ........................................................ 76
  TransientProvider ...................................................... 82
  CompositeProvider ...................................................... 89
  DB Connect ............................................................ 100

Unit 4: BI for BW powered by HANA ...................................... 117
  BI for BW Powered by HANA ............................................ 118

Unit 5: SAP BW Workspace ............................................... 127
  SAP BW Workspace ..................................................... 128

Unit 6: Layered Scalable Architecture without SAP HANA ................. 161
  Layered Scalable Architecture without SAP HANA ...................... 162

Unit 7: Layered Scalable Architecture plus plus (LSA++) with SAP HANA . 177
  Layered Scalable Architecture plus plus (LSA++) with SAP HANA ....... 178

Unit 8: Exercises ...................................................... 215
  Exercises ............................................................. 217

Appendix 1: Analyze Requirements for the Case Study .................... 311


Course Overview

Target Audience

This course is intended for the following audiences:

• BW Consultants
• BW Project Managers
• BW System Administrators

Course Prerequisites

Required Knowledge

• Basic BW knowledge is mandatory.

Course Goals

This course will prepare you to:

• Run BW on the in-memory database SAP HANA
• Use the new functions designed especially for BW on HANA

Course Objectives

After completing this course, you will be able to:

• Explain the BW on HANA landscape.
• Differentiate between a HANA-optimized DataStore Object and a standard BW DataStore Object.
• Differentiate between a HANA-optimized InfoCube and a standard BW InfoCube.
• Explain how HANA models are consumed in BW.
• Describe BI for BW powered by HANA.
• Work with BW Workspaces in the HANA environment.
• Explain the Layered Scalable Architecture without SAP HANA.
• Explain the new functionality of the Layered Scalable Architecture (LSA++) with SAP HANA.

SAP Software Component Information

The information in this course pertains to the following SAP Software Components and releases:


• SAP Business Information Warehouse 7.3 on SAP HANA 1.0


Unit 1: Introduction

Unit Overview

This unit introduces SAP HANA and the basics of SAP NetWeaver BW 7.3.

Unit Objectives

After completing this unit, you will be able to:

• Explain the SAP HANA roadmap
• Explain SAP HANA basics
• Give an overview of basic knowledge in SAP NetWeaver BW 7.3

Unit Contents

Lesson: Evolution of HANA Landscapes ................................. 2
Lesson: SAP HANA Basics .............................................. 9
Lesson: SAP NetWeaver BW 7.3 ........................................ 20


Lesson: Evolution of HANA Landscapes

Lesson Overview

Evolution of HANA Landscapes

Lesson Objectives

After completing this lesson, you will be able to:

• Explain the SAP HANA roadmap

Business Example

You want to use SAP HANA.

Evolution of HANA Landscapes

Figure 1: HANA Development Start

If data storage were the only consideration, the goal would simply be a database that keeps its data in main memory and can answer structured queries. However, applications spend only a small percentage of their time accessing data. In fact, many applications have processes that need to access so much data that the time spent transferring it is immense. In addition, complex handling routines have to be implemented to deal with these data volumes. In the traditional 3-tier architecture (data, application, and presentation layers), applications first read data from a database, process it in their own memory, and then either write the results back to the database or pass them to the presentation layer. Given the immense amounts of data produced by current business software, sensors, and social networks, this concept is becoming increasingly problematic. If you now have to evaluate these data volumes very quickly and deliver results on


mobile platforms, it is no longer viable. In-memory technology, however, keeps all data in memory, and modern computer systems have many computing cores, which together provide impressive performance. It is therefore natural not to move the data, but the instructions: why not execute a complex process in the data storage layer, where the data already resides, instead of in the application layer? Under the slogan "In-Memory Computing", SAP pursued an approach of transferring data-intensive processes from the application layer to the storage layer and executing them there. In this way, SAP intends, for example, to enable business processes that in the past, due to the performance limitations of many databases, could run only on weekends as batch jobs, or could not be realized at all.

Figure 2: Today's Situation – without HANA


Figure 3: Application View – HANA 1.0 SP02

SAP HANA is a completely re-imagined platform for real-time business. It transforms business by streamlining transactions, analytics, planning, and predictive and sentiment data processing on a single in-memory database, so that businesses can operate in real time.

• Innovations can also be used in a side-by-side scenario, limiting the implementation effort and risk to the existing core processes
• Side-by-side solutions can be retrofitted at a later point, as required

Figure 4: Application View – HANA 1.0 SP03


SAP NetWeaver BW powered by SAP HANA has been on the market since April 10, 2012, and we have seen huge interest from customers who want to execute their EDW strategy with BW powered by HANA. With this new release, however, has come a number of questions from customers who want to know more about the marriage of BW and HANA, such as: Are InfoCubes still required? Which modeling objects are available for mixed scenarios? What is next for BW in terms of a roadmap? The updated presentation of BW powered by HANA in this lesson will help you better understand this new release.

HANA as the Primary Database for BW and Foundation for New Applications

• In-memory database used as the primary persistence for BW
• BW is becoming a HANA-optimized EDW
• BW continues to manage the analytic metadata and the data provisioning processes
• HANA provides all the functionality of BWA and more
• High-performance foundation for new SAP applications

Figure 5: Application View – ERP and BW on HANA

• Porting ERP / Business Suite to SAP HANA
• Flexible, view-based analytics with enriched openness for customers
• Allow for reuse of side-by-side apps


Figure 6: Technical Architecture Patterns

Figure 7: SAP HANA Innovation Overview

SAP is delivering a new class of solutions on top of the SAP HANA platform that provide real-time insights on big data and state-of-the-art analysis, such as machine learning, pattern recognition, and predictive capabilities. These solutions can empower companies to transform the way they run their business, from enabling rapid sense-and-respond processes and targeted actions to even rethinking their business models. We distinguish between Applications,


powered by SAP HANA, which provide a rich set of functionality that addresses a company's existing business processes and enables innovation in processes and business models (for example, SAP Smart Meter Analytics powered by SAP HANA). Applications provide business logic, user interface, workflow, and other capabilities, and are optimally designed to run natively on the SAP HANA platform to fully utilize its in-memory computing capabilities (for example, business logic executed in the database layer).

In addition, there are accelerators and analytic content available. Accelerators are software that utilizes the power of SAP HANA to dramatically improve the performance of existing SAP Business Suite functionality in small, well-defined areas that bring immediate value to customers. The Customer Segmentation Accelerator that we are focusing on today is such an example and follows the SAP CO-PA Accelerator powered by SAP HANA. Analytic content is complementary software that complements SAP applications and supports the customer's integration, implementation, and configuration activities. It provides pre-built reporting, dashboards, and data models that run natively on the SAP HANA platform (for example, Operational Reporting powered by SAP HANA).

In the near future, SAP plans to provide unrestricted SAP HANA powered applications, accelerators, and analytic content for lines of business that improve enterprise planning and performance management, business planning and consolidation, as well as customer planning and intelligence. In addition, several industry applications, especially for Banking, Retail, Consumer Products, and Utilities, are planned to bring real-time processing of big data to key industry-specific processes. To ensure quicker time to value, some of these solutions, including the SAP Financial and Controlling Accelerator, are also delivered as rapid-deployment solutions. The SAP Rapid Deployment solutions provide maximum predictability with a fixed cost and scope.


Lesson Summary

You should now be able to:

• Explain the SAP HANA roadmap


Lesson: SAP HANA Basics

Lesson Overview

SAP HANA Basics

Lesson Objectives

After completing this lesson, you will be able to:

• Explain SAP HANA basics

Business Example

You want to get to know the SAP HANA basics.

Figure 8: In-Memory Computing


Figure 9: HANA Performance

This is tremendous: 12,000 times faster on average across 29 customers in different industries is real time! The figure shows the depth and breadth of SAP HANA. You will probably be able to identify your industry and relate to the gain HANA provides.

Figure 10: SAP HANA Data

In simplified terms, 64-bit processors are designed such that their ALU can process 64 bits (8 bytes) simultaneously, that is, within one cycle. This includes the external and internal design of the data and address buses as well as the width of the register set. Furthermore, the instruction set is usually designed consistently for 64 bits, unless backward-compatible legacy modes (see the x86


architecture) are present. The same applies to the standard addressing modes and to the bit width of the arithmetic logic unit (ALU), which in principle may differ from that of the address unit (as with most 64-bit CPUs).

To accelerate data processing further, manufacturers have come up with various acceleration techniques: these range from reducing write operations by placing data on the outer tracks of the disk, to preprocessing data on the hard drive itself, to large caches that are designed to reduce the actual number of accesses to the hard drives. These techniques have one thing in common: in essence, they assume that data is stored on hard drives, and they try to speed up access to it.

Main memory is now available not only in much larger capacities than before, it is also affordable, and thanks to modern 64-bit operating systems it is usable in the first place. With 32-bit addressing, the address space is limited to four gigabytes of memory, while with 64-bit addressing a system can address more memory than currently fits into a server.

However, having all data in main memory would be useless if the CPU did not have enough power to process it, that is, if the processing speed of the CPU were so slow that reading from a hard drive would be fast enough and the data would not need to be in memory at all. Here, too, there has been a great change in recent years, from single complex CPUs to multi-core processor units. These processors have up to ten cores each, and two, four, or eight of them can be built into a server. So with every tick of a computer's clock, up to 80 data-intensive computing cores can wait for new data or instructions. To exploit this computing power, it is necessary to write specific software that can break complex tasks into many small process strands (threads), which can then utilize the large number of parallel cores. For optimal processing, the data must also be provided quickly enough, in optimized data structures.
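The decomposition into parallel strands of work described above can be illustrated with a small sketch (plain Python with invented data; pure-Python threads illustrate only the decomposition, and real speedups require workers that can occupy separate cores, for example processes or native code):

```python
# Conceptual sketch: breaking one large aggregation into many small,
# independent tasks that a pool of workers can execute in parallel.
from concurrent.futures import ThreadPoolExecutor

data = list(range(1_000_000))
chunk_size = 100_000

# Split the work into independent strands ("threads" in the text's sense).
chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

with ThreadPoolExecutor(max_workers=8) as pool:
    partial_sums = list(pool.map(sum, chunks))

# Combining the partial results gives the same answer as a serial pass.
assert sum(partial_sums) == sum(data) == 499999500000
```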

Figure 11: Hardware Rethink


Techniques that deal with accelerating reads from hard drives thus become irrelevant, while others become relevant that deal with the rapid exchange of information between main memory and the CPU registers. In the words of the database expert Jim Gray, the technology has moved a step closer to the CPU: "RAM is disk, disk is tape, and tape is dead".

Figure 12: Scale – SW Side Distribute Across Cores

Figure 13: Fast – SW Side Optimization for Memory

Assume that we want to aggregate the sum of all sales amounts in the example in the figure using a row-based table. Data transfer from main memory into the CPU cache always happens in blocks of fixed size, called "cache lines" (for example, 64 bytes). With row-based data organization it may happen that each cache line contains only


one “sales” value (stored using 4 bytes) while the remaining bytes are used for the other fields of the data record. For each value needed for the aggregation a new access to main memory would be required. This shows that with row based data organization the operation will be slowed down by cache misses that cause the CPU to wait until the required data is available. With column-based storage, all sales values are stored in contiguous memory, so the cache line would contain 16 values which are all needed for the operation. In addition, the fact that columns are stored in contiguous memory allows memory controllers to use data prefetching to further minimize the number of cache misses.
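The difference between the two layouts can be illustrated with a small sketch (a conceptual illustration in Python, not HANA code; the table and field names are invented for the example):

```python
# Conceptual sketch: the same sales data organized row-wise and column-wise.
# In a real column store the column is one contiguous memory block, which is
# what makes scans and aggregations cache-friendly.

# Row-based organization: each record keeps all fields together.
rows = [
    {"customer": "A", "product": "X", "sales": 100},
    {"customer": "B", "product": "Y", "sales": 250},
    {"customer": "C", "product": "X", "sales": 175},
]

# Column-based organization: each field is stored as its own array.
columns = {
    "customer": ["A", "B", "C"],
    "product":  ["X", "Y", "X"],
    "sales":    [100, 250, 175],
}

# Aggregating "sales" row-wise touches every record, including the fields
# that the query does not need.
total_row_based = sum(r["sales"] for r in rows)

# Aggregating column-wise scans a single contiguous array only.
total_column_based = sum(columns["sales"])

assert total_row_based == total_column_based == 525
```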

Figure 14: Column Store Highlights

The column store uses efficient compression algorithms that help to keep all relevant application data in memory. Write operations on this compressed data would be costly, as they would require reorganizing the storage structure. Therefore, write operations in the column store do not directly modify compressed data. All changes go into a separate area called the delta storage. The delta storage exists only in main memory; only delta log entries are written to the persistence layer when delta entries are inserted.

Delta merge operation:

• The delta merge operation is executed on table level.
• Its purpose is to move changes collected in the write-optimized delta storage into the compressed and read-optimized main storage.
• Read operations always have to read from both main storage and delta storage and merge the results.
• The delta merge operation is decoupled from the execution of the transaction that performs the changes. It happens asynchronously at a later point in time.


Delta merge is triggered by one of the following events:

• The number of lines in the delta storage for the table exceeds a specified number.
• The memory consumption of the delta storage exceeds a specified limit.
• The merge is triggered explicitly by a client using SQL.
• The delta log for a columnar table exceeds the defined limit. As the delta log is truncated only during a merge operation, a merge operation needs to be performed in this case.

For more information, see Triggering a Delta Merge (When Using the SAP HANA Database): http://help.sap.com/saphelp_nw73/helpdata/en/62/9d41934c0744ef9a8f21fa4c70baa3/frameset.htm

Figure 15: Modeling for HANA 1.0 – Using SAP HANA Studio

Attribute View

An attribute view is used to model an entity based on the relationships between attribute data contained in multiple source tables. For example, customer ID is the attribute data that describes measures (that is, who purchased a product). However, customer ID has much more depth to it when joined with other attribute data that further describes the customer (customer address, customer relationship, customer status, customer hierarchy, and so on).

14

© 2013 SAP AG or an SAP affiliate company. All rights reserved.

2013

BW362

Lesson: SAP HANA Basics

You create an attribute view to locate the attribute data and to define the relationships between the various tables, to model how customer attribute data, for example, will be used to address business needs. You can model the following elements within an attribute view:

• Simple attributes
• Calculated attributes
• Hierarchies

Analytic View

An analytic view is used to model data that includes measures. For example, an operational data mart representing sales order history would include measures for quantity, price, and so on. The data foundation of an analytic view can contain multiple tables. However, measures that are selected for inclusion in an analytic view must originate from only one of these tables (for business requirements that include measures sourced from multiple source tables, see calculation views).

Analytic views can be simply a combination of tables that contain both attribute data and measure data, for example, a report requiring the following:

Customer_ID
Order_Number
Product_ID
Quantity_Ordered
Quantity_Shipped

Optionally, attribute views can also be included in the analytic view definition. In this way, you can achieve additional depth of attribute data. The analytic view inherits the definitions of any attribute views that are included in the definition. For example:

Customer_ID / Customer_Name
Order_Number
Product_ID / Product_Name / Product_Hierarchy
Quantity_Ordered
Quantity_Shipped

You can model the following elements within an analytic view:

• Simple attributes
• Calculated attributes
• Private attributes
• Simple measures
• Calculated measures
• Restricted measures


Figure 16: Modeling for HANA 1.0 – Using SAP HANA Studio

Calculation View

A calculation view is used to define more advanced slices of the data in the SAP HANA database. Calculation views can be simple and mirror the functionality found in both attribute views and analytic views. However, they are typically used when the business use case requires advanced logic that is not covered by the previous types of information views. For example, calculation views can have layers of calculation logic, can include measures sourced from multiple source tables, can include advanced SQL logic, and so on. The data foundation of a calculation view can include any combination of tables, column views, attribute views, and analytic views. You can create joins, unions, projections, and aggregation levels on the sources. You can model the following elements within a calculation view:

• Simple attributes
• Calculated attributes
• Private attributes
• Simple measures
• Calculated measures
• Restricted measures
• Counters
• Hierarchies (created outside of the attribute view)


Figure 17: SQL Script – New Programming Model

• Functional extension – allows the definition of (side-effect free) functions which can be used to express and encapsulate complex data flows
• Data type extension – allows the definition of types without corresponding tables
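By way of analogy, a side-effect-free function encapsulating a small data flow might look like this (conceptual Python with invented names; actual SQLScript syntax differs):

```python
# Conceptual sketch: a side-effect-free function that encapsulates a data
# flow (filter, then aggregate), in the spirit of the functional extension.
# It reads its inputs and returns a result without modifying any state.

def sales_by_product(orders, min_quantity):
    """Aggregate order quantities per product, ignoring small orders."""
    totals = {}
    for order in orders:
        if order["quantity"] >= min_quantity:
            totals[order["product"]] = (
                totals.get(order["product"], 0) + order["quantity"]
            )
    return totals

orders = [
    {"product": "X", "quantity": 5},
    {"product": "Y", "quantity": 1},
    {"product": "X", "quantity": 7},
]

# Because the function is free of side effects, an engine could reorder or
# parallelize calls to it without changing the result.
assert sales_by_product(orders, min_quantity=2) == {"X": 12}
```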

Figure 18: Column Views in SAP HANA – Run-Time and Design-Time Objects


Figure 19: How to Build and Consume Content for SAP HANA Appliance


Lesson Summary

You should now be able to:

• Explain SAP HANA basics


Lesson: SAP NetWeaver BW 7.3

Lesson Overview

SAP NetWeaver BW 7.3

Lesson Objectives

After completing this lesson, you will be able to:

• Give an overview of basic knowledge in SAP NetWeaver BW 7.3

Business Example

Because of a planned change to SAP HANA, you need to know some SAP BW basics.

SAP NetWeaver BW 7.3

Figure 20: SAP NetWeaver Business Warehouse – EDW Model and Dataflow Definition

Define a central EDW model that satisfies the needs of decision makers across all areas of a company and acts as a single point of truth for any kind of information.

• Dataflow Modeler


Define ETL processes that populate the persistency layers of the EDW model with cleansed, consolidated, consistent, and harmonized data at an adequate periodicity, that is, periodically based on batch, near-real-time, or real-time processes.

• Transformations / DTP
• Source system handling
• Real-time Data Acquisition (RDA)

Figure 21: SAP NetWeaver Business Warehouse – Scheduling and Monitoring the Dataflow

Organize, schedule, and monitor the dataflow towards and within the EDW, and provide tools to repair or redo unexpected failures during load processes.

• External ETL processes
• Metadata management
• Process chains
• Admin Cockpit
• Generating repair chains
• Checking error DTPs

Use the software to design and deploy projects quickly, and to address data quality and integration with one solution. Your users can manage data across critical business processes and collaborate between business and IT, supporting a


single project or a larger data governance initiative. For a more complete view of business, the software delivers native support for free-form data, structured data, and unstructured text data. With SAP BusinessObjects Data Services, you can assess and improve data quality with a comprehensive solution that supports data within any industry, locality, or data domain, including customer, product, supplier, and material data. Virtually all data – including free-form, structured, and unstructured data – can be processed and cleansed. Business user interfaces, designed to be intuitive, allow you to standardize, correct, and match data to reduce duplicate information and to identify relationships. Extensive global support comprises data quality coverage in more than 230 countries – including an exclusive partnership with China Post.

Figure 22: SAP NetWeaver Business Warehouse – EDW Persistency and Performance Management

Provide data management capabilities to shape the data persistency according to the specific characteristics of the data and information partitions, such as current, frequently queried data; volatile data that is likely to be updated; old, read-only data with nearly no demand for reporting; and data that has to be hidden but kept for legal reasons. Provide a technology for high-performance OLAP processing on top of all parts of the data, resulting from adequate modeling features (like the star schema), particular persistency layers in the model (granular versus aggregated data and information), and sophisticated storage paradigms.


Figure 23: Database Shared Library (DBSL) – Platform Concept Supporting Standard RDBMS

The database-dependent part of the SAP database interface can be found in its own library that is dynamically linked to the SAP kernel. This database library contains the Database Shared Library (DBSL), as well as libraries belonging to the corresponding database manufacturer. These are either statically or dynamically linked to the database library.

Missing analytical capabilities on the DB level lead to massive AppServer/DBServer traffic:

• DataStore Object (DSO) (e.g. activation)
• Integrated Planning (e.g. disaggregation)

Distributed data management (RDBMS vs. BWA vs. NLS vs. Archive):

• Missing data aging strategies in the RDBMS

Nature of the RDBMS – tuple-based data storage, indexing necessary for performance:

• Read/load performance on the RDBMS (e.g. Extended SAP Star Schema too complex)


Other Examples:

• Exception Aggregation (e.g. Distinct Count only available as a BWA Calculation Engine feature)

Figure 24: SAP NetWeaver BW Accelerator 7.20

Figure 25: Application vs. Database Server – Technical Overview

With large data volumes, reading information becomes a bottleneck


Next-generation applications will delegate data-intensive operations to the database. The runtime environment executes complex processes in memory. In-memory computing returns results by pointing applications to a location in shared memory.
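The benefit of in-memory columnar processing can be illustrated with a small sketch (plain Python, not SAP code; the sample data is invented): aggregating one key figure only has to touch that one column in a columnar layout, which is the core idea behind column stores such as SAP HANA.

```python
# Illustrative sketch: the same sales data stored row-wise and column-wise.
rows = [
    {"customer": "C1", "product": "P1", "revenue": 100},
    {"customer": "C2", "product": "P1", "revenue": 250},
    {"customer": "C1", "product": "P2", "revenue": 175},
]

# Column-wise layout: one contiguous list per column.
columns = {
    "customer": [r["customer"] for r in rows],
    "product":  [r["product"] for r in rows],
    "revenue":  [r["revenue"] for r in rows],
}

# Row store: every row (all columns) is visited to sum one key figure.
total_row_store = sum(r["revenue"] for r in rows)

# Column store: only the "revenue" column is scanned.
total_column_store = sum(columns["revenue"])

assert total_row_store == total_column_store == 525
```

Both layouts return the same result; the difference is how much data has to be touched to produce it.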

Figure 26: SAP NetWeaver BW, Powered by SAP HANA

Enhanced built-in analytical capabilities:

• Full database functionality
• Full BWA functionality

Advanced features:

• HANA-optimized InfoCube
• HANA-optimized DataStore Objects
• Publishing SAP HANA models into BW


Figure 27: System Upgrade: Necessary Steps

Sizing

Starting with version 7.30 SP5, you can run SAP Business Information Warehouse (SAP BW) on SAP HANA as the database platform. This enables you to leverage the In-Memory capabilities of HANA and the SAP-HANA-optimized BW objects. Note that for a stand-alone version of SAP HANA (i.e. HANA without BW), separate sizing information is available in note 1514966. If you want to migrate an existing SAP NetWeaver BW system from any database platform to HANA, we strongly recommend using the new ABAP sizing report for SAP NetWeaver BW described in SAP note 1736976, which provides much better accuracy of sizing results:

• Handles source database compression automatically
• Uses table-type-specific compression factors
• Considers sizing effects of the concept of non-active data (version 1.3)
• Produces much more detailed results than the attached database-specific scripts, facilitating the selection of a suitable hardware configuration

The new report /SDF/HANA_BW_SIZING is described in note 1736976, where you can also find additional PDF files.
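The kind of estimate such a sizing report produces can be sketched as follows. This is a hypothetical illustration only: the table names, compression factors, and the non-active discount below are invented; the real factors come from the report described in SAP note 1736976.

```python
# Hypothetical sizing sketch: estimate HANA memory from source DB sizes
# using per-table-type compression factors and a non-active-data discount.
# All numbers are invented for illustration.

COMPRESSION_FACTOR = {"fact": 5.0, "dso": 4.0, "psa": 3.0}

tables = [
    # (name, table type, size on source DB in GB, counts as non-active?)
    ("/BIC/FSALES",    "fact", 500, False),
    ("/BIC/AORDERS00", "dso",  300, False),
    ("/BIC/B0000123",  "psa",  800, True),   # PSA: candidate for "cool" data
]

def estimated_hana_memory_gb(tables):
    total = 0.0
    for name, ttype, size_gb, non_active in tables:
        compressed = size_gb / COMPRESSION_FACTOR[ttype]
        if non_active:
            # Non-active data enters the sizing with a much lower factor,
            # because it mostly stays on disk.
            compressed *= 0.1
        total += compressed
    return total

total = estimated_hana_memory_gb(tables)
# 500/5 + 300/4 + (800/3) * 0.1 = 100 + 75 + 26.7 ≈ 201.7 GB
```

The point of the sketch is only the structure of the calculation: compressed size per table type, discounted further for data classified as non-active.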


Figure 28: Traditional Approach

Figure 29: New Data Warehousing Approach Including HANA Technology

• Reduced data replication
• Higher compression and a significantly lower persistent data volume
• Fast maintenance, including prototyping and testing


• Reduced HW footprint
• Reduced TCO: fast implementation and simple administration (DB)
• Improved load performance for DSOs
• Excellent query performance
• Accelerated HANA planning
• Migration without new implementation – no interruption of current scenarios


Lesson Summary
You should now be able to:
• Give an overview of basic knowledge of SAP NetWeaver BW 7.3


Unit Summary
You should now be able to:
• Explain the SAP HANA Roadmap
• Explain SAP HANA Basics
• Give an overview of basic knowledge of SAP NetWeaver BW 7.3


Unit 2: HANA-Optimized DataStore Object and InfoCube

Unit Overview
This unit covers the HANA-optimized DataStore Object and InfoCube.

Unit Objectives
After completing this unit, you will be able to:

• Explain the motivation for HANA-optimized DataStore Objects
• Identify the differences in architecture and structure compared with DataStore Objects stored in a relational database
• List the different conversion options
• Elaborate on the typical business scenarios in which HANA-optimized DataStore Objects can be used
• Understand the motivation for SAP HANA-optimized InfoCubes
• Identify the differences in architecture and structure compared with InfoCubes stored in a relational database
• Describe the conversion process
• Elaborate on the typical business scenarios in which SAP HANA-optimized InfoCubes can be used
• Describe the use cases of an SPO
• Create an SPO and integrate it into the data flow

Unit Contents
Lesson: HANA-Optimized DataStore Object
Lesson: HANA-Optimized InfoCube
Lesson: Semantically Partitioned Object


Lesson: HANA-Optimized DataStore Object

Lesson Overview
HANA-Optimized DataStore Object

Lesson Objectives
After completing this lesson, you will be able to:

• Explain the motivation for HANA-optimized DataStore Objects
• Identify the differences in architecture and structure compared with DataStore Objects stored in a relational database
• List the different conversion options
• Elaborate on the typical business scenarios in which HANA-optimized DataStore Objects can be used

Business Example
Your company has built a large Enterprise Data Warehouse system based on SAP BW. With the introduction of SAP HANA, you decided to benefit the most from its In-Memory capabilities. After analysing the dataflow in your SAP BW system, you identified the DataStore Objects as the first objects to be explored. You plan to migrate some of the existing DataStore Objects in your dataflow to SAP HANA-optimized ones, as well as to use SAP HANA-optimized DataStore Objects in newly created dataflows.


HANA-Optimized DataStore Object

Figure 30: Motivation

The HANA-optimized DataStore Objects leverage SAP HANA technology to:

• Reduce the amount of physical storage
• Accelerate data loads
• Allow faster remodeling of structural changes

No adaptation of processes, MultiProviders, or queries is required.


Figure 31: General Description

The SAP HANA-optimized DataStore object is a standard DataStore object that is optimized for use with the SAP HANA database. By using SAP HANA-optimized DataStore objects, you can achieve significant performance gains when activating requests. The change log of the SAP HANA-optimized DataStore object is displayed as a table on the BW system. However, this table does not save any data, which helps to save memory space. When the change log is accessed, its content is calculated using a calculation view, which reads the data from the history table of the temporal table of active data in the SAP HANA database. Note: If you want to view the change log data in the ABAP Dictionary, a warning appears explaining that the table does not exist in the database. This is due to optimization – the table in the database is replaced by a calculation view.


Figure 32: DataStore Objects in SAP NetWeaver BW 7.30 – Creation of Consistent Delta Information

Figure 33: SAP BW – DataStore Objects – Main Principles


Figure 34: HANA-Optimized DataStore Objects – Overview and Design

Figure 35: SAP HANA-optimized DataStore Objects – Mapping Between Application Server and HANA DB

The table for active data is a temporal table that consists of three components: a history table, a main table, and a delta table. Data activation is started on the BW system and executed in SAP HANA. No data is transferred to the application server during activation.
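The principle behind activation can be sketched in a few lines of plain Python: compare the records of a new request against the active table and derive before/after images for the change log. This is a deliberately simplified illustration (the key values, data parts, and record-mode flags are invented), not the actual in-database implementation; in a HANA-optimized DSO this comparison runs inside SAP HANA.

```python
# Simplified sketch of DataStore object activation logic.
active = {("4711", "10"): {"quantity": 5}}          # key -> data part

new_request = {("4711", "10"): {"quantity": 8},     # changed record
               ("4712", "20"): {"quantity": 3}}     # new record

change_log = []
for key, data in new_request.items():
    if key in active:
        # Before image: old values with reversed sign, so that a delta
        # consumer can subtract the old state before adding the new one.
        before = {k: -v for k, v in active[key].items()}
        change_log.append((key, "X", before))        # "X" = before image
    change_log.append((key, "", data))               # after image / new image
    active[key] = data                               # update the active data

assert active[("4711", "10")]["quantity"] == 8
assert len(change_log) == 3   # one before image + two after/new images
```

The change log thus carries exactly the consistent delta information that downstream InfoProviders need, while the active table always reflects the latest state.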


On the DataStore object editing screen, you can choose the Unique Data Records property. However, this does not improve system performance when using an SAP HANA-optimized DataStore object. The uniqueness of the data is not checked, meaning that data consistency cannot be guaranteed.

Figure 36: HANA-optimized DataStore Objects – Performance Numbers (Lab Results, Record Size ~1KB, Runtime in Seconds)

Figure 37: Standard DataStore Object (NOT HANA-optimized) Active Table and Change Log Table


Figure 38: SAP HANA-Optimized DataStore Object – DB Status and the Additional Field

Differences from a Standard DataStore Object

• The SAP HANA-optimized DataStore object contains the additional field IMO__INT_KEY in the active data table. This field is required for the SAP HANA optimization and is hidden in queries.
• A before/after image is still written during activation, even if no changes are made to the active data.
• It cannot be used as a source of update flows in a 3.x data flow. More information: Data Flow in Business Warehouse
• The complete history of a request is not saved. Only the start status and end status (relating to an activation) are saved.
• Since real-time data acquisition (RDA) usually involves small data volumes for each activation step, SAP HANA optimization does not produce any advantages. The use of SAP HANA-optimized DataStore objects for RDA is therefore not supported.


Figure 39: SAP HANA-Optimized DataStore Object – Active Table in the HANA Studio

Figure 40: SAP HANA-Optimized DataStore Object – Change Log in the HANA Studio


Figure 41: HANA-Optimized DataStore Object – Conversion

If you are using an SAP HANA database and want to benefit from it when loading data into DataStore objects, we recommend converting existing standard DataStore objects. After migration to the SAP HANA database, normal standard DataStore objects are contained in the SAP HANA database's column-based store.

Prerequisites

DataStore objects can only be converted to SAP HANA-optimized DataStore objects if:

• They are not part of a HybridProvider.
• They are not integrated into a 3.x dataflow. This basically means that the dataflow from the DataStore object to other InfoProviders must be a new dataflow. The dataflow to the DataStore object could also be a 3.x dataflow; we recommend migrating to the new dataflow, however. More information: Migrating a Data Flow
• They are not supplied with data using real-time data acquisition (RDA).


Procedure

1. There are two ways of calling the conversion transaction:
   • You are in the DataStore object editing screen. Choose Goto – Conversion to SAP HANA-Optimized. The system displays the DataStore object to be edited.
   • Call transaction RSMIGRHANADB directly.
2. You can now use input help to select DataStore objects for conversion.
3. You have the following options:
   • Without change log: In this case, only the table of active data is converted. The change log is empty after conversion is finished. You should therefore ensure that all delta requests are updated before conversion takes place. This conversion option is faster, but has the disadvantage that old requests cannot be rolled back.
   • Reconstruct change log: A new change log is created. This option takes more time. Note: If you have already archived data from the DataStore object, you can only restore requests that do not belong to the archived area. This means that the reconstructed change log might contain fewer requests than the original one.
   • Package size for data transfer: If the conversion breaks down, a package size should be defined with which the data can be processed. If no package size is defined, the conversion is performed in one transaction.
4. You can specify whether the log is displayed after conversion.
5. Choose Execute. The DataStore objects are converted. While the conversion is running, a lock is set so that no changes (for loading data, for example) can be made to the DataStore object.


Figure 42: HANA-Optimized DataStore Objects – Benefits

Figure 43: Performance Improvements – Overall DataFlow, Process Chain


Figure 44: Limitations and Features

For certain requirements, the role of the classical reporting layer (query-optimized InfoCubes) might diminish: DataStore Objects might be able to provide sufficient query performance to omit InfoCubes. Nevertheless, the semantic separation of data and query structure via MultiProviders will not become obsolete. In cases where the DataStore Object activation time is the bottleneck in the overall staging process, the SAP HANA-optimized DataStore Objects will accelerate the staging processing. The reduced DB space consumption of the SAP HANA-optimized DataStore Object helps to control data growth.

Figure 45: Partitioning of Write-Optimized DSOs


Better performance of write-optimized DataStore Objects:

• Write-optimized DSOs are now partitioned by request ID
• Partitioning improves merge performance significantly, especially when the delta index is merged for a large write-optimized DSO
• With partitioning, only the relevant (last) partition (with changed/new records) is merged
• In addition, performance for read and delete is improved, because with partition pruning only a subset of the data is accessed
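The mechanics described above can be sketched in plain Python (not SAP code): each load request goes into the newest partition, a fresh partition is opened once a row threshold is exceeded, and a read that knows the request ID only touches one partition. The threshold is lowered to 3 here purely for illustration; real write-optimized DSOs use thresholds in the millions of rows.

```python
# Sketch of request-based partitioning with partition pruning.
THRESHOLD = 3  # illustrative; real thresholds are millions of rows

partitions = [[]]            # list of partitions, each a list of rows
request_to_partition = {}    # request ID -> partition index (pruning map)

def load_request(request_id, rows):
    if len(partitions[-1]) >= THRESHOLD:
        partitions.append([])            # open a new partition
    request_to_partition[request_id] = len(partitions) - 1
    partitions[-1].extend((request_id, row) for row in rows)

def read_request(request_id):
    # Partition pruning: only the one relevant partition is accessed.
    part = partitions[request_to_partition[request_id]]
    return [row for rid, row in part if rid == request_id]

load_request("REQ1", ["a", "b", "c"])
load_request("REQ2", ["d", "e"])

assert len(partitions) == 2              # REQ2 opened a second partition
assert read_request("REQ2") == ["d", "e"]
```

Because only the latest partition receives new data and only the addressed partition is read, older partitions can stay displaced from memory.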

Figure 46: State of Data

Active + Non-Active + NLS

HOT
• Data is read/written frequently
• In memory; additional memory required for dynamic objects (merge, intermediate results, etc.)

WARM
• Mostly read access – lookups, transformations, etc.
• In memory; no additional memory required for dynamic objects

COOL
• Infrequent access – no need to keep in memory all the time
• On disk; loaded to memory only on demand; a good candidate for displacement if memory runs short


COLD
• Infrequent access, restricted to [BW] NLS capabilities – not stored in the HANA persistence
• In the NLS subsystem only – data volume not considered in HANA sizing

Hint: The non-active data concept is reflected in the latest version (V1.3) of the sizing report:
• Optimized usage of HANA memory resources
• Modeling of LSA concepts (Corporate Memory, …) by specifying objects that are supposed to be handled as "warm" or "cool"
• See note 1736976 for more details about the sizing report

Tables/partitions in SAP HANA can be marked as "non-active" ("cool"). Such tables/partitions are:

• Loaded into RAM only when accessed
• Loaded into RAM column-wise in case of read access, and processed as usual (same speed and functionality)
• Loaded into RAM for the merge process (if new data was written and the delta reaches its limit)
• Displaced from RAM with highest priority in case of a RAM shortage (but only then), or when a cleanup is actively triggered

BW automatically marks all PSA tables and write-optimized DSOs (except their latest partitions) as "cool", so no extra maintenance or tuning is necessary. All other BW objects are treated as before. In addition, customers can override the handling of write-optimized DataStore objects (e.g. those used to build a Corporate Memory layer within an LSA). Customers need to actively define their "warm" and "cool" data objects (both in the sizing report and in the BW metadata). See note 1767880 for more details.

In detail – the non-active data concept for BW on SAP HANA

Large BW systems contain large amounts of data that are no longer or only rarely actively used but that should remain in the system (historical data, data kept for legal reasons, and so on). This data is called non-active data in the following.


To optimize utilization of the main memory, you want to ensure the following for non-active data:

• If the main memory has sufficient capacity, non-active data is available directly in the main memory.
• When there are bottlenecks in the main memory, it is preferably non-active data that is removed from the main memory.
• If you access non-active data that is not available in the main memory, the system only loads the smallest possible amount of data into the main memory (basically, the columns of the relevant partition).

This ensures optimal access to data that is relevant for reporting and processes, because this data is almost always available in the main memory. Also, you can reduce the main memory in the sizing if the share of non-active data is very large.

HOT data: Data that is accessed very often, for example, for reporting or for processes in warehouse management (queries for InfoCubes, DataStore objects, activation of data in a standard DataStore object).

WARM data: Only certain columns of the object are accessed. This means that there are columns that are no longer or rarely accessed (mostly lookups with transformation rule type, or specific columns in routines).

COOL data: These columns are no longer or rarely accessed (write-optimized DataStore objects of the corporate memory, or Persistent Staging Areas and write-optimized DataStore objects of the acquisition layer).

COLD data: Data of a BW system that is no longer required, and that can be or was archived or saved using Nearline Storage.

Optimization of the data stores with regard to non-active data

In a BW system, most data with the classification COOL and WARM is in the Persistent Staging Areas of the DataSources and in write-optimized DataStore objects of the acquisition layer and the corporate memory. These objects often contain data that is no longer used; however, new data is loaded into these objects on a daily basis. As a result, the period of time since the last usage is normally not longer than 24 hours. Despite this, it should rather be such objects that are removed from memory than objects that, for example, are used for reporting. You should also avoid that data that is no longer used is loaded into the main memory when loading new data. How Persistent Staging Areas and write-optimized DataStore objects have been optimized with regard to non-active data is described in the following. First, it is explained how displacement works on the SAP HANA database (DB) in general. Then, the request partitioning of Persistent Staging Areas and write-optimized DataStore objects, and the early unload setting are explained.

Displacement:


Displacement of columns of a partition is carried out if a bottleneck occurs in the main memory, that is, if the usage of main memory by a database process exceeds a threshold value. The SAP HANA DB uses a least-recently-used concept for displacing table columns: first of all, the columns of a table partition whose data has not been accessed for the longest period of time are removed from the main memory.

Partitioning of the Persistent Staging Area and of the write-optimized DataStore objects based on requests:

The Persistent Staging Areas of DataSources and the write-optimized DataStore objects are created on the database in partitions by request, with the restriction that duplicate data records have to be allowed for write-optimized DataStore objects (a setting in the maintenance dialog). Partitioning by request means that a request is completely written to one partition. If the threshold value of five million lines (20 million for write-optimized DataStore objects) is exceeded, a new partition is created, and the data of the next request is written to this new partition. As a result, normally only the data from the newest partitions is accessed by data warehousing processes (loading data and reading data), because these processes always specify precisely the partition ID for table operations, and as a consequence, no data from other partitions has to be accessed.

CAUTION: For write-optimized DataStore objects that are connected to 3.x data flows (inbound or outbound update rules), the system always loads all the data into the main memory, because no partition ID is used to access the data. Write-optimized DataStore objects with a semantic key, for which duplicate data records are not allowed, are not partitioned by request. As a result, all data is always loaded into the main memory when accessing the object.

Setting EARLY UNLOAD of a table in the SAP HANA DB:

For some BW objects, you can make the EARLY UNLOAD setting.
If a bottleneck occurs in the main memory, the data of an object that is flagged in this way is prioritized for displacement from the main memory. For these objects, the time that has passed since the last usage is by default multiplied by 27. As a consequence, these objects are displaced more quickly than objects that have not been accessed for a long time but that do not have this setting.

Implementation of non-active data in BW:

As of Support Package 08 and SAP HANA Support Package 05, the non-active data concept is introduced in the BW system through the following settings, which are implemented in the system automatically. Persistent Staging Area tables and write-optimized DataStore objects are flagged as EARLY UNLOAD by default. This means that these objects are displaced from memory before other BW objects (such as InfoCubes or standard DataStore objects). Persistent Staging Areas and write-optimized DataStore objects are also partitioned by request: partitions that have once been displaced are no longer loaded, because new data is loaded only into the newest partition, and older data is normally no longer accessed.
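The displacement priority can be sketched as a weighted least-recently-used ordering (plain Python, not SAP code). The weighting factor of 27 is taken from the text above; the table names, idle times, and sizes are invented for illustration.

```python
# Sketch of least-recently-used displacement with an "early unload" weight:
# flagged objects have their idle time multiplied, so they are displaced
# first when memory runs short. All sample values are invented.

EARLY_UNLOAD_FACTOR = 27

tables = [
    # (name, seconds since last access, early unload flag, size in GB)
    ("/BIC/FSALES",   600, False, 40),   # InfoCube fact table
    ("/BIC/B0000123",  60, True,  30),   # PSA table, flagged EARLY UNLOAD
    ("/BIC/AZDSO00",  900, False, 20),   # standard DSO active table
]

def displacement_order(tables):
    # Higher effective idle time -> displaced earlier.
    def score(t):
        name, idle, early, size = t
        return idle * (EARLY_UNLOAD_FACTOR if early else 1)
    return [t[0] for t in sorted(tables, key=score, reverse=True)]

order = displacement_order(tables)
# The PSA is displaced first despite being accessed most recently,
# because 60 * 27 = 1620 exceeds both 900 and 600.
```

This is why PSA tables and write-optimized DSOs leave memory before reporting objects, even though new data is written to them daily.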


However, if old data should be accessed, this data is loaded into the main memory. Typically, these are load processes that are used for setting up new target objects, or data that has to be reloaded. For such processes, it is acceptable that the data must first be loaded into the main memory. If only certain columns are accessed in the displaced objects, only these columns are loaded into the main memory; the other columns remain on disk (for example, lookups in transformations that select only certain columns).

Due to this concept, main memory resource management is improved automatically. This affects sizing: if Persistent Staging Areas and write-optimized DataStore objects contain large amounts of non-active data, this data remains on disk, and the main memory can be chosen correspondingly smaller.

Caution: The BW system is optimized in such a way that Persistent Staging Areas and write-optimized DataStore objects are accessed only with the respective partition ID, so that the complete table does not have to be loaded into the main memory. Avoid in any case accessing such tables using regular manual accesses (SQL editor or transaction SE16), or using your own source code. This would load the complete table into the main memory.

EARLY UNLOAD for other BW objects

You can also flag InfoCubes and other DataStore objects as EARLY UNLOAD. However, we generally do not recommend this. Since these objects are not partitioned by request, the complete object is loaded into the main memory, or it is never displaced because it is accessed too often (for example, by a daily loading process). This can even lead to counterproductive behavior if such objects are displaced due to the EARLY UNLOAD flag, but are then reloaded into the main memory a short time later for a loading process or a query. As a consequence, you should only use this setting in a very restricted manner for such objects. (For example, if an InfoCube exists for every year, but you only report on the current year, you could make this setting for the InfoCubes from past years, because they are no longer connected to loading processes, and no reporting is carried out on them.)

Effects on hardware sizing of the SAP HANA DB

This concept improves main memory resource management, which has positive effects on hardware sizing for a large amount of non-active data. For more information, see SAP Note 1736976.

CAUTION: You have to fill in the sizing report with input that matches reality. If data is classified incorrectly and is used frequently despite this, significant problems may occur in main memory management if continuously insufficient amounts of main memory are available.

FAQ:


How can I set or reset the EARLY UNLOAD behavior for BW objects?
Persistent Staging Areas and write-optimized DataStore objects are set to EARLY UNLOAD by default. You can use transaction RSHDBMON to reset this behavior, or to set it for other BW objects.

Is the EARLY UNLOAD behavior also provided for other tables (such as Z tables)?
No; other database tables are not supported.

Are there restrictions for tables that are flagged with EARLY UNLOAD?
No; there are no general restrictions. When accessing data of these objects, the data may first have to be loaded from disk into the main memory.

How does this concept affect main memory sizing for the SAP HANA DB?
Persistent Staging Areas and write-optimized DataStore objects that are COOL or WARM in terms of usage enter the sizing with a correspondingly lower factor. See sizing note 1736976.

Is this concept valid for 3.x data flows?
No; for 3.x data flows, the partition ID is not used when accessing the data, so all the data always has to be reloaded into the main memory.

Will this concept be extended to further BW objects or database tables?
Yes; the treatment of non-active data will be optimized further, be it by extending this concept or by using new technologies or concepts.

Solution


Procedure for activating the "non-active data" concept:

• Import SAP HANA Support Package 05.
• Import Support Package 9 for SAP NetWeaver BW 7.30 (SAPKW73009) into your BW system. The Support Package is available when SAP Note 1750249 "SAPBWNews NW 7.30 BW ABAP SP9", which describes this Support Package in more detail, is released for customers. Alternatively, import Support Package 6 for SAP NetWeaver BW 7.31 (SAPKW73106) into your BW system. The Support Package is available when SAP Note 1753103 "SAPBWNews NW BW 7.31/7.03 ABAP SP6", which describes this Support Package in more detail, is released for customers.
• Restart program RS_BW_POST_MIGRATION so that Persistent Staging Area tables are classified correctly (unload priority 7, see SE14). It is sufficient to run the report exclusively with option 14 (set unload priority). If a migration has been carried out from a different database to BW on SAP HANA, and BW Support Package 09 is contained, a single default run of the report is sufficient.
• Run the program RSDU_WODSO_REPART_HDB for all relevant write-optimized DataStore objects so that these are partitioned and classified (unload priority 7). Objects that have small amounts of data or that are in a distributed SAP HANA DB landscape are already classified and partitioned by the program RS_BW_POST_MIGRATION. See also SAP Note 1776749.

After you carry out these steps, all the data from Persistent Staging Areas and write-optimized DataStore objects that is rarely used is automatically displaced with a higher priority. From this point in time, the process is completely automatic; no further maintenance is required for existing or new tables.


Lesson Summary
You should now be able to:
• Explain the motivation for HANA-optimized DataStore Objects
• Identify the differences in architecture and structure compared with DataStore Objects stored in a relational database
• List the different conversion options
• Elaborate on the typical business scenarios in which HANA-optimized DataStore Objects can be used


Lesson: HANA-Optimized InfoCube

Lesson Overview
HANA-Optimized InfoCube

Lesson Objectives
After completing this lesson, you will be able to:

• Understand the motivation for SAP HANA-optimized InfoCubes
• Identify the differences in architecture and structure compared with InfoCubes stored in a relational database
• Describe the conversion process
• Elaborate on the typical business scenarios in which SAP HANA-optimized InfoCubes can be used

Business Example
Your company has built a large Enterprise Data Warehouse system based on SAP BW. With the introduction of SAP HANA, you decided to benefit the most from its In-Memory capabilities. After upgrading your SAP BW system to a HANA-DB-based SAP BW system, all newly created InfoCubes will be SAP HANA-optimized. In addition, you have to decide which of the existing InfoCubes should be converted to benefit from the HANA capabilities.

Motivation

Figure 47: Motivation


Figure 48: General Description

Figure 49: SAP HANA-Optimized InfoCube – From Snowflake to Star Schema


SAP HANA-Optimized InfoCube in the System

Figure 50: Standard InfoCube (not SAP HANA-Optimized)

An InfoCube is a type of InfoProvider. From an analysis point of view, it describes a self-contained data set, for example of a business-oriented area. An InfoCube consists of a set of relational tables arranged according to the enhanced star schema: a large fact table in the middle, surrounded by several dimension tables. The data in an InfoCube is stored either physically or in the BW Accelerator. The InfoCube receives its data by means of a data transfer process and is then available as an InfoProvider for analysis and reporting purposes.

InfoCubes are made up of a number of InfoObjects. All InfoObjects (characteristics and key figures) are available independently of the InfoCube. Characteristics refer to master data with their attributes and text descriptions. The fact table contains the key figures of the InfoCube, while the characteristics are stored in the surrounding dimension tables. In contrast to a DataStore object, whose data part can also contain characteristics, an InfoCube fact table contains only key figures.

The dimensions and the fact table are linked to one another using abstract identification numbers (dimension IDs), which are contained in the key part of the respective database table. As a result, the key figures of the InfoCube relate to the characteristics of the dimensions. The characteristics determine the granularity (the degree of detail) at which the key figures are stored in the InfoCube.


Characteristics that logically belong together (for example, district and area belong to the regional dimension) are grouped together in a dimension. By adhering to this design criterion, dimensions are to a large extent independent of each other, and dimension tables remain small with regard to data volume, which is beneficial for performance. This InfoCube structure is optimized for data analysis. The fact table and dimension tables are both relational database tables.

Figure 51: SAP HANA-Optimized InfoCube

The SAP HANA-optimized InfoCube is a standard InfoCube that is optimized for use with SAP HANA. When you create SAP HANA-optimized InfoCubes, you can still assign characteristics and key figures to dimensions. However, the system does not create any dimension tables apart from the package dimension. The SIDs (master data IDs) are written directly to the fact table, which improves system performance when loading data: since dimensions are omitted, no DIM IDs (dimension keys) have to be created. The dimensions are simply used as a sort criterion and provide a clearer overview when creating a query in BEx Query Designer.
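The difference between the two write paths can be pictured with a small sketch. This is illustrative pseudocode in Python, not SAP code; the table structures and field names are hypothetical simplifications of the fact and dimension tables described above.

```python
# Illustrative sketch (not SAP code): the classic InfoCube write path must
# maintain dimension tables and surrogate DIM IDs, while the HANA-optimized
# layout writes master-data SIDs directly into the fact table.

dim_table = {}       # (region_sid, district_sid) -> dim_id (classic dimension table)
next_dim_id = [1]

def classic_write(fact_table, region_sid, district_sid, amount):
    """Classic layout: writing a fact row requires a DIM-ID lookup/insert first."""
    key = (region_sid, district_sid)
    if key not in dim_table:              # extra lookup + insert per new combination
        dim_table[key] = next_dim_id[0]
        next_dim_id[0] += 1
    fact_table.append({"dim_id": dim_table[key], "amount": amount})

def hana_optimized_write(fact_table, region_sid, district_sid, amount):
    """HANA-optimized layout: SIDs go straight into the fact table."""
    fact_table.append({"region_sid": region_sid,
                       "district_sid": district_sid,
                       "amount": amount})

classic, flat = [], []
classic_write(classic, 10, 77, 100.0)
hana_optimized_write(flat, 10, 77, 100.0)
print(classic[0])   # row references a generated DIM ID
print(flat[0])      # row carries the SIDs directly, no dimension maintenance
```

The point of the sketch is only the load-time difference: the classic path pays an extra lookup and insert per new characteristic combination, which the flat SID-based layout avoids.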


Figure 52: SAP HANA-Optimized InfoCube – Tables in the HANA Studio

Hint: A DTP setting is used to automatically activate master data after it has been updated (for master data load processes). This is possible because aggregates are not used with the SAP HANA database.

You find the structure of the InfoCube in the SAP HANA Studio (or Information Modeler) in the respective catalog for the SAP system, in the tables folder. Looking at the active table reveals the three indexes which comprise it: besides the main index with the suffix 00, there are the delta index with the suffix 70 and the history index with the suffix 80.

Figure 53: HANA-Optimized InfoCube – Compression


Inventory InfoCubes

Figure 54: Inventory Management (1)

The main processes for Inventory Management (for non-cumulative key figures) have moved to the DTP. You need the following DTPs to manage inventory data:
• First, a DTP that loads the initial data set. This is indicated via the Extraction Mode of the DTP.
• Second, a DTP for regular movements.
If you want to load historic data, you have to switch on the corresponding flag in this particular DTP in the productive system.


Figure 55: HANA-Optimized Non-Cumulative InfoCube (1)

• Initialization records are stored with SID_0RECORDTP = ‘1’ – ‘NCUM Initialization’ partition of the /BI0/F0IC_C03 table
• Historical transactions are stored with SID_0RECORDTP = ‘2’ – ‘NCUM History’ partition of the /BI0/F0IC_C03 table
• Delta transactions are stored with SID_0RECORDTP = ‘0’ – ‘uncompressed’ partition of the /BI0/F0IC_C03 table
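The role of the three record-type partitions can be sketched as follows. This is a simplified illustration of non-cumulative logic, not SAP's actual implementation; the field names and the arithmetic (current stock = initialization plus deltas, earlier stock reconstructed from the history partition) are assumptions made for the example.

```python
# Simplified illustration (not SAP's implementation): fact rows carry a record
# type that determines their logical partition, and a non-cumulative key
# figure (stock) is derived from the partitions rather than stored per period.

rows = [
    {"recordtp": 1, "qty": 500},   # '1' = NCUM initialization (reference stock)
    {"recordtp": 2, "qty": -30},   # '2' = historical movement (before the init)
    {"recordtp": 0, "qty": 20},    # '0' = delta movement (after the init)
    {"recordtp": 0, "qty": -5},
]

def partition(rows, recordtp):
    """Select the rows belonging to one logical partition."""
    return [r for r in rows if r["recordtp"] == recordtp]

init = sum(r["qty"] for r in partition(rows, 1))

# Current stock: initialization plus all delta movements posted after it.
current_stock = init + sum(r["qty"] for r in partition(rows, 0))   # 500 + 15 = 515

# Stock before the historical movements: initialization minus the movements
# recorded in the history partition (walking backwards from the reference point).
stock_before_history = init - sum(r["qty"] for r in partition(rows, 2))  # 500 - (-30) = 530

print(current_stock, stock_before_history)
```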


Figure 56: HANA-Optimized Non-Cumulative InfoCube (2)

Hint: SAP HANA-optimized InfoCubes with non-cumulative key figures can be integrated as the target of update rules into a 3.x data flow (HANA 1.0 SPS05).

Note: A DTP can be changed in a production system if the development class of the imported DTP can be changed. You then just need to change the settings in the DTP and reactivate it. For more information, see SAP Note 1558791.

This logic replaces the compression logic (and therefore the associated marker update) for the SAP HANA-optimized InfoCube. There is no semantic basis for compression with SAP HANA-optimized InfoCubes. The data is only compiled; markers are not updated.


Migration

Figure 57: Conversion

If you are using an SAP HANA database, you can only create SAP HANA-optimized InfoCubes. You can continue using existing standard InfoCubes that do not have the SAP HANA-optimized property, or you can convert them. The property Data Persistency in BWA is not available with SAP HANA because the SAP HANA database assumes the role of the primary persistence.

Figure 58: HANA-Optimized InfoCube – Conversion of Standard InfoCubes


If you are using an SAP HANA database and want to benefit from it when loading data into InfoCubes, we recommend converting existing standard InfoCubes.

Note: The table layout changes during conversion. If you access data with programs of your own, you have to adjust these manually after conversion. We generally recommend only using released interfaces to access BW data.

After migration to the SAP HANA database, normal standard InfoCubes are stored in the SAP HANA database's column-based store and have a logical index (CalculationScenario). In analysis, they behave like BWA-indexed InfoCubes. If the InfoCubes had data persistency in BWA, the content is deleted during the system migration to HANA and the InfoCubes are set to inactive. If you want to continue using one of these InfoCubes as a standard InfoCube, you need to activate it again and reload the data from the former primary persistence (a DataStore object, for example).

If you have integrated InfoCubes for conversion into process chains, there is no need to modify the process chains. Certain process types have become obsolete, but they can remain in the process chain when an SAP HANA database is used. The obsolete processes do not have any function when an SAP HANA database is used; the system simply skips them.

Figure 59: HANA-Optimized InfoCube – Conversion


Procedure
1. There are two ways of calling the conversion transaction:
• You are in the InfoCube editing screen. Choose Goto → Conversion to SAP HANA-Optimized. The InfoCube you want to edit is displayed.
• Call transaction RSMIGRHANADB directly. You can now use the input help to select InfoCubes for conversion.
2. You can specify whether the log is displayed after conversion.
3. Choose Execute.
4. The InfoCubes are converted.

BW Integrated Planning

Figure 60: BW Integrated Planning – PAK Architecture

SAP HANA will be available in multiple deployment options. Phase I is a side by side approach, where HANA is added to the system landscape and data that is needed for analytic or other purposes is replicated from the underlying DBMS to HANA using various ELT mechanisms. Phase II is an embedded approach where HANA replaces the underlying DBMS, for example BW. Phase III is pushing processing down into HANA and removing further layers like BW.


Figure 61: SAP HANA and SAP NetWeaver BW

Remodeling: From a technical point of view, no remodeling is required. The planning engine has no impact on your existing ABAP-based planning model. If a planning function is already supported by the planning engine (disaggregation, for example), the planning function is executed in SAP HANA. If a planning function is not yet supported by the planning engine (revaluation, for example), the planning function is executed in the ABAP stack without HANA support.

Performance optimization: You are advised to investigate the performance optimization potential of your current planning architecture. HANA offers huge performance improvements at runtime; however, HANA cannot compensate for situations where the modeling is less than perfect.

License: The SAP BusinessObjects Planning and Consolidation, version for SAP NetWeaver license contains SAP BusinessObjects Planning and Consolidation, version for SAP NetWeaver (BPC) and the Planning Applications Kit. If you need more details regarding licensing and pricing, please check the official price list.
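Disaggregation, named above as a planning function that the planning engine pushes down to SAP HANA, distributes a changed total top-down to the detail records. The following minimal sketch shows only the principle (proportional distribution against existing reference values); it is illustrative Python, not the SAP planning engine, and the record layout is an assumption for the example.

```python
# Minimal sketch of the disaggregation principle (illustrative only):
# a new total is distributed to detail records in proportion to their
# existing reference values; with no reference data, it is split equally.

def disaggregate(records, key, new_total):
    """Return copies of records with `key` scaled so that the values sum to new_total."""
    current_total = sum(r[key] for r in records)
    if current_total == 0:
        share = new_total / len(records)          # equal split if no reference data
        return [{**r, key: share} for r in records]
    factor = new_total / current_total            # proportional scaling factor
    return [{**r, key: r[key] * factor} for r in records]

plan = [{"product": "A", "qty": 10.0},
        {"product": "B", "qty": 30.0}]

result = disaggregate(plan, "qty", 80.0)          # raise the total from 40 to 80
print(result)   # A and B keep their 1:3 ratio at the new total
```

In the real engine this set-based scaling is exactly the kind of operation that benefits from being executed inside the database instead of row-by-row in the ABAP stack.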


Figure 62: Planning Applications Kit and SAP HANA

The Planning Applications Kit (PAK) provides a connection between a BW-IP-based planning application and SAP HANA. It consists of a buffer connector and a function connector, which provide a link to the planning sessions and planning operations that are now located within the HANA database. Reconfiguration of current planning models is not necessary; existing models profit from significantly increased performance by using the PAK.

Figure 63: BW In-Memory Planning: Accelerated Planning Functions


Figure 64: HANA Planning – Simple Disaggregation Example

Figure 65: Deployment Options for BW Integrated Planning

Use of the Planning Applications Kit requires a license for the following SAP functionality: ’SAP BusinessObjects Planning and Consolidation, version for SAP NetWeaver’. If you do not have this license, please contact your account professional for further information.


There are special methods for managing planning session commands. These methods handle the interaction between the Planning Applications Kit and the Planning Engine:
• Open
• Close
A planning functionality method is also provided for:
• Copy
• Disaggregate
• Set Values
• Delta
• Restrict
• Delete
• Lookup
• Snapshot
• Save
• FOX

Figure 66: How To: Customize Planning Application Kit (1)

General activation of the Planning Applications Kit: in transaction SM30, table view RSPLS_HDB_ACT, choose HANA_ACT in order to activate the Planning Applications Kit.


Figure 67: How To: Customize Planning Application Kit (2)

InfoProvider-specific activation of the Planning Applications Kit: in transaction SM30, table view RSPLS_HDB_ACT_IP, you can additionally activate the Planning Applications Kit for individual InfoProviders. Choose the real-time cube you want to use for HANA planning. This guarantees a smooth transition from existing BW-IP scenarios and avoids additional roundtrips due to the restrictions (Note 1637199). These switches might become obsolete.


Use Cases and Limitations

Figure 68: HANA-Optimized InfoCube


Lesson Summary
You should now be able to:
• Understand the motivation for SAP HANA-optimized InfoCubes
• Identify the differences in architecture and structure compared with InfoCubes stored in a relational database
• Describe the conversion process
• Describe the typical business scenarios in which SAP HANA-optimized InfoCubes can be used


Lesson: Semantically Partitioned Object

Lesson Overview
This lesson describes when to use the Semantically Partitioned Object (SPO) in modeling and outlines the use cases, advantages, and limitations of this type of InfoProvider.

Lesson Objectives
After completing this lesson, you will be able to:
• Describe the use cases of an SPO
• Create an SPO and integrate it into the data flow

Business Example You expect a high data volume in the InfoProviders that you provide for reporting. To optimize query performance, you decide to divide the InfoProviders into several semantic partitions.

The Concept of Semantic Partitioning
Semantic partitioning means that transactional data is loaded into different InfoProviders that are partitioned according to characteristics such as:
• Geographical characteristics (countries, regions, and so on)
• Time characteristics
• Organizational characteristics (departments or business units)

Semantic partitioning is necessary in enterprise data warehouse architectures because it improves performance with mass data and the scalability of the data warehouse. If an InfoCube contains too many records, the compression and the reconstruction of aggregates may take a very long time. The same applies to DataStore objects, where the duration of the activation process increases. Using semantic partitions helps to keep the data volume in the respective PartProviders low.

Error handling and the handling of load processes in different time zones are also facilitated by this concept. Without partitioning, if a request for one region leads to an error, the new data for the whole InfoProvider is not available for reporting. With semantic partitioning, only the data for the affected region is unavailable. PartProviders for regions with different time zones also allow data loading and administrative tasks to be scheduled independently of the time zone. Without partitioning, it is very difficult to find a suitable time slot for all countries.
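The error-isolation argument can be sketched as follows. This is illustrative Python, not BW code; the partition map, the record layout, and the "negative amount = load error" rule are assumptions invented for the example.

```python
# Illustrative sketch: records are routed into separate partitions by a
# semantic characteristic (here: region via country), so a bad record only
# invalidates the load for its own partition, not for the whole provider.
from collections import defaultdict

PARTITIONS = {"EMEA": {"DE", "FR"}, "AMER": {"US"}, "APAC": {"JP"}}

def route(records):
    """Distribute records to partitions; collect partitions whose load failed."""
    partitions = defaultdict(list)
    failed = set()
    for rec in records:
        target = next((p for p, cs in PARTITIONS.items() if rec["country"] in cs), None)
        if target is None or rec["amount"] < 0:      # simulated load error
            failed.add(target or "UNASSIGNED")
            continue
        partitions[target].append(rec)
    return partitions, failed

recs = [{"country": "DE", "amount": 100},
        {"country": "US", "amount": -1},             # broken record: AMER load fails
        {"country": "JP", "amount": 50}]

parts, failed = route(recs)
print(sorted(parts))   # partitions still available for reporting
print(failed)          # only the affected partition is blocked
```

Only the AMER partition is marked as failed; EMEA and APAC remain loadable and reportable, which is exactly the benefit the paragraph above describes.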


Query performance can also be improved by accessing partitioned InfoProviders with fewer records. This is especially the case if only a particular region or time interval that has been used for partitioning is of interest for reporting users.

Implementation of Semantic Partitioning in SAP BW with Semantically Partitioned Objects
Semantically Partitioned Objects (SPOs) allow users to define a semantic partitioning once and use it repeatedly. This concept improves scalability and reduces development effort when a new partition is created to extend the existing data model. In earlier SAP NetWeaver BW releases, this concept could also be applied, but it required several manual steps. An administrator of a BW 7.0 system who wants to partition InfoProviders by geographical characteristics (for example, Europe, North America, and Asia) has to create a master InfoCube and copy it. After that, transformations, data transfer processes with different filter settings and – ideally – InfoSources have to be created manually. Changes in the data model of the InfoCube or changes in the partitioning also have to be applied to the InfoProviders manually.

In SAP NetWeaver BW 7.3, the effort for creating, maintaining, and changing semantic partitions has decreased significantly: a wizard helps you to create Semantically Partitioned Objects. These objects comprise multiple InfoCubes or DataStore objects that are logically partitioned. The SPO does not introduce semantic partitioning as a new data modeling concept in SAP NetWeaver BW, but makes it easier to implement by reducing the necessary manual steps. Now only a master InfoCube or master DataStore object is required; multiple InfoCubes or DataStore objects with the same structure are then created automatically and integrated into the data flow. Changes to the structure do not have to be applied to each object, but can be applied to the SPO once. All data transfer processes (DTPs) for the different partitions can be integrated into process chains in one step.

The components of the SPO are:
• MasterProvider – the InfoProvider that is maintained by the end user and used as a template object (InfoCube or DataStore object)
• PartProviders – a homogeneous set of InfoProviders with properties copied from the MasterProvider. An SPO partition is characterized by partitioning criteria (a maximum of 5 characteristics) that have to be disjoint across SPO partitions (no overlap).
• InfoSources – used as interface objects to embed the SPO in the data flow
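The disjointness requirement on the partitioning criteria (no characteristic value may be claimed by more than one PartProvider) can be expressed as a simple check. This helper is hypothetical, written for illustration only; it is not an SAP API.

```python
# Hypothetical helper (not an SAP API): verify that SPO partitioning criteria
# are disjoint, i.e. no characteristic value belongs to more than one partition.

def find_overlaps(partitions):
    """partitions: dict partition_name -> set of characteristic values.
    Returns a list of (value, first_partition, second_partition) conflicts."""
    seen = {}
    overlaps = []
    for name, values in partitions.items():
        for v in values:
            if v in seen:
                overlaps.append((v, seen[v], name))   # value claimed twice
            else:
                seen[v] = name
    return overlaps

ok = find_overlaps({"P1": {"DE", "FR"}, "P2": {"US"}})
bad = find_overlaps({"P1": {"DE", "FR"}, "P2": {"FR", "US"}})
print(ok)    # empty list: the partitioning is valid
print(bad)   # 'FR' is claimed by both P1 and P2, so this is not a valid SPO partitioning
```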


An SPO is always created with two InfoSources: one for incoming and one for outgoing data flows. The InfoSources have the function of an inbound and outbound data layer for the object, which facilitates the integration of the Semantically Partitioned Object into a data flow. Between the InfoSources and the partitions there are simple dummy transformations. Actual transformation of data takes place in the transformation objects underneath or above the inbound and outbound InfoSources. Note that reporting is only possible on the partitions, not on the SPO itself.


Lesson Summary
You should now be able to:
• Describe the use cases of an SPO
• Create an SPO and integrate it into the data flow


Unit Summary


Unit Summary
You should now be able to:
• Explain the motivation for HANA-optimized DataStore Objects
• Identify the differences in architecture and structure compared with DataStore Objects stored in a relational database
• List the different conversion options
• Describe the typical business scenarios in which HANA-optimized DataStore Objects can be used
• Understand the motivation for SAP HANA-optimized InfoCubes
• Identify the differences in architecture and structure compared with InfoCubes stored in a relational database
• Describe the conversion process
• Describe the typical business scenarios in which SAP HANA-optimized InfoCubes can be used
• Describe the use cases of an SPO
• Create an SPO and integrate it into the data flow


Unit 3: Consuming HANA Models in BW

Unit Overview
This unit covers consuming SAP HANA models in SAP BW.

Unit Objectives
After completing this unit, you will be able to:
• Understand the architecture when consuming SAP HANA models in SAP BW
• Show how SAP HANA models can be accessed from SAP BW for query usage (TransientProvider)
• Merge SAP BW and SAP HANA models via a CompositeProvider
• Include SAP HANA data via DB Connect in the SAP BW staging process

Unit Contents Lesson: Lesson: Lesson: Lesson:

2013

VirtualProvider ......................................................... 76 TransientProvider...................................................... 82 CompositeProvider .................................................... 89 DB Connect ...........................................................100


Lesson: VirtualProvider

Lesson Overview
This lesson covers the VirtualProvider.

Lesson Objectives
After completing this lesson, you will be able to:
• Understand the architecture when consuming SAP HANA models in SAP BW

Business Example

Architecture

Figure 69: Overview: Mixed Scenarios SAP BW & SAP HANA Schemas

TransientProvider based on a HANA Model:
• For ad hoc scenarios
• Generated, not modeled; no InfoObjects required
• Full BEx Query support
• Can be included in a CompositeProvider to combine with other BW InfoProviders


VirtualProvider based on a HANA Model:
• For flexible integration of HANA data with BW-managed metadata (e.g. lifecycle)
• Security handled by BW
• Full BEx Query support
• Can be included in CompositeProviders and MultiProviders to combine with other BW InfoProviders

Semantically Partitioned Object

Figure 70: Semantically Partitioned Object – Modeling for Large Data Volume (DSO or InfoCube)

A semantically partitioned object is an InfoProvider that consists of several InfoCubes or DataStore objects with the same structure. Semantic partitioning is a property of the InfoProvider. You specify this property when creating the InfoProvider. Semantic partitioning divides the InfoProvider into several small, equally sized units (partitions).


A semantically partitioned object offers the following advantages compared to standard InfoCubes or standard DataStore objects:
• Better performance with mass data: The larger the data volume, the longer the runtimes required for standard DataStore objects and standard InfoCubes. Semantic partitioning means that the data sets are distributed over several data containers, so runtimes are kept short even if the data volume is large.
• Better error handling: If a request for a region ends with an error, for example, the entire InfoProvider is unavailable for analysis and reporting. With a semantically partitioned object, the separation of the regions into different partitions means that only the partition that caused the error is unavailable for data analysis.
• Working with different time zones: EDW scenarios usually involve several time zones. With a semantically partitioned object, the time zones can be separated by the partitions, so data loading and administrative tasks can be scheduled independently of the time zone.

VirtualProvider

Figure 71: VirtualProvider Based on SAP HANA

A VirtualProvider is a type of InfoProvider that reads its data directly from the underlying source at query time rather than storing it physically in BW. Characteristics and key figures are created based on the data fields in the HANA model, and an InfoCube is constructed using the characteristics as dimensions and with the key figures duly filled in.


The advantage of the VirtualProvider over the TransientProvider is that here the InfoCube acts as the DataProvider, and it can be extended to use traditional BW data types as its source, which helps in multi-level reporting. For example, if data comes as a timestamp from the HANA model, you can inherit the traditional date data type from BW, and the data can be displayed in terms of quarter, year, calendar week, and so on. This shows the flexibility you gain in reporting when you use a VirtualProvider.

Creating VirtualProviders Based on an SAP HANA Model
You can create a VirtualProvider based on an SAP HANA model if you want to use this model in the BW system. This VirtualProvider can be used in a MultiProvider. Navigation attributes can be used, and the BW analysis authorizations apply.

Prerequisites: You are using an SAP HANA database. You have created an SAP HANA model in SAP HANA Studio.

Procedure:
1. In the Data Warehousing Workbench under Modeling, select the InfoProvider tree.
2. In the context menu, choose Create VirtualProvider.
3. Select Based on a SAP HANA Model. Choose Details. In the dialog box, enter the SAP HANA model and the package that it belongs to. Press (Continue).
4. Choose (Create). The VirtualProvider processing screen appears.
5. Define the VirtualProvider by adding the required InfoObjects. To do this, you need to know how the SAP HANA model is defined.
6. Choose Provider-Specific InfoObject Properties from the context menu on the Dimensions and Key Figures folders. In the following dialog box, you can assign the InfoObjects to the fields in the SAP HANA model. Press (Continue).
7. Activate the VirtualProvider.

Hint: Master data is also possible as a VirtualProvider.

Using Virtual Master Data
If you want to use virtual navigation attributes and texts in a VirtualProvider based on an SAP HANA model, you need to create virtual master data.

Prerequisites: You are using an SAP HANA database.

Procedure: You are in the Data Warehousing Workbench in the Modeling area. Choose the InfoObjects area.


1. In the context menu for your InfoObjectCatalog, choose Create InfoObject. Assign a name and a description and choose (Continue). The InfoObject editing screen appears.
2. Make the required entries for your InfoObject. On the Master Data/Texts tab page, choose SAP HANA Attribute View as the master data access. Specify an SAP HANA package and an SAP HANA attribute view.
3. Choose Suggest SAP HANA Links. Select the SAP HANA attributes for which you want to generate proposals and choose (Apply). A list of proposals appears. The attributes of the SAP HANA model are displayed in folders, and the proposals are listed under these folders. Choose (Technical Details) to display the strategy for creating proposals.
4. In each case, select a suitable proposal for attributes, texts, and compounding (if applicable). If none of the suggested InfoObjects are suitable, you can initially leave the attribute unassigned and assign it manually later on. Choose (Continue). The attributes are assigned. If you have selected texts (TXTSH, TXTMD, TXTLG), the relevant indicator for texts is set on the Master Data/Texts tab page.
5. To assign attributes manually: choose Maintain HANA Links. In each case, select suitable SAP HANA attributes for attributes, texts, and compounding (if applicable) and choose (Apply).
6. Activate the characteristic.

Result: You can now use the characteristic with virtual master data in your VirtualProvider.


Lesson Summary
You should now be able to:
• Understand the architecture when consuming SAP HANA models in SAP BW


Lesson: TransientProvider

Lesson Overview
This lesson covers the TransientProvider.

Lesson Objectives
After completing this lesson, you will be able to:
• Show how SAP HANA models can be accessed from SAP BW for query usage (TransientProvider)

Business Example

TransientProviders

Figure 72: Analytical Index – TransientProvider

If you want to use SAP BW OLAP functions to report on SAP HANA Analytic or Calculation Views, you can publish these SAP HANA models to the SAP BW system.

Publish Analytic/Calculation View: transaction RSDD_HM_PUBLISH. The published Analytic or Calculation Views are visible as Analytical Indexes (TransientProviders).


Administration of Analytical Indexes: transaction RSDD_LTIP.

Note: TransientProviders are not transportable at the moment and must be re-created in all systems.

1. Run transaction RSDD_HM_PUBLISH.
2. Choose an SAP HANA model belonging to a catalog (SAP HANA package).
3. Choose Create.
4. The system suggests a name for the new Analytical Index based on the SAP HANA model's name. You can change this name. Choose Enter. The definition of the Analytical Index is displayed.
5. On the Properties tab, you can assign an InfoArea to the Analytical Index.
6. On the Characteristics and Key Figures tabs, you can assign SAP BW InfoObjects to the fields of the Analytical Index. Thus, the Analytical Index has access to SAP BW metadata. Furthermore, analysis authorizations for these InfoObjects are considered.
7. Choose Save. The Analytical Index is created.

• Generated name for the TransientProvider: @3 .........

RESULT: SAP BW queries accessing the Analytical Index can now be defined.

Figure 73: Publishing SAP HANA Models: Step 1


Figure 74: Publishing SAP HANA Models: Steps 2-4

Restrictions for HANA Models in BW
• ProviderName: limited to catalog_name+cube_name+schema, not longer than 63 characters (TREX API limitation)
• The full model name incl. package must be