SAP BW Interview Questions
1) How do we do the SD and MM configuration for BW?
You need to activate the DataSources in the R/3 system and maintain the login information for the logical system. In SM59, choose the RFC destination for the BW system and, under Logon/Security, maintain the user credentials. Maintain the control parameters for data transfer and fill the setup tables (transaction SBIW). These are the basic prerequisites.
From an SD perspective, you as a BW consultant should first understand the basic SD process flow on the R/3 side (search the forum for "SD process flow" and you will find a wealth of information on the flow, the tables and the transactions involved). Next, understand the process flow that has been implemented at the client's site: how the SD data flows, what the integration points with other modules are, and how the integration happens. This knowledge is essential when modeling your BW design. From a BW perspective, you first need to know all the SD extractors and what information they bring, and then look at all the cubes and ODS objects for SD.

1) What is the t-code to see the log of a transport connection?
In RSA1 -> Transport Connection you can collect the queries and the role and then transport them (enable the transport in SE10, import it in STMS):
1. RSA1
2. Transport Connection (button in the left bar menu)
3. SAP Transport -> Object Types (button in the left bar menu)
4. Find Query Elements -> Query
5. Find your query
6. Group the necessary objects
7. Transport the objects (car icon)
8. Release the transport (SE10)
9. Import the transport (STMS)

2) LO/MM inventory DataSources and the significance of the marker?
The marker is like a checkpoint when you upload data from the inventory DataSources. 2LIS_03_BX is the DataSource for the current stock and 2LIS_03_BF for the material movements. After uploading the data from BX you should compress the request in the cube; then load the historical data from the other DataSource, BF, and compress it with "no marker update" set. The marker acts as a reference point for the stock; if you handle it wrongly, you get data mismatches at BEx level because the system has no valid stock reference. The standard handling is:
- 2LIS_03_BX (Stock Initialization for Inventory Management): compress with marker update, i.e. leave the "No marker update" checkbox unchecked.
- 2LIS_03_BF (Goods Movements from Inventory Management) and 2LIS_03_UM (Revaluations), historical loads: compress with the "No marker update" checkbox selected, since these movements are already contained in the initialized stock.
- Subsequent delta loads from BF/UM: compress with marker update again (checkbox unchecked).
The "No marker update" flag is found on the Collapse (compression) tab when managing the InfoCube.
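To make the marker logic concrete, here is a minimal Python sketch of it (illustrative only, not SAP code; all dates, quantities and function names are invented). It shows why the historical movements must be compressed without a marker update, while the stock initialization and the later deltas do update the marker:

    # Illustrative sketch of the inventory "marker" (reference point) logic.
    marker = 0          # reference stock; updated only by compressions WITH marker update
    movements = []      # compressed movement history

    def compress_stock_init(stock):
        """2LIS_03_BX: opening stock, compressed WITH marker update."""
        global marker
        marker += stock

    def compress_movements(requests, update_marker):
        """2LIS_03_BF/UM: movements; historical loads skip the marker."""
        global marker
        movements.extend(requests)
        if update_marker:
            marker += sum(qty for _, qty in requests)

    compress_stock_init(100)   # stock as of 2024-01-01; already includes all history below

    # Historical movements: already inside the initialized stock, so NO marker
    # update - updating the marker again would double-count them.
    compress_movements([("2023-12-15", +40), ("2023-12-20", -10)], update_marker=False)

    # New delta movements after the initialization: marker update ON.
    compress_movements([("2024-01-05", -30)], update_marker=True)

    print(marker)   # 70 -> current stock

    def stock_at(date):
        """Non-cumulative read: roll the marker back over later movements."""
        return marker - sum(qty for d, qty in movements if d > date)

    print(stock_at("2023-12-18"))   # 110

Had the historical load updated the marker too, the current stock would come out as 100 instead of 70, which is exactly the mismatch described above.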
3) How can you navigate to see the error IDocs?
Check the IDocs in the source system: go to BD87, enter your user ID and the date, and execute. You will find the IDocs with red status; select the erroneous IDoc, right-click and choose "Manual process". The red IDocs need to be reprocessed; you can take the help of the ALE/IDoc or Basis team, or push them manually yourself directly in the BD87 screen. Also try to find out why these IDocs got stuck there in the first place.

4) What is the difference between the V1, V2 and V3 jobs in extraction?
V1 update: whenever we create a transaction in R/3 (e.g. a sales order), the entries go into the R/3 application tables (VBAK, VBAP, ...); this takes place in the V1 update.
V2 update: the V2 update starts a few seconds after the V1 update, and in this update the values go into the statistical tables, from which we do the extraction into BW.
V3 update: it is purely for BW extraction.
(V1, V2 and V3 are defined more formally, from the update-type point of view, in the next answer.)

5) What are the statistical update and the document update?
Synchronous updating (V1 update): the statistics update is made synchronously with the document update. If problems occur during updating that terminate the statistics update, the original documents are NOT saved. The cause of the termination should be investigated and the problem solved; the documents can then be entered again. (Radio button: V2 updating.)
Asynchronous updating (V2 update): with this update type, the document update is made separately from the statistics update. A termination of the statistics update has NO influence on the document update (unlike the V1 update). (Radio button: updating in V3 update program.)
Asynchronous updating (V3 update): with this update type, updating is also made separately from the document update. The difference from the V2 update lies in the time schedule: if the V3 update is active, the update can be executed at a later time. In contrast to the V1 and V2 updates, no single documents are updated; the V3 update is therefore also described as a collective update.

6) Do you have any idea how to improve the performance of BW?
7) How can you decide whether query performance is slow or fast?
You can check that in transaction RSRT: execute the query in RSRT, then go to SE16 and display table RSDDSTAT (for BW 3.x) or RSDDSTAT_DM (for BI 7.0). There you can view all the details about the query run, such as the time taken to execute the query and the timestamps.
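If you want to pull those statistics out of the system programmatically rather than through SE16, a minimal Python sketch using the pyrfc package and the generic RFC_READ_TABLE function module could look like the following (all connection parameters are placeholders; this assumes an RFC user with the corresponding authorization):

    # Minimal sketch: pull query runtime statistics from RSDDSTAT over RFC.
    from pyrfc import Connection

    conn = Connection(ashost="bw-host", sysnr="00", client="100",
                      user="RFC_USER", passwd="secret")

    # RSDDSTAT holds the BW 3.x query statistics; use RSDDSTAT_DM on BI 7.0.
    result = conn.call("RFC_READ_TABLE",
                       QUERY_TABLE="RSDDSTAT",
                       DELIMITER="|",
                       ROWCOUNT=20)

    for row in result["DATA"]:          # each entry is {'WA': 'raw|delimited|row'}
        print(row["WA"])

These are the same rows SE16 shows interactively; reading them over RFC is only worthwhile if you want to trend query runtimes outside the GUI.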
8) What is the statistical setup, what is it needed for, and why?
Follow these steps to fill the setup tables:
1. Go to transaction RSA3 and see if any data is available for your DataSource. If data is there in RSA3, go to transaction LBWG (delete setup data) and delete the data by entering the application name.
2. Go to transaction SBIW --> Settings for Application-Specific DataSources --> Logistics --> Managing Extract Structures --> Initialization --> Filling in the Setup Table --> Application-Specific Setup of Statistical Data --> perform the setup for the relevant application.
3. In OLI*** (for example OLI7BW for the statistical setup of old order documents), give the name of the run and execute. Now all the available records from R/3 are loaded into the setup tables.
4. Go to transaction RSA3 and check the data.
5. Go to transaction LBWE and make sure the update mode for the corresponding DataSource is the serialized V3 update.
6. Go to the BW system, create an InfoPackage and, on the Update tab, select "Initialize delta process". Schedule the package. All the data available in the setup tables is now loaded into the data target.
7. For the delta records, go to LBWE in R/3 and change the update mode for the corresponding DataSource to direct/queued delta. Records will then bypass SM13 and go directly to RSA7. In transaction RSA7 you can see a green light, and as soon as new records are posted you can see them in RSA7.
8. Go to the BW system and create a new InfoPackage for delta loads. Double-click the new InfoPackage; on the Update tab you can see the delta update radio button.
9. Now you can go to your data target and see the delta records.

9) Why do we have to construct setup tables?
The R/3 database structure for accounting is much simpler than the logistics structure. Once you post to a ledger, that posting is final; you can correct it, but the correction is just another posting. BI can get its information directly out of this (relatively) simple database structure. In logistics (LO), by contrast, you can have one order with multiple deliveries to more than one delivery address, and the payer can also be different. When one item (order line) changes, this can be reflected in the order, supply, delivery, invoice, and so on. Therefore a special record structure is built for logistics reports, and this structure is now used by BI. In order to have this special structure filled with your starting position, you must run a setup; from that moment on, R/3 keeps filling this LO database. If you did not run the setup, BI would only get data from the moment you started filling LO (with the Logistics Cockpit).
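The division of labor described in the last two answers, setup tables for the one-off history and the delta queue for everything that happens afterwards, can be illustrated with a small Python simulation (everything here is invented for illustration; it is not SAP code):

    # Illustrative simulation of init vs. delta extraction.
    setup_table = ["order 1", "order 2", "order 3"]   # history, filled once by OLI7BW
    delta_queue = []                                   # RSA7, filled by ongoing postings
    bw_target = []

    def post_document(doc):
        """After the delta init, every new posting lands in the delta queue."""
        delta_queue.append(doc)

    # InfoPackage "Initialize delta process": reads the setup tables once.
    bw_target.extend(setup_table)

    # Day-to-day business keeps posting.
    post_document("order 4")
    post_document("order 5")

    # InfoPackage "Delta update": drains only the delta queue.
    bw_target.extend(delta_queue)
    delta_queue.clear()

    print(bw_target)   # history plus deltas, with no overlap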
10) How can you eliminate duplicate records in TD and MD?
Try to check the system logs through SM21 for the same.

11) What is the use of the marker in MM?
The marker update is like a checkpoint: it gives the snapshot of the stock on a particular date, i.e. the date when the marker was last updated. Because we are using a non-cumulative key figure, it would take a lot of time to calculate the current stock at report time; to overcome this we use the marker update. Marker updates do not summarize the data. In inventory management scenarios we have to calculate opening and closing stock on a daily basis; to facilitate this we set a marker, against which the values of each record are added and subtracted. In the absence of the marker update the data would simply be added up and would not give correct values.

12) Tell me about web templates?
The web template details are stored in the following tables:
RSZWOBJ - storage of the web objects
RSZWOBJTXT - texts for templates/items/views
RSZWOBJXREF - structure of the BW objects in a template
RSZWTEMPLATE - header table for BW HTML templates
You can check these tables and search for your web template entry. However, if I understand the question correctly, you will have to open the template in the WAD and make the corrections there.

13) What is a dashboard?
A dashboard can be created using the Web Application Designer (WAD) or the Visual Composer (VC). A dashboard is a collection of reports, views, links and so on in a single view; iGoogle, for example, is a dashboard. A dashboard is a graphical reporting interface that displays KPIs (key performance indicators) as charts and graphs; it is a performance management tool. When we want to see with a helicopter view how all of the organization's measures are performing, we need a report that quickly shows the trends in a graphical display; such reports are called dashboard reports. We could still report each measure individually, but by keeping all measures on a single page we create a single access point for the users to view all the information available to them. This saves a lot of precious time, gives clarity on the decisions that need to be taken, and helps the users understand the trend of the measures within the business flow.
Creating a dashboard (it could be built with the Visual Composer or the WAD):
(1) Create all the BEx queries with the required variants and tune them well.
(2) Differentiate table queries and graph queries.
(3) Choose the graph types that meet your requirements.
(4) Draw the layout of how the dashboard page should look.
(5) Create a web template that has a navigation block / selection information.
(6) Keep the navigation block fields common across the measures.
(7) Include the relevant web items in the web template.
(8) Deploy the URL/iView to the users through the portal/intranet.
The steps for creating a dashboard using the WAD can be summarized as follows:
1) Open a new web template in the WAD.
2) Define the tabular layout as per the requirements, so as to embed the necessary web items.
3) Place the appropriate web items in the appropriate tabular grids.
4) Assign queries to the web items (a query assigned to a web item is called a data provider).
5) Take care that the navigation block's selection parameters are common across all the BEx queries of the affected data providers.
6) Set the properties of the individual web items as per the requirements; they can be modified in the Properties window or in the HTML code.
7) Use the URL of the executed web template in the portal/intranet.

14) How can you solve data mismatch tickets between R/3 and BW?
Check the mapping for the field (for example 0STREET) in the transfer rules on the BW side. Check the data in the PSA for the same field. If the PSA does not have complete data either, check the field in RSA3 in the source system.

16) What is the replacement path? Tell me one scenario.
http://www.sd-solutions.com/documents/SDS_BW_Replacement%20Path%20Variables.html

17) What is the difference between the PSA and IDocs?
In BI7, the PSA is the only transfer method used for data loads from the source system into BW.

18) What do we do in the Business Blueprint stage?
SAP has defined a business blueprint phase to help extract pertinent information about your company that is necessary for the implementation. These blueprints are in the form of questionnaires that are designed to probe for information about how your company does business, and they also serve to document the implementation. Each business blueprint document essentially outlines your future business processes and business requirements. The kinds of questions asked are germane to the particular business function, as in the following sample questions:
1) What information do you capture on a purchase order?
2) What information is required to complete a purchase order?
Accelerated SAP question and answer database: the question and answer database (QADB) is a simple, though aging, tool designed to facilitate the creation and maintenance of your business blueprint. This database stores the questions and the answers and serves as the heart of your blueprint. Customers are provided with a customer input template for each application that collects the data. The question and answer format is standard across applications to facilitate easier use by the project team.
Issues database: another tool used in the blueprinting phase is the issues database. This database stores any open concerns and pending issues that relate to the implementation. Centrally storing this information assists in gathering and then managing issues to resolution, so that important matters do not fall through the cracks. You can then track the issues in the database, assign them to team members, and update the database accordingly.
19) How do we gather the requirements for an implementation project?
One of the biggest and most important challenges in any implementation is gathering and understanding the end user and process team functional requirements. These functional requirements represent the scope of analysis needs and expectations (both now and in the future) of the end user. They typically involve all of the following:
- Business reasons for the project and business questions answered by the implementation
- Critical success factors for the implementation
- Source systems that are involved and the scope of information needed from each
- Intended audience and stakeholders and their analysis needs
- Any major transformation that is needed in order to provide the information
- Security requirements to prevent unauthorized use
This process involves one seemingly simple task: find out exactly what the end users' analysis requirements are, both now and in the future, and build the BW system to these requirements. Although simple in concept, in practice gathering and reaching a clear understanding and agreement on a complete set of BW functional requirements is not always so simple.

20) How do we decide what cubes have to be created?
It depends on your project requirements; customized cubes are not mandatory for all projects. Only if your business requirement differs from the given scenario (the BI Content cubes) do we opt for customized cubes. Normally, BW customization and the creation of new InfoProviders depend on your source system. If your source system is something other than R/3, you will have to customize all of your objects. If your source system is R/3 and your users use only standard R/3 business scenarios such as SD, MM or FI, then you do not need to create any InfoProviders or enhance anything in the existing BW Business Content. But 99% of the time this is not the case, because clients usually have their own new business scenarios or enhancements. For example, in my first project we implemented BW for Solution Manager, and we activated all the Business Content in CRM, but the source system had new scenarios for message escalation, ageing calculation and so on. For their business scenario we could not use the standard Business Content, so we took the existing InfoObjects as a basis, created new InfoObjects that were not in the Business Content, and then created custom DataSources, InfoProviders and reports.

21) Who makes the technical and functional specifications?
Technical specification: here we mention all the BW objects (InfoObjects, DataSources, InfoSources and InfoProviders), then describe the data flow and the behavior of the data load (either delta or full), and state the duration of cube activation or creation. Purely technical BW details go into this document; it is not an end user document.
Functional specification: here we describe the business requirements. That means we state which business areas we are implementing, such as SD, MM and FI, and then give the KPIs and the deliverable report details to the users. This document is shared between the functional consultants and the business users, and it is applicable for end users as well.
22) Give me one example of a functional specification and explain what information we get from it?
Functional specs are the requirements of the business user; technical specs translate these requirements into a technical form. Let's say the functional spec says:
1. The user should be able to enter the key date, fiscal year and fiscal version.
2. The company variable should default to USA, but if the user wants to change it, they can check the drop-down list and choose other countries.
3. The calculations or formulas for the report will be displayed with a precision of one decimal place.
4. The report should return 12 months of data depending on the fiscal year the user enters, or it should display quarterly values.
Functional specs are also called software requirements. From this the technical spec follows, resolving each of the line items listed above:
1. To give the option of key date, fiscal year and fiscal version, certain InfoObjects must be available in the system. If they are available, should we create variables for them so that they can be used as user entry variables? To create any variables: what is the approach, where do you do it, which objects will you use, and what will be the technical names of the objects you create as a result?
2. The same explanation goes for the rest: how do you set up the variables?
3. What changes in properties will you make to get the required precision?
4. How will you get the 12 months of data?
What the technical and display names of the report will be, who will be authorized to run it, and so on, are all clearly specified in the technical spec.

23) What is customization? How do we do it in LO?
Basic LO extraction for SAP R/3 to BW:
1. Go to transaction RSA3 and see if any data is available for your DataSource. If data is there in RSA3, go to transaction LBWG (delete setup data) and delete the data by entering the application name.
2. Go to transaction SBIW --> Settings for Application-Specific DataSources --> Logistics --> Managing Extract Structures --> Initialization --> Filling in the Setup Table --> Application-Specific Setup of Statistical Data --> perform the setup for the relevant application.
3. In OLI*** (for example OLI7BW for the statistical setup of old order documents), give the name of the run and execute. Now all the available records from R/3 are loaded into the setup tables.
4. Go to transaction RSA3 and check the data.
5. Go to transaction LBWE and make sure the update mode for the corresponding DataSource is the serialized V3 update.
6. Go to the BW system, create an InfoPackage and, on the Update tab, select "Initialize delta process". Schedule the package. All the data available in the setup tables is now loaded into the data target.
7. For the delta records, go to LBWE in R/3 and change the update mode for the corresponding DataSource to direct/queued delta. Records will then bypass SM13 and go directly to RSA7. In transaction RSA7 you can see a green light, and as soon as new records are posted you can see them in RSA7.

24) When we use "Maintain DataSource", what do we do? What do we maintain?
Go to the BW system and create a new InfoPackage for delta loads. Double-click the new InfoPackage; on the Update tab you can see the delta update radio button.

25) Tickets and authorizations in SAP Business Warehouse: what are tickets? Give an example.
Tickets are the tracking tool by which the users track the work that we do. A ticket can be a change request, a data load issue or whatever. They are typically classified as critical or moderate; a critical ticket may need to be solved within a day or half a day, depending on the client. After solving it, the ticket is closed by informing the client that the issue is resolved. Tickets are raised during a support project and may concern any issues or problems. If a support person faces an issue, he or she asks the operator to raise a ticket; the operator raises it and assigns it to the respective person. "Critical" means the most complicated issues; how this is measured depends on the contract. The concept of a ticket varies from contract to contract between companies. Generally, tickets raised by the client are handled based on priority, such as high priority, low priority and so on. A high-priority ticket has to be resolved as soon as possible; a low-priority ticket is considered only after attending to the high-priority tickets.
Typical tickets in production support work could be:
1. Loading any missing master data attributes/texts - done by scheduling the InfoPackages for the attributes/texts mentioned by the client.
2. Creating ad-hoc hierarchies - create hierarchies in RSA1 for the InfoObject.
3. Validating the data in cubes/ODS - by using validation reports or by comparing BW data with R/3.
4. If any of the loads runs into errors, resolving it - analyze the error and take suitable action.
5. Adding/removing fields in any of the master data/ODS/cubes - depends on the requirement.
6. DataSource enhancement.
7. Creating ad-hoc reports - create new reports based on the requirements of the client.

26) Attribute change run?
Generally, the attribute change run is used when there is a change in the master data; it is used for realignment of the master data. The attribute change run is nothing but adjusting the master data after it has been loaded, from time to time, so that the SIDs can be changed, generated or adjusted and you do not run into problems when loading transaction data into the data targets. In detail: the hierarchy/attribute change run, which activates hierarchy and attribute changes and adjusts the corresponding aggregates, is divided into four phases:
1. Finding all affected aggregates.
2. Rebuilding all affected aggregates and writing the result into the new aggregate tables.
3. Activating the attributes and hierarchies.
4. Renaming the new aggregate tables. While renaming, it is not possible to execute queries. On some databases, which cannot rename the indexes, the indexes are also created in this phase.
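A toy Python model of why the change run must rebuild aggregates (names and data invented for illustration; not SAP code): an aggregate grouped by a navigational attribute goes stale the moment that attribute changes.

    # Toy model of the attribute change run.
    from collections import defaultdict

    material_group = {"M1": "FOOD", "M2": "FOOD"}       # master data attribute
    fact_rows = [("M1", 100), ("M2", 50)]               # transaction data

    def build_aggregate():
        """Aggregate sales by the navigational attribute 'material group'."""
        agg = defaultdict(int)
        for material, amount in fact_rows:
            agg[material_group[material]] += amount
        return dict(agg)

    aggregate = build_aggregate()        # {'FOOD': 150}

    # New master data is loaded: M2 moves to another group. Until a change run
    # activates the change and rebuilds the aggregate, queries on the aggregate
    # would still report the old grouping.
    material_group["M2"] = "DRINKS"
    aggregate = build_aggregate()        # after the change run: {'FOOD': 100, 'DRINKS': 50}
    print(aggregate)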
27) What are the different types of delta updates?
Delta loads bring only the new or changed records since the last upload; this method gives better loading in less time. Most of the standard SAP DataSources come delta-enabled, but some are not; in that case you can do a full load to the ODS and then a delta from the ODS to the cube. If you create generic DataSources, you have the option of creating a delta on a calendar day, timestamp or numeric pointer field (this can be a document number, etc.). You can see the delta changes coming into the delta queue through RSA7 on the R/3 side. To do a delta, you first have to initialize the delta on the BW side and then set up the delta. The delta mechanism is the same for both master data and transaction data loads.
There are three delta modes:
Direct delta: with this update mode, the extraction data is transferred with each document posting directly into the BW delta queue. Each document posting with delta extraction is posted as exactly one LUW in the respective BW delta queue.
Queued delta: with this update mode, the extraction data for the affected application is collected in an extraction queue, and can be transferred, as usual with the V3 update, by means of an updating collective run into the BW delta queue. Up to 10,000 delta extractions of documents are compressed into one LUW in the BW delta queue per DataSource, depending on the application.
Non-serialized V3 update: with this update mode, the extraction data for the application is written as before into the update tables with the help of a V3 update module, and is kept there until it is read and processed by an updating collective run. However, in contrast to the default serialized V3 update, the data in the updating collective run is read from the update tables without regard to sequence and is then transferred to the BW delta queue.
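The essential difference between direct and queued delta is when records reach the delta queue; here is a compact Python sketch of that timing (purely illustrative, not SAP code):

    # Illustrative timing of direct vs. queued delta.
    delta_queue = []        # RSA7
    extraction_queue = []   # staging queue, used only by queued delta

    def post_direct(doc):
        """Direct delta: each posting goes straight to the delta queue (one LUW each)."""
        delta_queue.append([doc])

    def post_queued(doc):
        """Queued delta: postings wait in the extraction queue."""
        extraction_queue.append(doc)

    def collective_run(batch_size=10000):
        """V3 collective run: moves queued postings into the delta queue in bulk LUWs."""
        global extraction_queue
        while extraction_queue:
            delta_queue.append(extraction_queue[:batch_size])  # one LUW per batch
            extraction_queue = extraction_queue[batch_size:]

    post_direct("doc 1")       # arrives in the delta queue immediately
    post_queued("doc 2")
    post_queued("doc 3")       # still in the extraction queue ...
    collective_run()           # ... until the collective run transfers them
    print(delta_queue)         # [['doc 1'], ['doc 2', 'doc 3']]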
28) Function modules UNIT_CONVERSION_SIMPLE and MD_CONVERT_MATERIAL_UNIT: explain how to use them, if possible with a well-explained example.
The conversion of units of measure is required to convert business measurements into other units. Business measurements encompass physical measurements, which are either assigned to a dimension or are non-dimensional; non-dimensional measurements are understood as countable measurements (pallet, unit, ...). You differentiate between conversions for which you only need to enter a source and a target unit in order to perform the conversion, and conversions for which specifying these values alone is not sufficient. For the latter, you have to enter a conversion factor which is derived from a characteristic or a characteristic combination (compound characteristic) and the corresponding properties.
1. Measurements of length: conversions within the same dimension ID (T006-DIMID), for example length: 1 m = 100 cm (linear correlation). Meter and centimeter both belong to dimension ID LENGTH.
2. Measurements of number associated with measurements of weight: conversions involving different dimension IDs, for example number and weight: 1 unit = 25 g (linear correlation). "Unit" has dimension ID AAAADL and "gram" has dimension ID MASS. Example:
1 chocolate bar = 25 g
1 small carton = 12 chocolate bars
1 large carton = 20 small cartons
1 euro pallet = 40 large cartons
Quantity conversion: quantity conversion allows you to convert key figures with units that have different units of measure in the source system into a uniform unit of measure in the BI system. This function enables the conversion of updated data records from the source unit of measure into a target unit of measure, or into different target units of measure if the conversion is repeated. In terms of functionality, quantity conversion is structured similarly to currency translation, and in part it is based on the quantity conversion functionality of SAP NetWeaver Application Server. Simple conversions can be performed between units of measure that belong to the same dimension (such as meters to kilometers, kilograms to grams). You can also perform InfoObject-specific conversions (for example, two pallets (PAL) of material 4711 were ordered, and this order quantity has to be converted into the stock quantity "carton" (CAR)).
Quantity conversion is based on quantity conversion types. The business rules of the conversion are established in the quantity conversion type; the conversion type is a combination of different parameters (conversion factors, source and target units of measure) that determine how the conversion is performed. For more information, see "Quantity Conversion Types" below.
Integration: the quantity conversion type is stored for future use and is available for quantity conversions in the transformation rules for InfoCubes and in the Business Explorer. In the transformation rules for InfoCubes you can specify, for each key figure or data field, whether quantity conversion is performed during the update; in certain cases you can also run quantity conversion in user-defined routines in the transformation rules. In the Business Explorer you can establish a quantity conversion in the query definition, or translate quantities at query runtime (translation is more limited there than in the query definition).
Quantity conversion types: a quantity conversion type is a combination of different parameters that establish how the conversion is performed. The parameters that determine the conversion factors are the source and target units of measure and the option you choose for determining the conversion factors. The decisive factor in defining a conversion type is the way in which you want the conversion factors to be determined; entering source and target quantities is optional.
Conversion factors - the following options are available:
- Using a reference InfoObject: the system tries to determine the conversion factors from the reference InfoObject you have chosen, or from the associated quantity DataStore object. If you want to convert 1000 grams into kilograms but the conversion factors are not defined in the quantity DataStore object, the system cannot perform the conversion, even though this is a very simple conversion.
- Using central units of measure (T006): conversion can only take place if the source unit of measure and the target unit of measure belong to the same dimension (for example, meters to kilometers, kilograms to grams, and so on).
- Using the reference InfoObject if available, central units of measure (T006) if not: the system tries to determine the conversion factors using the quantity DataStore object you have defined. If the system finds conversion factors, it uses these to perform the calculation; if not, it tries again using the central units of measure.
- Using central units of measure (T006) if available, the reference InfoObject if not: the system tries to find the conversion factors in the central units of measure table. If the system finds conversion factors, it uses these to perform the conversion; if not, it tries to find conversion factors that match the attributes of the data record by looking in the quantity DataStore object.
The settings that you make in this regard affect performance, and the decision must be strictly based on the data set. If you only want to perform conversions within the same dimension, option 2 is most suitable. If you are performing InfoObject-specific conversions (for example, material-specific conversions) between units that do not belong to the same dimension, option 1 is most suitable. In both cases, the system only accesses one database table, the one containing the conversion factors. With options 3 and 4, the system tries to determine the conversion factors at each stage: if the conversion factors are not found in the basic table (T006), the system searches again in the quantity DataStore object, or vice versa. The option you choose should depend on how the conversions are distributed: if the source and target units of measure belong to the same dimension for 80% of the data records that you want to convert, first try to determine the factors using the central units of measure (option 4), and accept that the system will have to search the second table for the remaining 20%.
The "conversion factor from InfoObject" option (as with "exchange rate from InfoObject" in currency translation types) is only available when you load data: the key figure you enter here has to exist in the InfoProvider, and the value this key figure has in the data record is taken as the conversion factor.
Source unit of measure: the source unit of measure is the unit of measure that you want to convert. It is determined dynamically from the data record or from a specified InfoObject (characteristic); in addition, you can specify a fixed source unit of measure or determine the source unit of measure using a variable. When converting quantities in the Business Explorer, the source unit of measure is always determined from the data record. During the data load process, the source unit of measure can be determined either from the data record or using a specified characteristic that bears master data. You can use a fixed source unit of measure in planning functions; data records are converted that have the same unit key as the source unit of measure. The values in the input help correspond to the values in table T006 (units of measure); you reach the maintenance for units of measure in SAP Customizing Implementation Guide -> SAP NetWeaver -> General Settings -> Check Units of Measure. In reporting, you can use a source unit of measure from a variable; the variables that have been defined for InfoObject 0UNIT are used.
Target unit of measure - you have the following options for determining it:
- You can enter a fixed target unit of measure in the quantity conversion type (for example, 'UNIT').
- You can specify an InfoObject in the quantity conversion type that is used to determine the target unit of measure during the conversion. This is not the same as defining currency attributes, where you determine a currency attribute on the Business Explorer tab page in characteristic maintenance; with quantity conversion types you determine the InfoObject in the conversion type itself. Under "InfoObject for Determining Unit of Measure", all InfoObjects are listed that have at least one attribute of type unit; you have to select one of these attributes as the corresponding quantity attribute.
- Alternatively, you can have the target unit of measure determined during the conversion: in the Query Designer, under the properties of the relevant key figure, you specify either a fixed target unit of measure or a variable to determine it.
- Target quantity using an InfoSet: this setting covers the same functionality as "InfoObject for Determining Target Quantity". If the InfoObject that you want to use to determine the target quantity is unique in the InfoSet (it only occurs once in the whole InfoSet), you can enter the InfoObject under "InfoObject for Determining Target Quantity"; you only have to enter it under "Target Quantity Using InfoSet" if the InfoObject occurs more than once in the InfoSet. Say the InfoSet contains InfoProviders A and B, and both A and B contain InfoObject X with a quantity attribute: in this case you have to specify exactly whether you want to use X from A or X from B to determine the target quantity. Field aliases are used in an InfoSet to ensure uniqueness. All the active InfoSets in the system can be displayed using the input help; once you have selected an InfoSet, you can select an InfoObject, and all the InfoObjects with quantity attributes contained in the InfoSet can be displayed using the input help.
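The four factor-lookup options above reduce to a simple ordering of two lookup tables. Here is a schematic Python sketch of that logic (table contents and names invented; not the SAP implementation):

    # Schematic sketch of the four conversion-factor lookup options.
    central_units = {("KG", "G"): 1000.0}                      # same-dimension factors (T006)
    reference_infoobject = {("MAT4711", "PAL", "CAR"): 24.0}   # material-specific factors

    def factor(material, src, tgt, option):
        ref = reference_infoobject.get((material, src, tgt))
        central = central_units.get((src, tgt))
        if option == 1:                  # reference InfoObject only
            return ref
        if option == 2:                  # central units of measure only
            return central
        if option == 3:                  # reference first, then central
            return ref if ref is not None else central
        if option == 4:                  # central first, then reference
            return central if central is not None else ref

    print(factor("MAT4711", "KG", "G", 2))     # 1000.0
    print(factor("MAT4711", "PAL", "CAR", 3))  # 24.0
    print(factor("MAT4711", "KG", "G", 1))     # None: factor not maintained in the reference object

The last line mirrors the caveat in the text: under option 1, even a trivial grams-to-kilograms conversion fails if the factor is not maintained in the reference object.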
29) What is an SAP BW functional consultant responsible for?
Key responsibilities include:
- Maintain project plans
- Manage all project activities, many of which are executed by resources not directly managed by the project leader (central BW development team, source system developers, business key users)
- Liaise with key users to agree on reporting requirements and report designs
- Translate requirements into design specifications (report specs, data mapping/translation, functional specs)
- Write and execute test plans and scripts
- Coordinate and manage business/user testing
- Deliver training to key users
- Coordinate and manage productionization and rollout activities
- Track CIP (continuous improvement) requests; work with users to prioritize, plan and manage CIP
An SAP BW technical consultant is responsible for:
- SAP BW extraction using the standard data extractors and the available development tools for SAP and non-SAP data sources
- SAP ABAP programming with BW
- Data modeling: star schema, master data, ODS and cube design in BW
- Data loading processes and procedures (performance tuning)
- Query and report development using the BEx Analyzer and Query Designer
- Web report development using the Web Application Designer

29) Production support?
In production support there are two kinds of jobs you will mostly be doing: 1) looking into data load errors, and 2) solving the tickets raised by the users. Data loading involves monitoring process chains and solving errors related to data loads; apart from this you will also do some enhancements to the existing cubes and master data, but only on request. A user raises a ticket when they face a problem with a query, such as the report showing wrong or incorrect data, the system response being slow, or the query runtime being high.
Normally, the production support activities include:
- Scheduling
- R/3 job monitoring
- BW job monitoring
- Taking corrective action for failed data loads
- Working on tickets with small changes in reports or in AWB objects
The activities in a typical production support engagement are:
1. Data loading - could be via process chains or manual loads.
2. Resolving urgent user issues - helpline activities.
3. Modifying BW reports as per the needs of the users.
4. Creating aggregates in the production system.
5. Regression testing when a version/patch upgrade is done.
6. Creating ad-hoc hierarchies.
The daily activities in production:
1. Monitoring data load failures through RSMO.
2. Monitoring process chains daily/weekly/monthly.
3. Performing the hierarchy/attribute change run.
4. Checking aggregate rollups.

30) How do you convert a BEx query global structure to a local structure (steps involved)?
Use a local structure when you want to add structure elements that are unique to the specific query; changing a global structure changes the structure for all the queries that use it. That is the reason to go for a local structure. The navigation: in the BEx Analyzer, from the SAP Business Explorer toolbar, choose the open query icon (the icon that looks like a folder). On the SAP BEx Open dialog box choose Queries, select the desired InfoCube and choose New. On the "Define the query" screen, expand the Structure node in the left frame, then drag and drop the desired structure into either the Rows or Columns frame. Select the global structure, right-click and choose "Remove reference"; a local structure is created.
Remember that you cannot revert the changes made to the global structure this way; you will have to delete the local structure and then drag and drop the global structure into the query definition again. (When you try to save a global structure, a dialog box prompts you to confirm the changes to all queries; that is how you identify a global structure.)

31) What is the use of "Define cell" in BEx, and where is it useful?
Cells in BEx: when you define selection criteria and formulas for structural components, and there are two structural components in a query, generic cell definitions are created at the intersections of the structural components; these determine the values to be presented in the cells. Cell-specific definitions allow you to define explicit formulas and selection conditions for cells, alongside the implicit cell definitions, and in this way to override the implicitly created cell values. This function allows you to design much more detailed queries. In addition, you can define cells that have no direct relationship to the structural components; these cells are not displayed and serve as containers for helper selections or helper formulas.
You need two structures to enable the cell editor in BEx: in every query you have one structure for the key figures, and you have to build another structure with selections or formulas inside. With two structures, their cross product results in a fixed reporting area of n rows * m columns, and the intersection of any row with any column can be defined as a formula in the cell editor. This is useful when you want a particular cell to behave differently from the general behavior described in your query definition.
For example, suppose % is a formula kfB/kfA * 100:
       kfA   kfB   %
  chA    6     4   66%
  chB   10     2   20%
  chC    8     4   50%
Now suppose you want the % for row chC to be the sum of the % for chA and the % for chB. In the cell editor you can write a formula specifically for that cell as the sum of the two cells above it, chC/% = chA/% + chB/%, giving:
       kfA   kfB   %
  chA    6     4   66%
  chB   10     2   20%
  chC    8     4   86%
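The same override, as a tiny Python sketch (row and column names taken from the example above; purely illustrative, not BEx code):

    # Illustrative sketch of a cell-editor override.
    rows = {
        "chA": {"kfA": 6,  "kfB": 4},
        "chB": {"kfA": 10, "kfB": 2},
        "chC": {"kfA": 8,  "kfB": 4},
    }

    # Implicit definition: every cell of column '%' uses the generic formula.
    pct = {ch: int(v["kfB"] / v["kfA"] * 100) for ch, v in rows.items()}

    # Explicit cell definition: override the single cell chC/% only.
    pct["chC"] = pct["chA"] + pct["chB"]

    print(pct)   # {'chA': 66, 'chB': 20, 'chC': 86}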
Manager Round Review Questions

32) What is the SAP GUI and what is it used for?
SAP Graphical User Interface: the SAP GUI is the client in SAP R/3's three-tier architecture of database, application server and client. It is software that runs on a Microsoft Windows, Apple Macintosh or Unix desktop, and allows a user to access SAP functionality in SAP applications such as mySAP ERP and SAP Business Information Warehouse (now called SAP Business Intelligence). You need the SAP GUI to log on to and use the SAP systems. See also:
http://help.sap.com/saphelp_nw70/helpdata/en/4f/472e42e1ef5633e10000000a155106/frameset.htm

33) What is the RMS application?
SAP Records Management is a component of the SAP Web Application Server for the electronic management of records; even paper-based information can be part of an electronic record in the SAP RMS. Advantages of SAP Records Management compared to other providers of record-based solutions: Records Management is a solution for the electronic management of records. The RMS divides the various business units logically, thereby making it possible to give particular groups of users access to particular records, as needed within their business processes. Quick access to information is a key factor for performing business successfully, and Records Management guarantees this quick access: in one record, all the information objects of a business transaction are grouped together in a transparent hierarchical structure. By converting paper records to electronic records, an organization can enjoy all the advantages of a paper-free office: no storage costs for records, no cost-intensive copying procedures, and optimal retrieval of information. However, SAP Records Management provides more than just an electronic representation of the conventional paper record.

34) Bug resolution for the RMS application?
http://rmsitservices.co.uk/upgrade.pdf

35) Development tasks for RMS release work?
The main task is the complete life-cycle development of SAP authorization roles. This includes participating in the high-level design, the low-level design, the RMSs and the technical development of the roles.

36) What is BP master data?
BP master data is Business Partner data used in CRM. The basic table is BUT000. Steps to view this table: go to transaction SE16, specify the table you want to view (in this case BUT000), click the "table contents" icon (or press Enter), and you can find the entries by entering a selection or by viewing the total number of entries. You cannot set an automatic code for BPs; however, you could use a formatted search to bring up the next code, provided that the code you are using has a logical sequence. You can assign this formatted search to the BP Code field, and the user can then trigger it (Shift+F2) when creating a new BP. If you want a separate range for each BP type, the user needs to set the BP type field before using the formatted search. I have also included this kind of function in an add-on: in that case the query is still the same, but the user leaves the BP Code field blank and the add-on populates it when the user clicks the Add button.
Process flow:
1. Configure application components in SAP Solution Manager. In the Business Blueprint, transactions can already be assigned to process steps from the reference model. You can also assign transactions to any additional processes and steps you have defined, and thereby specify how your business processes are to run in the SAP system. Furthermore, you can also edit the Implementation Guide.
2. Use metadata (PI). You specify the necessary metadata for your integration requirements, such as data types, message interfaces, mappings, and so on.
3. Configure integration scenarios and integration processes (PI). You adapt the defined integration scenarios and integration processes to your specific system landscape. In doing so you specify, for example, collaboration profiles (communication party, service and communication channel). You can use wizards for the configuration.
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/a8ffd911-0b01-0010-679e-d47dade98cdd
Tools used for business processes: 1. BPM, 2. ARIS, etc.
Business Process Management with SAP NetWeaver and ARIS for SAP NetWeaver provides procedure models, methods, technologies and reference content for modeling, configuring, executing and monitoring these business processes.
Process modeling: a process model is an abstraction of a process and describes all the aspects of the process:
- Activities: the steps that are executed within the process
- Roles: the users or systems that execute the activities
- Artifacts: the objects, such as business documents, that are processed by the process
Processes within a company can be modeled at multiple abstraction levels and from numerous different viewpoints. To implement and utilize innovative processes and strategies successfully, you must convert business process views into technical views and relate the two. Typically, different individuals or departments within a company are responsible for modeling processes from the business and technical perspectives; a deciding factor for the success of business process modeling is therefore that all those involved have a common understanding of the business processes and "speak the same language". Business Process Management in SAP NetWeaver provides a common methodology for all levels of process modeling. This common methodology forms a common reference framework for all project participants and links models at multiple abstraction levels:
- Business process models describe the process map and process architecture of a company, from value chain diagrams and event-driven process chains right up to end-to-end processes.
- Process configuration models support the process-driven configuration and implementation of processes.
- Process execution models support service-based process execution.

37) Describe the BP master data and authorization objects?
Authorization objects: fundamental to SAP R/3 security is the authorization concept; to get an understanding of SAP R/3 security, one needs to thoroughly understand it. The authorization concept allows the assignment of broad or finely defined authorizations/permissions for system access. Several authorizations may be required to perform a task such as creating a material master record. Based on the design, these authorizations can be limited to:
- Access to the transaction code (TCODE) to create a material master
- Access to a specific material
- Authorization to work in a particular plant in the system
Authorization object: authorization objects can best be described as locks that limit access to SAP R/3 system objects, such as programs, transaction codes and data entry screens. Depending on the SAP R/3 version, there are approximately 800 standard authorization objects. There can be 10 fields in an authorization object, but not all 10 fields are used in all objects. The most common field in an authorization object is the activity field; the predefined activity codes reside in a table named TACT. Examples of activities are "01" create or generate, "02" change, "03" read, "04" print or edit message, and "06" delete. The next most common field is an organizational field, such as company code or plant. Authorization objects are classified and cataloged in the system based on functionality, such as FI (financial accounting) or HR (human resources); these classifications are called object classes. Developers and programmers can create new authorization objects through the developers' workbench, the ABAP Workbench, in SAP R/3. ABAP/4 is a 4GL (fourth-generation programming language) that was used to develop all SAP R/3 applications; it stands for Advanced Business Application Programming language.
Authorizations: authorizations are the keys that open the authorization objects, and they contain the specific field values. An authorization contains a specific set of values for one or all of the fields of a particular authorization object; if a field is not restricted, the authorization has an asterisk (*) as the field value. (You can also check table AGR_TCODES.) An example of an authorization is as follows:
  Field                  Value
  ACTVT (Activity)       01
  BUKRS (Company Code)   0010
This particular authorization grants users access to create, for company code 0010, the specific object that is locked by the authorization object, such as a purchase order. The following authorization grants total access to all activities for all company codes:
  Field                  Value
  ACTVT (Activity)       *
  BUKRS (Company Code)   *
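The wildcard semantics of those field values can be captured in a few lines of Python (field names taken from the example above; the check logic is illustrative, not the SAP kernel's implementation):

    # Illustrative authorization check with '*' wildcard semantics.
    authorization = {"ACTVT": "*", "BUKRS": "0010"}

    def authority_check(auth, **requested):
        """True if every requested field value is allowed by the authorization."""
        return all(auth.get(field) in ("*", value)
                   for field, value in requested.items())

    print(authority_check(authorization, ACTVT="01", BUKRS="0010"))  # True
    print(authority_check(authorization, ACTVT="02", BUKRS="0020"))  # False: wrong company code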
40) What is 0RECORDMODE?
It is an InfoObject; 0RECORDMODE is used to identify the delta images in BW and is used in the DSO. It is activated automatically when you activate a DSO in BW. Similarly, R/3 has the field 0CANCEL, which holds the delta images in R/3: whenever you extract data from R/3 using LO, generic extraction, etc., this field is mapped to 0RECORDMODE in BW, and that is how BW identifies the delta images.

41) What is the difference between a filter and restricted key figures? Examples and steps in BI?
A filter restriction applies to the entire query; an RKF is a restriction applied to one key figure. Suppose, for example, you want to analyze data only after 2006, showing sales in 2007 and 2008 against materials, and you have a key figure called Sales in your cube. You put a global restriction at query level by adding FISCYEAR > 2006 to the filter; this makes only data with a fiscal year after 2006 available for the query to process or show. Now, to meet a requirement like the following:
  Material   Sales in 2007   Sales in 2008
  M1         200             300
  M2         400             700
you need to create two RKFs: "Sales in 2007" is an RKF defined on the key figure Sales restricted by FISCYEAR = 2007, and similarly "Sales in 2008" is an RKF defined on Sales restricted by FISCYEAR = 2008. That is the difference: the filter restricts at query level (in the case above, the filter FISCYEAR > 2006 makes the cube's data for the years up to and including 2006 unavailable to the query, so the query is left to show only data from 2007 and 2008), and within that data you design your RKFs to show only 2007, only 2008, and so on.
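A rough Python analogy for that difference (data invented): the filter shrinks the data set once, while each RKF restricts only its own column.

    # Rough analogy: global filter vs. restricted key figures.
    rows = [
        {"material": "M1", "fiscyear": 2006, "sales": 150},
        {"material": "M1", "fiscyear": 2007, "sales": 200},
        {"material": "M1", "fiscyear": 2008, "sales": 300},
        {"material": "M2", "fiscyear": 2007, "sales": 400},
        {"material": "M2", "fiscyear": 2008, "sales": 700},
    ]

    # Global filter: applies to the whole query, so the 2006 data is gone entirely.
    visible = [r for r in rows if r["fiscyear"] > 2006]

    # RKFs: each restricts the Sales key figure for its own column only.
    def rkf(material, year):
        return sum(r["sales"] for r in visible
                   if r["material"] == material and r["fiscyear"] == year)

    for m in ("M1", "M2"):
        print(m, rkf(m, 2007), rkf(m, 2008))
    # M1 200 300
    # M2 400 700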
A: Project plan, Business Blue print, Realization, Final preparation & Go-Live - support. 1. Project Preparation: In this phase, decision makers define clear project objectives and an efficient decision making process (i.e. Discussions with the client, like what are his needs and requirements etc.). Project managers will be involved in this phase (I guess). A Project Charter is issued and an implementation strategy is outlined in this phase. 2. Business Blueprint: It is a detailed documentation of your company's requirements. (i.e. what are the objects we need to develop are modified depending on the client's requirements). 3. Realization: In this only, the implementation of the project takes place (development of objects etc) and we are involved in the project from here only.
4. Final Preparation: Final preparation before going live, i.e. testing, conducting pre-go-live checks, end-user training etc. End-user training is given at the client site, where you train the users how to work with the new environment, as they are new to the technology. 5. Go-Live & Support: The project has gone live and is in production; the project team supports the end users. Q) What is the landscape of R/3 & what is the landscape of BW? A) Landscape of BW: you have the development system, testing system and production system. Development system: all the implementation work is done in this system (i.e. analysis, development and modification of objects), and from here the objects are transported to the testing system; but before transporting, an initial test known as unit testing (testing of objects) is done in the development system. Testing/Quality system: the quality check and integration testing are done in this system.
Production system: all the extraction takes place in this system. Q) Difference between an InfoCube and an ODS?
A: An InfoCube is structured as a (extended) star schema where a fact table is surrounded by different dimension tables that are linked with DIM IDs; data-wise, you will have aggregated data in the cubes, with no overwrite functionality. An ODS is a flat structure (flat table) with no star schema concept, which holds granular data (detailed level) and has overwrite functionality. Also check the following link: http://sapbibobj.blogspot.com/2010/09/differeces-between-dso-and-infocube.html Q) What is ODS? http://sapbibobj.blogspot.com/2010/10/data-store-objects.html Q) What is an InfoSet?
A) An InfoSet is a special view of a dataset, such as a logical database, table join, table or sequential file, and is used by SAP Query as a data source. InfoSets determine the tables, or the fields within those tables, that can be referenced by a report. In most cases, InfoSets are based on logical databases. SAP Query includes a component for maintaining InfoSets. When you create an InfoSet, a DataSource in an application system is selected. In BW you can navigate to an InfoSet Query that uses one or more ODS objects or InfoObjects. You can also drill through to BEx queries and InfoSet Queries from a second BW system that is connected as a data mart. The InfoSet Query functions allow you to report on flat data tables (master data reporting). Choose InfoObjects or ODS objects as data sources; these can be connected using joins. You define the data sources in an InfoSet. An InfoSet can contain data from one or more tables that are connected to one another by key fields. The data sources specified in the InfoSet form the basis of the InfoSet Query. Q) What does an InfoCube contain? A) Each InfoCube has one fact table & a maximum of 16 dimensions (13 user-defined + 3 system-defined: time, unit & data packet).
Q). Differences between STAR Schema & Extended Schema?
A) In a star schema, a fact table is in the center, surrounded by dimension tables, and the dimension tables contain the master data. In the extended schema the dimension tables do not contain master data; instead it is stored in master data tables, divided into attributes, texts & hierarchies. These master data and dimension tables are linked with each other via SID keys. Master data tables are independent of the InfoCube and reusable in other InfoCubes. Q) What does a FACT table contain?
A) A fact table consists of key figures. Each fact table can contain a maximum of 233 key figures. A dimension can contain up to 248 freely available characteristics. Q) How many dimensions are in a cube?
A) 16 dimensions (13 user-defined & 3 system pre-defined [time, unit & data packet]). Q) What does an SID table contain?
SID keys linked with dimension table & master data tables (attributes, texts, hierarchies) Q) What does ATTRIBUTE Table contain?
Master attribute data Q) What does TEXT Table contain?
Master text data, short text, long text, medium text & language key if it is language dependent Q) What does Hierarchy table contain?
Master hierarchy data Q) How would we delete the data in ODS?
A) By request IDs, Selective deletion & change log entry deletion. Q) How would we delete the data in change log table of ODS? A) Context menu of ODS → Manage → Environment → change log entries.
Q) Difference between display attributes and navigational attributes?
A: A display attribute is one which is used only for display purposes in the report, whereas a navigational attribute is used for drilling down in the report. We don't need to maintain a navigational attribute in the cube as a characteristic (that is the advantage) to drill down. Q) What extra fields does the PSA contain? A) (4) Record ID, Data packet …
Q) Partitioning possible for ODS?
A) No, It's possible only for Cube.
Q) Why partitioning?
A) For performance tuning. Q) Different types of Attributes?
A) Navigational attribute, Display attributes, Time dependent attributes, Compounding attributes, Transitive attributes, Currency attributes. Q. CAN U ADD A NEW FIELD AT THE ODS LEVEL?
Sure you can. ODS is nothing but a table. Q) Why we delete the setup tables (LBWG) & fill them (OLI*BW)?
A) Initially we don't delete the setup tables, but when we change the extract structure we go for it. When we change the extract structure, there are some newly added fields in it that were not there before. So to get the required data (i.e. only the data that is required, avoiding redundancy) we delete and then refill the setup tables, to refresh the statistical data. The extraction setup reads the dataset that you want to process (such as customer orders, with tables like VBAK and VBAP) and fills the relevant communication structure with the data. The data is stored in cluster tables, from where it is read when the initialization is run. It is important that during the initialization phase no one creates or modifies application data, at least until the tables are set up. Q) WHAT ARE THE DIFFERENT VARIABLES USED IN BEX?
A) The different variables are: texts, formulas, hierarchies, hierarchy nodes & characteristic values. The variable processing types are: manual entry/default value, replacement path, SAP exit, customer exit, authorization. Q) WHAT ARE INDEXES?
Indexes are database indexes, which help in retrieving data faster. Q) CAN A CHARACTERISTIC INFOOBJECT BE AN INFOPROVIDER?
Of course Q) What types of partitioning are there for BW?
There are two partitioning performance aspects for BW (Cube & PSA): A) Query data retrieval performance improvement: partitioning by (say) date range improves data retrieval by making the best use of the database's [date range] execution plans and indexes (of, say, the Oracle database engine). B) Transactional load partitioning improvement: partitioning based on expected load volumes and data element sizes improves data loading into the PSA and cubes by InfoPackages (e.g. without timeouts). Q) What are Process Chains?
A) The TCode is RSPC. A process chain is a sequence of processes scheduled in the background, waiting to be triggered by a specific event; it is nothing but a grouping of processes. The process variant (start variant) is how the process chain knows where to start. There must be exactly one start variant in each process chain; here we specify when the process chain should start, by giving a date and time, or by starting it immediately. Some of these processes trigger an event of their own that in turn triggers other processes. Ex: Start chain → Delete BCube indexes → Load data from the source system to PSA → Load data from PSA to DataTarget ODS → Load data from ODS to BCube → Create indexes for BCube after loading data → Create database statistics → Roll up data into the aggregate → Restart chain from beginning. Q) What are Process Types & Process Variants?
A) Process types are: general services, load process & subsequent processing, data target administration, reporting agent & other BW services. The process variant (start variant) is how the process type knows when & where to start. Q) Types of Updates?
A) Full Update, Init Delta Update & Delta Update. Q) For what we use HIDE fields, SELECT fields & CANCELLATION fields?
A) Selection fields: when we check this column, the field will appear in the InfoPackage's data selection tab. Hide fields: these fields are not transferred to the BW transfer structure. Cancellation fields: they reverse the posted documents by multiplying the key figure values by -1, nullifying the original value; this is reverse posting.
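As a hedged illustration of the cancellation logic: in a BW 3.x update routine the effect is roughly the sketch below. COMM_STRUCTURE is the generated communication structure; the field names ROCANCEL and AMOUNT are assumptions made up for this example, not taken from a specific extractor.

  " Sketch: if the record is flagged as cancelled, negate the key
  " figure so that it nullifies the original posting.
  IF comm_structure-rocancel = 'X'.
    result = comm_structure-amount * -1.
  ELSE.
    result = comm_structure-amount.
  ENDIF.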
Q) How can I compare data in R/3 with data in a BW cube after the daily delta loads? Are there any standard procedures for checking them or matching the number of records? A) You can go to R/3 TCode RSA3 and run the extractor. It will give you the number of records extracted. Then go to the BW Monitor to check the number of records in the PSA and check whether it is the same, also in the monitor header tab. A) RSA3 is a simple extractor checker program that allows you to rule out extraction problems in R/3. It is simple to use, but it only really tells you whether the extractor works. Since the records that get updated into cube/ODS structures are controlled by update rules, you will not be able to determine what is in the cube compared to what is in the R/3 environment. You will need to compare records on a 1:1 basis against records in R/3 transactions for the functional area in question. I would recommend enlisting the help of the end-user community to assist, since they presumably know the data. To use RSA3, go to it and enter the extractor, e.g. 2LIS_02_HDR. Click execute and you will see the record count; you can also display the data. You are not modifying anything, so what you do in RSA3 has no effect on data quality afterwards. However, it will not tell you how many records should be expected in BW for a given load. You have that information in the monitor RSMO during and after data loads. From RSMO, for a given load, you can determine how many records were passed through the transfer rules from R/3, how many targets were updated, and how many records passed through the update rules. It also gives you error messages from the PSA. Q) X & Y Tables? X-table = a table that links material SIDs with the SIDs of time-independent navigation attributes. Y-table = a table that links material SIDs with the SIDs of time-dependent navigation attributes. There are four types of SID tables:
X: time-independent navigation attribute SID tables; Y: time-dependent navigation attribute SID tables; H: hierarchy SID tables; I: hierarchy structure SID tables. Q) In which SAP BW table can you find the technical name/description and creation date of a particular report (reports created using the BEx Analyzer)?
A) There is no single such table in BW. If you want to know such details while opening a particular query, press the Properties button and you will see all the details you wanted. You will find information about the technical names and descriptions of queries in the following tables: the directory of all reports (table RSRREPDIR) and the directory of the reporting component elements (table RSZELTDIR); for workbooks and their connections to queries, check the where-used list for reports in workbooks (table RSRWORKBOOK) and the titles of Excel workbooks in the InfoCatalog (table RSRWBINDEXT).
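To illustrate these directory tables, here is a minimal hedged sketch that lists query technical names and their internal IDs from RSRREPDIR; the report name is hypothetical, and the field names COMPID (technical name) and COMPUID (internal ID) are assumptions based on common usage.

  REPORT zquery_list.
  " Hedged sketch: list the technical name and internal UID of the
  " first 50 queries from the report directory table RSRREPDIR.
  DATA: ls_rep TYPE rsrrepdir.

  SELECT * FROM rsrrepdir INTO ls_rep UP TO 50 ROWS.
    WRITE: / ls_rep-compid,     " technical name of the query
             ls_rep-compuid.    " internal UID used in RSZ* tables
  ENDSELECT.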
Q) What is a LUW in the delta queue? A) A LUW, from the point of view of the delta queue, can be an individual document, a group of documents from a collective run, or a whole data packet of an application extractor. Q) Why does the number in the 'Total' column on the overview screen of transaction RSA7 differ from the number of data records that is displayed when you call the detail view?
A) The number on the overview screen corresponds to the total of LUWs (see also the first question) that were written to the qRFC queue and have not yet been confirmed. The detail screen displays the records contained in the LUWs. Both the records belonging to the previous delta request and the records that do not meet the selection conditions of the preceding delta init requests are filtered out. Thus, only the records that are ready for the next delta request are displayed on the detail screen. In the detail screen of transaction RSA7, a possibly existing customer exit is not taken into account. Q) Why does transaction RSA7 still display LUWs on the overview screen after successful delta loading?
A) Only when a new delta has been requested does the source system learn that the previous delta was successfully loaded to the BW System. Then, the LUWs of the previous delta may be confirmed (and also deleted). In the meantime, the LUWs must be kept for a possible delta request repetition. In particular, the number on the overview screen does not change when the first delta was loaded to the BW System. Q) Why is there a DataSource with '0' records in RSA7 if delta exists and has also been loaded successfully? It is most likely that this is a DataSource that does not send delta data to the BW System via the delta queue but directly via the extractor (delta for master data using ALE change pointers). Such a DataSource should not be displayed in RSA7. This error is corrected with BW 2.0B Support Package 11. Q) Do the entries in table ROIDOCPRMS have an impact on the performance of the loading procedure from the delta queue?
A) The impact is limited. If performance problems are related to the loading process from the delta queue, then refer to the application-specific notes (for example in the CO-PA area, in the logistics cockpit area and so on). Caution: As of Plug In 2000.2 patch 3 the entries in table ROIDOCPRMS are as effective for the delta queue as for a full update. Please note, however, that LUWs are not split during data loading for consistency reasons. This means that when very large LUWs are written to the DeltaQueue, the actual package size may differ considerably from the MAXSIZE and MAXLINES parameters. Q) What is the purpose of function 'Delete data and meta data in a queue' in RSA7? What exactly is deleted? A) You should act with extreme caution when you use the deletion function in the delta queue. It is comparable to deleting an InitDelta in the BW System and should preferably be executed there. You do not only delete all data of this DataSource for the affected BW System, but also lose the entire information concerning the delta initialization. Then you can only request new deltas after another delta initialization. When you delete the data, the LUWs kept in the qRFC queue for the corresponding target system are confirmed. Physical deletion only takes place in the qRFC outbound queue if there are no more references to the LUWs. The deletion function is for example intended for a case where the BW System, from which the delta initialization was originally executed, no longer exists or can no longer be accessed. Q) What is the relationship between RSA7 and the qRFC monitor (Transaction SMQ1)?
A) The qRFC monitor basically displays the same data as RSA7. The internal queue name must be used for selection on the initial screen of the qRFC monitor. This is made up of the prefix 'BW', the client and the short name of the DataSource. For DataSources whose names are 19 characters long or shorter, the short name corresponds to the name of the DataSource. For DataSources whose names are longer than 19 characters (for delta-capable DataSources, only possible as of PlugIn 2001.1), the short name is assigned in table ROOSSHORTN. In the qRFC monitor you cannot distinguish between repeatable and new LUWs. Moreover, the data of a LUW is displayed there in an unstructured manner. Q) I loaded several delta inits with various selections. For which one is the delta loaded?
A) For delta, all selections made via delta inits are summed up. This means, a delta for the 'total' of all delta initializations is loaded. Q) How many selections for delta inits are possible in the system?
A) With simple selections (intervals without complicated join conditions or single values), you can make up to about 100 delta inits; it should not be more. With complicated selection conditions, it should only be up to 10-20 delta inits. Reason: with many selection conditions joined in a complicated way, too many 'where' lines are generated in the generated ABAP source code, which may exceed the memory limit. Q) I intend to copy the source system, i.e. make a client copy. What will happen with the delta? Should I initialize again after that?
A) Before you copy a source client or source system, make sure that your deltas have been fetched from the delta queue into BW and that no delta is pending. After the client copy, an inconsistency might occur between the BW delta tables and the OLTP delta tables, as described in Note 405943. After the client copy, table ROOSPRMSC will probably be empty in the OLTP, since this table is client-independent. After the system copy, the table will contain the entries with the old logical system name, which are no longer useful for further delta loading from the new logical system. The delta must be initialized in any case, since the delta depends on both the BW system and the source system. Even if no dump 'MESSAGE_TYPE_X' occurs in BW when editing or creating an InfoPackage, you should expect that the delta has to be initialized after the copy. Q) Although the delta request is started after completion of the collective run (V3 update), it does not contain all documents; only another delta request loads the missing documents into BW. What is the cause of this "splitting"?
A) The collective run submits the open V2 documents for processing to the task handler, which processes them in one or several parallel update processes in an asynchronous way. For this reason, plan a sufficiently large "safety time window" between the end of the collective run in the source system and the start of the delta request in BW. An alternative solution where this problem does not occur is described in Note 505700. Q) In SMQ1 (qRFC Monitor) I have status 'NOSEND'. In the table TRFCQOUT, some entries have the status 'READY', others 'RECORDED'. ARFCSSTATE is 'READ'. What do these statuses mean? Which values in the field 'Status' mean what and which values are correct and which are alarming? Are the statuses BW-specific or generally valid in qRFC?
A) Tables TRFCQOUT and ARFCSSTATE: status READ means that the record was read once, either in a delta request or in a repetition of the delta request. However, this does not mean that the record has successfully reached BW yet. The status READY in TRFCQOUT and RECORDED in ARFCSSTATE mean that the record has been written to the delta queue and will be loaded into BW with the next delta request or a repetition of a delta. In any case, only the statuses READ, READY and RECORDED in both tables are considered valid. The status EXECUTED in TRFCQOUT can occur temporarily: it is set before starting a delta extraction for all records with status READ present at that time. The records with status EXECUTED are usually deleted from the queue in packages within a delta request, directly after the status is set, before a new delta is extracted. If you see such records, it means that either a process which confirms and deletes records already loaded into BW is running successfully at the moment, or, if the records remain in the table for a longer period of time with status EXECUTED, that there are likely problems with deleting the records which have already been successfully loaded into BW. In this state, no more deltas are loaded into BW. Every other status is an indicator of an error or an inconsistency. NOSEND in SMQ1 means nothing (see Note 378903). The value 'U' in field 'NOSEND' of table TRFCQOUT is discomforting. Q) How and where can I control whether a repeat delta is requested?
A) Via the status of the last delta in the BW Request Monitor. If the request is RED, the next load will be of type 'Repeat'. If you need to repeat the last load for certain reasons, set the request in the monitor to red manually. For the contents of the repeat, see Question 14. Delta requests set to red despite the data already being updated lead to duplicate records in a subsequent repeat, if they have not been deleted from the data targets concerned beforehand. Q) THERE IS ONE ODS AND 4 INFOCUBES. WE SEND DATA AT A TIME TO ALL CUBES AND ONE CUBE GOT A LOCK ERROR. HOW CAN YOU RECTIFY THE ERROR?
A) Go to TCode SM66, see which process is locked, note its PID, then go to TCode SM12 and unlock it. Such lock errors occur during scheduled loads. Q) In BW we need to write ABAP routines. I wish to know when and what type of ABAP routines we have to write. Also, are these routines written in update rules? I would be glad if this were clarified with real-time scenarios and a few examples.
A) We write our routines in the start routines of the update rules or in the transfer structure (you can choose between writing them in the start routine or directly behind the individual characteristics). In the transfer structure you click on the yellow triangle behind a characteristic and choose "routine". In the update rules you can choose "start routine" or click on the triangle with the green square behind an individual characteristic. Usually we only use a start routine when it does not concern one single characteristic (for example, when you have to read the same table for four characteristics). I hope this helps. We used ABAP routines, for example: to convert values to uppercase (transfer structure); to convert values from a third-party tool with different keys into the same keys as our SAP system uses (transfer structure); to select only a part of the data from an InfoSource when updating the InfoCube (start routine); etc.
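As a hedged sketch of the uppercase example: in BW 3.x the FORM frame of a transfer routine is generated by the system, with TRAN_STRUCTURE and RESULT as the generated parameter names; the source field used here is an assumption.

  " Body of a transfer routine behind a characteristic (3.x style):
  " copy the incoming value and convert it to upper case.
  RESULT = TRAN_STRUCTURE-/bic/zmatname.   " assumed source field
  TRANSLATE RESULT TO UPPER CASE.
  RETURNCODE = 0.                          " 0 = keep the record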
Q) Difference between a Calculated Key Figure & a Formula? A) A calculated key figure is defined globally on the InfoProvider and can be reused in any query on that InfoProvider, whereas a formula is defined locally within a single query. Q) Variables in Reporting?
A) Characteristic values, text, hierarchies, hierarchy nodes & formula elements. Q) Variable processing types in Reporting?
A) Manual, replacement path, SAP exit, authorization, customer exit. Q) Why do we use the RSRP0001 enhancement? A) For enhancing the customer exit in reporting. Q) We need to find the table in which query and variable assignments are stored. We must read this table in a user exit to get which variables are used in a query.
A) Check tables RSZELTDIR and RSZCOMPDIR for BEx query elements. From a previous posting: the variable table is RSZGLOBV; for the query, get the query ID from table RSRREPDIR (field RSRREPDIR-COMPUID) and use this ID in the tables starting with RSZEL*. In a customer exit you branch on the variable name (e.g. VNAM = 'ZFISPER1' or 'ZVC_FY1' for characteristics) and on the processing step: step 1 - before the variable selection screen, step 2 - after selection, step 3 - all variables processed at the same time.
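For reference, customer-exit variables are commonly implemented in function exit EXIT_SAPLRRS0_001 (include ZXRSRU01). A minimal sketch, assuming a hypothetical year variable filled at step 2; the variable name and the value logic are illustrative only.

  " Sketch inside include ZXRSRU01: branch on variable name and step.
  CASE i_vnam.
    WHEN 'ZVC_FY1'.                   " hypothetical variable name
      IF i_step = 2.                  " after the selection screen
        CLEAR e_t_range.
        e_t_range-sign = 'I'.
        e_t_range-opt  = 'EQ'.
        e_t_range-low  = sy-datum(4). " default to the current year
        APPEND e_t_range.
      ENDIF.
  ENDCASE.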
Q) What is an aggregate? A) Aggregates are small or baby cubes, a subset of an InfoCube. Flat aggregate: when an aggregate has more than 15 characteristics, the system generates it as a flat aggregate to increase performance. Roll-up: when data is loaded into the cube a second time, we have to roll up to make that data available in the aggregate. Q) How can we stop loading data into an InfoCube?
A) First find the job name for this load; it is shown in the header tab of the monitor screen. Then in SM37 (job monitoring) select this job and delete it from the menu. There are some options; check them out in SM37. You also have to delete the request in BW. Cancellation is not advisable for a delta load.
Repair bad data and subsequent data targets with delta update (blog post by Aaron Wang, 11/04/2007)
Initialization / delta update is what we do every day and is the update method for most data targets. The delta mechanism ensures the consistency of transactional data, so if some data is wrong at an early stage of the data flow, we have to correct not only the record itself but also the related records in subsequent data targets. Unfortunately, such data can't be avoided in daily extraction. In some situations the data in the ODS is correct but errors are reported while loading from the ODS into cubes. Here is one example: some FI documents were transferred in R/3 with LSMW in background mode, and in one of the line items the 'Negative posting signal' was entered as 'x' while the correct value is 'X'. This line item sits quietly in my ODS Z1CCWO01 but causes errors when loaded into several cubes during the process chain's
run. Request 25344 is bad. It's easy to modify the data in the PSA of the ODS and then reconstruct it, but please don't do that immediately: if we simply delete the delta requests in the cubes afterwards and then reload the delta, it won't meet your expectations, because the delta is broken! If we want to fix the data in both the ODS and the cubes and keep the delta correct, take the following steps: 1. Set the request's QM status in all your data targets (in my case, 4 cubes and 1 ODS) to red and then delete all of them. This will cause a repeat delta load in the following steps.
2. Open table RSDMDELTA via SE16. This stores the successfully data-marted requests. Delete the entry for the request containing the bad data (25344 in my case).
3. Open table RSBODSLOGSTATE and change the fields 'Max. delta slice that was extracted by all receivers' and 'Max. delta slice extracted until now' to the latest correct request number (25344 to 24860 in my case).
4. Since the Data Mart status of the bad request has been cleared, we can now delete the request and modify the data in the PSA. But still set the QM status in the 'Requests' tab to red (not in the Monitor). 5. Modify the data in the PSA.
6. Reconstruct the bad request (25344) in the ODS and activate it.
7. Load the delta package in the Data Mart from the ODS to the subsequent data targets. A warning about the 'repeat delta' will pop up; choose 'Request again'. The above steps ensure a correct delta after the PSA change and reconstruction, for all data targets in the data flow. It's somewhat complicated, but I still haven't found any better solution other than redoing the whole initialization (which may involve several million records and affect all the data targets!). Business Intelligence Interview Questions. I got these queries from someone and I would like to share my views. I don't claim my answers to be correct, so wherever you don't agree, please let me know and share your views. If you have any questions or comments, please post them. Question: How do you generally approach an analytics project? Answer: Any project should start from defining the scope of the project, and the approach should be not to deviate from that scope. Then the project should be functionally divided into smaller modules, generally done by the project managers along with the technical and functional leads.
The functional leads then decide on three main things: 1. According to the defined scope of the project, they start gathering requirements while interacting with the clients. 2. They hold discussions with the technical leads and try to reach a solution. 3. The technical leads decide which schemas to create and which requirements will be fulfilled by each schema. The technical leads discuss all this with the developers and try to close the requirements. Simultaneously, testing and deployment are planned in a phased manner. Question: How do we decide which schema to implement in the data warehouse? Answer: One way is what is mentioned in the question above. If you ask me to blindly create schemas for the warehouse without knowing any requirements, I will simply first divide the schemas on the basis of the functional areas of an organisation, which are similar to the modules in an ERP, like sales, finance, purchase, inventory, production, HR etc. I will broadly describe the expected analysis an organisation would like to do in every module. I think this way you would be able to cover at least 40-50% of the requirements. To move ahead, study the data and the business, and you can create a few more schemas. Question: What challenges have you faced while building reports? Answer: Building a report has never been a difficult task. The problem comes when users are reluctant to adopt a new system. I have experienced that if you are not able to create the report in exactly the way they are used to seeing it, they will keep asking for changes. Your approach should be to first show them what they want to see and then add more information to the report. Question: What will you do when your report is not fetching the right data? Answer: This is the biggest problem in report creation and verification. There could be two reasons for a report not fetching the right data. 1. Mostly, clients do not have correct data in their database, and on top of that, to correct the results they make some changes at the report level to bring about the desired result, which you may not be aware of while creating the reports. Clients try to match the data with their existing reports and you never get the correct results. You try to discover things, come to know of all these problems at a later stage, and are held responsible for the delay. Hence, always consult the SPOC (Single Point of Contact) and try to understand the logic they have used to generate their reports. 2. If the database values are correct, then there could be a problem with the joins and relations in the schema. You need to discover that by analysing and digging deep into the matter.
There are more questions which I will try to answer later. The questions are very specific to OBIEE and I don't have much experience in that, so you may not agree with my answers; wherever that is the case, please post a comment and let me know. Question: How does analytics process your request when you create your requests? Answer: If the question means how the Oracle BI Analytics Server processes user requests, the answer is: the Oracle BI server converts the logical SQL submitted by the client into optimised physical SQL, which is then sent to the backend database. In between, it performs various tasks like forming a logical SQL from the user's operations and selections, checking and verifying credentials, breaking the request into threads (as Oracle BI is a multi-threaded server), processing the requests, managing the cached results, and again
converting the results received from the database into a user-presentable form, etc. Question: From where do you get the logical query of your request? Answer: The logical SQL generated by the server can be viewed in BI Answers. If I have not understood the question, please raise your voice. Question: What major challenges have you faced while creating the RPD? Answer: Every now and then there are problems with the database connections, but the real problems while creating the repository (RPD) files come with complex schemas built on OLTP systems, consisting of lots of joins, and with checking the results. The type of each join needs to be checked: by default it is an inner join, but sometimes the requirement demands other types of joins. There are also lots of problems with date formats.
Question: What are global filters and how do they differ from column filters? Answer: Column filter: simply a filter applied on a column, which we can use to restrict the column values while pulling the data, or in charts to see the related content. Global filter: not sure. I understand this filter has an impact across the application, but I don't really understand where and how it can be used. I have heard of global variables, but not global filters.
Question: How do you make the delivery profilers work? When we use the SA System, how does the SA server understand that it needs to use it for getting the user profile information? Where do you configure the scheduler? Answer: I am not sure if I am correct, but we configure the OBIEE scheduler in the database.
Question: How do you hide certain columns from a user? Answer: Application access-level security: do not add the column to the report, and do not add the column to the presentation layer. Question: How can we enable drills on a given column's data? Answer: To enable drill-down for a column, it should be included in the hierarchy in OBIEE. Hyperion IR has a drill-anywhere feature where you don't have to define the hierarchy and can drill to any available column. Question: Is drill-down possible without the attribute being part of a hierarchical dimension? Answer: No. Question: How do you do conditional formatting? Answer: While creating a chart in BI Answers, you can define the conditions and apply colour formatting. Question: What is guided navigation? Answer: I think it is just the arrangement of hyperlinks to guide the user in navigating between the reports to do the analysis. Question: How is the webcat file deployed across environments?
Question: How do the users created at the RPD level differ from those at the Answers/Dashboards level?
Answer: RPD users can do administrator tasks like adding new data sources, creating hierarchies and changing column names, whereas Answers users may create new charts and edit those charts, and Dashboard users may only view and analyse the dashboard, or edit the dashboard by adding/removing chart objects. Question: Online/offline mode: how does it impact development and deployment? Answer: Online mode: you can make changes in the RPD file and push the changes in, and they are immediately visible to users who are already connected. This feature we may use in a production environment. Offline mode: can be useful in a test or development environment.
Question: Explain the schema in your last project.
Question: What happens if you reconcile/sync both (the RPD and the DB)?
Technical Business Intelligence Questions and Answers. Many questions asked during the interview will test the applicant's knowledge of the field and
their ability to use business intelligence software and other related applications. Q: Could you please explain the concept of business intelligence?
A: Business intelligence is the management and collection of data that is used to help a business in the decision-making process. The gathered data can also be used to predict the outcome of various business operations. There are a few key steps in business intelligence, which include: the gathering of data, analysis of that data, a review of the situation, risk evaluation, and then using all of this information to make the best decision for the business. This data and analysis can be used to make financial and sales decisions, and also to help a company gain an edge over its competitors. Q: What are some of the standard tools used in business intelligence?
A: Some of the standard business intelligence tools are: - BusinessObjects - Crystal Reports - MicroStrategy - Microsoft OLAP - QlikView Note: Make sure that the most frequently used solutions are mentioned, as well as new and successful programs. This will demonstrate your interest in the field and knowledge of trends. Both are very important. Q: Describe what Online Analytical Processing is.
A: Online analytical processing, or OLAP, is a versatile tool that analyzes data stored within a multidimensional database. It allows the user to isolate pieces of information and view them from many different perspectives. For example: Sales of a particular product in April can be compared to the sales of the same product in September. On the other hand, sales of a particular product can also be compared to other products sold in the area. OLAP software programs can also be used for data mining purposes. Q: Please explain what a universe is in business intelligence.
A: A universe is terminology used in the BusinessObjects application. It is actually the semantic layer between the end user and the data warehouse. A universe masks the complex, traditional database structure and replaces it with familiar business terminology. This makes it easier for the end user to understand and use. Q: What is an aggregate table? A: Aggregate tables summarize information gathered from existing warehouse data. An example could be yearly or monthly sales information. These aggregate tables are typically used to reduce query time, as the actual table is likely to have millions of records. Rather than retrieving the information from the actual table, it is taken from the aggregate table, which is much smaller. Retrieving that information directly would take quite a bit of time and would also put a huge strain on the server. Q: Please explain what business intelligence dashboards are.
A: A business intelligence dashboard is, more or less, a reporting tool that tells a business how it is performing at a particular point in time. It consolidates important pieces of information and creates a visual display so that a user can see whether or not the company is in good shape. A dashboard's
interface is usually customizable and can pull real-time data.
Behavioral BI Questions
Aside from technical questions, the applicant will likely be asked about how they perform certain tasks and what they would do in certain situations. These are much like the typical behavioral questions asked during an interview, but are still geared towards the business intelligence field. These can be questions about data, analytics or reporting methods. Below are some potential questions and tips on how to answer them. Q: How much experience do you have with dashboards, reporting tools and scorecards?
A: Be as thorough as possible and completely honest. If you have any experience at all in this field, there is a good chance you are pretty familiar with each of these tools. Tell the employer how long you have been working with these tools and how often you used them (i.e. daily or weekly). Q: What is your method of analyzing data? Please provide some examples.
A: The interviewer is looking to find out how you approach data analysis via examples of what you have done in the past. Try to choose instances where you took a different approach or pinpointed something that was previously overlooked. Q: What is the most important report you have created? Was this report easily understood by others? Were they able to grasp the implications of that data?
A: The employer wants to know if you are capable of turning complicated, complex data into a report that is easily understood by others in the company. You may be able to create compelling reports, but if the person who receives the report cannot comprehend the implications of your data, all of your hard work will mean nothing. Again, be thorough with your answer and give as much detail about the report as possible. Depending on what position you have applied for, the questions may vary. Jot down some questions to ask during an interview before the meeting, as you will have an opportunity to ask some of your own towards the end. Business intelligence interview questions may be a bit more in-depth and technical in nature, but they are important in determining which candidates are truly knowledgeable in the area and able to provide the enterprise with the support it needs. Try not to be intimidated by the wording of the questions and focus on the core of what is being asked. SAP BW/BI Interview Questions Part-1
1. What are the extractor types?
Ans: There are three types of extractors: Application Specific - BW Content (FI, HR, CO, SAP CRM, LO Cockpit); Customer-Generated Extractors (FI-SL, CO-PA); Cross Application - Generic Extractors based on Table, View, InfoSet, Function Module.
2. What are the steps involved in LO extraction?
Ans: The steps are: a) RSA5 - select the DataSources; b) LBWE - maintain DataSources and activate extract structures; c) LBWG - delete setup tables; d) OLI*BW - fill setup tables; e) RSA3 - check the extraction and the data in the setup tables; f) LBWQ - check the extraction queue; g) LBWF - log for LO extract structures; h) RSA7 - BW delta queue monitor.
3. What is the difference between an ODS, an InfoCube and a MultiProvider?
Ans: a) ODS: provides granular data, allows overwrite, stores data in transparent tables and provides operational reports. b) Cube: follows the star schema, we can only append data (additive property) and it provides analytical reports. c) MultiProvider: the logical unification of physical InfoProviders. It holds no physical data. There should be at least one characteristic in common to unify the physical InfoProviders.
4. What are start routines, transfer routines and update routines?
Ans: Start routines: the start routine is run for each data package after the data has been written to the PSA and before the transfer rules have been executed. It allows complex computations for a key figure or a characteristic. It has no return value. Its purpose is to execute preliminary calculations and store them in global data structures. This structure or table can be accessed in the other routines. The entire data package in the transfer structure format is used as a parameter for the routine.
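As a hedged illustration of a BW 3.x start routine body: DATA_PACKAGE is the generated internal table holding the whole packet, while the status field and filter value below are assumptions made up for the example.

  " Start routine sketch: all records of the packet are available in
  " DATA_PACKAGE, so mass operations can be done in one pass.
  DELETE DATA_PACKAGE WHERE /bic/zstatus = 'D'.  " drop unwanted rows

  " Setting ABORT to a non-zero value would cancel this data packet.
  ABORT = 0.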
Transfer / update routines: they are defined at the InfoObject level, like the start routine, and are independent of the DataSource. We can use them to define global data and global checks.
5. What is the table that is used in start routines?
Ans: The table structure is always the structure of the ODS or InfoCube; for example, if it is an ODS, the active table structure is used.
6. Explain how you used start routines in your project.
Ans: Start routines are used for mass processing of records: in a start routine all the records of the data package are available, so we can process them together. In one scenario we wanted to apply size percentages to the forecast data. For example, if material M1 is forecast to 100 in May, then after applying the size split (Small 20%, Medium 40%, Large 20%, Extra Large 20%) we wanted to have 4 records against the one single record coming in the InfoPackage. This is achieved in the start routine (see the sketch after this block).
7. What are return tables?
Ans: When we want to return multiple records instead of a single value, we use the return table in the update routine. Example: if we have the total telephone expense for a cost center, using a return table we can get the expense per employee.
8. What is compression?
Ans: When we perform compression, the data in the InfoCube moves from the F fact table to the E fact table. All the request IDs are deleted and compressed into a single request. If you perform the compression with the "with zero elimination" check box, key figures with zero values are also deleted. As a whole, compression saves database size and increases query performance.
9. What is roll-up?
Ans: This is used to load new requests (delta records) from the InfoCube into the aggregates. If we have not performed a roll-up, the current data (which is not rolled up) will not be available for reporting.
10. What is table partitioning and what are the benefits of partitioning an InfoCube?
Ans: It is the method of dividing the fact table, which improves query performance. We can partition only when we have either 0CALMONTH or 0FISCPER in the InfoCube. When we partition, the E fact table of the InfoCube is partitioned, hence we should perform compression followed by partitioning in order to get the partitioning effect. Partitioning helps reports run faster, as data is stored in the relevant partitions.
11. What are the options available in transfer rules for mapping?
Ans: InfoObject, constant, routine, formula.
12. How would you optimize the dimensions?
Ans: We should define as many dimensions as possible, taking care that no single dimension exceeds 20% of the fact table size. If a dimension table exceeds 20% of the size of the fact table, we should declare the dimension as a line item dimension (there should not be more than one characteristic assigned to a dimension in order to declare it as a line item dimension).
13. How do you find the size of the dimension tables?
Ans: With the program SAP_INFOCUBE_DESIGNS.
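A hedged sketch of the size-split logic from question 6: each incoming forecast record is exploded into four size records in the start routine. DATA_PACKAGE is the BW 3.x start routine table; the size field, quantity field and percentages are assumptions for illustration.

  " Explode each forecast record into four size records (S/M/L/XL).
  DATA: lt_out LIKE DATA_PACKAGE OCCURS 0 WITH HEADER LINE.

  LOOP AT DATA_PACKAGE.
    lt_out = DATA_PACKAGE.
    lt_out-/bic/zsize = 'S'.                             " Small 20%
    lt_out-quantity   = DATA_PACKAGE-quantity * 20 / 100.
    APPEND lt_out.
    lt_out-/bic/zsize = 'M'.                             " Medium 40%
    lt_out-quantity   = DATA_PACKAGE-quantity * 40 / 100.
    APPEND lt_out.
    lt_out-/bic/zsize = 'L'.                             " Large 20%
    lt_out-quantity   = DATA_PACKAGE-quantity * 20 / 100.
    APPEND lt_out.
    lt_out-/bic/zsize = 'XL'.                            " Extra Large 20%
    lt_out-quantity   = DATA_PACKAGE-quantity * 20 / 100.
    APPEND lt_out.
  ENDLOOP.

  " Replace the packet with the exploded records.
  DATA_PACKAGE[] = lt_out[].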
14. What are conversion routines for units and currencies in the update rule?
Ans: Using this option we can write ABAP code for unit/currency conversion. If we enable this flag, the unit of the key figure appears in the ABAP code as an additional parameter. For example, we can convert units in pounds to kilos (see the sketch at the end of this block).
15. Can an InfoObject be an InfoProvider? How and why?
Ans: Yes, when we want to report on characteristics or master data. We have to right-click on the InfoArea and select "Insert characteristic as data target". For example, we can make 0CUSTOMER an InfoProvider and report on it.
16. What is the Open Hub Service?
Ans: The Open Hub Service (process) enables us to distribute data from an SAP BW system into external systems. In BW 3.5 an object called an InfoSpoke is used to send data from BW to external systems; in BI 7 we use an open hub destination (object) for this purpose.
17. What is BW Statistics and what is its use?
Ans: It is a group of Business Content InfoCubes which are used to measure performance for query and load monitoring. It also shows the usage of aggregates, OLAP and warehouse management. There are standard reports on these InfoCubes which provide information about the BW system performance as a whole.
18. What delta options are available when you load from a flat file?
Ans: The 3 options for delta management with flat files are: full upload; new status for changed records (ODS object only); additive delta (ODS object & InfoCube).
19. Can we make a DataSource support delta?
Ans: If it is a custom (user-defined) DataSource, you can make the DataSource delta-enabled. While creating the DataSource in RSO2, after entering the DataSource name and pressing Create, in the next screen there is a button at the top which says Generic Delta. Generic delta can be enabled based on: a) a time stamp, b) a calendar day, or c) a numeric pointer, such as a document number or counter.
20. How much time does it take to extract 1 million (10 lakhs) records into an InfoCube?
Ans: This depends: if you have complex coding in the update rules it will take longer; otherwise it will take less than 30 minutes.
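A hedged sketch of the pounds-to-kilos idea from question 14: with the conversion flag enabled, the generated update routine receives the unit as an additional parameter. COMM_STRUCTURE and the field names are assumptions, and productive code would normally use the standard unit conversion services rather than a hard-coded factor.

  " Update routine body: convert an incoming weight from pounds to kg.
  IF comm_structure-unit_of_wt = 'LB'.
    result = comm_structure-net_weight * '0.453592'.
    unit   = 'KG'.
  ELSE.
    result = comm_structure-net_weight.
    unit   = comm_structure-unit_of_wt.
  ENDIF.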
21. What is the ASAP methodology?
Ans: ASAP stands for Accelerated SAP, a methodology used to implement and deliver a project efficiently. The five stages of the ASAP methodology are Project Preparation, Business Blueprint, Realization, Final Preparation & Go-Live and Support. 1. Project Preparation: in this phase the system landscape is set up. 2. Business Blueprint: a detailed documentation of your company's requirements (i.e. which objects need to be developed or modified depending on the client's requirements). 3. Realization: this is where the implementation of the project takes place (development of objects etc.), and we are involved in the project from here onwards. 4. Final Preparation: final preparation before going live, i.e. testing, conducting pre-go-live checks, end-user training etc. End-user training is given at the client site, where you train the users how to work with the new environment, as they are new to the technology. 5. Go-Live & Support: the project has gone live and is in production; the project team supports the end users.
21. Difference between display attributes and navigational attributes?
Ans: A display attribute is used only for display purposes in the report, whereas a navigational attribute is used for drilling down in the report. We don't need to maintain a navigational attribute in the cube as a characteristic (that is the advantage) to drill down.
22. Some data was uploaded twice into an InfoCube. How do you correct it?
Ans: Selective deletion if the data is already compressed; otherwise delete the duplicate request.
23. Can a number of DataSources have one InfoSource?
Ans: Yes, of course. For example, for loading texts and hierarchies we use different DataSources but the same InfoSource that is already used for attributes.
24. Can many InfoSources be assigned to one InfoProvider?
Ans: Yes.
25. Can many transaction DataSources be assigned to one InfoSource?
Ans: No, it's a one-to-one assignment in this case.
26. Currency conversions can be written in update rules. Why not in transfer rules?
Ans: Transfer rules are data-target-independent (one InfoSource can feed several targets), while currency translation typically depends on the data target, e.g. the target currency of the InfoCube's key figure; hence it is written in the update rules.
27. What types of data update do we have in BW?
Ans: Full, initialize delta (init), delta, repair full.
28. What is early delta?
Ans: With early delta initialization, delta records can already be written to the delta queue while the initialization request is running, so document postings need not be stopped during the init (the DataSource must support this).
29. When do we go for initialization without data transfer?
Ans: When the historical data is not required, or has already been loaded; the init without data transfer just activates the delta mechanism without transferring any data.
30. Why do we delete the setup tables (LBWG) & fill them (OLI*BW)?
Ans: The first time we load the historical data into the setup tables, we delete the setup tables as a precaution, to avoid any junk data residing in them. Also, when a DataSource is enhanced, we delete the setup tables and then perform the statistical setup (filling the setup tables) again.
31. Why do we have setup tables in LO extraction only, and not in the other extractions?
Ans: As the volume of historical data in the logistics modules (SD, MM and PP) is high, we have setup tables, which are intermediate tables of type transparent table.
31) What Are The Different Variables Used In BEx?
Ans: The different variables are texts, formulas, hierarchies, hierarchy nodes & characteristic values.
32) What is the processing type of a variable?
Ans: The way the variable gets its input value is called the processing type. We have the following five processing types: a) manual entry/default value, b) replacement path, c) SAP exit, d) customer exit, e) authorization.
33) How many levels can we go down in reporting?
Ans: We can drill down to any level by using navigational attributes and jump targets (RRI).
34) What are indexes?
Ans: Indexes are database indexes, which help in retrieving data faster.
35) What is the significance of KPIs?
Ans: KPIs (Key Performance Indicators) indicate the performance of a company. They are key figures.
36) What types of partitioning are there for BW?
Ans: There are two partitioning performance aspects for BW (Cube & PSA).
37) How can I compare data in R/3 with data in a BW cube after the daily delta loads? Are there any standard procedures for checking them or matching the number of records?
Ans: You can go to R/3 T-code RSA3 and run the extractor. It will give you the number of records extracted. Then go to the BW Monitor to check the number of records in the PSA and check whether it is the same, also in the monitor header tab.
38) What is the difference between writing a routine in transfer rules and writing a routine in update rules?
Ans: If you are using the same InfoSource to update data in more than one data target, it is better to write the routine in the transfer rules, because one InfoSource can be assigned to more than one data target, whereas whatever logic we write in the update rules is specific to one particular data target.
39) Can one InfoSource send data to multiple data targets?
Ans: Yes.
40) What does the number in the 'Total' column in transaction RSA7 mean?
Ans: The 'Total' column displays the number of LUWs that were written to the delta queue and have not yet been confirmed. The number includes the LUWs of the last delta request (for repetition of a delta request) and the LUWs for the next delta request. A LUW only disappears from the RSA7 display when it has been transferred to the BW system and a new delta request has been received from the BW system.
1) Name the two tables that provide detailed information about DataSources.
2) How and when can you control whether a repeat delta is requested?
3) How can you improve the performance of a query?
4) How do you prevent duplicate records at the data target level?
5) What is a virtual cube? What is its significance?
6) What are the different methods of creating a generic DataSource?
7) How do you connect a new data target to an existing data flow?
8) What is partitioning?
9) SAP batch process?
10) How do you improve InfoCube design performance?
12) Is there any difference between a repair run and a repair request? If yes, please explain in detail.
13) What is the difference between a process chain and an InfoPackage group? What is the difference between partitioning and aggregates?
Answers
Q 3) Query performance can be improved by building aggregates containing all the characteristics & key figures used in the query.
Q 5) Virtual Cube : InfoProvider with transaction data that is not stored in the object itself, but which is read directly for analysis and reporting purposes. The relevant data can be from the BI system or from other SAP or non-SAP systems. VirtualProviders only allow read access to data.
Q 6) Diff Methods of Generic datasource using Transaction RSO2 : a) Extraction from DB Table or View b) Extraction from SAP Query c) Extraction by Function Module
2) Important BW datasource relevant tables
ROOSOURCE: Table Header for SAP BW OLTP Sources
RODELTAM: BW Delta Process
ROOSFIELD: DataSource Fields
ROOSGEN: Generated Objects for OLTP Source, Last changed date and who etc.
3) For Q 8) I think you mean table partitioning. You use partitioning to improve performance. You can only partition on 0CALMONTH or 0FISCPER. 4) For Q 1) ROOSOURCE. For Q 6) Generic extraction using: 1. views, 2. InfoSet queries, 3. function modules. 5) Hi Santosh,
please note down the following Q&As; some of them are real-time questions. Q) Under which menu path is the Test Workbench to be found, including in earlier releases?
The menu path is: Tools - ABAP Workbench - Test - Test Workbench. Q) I want to delete a BEx query that is in Production system through request. Is anyone aware about it?
A) Have you tried the RSZDELETE transaction?
Q) Errors while monitoring process chains.
A) Errors can occur during data loading. Apart from that, in process chains you add many process types; for example, after loading data into an InfoCube you roll up the data into aggregates, and this roll-up into aggregates is a process type which you place after the process type for loading data into the cube. This roll-up into aggregates might fail. Another one: after you load data into an ODS, you activate the ODS data (another process type); this might also fail.
Q) In the Monitor → Details (Header/Status/Details) → Under Processing (data packet): Everything OK → Context menu of Data Package 1 (1 record): Everything OK → Simulate update. (Here we can debug update rules or transfer rules.) SM50 → Program/Mode → Program → Debugging, and debug this work process.
Q) PSA Cleansing.
A) You know how to edit the PSA. I don't think you can delete single records; you have to delete the entire PSA data for a request.
Q) Can we make a datasource to support delta.
A) If this is a custom (user-defined) DataSource you can make the DataSource delta-enabled. While creating the DataSource in RSO2, after entering the DataSource name and pressing Create, in the next screen there is a button at the top which says Generic Delta. If you want more details about this, there is a chapter in the extraction book; it is in the last pages.
Generic delta services: they support delta extraction for generic extractors according to a time stamp, a calendar day, or a numeric pointer such as a document number or counter. Only one of these attributes can be set as the delta attribute. Delta extraction is supported for all generic extractors, such as tables/views, SAP Query and function modules. The delta queue (RSA7) allows you to monitor the current status of the delta attribute.
Q) Workbooks, as a general rule, should be transported with the role.
Here are a couple of scenarios:
1. If both the workbook and its role have been previously transported, then the role does not need to be part of the transport.
2. If the role exists in both dev and the target system but the workbook has never been transported, then you have a choice of transporting the role (recommended) or just the workbook. If only the workbook is transported, an additional step has to be taken after import: locate the WorkbookID via table RSRWBINDEXT (in Dev, and verify the same exists in the target system) and proceed to manually add it to the role in the target system via transaction code PFCG; ALWAYS use Ctrl+C/Ctrl+V copy/paste for manually adding! (A lookup sketch follows after this list.)
3. If the role does not exist in the target system you should transport both the role and workbook. Keep in mind that a workbook is an object unto itself and has no dependencies on other objects. Thus, you do not receive an error message from the transport of 'just a workbook' -- even though it may not be visible, it will exist (verified via Table RSRWBINDEXT).
Overall, as a general rule, you should transport roles with workbooks.
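A small hedged sketch of the RSRWBINDEXT lookup mentioned in scenario 2; the field names WORKBOOKID and TITLE are assumptions for illustration, as is the workbook title.

  " Find a workbook's ID by its title so it can be added to the role.
  DATA: ls_wb TYPE rsrwbindext.

  SELECT SINGLE * FROM rsrwbindext
    INTO ls_wb
    WHERE title = 'My Sales Workbook'.  " hypothetical workbook title

  WRITE: / ls_wb-workbookid.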
Q) How much time does it take to extract 1 million (10 lakhs) records into an InfoCube?
A. This depends: if you have complex coding in the update rules it will take longer; otherwise it will take less than 30 minutes.
Q) What are the five ASAP Methodologies?
A: Project Preparation, Business Blueprint, Realization, Final Preparation & Go-Live and Support.
1. Project Preparation: In this phase, decision makers define clear project objectives and an efficient decision-making process (i.e. discussions with the client about his needs and requirements etc.). Project managers are involved in this phase.
A Project Charter is issued and an implementation strategy is outlined in this phase.
2. Business Blueprint: A detailed documentation of your company's requirements (i.e., the objects we need to develop or modify depending on the client's requirements).
3. Realization: In this phase the actual implementation of the project takes place (development of objects etc.), and this is where we get involved in the project.
4. Final Preparation: Final preparation before going live i.e. testing, conducting pre-go-live, end user training etc.
End-user training is given at the client site: you train the users on how to work with the new environment, as they are new to the technology.
5. Go-Live & support: The project has gone live and it is into production. The Project team will be supporting the end users.
Q) What is the landscape of R/3 & what is the landscape of BW? (Landscape of R/3: not sure.)
The landscape of BW: you have the development system, testing system and production system.
Development system: All the implementation work is done in this system (i.e., analysis of objects, development, modification etc.) and from here the objects are transported to the testing system; but before transporting, an initial test known as unit testing (testing of objects) is done in the development system.
Testing/Quality system: quality check is done in this system and integration testing is done.
Production system: All the extraction part takes place in this sys.
Q) How do you measure the size of infocube?
A: In number of records.
Q) Difference between InfoCube and ODS?
A: An InfoCube is structured as an (extended) star schema where a fact table is surrounded by dimension tables that are linked with DIM IDs; data-wise, you have aggregated data in cubes, with no overwrite functionality. An ODS is a flat structure (flat table) with no star schema concept, holding granular data (detailed level), with overwrite functionality.
Flat file DataSources do not support 0RECORDMODE in extraction.
0RECORDMODE values: X = before image, ' ' (blank) = after image, N = new, A = additive, D = delete, R = reverse.
Q) Difference between display attributes and navigational attributes?
A: A display attribute is one which is used only for display purposes in the report, whereas a navigational attribute is used for drilling down in the report. We don't need to maintain a navigational attribute in the cube as a characteristic (that is the advantage) to drill down.
Q. SOME DATA IS UPLOADED TWICE INTO INFOCUBE. HOW TO CORRECT IT?
A: But how is it possible? If you loaded it manually twice, you can delete it by request ID.
Q. CAN U ADD A NEW FIELD AT THE ODS LEVEL?
Sure you can. An ODS is nothing but a table.
Q. CAN A NUMBER OF DATASOURCES HAVE ONE INFOSOURCE?
A) Yes, of course. For example, for loading texts and hierarchies we use different DataSources but the same InfoSource.
Q. BRIEF THE DATAFLOW IN BW.
A) Data flows from the transactional system to the analytical system (BW). DataSources on the transactional system need to be replicated on the BW side and attached to an InfoSource and update rules respectively.
Q. CURRENCY CONVERSIONS CAN BE WRITTEN IN UPDATE RULES. WHY NOT IN TRANSFER RULES?
Q) WHAT IS PROCEDURE TO UPDATE DATA INTO DATA TARGETS?
FULL and DELTA.
Q) AS WE USE Sbwnn, sbiw1, sbiw2 FOR DELTA UPDATE IN LIS, WHAT IS THE PROCEDURE IN LO-COCKPIT?
There is no LIS in the LO cockpit. We will have DataSources which can be maintained (append fields). Refer to the white paper on LO-Cockpit extractions.
Q) Why do we delete the setup tables (LBWG) & fill them (OLI*BW)?
A) Initially we don't delete the setup tables, but when we make a change to the extract structure we do. When we change the extract structure there are newly added fields that were not there before; so, to get the required data and avoid redundancy, we delete and then refill the setup tables (i.e., to refresh the statistical data). The extraction setup reads the dataset that you want to process (such as customer orders, with tables like VBAK, VBAP) and fills the relevant communication structure with the data. The data is stored in cluster tables, from where it is read when the initialization is run. It is important that during the initialization phase no one creates or modifies application data, at least until the setup tables are filled.
Q) SIGNIFICANCE of ODS?
It holds granular data (detailed level).
Q) WHERE IS THE PSA DATA STORED?
In PSA tables.
Q) WHAT IS DATA SIZE?
The volume of data one data target holds (in number of records).
Q) Different types of INFOCUBES.
Basic and Virtual (remote, SAP remote and multi). A virtual cube is used, for example, where information has to be up to date online, as in a railway reservation system. For designing a virtual cube you write a function module that links to the underlying table; the virtual cube is just a structure, and whenever the table is updated the virtual cube fetches the data from the table and displays it in the report online. FYI, you can go to https://www.sdn.sap.com/sdn/index.sdn and search for 'Designing Virtual Cube' to find good material on designing the function module.
Q) INFOSET QUERY.
Can be made of ODSs and characteristic InfoObjects with master data.
Q) IF THERE ARE 2 DATASOURCES, HOW MANY TRANSFER STRUCTURES ARE THERE?
In R/3 or in BW? 2 in R/3 and 2 in BW.
Q) BRIEF SOME STRUCTURES USED IN BEX.
Rows and columns; you can create structures.
Q) WHAT ARE THE DIFFERENT VARIABLES USED IN BEX?
Different variables are: texts, formulas, hierarchies, hierarchy nodes & characteristic values. Variable types are:
Manual entry/default value, replacement path, SAP exit, customer exit, authorization.
Q) HOW MANY LEVELS YOU CAN GO IN REPORTING?
You can drill down to any level by using navigational attributes and jump targets.
Q) TOOLS USED FOR PERFORMANCE TUNING?
ST22, number ranges, deleting indexes before load, etc.
Q) PROCESS CHAINS: IF YOU HAVE USED THEM, HOW WILL YOU SCHEDULE DATA LOADS DAILY?
There should be some tool to run the job daily (SM37 jobs).
Q) What types of partitioning are there for BW?
There are two partitioning performance aspects for BW (cube & PSA):
A) Query data retrieval performance improvement: partitioning by (say) date range improves data retrieval by making best use of database execution plans and indexes (of, say, an Oracle database engine).
B) Transactional load partitioning improvement: partitioning based on expected load volumes and data element sizes improves data loading into the PSA and cubes by InfoPackages (e.g., without timeouts).
Q) How can I compare data in R/3 with data in a BW Cube after the daily delta loads? Are there any standard procedures for checking them or matching the number of records?
A) You can go to R/3 TCode RSA3 and run the extractor. It will give you the number of records extracted. Then go to the BW monitor to check the number of records in the PSA and check whether it is the same (also in the monitor header tab).
A) RSA3 is a simple extractor checker program that allows you to rule out extraction problems in R/3. It is simple to use, but only really tells you if the extractor works. Since the records that get updated into cubes/ODS structures are controlled by update rules, you will not be able to determine what is in the cube compared to what is in the R/3 environment. You will need to compare records on a 1:1 basis against records in R/3 transactions for the functional area in question. I would recommend enlisting the help of the end-user community to assist, since they presumably know the data.
To use RSA3, go to it and enter the extractor, e.g. 2LIS_02_HDR. Click Execute and you will see the record count; you can also display the data. You are not modifying anything, so what you do in RSA3 has no effect on data quality afterwards. However, it will not tell you how many records should be expected in BW for a given load. You have that information in the monitor RSMO during and after data loads. From RSMO, for a given load, you can determine how many records were passed through the transfer rules from R/3, how many targets were updated, and how many records passed through the update rules. It also gives you error messages from the PSA.
Q) What is the difference between writing a routine in transfer rules and writing a routine in update rules?
A) If you are using the same InfoSource to update data in more than one data target, it's better to write the routine in the transfer rules, because you can assign one InfoSource to more than one data target; whatever logic you write in the update rules is specific to one particular data target.
Q) Routine with Return Table.
A) Update rules generally only have one return value. However, in the key figure calculation tab strip you can create a routine with the checkbox 'Return table' selected. The corresponding key figure routine then no longer has a return value but a return table, and you can generate as many key figure values as you like from one data record; see the sketch below.
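The following is a minimal, hedged sketch of the body of such a routine as generated in BW 3.x update rules. The employee lookup table IT_EMPLOYEES and the /BIC/Z* field names are illustrative assumptions, not the generated code of any particular cube:

  * Body of a key-figure routine with return table: split one source record's
  * revenue evenly across several employees, producing one row per employee.
    TYPES: ty_emp(8) TYPE c.
    DATA:  it_employees TYPE STANDARD TABLE OF ty_emp,
           lv_employee  TYPE ty_emp,
           ls_result    LIKE LINE OF result_table,
           lv_count     TYPE i.

    " ... fill it_employees, e.g. from a lookup on a master data table ...
    DESCRIBE TABLE it_employees LINES lv_count.
    CHECK lv_count > 0.                            " avoid dividing by zero

    LOOP AT it_employees INTO lv_employee.
      " ICUBE_VALUES holds the characteristic values calculated so far
      MOVE-CORRESPONDING icube_values TO ls_result.
      ls_result-/bic/zemployee = lv_employee.
      ls_result-/bic/zrevenue  = comm_structure-/bic/zrevenue / lv_count.
      APPEND ls_result TO result_table.
    ENDLOOP.

    CLEAR returncode.                              " 0 = records accepted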
Q) Start routines?
A) Start routines can be written in both update rules and transfer rules. Suppose you want to restrict (delete) some records based on conditions before they get loaded into the data targets; you can specify this in the update rules start routine.
Ex: DELETE DATA_PACKAGE deletes records based on a condition, as in the sketch below.
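A minimal sketch of such a start routine body (BW 3.x update rules); the status field /BIC/ZSTATUS and the value 'X' are illustrative assumptions:

  * Start routine body: drop records we never want in the data target.
  * DATA_PACKAGE is the standard internal table holding the current packet.
    DELETE data_package WHERE /bic/zstatus = 'X'.

    CLEAR abort.    " 0 = continue the load; a non-zero value cancels it

The same pattern works in a transfer rules start routine if the records should be filtered before they reach any data target.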
Q) X & Y Tables?
X-table = a table that links material SIDs with SIDs for time-independent navigation attributes. Y-table = a table that links material SIDs with SIDs for time-dependent navigation attributes.
There are four types of SID tables:
X: time-independent navigational attribute SID table
Y: time-dependent navigational attribute SID table
H: hierarchy SID table
I: hierarchy structure SID table
Q) Filters & Restricted Key figures (real time example)
Restricted KFs you can have for an SD cube: billed quantity, billing value and number of billing documents as RKFs.
Q) Line-item dimension (give me a real-time example)
Line-item dimension: invoice number or document number is a real-time example.
Q) What does the number in the 'Total' column in transaction RSA7 mean?
A) The 'Total' column displays the number of LUWs that were written to the delta queue and that have not yet been confirmed. The number includes the LUWs of the last delta request (for repetition of a delta request) and the LUWs for the next delta request. A LUW only disappears from the RSA7 display when it has been transferred to the BW system and a new delta request has been received from the BW system.
Q) Which table in SAP BW contains the technical name/description and creation date of a particular report (reports created using BEx Analyzer)?
A) There is no single such table in BW. If you want such details, while opening a particular query press the Properties button and you will see all the details you want.
You will find information about technical names and descriptions of queries in the following tables:
- RSRREPDIR: directory of all reports
- RSZELTDIR: directory of the reporting component elements
For workbooks and their connections to queries, check:
- RSRWORKBOOK: where-used list for reports in workbooks
- RSRWBINDEXT: titles of Excel workbooks in the InfoCatalog
Q) What is a LUW in the delta queue?
A) A LUW, from the point of view of the delta queue, can be an individual document, a group of documents from a collective run, or a whole data packet of an application extractor.
Q) Why does the number in the 'Total' column in the overview screen of transaction RSA7 differ from the number of data records that is displayed when you call the detail view?
A) The number on the overview screen corresponds to the total of LUWs (see the first question) that were written to the qRFC queue and that have not yet been confirmed. The detail screen displays the records contained in the LUWs. Both the records belonging to the previous delta request and the records that do not meet the selection conditions of the preceding delta init requests are filtered out; thus, only the records that are ready for the next delta request are displayed on the detail screen. In the detail screen of transaction RSA7, a possibly existing customer exit is not taken into account.
Q) Why does transaction RSA7 still display LUWs on the overview screen after successful delta loading?
A) Only when a new delta has been requested does the source system learn that the previous delta was successfully loaded into the BW system. Then the LUWs of the previous delta may be confirmed (and also deleted). In the meantime, the LUWs must be kept for a possible repetition of the delta request. In particular, the number on the overview screen does not change after the first delta has been loaded into the BW system.
Q) Why are selections not taken into account when the delta queue is filled?
A) Filtering according to selections takes place when the system reads from the delta queue. This is necessary for reasons of performance.
Q) Why is there a DataSource with '0' records in RSA7 if delta exists and has also been loaded successfully?
It is most likely a DataSource that does not send delta data to the BW system via the delta queue but directly via the extractor (delta for master data using ALE change pointers). Such a DataSource should not be displayed in RSA7. This error is corrected with BW 2.0B Support Package 11.
Q) Do the entries in table ROIDOCPRMS have an impact on the performance of the loading procedure from the delta queue?
A) The impact is limited. If performance problems are related to the loading process from the delta queue, then refer to the application-specific notes (for example in the CO-PA area, in the logistics cockpit area and so on).
Caution: As of Plug-In 2000.2 patch 3, the entries in table ROIDOCPRMS are as effective for the delta queue as for a full update. Note, however, that LUWs are not split during data loading for consistency reasons. This means that when very large LUWs are written to the delta queue, the actual package size may differ considerably from the MAXSIZE and MAXLINES parameters.
Q) Why does it take so long to display the data in the delta queue (for example approximately 2 hours)?
A) With Plug-In 2001.1 the display was changed: the user has the option of defining the amount of data to be displayed, restricting it, selectively choosing the number of a data record, making a distinction between the 'actual' delta data and the data intended for repetition, and so on.
Q) What is the purpose of the function 'Delete data and meta data in a queue' in RSA7? What exactly is deleted?
A) You should act with extreme caution when you use the deletion function in the delta queue. It is comparable to deleting an InitDelta in the BW system and should preferably be executed there. You not only delete all data of this DataSource for the affected BW system, but also lose the entire information concerning the delta initialization. You can then only request new deltas after another delta initialization.
When you delete the data, the LUWs kept in the qRFC queue for the corresponding target system are confirmed. Physical deletion only takes place in the qRFC outbound queue if there are no more references to the LUWs. The deletion function is intended, for example, for a case where the BW system from which the delta initialization was originally executed no longer exists or can no longer be accessed.
Q) Why does it take so long to delete from the delta queue (for example half a day)?
A) Import Plug-In 2000.2 patch 3. With this patch the performance during deletion is considerably improved.
Q) Why is the delta queue not updated when you start the V3 update in the Logistics Cockpit area?
A) It is most likely that a delta initialization has not yet run or that the delta initialization was not successful. A successful delta initialization (the corresponding request must have QM status 'green' in the BW system) is a prerequisite for the application data being written to the delta queue.
Q) What is the relationship between RSA7 and the qRFC monitor (transaction SMQ1)?
A) The qRFC monitor basically displays the same data as RSA7. The internal queue name must be used for selection on the initial screen of the qRFC monitor. This is made up of the prefix 'BW', the client and the short name of the DataSource. For DataSources whose names are 19 characters long or shorter, the short name corresponds to the name of the DataSource. For DataSources whose names are longer than 19 characters (for delta-capable DataSources, only possible as of Plug-In 2001.1), the short name is assigned in table ROOSSHORTN. In the qRFC monitor you cannot distinguish between repeatable and new LUWs; moreover, the data of a LUW is displayed there in an unstructured manner.
Q) Why is there data in the delta queue although the V3 update was not started?
A) The data was posted in the background. In that case the records are updated directly in the delta queue (RSA7). This happens in particular during automatic goods receipt posting (MRRS). There is no duplicate transfer of records to the BW system. See Note 417189.
Q) Why does the button 'Repeatable' on the RSA7 data details screen show not only data loaded into BW during the last delta but also newly added data, i.e. 'pure' delta records?
A) It was programmed so that a request in repeat mode fetches both the actually repeatable (old) data and the new data from the source system.
Q) I loaded several delta inits with various selections. For which one is the delta loaded?
A) For delta, all selections made via delta inits are summed up. This means a delta for the 'total' of all delta initializations is loaded.
Q) How many selections for delta inits are possible in the system?
A) With simple selections (intervals without complicated join conditions or single values), you can make up to about 100 delta inits; it should not be more. With complicated selection conditions, it should only be up to 10-20 delta inits. Reason: with many selection conditions that are joined in a complicated way, too many 'where' lines are generated in the generated ABAP source code, which may exceed the memory limit.
Q) I intend to copy the source system, i.e. make a client copy. What will happen with my delta? Should I initialize again after that?
A) Before you copy a source client or source system, make sure that your deltas have been fetched from the delta queue into BW and that no delta is pending. After the client copy, an inconsistency might occur between the BW delta tables and the OLTP delta tables, as described in Note 405943. After the client copy, table ROOSPRMSC will probably be empty in the OLTP since this table is client-independent. After the system copy, the table will contain the entries with the old logical system name, which are no longer useful for further delta loading from the new logical system. The delta must be initialized in any case, since the delta depends on both the BW system and the source system. Even if no dump 'MESSAGE_TYPE_X' occurs in BW when editing or creating an InfoPackage, you should expect that the delta has to be initialized after the copy.
Q) Is it allowed in transaction SMQ1 to use the functions for manual control of processes?
A) Use SMQ1 as an instrument for diagnosis and control only. Make changes to BW queues only after informing BW Support, or only if this is explicitly requested in a note for component 'BC-BW' or 'BW-WHM-SAPI'.
Q) Although the delta request is started after completion of the collective run (V3 update), it does not contain all documents; only another delta request loads the missing documents into BW. What is the cause of this 'splitting'?
A) The collective run submits the open V2 documents for processing to the task handler, which processes them in one or several parallel update processes in an asynchronous way. For this reason, plan a sufficiently large 'safety time window' between the end of the collective run in the source system and the start of the delta request in BW. An alternative solution, where this problem does not occur, is described in Note 505700.
Q) Despite my deleting the delta init, LUWs are still written into the delta queue?
A) In general, delta initializations and deletions of delta inits should always be carried out at a time when no posting takes place. Otherwise, buffer problems may occur: if a user started an internal mode at a time when the delta initialization was still active, he/she posts data into the queue even though the initialization had been deleted in the meantime. This is the case in your system.
Q) In SMQ1 (qRFC monitor) I have status 'NOSEND'. In table TRFCQOUT some entries have the status 'READY', others 'RECORDED'. ARFCSSTATE is 'READ'. What do these statuses mean? Which values in the field 'Status' mean what, and which values are correct and which are alarming? Are the statuses BW-specific or generally valid in qRFC?
A) Tables TRFCQOUT and ARFCSSTATE: status READ means that the record was read once, either in a delta request or in a repetition of the delta request. However, this does not mean that the record has successfully reached BW yet. Status READY in TRFCQOUT and RECORDED in ARFCSSTATE mean that the record has been written into the delta queue and will be loaded into BW with the next delta request or a repetition of a delta. In any case, only the statuses READ, READY and RECORDED in both tables are considered valid. The status EXECUTED in TRFCQOUT can occur temporarily: it is set before starting a delta extraction for all records with status READ present at that time. The records with status EXECUTED are usually deleted from the queue in packages within a delta request, directly after the status is set and before a new delta is extracted. If you see such records, it means that either a process which confirms and deletes records already loaded into BW is currently running successfully, or, if the records remain in the table for a longer period of time with status EXECUTED, there are likely problems with deleting the records which have already been successfully loaded into BW. In this state, no more deltas are loaded into BW. Every other status is an indicator of an error or an inconsistency. NOSEND in SMQ1 means nothing (see Note 378903). The value 'U' in field 'NOSEND' of table TRFCQOUT is disconcerting.
Q) The extract structure was changed when the delta queue was empty. Afterwards new delta records were written to the delta queue. When loading the delta into the PSA, it shows that some fields were moved. The same result occurs when the contents of the delta queue are listed via the detail display. Why is the data displayed differently? What can be done?
Make sure that the change to the extract structure is also reflected in the database and that all servers are synchronized. We recommend resetting the buffers using transaction $SYNC. If the extract structure change is not communicated synchronously to the server where the delta records are being created, the records are written with the old structure until the new structure has been generated. This may have disastrous consequences for the delta. When the problem occurs, the delta needs to be re-initialized.
Q) How and where can I control whether a repeat delta is requested?
A) Via the status of the last delta in the BW request monitor. If the request is RED, the next load will be of type 'Repeat'. If you need to repeat the last load for certain reasons, set the request in the monitor to red manually. For the contents of the repeat, see Question 14. Delta requests set to red, despite the data already being updated, lead to duplicate records in a subsequent repeat if they have not been deleted from the data targets concerned beforehand.
Q) As of PI 2003.1, the Logistics Cockpit offers various types of update methods. Which update method is recommended in logistics? According to which criteria should the decision be made? How can I choose an update method in logistics?
See the recommendation in Note 505700.
Q) Are there particular recommendations regarding the data volume the delta queue may grow to, without facing the danger of a read failure due to memory problems?
A) There is no strict limit (except for the restricted number range of the 24-digit QCOUNT counter in the LUW management table, which is of no practical importance, however, or the restrictions regarding the volume and number of records in a database table).
When estimating "smooth" limits, both the number of LUWs and the average data volume per LUW are important. As a rule, we recommend bundling data (usually documents) already when writing to the delta queue, to keep the number of LUWs small (this can partly be set in the applications, e.g. in the Logistics Cockpit). The data volume of a single LUW should not be considerably larger than 10% of the memory available to the work process for data extraction (in a 32-bit architecture with a memory volume of about 1 GByte per work process, 100 MBytes per LUW should not be exceeded). That limit is of rather little practical importance as well, since a comparable limit already applies when writing to the delta queue. If the limit is observed, correct reading is guaranteed in most cases.
If the number of LUWs cannot be reduced by bundling application transactions, you should at least make sure that the data are fetched from all connected BWs as quickly as possible. But for other, BW-specific, reasons, the frequency should not be higher than one DeltaRequest per hour.
To avoid memory problems, a program-internal limit ensures that no more than 1 million LUWs are ever read and fetched from the database per delta request. If this limit is reached within a request, the delta queue must be emptied by several successive delta requests. We recommend, however, not getting near that limit, but triggering the fetching of data from the connected BWs already when the number of LUWs reaches a five-digit value.
Q) I would like to display the date the data was uploaded in the report. Usually we load the transactional data nightly. Is there any easy way to include this information in the report for users, so that they know the validity of the report?
A) If I understand your requirement correctly, you want to display the date on which data was loaded into the data target from which the report is being executed. If so, configure your workbook to display the text elements in the report. This displays the 'relevance of data' field, which is the date on which the data load took place.
Q) Can we filter the fields at the transfer structure?
Q) Can we load data directly into an InfoObject without extraction? Is it possible?
Yes. We can copy from another InfoObject if it is the same. We load data from the PSA if it is already in the PSA.
Q) HOW MANY DAYS CAN WE KEEP THE DATA IN PSA IF WE ARE SCHEDULED DAILY, WEEKLY AND MONTHLY?
A) We can set the time.
Q) HOW CAN YOU GET THE DATA FROM THE CLIENT IF YOU ARE WORKING ON OFFSHORE PROJECTS? THROUGH WHICH NETWORK?
A) VPN: Virtual Private Network. VPN is nothing but a sort of network where we
can connect to the client systems sitting offshore through a RAS (remote access server).
Q) HOW DO YOU ANALYZE THE PROJECT AT FIRST?
Prepare the project plan and environment; define project management standards and procedures; define implementation standards and procedures; testing & go-live + support.
Q) THERE IS ONE ODS AND 4 INFOCUBES. WE SEND DATA AT A TIME TO ALL CUBES, AND ONE CUBE GOT A LOCK ERROR. HOW CAN YOU RECTIFY THE ERROR?
Go to TCode SM66, see which process is locked, select that PID, then go to TCode SM12 and unlock it. Lock errors like this occur during scheduling.
Q) Can anybody tell me how to add a navigational attribute in the BEx report in the rows?
A) Expand the dimension under the left-side panel (that is, the InfoCube panel), select the navigational attribute, and drag and drop it into the rows panel.
Q) WHAT IS A TRANSACTIONAL CUBE?
A) Transactional InfoCubes differ from standard InfoCubes in that the former have an improved write-access performance level. Standard InfoCubes are technically optimized for read-only access and for a comparatively small number of simultaneous accesses. The transactional InfoCube, instead, was developed to meet the demands of SAP Strategic Enterprise Management (SEM), meaning that data is written to the InfoCube (possibly by several users at the same time) and re-read as soon as possible. Standard basic cubes are not suitable for this.
Q) Is there any way to delete cube contents within update rules from an ODS data source? The reason for this would be to delete (or zero out) a cube record in an "Open
Order" cube if the open order quantity was 0. I've tried using the 0recordmode but that doesn't work. Also, would it be easier to write a program that would be run after the load and delete the records with a zero open qty?
A) In a START routine for update rules you can write ABAP code.
A) Yes, you can do it. Create a start routine in the update rule.
It is not 'deleting cube contents with update rules'; it is only possible to avoid that some content is updated into the InfoCube, using the start routine. Loop over all the records and delete the records that meet the condition 'the open order quantity is 0'. You also have to think about before and after images in the case of a delta upload: in that case you may delete the change record, keep the old one, and after the change have the wrong information. A sketch follows.
Q) Wondering how I can get the values: for example, if I run a report for the month range 01/2004 - 10/2004, then the monthly value is actually divided by the number of months that I selected. Which variable should I use?
Q) Why is it that every time I switch from one InfoProvider to another InfoObject, or from one item to another while modeling, I always get the message "Reading Data" or "constructing workbench", and it runs for minutes? Any way to stop this?
Q) Can anyone give me info on how the BW delta works? I would also like to know about 'before image and after image', as I am currently in a BW project and have to write start routines for delta loads.
Q) I am very new to BW. I would like to clarify a doubt regarding the delta extractor. If I am correct, by using delta extractors the data that has already been scheduled will not be uploaded again. Take a specific scenario, sales: I have uploaded all the sales orders created until yesterday into the cube. Now say I make changes to one of the open records which was already uploaded. What happens when I schedule it again? Will the same record be uploaded again with the changes, or will the changes be applied to the previous record?
A)
Q) In BW we need to write ABAP routines. I wish to know when, and what type of, ABAP routines we have to write. Also, are these routines written in update rules? I would be glad if this were clarified with real-time scenarios and a few examples.
A) We write our routines in the start routines of the update rules or in the transfer structure (you can choose between writing them in the start routine or directly behind the different characteristics: in the transfer structure you just click on the yellow triangle behind a characteristic and choose "routine"; in the update rules you can choose "start routine" or click on the triangle with the green square behind an individual characteristic). Usually we only use a start routine when it does not concern one single characteristic (for example, when you have to read the same table for 4 characteristics). I hope this helps. We used ABAP routines, for example:
To convert to uppercase (transfer structure) -- see the sketch below
To convert values from a third-party tool with different keys into the same keys as our SAP system uses (transfer structure)
To select only a part of the data from an InfoSource when updating the InfoCube (start routine), etc.
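A minimal sketch of the uppercase conversion as a transfer-rule field routine (BW 3.x). The source field /BIC/ZMATNAME in TRAN_STRUCTURE is an illustrative assumption; RESULT and RETURNCODE are the parameters of the generated routine:

  * Field routine body in the transfer rules: pass the incoming value on
  * in uppercase so that keys match regardless of the source system's casing.
    result = tran_structure-/bic/zmatname.   " hypothetical source field
    TRANSLATE result TO UPPER CASE.
    CLEAR returncode.                        " 0 = record is OK

The third-party key conversion mentioned above follows the same pattern, typically with a lookup-table read between the assignment and the CLEAR.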
Q) What does an InfoCube contain?
A) Each InfoCube has one fact table & a maximum of 16 dimensions (13 user-defined + 3 system-defined: time, unit & data packet).
Q) What does the FACT table contain?
A) A fact table consists of key figures. Each fact table can contain a maximum of 233 key figures. A dimension can contain up to 248 freely available characteristics.
Q) How many dimensions are in a CUBE?
A) 16 dimensions: 13 user-defined & 3 system pre-defined (time, unit & data packet).
Q) What does the SID table contain?
SID keys linked with the dimension tables & master data tables (attributes, texts, hierarchies).
Q) What does the ATTRIBUTE table contain?
Master attribute data.
Q) What does the TEXT table contain?
Master text data: short text, long text, medium text & the language key if it is language-dependent.
Q) What does the hierarchy table contain?
Master hierarchy data.
Q) What is the advantage of the extended STAR schema?
Q) Differences between the STAR schema & the extended schema?
A) In a STAR SCHEMA, a FACT table sits in the center, surrounded by dimension tables, and the dimension tables contain the master data. In the extended schema the dimension tables do not contain master data; instead, it is stored in master data tables divided into attributes, texts & hierarchies. These master data and dimension tables are linked with each other via SID keys. Master data tables are independent of the InfoCube and reusable in other InfoCubes.
Q) Where in BW do you go to add a character like \ or # so that BW will accept it? This is transaction data which loads fine into the PSA but not into the data target.
A) Check transaction SPRO, then click the "goggles" button => Business Information Warehouse => Global Settings => 2nd point in the list. I hope you can use my "guide" (my BW is in German, so I don't know all the English descriptions).
Q) Do data packets exist even if you don't enter the master data (when created)?
Q) When are dimension IDs created?
A) When transaction data is loaded into the InfoCube.
Q) When are SIDs generated?
A) When master data is loaded into the master tables (attributes, texts, hierarchies).
Q) How would we delete the data in an ODS?
A) By request ID, selective deletion & change log entry deletion.
Q) How would we delete the data in the change log table of an ODS?
A) Context menu of the ODS → Manage → Environment → Change log entries.
Q) What extra fields does the PSA contain?
A) (4) Record ID, data packet …
Q) Is partitioning possible for an ODS?
A) No, it's possible only for a cube.
Q) Why partitioning?
A) For performance tuning.
Q) Have you ever tried to load data from 2 InfoPackages into one cube?
A) Yes.
Q) Different types of attributes?
A) Navigational attributes, display attributes, time-dependent attributes, compounding attributes, transitive attributes, currency attributes.
Q) Transitive attributes?
A) Navigational attributes that themselves have navigational attributes are called transitive attributes.
Q) Navigational attribute?
A) Used for drill-down reporting (RRI).
Q) Display attributes?
A) You can show DISPLAY attributes in a report; they are used only for displaying.
Q) How do you recognize whether an attribute is a display attribute?
A) In Edit Characteristics of the characteristic, on the General tab, it is checked as 'attribute only'.
Q) Compounding attribute? A)
Q) Time-dependent attributes? A)
Q) Currency attributes? A)
Q) Authorization-relevant object: why is authorization needed? A)
Q) How do we convert a master data InfoObject to a data target?
A) InfoArea → InfoProvider (context menu) → Insert characteristic data as data target.
Q) How do we load the data if a flat file consists of both master and transaction data?
A) Using the flexible update method while creating the InfoSource.
Q) Steps in LIS extraction? A)
Q) Steps in LO extraction?
A) * Maintain extract structures (R/3) * Maintain DataSources (R/3) * Replicate the DataSource in BW * Assign InfoSources * Maintain communication structures/transfer rules * Maintain InfoCubes & update rules * Activate extract structures (R/3) * Delete setup tables/set up extraction (R/3) * InfoPackage for the delta initialization * Set up periodic V3 update (R/3) * InfoPackage for delta uploads.
Q) Steps in flat file extraction? A)
Q) Different delta types in LO?
A) Direct delta, queued delta, serialized V3 update, unserialized V3 update.
Direct delta: with every document posted in R/3, the extraction data is transferred directly into the BW delta queue. Each document posting with delta extraction becomes exactly one LUW in the corresponding delta queue.
Queued delta: the extraction data from the application is collected in an extraction queue instead of as update data, and can be transferred to the BW delta queue by an update collection run, as in the V3 update.
Q) What does the LO Cockpit contain?
A) * Maintaining extract structures * Maintaining DataSources * Activating updates * Controlling updates.
Q) RSA6 -- Maintain DataSources.
Q) RSA7 -- Delta queue (allows you to monitor the current status of the delta attribute).
Q) RSA3 -- Extractor checker.
Q) LBW0 -- TCode for LIS.
Q) LBWG -- Delete setup tables in LO.
Q) OLI*BW -- Fill setup tables.
Q) LBWE -- TCode for logistics extractors.
Q) RSO2 -- Maintaining generic DataSources.
Q) MC21 -- Creating a user-defined information structure for LIS (it is an InfoSource in SAP BW).
Q) MC24 -- Creating update rules for LO.
Q) PFCG -- Role maintenance, assign users to these roles.
Q) SE03 -- Changeability of the BW namespace.
Q) RSDCUBEM -- Delete or change the InfoCube.
Q) RSD5 -- Data packet characteristics maintenance.
Q) RSDBC -- DB Connect.
Q) RSMO -- Monitoring of data loads.
Q) RSCUSTV6 -- Partitioning of the PSA.
Q) RSRT -- Query monitor.
Q) RSRV -- Analysis and repair of BW objects.
Q) RRMX -- BEx Analyzer.
Q) RSBBS -- Report-to-report interface (RRI).
Q) SPRO -- IMG (to make configurations in BW).
Q) RSDDV -- Maintaining aggregates.
Q) RSKC -- Character permit checker.
Q) ST22 -- Checking short dumps.
Q) SM37 -- Scheduling background jobs.
Q) RSBOH1 -- Open Hub Service: create InfoSpoke.
Q) RSMONMESS -- "Messages for the monitor" table.
Q) ROOSOURCE -- Table to find delta update methods.
Q) RODELTAM -- Table of delta record modes (i.e. before image & after image).
Q) SMOD -- Enhancement definition.
Q) CMOD -- Project management for enhancements.
Q) SPAU -- Program compare.
Q) SE11 -- ABAP Dictionary.
Q) SE09 -- Transport Organizer (workbench organizer).
Q) SE10 -- Transport Organizer (extended view).
Q) SBIW -- Implementation guide.
Q) Statistical update? A)
Q) What are process chains?
A) TCode RSPC. A process chain is a sequence of processes scheduled in the background & waiting to be triggered by a specific event; process chains are nothing but grouped processes. The process variant (start variant) is how the process chain knows where to start; there should be exactly one start variant in each process chain, and here we specify when the process chain should start, by giving a date and time, or whether it should start immediately. Some of these processes trigger an event of their own that in turn triggers other processes.
Ex: Start chain → Delete basic cube indexes → Load data from the source system to the PSA → Load data from the PSA to the data target ODS → Load data from the ODS to the basic cube → Create indexes for the basic cube after loading data → Create database statistics → Roll up data into the aggregate → Restart chain from the beginning.
Q) What are process types & process variants?
A) Process types are: general services, load process & subsequent processing, data target administration, reporting agent & other BW services. The process variant (start variant) is how the process type knows when & where to start.
Q) Difference between master data & transaction InfoPackages?
A) 5 tabs in master data & 6 tabs in transaction data; the extra tab in transaction data is DATA TARGETS.
Q) Types of updates?
A) Full update, init delta update & delta update.
Q) Is a full update possible while loading data from R/3?
A) InfoPackage → Scheduler → Repair Request flag (check). This is only possible when we use the MM & SD modules.
Q) InfoPackage groups? A)
Q) Explain the status of records in the active & change log tables of an ODS when modified in the source system. A)
Q) Why does it take more time to load transaction data, even when loading it without master data (we check the checkbox 'Always update data, even if no master data exists for the data')?
A) Because while loading the data the system has to create SID keys for the transaction data.
Q) What do we use HIDE fields, SELECT fields & CANCELLATION fields for?
A) Selection fields: the only purpose is that when we check this column, the field will appear in the InfoPackage data selection tab. Hide fields: these fields are not transferred to the BW transfer structure. Cancellation: it reverses the posted documents' key figures (as defined by the customer) by multiplying them by -1, nullifying the value; I think this is reverse posting.
Q) Transporting.
A) When it comes to transporting for R/3 and BW, you should always transport all the R/3 objects first. Once you transport all the R/3 objects to the 2nd system, you have to replicate the DataSources into the 2nd BW system, and then transport the BW objects.
First, you transport all the DataSources from the 1st R/3 system to the 2nd R/3 system. Second, you replicate the DataSources from the 2nd R/3 system into the 2nd BW system. Third, you transport all the BW objects from the 1st BW system to the 2nd BW system. You have to send your extractors first to the corresponding R/3 Q box and replicate them to BW, and then do the transport in BW: development, testing and then production.
Q) Functionality of InitDelta & delta updates? A)
Q) What is a change-run ID?
A)
Q) Currency conversions? A)
Q) Difference between a calculated key figure & a formula? A)
Q) When does a transfer structure contain more fields than the communication structure of an InfoSource?
A) If we use a routine to fill a field of the communication structure from several fields of the transfer structure, the transfer structure may contain more fields.
A) The total number of InfoObjects in the communication structure & extract structure may differ, since InfoObjects can be copied to the communication structure from all the extract structures.
Q) What is the PSA, the technical name of the PSA, and its uses?
A) When we want to delete the data in an InfoProvider & re-load the data, at this stage we can load directly from the PSA without extracting from R/3 again.
A) For cleansing purposes.
Q) Variables in reporting?
A) Characteristic values, text, hierarchies, hierarchy nodes & formula elements.
Q) Variable processing types in reporting?
A) Manual, replacement path, SAP exit, authorization, customer exit.
Q) Why do we use the RSRP0001 enhancement?
A) For enhancing the customer exit in reporting.
Q) What is the use of filters?
A) They restrict data.
Q) What is the use of conditions?
A) To retrieve data based on particular conditions, like less than, greater than, less than or equal, etc.
Q) Difference between filters & conditions? A)
Q) What is NODIM?
A) It strips the unit/dimension from a value; for example, it converts 5 l + 5 kg into 10.
Q) What are exceptions for? How can I get the PINK color?
A) Exceptions differentiate values with colors; by assigning the relevant color you can get pink.
Q) Why is SAPLRSAP used?
A) We use these function modules for extractor enhancements in R/3.
Q) What are workbooks & their uses? A)
Q) Where are workbooks saved?
A) Workbooks are saved in the Favorites.
Q) Can Favorites be accessed by other users?
A) No, they need authorization.
Q) What is an InfoSet?
A) An InfoSet is a special view of a dataset, such as a logical database, table join, table, or sequential file, and is used by SAP Query as a data source. InfoSets determine the tables, or fields within tables, that can be referenced by a report. In most cases, InfoSets are based on logical databases. SAP Query includes a component for maintaining InfoSets. When you create an InfoSet, a DataSource in an application system is selected. In BW you can navigate to an InfoSet Query using one or more ODS objects or InfoObjects. You can also drill through to BEx queries and InfoSet Queries from a second BW system that is connected as a data mart. The InfoSet Query functions allow you to report using flat data tables (master data reporting): choose InfoObjects or ODS objects as data sources; these can be connected using joins. You define the data sources in an InfoSet. An InfoSet can contain data from one or more tables that are connected to one another by key fields. The data sources specified in the InfoSet form the basis of the InfoSet Query.
Q) LO's?
A) Synchronous update (V1 update): the statistics update is carried out at the same time as the document update, in the same task.
• Asynchronous update (V2 update):
The document update and the statistics update take place separately in different tasks.
• Collective update (V3 update):
Again, the document update is separate from the statistics update. However, in contrast to the V2 update, the V3 collective statistics update must be scheduled as a job. Successfully scheduling the update will ensure that all the necessary information structures are properly updated when new or existing documents are processed. Scheduling intervals should be based on the amount of activity on the particular OLTP system: for example, a development system with a relatively low or zero volume of new documents may only need to run the V3 update on a weekly basis, while a full production environment with hundreds of transactions per hour may have to be updated every 15 to 30 minutes. Standard SAP background job scheduling functionality may be used to schedule the V3 updates. You can verify that all V3 updates have completed successfully via transaction SM13. This transaction takes you to the "UPDATE RECORDS: MAIN MENU" screen; on this screen, enter an asterisk as the user (for all users), flag the radio button 'All', and hit Enter. Any outstanding V3 updates will be listed. While a non-executed V3 update will not hinder your OLTP system, by administering the V3 update jobs properly your information structures will be current and overall performance will be improved.
Business Content
Business Content is the umbrella term for the preconfigured BW objects delivered by SAP. These objects provide ready-made solutions to basic business information requirements and are used to accelerate the implementation of a BW. Business Content includes: R/3 extractor programs, DataSources, InfoObjects, InfoSources, InfoCubes, queries, roles and workbooks.
From the Grouping menu, choose the additional Business Content objects that you want to include. Groupings gather together all of the objects from a single area:
Only Necessary Objects: only those additional objects that are needed to activate the objects that you have selected are included (minimum selection).
In Data Flow Before: all the objects that pass data on to another object are collected.
In Data Flow Afterwards: all the objects that receive data from another object are collected.
In Data Flow Before and Afterwards: all the objects that both deliver and receive data are collected.
Backup for System Copy: you use this setting to collect some of the objects into a transport request. This request can be imported again after a system copy has been made.
Q) I found that 0FISCYEAR has no master data table, and 0FISCPER has master data table T009. Does anyone know how the system gets the data for these two InfoObjects?
A) From the context menu of the source system → Transfer global settings; based on FISCVARNT you can take data for 0FISCYEAR and 0FISCPER.
Q) I am facing an odd problem in my dev box. I am using 0FIAP_O03 to load 0FIAP_C03 (InfoSource 0FI_AP_4 loads the ODS). I have loaded the ODS with R/3 data without any problem. I saw the data in the New Data table, and after activation I am able to see the data in the Active Data table and Change Log table. Now, when I want to do the delta initialization in the cube using the ODS data, the request fails miserably. In fact, all the update rules between the ODS and the cube are active, and all of them are one-to-one mappings (not a single ABAP routine in the update rules). If the cube and the corresponding update rules are active and the data loads perfectly up to the ODS, why would the load from the ODS to the cube fail? (There are no lock entries in the system.) Does anyone have any idea?
A) You must have a job log in SM37. If not, the job was never started.
Q) I have checked SM37; the job shows as complete! But the request status under InfoCube → Manage → Requests tab is still yellow. Assuming a false status, I checked for the data in the cube: no data there either. Do you have any clue why the job log would show as complete while no data appears in the cube? Regarding the export DataSource issue, I tried that too before posting the question. In fact, why would you go with an export DataSource when you would already have created one by creating update rules between the ODS and the cube? Sorry for the silly question, but any help in this regard is highly appreciated.
A) Hi, maybe you have to do a 'Generate Export DataSource' by right-clicking on the ODS.
Q) Actually I'm trying to create a simple standard [order flat data] ODS. When I tried activating it in the InfoProvider view of RSA1, it gets saved but not activated and throws up errors. I'm working on BW 3.1; I enabled only the BEx reporting flag in the settings. The errors are:
1. I couldn't find the 0RECORDMODE object (BW delta process: update mode object) among the key figure or characteristic InfoObjects on the left-hand side. So I tried to insert the 0RECORDMODE object on the right-hand side into the Data Fields folder; it shows the object when searching for 0RECORDMODE in the Insert InfoObjects option (on right-clicking the Data Fields folder), but once I enter this object and choose Continue, it doesn't get added to the Data Fields folder alongside my key figures.
2. 'Could not write to the output device T' error: when I just try to activate the ODS object, I get the error 'could not write to the output device T' in the status bar, and the status remains inactive. What could the error be?
Q) I need to populate the results of my query into another InfoCube. Any ideas on how I should go about this?
Q) For the current project I am looking at, they have two major companies in their group. The companies are on different clients in R/3, and their configuration and system setup differ. Is it advisable to separate their BW systems into two different clients as well, or is it recommended to fit both into one? What's the best practice?
Q) I am creating a CO-PA DataSource. I successfully set the Business Content DataSources to active versions in R/3. Then I tried to create a CO-PA DataSource for transaction data using KEB0; however, I cannot see any value fields (you know, VVOCO and those things).
Characteristics from the segment item and line model and others, including calculated key figures, are available fine, except the value fields. Is there any way I can make the value fields available?
Q) While executing a query, it generally takes BOOK1, BOOK2, etc. as the default name of the Excel sheet, but my client wants the default name to be the same as the query name.
A) Embed the query in a workbook saved with the name of the query, and have your client open the workbook instead of the query.
Q) Considering that I have 6 dimensions, 30 characteristics and 20 key figures in this InfoCube, do you see any way to make my upload process easier? Do you think the server will support that amount of data? Or what considerations should I add to make sure this upload process runs?
Q) We need to find the table (SAP BW) in which query and variable assignments are stored. We must read this table in a user exit to find out which variables are used in a query.
A) Check tables RSZELTDIR and RSZCOMPDIR for BEx query elements. From a previous posting: the variable table is RSZGLOBV; for the query, get the query ID from table RSRREPDIR (field RSRREPDIR-COMPUID) and use this ID in the tables starting with RSZEL*.
In the user exit you check the variable name via VNAM (e.g. 'ZFISPER1', or 'ZVC_FY1' for a characteristic). The processing steps are: step 1, before selection (before the variable screen); step 2, after selection (after user entry); step 3, all variables processed at the same time. A sketch of such an exit follows.
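A hedged sketch of such a customer-exit variable, assuming the classic BW reporting exit (CMOD include ZXRSRU01, called from function module EXIT_SAPLRRS0_001); the variable name ZFISPER1 and the derivation from sy-datum are illustrative:

  * Fill a characteristic variable in step 2, after the user selections.
    DATA: ls_range LIKE LINE OF e_t_range.  " row of the result range table

    CASE i_vnam.
      WHEN 'ZFISPER1'.                      " hypothetical exit variable
        IF i_step = 2.                      " 1 = before, 2 = after selection, 3 = validation
          CLEAR ls_range.
          ls_range-sign = 'I'.
          ls_range-opt  = 'EQ'.
          ls_range-low  = sy-datum(6).      " e.g. current period YYYYMM
          APPEND ls_range TO e_t_range.
        ENDIF.
    ENDCASE.

Inside the exit you can also read the values already entered for other variables via I_T_VAR_RANGE, for example to derive a period range from a user-entered fiscal year.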
Q) Actually the ODS has data up to date (09 Dec) coming from 2 DataSources, but the InfoCube has data only up to 8 November, as we deleted a few requests because of a mismatch of notification complaints encountered during reporting. When we tried to update the data through "Update ODS Data in data target" with the delta update option, we got the error message "Delta update for ZUODEC02 (ODS) is invalidated". Please let me know the solution to this problem. Can we initialize the delta update again for the cube?
Q) How is the display of USD as currency controlled, to be seen as USD or $?
A) You can control the currency display with the following customizing point: BW Customizing Implementation Guide → Reporting-relevant Settings → General Reporting Settings → Set Alternative Currency Display. In this table you can specify the symbol or string you want to use.
If this table is empty for USD, the symbol used in the BEx output is $.
Q) Deleting data from the PSA?
A) Context menu of the PSA and delete data, or context menu → Edit several requests, delete the required request; and in the Reconstruction tab of the cube's Manage screen, delete the request.
Q) If you update data from an ODS to a data target, the system generates an InfoSource with what prefix?
A) It generates an InfoSource whose name starts with the prefix 8, followed by the ODS name.
Q) How to check physically that a data load is running?
A) At the bottom of the screen while updating, you will see it executing with the tRFC naming convention; just have a look when you update from the ODS.
Q) What is an aggregate?
A) Aggregates are small or "baby" cubes: a subset of an InfoCube. Flat aggregate: when you have more than 15 characteristics in an aggregate, the system generates that aggregate as a flat aggregate to increase performance. Roll-up: when data is loaded into the cube a second time, we have to roll up to make that data available in the aggregate.
Q) X & Y Tables?
A) X-tables and Y-tables only have a primary key: an X-table holds the SID-to-key relationship plus SID columns per time-independent navigational attribute; a Y-table holds the SID-to-key relationship plus a timestamp and SID columns per time-dependent navigational attribute.
Q) Routine with return table?
A) Update rules generally only have one return value. However, you can create a routine in the key figure calculation tab strip by choosing 'Return table'. The corresponding key figure routine then no longer has a return value, but a return table, and you can generate as many key figure values as you like from one data record. In the routine editor you will find the characteristic values calculated so far in the structure ICUBE_VALUES. Change these values accordingly (in the example above: employee), fill the field for the relevant key figure (in the example above: sales revenue), and use this to fill the return table RESULT_TABLE (see the sketch after the earlier 'Routine with Return Table' question).
Q) What is line-item data? In which scenario do you use a line-item dimension?
A) A line-item dimension in a fact table does not have a dimension table; it connects
directly with the SID table of its sole characteristic. When there is only one characteristic in a dimension, the dimension table is eliminated and the fact table is linked directly to the SID table.
Compressing InfoCubes
Use: When you load data into the InfoCube, entire requests can be inserted at the same time. Each of these requests has its own request ID, which is included in the fact table in the packet dimension. This makes it possible to pay particular attention to individual requests; one advantage of the request ID concept is that you can subsequently delete complete requests from the InfoCube. However, the request ID concept can also cause the same data record (all characteristics agree, with the exception of the request ID) to appear more than once in the fact table. This unnecessarily increases the volume of data and reduces performance in reporting, as the system has to aggregate using the request ID every time you execute a query. Using compression, you can eliminate these disadvantages and bring data from different requests together into one single request (request ID 0). This function is critical, as the compressed data can no longer be deleted from the InfoCube using its request IDs; you must be absolutely certain that the data loaded into the InfoCube is correct.
Functions: You can choose request IDs and release them to be compressed. You can schedule the function immediately or in the background, and link it to other events. Compressing one request takes approx. 2.5 ms per data record (so a request of 1 million records takes roughly 42 minutes). With non-cumulative InfoCubes, compression has an additional effect on query performance: the marker for non-cumulatives in non-cumulative InfoCubes is updated, which means that, on the whole, less data is read for a non-cumulative query and the response time is therefore reduced. If you run compression for a non-cumulative InfoCube, the compression time (including the time to update the markers) will be about 5 ms per data record.
If you are using an Oracle database as your BW database, you can also run a report on the relevant InfoCube while the compression is running. With other manufacturers' databases, you will see a warning if you try to run a report on an InfoCube while compression is running; in this case you can only report on the relevant InfoCube once the compression has finished.
If you want to avoid the InfoCube containing entries whose key figures are zero values (in reverse posting, for example), you can run a zero-elimination at the same time as the compression.
the compression. In this case, the entries where all key figures are equal to 0 are deleted from the fact table. Zero-elimination is permitted only for InfoCubes in which exclusively key figures with the aggregation behavior 'SUM' appear; in particular, you are not permitted to run a zero-elimination with non-cumulative values.
Activities
For performance reasons, and to save memory space, compress a request as soon as you have established that it is correct and no longer needs to be removed from the InfoCube.
Q) You can convert currencies at 2 places?
A) One is at update rule level and one at the front end (in reports).
Q) You can create InfoSpokes only on 4 objects.
A) ODS objects and InfoCubes, plus master data attributes and texts; not on hierarchies or MultiProviders. InfoSpokes are used for offloading data from BW, not only into Excel but also to the application server as files and to database tables.
Q) If you use process chains, you can automate the complex schedules in BW with the help of event-controlled processing, visualize the schedule using the network display, and centrally control and monitor the processes.
Q) Transportation
For BEx you can have only one transport request. There should always be one open transport if you want to create or edit queries; all the queries you create or edit are assigned to this request. If there is no open request, you cannot edit or create any queries. Once you have some queries in a request and want to transport them, release that request and immediately create a new one so that you can again create or change queries.
Q) Ours is a SAP BW system with a non-SAP source system. Every day we check the data loading through the monitor. Is there any way to connect our mail server to it and get automatic e-mails from the BW server every time the transaction data has been loaded?
A) Go to transaction SBWP; there you can set it up for getting automatic mails.
A) Write a transfer routine for the InfoObjects to handle the extended length of the data type.
Q) How can we stop loading data into an InfoCube?
A) First find the job name from the monitor screen for this load (it is shown under the Header tab strip of the monitor screen). In SM37 (job monitoring) in R/3, select this job and delete it from the menu; there are some options, just check them out in SM37. You also have to delete the request in BW. Cancellation is not advisable for a delta load.
Q) If the ship-to address is changed, will it trigger a delta for the 2LIS_11_VAITM extractor?
A) If you mean changing the ship-to address information in the ship-to master data, that has nothing to do with transactional data, so no delta will appear in the 2LIS_11_VAITM extractor. If the address is changed for the ship-to on the sales order document itself (not master data), that address is saved in the ADRC table.
Q) How do you reset the data mart status in an ODS? Which transaction is used for resetting the status?
A) Go to the ODS and, from the context menu, choose Manage, then reset the data mart status (third column). If necessary, restart the load from the data mart, which uses InfoSource "8<ODS name>", using the Update tab -> "Initialize Delta Process" -> "Initial Load without Data Transfer" after resetting the data mart status.
Q) We are facing a problem with authorizations for BPS. We have a large number of agents that need to enter plan data via a layout. In order to simplify control of the necessary authorizations, we would like to filter via something similar to a user exit, using a function module, to avoid having to define authorization objects for each of the agents who have access to the system. Right now we are not sure whether a user exit concept is available as it is for BW variables.
A) In BPS you can use user-specific variables, or you can set up a variable of type exit. You can also have a variable of type authorization, which uses the security/authorizations of the BW system.
Q) Here are the steps for a process chain
A) Call transaction RSPC. Create a process chain with the start process; here you can schedule your whole process chain (at once, by an event, by day, and so on). Call the icon "process types";
within the process types, call the InfoPackage you need to load the data from the source system to the ODS, activate the ODS, update the ODS into the InfoCube, and roll up the data from the InfoCube to the aggregate. The steps from 2 to 7 are connected by events; you can create events by holding the right mouse button down and dragging from the predecessor process to the successor process, whereupon the system asks whether the successor should run "independent of errors, only on error, or only on success". In such a way you can create the process chain.
Copying process chains: there is a way to copy a process chain. In the RSPC view, open the desired process chain to be copied, type COPY into the transaction field at the top left of the screen, and the process chain will be copied. (Verified on BW 3.1C; perhaps this works in 3.0 as well.)
Q) I want to add the ADNR (address number) of the ship-to party to the standard extractor in LBWE. We have one-time ship-to recipients and we want to be able to report on the state of the recipient rather than the state of the sold-to. I just wanted verification that I might be going in the right direction. In order to add this field, could I add the ADNR to the include MCPARTUSR section of the MCPARTNER structure (and hopefully those fields would then be available in the MCVBAK and MCVBAP communication structures)? Or add an additional append ZMCPART and then populate the field with a user exit during the extraction process? Or I could add it directly to MCVBAK in a "Z" append and then populate the field with a user exit during extraction. Has anyone attempted something like this before? Is that what I need to do to get it into the communication structures for the LBWE logistics cockpit to use? I haven't seen many posts or documentation covering the specifics; I saw a bunch on master data augmentation, but not on the transaction extractors.
A) We ultimately did add a few fields to the structure and then populated them with a user exit (a hedged sketch of such an exit follows below). Every document is associated with an address, so another option is to give 0DOCUMENT an attribute called ADDR, or a full set of attributes that include addresses, then build an InfoSet on R/3 to populate the 0DOCUMENT object directly with addresses; treat the address as "master data" of the 0DOCUMENT object rather than part of the transactional data. I THINK it would work but am not certain yet.
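A minimal sketch of the user-exit approach, assuming a field ZZADNR has been appended to the extract structure (MC11VA0ITM for 2LIS_11_VAITM) and that the coding goes into enhancement RSAP0001, component EXIT_SAPLRSAP_001, include ZXRSAU01. The parameter names (I_DATASOURCE, C_T_DATA) are those of the transaction-data exit and may vary by release, and the VBPA lookup is an assumption about where the document address number lives:

* Hedged sketch: fill the appended field ZZADNR during extraction.
DATA: l_s_item TYPE mc11va0itm.

CASE i_datasource.
  WHEN '2LIS_11_VAITM'.
    LOOP AT c_t_data INTO l_s_item.
*     Read the ship-to partner (PARVW 'WE') of the order item and
*     take its address number from VBPA; one-time addresses are
*     stored there with their ADRC address number.
      SELECT SINGLE adrnr FROM vbpa
        INTO l_s_item-zzadnr
        WHERE vbeln = l_s_item-vbeln
          AND posnr = l_s_item-posnr
          AND parvw = 'WE'.
      MODIFY c_t_data FROM l_s_item.
    ENDLOOP.
ENDCASE.

The SELECT inside the loop is kept simple for illustration; in a real exit you would buffer the lookups for performance.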
Q) Does anybody know how to create data marts for master data characteristics? For example, I have two master data characteristics, 0MAT_PLANT and ZMAT_PLANT, in the same BW system. Now I want to upload the data from 0MAT_PLANT to ZMAT_PLANT using a delta-enabled InfoPackage.
A) Data marts are a functionality of an ODS (you need an extraction program, created by the ODS, to make the upload into the data target), not a functionality of master data tables. Therefore first create an ODS for your object (like 0MAT_PLANT).
Q) To check the version of a particular InfoObject, use table RSDIOBJ.
Q) I have a break on one document between R/3 and BW. The document does exist in BSEG but is missing from BW. All the other documents posted during that time were extracted into BW except this one. What could be the reason?
A) The record may be missing because of logic in the update rules or transfer rules; there may be a filter on that particular record there. First check whether the record exists in RSA3, then check in the PSA whether the record reached BW, then check the update rules.
Q) What's the difference between R/3 drill-down reporting and BW reporting? These two seem to function similarly, with slice and dice. If they function similarly, does that mean we don't have to implement BW and can just use the R/3 drill-down reporting tool?
A) The major benefits of reporting with BW over R/3 are performance and analysis.
1. Performance: heavy reporting alongside regular OLTP transactions can put a lot of load on both R/3 and the database (CPU, memory, disks, etc.). Just look at the load on your system during a month end, quarter end or year end, and imagine that occurring even more frequently.
2. Data analysis: BW uses data warehouse and OLAP concepts for storing and analyzing data, whereas R/3 was designed for transaction processing. With a lot of work you can get the same analysis out of R/3, but it is much easier from BW.
Features in BI 7.0 or NetWeaver 2004s
Below are the features in the SAP BI 7.0 version; some are new and others are tweaked from previous versions.
Metadata Search (Developer Functionality):
1. It is possible to search BI metadata (such as InfoCubes, InfoObjects, queries, Web templates) using the TREX search engine. This search is integrated into the Metadata Repository, the Data Warehousing Workbench, and to some degree into the object editors. With the simple search, a search for one or all object types is performed on technical names and texts. 2. During the text search, lower and upper case are ignored, so an object is also found when the case in the text differs from that in the search term. With the advanced search you can also search in attributes; these attributes are specific to each object type. Beyond that, the search can be restricted for all object types by the person who last made a change and by the time of the change. 3. For example, you can search all queries that were changed in the last month and that include both the term "overview" in the text and the characteristic customer in the definition. Further functions include searching in the delivered (D) version, fuzzy search, and the option of linking search terms with "AND" and "OR".
4. "Because the advanced search described above offers more extensive options for search in metadata, the function ""Generation of Documents for Metadata"" in the administration of document management (transaction RSODADMIN) was deleted. You have to schedule (delta) indexing of metadata as a regular job (transaction RSODADMIN).
Effects on Customizing
Installation of TREX search engine
Creation of an RFC destination for the TREX search engine
Entering the RFC destination into table RSODADMIN_INT
Determining relevant object types
Initial indexing of metadata
Remote Activation of DataSources (Developer Functionality):
1. When activating Business Content in BI, you can activate DataSources remotely from the BI system. This activation is subject to an authorization check. You need role SAP_RO_BCTRA. Authorization object S_RO_BCTRA is checked. The authorization is valid for all DataSources of a source system. When the objects are
collected, the system checks the authorizations remotely and issues a warning if you lack authorization to activate the DataSources. 2. In BI, if you trigger the transfer of the Business Content in the active version, the results of the authorization check are based on the cache. If you lack the necessary authorization for activation, the system issues a warning for the DataSources and an error for the corresponding source-system-dependent objects (transformations, transfer rules, transfer structure, InfoPackage, process chain, process variant). In this case, you can use Customizing for the extractors to manually transfer the required DataSources in the source system from the Business Content, replicate them in the BI system, and then transfer the corresponding source-system-dependent objects from the Business Content. If you have the necessary authorizations for activation, the DataSources in the source system are transferred to the active version and replicated in the BI system, and the source-system-dependent objects are activated in the BI system. 3. The source system and the BI system both have to be on at least the BI Service API of SAP NetWeaver 2004s; otherwise remote activation is not supported, and you have to activate the DataSources in the source system manually and then replicate them to the BI system.
How to automate flat file loads into SAP BW?
Flat file uploads are a pain in the neck in most cases. However, often there is no option except flat file uploads, because of non-SAP source systems or because business users have to consolidate source files from different systems. In any case, automating these flat file loads saves much-needed time and headache for the SAP BW support team and avoids the delay of e-mailing the file to the support team. It also helps with the security of sensitive data in flat files. There are different options to automate flat file loads; you have to choose the best option for your environment, depending on your system architecture, available interfaces, the type of scheduling tool being used, etc. For instance, there are scheduling tools (like IBM
Maestro/Tivoli) that have features such as being able to start jobs based on the presence of a file on the application server. It is relatively easy and straightforward to automate if you are using such a tool. There are three critical parts in automating flat file loads:
1. Getting the flat file onto the app server (the flat file needs to be on the app server to be able to schedule the InfoPackage in the background; you cannot schedule a background load from a client workstation).
2. Detecting the flat file automatically and starting the job/process chain to load the file.
3. Actually loading the file, checking for format errors, sending user notifications, and post-load cleanup steps like archiving/moving files.
Getting the flat file onto the app server: For this step you can use an FTP client to upload the file from the user's computer to a designated
directory/folder on the app server. This works where a single user or small user group is responsible for generating the flat files: give the user/user group upload access to the folder where flat files reside before being loaded into BW, and users can upload the file using an FTP client like WS_FTP Pro. This approach won't work for a
large user group because of file overwrite and timestamp issues. The better approach for a large user group is to use FTP scripts to consolidate and transfer the file to the app server from Windows shared folders. This is how the second approach works:
1. Users put their source files onto a common shared Windows NT drive (folder).
2. A batch script consolidates the files (if there is more than one) and FTPs the result to the app server.
3. This Windows batch script can be scheduled to run every 30 minutes or 1 hour, depending on the requirement.
4. The advantage of this approach is that no FTP client training is needed for end users, the script takes care of the multiple-file issue, and it is scalable: new users can easily be allowed to upload files, since all they need is access to the shared folder.
Detecting the flat file automatically and starting the job/process chain:
This is the critical part. If you are using a scheduling tool, check its features to see whether it can schedule jobs dependent on the presence of a file on the app server; IBM Maestro/Tivoli supports this, and other tools might as well. You need to resubmit the schedule at the end of the load to be able to load multiple files in a single day. If your scheduling tool supports this, all you need to do is make the jobs dependent on the file: you define the file name and directory path when you define the jobs in the tool, and the tool checks for the file at regular intervals (10 or 15 minutes) and starts the schedule when the file is present. Don't worry if you are not using a scheduling tool or your scheduling tool does not support
this; there are other options. You can write an ABAP program to check whether a file is present on the app server and use it to trigger the process chain that loads the actual file. This is how it works:
1. The ABAP program is scheduled to run every 30 minutes or 1 hour, depending on the requirement. It raises an event if the file is present.
2. A process chain is scheduled based on the event from the step above. This process chain loads the file into BW.
A hedged sketch of such a file-check program is shown below.
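A minimal sketch of the file-check program; the program name, the file path and the event name ZBW_FLATFILE are assumptions, and the event must first be created in SM62:

REPORT zbw_check_flatfile.

* Hedged sketch: raise a background event when the flat file is
* present on the application server.
CONSTANTS: c_file  TYPE string VALUE '/interfaces/bw/in/flatfile.csv',
           c_event TYPE btceventid VALUE 'ZBW_FLATFILE'.

* If the file can be opened, it exists.
OPEN DATASET c_file FOR INPUT IN TEXT MODE ENCODING DEFAULT.
IF sy-subrc = 0.
  CLOSE DATASET c_file.
* Raise the event; the process chain's start process listens for it.
  CALL FUNCTION 'BP_EVENT_RAISE'
    EXPORTING
      eventid = c_event
    EXCEPTIONS
      OTHERS  = 1.
  IF sy-subrc <> 0.
    WRITE: / 'Event could not be raised, sy-subrc =', sy-subrc.
  ENDIF.
ENDIF.

Schedule this report every 30 minutes via SM36, and set the process chain's start condition to "After event" with the same event name.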
Post-load and cleanup activities:
You need to do some housekeeping activities to avoid overlapping loads and to keep track of files and timestamps for troubleshooting in case of data issues.
1. The first step is to rename the file at the beginning of the process chain/schedule. For instance, if your file name is flatfile.csv, rename it to flatfile.csv.load. This avoids overlapping loads: assume your ABAP program runs every 30 minutes, detects the file and starts the load at 10 AM, and the file is so large that it takes more than 30 minutes to load. The next run of the ABAP program at 10:30 AM would detect the same file and trigger the process chain before the first load is complete; renaming the file avoids this.
2. Check the flat file for formatting issues: the most common problems with flat files are formatting issues. You can write an ABAP program to check for formatting issues beforehand, notify the users, and skip the data load.
3. Archive the file to an archive directory after a successful load and notify the users. You can use a UNIX command (if your scheduling tool supports it), an ABAP program, or an SM36 job for this. A hedged sketch of renaming/archiving the file from ABAP follows.
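Since ABAP has no rename statement for application server files, the usual workaround is to copy the file to the archive name and delete the original. A minimal sketch, with the same illustrative paths as above:

REPORT zbw_archive_flatfile.

* Hedged sketch: archive a flat file on the application server by
* copying it to a dated archive name and deleting the original.
DATA: l_src  TYPE string VALUE '/interfaces/bw/in/flatfile.csv',
      l_dst  TYPE string,
      l_line TYPE string.

CONCATENATE '/interfaces/bw/archive/flatfile.csv.' sy-datum INTO l_dst.

OPEN DATASET l_src FOR INPUT IN TEXT MODE ENCODING DEFAULT.
IF sy-subrc <> 0.
  WRITE: / 'Source file not found.'.
  RETURN.
ENDIF.

OPEN DATASET l_dst FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.

DO.
  READ DATASET l_src INTO l_line.
  IF sy-subrc <> 0.
    EXIT. " end of file
  ENDIF.
  TRANSFER l_line TO l_dst.
ENDDO.

CLOSE DATASET: l_src, l_dst.

* Delete the original so the next run of the check program does not
* trigger another load.
DELETE DATASET l_src.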
How to change process chain (PC) status in SAP BW?
There are scenarios where you need to change the status of a process chain or of a particular step in a process chain. It is easy to change the status of a step if it is a data load: you just change the status of the request in the monitor, which in turn changes the status in the process chain. The problem is other types of processes, like master data activation, custom ABAP programs, etc., where there is no straightforward way to change the status. You might need to do this when you have to mark such steps successful so that dependent steps get processed, or when you need to change the status of the whole process chain.
Step-by-step instructions on changing process chain status:
1. Right-click on the failed step in the process chain monitor and go to 'Display messages'.
2. Go to the 'Chain' tab and note down variant, instance and start date.
3. Go to SE16, open table RSPCPROCESSLOG, enter the variant, instance and start date from step 2, and note down LOG_ID, TYPE, VARIANT and INSTANCE.
4. Go to transaction SE37, execute the function module RSPC_PROCESS_FINISH, enter the values from step 3 plus the new status 'G' (green) in the status field, and execute the FM.
5. This sets the status of the process chain (PC). After you set the status using the FM, go to the monitor screen of the process chain and you will notice the changed status; dependent steps in the process chain will now start running. A hedged sketch of calling the FM from a small program is shown below.
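If you have to do this often, you can wrap the call in a small program instead of using SE37 each time. A minimal sketch, assuming the commonly documented parameter names of RSPC_PROCESS_FINISH (they can differ slightly by release, so check the FM interface in SE37 first); the selection values are the ones noted from RSPCPROCESSLOG:

REPORT zbw_set_pc_status.

* Hedged sketch: set a process chain step to green ('G') via
* RSPC_PROCESS_FINISH.
PARAMETERS: p_logid TYPE rspc_logid,
            p_type  TYPE rspc_type,
            p_var   TYPE rspc_variant,
            p_inst  TYPE rspc_instance.

CALL FUNCTION 'RSPC_PROCESS_FINISH'
  EXPORTING
    i_logid    = p_logid
    i_type     = p_type
    i_variant  = p_var
    i_instance = p_inst
    i_state    = 'G'. " G = successfully completed (green)

WRITE: / 'Status set to G for log', p_logid.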
SAP BW - Navigational Attributes - Importance, Usage, How to?
Navigational Attributes in SAP BW
A navigational attribute is like any other master data attribute except that we can navigate (filter, drill down, select) on it in reports. Normally you can navigate only on characteristics of the InfoCube or MultiProvider, but there are many scenarios where you want users to navigate based on a master data attribute, because it is not always possible to have every required field as a characteristic.
The alternative, adding a new characteristic to the InfoCube, means either doing a lookup in BW or enhancing the extractor on the ECC side; this is cumbersome, delta on the new fields may not work, and most importantly the historical data will not have the field populated. The solution is to use navigational attributes: no lookups needed, no delta worries, and a navigational attribute works for historical data as well. The only disadvantage is a small impact on report performance, so you don't want too many navigational attributes in an InfoCube.
How to turn on navigational attributes?
First you need to switch them on in the master data: go to RSD1 -> Attribute tab -> Detail/Navigation Attributes -> "Navigation attribute on/off" column. After this you need to switch them on in the required InfoCubes and MultiProviders under the navigational attributes section.
Caution: Transport of the master data object might take a long time when you switch on navigational attributes in cases where the master data volume is high. This is because SAP inserts SIDs into the attribute SID table (/BIC/X*) for the new navigational attributes.
SAP BW : PSA and Changelog Deletion
It is best practice to have a strategy for PSA and changelog deletion. For PSA data, below are the best practices in my view.
PSA Deletion in SAP BW/BI (Business Intelligence)
1. Transaction data: keep up to 45 days; delete requests older than 45 days. The rationale is that month-end reporting is done within the first two weeks of the month, so by keeping 45 days we have the opportunity to reload/recover data from the PSA if there are issues with the data. 2. Master data: it depends on whether the load is attributes or texts, delta or full.
Attribute delta loads: 2 weeks. Attribute full loads: 1 week. Texts: one week, or no PSA at all.
Change log data: I think there is a lot of confusion in forums about deleting changelog data. Some say we can delete changelog data without problems; some say we will get incorrect data if there are later updates for old records whose changelog entries were deleted.
For the record, change log data can be deleted; there are no issues even when changes later arrive for records whose changelog entries we deleted. Here is how the change log / delta from ODS to cube works.
1. Data is first loaded into the new data table (this is before ODS activation).
2. ODS activation: during activation, SAP compares the records in the new table with the records in the active table, generates changelog records (before and after images), and stores them under a unique ID that corresponds to the request ID in the active table. Records in the new table are deleted at the end of activation.
3. When you run a delta from the ODS to the cube, it loads the records from the changelog table based on request ID. After the delta load is complete, the changelog records are never used again, unless you want to reconstruct/reload data request by request.
4. Even if you need to reload, you do it with full or delta-init loads, in which case data is read from the active table. A small illustrative example of changelog records follows.
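For illustration, suppose a record for customer C1 with amount 100 is loaded and activated, and a later load changes the amount to 120. The changelog (values hypothetical, sketching the usual before/after-image mechanism with 0RECORDMODE) then contains roughly:

  Request  Customer  Amount  0RECORDMODE
  REQ1     C1         100                 (first after image)
  REQ2     C1        -100    X            (before image, reverses old value)
  REQ2     C1         120                 (new after image)

A delta from the ODS to a cube loads the -100 and +120 rows, so the cube nets out to 120 without double counting; once a changelog request has been delta-loaded to all targets, deleting it does not corrupt later deltas.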
How to delete changelog or PSA data in SAP BW?
OK, now you have decided to delete the change log and PSA data that has been sitting out there. How do you know how many requests there are, and which are the big changelogs/PSAs to get rid of first to save some much-needed space? You can go to table RSTSODSREQUEST and export the list to an Excel or Access file; it gives the number of requests per PSA and ODS. The actual deletion of the change log can be done either in process chains or from the ODS: Manage -> Environment -> Delete change log data.
Tips and Tricks to improve SAP BW - BI performance
20 technical tips and tricks to speed up SAP Business Intelligence (BW/BI) query and report performance, by Dr. Bjarne Berg. This is a wonderful presentation by Dr. Bjarne Berg, who is considered an SAP BI guru; it gives 20 tips on improving SAP BW/BI performance and covers the following topics. Download the presentation from the attachments section below, and check out the resources to learn more about Dr. Berg and his work.
Performance Issues & Tips
Multiproviders and Partitioning
Aggregates
Query Design & Caching
Hardware & Servers
Designing for Performance
InfoCubes and DSOs
BI Accelerator
Sizing and Implementation
Management and Costs
Early Watch Reports
Information Broadcasting in SAP BI 7.0
This SAP document gives step-by-step instructions on troubleshooting information broadcasting in SAP BW/BI. Information broadcasting is used to send reports to users by e-mail, publish reports to the portal, or send reports to a printer. SAP has added new features to information broadcasting in SAP BI 7.0 and improved the overall functionality: in BI 7.0 you can also broadcast workbooks in addition to queries, there is no need to set up the separate precalculation server that 3.5 requires for broadcasting workbooks, and you can now send reports in PDF format. The first document attached to this article, 'Information Broadcasting with SAP NetWeaver 04', gives an overview of the information broadcasting features in SAP BI 7.0, explaining with examples where and how to use information broadcasting in different scenarios. The second document, 'Howto-troubleshoot-information-broadcasting.pdf', gives step-by-step instructions on troubleshooting information broadcasting. Getting information broadcasting to work with workbooks is not a trivial thing, and this document helps in the troubleshooting process. Download these documents from the attachments section below.
SAP Transaction Codes (tcodes): T-code Search Results for "datasource"
RSA6 - Maintain DataSources (Basis - BW Service API)
RSDS - DataSource (BW - Data Staging)
RSA2 - SAPI DataSource Repository (Basis - BW Service API)
KEB0 - Create CO-PA DataSource (CO - Profitability Analysis)
RSA8 - DataSource Repository (Basis - BW Service API)
RSA15 - DW Workbench: DataSource Tree (BW - Administrator Workbench)
FAGLBW03 - Assign Gen. Ledger DataSource/Ledger (FI - General Ledger Accounting)
RSDSD - DataSource Documentation (BW - Data Staging)
CRMBWST - Generated DataSource for BW Status Obj (Cross Application - Cross Application Components)
RSDPMOLAPDS - MOLAP DataSource creation (BW - OLAP Technology)
BWST - Gener. DataSource for BW Status Obj. (Project Systems - Information System)
KEB1 - CO-PA Hierarchy DataSource (CO - Profitability Analysis)
RSA2OLD - SAPI DataSource (Old GUI) (Basis - BW Service API)
FCIWCU00 - Generate DataSources (Enterprise Controlling - Forwarding to SAP BW)
KCBW - EC-EIS/BP: Generate DataSource (Enterprise Controlling - Executive Information System)
--------------------- datasource related Transaction Codes----------------------
RSA3 - Extractor Checker (Basis - BW Service API)
RSA6 - Maintain DataSources (Basis - BW Service API)
RSA5 - Install Business Content (Basis - BW Service API)
RSA7 - BW Delta Queue Monitor (Basis - BW Service API)
RSO2 - Oltp Metadata Repository (Basis - BW Service API)
LBWE - LO Data Ext.: Customizing Cockpit (Logistics - Logistics Information System (LIS))
RSA1 - Modeling - DW Workbench (BW - Administrator Workbench)
CMOD - Enhancements (Basis - Customer Enhancements)
SE11 - ABAP Dictionary Maintenance (Basis - Dictionary Maintenance)
RSDS - DataSource (BW - Data Staging)
SAP Transaction Codes (tcodes): T-code Search Results for "LO Cockpit"
--------------------- LO Cockpit related Transaction Codes----------------------
LBWE - LO Data Ext.: Customizing Cockpit (Logistics - Logistics Information System (LIS))
RSA7 - BW Delta Queue Monitor (Basis - BW Service API)
LBWQ - Logistics Queue Overview (Logistics - Logistics Information System (LIS))
LBWG - Delete Newly Reorg. BW Data (Logistics - Logistics Information System (LIS))
RSA3 - Extractor Checker (Basis - BW Service API)
SBIW - BIW in IMG for OLTP (Basis - BW Service API)
SM13 - Administrate Update Records (Basis - Client/Server Technology)
OLI7BW - Reorg. of VIS Extr. Struct.: Order (Logistics - Logistics Information System (LIS))
RSA6 - Maintain DataSources (Basis - BW Service API)
CMOD - Enhancements (Basis - Customer Enhancements)
RSA5 - Install Business Content (Basis - BW Service API)
SPRO - Customizing - Edit Project (Basis - Customizing Project Management (IMG))
DBACOCKPIT - Start DBA Cockpit (Basis - DB2 Universal Database for UNIX / NT)
SMQ1 - qRFC Monitor (Outbound Queue) (Basis - RFC)
SM37 - Overview of job selection (Basis - Background Processing)
OLI9BW - Reorg. VIS Extr. Str.: Invoices (Logistics - Logistics Information System (LIS))
SE11 - ABAP Dictionary Maintenance (Basis - Dictionary Maintenance)
RSA2 - SAPI DataSource Repository (Basis - BW Service API)
BMBC - Batch Information Cockpit (Logistics - Batches)
OLI8BW - Reorg. VIS Extr. Str.: Delivery (Logistics - Logistics Information System (LIS))
SAP Transaction Codes (tcodes): T-code Search Results for "Copa Datasource"
--------------------- Copa Datasource related Transaction Codes----------------------
RSA3 - Extractor Checker (Basis - BW Service API)
RSA6 - Maintain DataSources (Basis - BW Service API)
RSA7 - BW Delta Queue Monitor (Basis - BW Service API)
RSA5 - Install Business Content (Basis - BW Service API)
RSO2 - Oltp Metadata Repository (Basis - BW Service API)
RSA1 - Modeling - DW Workbench (BW - Administrator Workbench)
LBWE - LO Data Ext.: Customizing Cockpit (Logistics - Logistics Information System (LIS))
SE11 - ABAP Dictionary Maintenance (Basis - Dictionary Maintenance)
CMOD - Enhancements (Basis - Customer Enhancements)
KEB0 - Create CO-PA DataSource (CO - Profitability Analysis)
SBIW - BIW in IMG for OLTP (Basis - BW Service API)
RSDS - DataSource (BW - Data Staging)
KE30 - Execute profitability report (CO - Profitability Analysis)
KE24 - Line Item Display - Actual Data (CO - Profitability Analysis)
RSA2 - SAPI DataSource Repository (Basis - BW Service API)
SE38 - ABAP Editor (Basis - ABAP Editor)
KE4I - View maintenance VV2_T258I_V (CO - Profitability Analysis)
SE16 - Data Browser (Basis - Workbench Utilities)
KEDR - Maintain Derivation Strategy (CO - Profitability Analysis)
KE21N - CO-PA Line Item Entry (CO - Profitability Analysis)
SAP Transaction Codes (tcodes): T-code Search Results for "content extractors"
--------------------- content extractors related Transaction Codes----------------------
WCM - Work Clearance Management (PM - Work Clearance Management)
OAC0 - CMS Customizing Content Repositories (Basis - Content Management Service)
CSADMIN - Content Server Administration (Basis - Content Management Service)
RSA1 - Modeling - DW Workbench (BW - Administrator Workbench)
RSA5 - Install Business Content (Basis - BW Service API)
KPRO - KPRO Administration (Basis - Content Management Service)
SPRO - Customizing - Edit Project (Basis - Customizing Project Management (IMG))
OACT - Maintain Categories (Basis - Content Management Service)
RSA3 - Extractor Checker (Basis - BW Service API)
SWDC - Workflow Definition: Administration (Basis - SAP Business Workflow)
SXMB_MONI - Integration Engine - Monitoring (Basis - Integration Engine)
SE38 - ABAP Editor (Basis - ABAP Editor)
XSLT - XSLT tester (Basis - XML Generic Technology)
SP01 - Output Controller (Basis - Print and Output Management)
OAC3 - SAP ArchiveLink: Links (Basis - ArchiveLink)
SBIW - BIW in IMG for OLTP (Basis - BW Service API)
SICF - HTTP Service Hierarchy Maintenance (Basis - Internet Communication Framework)
SMICM - ICM Monitor (Basis - Internet Communication Manager)
SM59 - RFC Destinations (Display/Maintain) (Basis - RFC)
SE11 - ABAP Dictionary Maintenance (Basis - Dictionary Maintenance)
SAP Transaction Codes (tcodes): T-code Search Results for "generic datasource"
--------------------- generic datasource related Transaction Codes----------------------
RSA3 - Extractor Checker (Basis - BW Service API)
RSA6 - Maintain DataSources (Basis - BW Service API)
RSO2 - Oltp Metadata Repository (Basis - BW Service API)
RSA7 - BW Delta Queue Monitor (Basis - BW Service API)
RSA5 - Install Business Content (Basis - BW Service API)
RSA1 - Modeling - DW Workbench (BW - Administrator Workbench)
LBWE - LO Data Ext.: Customizing Cockpit (Logistics - Logistics Information System (LIS))
SE11 - ABAP Dictionary Maintenance (Basis - Dictionary Maintenance)
CMOD - Enhancements (Basis - Customer Enhancements)
SBIW - BIW in IMG for OLTP (Basis - BW Service API)
RSDS - DataSource (BW - Data Staging)
SE16 - Data Browser (Basis - Workbench Utilities)
RSA2 - SAPI DataSource Repository (Basis - BW Service API)
SE38 - ABAP Editor (Basis - ABAP Editor)
LBWQ - Logistics Queue Overview (Logistics - Logistics Information System (LIS))
SE37 - ABAP Function Modules (Basis - Function Builder)
CTBW - Table Maint. for BW and Classes (Cross Application - Environment)
SM37 - Overview of job selection (Basis - Background Processing)
SMQ1 - qRFC Monitor (Outbound Queue) (Basis - RFC)
ST22 - ABAP dump analysis (Basis - Syntax, Compiler, Runtime)
SAP Transaction Codes (tcodes): T-code Search Results for "Bex"
RS_PERS_ACTIVATE - Activation of BEx Personalization (BW - Business Explorer)
RS_PERS_BOD_DEACTIVA - Deactivate Pers. for BEx Open (BW - Business Explorer)
RS_PERS_BOD_ACTIVATE - Activate BEx Open Pers. (BW - Business Explorer)
RRMB - Upload Screens from BEx Browser (BW - Business Explorer)
--------------------- Bex related Transaction Codes----------------------
RSRT - Start of the report monitor (BW - OLAP Technology)
RRMX - Start the Business Explorer Analyzer (BW - Business Explorer)
RSA1 - Modeling - DW Workbench (BW - Administrator Workbench)
SPRO - Customizing - Edit Project (Basis - Customizing Project Management (IMG))
PFCG - Role Maintenance (Basis - Authorization and Role Management)
CMOD - Enhancements (Basis - Customer Enhancements)
RSECADMIN - Manage Analysis Authorizations (BW - OLAP Technology)
ST01 - System Trace (Basis - Low Level Layer)
SICF - HTTP Service Hierarchy Maintenance (Basis - Internet Communication Framework)
SWDC - Workflow Definition: Administration (Basis - SAP Business Workflow)
SAP Transaction Codes (tcodes): T-code Search Results for "BWA"
MEREP_BWAFDEL - delivery of BWAFMAPP entries (Basis - SAP NetWeaver Mobile)
--------------------- BWA related Transaction Codes----------------------
TREXADMIN - TREX Administration Tool (Basis - TREX ABAP + JAVA API)
RSDDBIAMON2 - BI Accelerator Maintenance Monitor (BW - OLAP Technology)
RSRV - Analysis and Repair of BW Objects (BW - Business Explorer)
RSDDV - Maintaining Aggregates/BIA Index (BW - OLAP Technology)
BIC - Transfer Bank Data from BIC Database (Cross Application - Bank)
SM37 - Overview of job selection (Basis - Background Processing)
RSDDBIAMON - BI Accelerator Maintenance Monitor (BW - OLAP Technology)
RSRT - Start of the report monitor (BW - OLAP Technology)
SLG1 - Application Log: Display Logs (Basis - Basis Application Log)
RSA1 - Modeling - DW Workbench (BW - Administrator Workbench)
SAP Transaction Codes (tcodes): T-code Search Results for "transport"
STMS - Transport Management System (Basis - Transport Management System)
SE10 - Transport Organizer (Basis - Transport Organizer)
SE03 - Transport Organizer Tools (Basis - Transport Organizer)
SE09 - Transport Organizer (Basis - Transport Organizer)
SE01 - Transport Organizer (Extended) (Basis - Transport Organizer)
SE06 - Set Up Transport Organizer (Basis - Transport Organizer)
COSS - Transport of C Tables (PP - Production Orders)
ME27 - Create Stock Transport Order (MM - Purchasing)
GCTR - Transport from Report Writer objects (FI - Basic Functions)
OOCR - Set up PD Transport Connection (Basis - Organizational Management)
KE3I - CO-PA: Transport tool (CO - Profitability Analysis)
ME37 - Create Transport Scheduling Agmt. (MM - Purchasing)
RE_RHMOVE30 - Manual Transport Link (Basis - Organizational Management)
/SAPAPO/SCC_TL1 - Transportation Lanes (MM - Basic Functions)
SLXT - Translation Transport (Basis - Translation Tools)
OBY9 - C FI Transport Chart of Accounts (FI - Basic Functions)
VT04 - Transportation Worklist (Logistics Execution - Transportation)
/SAPAPO/TSOBJ - Transport Connection DP/SNP (Project Systems - Project System)
LECI - Register Means of Transport/Visitor (Logistics Execution - Basic Functions)
ME6Z - Transport Vendor Evaluation Tables (MM - Vendor Evaluation)
FINB_TR_DEST - Destination for Transport Methods (Financials - Financials Basis)
QCCY - Transport QM tolerance key (QM - Quality Management)
MEKX - Transport Condition Types Purchasing (MM - Purchasing)
STMS_PATH - TMS Transport Routes (Basis - Transport Management System)
OKE5 - Transport Organization Customizing (CO - Overhead Cost Controlling)
SAP Transaction Codes (tcodes): T-code Search Results for "delete bex"
--------------------- delete bex related Transaction Codes----------------------
RSRT - Start of the report monitor (BW - OLAP Technology)
RRMX - Start the Business Explorer Analyzer (BW - Business Explorer)
RSA1 - Modeling - DW Workbench (BW - Administrator Workbench)
SE38 - ABAP Editor (Basis - ABAP Editor)
SPRO - Customizing - Edit Project (Basis - Customizing Project Management (IMG))
SE16 - Data Browser (Basis - Workbench Utilities)
PFCG - Role Maintenance (Basis - Authorization and Role Management)
SM37 - Overview of job selection (Basis - Background Processing)
SU01 - User Maintenance (Basis - User and Authorization Management)
SE14 - Utilities for Dictionary Tables (Basis - Activation Program, Conversion Program, DB Utility, MC, SPDD)
STMS - Transport Management System (Basis - Transport Management System)
SE37 - ABAP Function Modules (Basis - Function Builder)
SE11 - ABAP Dictionary Maintenance (Basis - Dictionary Maintenance)
SARA - Archive Administration (Basis - Archive Development Kit)
ST22 - ABAP dump analysis (Basis - Syntax, Compiler, Runtime)
LSMW - Legacy System Migration Workbench (Basis - Legacy System Migration Workbench)
CMOD - Enhancements (Basis - Customer Enhancements)
SE16N - General Table Display (CO - Controlling)
SM30 - Call View Maintenance (Basis - Table Maintenance Tool)
SM21 - Online System Log Analysis (Basis - R/3 Syslog)
SAP Transaction Codes (tcodes): T-code Search Results for "spool"
SP02 - Display Spool Requests (Basis - Print and Output Management)
SPAD - Spool Administration (Basis - Print and Output Management)
SP00 - Spool and related areas (Basis - Print and Output Management)
SPAT - Spool Administration (Test) (Basis - Print and Output Management)
SPOV - Spool Request Overview (Basis - Print and Output Management)
RSPO0055 - Installation Check: Spool (Basis - Audit Information System)
RSPFPAR_SPOOL - Spool Parameters (Basis - Audit Information System)
OMMZ - Spool Parameters for WM Print Ctrl (Logistics Execution - Other Functions)
--------------------- spool related Transaction Codes----------------------
SP01 - Output Controller (Basis - Print and Output Management)
SPAD - Spool Administration (Basis - Print and Output Management)
SM37 - Overview of job selection (Basis - Background Processing)
SP02 - Display Spool Requests (Basis - Print and Output Management)
SE38 - ABAP Editor (Basis - ABAP Editor)
SM36 - Schedule Background Job (Basis - Background Processing)
SP12 - TemSe Administration (Basis - Print and Output Management)
SCOT - SAPconnect - Administration (Basis - Communication Services: Mail, Fax, SMS, Telephony)
SM21 - Online System Log Analysis (Basis - R/3 Syslog)
F110 - Parameters for Automatic Payment (FI - Financial Accounting)
SAP Transaction Codes (tcodes): T-code Search Results for "bad character"
--------------------- bad character related Transaction Codes----------------------
RSKC - Maintaining the Permitted Extra Chars (BW - Warehouse Management)
SM59 - RFC Destinations (Display/Maintain) (Basis - RFC)
SE37 - ABAP Function Modules (Basis - Function Builder)
F103 - ABAP/4 Reporting: Trnsfr Receivables (FI - Basic Functions)
SPRO - Customizing - Edit Project (Basis - Customizing Project Management (IMG))
I18N - Internationalization (Basis - Internationalization (I18N))
XSLT - XSLT tester (Basis - XML Generic Technology)
SXMB_MONI - Integration Engine - Monitoring (Basis - Integration Engine)
STRUST - Trust Manager (Basis - Security)
SO10 - SAPscript: Standard Texts (Basis - SAPscript)
OB04 - C FI Maintain Table T030F (FI - Basic Functions)
SPAD - Spool Administration (Basis - Print and Output Management)
SE11 - ABAP Dictionary Maintenance (Basis - Dictionary Maintenance)
BIC - Transfer Bank Data from BIC Database (Cross Application - Bank)
F104 - ABAP/4 Reporting: Receivables Prov. (FI - Basic Functions)
SE38 - ABAP Editor (Basis - ABAP Editor)
OBXD - C FI Maintain Table T030 (FI - Basic Functions)
SUMG - Unicode Postconversion (Basis - I18N Unicode)
SE16 - Data Browser (Basis - Workbench Utilities)
SE73 - SAPscript Font Maintenance (Basis - SAPscript)
BW Interview Questions
Written by Kevin Wilson. To get the full 201 Interview Questions go to http://201interviewquestions.com/books/bw.htm
1. What are the advantages of the extended star schema of BW vs. the star schema?
Uses generated numeric keys and aggregates in its own tables for faster access. Uses an external hierarchy. Supports multiple languages. Contains master data common to all cubes. Supports slowly changing dimensions.
2. How many dimensions are there in a cube?
There are a total of 16 dimensions in a cube. Of these 16, 3 are predefined by SAP and these are time, unit and request. This leaves the customer with 13 dimensions. 3. What is the transaction for the Administrator work bench?
Transaction RSA1 4. What is the “myself data mart”?
A BW system feeding data to itself is called the myself data mart. It is created automatically and uses ALE for data transfer. 5. What is an aggregate?
Aggregates are mini cubes. They are used to improve performance when executing queries; you can equate them to indexes on a table. Aggregates are transparent to the user.
A calculated key figure is used to do complicated calculations on key figures such as mathematical functions, percentage functions and total functions. For example, you can have a calculated key figure to calculate sales tax based on your sale price. 7. What is the enhancement user exit for BEx reporting?
RSR00001 8. What is a characteristics variable?
You can have dynamic input for characteristics using a characteristic variable. For example, if you are developing a sales report for a given product, you will define a variable for 0MATERIAL. 9. What is a condition?
If you want to filter on key figures or do a ranked analysis then you use a condition. For example, you can use a condition to report on the top 10 customers, or customers with more than a million dollars in annual sales. 10. What are the data types supported by characteristics?
NUMC - numeric
CHAR - character, up to 60 characters
DATS - date
TIMS - time
11. What are the types of attributes?
Display attributes - these attributes are only for display; no analysis can be done on them. Navigational attributes - these attributes behave like regular characteristics. For example, assume we have the customer characteristic with country as a navigational attribute; you will then be able to analyze the data using customer and country. In the BEx query you can create filters or variables for country, and you can also use the drill-down feature.
Compounding defines a superior InfoObject that must be combined with another InfoObject to uniquely define it. For example, a cost center is only unique in combination with its controlling area, so the controlling area is the compounding (superior) object.
1. Identify the fact table.
2. Identify the dimension tables.
3. Define the attributes of the entities.
4. Define the granularity of the fact table (how detailed you want the data to be).
5. Define pre-calculated key figures.
6. Identify slowly changing dimensions.
7. Identify aggregates.
8. Decide how long the data will be kept.
9. Decide how often the data is extracted.
10. Decide from which system the data is to be extracted.
14. What options are available in the transfer rule?
Assign an InfoObject: direct transfer, no transformation.
Assign a constant: e.g., if you are loading data for a single country from a flat file, you can make the country (US) a constant and assign the value explicitly.
ABAP routine: e.g., for complex string manipulation. Assume you are getting a flat file of legacy data and the cost center is embedded in a field, so you have to "massage" the data to extract it; an ABAP routine is the most appropriate option here (a hedged sketch follows below).
Formula: for simple calculations, use a formula. E.g., to convert all lowercase characters to uppercase, use the TOUPPER formula. The formula builder helps you put formulas together.
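A minimal sketch of such a transfer rule routine, assuming the legacy field TRAN_STRUCTURE-LEGACY_FLD carries the cost center in positions 3-12; the field name and offsets are illustrative assumptions, while RESULT, RETURNCODE and ABORT belong to the routine frame generated by BW:

* Hedged sketch of a transfer rule ABAP routine.
DATA: l_costcenter TYPE /bi0/oicostcenter.

* Extract the cost center from positions 3-12 of the legacy field
* (offset 2, length 10) and convert it to uppercase.
l_costcenter = tran_structure-legacy_fld+2(10).
TRANSLATE l_costcenter TO UPPER CASE.

result     = l_costcenter.
returncode = 0. " 0 = keep the record
abort      = 0. " <> 0 would cancel the whole load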
15. What is compression or collapse?
This is the process by which we delete the request IDs, which leads to space savings. All the regular requests are stored in the F fact table. When you compress, the request ID is deleted and the data is moved from the F table to the E table. This saves space and improves performance, but the disadvantage is that you can no longer delete the compressed requests individually; you can, however, still use selective deletion. If you are using non-cumulative key figures in a cube, the cube should be compressed as often as possible to improve performance.
16. What is an InfoSet?
An InfoSet is an InfoProvider that delivers data by joining data from different sources, such as ODS objects and master data; you can also do an outer join in an InfoSet. InfoSets can be used to combine transactional data with master data: for example, if you have quantity in the transaction data and price as an attribute of the material, you can build an InfoSet over the transaction data and the material so that BEx can calculate based on the material price. Another usage: if you have an ODS, you can disable BEx reporting on it (in its settings) and report on the ODS through an InfoSet instead, which improves performance.
17. What are non-cumulative key figures?
These are key figures that are not summed up over time (unlike sales, etc.); examples are head count and inventory amount. They are always shown in relation to a point in time: for example, we ask how many employees we had as of last quarter; we don't add up the head counts.