Wednesday 28 March 2012

SAP BW Tables

Custom InfoObjects Tables:
 
/BIC/M – View of Master data Tables
/BIC/P – Master data Table, Time-Independent attributes
/BIC/Q – Master data Table, Time-Dependent attributes
/BIC/X – SID Table, Time-Independent
/BIC/Y – SID Table, Time-Dependent
/BIC/T – Text Table
/BIC/H – Hierarchy Table
/BIC/K – Hierarchy SID Table

Standard InfoObjects Tables (Business Content):
 
Replace "C" with "0" in the above table names.

Ex: /BI0/M – View of Master data Tables

Standard InfoCube Tables:
 
/BI0/F<InfoCube> – Fact Table (before compression)
/BI0/E<InfoCube> – Fact Table (after compression)
/BI0/D<InfoCube>P – Dimension Table – Data Package
/BI0/D<InfoCube>T – Dimension Table – Time
/BI0/D<InfoCube>U – Dimension Table – Unit
/BI0/D<InfoCube>1, 2, 3 … A, B, C, D – Dimension Tables (user-defined dimensions)
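
To make the naming convention concrete, here is a small hypothetical example: for a custom InfoObject ZMATERIAL the time-independent attribute table would be /BIC/PZMATERIAL, and for a custom InfoCube ZSALES the uncompressed fact table would be /BIC/FZSALES. The object names below are illustrative only and do not exist in a standard system.

* Illustrative only: counting rows in the generated tables of hypothetical
* custom objects (InfoObject ZMATERIAL, InfoCube ZSALES).
DATA: lv_attr_cnt TYPE i,
      lv_fact_cnt TYPE i.

* P table of InfoObject ZMATERIAL (time-independent attributes)
SELECT COUNT( * ) FROM /bic/pzmaterial INTO lv_attr_cnt.

* F fact table of InfoCube ZSALES (uncompressed requests)
SELECT COUNT( * ) FROM /bic/fzsales INTO lv_fact_cnt.

WRITE: / 'Attribute records:', lv_attr_cnt,
       / 'Uncompressed fact records:', lv_fact_cnt.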

BW Tables:
 
BTCEVTJOB – To check the list of jobs waiting for events
ROOSOURCE – Control parameters for DataSource
ROOSFIELD – Control parameters for DataSource

ROOSPRMSC – Control parameters for DataSource
ROOSPRMSF – Control parameters for DataSource
 
– More info @ ROOSOURCE weblog

RSOLTPSOURCE – Replication table for OLTP sources in BW
RSDMDELTA – Data mart delta management
RSSDLINITSEL, RSSDLINITDEL – Last valid initialization to an OLTP source
RSUPDINFO – InfoCube to InfoSource correlation
RSUPDDAT – Update rules key figures
RSUPDENQ – Removal of locks in the update rules
RSUPDFORM – BW: Update rules – formulas – checking table
RSUPDINFO – Update info (status and program)
RSUPDKEY – Update rule: key per key figure
RSUPDROUT – Update rules – ABAP routine – check table
RSUPDSIMULD – Table for saving simulation data (update)
RSUPDSIMULH – Table for saving simulation data (header information)
RSDCUBEIOBJ – InfoObjects per InfoCube
RSIS – InfoSource info
RSUPDINFO – Update rules info
RSTS – Transfer rules info
RSKSFIELD – Communication structure fields
RSALLOWEDCHAR – Special characters table (maintained via T-code RSKC)
RSLDPSEL – Selection table for fields scheduler (InfoPackages)
RSLDPIO – Log data packet number
RSMONICTAB – Monitor: data targets (InfoCube/ODS) table, request-related info
RSTSODS — Operational data store for Transfer structure
RSZELTDIR – Query Elements
RSZGLOBV – BEx variables
RSZELTXREF, RSCOMPDIR – Report/query-relevant tables
RSCUSTV – Query settings
RSDIOBJ – InfoObjects
RSLDPSEL – Selection table for fields scheduler (InfoPackage list)
RSMONIPTAB – InfoPackage for the monitor
RSRWORKBOOK – Workbooks and related query GENUNIIDs
RSRREPDIR – Directory of queries (GENUNIID, report name, author, etc.)
RSRINDEXT – Workbook ID & Name
RSREQDONE – Monitor: Saving of the QM entries
RSSELDONE – Monitor: selections for executed requests
RSLDTDONE – Texts for the requested InfoPackages and groups
RSUICDONE – BIW: selection table for user-selection update of InfoCubes
RSSDBATCH – Table for Batch run scheduler
RSLDPDEL – Selection table for deleting with full update scheduler
RSADMINSV – RS Administration
RSSDLINIT — Last Valid Initializations to an OLTP Source
BTCEVTJOB – To check event status (scheduled or not)
VARI – ABAP Variant related Table
VARIDESC — Selection Variants: Description
SMQ1 (T-code) – qRFC Monitor (outbound)
SM13 (T-code) – Update records status

T-code LBWQ –> qRFC-related tables:
TRFCQOUT, QREFTID, ARFCSDATA

How to Correct Routines in Transformations

What’s new with SAP NetWeaver BW 7.3 (Product Management)

What’s New with SAP NetWeaver BW 7.30 and BW Accelerator 7.20?

Upcoming Developments in SAP NetWeaver 7.2

Simplified data flow modeling with the data flow copy and data flow migration tools; reduced development effort via automated tool support.

Data Flow copy tool
 
• Copy an existing SAP NetWeaver BW 7.x data flow, i.e. DataSources, data targets, and transformations, using a wizard-like user interface.

• Complete copy of an existing process chain (incl. related process variants and data flow objects).

• Possibility to copy a data flow to a source system which is only available in the productive system landscape by assigning a dummy source system in the development environment.


Data Flow migration tool (3.x → 7.x)


• Migration of data flows with 3.x objects (3.x InfoSource, 3.x DataSource, transfer rules, update rules), including adaptation of InfoPackages, process variants, process chains, and VirtualProviders (‘Remote Cubes’).

• Workbench-like UI for selecting data flows and objects to be migrated.

• Restore of the 3.x data flow (from snapshots of the objects made during migration) is supported.


Enhanced modeling capabilities to increase scalability and flexibility and reduce the total cost of development
 
Semantic Partitioning
 

• Semantically partitioned object (SPO) as a single point of entry for creation and easy administration of the semantically partitioned data model.

• Wizard-based partitioning of InfoCubes and DataStore objects along criteria such as calendar year or region, to manage high data volumes across different time zones.

• Reduced TCO for defining and managing scalable data models, also when it comes to re-modeling the partitions (because only the reference structure needs to be changed).

• Automated generation of corresponding modeling objects such as data transfer processes (including respective filter criteria depending on the data target) and transformations, as well as a process chain.

• Good query performance by leveraging SAP NetWeaver BW Accelerator as well as partition pruning when accessing partitioned InfoCubes.

• A semantically partitioned object can be embedded into a MultiProvider.


Real-Time Business Intelligence


Enterprise Data Warehousing – Real-Time Business Intelligence

Implement operational reporting scenarios leveraging new modeling objects

HybridProvider
 
• Combines real-time information with historic data

• Enables SAP NetWeaver BW Accelerator for DataStore objects

• Automatic generation of the data flow within the HybridProvider, depending on which underlying objects have been chosen

• Scenario 1 (with InfoCube and VirtualProvider): direct access to the source system leveraging a Virtual InfoProvider with a function module

• Scenario 2 (with InfoCube and DataStore object): the DataStore object can be fed by a Real-Time Data Acquisition (RDA) data flow (not a must!)

• Create queries with the BEx Query Designer in SAP ERP 6.0 systems on classic InfoSets

• Enable classic InfoSets in SAP ERP 6.0 (EhP 5 onwards) to be the basis for reports created with the SAP NetWeaver BW BEx Suite

• Operational reporting via BEx on SAP ERP without replication of data to SAP NetWeaver BW

Enhancements to Real-Time Data Acquisition

• Possibility to load master data with RDA

• Enhanced error handling and scalability (the daemon can handle more DataSources more efficiently)

Integration Testing - Best Practice

Integration Testing Strategy in SAP Projects:
 
I have outlined here the testing strategy, best practices, and guidelines for choosing a testing tool. 

Objectives of Integration Testing:
 
- The overall design/solution built is accurate and correct from a technical perspective

- Integration of related SAP R/3 modules and business processes

- Integration of SAP configuration, custom developments, and interfaces

- Integration of SAP R/3 with other SAP applications (APO, BW, SEM, CRM)

- Integration with the legacy systems

Integration Testing - Preparation Phase:
 
1. Identify the scope of testing - Scope should include all relevant business 
    scenarios, scenarios to test Interfaces with legacy if any, period-end 
    scenarios. 

2. Load the testing scripts in testing tools.

3. Identify Master Data and Organisation structure relevant for testing.

4. Identify testers and schedule testing

Testing Environment setup:
 
- Ensure all transports of configuration and programs are moved to intended  
  Testing Environment

- Perform check on configuration

- Perform manual configurations (like Variant setup or Number ranges etc)

- Setup user id required for testing

- Check if any dependencies with other process teams

- Create Master Data used for Testing

- Define Defect Management procedure and identify focal points in each  
  process areas for defects resolution
 
- Perform checks on master data (Finance, Costing, Tax, etc.) before testing is 
  commenced
 
Integration Testing - Execution Phase:
 
1. Testers will run the test cases, record the results in the testing tool, and 
    raise defects wherever applicable.

2. Setup daily testing status review meetings with SAP process team's focal 
    points, defect focal points and legacy team focal points.

3. Run Daily status review meetings to review test execution progress with 
    each process team and set targets for next working day. Also review 
    defects which are blocking testing progress and raise escalation if 
    required to expedite resolution.

4. Generate daily reports covering all topics to provide good visibility on the 
    testing progress to all stakeholders.

5. Bridge the knowledge gaps if any between SAP process teams and Legacy 
    teams before start of the testing cycle.

6. Facilitate communication between SAP process teams, legacy teams and 
    Data load teams etc.

Defect Management Process:
 
All issues found during test execution should be logged in as a Defect. 

- Provide defect definitions with respect to severity (Low, Medium, and High) 
  and Priority (Low, Medium, and High)

- Setup defect Management rules on use of severity and priority to classify 
  defects.

- Recommend resolution times for defects based on severity and priority.

- Setup/Mark fields in the defect management tool to capture all inputs 
  required for defect resolution and also to perform defect analysis at end of 
  testing cycle (like cause of defects analysis)

- Define the escalation process and identify escalation focal points.

Best Practices for scoping:
 
1. Based on business value (which should be present in the design to support 
    business benefits) and technical risk (complex logic, high volumes, use of 
    new technology), classify each scenario as low, medium or high. This 
    classification can be used to reduce the testing scope if the draft scope is 
    too large to complete with the current resources within the time window.

2. If Legacy systems are involved in testing then scoping is one of the most 
    important exercises. Setup review meetings with SAP process teams, 
    legacy team and Middleware teams to discuss and agree testing scope, 
    list and raise exceptions/step outs (step outs are deviations from regular 
    testing plan). Use simple excel checklist to record details of readiness 
    against Data (in legacy), availability of legacy system, training needs to 
    legacy team etc and Signoff.

3. If the solution is rolled out to multiple countries, repeat the scenarios to 
    cover the data sets of all countries.

Guidelines to choose Testing Tool:
 
I have outlined here the points to be considered when a choice on testing tools is made. 

- The testing tool should have global capability with integrated planning and 
   execution.

- Tool should be capable of supporting all phases of testing and enabling 
   automation

- Multi user access and appropriate controls and security

- Full audit trail capability - enabling SOX compliance

- Integrated defect management and workflow capabilities

- Robust reporting for tracking and management

- Library based approach to test scripting - enabling reuse through all 
  phases of testing

Data Flow from LBWQ/SMQ1 to RSA7 in ECC (Records Comparison)

Checking the Data using Extractor Checker (RSA3) in ECC Delta and Repeat Delta etc

Beginner's Guide to BI - ABAP

Different Routines available in SAP BI: Link to Article

Routines in SAP BI 7.0 Transformations

Steps to Debug Routines in BI Transformations

This article explains the debugging process for routines in transformations. It helps us trace errors that have crept into routines in debug mode, so that they can be handled effectively.

Maintaining Roles and Authorizations in BI7.0 - RSECADMIN

Creating Customized TCODE for Updating Text in Info Objects

Commenting in BW using Single Document Web Item

How to implement CRM hierarchy in BI 7

When integrating CRM with BI 7 one can notice that Organization structure hierarchy cannot be easily uploaded to BI. Unfortunately, Business Content does not provide a standard solution for integrating CRM organization hierarchy. In this post I outline steps on how to implement a solution for loading organization hierarchy from CRM to BI. This approach has been implemented and tested with SAP CRM 5.2 and SAP BI 7.0.

As an introduction to hierarchy table structures and related concepts I suggest first looking at an earlier post on the topic: “Use master data Update Rules for Generating Hierarchies Automatically”. The approach described below is not limited to CRM only. It can be used in generating hierarchies dynamically for various data sources.

1. Clarify requirements

Let us imagine we have a business case where organization hierarchy is structured as follows: Total Organization – Region – Unit – Position – Employee (business partner). See an example from CRM (transaction code PPOMA_CRM):

[Image: example organization structure in CRM (transaction PPOMA_CRM)]

In order to keep the task simple we intend to design a solution that displays the hierarchy exactly as it is in the source system. If there are any changes in the organization structure we want to see the latest organization structure hierarchy in BI. That is, if an employee moves to another department, their sales will move with them and will be reported under the new department. Another requirement is to process only English versions of node names.

Note: It is possible to slightly adjust the logic of the code described below in 
         order to accommodate time-dependent hierarchies.

One more requirement is related to the fact that some reports have to display Region and Unit next to employee’s code. In order to provide this capability we will add Region and Unit as attributes of the Employee infoobject, and populate them with appropriate values from the organization structure.

2. Create extractors

As business content does not provide a solution for extracting the organization hierarchy, we have to build generic extractors. They have to be based on the following tables: HRP1001 (relationships/nodes) and HRP1000 (object texts).

It is recommended to create database views on top of these tables and use them for building the extractors Z_HRP1001 and Z_HRP1000. If the organization structure does not contain a vast number of nodes, we can refresh the hierarchy from CRM with a daily full load.
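
As a rough sketch (assuming the standard HR infotype tables HRP1000/HRP1001 and the custom view names above), the relationship records the extractor is expected to deliver for the current plan variant can be checked as follows; the fields involved (PLVAR, OTYPE, OBJID, SUBTY, SCLAS, SOBID) are the ones the start routine in step 4 relies on.

* Sketch: verify which relationship records (plan variant 01, valid today)
* the generic extractor built on HRP1001 / Z_HRP1001 should deliver.
DATA: lt_rel TYPE STANDARD TABLE OF hrp1001,
      lv_cnt TYPE i.

SELECT * FROM hrp1001 INTO TABLE lt_rel
  WHERE plvar = '01'
    AND begda <= sy-datum
    AND endda >= sy-datum.

DESCRIBE TABLE lt_rel LINES lv_cnt.
WRITE: / 'Organizational relationship records found:', lv_cnt.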

3. Create infoobjects in BI

Next we have to build objects in BI that would be updated by the extractors. We can use two infoobjects: one for organization structure (employee) attributes and hierarchy (ZORGSTR), and another one for texts (ZORGTXT).
Infoobject ZORGSTR has to have navigation attributes ZREGION and ZUNIT in order to link employees with corresponding region and units. I suggest using the following data flow:
[Image: data flow from the generic extractors to the ZORGSTR and ZORGTXT InfoObjects]

4. Implement attribute transformation to construct the hierarchy

The magic happens on the way from the extractor to the infoobject attributes (ZORGSTR). First we have to set both of the newly created infoobjects (ZORGSTR and ZORGTXT) as data targets in BI. The next step is to create a transformation for ZORGSTR and add some ABAP code in the start routine of the transformation.

In the transformation mapping screen three objects have to be mapped to the target fields of ZORGSTR:
  • SOBID -> ZORGSTR
  • ADATANR -> ZREGION
  • SHORT -> ZUNIT
The idea is to process all nodes coming from CRM with the extractor Z_HRP1001 and generate appropriate hierarchy nodes together with assigning proper attributes for Region and Unit. Below you will find guidelines and some ABAP code that can be used in the transformation start routine of the ZORGSTR infoobject:

* Select existing root node in the hierarchy
SELECT SINGLE * FROM /BIC/HZORGSTR INTO node
WHERE TLEVEL = '01'.
* Completely refresh hierarchy table
DELETE FROM /BIC/HZORGSTR.
* Leave only current Plan entries in the hierarchy
DELETE SOURCE_PACKAGE WHERE PLVAR <> '01'. " OR ENDDA <> '99991231'.
* Populate texts from characteristic ZORGTXT
LOOP AT SOURCE_PACKAGE INTO sp.
  curid = sp-OBJID.
  SELECT SINGLE txtsh txtlg INTO (sp-mc_short, sp-stext)
  FROM /BIC/TZORGTXT WHERE /BIC/ZORGTXT = curid. " AND LANGU = 'EN'.
  IF sy-subrc = 0.
    MODIFY SOURCE_PACKAGE from sp.
  ENDIF.
ENDLOOP.
* Root node for internal organization is called 'ROOT'
READ TABLE SOURCE_PACKAGE
WITH KEY mc_short = 'ROOT' INTO sp.
curnode = '1'.
* Prepare internal table for BP codes
CLEAR resp. rwa = sp.
* Form Level 1 of the hierarchy -----------------------------
node-NODENAME = sp-STEXT.
TRANSLATE node-NODENAME TO UPPER CASE.
curnode = curnode + 1.
node-CHILDID = curnode.
INSERT INTO /BIC/HZORGSTR VALUES node.
* Form Level 2 of the hierarchy ---- Region ----------------
*Browse through all nodes belonging to current root ----------
lastnode2 = '0'.
LOOP AT SOURCE_PACKAGE INTO sp2
      WHERE OBJID = sp-OBJID AND SUBTY(1) = 'B' AND SCLAS = 'O'.
*Find child record
  childnode = sp2-SOBID.
  READ TABLE SOURCE_PACKAGE WITH KEY OBJID = childnode INTO sp3a.
                        <… ABAP code … assign fields for the new hierarchy node >
  lastnode2 = curnode.
  curnode = curnode + 1.
  node-CHILDID = '0'.
  INSERT INTO /BIC/HZORGSTR VALUES node.
* Form Level 3 of the hierarchy ---- Unit ----------------
<… ABAP code …>
* Form Level 4 of the hierarchy ---- Position ----------------
<… ABAP code …>
* Form Level 5 of the hierarchy ---- Employee ----------------
  lastnode6 = '0'.
  LOOP AT SOURCE_PACKAGE INTO sp6
      WHERE OBJID = sp6a-OBJID AND SCLAS = 'CP'.
*Find child record
    childnode = sp6-SOBID.
    READ TABLE SOURCE_PACKAGE WITH KEY OBJID = childnode
    OTYPE = 'CP' SCLAS = 'BP' INTO sp7a.
   IF sy-subrc = 0.
*Add new BP record if it is found in the structure
<… ABAP code …>
* Add BP code & attributes
      rwa-sobid = node-NODENAME.
      rwa-ADATANR = sp3a-mc_short. " Region
      rwa-SHORT = sp4a-mc_short. " Unit
      INSERT rwa INTO TABLE resp.
<… ABAP code …>
   ENDIF.
        ENDLOOP. " Level 5--------
      ENDLOOP. " Level 4--------
  ENDLOOP. " Level 3------
ENDLOOP. " Level 2-----
* Populate source package from resp
DELETE SOURCE_PACKAGE WHERE PLVAR = '01'.
APPEND LINES OF resp TO SOURCE_PACKAGE.
* Remove duplicates
SORT SOURCE_PACKAGE BY SOBID.
DELETE ADJACENT DUPLICATES FROM SOURCE_PACKAGE COMPARING SOBID.

5. Outcome

Data loads should be done in the following order:
  • Run extractor Z_HRP1000 to update characteristic ZORGTXT with texts for hierarchy nodes and attributes.
  • Run extractor Z_HRP1001 to update attributes of ZORGSTR.
When the transformation start routine is executed, a new organization hierarchy is generated based on the current organization model in CRM. At the same time, ZORGSTR is populated with employee codes together with the Region and Unit attributes.

Below is an example of the hierarchy generated in SAP BI dynamically:

[Image: example of the organization hierarchy generated dynamically in SAP BI]


Using Hierarchies for Data Selection

When processing transactions one may need to select records belonging to certain hierarchy nodes or levels. For example, if we are processing transaction lines for Profit and Loss statements we want to select only those Cost Elements that belong to a certain hierarchy node. There is no standard way in SAP BW to select records in the DTP or in the ABAP start routine based on InfoObject hierarchies. However, if we design a query based on the hierarchy master data we can save the query results to a Direct Update DSO. Later we can use the records from this DSO as a filter. Below I will go through the steps one has to follow in order to implement this approach.

Create Master Data Query with Relevant Filters

In SAP BW it is only in the query that we can make selections by hierarchy nodes. If we set a filter on a Cost Element hierarchy node we can load the query result to a DSO. Let’s say we have to select all nodes for Cost Element node Z922X-270, with some Cost Elements to be used as exceptions. Here is how the query filter will look:

[Image: query filter on Cost Element hierarchy node Z922X-270 with exceptions]

Create Direct Update DSO with an APD for Hierarchy Nodes

The DSO for master data elements derived from the hierarchy may contain only one InfoObject. Here is an example for the Cost Element case:

[Image: Direct Update DSO containing the Cost Element InfoObject]

As soon as the Cost Element hierarchy is refreshed we have to update the filter in the corresponding DSO (P&L Cost Elements). This can be automated with an APD. The Analysis Process Designer process may be very simple:

[Image: APD process loading the query results into the Direct Update DSO]

Apply Filter in the Start Routine

The following code can be used in the start routine of a transformation in order to filter out records by Cost Element hierarchy nodes:
  FIELD-SYMBOLS:  <ls> TYPE _ty_s_SC_1.
  TYPES: t_ce TYPE /BIC/AO_CEPNL00.
  DATA: ce TYPE HASHED TABLE OF t_ce WITH UNIQUE KEY COSTELMNT.
* Load list of Valid Cost Elements
  SELECT * FROM /BIC/AO_CEPNL00 INTO TABLE ce.
* Filter by Cost Element - leave only CE hierarchy node 270
  LOOP AT SOURCE_PACKAGE ASSIGNING <ls>.
    READ TABLE ce WITH TABLE KEY COSTELMNT = <ls>-COSTELMNT TRANSPORTING NO FIELDS.
    IF sy-subrc <> 0.
      DELETE SOURCE_PACKAGE.
    ENDIF.
  ENDLOOP.

BI PROJECT DESIGN CHECKLIST

Use Virtual Characteristics and Query Input Parameters for Dynamic Calculations

It is a common requirement in BI reporting to do certain calculations “on the fly” at query run time. Sometimes these calculations have to be based on query input parameters. Usually formulas and virtual characteristics are used for calculating values dynamically. In this post I will look at an example where we have to produce a calculation for each report line, using query lines as an input parameter together with query variable values.

Business Case

Let’s look at a business case where we are dealing with a vast volume of customers signing up for service contracts with an organization. When we analyze customer contracts, one of the analysis objects in BI reporting is the customer’s age. As contracts usually last for several years, the customer’s age has to be defined as of a certain date. 
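
As a minimal sketch of the per-line calculation this implies, the snippet below derives a customer’s age in completed years from a birth date and a key date. In the actual solution the key date would come from a query variable and the calculation would run in the virtual characteristic logic; the variable names and values here are purely illustrative.

* Sketch only: age in completed years as of a key date.
* lv_birthdate would come from customer master data,
* lv_keydate from the query input variable.
DATA: lv_birthdate TYPE d VALUE '19800615',
      lv_keydate   TYPE d,
      lv_age       TYPE i.

lv_keydate = sy-datum.                      " stand-in for the key date variable

lv_age = lv_keydate(4) - lv_birthdate(4).
* Subtract one year if the birthday has not yet occurred in the key date year
IF lv_keydate+4(4) LT lv_birthdate+4(4).
  lv_age = lv_age - 1.
ENDIF.

WRITE: / 'Customer age as of key date:', lv_age.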

Select Data in DTP Dynamically

Similarly to selecting parameters dynamically in the InfoPackage, as discussed in the previous post, we may need to select parameters dynamically in a DTP. In the post below I will review an example of how to make selections dynamically in the DTP. A practical application of this approach is the selection of records from a DSO by date range; say we want to select all records with Date From lower than [Today + 30 days] and Date To greater than [Today - 30 days].

In the Filter selections for the DTP we have to click on the “Routine create” button for the infoobject we want to use:


As DTP selections are stored as a range, we have to populate the fields of the table l_t_range with the appropriate selection parameters.
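
A minimal sketch of such a filter routine body is shown below. It assumes the routine skeleton generated by the system exposes the range table l_t_range (structure RSSDLRANGE with FIELDNAME, SIGN, OPTION, LOW and HIGH) and the return code p_subrc, as it does for InfoPackage selection routines; ZDATEFROM is a hypothetical date InfoObject used only for illustration, and a second routine on the Date To field would append the analogous "greater than [Today - 30 days]" condition.

* Sketch only: body of a DTP filter routine on a hypothetical field ZDATEFROM.
* l_t_range and p_subrc are parameters of the system-generated FORM.
DATA: l_s_range TYPE rssdlrange,
      lv_limit  TYPE d.

lv_limit = sy-datum + 30.                  " [Today + 30 days]

CLEAR l_s_range.
l_s_range-fieldname = 'ZDATEFROM'.         " hypothetical date field
l_s_range-sign      = 'I'.
l_s_range-option    = 'LT'.
l_s_range-low       = lv_limit.
APPEND l_s_range TO l_t_range.

p_subrc = 0.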