Friday 28 September 2012

Detailed explanation of BADIs and the ways to find a BADI, with an example (ME23n transaction)

Def: A BADI (Business Add-In) is an object-oriented SAP enhancement technique used to add your own business functionality to existing SAP standard functionality.

BADIs have been available in SAP R/3 since release 4.6C.

Why BADI?

In contrast to earlier enhancement techniques, a BADI follows an object-oriented approach, which makes it reusable. A BADI can be used any number of times, whereas standard enhancement techniques can be used only once.

For example, if we assign an enhancement to one custom project, that enhancement cannot be assigned to any other custom project. To overcome this drawback, SAP provided a new enhancement technique called the BADI.

Transaction code for BADI Definition:

SE18   
 
When you create a BAdI definition, a class interface is automatically created, and you define your methods in that interface. The implementation of the methods is done in transaction SE19.

When a BAdI is created, the following are automatically generated:
  • An interface, with 'IF_EX_' inserted between the first and second characters of the BAdI name
  • An adapter class, with 'CL_EX_' inserted between the first and second characters of the BAdI name
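As a sketch of how these generated objects are used: the calling program works only with the generated interface and obtains an instance of the adapter class via CL_EXITHANDLER. The BADI name ZBADI_DEMO and its method EXAMPLE_METHOD below are hypothetical, for illustration only.

```abap
* Classic BADI call pattern (sketch). For a hypothetical BADI
* ZBADI_DEMO, the generated interface is IF_EX_ZBADI_DEMO and
* the generated adapter class is CL_EX_ZBADI_DEMO.
DATA: lr_badi TYPE REF TO if_ex_zbadi_demo.

* CL_EXITHANDLER returns an instance of the adapter class,
* typed against the interface:
CALL METHOD cl_exithandler=>get_instance
  CHANGING
    instance = lr_badi.

* The adapter class dispatches the call to all active
* implementations of the BADI:
IF lr_badi IS BOUND.
  CALL METHOD lr_badi->example_method.
ENDIF.
```
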
Transaction code to Implement BADI:

SE19

Types of BADIs:

While creating a BADI using transaction SE18, a pop-up screen asks you to select the type of BADI to be used.

There are two types of BADIs.

1) Multiple-use BADI:

With this option, any number of active implementations can be assigned to the same BADI definition. By default this option is checked.

For multiple-use BADI definitions, the sequence in which the implementations are called must play no role.

The drawback of a multiple-use BADI is that it is not possible to know which implementation is active, especially in country-specific versions.

2) Filter-dependent BADI:

Using this option we can define BADIs with filter values, to control the add-in implementation based on specific criteria.

Ex: Specific country value.
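As a sketch, a filter-dependent BADI is called in the same way as any other, but every interface method additionally receives the filter value through the importing parameter FLT_VAL, and only the implementation registered for that filter value is executed. The BADI name and method below are hypothetical.

```abap
* Hypothetical filter-dependent BADI ZBADI_COUNTRY, filtered by country.
DATA: lr_badi TYPE REF TO if_ex_zbadi_country.

CALL METHOD cl_exithandler=>get_instance
  CHANGING
    instance = lr_badi.

* FLT_VAL selects the implementation, e.g. the one created
* for country 'US':
CALL METHOD lr_badi->determine_tax
  EXPORTING
    flt_val = 'US'.
```
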


How to Find BADIs in the SAP system:

Method 1:

Steps to find BADI: 

1. Go to transaction SE24, type CL_EXITHANDLER and click on Display.


 2. Double click on GET_INSTANCE method.

          
3. Put a break-point on class method  CL_EXITHANDLER=>GET_CLASS_NAME_BY_INTERFACE.

4. Pick any transaction for which you want to find the BADIs, say VA01.

5. Give the transaction name VA01 and press Enter.

6. It will automatically take you to the break-point that we set in transaction SE24. Each time you press F8, a list of BADI names will be displayed.

7. You can find the BADI name in the field EXIT_NAME; if you double-click on it, you get the corresponding BADI name before the corresponding screen is reached. Based on the requirement, find the BADI name and implement your functionality accordingly using transaction SE19.

Method 2:

Go to transaction SE84 and click on Enhancements. Then double click on Business Add-Ins. 

For example, if you want to find the BADIs for the standard transaction ME22n, the procedure is as follows. This example shows how to find BADI names by providing the package of ME22n.

1) Go to transaction ME22n. Select the System option from the menu and then click on Status. It displays the program name, among other status information.

2) Double-click on the program name, i.e. SAPLMEGUI. It will take you into the program; use the Goto menu to find the package name of the standard transaction ME22n. Copy and paste it in the package field.
3) Now Press F8, a list of BADI names will be displayed as shown below. Select the appropriate BADI name and implement it based on the business requirement using the transaction SE19.
Method 3:

Finding the BADI names using SE18 transaction.

1) Go to transaction SE18. Press F4 help for the definition name and click on the Information System button.

2) A pop-up screen will be displayed; give the package name for any standard transaction, say VA02. Finding the package is explained in the method above. The package name for transaction VA02 is 'VA'.
3) A list of BADI names will be displayed for the transaction VA02. Select the appropriate BADI name and implement it using T-code SE19.
Example :

This example explains how to implement BADIs. Here I am trying to show how to add a custom screen to the ME23N transaction using BADIs.

The procedure is as explained below. 

 -  Find the BADI using Method 1 as shown above.

The BADI name to add the custom screen to ME23n is 'ME_GUI_PO_CUST'.
 
  -         Go to T-code SE19, enter the implementation name ZME_GUI_PO_CUST and click on the Create button.

 -         Give the definition name "ME_GUI_PO_CUST" and continue; give a short text and save.

 -         Click on the Interface tab; you can find the implementation class name, e.g. ZCL_IM_ME_GUI_PO_CUST2.

 -         Double-click on ZCL_IM_ME_GUI_PO_CUST2, which will take you to SE24.

-         Add "MMMFD" in the Type Group Section of properties tab.  

 -         Go to the Attributes section and declare the following attribute:

 -         SUBSCREEN1: Constant, Public, Type MEPO_NAME ('Name of a View'), initial value 'ITEMSCREEN1'.

-      Go to Methods section, you can find the BADI interface methods.  

 -   Double-click on the method "IF_EX_ME_GUI_PO_CUST~SUBSCRIBE"; this method has three parameters.

 
Go to the code section of the method and add the following code there.

Code for IF_EX_ME_GUI_PO_CUST~SUBSCRIBE:

  DATA: ls_subscriber LIKE LINE OF re_subscribers.
*--FIRST SCREEN POPULATION
*--we want to add a customer subscreen on the item detail tab
  CHECK im_application = 'PO'.
  CHECK im_element     = 'ITEM'.
*--each line in re_subscribers generates a subscreen. We add one subscreen
*--in this example
  CLEAR re_subscribers[].
*--the name is a unique identifier for the subscreen and defined in this
*--class definition
  ls_subscriber-name = subscreen1.
*--the dynpro number to use
  ls_subscriber-dynpro = '0002'.
*--the program where the dynpro can be found
  ls_subscriber-program = 'ZME_GUI_PO_CUST_SCREEN'.
*--each subscreen needs its own DDIC structure
  ls_subscriber-struct_name = 'ZMARA'.
*--a label can be defined
  ls_subscriber-label = 'Cust BADI'.
*--the position within the tabstrip can be defined
  ls_subscriber-position = 7.
*--the height of the screen can be defined here. Currently we support two
*--screen sizes:
*--value <= 7: a seven-line subscreen
*--value > 7: a 16-line subscreen
  ls_subscriber-height = 7.
  APPEND ls_subscriber TO re_subscribers.
Save, check, and go back.
Double-click on method IF_EX_ME_GUI_PO_CUST~MAP_DYNPRO_FIELDS and add the following code in the method.
*given the field catalog of structure ZMARA we have to
*establish a mapping to metafields which are used for field selection
*purposes and error handling. Standard definitions can be found in type
*pool MMMFD. It is important for customer fields to use integer
*constants above 90000000 for the metafield.
  FIELD-SYMBOLS: <mapping> LIKE LINE OF ch_mapping.
  LOOP AT ch_mapping ASSIGNING <mapping>.
    CASE <mapping>-fieldname.
      WHEN 'MATNR'.      <mapping>-metafield = mmmfd_cust_08.
      WHEN 'MTART'.      <mapping>-metafield = mmmfd_cust_09.
      WHEN 'MATKL'.      <mapping>-metafield = mmmfd_cust_10.
    ENDCASE.
  ENDLOOP. 
  • The metafield mapping is important for field selection and error handling purposes.
  • Save, check and go back.
  • Activate the implementation class.
  • Activate the BADI implementation.
Now create a structure in SE11 with the name ZMARA and add the fields MATNR, MTART and MATKL (the fields mapped in MAP_DYNPRO_FIELDS above).

Now create a program with the name 'ZME_GUI_PO_CUST_SCREEN' and create a screen of type subscreen with the number 0002.

Add the fields from structure ZMARA onto the screen in program 'ZME_GUI_PO_CUST_SCREEN'.

Uncomment the PBO module in the screen flow logic and create the module in the above program.
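After uncommenting, the flow logic of subscreen 0002 should look roughly like this sketch (the PAI section can stay empty for this display-only example):

```abap
PROCESS BEFORE OUTPUT.
  MODULE status_0002.

PROCESS AFTER INPUT.
```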
Add the following code in program ZME_GUI_PO_CUST_SCREEN.
TABLES: ZMARA.
DATA: call_subscreen TYPE sy-dynnr,
      call_prog TYPE sy-repid,
      call_view TYPE REF TO cl_screen_view_mm,
      call_view_stack TYPE REF TO cl_screen_view_mm OCCURS 0.
*---------------------------------------------------------------------*
*       FORM SET_SUBSCREEN_AND_PROG                                   *
*---------------------------------------------------------------------*
*       ........                                                      *
*---------------------------------------------------------------------*
*  -->  DYNNR                                                         *
*  -->  PROG                                                          *
*  -->  VIEW                                                          *
*  -->  TO                                                            *
*  -->  CL_SCREEN_VIEW_MM                                             *
*---------------------------------------------------------------------*
FORM set_subscreen_and_prog USING dynnr TYPE sy-dynnr
                                  prog TYPE sy-repid
                                  view TYPE REF TO cl_screen_view_mm.
  call_subscreen = dynnr.
  call_prog = prog.
  call_view = view.
ENDFORM.
*&---------------------------------------------------------------------*
*&      Module  STATUS_0002  OUTPUT
*&---------------------------------------------------------------------*
*       text
*----------------------------------------------------------------------*
MODULE STATUS_0002 OUTPUT.
SELECT SINGLE * FROM MARA
          INTO CORRESPONDING FIELDS OF ZMARA
          WHERE MATNR = '100-100'.
ENDMODULE.                 " STATUS_0002  OUTPUT  
  • The subroutine "set_subscreen_and_prog" is mandatory and must have exactly this signature.
  • This routine is called from the BADI to display the subscreen.
  • Activate the screen & program. 
  • Now go to T-code ME23N to test the application.
  • You will find a new tab added at item level with the name 'Cust BADI'.
Final Output:

(Screenshot: the new 'Cust BADI' tab in ME23N at item level.)

How to Debug SAP RFC, Background Job, Update FM, etc.

This post covers RFC, portal, update task and background job debugging.
 
Debugging a Remote enabled Function module application:

Suppose you want to debug a function module that resides in an APO system and is called from R/3.

(FYI: as you know, the RFC destination is maintained via transaction code SM59.)

Here the R/3 program calls the APO function module Z_TEST_CONNECTION_R3.

Set an external break-point in your program just before it calls the RFC-enabled function module.

Now log in to the APO system where your remote-enabled function module exists and put an external break-point there.

Now go to Transaction code: SRDEBUG and click on the Activate Debugging Button:

It will give you a pop-up that confirms your user ID, application server, etc.; press Enter, and another pop-up appears. Leave it as it is.

Now go to your report program in the R/3 system and execute it. The debugger opens, and when you press F5 to step into the RFC-enabled function module, debugging continues in the APO system.

How to debug a portal application:

Assume there is a Java-based application that uses HTML/JSP as its user interface and in turn uses Java code to call an RFC-enabled ABAP function module. When the user clicks the submit or search option, it fetches or updates data in the ABAP system.

Compared with RFC debugging from R/3 to APO, this is somewhat tricky because the user is not the same: the portal application and the ABAP system will generally not use the same user ID (it depends, but mostly they will not be the same).

You need to consult the portal designer and have your user ID set as the default user when the FORM is submitted in the portal. Once both the Java and ABAP systems use the same user ID, you just need to put an external break-point in your ABAP system; it will stop when the RFC is called.

How to debug the background job:

There are two ways to debug the background job.

Option 1: Open the running job in SM37 and select it. Enter "JDBG" in the command line and press Enter. It will start the ABAP debugger.

Option 2: Go to transaction SM50 and select the work process that is running the job you want to debug.

It will show a popup asking whether you want to debug the program, and in a few seconds the debugger screen will open.

How to Debug an Update Function Module:
As you know, an update function module is called when a COMMIT WORK happens. To debug an update function module, put a break-point just above the call to it and execute the program. Then go to the debugger settings (Settings -> Display/Change Debugger Settings) and select the flag "Update Debugging".

When the program encounters a COMMIT WORK statement, the debugger will start in a new window for the update function module.

Tips and Tricks related to Debugger:

If you want to jump to a specific line of code, just put your cursor on the desired line and press Shift+F12. Execution of the program continues from that line. (Note: the lines you skip this way are not executed, as if they were empty lines.)


Thursday 27 September 2012

OLAP Cache Optimization in SAP BW

This post explains how to improve the performance of long-running queries using the OLAP cache.

OLAP Cache Optimisation

I would like to expand on one aspect of improving reporting performance in SAP BW: exploiting the OLAP cache. The OLAP cache is standard functionality that is set up during a technical install of SAP BW. However, to realise the full benefits of speedy queries based on cached data, some additional steps need to be taken post-install. I've noticed these steps, and in particular the scheduled population of the OLAP cache, have been overlooked on quite a few implementations.

In this post I'll focus on the concept of 'pre-filling' the cache on a regular basis through the Business Explorer (BEx) Broadcaster. I'll also cover some points to consider in maintaining the OLAP cache and performance tuning. Let's use the functionality available in a standard install to give your users the benefits of cached data!

Cache: A safe place for hiding or storing things.

A cache can be described as a buffer that’s employed to re-use commonly occurring items. A cache has many applications in a computing context: the memory cache, internet browsing history cache, disk caching, etc. In all of these cases, caching provides an answer or data retrieval, faster than another alternative. Faster data access can be achieved through caching data in registers (accessed in 1-5 clock cycles) vs. storing it in slower RAM (10-100 cycles).

SAP BW OLAP cache

In a SAP BW context, the OLAP cache buffers query results sets retrieved from the database (disk) by storing them in resident memory as highly compressed cluster data. Why bother storing query results in memory? The answer is simple – this is a speed contest between disk access vs. memory access, with memory being the much faster winner. On both sides of the equation, physical and electronic, there’s compelling logic that application performance will be improved through maintaining data in memory, rather than retrieving it from disk. Disk access is one of the few mechanical (as opposed to electronic) functions integral to processing and suffers from the slowness of moving parts. On the software side, disk access also involves a ‘system call’ that is relatively expensive in terms of performance. The desire to improve performance by avoiding disk access is the fundamental rationale for database management system (DBMS) caching and other file system caching methods. SAP BW is no different – and contains caching at the application layer to improve OLAP reporting performance.

Users progressively fill the OLAP cache as they access queries. When a user accesses a query result set for the first time, a database read from disk takes place and the result set is then written to cache memory. A cache read action takes place when the same result set (or subset) is subsequently accessed. Hence, the result set is retrieved from memory, instead of being retrieved from disk. An important feature of the OLAP cache is that read/write access to the cache is available to all users on the application server. All users get the benefits of faster data retrieval from the cache if another process or user has previously written the data to it.

The OLAP cache has a physical memory size and hence a storage limit. The cache persistence mode can be setup to either swap to disk or be overwritten when it’s full. OLAP cache data is invalidated each time a request is loaded into a Basic InfoCube; meaning that any Basic InfoCube that is updated daily, will have its underlying cache data wiped. This makes sense as the cached query result set data for that InfoCube, whilst accurate as a historical snapshot is now no longer representative of the current data in the InfoCube. Hence the old query result set needs to be erased. This begs the question, when should the cache be filled with the required query result sets?

Default OLAP Cache

In the default approach post BW install, only users progressively fill the OLAP cache as they access queries. Users that access query data for the first time will retrieve data from disk and not the OLAP cache – meaning they will experience a longer delay than users subsequently accessing the same data set.

The delay in retrieving query data from the database for the first user access can be significant for certain queries – a key problem that needs to be solved. This first access delay is compounded for web templates or dashboards that execute several queries for the first time in parallel. The delay is also influenced by the complexity of the query and your success with other performance optimisation techniques. Furthermore the delay will occur during user runtime – a particularly inconvenient time – as we want to shift any delays in accessing data away from the user’s waking hours. In any case, if this query result delay is significant – then you’ve lost the user already.

Accessing the OLAP Processor

(Figure: OLAP Processor access hierarchy)

When searching for requested data, a BI query accesses performance-optimised sources in a fixed order, starting with the OLAP cache. The important take-away is that the OLAP cache is read first, in preference to all other performance optimisation techniques, even the BI Accelerator. However, this isn't to say that these techniques don't have their place.

Performance optimisation techniques, such as aggregating data, compressing data, limiting table size, etc. are still vital to any successful implementation. They are still valid and necessary even with an OLAP cache. Not only will they help to soften the blow of the first disk read as previously discussed, but it is impossible and not efficient to cache all the permutations of query result sets. OLAP analysis by its very nature is predicated on not being able to predict every possible query request.

Even if you did try to make such a prediction, you’d most likely end up caching a whole bunch of result sets that:
  • Users will never read; and
  • Would be invalidated on a regular basis as the Cubes are updated.

Pre-filling the cache

The idea of pre-filling or ‘warming up’ the OLAP cache is to defer the database operations away from user run-time to another more convenient point in time, system time. Such a time could be after the daily load in the early hours of the morning when few (if any) query requests are being processed.

The database is going to have to do the work at some point to fill the cache; this much is clear. It could also be argued that this work, the 'pre-filling' of the OLAP cache, represents an additional operation and load on the system on top of the regular data flow processes. In my opinion this is probably the wrong way of looking at the problem. If we can accurately predict what the most common query requests are going to be, then there will be no additional load on the system: these query requests would happen in the normal course of business. Additional load would only be generated if we go overboard caching every possible query request permutation and that cached data is never accessed.

To prevent this we should endeavour to restrict the data loaded into the cache to likely requests only, and this will vary from Data Mart to Data Mart. Tools are available in SAP BW through transaction RSRCACHE to determine which query entries have been read from the cache. This should be monitored on an ongoing basis to optimise your caching strategy and stop poorly performing cache entries from being continually generated. See 'Monitoring the OLAP Cache' in the SAP documentation for further details.

The most common and convenient way of pre-filling the OLAP cache is through the BEx Broadcaster, although there are several other approaches, such as:
  • Using the Reporting Agent (3.x release) to fill the cache (transaction code REPORTING_AGENT). Note that in 7.x a warning message appears stating that this can only be run for 3.x queries and templates, but it works fine with 7.x queries. However, it doesn't work with 7.x web templates and will not be developed further by SAP;
  • Creating a program/function module/web service that executes the relevant queries (see FM RRW3_GET_QUERY_VIEW_DATA); or
  • Creating an Analysis Process to run the relevant queries and output the data to a dummy target such as a DSO.
Each of these methods can be inserted into a process chain and hence can be regularly scheduled.
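For the program/function module route, a minimal sketch might look like the following. The InfoProvider and query names are placeholders, and you should check the exact signature of RRW3_GET_QUERY_VIEW_DATA in SE37 on your release before relying on it.

```abap
* Sketch: executing a query via RRW3_GET_QUERY_VIEW_DATA to warm the
* OLAP cache. ZCUBE01 and ZQUERY01 are placeholder names. The result
* tables are simply discarded; running the query is enough to write
* its result set into the cache.
DATA: lt_cell_data TYPE rrws_t_cell,
      lt_axis_data TYPE rrws_thx_axis_data.

CALL FUNCTION 'RRW3_GET_QUERY_VIEW_DATA'
  EXPORTING
    i_infoprovider = 'ZCUBE01'
    i_query        = 'ZQUERY01'
  IMPORTING
    e_cell_data    = lt_cell_data
    e_axis_data    = lt_axis_data.
```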
BEx Broadcasting Approaches:
Some recommended broadcasting approaches that I've seen work well on implementations are listed below. Let me know what approaches have worked well for you; I'd be interested to hear your feedback.

1. Schedule broadcast on data change

First, in the BEx Broadcaster, schedule the broadcast to run on a data change event.



Add a data change event to the end of each datamart load process chain.

Process Chain - Event data change

This event triggers the broadcasts, which pre-fill the OLAP cache. In the data change event you specify which InfoProvider has had its data changed (and hence which OLAP cache entries have been erased), and broadcasts scheduled on the event for that InfoProvider are triggered.

2. Web Templates

Web templates can contain multiple data providers and hence multiple queries.  Instead of creating individual broadcast settings for each query in the template, a single global setting can be created for the template.  When scheduled and broadcasted, this web template setting will run, and hence cache, all of the underlying queries in the template as they would appear in the dashboard.

Unfortunately for web templates, there isn't a broadcast distribution type to fill the OLAP cache.  Instead, if your objective is only to fill the OLAP cache, another setting such as Broadcast to the Portal will need to be used.  I'd recommend the following settings:

Distribution Type:   Broadcast to the Portal
Output Format:   Online Link to Current Data
Authorization User:   ADMIN
Export Document to My Portfolio:   Check
Export User:   ADMIN

Using these settings will result in generated online links to the web template (and not larger files) being posted for the Admin User in the Business Intelligence Role in My Portfolio/BEx Portfolio/Personal BEx Documents.

Broadcast - KM Admin Folder

The online links posted in this folder will overwrite each other, preventing a large number of documents from accumulating in this directory over time.

3. Queries

Queries can be individually broadcasted.  Unlike with broadcasting a web template, more settings are available, such as broadcasting by multiple selection screen variants and by filter navigation on the query result set, i.e. by a characteristic.

Cache maintenance

Now that you’ve broadcast queries/web templates to the OLAP cache, you’re going to need to maintain it. To do so, a little more information is needed about the technical architecture of the SAP BW OLAP cache. Technically, the OLAP cache consists of two caches, the local cache and the global cache. These two caches can be setup with different parameters, such as size. The local cache is accessed by a single user within a session on an application server. Local cache data is retained in the roll area as long as it is required by the OLAP processor for that user. Global cache data is shared by all users across all application servers. Global cache data is retained for as long as it is required and will either be deleted when it is no longer needed, e.g. the underlying data has changed and the cache is invalidated, or depending on the persistence mode will be swapped to disk when the cache size is exceeded.

The cache size parameters indicate the maximum size that the local and global caches are permitted to grow to.  The global cache size should be larger than the local cache size, as the global cache is accessed across multiple users.  The local and global cache size values should be generally extended from their default settings when you install SAP BW. This will take advantage of memory available and ensure that the stored cache entries do not exceed the cache capacity. The size parameters should be reviewed periodically depending on cache usage, hit ratio and overflow. The cache size must be appropriate to manage the frequency of query calls and the number of users.  Some indications that your cache size should be extended are:
  • The number of cache entries has filled the capacity of the cache at least once.
  • Average number of cache entries corresponds to at least 90% of the capacity or has reached this capacity around 30% of the time.
  • The ratio of hits to gets is lower than 90%.
You can configure the cache parameters using transaction RSCUSTV14.  Note that the size of the global cache is determined by the minimum of the Global Size MB parameter and the actual memory available in the shared memory buffer (profile parameter rsdb/esm/buffersize_kb). You should therefore use transaction ST02 to check whether the size of the export/import buffer is appropriate, as the default setting of 4,096 KB is often too small.  SAP recommends the following settings:
  • rsdb/esm/buffersize_kb=200000
  • rsdb/esm/max_objects=10000
The permitted cache size needs to be realistic though. If you’re talking in the multiples of gigabytes, you may wish to review why you need so much data in the cache in the first place. A significant OLAP processing load would need to take place to generate that much cache data.

Wrap up

We've covered a fair bit of ground on the SAP BW OLAP cache in this post. Significant performance benefits can be achieved by having the system (and not the user) pre-fill the cache on a regular basis as the underlying data changes. The benefits are available to all users, as the OLAP analysis results are stored in a central repository, the OLAP cache. Furthermore, the initial hit on the database (and subsequent delay) when the query/web template is run for the first time after the data has changed is taken away from the user and performed by the system at a more convenient time. A benefit that users will surely appreciate first thing in the morning when they arrive at their desks.

I hope you found this post useful.