Thursday, 27 September 2012

OLAP Cache Optimisation

I would like to expand on one aspect of improving reporting performance in SAP BW – exploiting the OLAP cache. The OLAP cache is standard functionality that is set up during a technical install of SAP BW. However, to realise the full benefits of speedy queries based on cached data, some additional steps need to be taken post-install. I’ve noticed that these steps, and in particular the scheduled population of the OLAP cache, have been overlooked on quite a few implementations.

In this post I’ll focus on the concept of ‘pre-filling’ the cache on a regular basis through the Business Explorer (BEx) Broadcaster. I’ll also cover some points to consider in maintaining the OLAP cache and tuning its performance. Let’s use the functionality available in a standard install to give your users the benefits of cached data!

Cache: A safe place for hiding or storing things.

A cache can be described as a buffer employed to re-use commonly occurring items. A cache has many applications in a computing context: the memory cache, the internet browsing history cache, disk caching, etc. In all of these cases, caching delivers an answer, or retrieves data, faster than the alternative. For example, faster data access can be achieved by caching data in registers (accessed in 1-5 clock cycles) rather than retrieving it from slower RAM (10-100 cycles).
SAP BW OLAP cache
In a SAP BW context, the OLAP cache buffers query result sets retrieved from the database (disk) by storing them in resident memory as highly compressed cluster data. Why bother storing query results in memory? The answer is simple – this is a speed contest between disk access and memory access, with memory being the much faster winner. On both sides of the equation, physical and electronic, there’s compelling logic that application performance will be improved by maintaining data in memory rather than retrieving it from disk. Disk access is one of the few mechanical (as opposed to electronic) functions integral to processing and suffers from the slowness of moving parts. On the software side, disk access also involves a ‘system call’ that is relatively expensive in terms of performance. The desire to improve performance by avoiding disk access is the fundamental rationale for database management system (DBMS) caching and other file system caching methods. SAP BW is no different – it contains caching at the application layer to improve OLAP reporting performance.

Users progressively fill the OLAP cache as they access queries. When a user accesses a query result set for the first time, a database read from disk takes place and the result set is then written to cache memory. A cache read action takes place when the same result set (or subset) is subsequently accessed. Hence, the result set is retrieved from memory, instead of being retrieved from disk. An important feature of the OLAP cache is that read/write access to the cache is available to all users on the application server. All users get the benefits of faster data retrieval from the cache if another process or user has previously written the data to it.

The OLAP cache has a physical memory size and hence a storage limit. The cache persistence mode can be set up either to swap to disk or to overwrite entries when the cache is full. OLAP cache data is invalidated each time a request is loaded into a Basic InfoCube, meaning that any Basic InfoCube that is updated daily will have its underlying cache data wiped. This makes sense, as the cached query result set data for that InfoCube, whilst accurate as a historical snapshot, is no longer representative of the current data in the InfoCube. Hence the old query result set needs to be erased. This begs the question: when should the cache be filled with the required query result sets?
Default OLAP Cache

In the default approach post-install, the OLAP cache is filled only progressively by users as they access queries. Users that access query data for the first time will retrieve data from disk and not the OLAP cache – meaning they will experience a longer delay than users subsequently accessing the same data set.

The delay in retrieving query data from the database for the first user access can be significant for certain queries – a key problem that needs to be solved. This first access delay is compounded for web templates or dashboards that execute several queries for the first time in parallel. The delay is also influenced by the complexity of the query and your success with other performance optimisation techniques. Furthermore, the delay will occur during user runtime – a particularly inconvenient time – as we want to shift any delays in accessing data away from the user’s waking hours. In any case, if this query result delay is significant, then you’ve lost the user already.
Accessing the OLAP Processor
OLAP Processor - Access hierarchy
When searching for requested data, a BI query accesses performance-optimised sources in the order shown in the hierarchy above. The important take-away is that the OLAP cache is read first, in preference to all other performance optimisation techniques, even the BI Accelerator. However, this isn’t to say that these techniques don’t have their place.

Performance optimisation techniques, such as aggregating data, compressing data, limiting table size, etc. are still vital to any successful implementation. They remain valid and necessary even with an OLAP cache. Not only do they help to soften the blow of the first disk read as previously discussed, but it is neither possible nor efficient to cache all the permutations of query result sets. OLAP analysis by its very nature is predicated on not being able to predict every possible query request.

Even if you did try to make such a prediction, you’d most likely end up caching a whole bunch of result sets that:
  • Users will never read; and
  • Would be invalidated on a regular basis as the Cubes are updated.
Pre-filling the cache
The idea of pre-filling or ‘warming up’ the OLAP cache is to defer the database operations away from user run-time to a more convenient point in time: system time. Such a time could be after the daily load, in the early hours of the morning, when few (if any) query requests are being processed.

The database is going to have to do the work at some point in time to fill the cache – this much is clear. It could also be argued that this work, the ‘pre-filling’ of the OLAP cache, represents an additional operation and load on the system, on top of the regular data flow processes. In my opinion this is probably the wrong way of looking at the problem. If we can accurately predict what the most common query requests are going to be, then there will be no additional load on the system. These query requests would happen in the normal course of business. Additional load would only be generated if we go overboard caching every possible query request permutation and that cached data is then never accessed.

To prevent this, we should endeavour to restrict the data loaded into the cache to only the likely requests, and this will vary from Data Mart to Data Mart. Tools are available in SAP BW through Transaction RSRCACHE to determine which query entries have been read from the cache. This should be monitored on an ongoing basis to optimise your caching strategy and to stop poorly performing cache entries from being continually generated. See Monitoring the OLAP Cache in the SAP Documentation for further details.

The most common and convenient way of pre-filling the OLAP cache is through the BEx broadcaster, although there are several other approaches, such as:
  • Using the Reporting Agent (3.x release) to fill the cache (Transaction Code REPORTING_AGENT). Note that in 7.x a warning message appears stating that this can only be run for 3.x queries and templates, but it works fine with 7.x queries. However, it doesn’t work with 7.x web templates and will not be developed further by SAP;
  • Creating a program/function module/web service that executes the relevant queries (see FM RRW3_GET_QUERY_VIEW_DATA and the sketch after this list); or
  • Creating an Analysis Process to run the relevant queries and output the data to a dummy target such as a DSO.
Each of these methods can be inserted into a process chain and can therefore be scheduled regularly.
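
As an illustration of the program/function module approach, the sketch below executes a query via RRW3_GET_QUERY_VIEW_DATA purely for the side effect of writing its result set to the OLAP cache. The report, InfoProvider and query names are hypothetical, and the function module’s exact signature should be verified in SE37 on your release.

*&---------------------------------------------------------------------*
*& Minimal OLAP cache warm-up sketch. Report, InfoProvider and query
*& names are hypothetical; verify the FM signature in SE37 first.
*&---------------------------------------------------------------------*
REPORT z_olap_cache_warmup.

DATA: lt_param     TYPE STANDARD TABLE OF w3query, " optional query variables
      lt_cell_data TYPE rrws_t_cell,               " result cells (discarded)
      lt_axis_data TYPE rrws_thx_axis_data.        " axis data (discarded)

* Executing the query forces the OLAP processor to read from the database
* and write the result set into the global OLAP cache. The returned data is
* simply thrown away - the side effect of filling the cache is all we want.
CALL FUNCTION 'RRW3_GET_QUERY_VIEW_DATA'
  EXPORTING
    i_infoprovider = 'ZSD_C01'        " hypothetical InfoCube
    i_query        = 'ZSD_C01_Q001'   " hypothetical query technical name
    i_t_parameter  = lt_param
  IMPORTING
    e_cell_data    = lt_cell_data
    e_axis_data    = lt_axis_data.

WRITE: / 'Query executed - result set is now in the OLAP cache.'.

Scheduled as an ABAP program step in a process chain, a report along these lines achieves much the same effect as a broadcast setting, which can be handy when the BEx Broadcaster is not an option.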
BEx Broadcasting Approaches:
Some recommended broadcasting approaches that I’ve seen work well on implementations are listed below. Let me know what approaches have worked well for you – I’d be interested to hear your feedback.

1. Schedule broadcast on data change

Firstly, in the BEx Broadcaster, schedule the broadcast setting to run on a data change event.



Add a data change event to the end of each data mart load process chain.

Process Chain - Event data change

This event triggers the broadcasts which pre-fill the OLAP cache. In the data change event you specify which InfoProvider has had its data changed (and hence had its OLAP cache entries invalidated), and any broadcasts scheduled on the event for that InfoProvider are triggered.

2. Web Templates

Web templates can contain multiple data providers and hence multiple queries.  Instead of creating individual broadcast settings for each query in the template, a single global setting can be created for the template.  When scheduled and broadcasted, this web template setting will run, and hence cache, all of the underlying queries in the template as they would appear in the dashboard.

Unfortunately for web templates, there isn’t a broadcast distribution type to fill the OLAP cache.  Instead, if your objective is only to fill the OLAP cache, another setting such as Broadcast to the Portal will need to be used.  I’d recommend the following settings:

Distribution Type:   Broadcast to the Portal
Output Format:   Online Link to Current Data
Authorization User:   ADMIN
Export Document to My Portfolio:   Check
Export User:   ADMIN

Using these settings will result in generated online links to the web template (and not larger files) being posted for the Admin User in the Business Intelligence Role in My Portfolio/BEx Portfolio/Personal BEx Documents.

Broadcast - KM Admin Folder

The online links posted in this folder will overwrite each other, preventing a large number of documents from accumulating in this directory over time.

3. Queries

Queries can be individually broadcasted.  Unlike with broadcasting a web template, more settings are available, such as broadcasting by multiple selection screen variants and by filter navigation on the query result set, i.e. by a characteristic.
Cache maintenance
Now that you’ve broadcast queries/web templates to the OLAP cache, you’re going to need to maintain it. To do so, a little more information is needed about the technical architecture of the SAP BW OLAP cache. Technically, the OLAP cache consists of two caches, the local cache and the global cache. These two caches can be set up with different parameters, such as size. The local cache is accessed by a single user within a session on an application server. Local cache data is retained in the roll area as long as it is required by the OLAP processor for that user. Global cache data is shared by all users across all application servers. Global cache data is retained for as long as it is required and will either be deleted when it is no longer needed (e.g. the underlying data has changed and the cache is invalidated) or, depending on the persistence mode, swapped to disk when the cache size is exceeded.

The cache size parameters indicate the maximum size that the local and global caches are permitted to grow to.  The global cache size should be larger than the local cache size, as the global cache is accessed by multiple users.  The local and global cache size values should generally be increased from their default settings when you install SAP BW. This takes advantage of the memory available and ensures that the stored cache entries do not exceed the cache capacity. The size parameters should be reviewed periodically depending on cache usage, hit ratio and overflow. The cache size must be appropriate to the frequency of query calls and the number of users.  Some indications that your cache size should be extended are:
  • The number of cache entries has filled the capacity of the cache at least once.
  • The average number of cache entries corresponds to at least 90% of the capacity, or the capacity has been reached around 30% of the time.
  • The ratio of hits to gets is lower than 90%.
You can configure the cache parameters using Transaction RSCUSTV14.  Note that the size of the global cache is determined by the minimum of the Global Size MB parameter and the actual memory available in the shared memory buffer (profile parameter rsdb/esm/buffersize_kb). You should therefore use Transaction ST02 to check whether the size of the export/import buffer is appropriate – as the default setting of 4,096 KB is often too small.  SAP recommends the following settings:
  • rsdb/esm/buffersize_kb=200000
  • rsdb/esm/max_objects=10000
The permitted cache size needs to be realistic though. If you’re talking in the multiples of gigabytes, you may wish to review why you need so much data in the cache in the first place. A significant OLAP processing load would need to take place to generate that much cache data.
Wrap up
We’ve covered a fair bit of ground on the SAP BW OLAP cache in this post. Significant performance benefits can be achieved by having the system (and not the user) pre-fill the cache on a regular basis as the underlying data changes. The benefits are available to all users, as the OLAP analysis results are stored in a central repository, the OLAP cache. Furthermore, the initial hit on the database (and the subsequent delay) when the query/web template is run for the first time after the data has changed is taken away from the user and performed by the system at a more convenient time. A benefit that they’ll surely appreciate first thing in the morning when they arrive at their desks.

Hope you found this post useful.

Why migrate SAP BW data flows from 3.x to 7.x?

This post starts off with some of the discoveries and useful features found while using the BW 7.3 data flow migration tool (the migration wizard), and then expands on the main topic of the post: why bother migrating a SAP BW data flow from the 3.x objects over to 7.x? This question has persisted in the SAP BW community for some time now, since the introduction of BW 7.0 data flows in 2005. With the recent release of BW 7.3, the case for migration has become even more compelling.
 

Migration Wizard

 

Prior to the BW 7.3 release, one would have to manually migrate each object in a data flow separately from the 3.x version to 7.x. These objects include the 3.x update rules, InfoSources, transfer rules and DataSources. However, through the migration wizard it is now possible to automate the migration of entire data flows to 7.x, including the addition of transformations, Data Transfer Processes (DTPs) and 7.x InfoSources (if required). The screenshot below shows the options available in the wizard.



Migration Options

One particularly useful feature of the wizard is the automated update of the migrated loading processes (specifically the additional DTPs) into process chains. This saved a lot of searching and manual re-work in process chains post-migration, adding DTPs in the right spot.

There is also a clear concept in the wizard of segmenting data flow migrations into projects – enabling more sophisticated management of data flow migrations, and of recovery, than previously existed.

I had some successes and failures with the migration wizard automating the migration of entire data flows. Well, ‘failures’ is probably too harsh, but some of the migrations were not successful and did require some manual re-work. Here’s a screenshot of the error log to give you an idea of the ‘look and feel’ of the error reporting:

Migration Wizard - Start Routine Errors


Errors were particularly apparent with update rules whose start routines relied on the old DATA_PACKAGE concept being automatically migrated into 7.x transformations with SOURCE_PACKAGE start routines. I encountered a few syntax errors there, with the COMM_STRUCTURE for some forms missing components – but no major problem; it’s hard to cover every coding scenario in one migration tool.
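
To make that concrete, here is a simplified before/after sketch of the kind of rewrite the wizard attempts. The field /BIC/ZSTATUS and the filter value are purely illustrative, and both fragments live inside generated code rather than standing alone.

* BEFORE - 3.x update rule start routine (fragment): DATA_PACKAGE is an
* internal table typed on the communication structure (COMM_STRUCTURE).
* /BIC/ZSTATUS is a hypothetical field used only for illustration.
DELETE DATA_PACKAGE WHERE /bic/zstatus = 'D'.

* AFTER - 7.x transformation start routine (fragment): the equivalent logic
* sits in a method of the generated local class and works on SOURCE_PACKAGE.
METHOD start_routine.
  DELETE source_package WHERE /bic/zstatus = 'D'.
ENDMETHOD.

Where start routines go beyond simple filtering, for example referencing COMM_STRUCTURE fields inside FORM routines, the wizard can leave gaps like the missing components mentioned above, which then need to be patched by hand.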

One annoying bug I encountered was not related to the migration wizard, but did occur post-migration. It resulted in the following error:

Start Routine Syntax Error


This bug centred on the automatic update of the _ty_s_SC_1_full structure (used in the migrated form routine_9998 in the start routine). This structure was not automatically updated to reflect the source InfoSource structure after the addition of some custom Z InfoObjects to the source InfoSource. Even after a manual update of the structure to include the additional Z InfoObjects and a successful save, the change could not be activated. The _ty_s_SC_1_full structure would always automatically revert to a previous version, sans Z fields (see SAP Note 1052648 for further details on what is supposed to occur). Weird – but we were able to work around the issue without using the _ty_s_SC_1_full structure, while this bug is being fixed.

No tool, particularly a newly released one, is ever perfect.  I found the migration tool to be a great accelerator in setting up the necessary 7.x objects (RSDS, etc.), with only a little extra tinkering around the edges required to get everything up and running.

Another great feature of the migration tool is that it provides feedback on which objects were not successfully migrated, enabling you to channel your efforts into the right place to fix errors, without having to go through the whole data flow wondering what migrated successfully and what didn’t.

Have a look at the BW 7.30: Data Flow Migration tool blog entry on SCN for step-by-step screenshots on using the migration wizard.

So what’s the point of migrating?

 

The above section pre-supposes that one would wish to migrate a data flow from 3.x to 7.x. This raises the question of why someone would wish to perform such a migration in the first place. To me this situation arises in three categories:

1. A legacy 3.x implementation that used the 3.x transfer/update rules data flow concept;

2. A 7.x implementation that used the 3.x data flow concept;

3. Any implementation that has installed a 3.x business content data flow and not migrated it!
 
Let’s look at these categories in further detail:
1. Legacy 3.x implementations
For the first point on legacy 3.x implementations, fair enough, your data flow model was built using the ETL logic that existed at the time. The question now is, does one invest in migrating the logic over to 7.x transformations, or leave things as they are… after all things are working just fine for the moment with transfer/update rules.

a. New Objects in BW 7.0

The arrival of SAP BW 7.0 (also known as SAP NetWeaver 2004s), brought with it a large change in the management of data flows. These changes helped to streamline and clearly segregate data flow objects based on their primary roles, i.e. persisting source data (DataSource), acquiring source data (InfoPackage), transforming data (Transformation) and moving data between persistent storage objects (Data Transfer Processes). 

Improvements were also made in terms of data transfer performance, including parallelisation, reduction of loading steps through the elimination of transfer and update rules, error handling, etc. This is explained more comprehensively in SAP documentation.

b. Faster Data Load in BW 7.3

In the BW 7.3 release, further improvements have been made in the speed and capability of transforming data. These include look-ups from DSOs (using the Read from DataStore option) and from master data objects (using navigation attributes as a source). The suggested improvement in performance is between 10 and 20%. Without 7.x transformations in your data flow, you’ll be missing out on these benefits.
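
For context, the Read from DataStore rule type replaces the kind of hand-coded lookup many transformations carry today. A simplified sketch of that manual pattern is below; the DSO ZSD_O01, its active table and all field names are hypothetical.

* Hand-coded DSO lookup that the 7.3 'Read from DataStore' rule type makes
* redundant. DSO ZSD_O01 (active table /BIC/AZSD_O0100) and the field names
* are hypothetical - shown only to illustrate the manual effort saved.

* Global part of the routine class: lookup buffer.
DATA: gt_lookup TYPE STANDARD TABLE OF /bic/azsd_o0100,
      gs_lookup TYPE /bic/azsd_o0100.

* Start routine: buffer the lookup data once per data package.
IF source_package IS NOT INITIAL.
  SELECT * FROM /bic/azsd_o0100
    INTO TABLE gt_lookup
    FOR ALL ENTRIES IN source_package
    WHERE doc_number = source_package-doc_number.
  SORT gt_lookup BY doc_number.
ENDIF.

* Field routine: read the buffer for each source record.
READ TABLE gt_lookup INTO gs_lookup
  WITH KEY doc_number = source_fields-doc_number
  BINARY SEARCH.
IF sy-subrc = 0.
  result = gs_lookup-/bic/zregion.
ENDIF.

With the new rule type, that buffering and read logic is configured in the rule instead of being hand-written and maintained in ABAP.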

As a side note, data activation has also been enhanced in BW 7.3 through the use of a package fetch of the active table, as opposed to single look-ups. The suggested improvement in data activation is between 15-30%.

c. Go-forward ETL strategy

The above benefits available in BW 7.3 serve as an indication of where the focus of the development team at SAP lies. Legacy 3.x data flow support is rightly being maintained; however, the focus is on improving the ‘new’ 7.x data flow with the transformation/DTP concepts, and not the 3.x data flow. Indeed, in SAP BW 7.3, new capabilities such as Semantic Partitioning have been introduced, and these require 7.x transformations and InfoSources.

All in all with these factors in mind, the 7.x data flow is correctly classified as the ‘go-forward’ ETL strategy in BW. Furthermore, the migration of existing data flows to 7.x is generally recommended by SAP in cases where one wishes to ‘realise benefits of the new concepts and technology’.

d. Re-implementation Risk

With 7.x data flows established as the go-forward ETL strategy, this raises the spectre of the dreaded ‘re-implementation risk’ for customers that continue to use 3.x data flows. PepsiCo’s CTO Javed Hussain refers to the general SAP upgrade dilemma in a Jan 2012 ASUG news article, in which he states ‘you don’t maintain a certain level of version or capability, then you’re going to fall behind two ways—you’re going to have a support problem with what you’re running today; and you’re going to fall behind your competition because you haven’t upgraded yet.’

This rings true for the BW ETL strategy: if you stick with the older 3.x technology, you’re also going to ‘fall behind’, as you won’t be able to leverage the development effort SAP is putting into modelling and transforming data with the ‘go-forward’ 7.x data flow. Also, at some point you’re going to run into a support issue, as either (a) 3.x data flow technology won’t be supported by SAP in the future, and/or (b) your staff/consultants could become less and less familiar with this ‘obsolete’ data flow over time.

However the period of time until you run into a support issue, could be an extended one. For example, some of the major banks in Australia are still running on good old COBOL and being supported by an aging army of consultants. No doubt the Aussie banks have benefited from a reduced IT CAPEX spend over the years by continuing to support and flog the COBOL horse. This perfectly reflects Hussain’s comment above in that the systems (being core banking) are (a) becoming increasingly difficult to support, and (b) perhaps now lacking in competitive advantage.

Indeed, it could be argued that these support issues are linked to a spate of recent bank IT failures. Some analysts are suggesting that, due to this legacy, Australian bank consumers should be prepared for ‘15 years of bank IT failures’. So guess what the banks are doing… core upgrades, otherwise known as the dreaded ‘re-implementation’. Pretty dramatic, yes… but getting back to SAP BW, I don’t want to be doing a manual re-implementation of an entire data flow sometime down the track.
2. 7.x implementation that used 3.x data flow
My opinion is that point number two should basically never occur. Given the benefits of the 7.x data flow, including enhanced performance, functionality and its status as the ‘go-forward’ ETL strategy, there’s no logical reason to implement an obsolete 3.x data flow from scratch in a 7.x environment.
3. Any implementation that has installed a 3.x business content data flow and not migrated it!
Point number three however, is a bit of a different story. SAP continues to supply a mixture of 3.x and 7.x data flows in their business content. Business content is a collective term for pre-configured templates of objects, canned reports and data modelling scenarios from DataSource to Data Mart that are based on SAP and customer experience. I find business content great and use it often as an implementation accelerator, but have found some drawbacks with the continuing supply of some data flows by SAP in 3.x format. 

So do I take the leap and migrate?

 

So far I’ve extolled the virtues of migrating your data flow from 3.x to 7.x, but is it time to take the leap and migrate? What are the risks and is this wise in a productive environment?

The focus of this conversation should be flipped around to ‘why shouldn’t you migrate?’. I think, given the points raised, that any business needs to be able to justify why it would be implementing or continuing to support older legacy ETL data flows in a 7.x BW system. I’m not saying that continued use is never justifiable, and I have seen scenarios where the extensive breadth of productive BI data flows would carry significant cost in a migration project. However, in such scenarios where the use of 3.x data flows is continued, a ‘go forward’ plan should be in place, i.e. is it realistic to still be on 3.x data flows in 5 years’ time?

In terms of the risks, there is always a risk that the data flow migration may not be successfully completed, with conversion errors occurring in one or more objects in the data flow. This risk still persists even with the use of the mass migration wizard, and I’ve highlighted some of the errors I’ve encountered with the wizard in the first section of this post.

Like any development, the migration work should take place in the development system and never in production. I’d recommend using the migration wizard to automate the conversion to the 7.x data flow as much as possible and to also ensure that you can recover back to the 3.x data flow if the conversion is not successful. To aid in this, each data flow should have a separate migration project. 

Some ABAP knowledge will be required in more complex transformations with start/end routines and ABAP transformation code, to ensure that the conversion utility has properly handled the transformation of the code to ABAP OO where applicable. 

For testing purposes, I’d recommend taking a sample snapshot of data target results prior to the data flow conversion. This should then be compared to the same data sample post data flow conversion, after you’ve loaded data into the data target using a DTP. There are various ways of doing this depending on your comfort level, the main point is that you need to compare the same set of data pre and post conversion. To aid in this you may wish to move the pre data-flow conversion data into a copied data target, or just save the results in a spreadsheet, etc. prior to reloading the data target, post data flow conversion.

To mitigate the risk in transporting the migrated data flow to a production environment, the entire data flow, with process chain and all should be thoroughly tested in a QA environment (if you’re fortunate enough to have one). Also if you are fortunate enough to be able to get away with it, insist that you only finally transport data flows in a modular fashion, flow by flow, into production and don’t go with a big bang approach.

Hope you found this post useful.


Thursday, 20 September 2012

Ad-hoc query, reporting and analysis from Business Objects

Let end users interact with business information and answer ad hoc questions themselves....without having to understand complex database languages and underlying structures.

These tools support query generation and integrated analysis, as well as basic report authoring and information sharing over intranets and extranets.

More details @ http://www.businessobjects.com/product/qra/adhoc.asp


Demo for SAP BusinessObjects 4


Watch a feature preview of SAP BusinessObjects 4.

Learn more @ http://blogs.sap.com/analytics

Watch the SAP BusinessObjects User Conference online to see the latest innovations in analytics @ http://spr.ly/sboucvirtualplatform

New Webi in 4.0 Preview Demo

Watch a sneak preview of the new WebIntelligence for SAP BusinessObjects 4.0

SAP BusinessObjects Dashboards Statement of Direction: All Access SAP Xcelsius Webinar

Check out the post-webinar Q&A with Anita Gibbings, Solution Marketing and Ian Mayor, Solution Management from SAP @ http://www.youtube.com/watch?v=zFIZXwcaySo&feature=g-all-lik

Mico Yuk from EverythingXcelsius.com and members of the SAP Business Intelligence team discuss the future of SAP BusinessObjects Dashboards, including strategy and direction, at the All Access SAP Xcelsius Webinar on April 18, 2012.

Q&A begins @ 17:07

For more information on analytics for SAP, visit http://blogs.sap.com/analytics.

New Crystal Reports in 4.0 demo


Check out the new Crystal Reports, get more information on SAP BusinessObjects 4.0 @ blogs.sap.com/analytics

SAP BusinessObjects WebIntelligence 4.0 demo on SAP HANA


This demo shows a new WebIntelligence report with SAP BusinessObjects 4.0 running on top of in-memory computing solutions SAP HANA.

Learn more about Business Analytics @ http://blogs.sap.com/analytics

Dashboards with SAP BusinessObjects 4.0


Check out this demo of dashboard design (formerly known as Xcelsius) within the new SAP BusinessObjects 4.0 release. Register to watch the launch of 4.0 @ http://virtualevents.sap.com/business-analytics/login.aspx

Learn more @ www.sap.com/analytics 


SAP HANA 1.0: Overview - Use HANA with BI 4.0


SAP HANA is a high-performance analytic appliance that provides SAP software components optimized on hardware from SAP's leading hardware partners. 

HANA's in-memory computing engine enables organizations to effectively analyze business operations based on very large volumes of detailed real-time data, without affecting backend enterprise applications or databases.

In this tutorial, you will review high-level steps for using SAP HANA with SAP BusinessObjects Business Intelligence (BI) tools.

Visit @ http://www.sap.com/LearnBI to view full catalog of interactive SAP BusinessObjects BI Suite tutorials. 


How to Create a Universe - SAP BusinessObjects Information Design Tool 4.0


Learn more about SAP Analytics Innovations in 2012. Watch the online SAP Analytics North America Forum...Register @ http://spr.ly/SAPAF

For the higher quality, interactive version of this tutorial, along with other SAP BusinessObjects BI Suite tutorials, visit @ http://www.sdn.sap.com/irj/scn/bi-suite-tutorials


Tuesday, 21 August 2012

Enhancement Category for Table Missing - COPA

Scenario:-

I’m trying to enhance the CO-PA DataSource 1_CO_PA750W001 with the WW620, WW621 and WW622 fields from table CE1W001.

I appended the fields to the append structure, but while trying to activate it I got the warnings below and was not able to see the newly added fields in the extract structure.

Enhancement Category for table Missing Warning :-

"Enhancement Category for table Missing"
"Enhancement Category for include or subtype Missing"

Solution :-
 
To overcome this warning, make the following setting in edit mode:


From the menu bar, choose Extras -> Enhancement Category, then select the category that suits your need. In my case, as I wanted to enhance the DataSource, I selected the option “Can Be Enhanced (Deep)”.

 

 
For more info about the above options:

Enhancement Category Selection

Structures and tables that were defined by SAP in the ABAP Dictionary can be enhanced subsequently by customers using Customizing includes or append structures. The enhancements refer not only to the structures/tables themselves, but also to dependent structures that adopt the enhancement as an include or referenced structure. Append structures that only take effect at the end of the original structure can also cause shifts - in the case of dependent structures - even within these structures.

You must select an enhancement category for the following reason: In programs where there is no active Unicode check, enhancements to tables and structures can cause syntax and runtime errors during type checks and particularly in combination with deep structures.

In programs where there is an active Unicode check, statements, operand checks, and accesses with an offset and length are problematic - for example, if numeric or deep components are inserted into a purely character-type structure and the structure thus loses its character- type nature.

Depending on the structure definition, the radio buttons allowed in the dialog box are ready for input. Choose one of the possible enhancement categories:
  • Cannot be enhanced
The structure must not be enhanced.
  • Can be enhanced and character-type
All structure components and their enhancements must be character-type (C, N, D, or T). The original structure and all enhancements through Customizing includes or through append structures are subject to this limitation.
  • Can be enhanced and character-type or numeric
The structure and its enhancement must not contain any deep data types (tables, references, strings).
  • Can be enhanced in any way
The structure and its enhancement may contain components whose data type can be of any type.
  • Not classified
This category can be chosen, for example, for a transition status; however, it must not be chosen for creating structures.
The rules for defining the enhancement category result implicitly from the structure setup and the classification of the types used. These rules are as follows:
  • If the object contains at least one numeric type or a substructure or component (field has a structure/table/view as its type) that can be enhanced numerically, the object can no longer be enhanced character-type, but is itself, at most, enhanceable character-type or numeric.
  • If the object contains a deep component (string, reference, or table type), or it contains a substructure or component that is marked as enhanceable in any way, then the object itself is enhanceable in any way.
  • If the object does not contain any substructure or component that is marked as enhanceable, you can select cannot be enhanced. If the structure has not yet been enhanced, you can choose the category cannot be enhanced in any case.
If you are creating new tables and structures in the ABAP Dictionary, the system proposes the category can be enhanced in any way as the standard value for the classification of the enhancement options. If the developer chooses a more restrictive classification than can be enhanced in any way for a particular structure, then only the classification levels that adhere to the rules above are allowed. It is not possible to choose an enhancement option of a structure that is more restrictive than the classification resulting implicitly from the structure setup and from the classification of the types used. Therefore, only the allowed categories are proposed for selection in the maintenance user interface.

If a structure depends on one or several other structures, the smallest category is chosen as the implicit classification (in the order cannot be enhanced < can be enhanced and character-type < can be enhanced and character-type or numeric < can be enhanced in any way). This classification is greater than or the same as the category of the other structures and also greater than or the same as the category that results from the actual setup of the original structure itself.

For more information, refer to the online documentation (push button "i").

Matt Johnson - What's next for SCN in 2012

Greetings SCN members!  We've received many questions on what the next steps are for the SCN platform in the coming months, so I wanted to take some time to outline the high-level roadmap and give you a taste of what's to come this year.

First off, I acknowledge, as Mark Yolton does with Dennis Howlett in Dennis's Video Blog and Oliver Kohl explains in his blog, that we're still experiencing some challenges related to launching the new site –challenges with migration, indexing, a few remaining performance tweaks, etc.  I'm extremely confident that our teams will resolve these issues in the coming days and you will be able to focus on exploring and collaborating with one another here on SCN. 

While launching the new site was a massive undertaking, we've effectively only taken the first steps —that being laying the foundation, enhancing the social functionality, improving the tools to contribute, and harmonizing the systems.  It's this foundational step that will allow us to add further features and functionality that we know the community wants and that we're looking forward to providing in the future.

So… What about mobile?  What about search?  What about functionality?  Allow me to briefly lay out what we have on the roadmap with a quick word of caution: This is the high-level view and is subject to change of course if budgets, capacities, or acts of God change the priorities at SAP.

Coming in Q2, early Q3:

Mobile SCN:

We are all voracious mobile app users and fully appreciate the need to have SCN available for smartphones and tablets.  Due to security and resource bandwidth, we prioritized getting the migration and new full site launched before branching into mobility.  This is our first priority beyond resolving the current platform challenges and here's what is planned.
  • We'll roll out an initial mobile app for Smartphones in the next 2 months.  This will offer basic use cases to satisfy immediate needs (taking part in discussions, following and messaging, and commenting on content).
  • The same will be available for iPads as well, while we investigate mobile format for tablets to make better use of the real estate.
  • There are also some indications that a couple of SAP Mentors are working on hobby projects to make mobile much more interesting… Unfortunately I have to leave it at that for now; stay tuned.

Social Sign-On:

We are planning to add the ability to login to SCN through other well known social platforms like LinkedIn, Twitter, and Facebook.  This will come with other enhancements to the profile and registration process to make it easier for new members to get started and for existing members to keep their info current.

Next generation Idea Place:

We're working with an industry leading Ideation vendor to up the ante on how Idea Place encourages, facilitates, and rewards your innovative ideas.
  • We'll be releasing the new solution in parallel to the existing Idea Place for some very strategic challenges where you will have the ability to make a quick, direct impact to SAP's ecosystem.
  • The new solution will offer staging/leveling to ensure the best ideas rise to the top and you will be subsequently rewarded through robust game mechanics as well.
  • The rest of Idea Place will transition to the new solution toward the end of the year.

Additional functionality:

 

We already have a big stack of requirements we couldn't squeeze into SCN before our launch.  We're combining that list with what we've learned by listening to you in the Pilot and days since go live.  Highlights from the list of over 200 items that we'll begin implementing in May and continue to process until we've completed it:
  • Improving Search usability
  • Moderation features
  • Points fixes and tweaks
  • (drum roll...) Navigation changes
Bugs will continue to be a priority as well, and we are working to have all of what we've termed "critical" bugs resolved by the end of April.  I encourage you to bookmark Oliver's Release Notes to stay current.

Coming in Q3 and Q4:

Gamification --We'll be putting the reputation into turbo mode and offer full fledged gamification across the site

Video —We have a new, robust video application and management system in the works internally to SAP.  We are aiming to bring that to the community as well in Q4 to make sharing and managing videos possible on site, rather than relying on external providers.

So that's the high level picture.  I empathize with many of you who want this functionality now —believe me, I've tried snapping my fingers and clicking my heels.  We're working hard to stretch our budget as far as possible and to deliver the most bang for the buck.

If you have other ideas about what would make SCN better, please use our session in Idea Place!  It's going to be a great year in the evolution of SCN and I'm honored to be sharing the excitement with you. 

Tips to remember SAP Tables used for Development

An easy way to remember all the main SAP tables used for any development.

Remember Bank tables start with B, say "BKNF, BKPF".
Remember Customer tables start with K, say "KNA1, KONV".

Remember Material tables start with M, say "MARA , MAKT , MARC".

Remember Master data tables start with T, say "T001, T001W".

Remember Purchasing tables start with E, say "EKKO, EKPO"

Remember Sales table start with V, say "VBAK,VBAP".

Remember Vendor tables start with L, say "LFA1".


Six important FI tables: BSID, BSAD, BSIK, BSAK, BSIS and BSAS.


They contain an I if they hold open items.
They contain an A if they hold closed (cleared) items.

They contain an S if they hold G/L account items, say "BSIS, BSAS".


To remember the table names of Billing, Delivery, Sales and Purchasing:

 
Each table has a K if it holds header data, say "VBAK, LIKP, VBRK, EKKO".

Each table has a P if it holds item data, say "VBAP, LIPS, VBRP, EKPO".


With a D at the end, the table relates to a customer.

With a K at the end, the table relates to a vendor.


Finally, remember that the TSTC table keeps the list of all transaction codes.
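
As a quick illustration of how the mnemonics map onto real code, the sketch below reads open customer items from BSID ('I' for open items, trailing 'D' for customer). The company code and customer number are made-up selection values.

* Open customer items live in BSID ('I' = open item, trailing 'D' = customer);
* once cleared they move to BSAD ('A' = closed item).
DATA: BEGIN OF ls_item,
        bukrs TYPE bsid-bukrs,   " company code
        kunnr TYPE bsid-kunnr,   " customer number
        belnr TYPE bsid-belnr,   " accounting document number
        gjahr TYPE bsid-gjahr,   " fiscal year
        dmbtr TYPE bsid-dmbtr,   " amount in local currency
      END OF ls_item.
DATA lt_items LIKE STANDARD TABLE OF ls_item.

SELECT bukrs kunnr belnr gjahr dmbtr
  FROM bsid
  INTO TABLE lt_items
  WHERE bukrs = '1000'            " hypothetical company code
    AND kunnr = '0000100001'.     " hypothetical customer number

The vendor equivalents follow the same pattern with a trailing K (BSIK open, BSAK cleared), and the G/L items use BSIS/BSAS.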


For Reference : http://scn.sap.com/community/abap/blog/2012/03/23/fishing-with-sap-tables--a-poem-dedicated-to-all-the-sap-tables-we-use-daily

 

Infocube deletion failure ---- Still used in Data Transfer Process DTP_4

Scenario: I was trying to delete an InfoCube (IC) from the development system.

I got the below error message: "Infoprovider XXXX is being used externally".

Deletion of Objects with Type InfoCube
Using InfoCube ZXX_XXX:
Still used in Data Transfer Process DTP_4PHRRX18QMDE1N61OJDJRMVX9 (Used by)
Still used in Data Transfer Process DTP_4PHRRX18QMDE1N61OJDJRMVX9 (Used by)
InfoCube ZXX_XXX cannot be deleted because of references
Deletion of 000000 TLOGO objects

Solution:

1) Delete the data from the InfoCube.
2) Search for the DTP. You will not find the DTP in the system, because I had already deleted it manually.
3) Search for the DTP using the Find (globally) button.
4) Select the DTP and delete it.
5) The InfoCube was then deleted without any issue.

Steps to reset the revised version to Active version

Scenario:

While installing BI Content, an existing InfoObject unexpectedly got re-installed from BI Content and its status was displayed as 'revised'. Now I don't want to overwrite my changes with the BI Content version and I want to reset the status back to the active version.

Solution:

- RSA1 (Search for IO) or RSD1 --> click on Change Mode
- Menu --> Characteristics
- Select " Return to active version"

It will pop up a message to undo the changes --> Reset IO to active version --> click Yes.

Steps to restore an active version of a transformation

Scenario:

I had made some changes to a transformation and saved it without activating it. Now I don't want the changes and I want to restore the active version of the transformation.

Solution:

- Open the transformations in edit/change mode
- From menu bar --> "Edit"
- Select --> "Return to active Version"

Transport Error

Error 1:

 Start of the after-import method RS_ODSO_AFTER_IMPORT for object type(s)  
 ODSO (Activation Mode)
 InfoObject 0PART_PRCTR deleted from key part of DataStore object ZFIGL_04
 InfoObject 0PCOMPANY deleted from key part of DataStore object ZFIGL_04
 InfoObject ZZLOC deleted from key part of DataStore object ZFIGL_04
 InfoObject 0AC_DOC_TYP deleted from DataStore object ZFIGL_04
 Inconsistencies found while checking DataStore object ZFIGL_04

Error 2: 


   Start of the after-import method RS_CUBE_AFTER_IMPORT for object type(s)   CUBE (Activation Mode)
   Error/warning in dict. activator, detailed log    > Detail
   Structure change at field level (convert table /BIC/DZFIGL_C075)
   Table /BIC/DZFIGL_C075 could not be activated
   Return code..............: 8
   Following tables must be converted
   DDIC Object TABL /BIC/DZFIGL_C075 has not been activated
   Error when activating InfoCube ZFIGL_C07
   Error/warning in dict. activator, detailed log    > Detail
   Structure change at field level (convert table /BIC/DZFIGL_C072)
   Structure change at field level (convert table /BIC/DZFIGL_C073)
   Structure change at field level (convert table /BIC/FZFIGL_C07)
   Table /BIC/DZFIGL_C072 could not be activated
   Table /BIC/DZFIGL_C073 could not be activated
   Table /BIC/FZFIGL_C07 could not be activated
   Return code..............: 8
   Following tables must be converted
   DDIC Object TABL /BIC/DZFIGL_C072 has not been activated
   Error when resetting InfoCube ZFIGL_C07 to the active version

Cause:
Data exists in the DSO/InfoCube.

Solution:

1) Drop the data from the DSO, as its structure was modified (a few InfoObjects were removed).
2) The InfoCube structure was also modified, so drop its data as well.
3) Re-import the transport.