Wednesday 7 March 2012

SAP BW Interview Questions

1) Should workbooks, as a general rule, be transported with the role?
 
A: Here are a few scenarios:

1. If both the workbook and its role have been previously transported, then the role does not need to be part of the transport.

2. If the role exists in both Dev and the target system but the workbook has never been transported, then you have a choice of transporting the role (recommended) or just the workbook. If only the workbook is transported, an additional step is needed after import: locate the workbook ID via table RSRWBINDEXT in Dev, verify the same ID exists in the target system, and manually add it to the role in the target system via transaction PFCG. Always use Ctrl+C/Ctrl+V to copy and paste the ID when adding it manually (see the lookup sketch below).

3. If the role does not exist in the target system, you should transport both the role and the workbook. Keep in mind that a workbook is an object unto itself and has no dependencies on other objects, so you do not receive an error message when transporting 'just a workbook'; even though it may not be visible in any role, it will exist in the target system (verified via table RSRWBINDEXT).

Overall, as a general rule, you should transport roles with workbooks.
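For scenario 2, a quick way to find the workbook ID is to look it up in table RSRWBINDEXT (for example via SE16, or with a small ABAP snippet like the sketch below). This is only a rough sketch: the field names WORKBOOKID and TITLE and the workbook title used here are assumptions, so check the table definition in your own system first.

* Sketch only: look up a workbook ID by its title in RSRWBINDEXT.
* Field names (WORKBOOKID, TITLE) and the title are assumptions;
* verify them via SE11/SE16 before using this.
DATA lv_wbid TYPE rsrwbindext-workbookid.

SELECT SINGLE workbookid
  FROM rsrwbindext
  INTO lv_wbid
  WHERE title = 'Open Orders Overview'.   " hypothetical workbook title

IF sy-subrc = 0.
  WRITE: / 'Workbook ID:', lv_wbid.       " paste this ID into PFCG
ENDIF.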

 
2) How much time does it take to load 1 million (10 lakh) records into an InfoCube?

A: It depends. If you have complex coding in the update rules it will take longer; otherwise it typically takes less than 30 minutes.

3) What are the five phases of the ASAP methodology?


A: Project Preparation, Business Blueprint, Realization, Final Preparation, and Go-Live & Support.


1. Project Preparation:
In this phase, decision makers define clear project objectives and an efficient decision-making process (i.e. discussions with the client about their needs and requirements). Project managers are heavily involved in this phase.

A Project Charter is issued and an implementation strategy is outlined in this phase.


2. Business Blueprint:
This is detailed documentation of the company's requirements (i.e. which objects need to be developed or modified, based on the client's requirements).

3. Realization:
This is where the actual implementation of the project takes place (development of objects, etc.); the BW team is involved in the project from this phase onward.

4. Final Preparation:
Final preparation before going live, i.e. testing, pre-go-live activities, end-user training, etc.

End-user training is conducted at the client site, where users are trained to work with the new environment, since the technology is new to them.


5. Go-Live & support
: The project has gone live and it is into production. The Project team will be supporting the end users.


4) What is the landscape of BW?
 
A: The BW landscape consists of a development system, a testing (quality) system and a production system.

Development system:
All the implementation work is done in this system (i.e. analysis, development and modification of objects), and from here the objects are transported to the testing system. Before transporting, an initial test known as unit testing (testing of individual objects) is done in the development system.

Testing/Quality system:
Quality checks and integration testing are done in this system.

Production system:
All the productive extraction and loading take place in this system.

Q) How do you measure the size of an InfoCube?


A: By the number of records.

Q) What is the difference between an InfoCube and an ODS?


A: An InfoCube is structured as an (extended) star schema, where a fact table is surrounded by dimension tables that are linked via DIM IDs. Data-wise, cubes hold aggregated data, and there is no overwrite functionality.

An ODS is a flat structure (flat table) with no star schema concept; it holds granular (detailed-level) data and offers overwrite functionality.

Flat-file DataSources do not support 0RECORDMODE in extraction.


The 0RECORDMODE values are: 'X' = before image, ' ' (blank) = after image, 'N' = new image, 'A' = additive image, 'D' = delete, 'R' = reverse image.


Q) What is the difference between display attributes and navigational attributes?


A: A display attribute is used only for display purposes in a report, whereas a navigational attribute can be used for drilling down in a report. We do not need to maintain the navigational attribute in the cube as a characteristic in order to drill down on it (that is the advantage).

Q) Can you add a new field at the ODS level?

A: Sure you can; an ODS is nothing but a (flat) table.

Q) Can a number of DataSources feed one InfoSource?

 
A: Yes, of course. For example, for loading texts and hierarchies we use different DataSources but the same InfoSource.

Q) Briefly describe the data flow in BW.

 
A: Data flows from the transactional system to the analytical system (BW). DataSources on the transactional system need to be replicated on the BW side and attached to an InfoSource and update rules respectively.

Q) What are the procedures to update data into data targets?

A: Full update and delta update.

Q) We use SBWNN, SBIW1 and SBIW2 for delta updates in LIS; what is the procedure in the LO Cockpit?

 
A: There is no LIS in the LO Cockpit. There we have DataSources, which can be maintained (fields appended). Refer to the white paper on LO Cockpit extraction.

Q) Why is the delta queue not updated when you start the V3 update in the logistics cockpit area?

A:
It is most likely that a delta initialization had not yet run or that the delta initialization was not successful. A successful delta initialization (the corresponding request must have QM status 'green' in the BW System) is a prerequisite for the application data being written in the delta queue.

Q) What is the relationship between RSA7 and the qRFC monitor (Transaction SMQ1)?

A: The qRFC monitor basically displays the same data as RSA7. The internal queue name must be used for selection on the initial screen of the qRFC monitor. It is made up of the prefix 'BW', the client and the short name of the DataSource. For DataSources whose names are 19 characters long or shorter, the short name corresponds to the name of the DataSource. For DataSources whose names are longer than 19 characters (for delta-capable DataSources this is only possible as of Plug-In 2001.1), the short name is assigned in table ROOSSHORTN.
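The following is only a small sketch of how that queue name is put together, based on the description above; the DataSource name used is purely illustrative.

* Sketch only: assemble the internal delta queue name as described above
* (prefix 'BW' + client + DataSource short name). 2LIS_11_VAHDR is just
* an illustrative DataSource; substitute your own.
DATA lv_qname(40) TYPE c.

CONCATENATE 'BW' sy-mandt '2LIS_11_VAHDR' INTO lv_qname.
* In client 100 this gives 'BW1002LIS_11_VAHDR', which can be entered on
* the SMQ1 selection screen (or select with the generic pattern BW*).
WRITE lv_qname.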

In the qRFC monitor you cannot distinguish between repeatable and new LUWs. Moreover, the data of a LUW is displayed in an unstructured manner there.


Q) Why is there data in the delta queue although the V3 update was not started?


A: The data was posted in the background. In that case, the records are written directly to the delta queue (RSA7). This happens in particular during automatic goods receipt posting (MRRS). There is no duplicate transfer of records to the BW system. See Note 417189.

Q) Why does button 'Repeatable' on the RSA7 data details screen not only show data loaded into BW during the last delta but also data that were newly added, i.e. 'pure' delta records?


A: It was programmed so that a request in repeat mode fetches both the actually repeatable (old) data and the new data from the source system.

Q) I loaded several delta inits with various selections. For which one is the delta loaded?

A: For the delta, all selections made via the delta inits are summed up. This means that a delta for the 'total' of all delta initializations is loaded.

Q) How many selections for delta inits are possible in the system?


A: With simple selections (intervals without complicated join conditions, or single values), you can make up to about 100 delta inits; it should not be more.

With complicated selection conditions, it should be only up to 10-20 delta inits.


Reason: with many selection conditions that are joined in a complicated way, too many WHERE clauses are produced in the generated ABAP source code, and this may exceed the memory limit.


Q) I intend to copy the source system, i.e. make a client copy. What will happen with my delta? Should I initialize again after that?


A: Before you copy a source client or source system, make sure that your deltas have been fetched from the DeltaQueue into BW and that no delta is pending. After the client copy, an inconsistency might occur between the BW delta tables and the OLTP delta tables as described in Note 405943. After the client copy, table ROOSPRMSC will probably be empty in the OLTP since this table is client-independent. After the system copy, the table will contain entries with the old logical system name, which are no longer useful for further delta loading from the new logical system. The delta must be initialized in any case, since the delta depends on both the BW system and the source system. Even if no 'MESSAGE_TYPE_X' dump occurs in BW when editing or creating an InfoPackage, you should expect that the delta has to be initialized after the copy.

Q) Is it allowed in Transaction SMQ1 to use the functions for manual control of processes?


A: Use SMQ1 as an instrument for diagnosis and control only. Make changes to BW queues only after informing BW Support, or only if this is explicitly requested in a note for component 'BC-BW' or 'BW-WHM-SAPI'.

Q) Although the delta request is started after completion of the collective run (V3 update), it does not contain all documents; only another delta request loads the missing documents into BW. What is the cause of this "splitting"?


A:
The collective run submits the open V2 documents for processing to the task handler, which processes them in one or several parallel update processes in an asynchronous way. For this reason, plan a sufficiently large "safety time window" between the end of the collective run in the source system and the start of the delta request in BW. An alternative solution where this problem does not occur is described in Note 505700.

Q) Although I have deleted the delta init, LUWs are still being written into the DeltaQueue. Why?


A: In general, delta initializations and deletions of delta inits should always be carried out at a time when no posting takes place. Otherwise, buffer problems may occur: if a user started the internal mode at a time when the delta initialization was still active, he or she may still post data into the queue even though the initialization has been deleted in the meantime. This is the case in your system.

Q) In SMQ1 (qRFC Monitor) I have status 'NOSEND'. In the table TRFCQOUT, some entries have the status 'READY', others 'RECORDED'. ARFCSSTATE is 'READ'. What do these statuses mean? Which values in the field 'Status' mean what and which values are correct and which are alarming? Are the statuses BW-specific or generally valid in qRFC?


A: Tables TRFCQOUT and ARFCSSTATE: the status READ means that the record was read once, either in a delta request or in a repetition of the delta request. However, this does not mean that the record has successfully reached BW yet.

The status READY in TRFCQOUT and RECORDED in ARFCSSTATE mean that the record has been written into the DeltaQueue and will be loaded into BW with the next delta request or a repetition of a delta.

In any case, only the statuses READ, READY and RECORDED in both tables are considered valid. The status EXECUTED in TRFCQOUT can occur temporarily: it is set, before a delta extraction starts, for all records with status READ present at that time. Records with status EXECUTED are usually deleted from the queue in packages within a delta request directly after the status is set, before a new delta is extracted. If you see such records, it means either that a process which confirms and deletes records already loaded into BW is currently running successfully, or, if the records remain in the table with status EXECUTED for a longer period of time, that there are probably problems deleting the records which have already been successfully loaded into BW. In this state, no more deltas are loaded into BW. Every other status is an indicator of an error or an inconsistency. NOSEND in SMQ1 means nothing (see Note 378903).

The value 'U' in the field 'NOSEND' of table TRFCQOUT, however, is a cause for concern.


Q) The extract structure was changed when the DeltaQueue was empty. Afterwards new delta records were written to the DeltaQueue. When loading the delta into the PSA, it shows that some fields were moved. The same result occurs when the contents of the DeltaQueue are listed via the detail display. Why are the data displayed differently? What can be done?

A: Make sure that the change to the extract structure is also reflected in the database and that all servers are synchronized. We recommend resetting the buffers using transaction $SYNC. If the extract structure change is not communicated synchronously to the server on which the delta records are created, the records are written with the old structure until the new structure has been generated. This can have disastrous consequences for the delta.

When the problem occurs, the delta needs to be re-initialized.


Q) How and where can I control whether a repeat delta is requested?

A: Via the status of the last delta in the BW Request Monitor. If the request is RED, the next load will be of type 'Repeat'. If you need to repeat the last load for certain reasons, set the request in the monitor to red manually. For the contents of the repeat, see the question on the 'Repeatable' button above. Delta requests that are set to red even though the data has already been updated lead to duplicate records in a subsequent repeat if the data has not been deleted from the data targets concerned beforehand.

See the recommendation in Note 505700.


Q) Are there particular recommendations regarding the data volume the DeltaQueue may grow to without facing the danger of a read failure due to memory problems?


A:
There is no strict limit (except for the restricted number range of the 24-digit QCOUNT counter in the LUW management table - which is of no practical importance, however - or the restrictions regarding the volume and number of records in a database table).

When estimating "smooth" limits, both the number of LUWs is important and the average data volume per LUW. As a rule, we recommend to bundle data (usually documents) already when writing to the DeltaQueue to keep number of LUWs small (partly this can be set in the applications, e.g. in the Logistics Cockpit). The data volume of a single LUW should not be considerably larger than 10% of the memory available to the work process for data extraction (in a 32-bit architecture with a memory volume of about 1GByte per work process, 100 Mbytes per LUW should not be exceeded). That limit is of rather small practical importance as well since a comparable limit already applies when writing to the DeltaQueue. If the limit is observed, correct reading is guaranteed in most cases.


If the number of LUWs cannot be reduced by bundling application transactions, you should at least make sure that the data are fetched from all connected BWs as quickly as possible. But for other, BW-specific, reasons, the frequency should not be higher than one DeltaRequest per hour.


To avoid memory problems, a program-internal limit ensures that no more than 1 million LUWs are read and fetched from the database per DeltaRequest. If this limit is reached within a request, the DeltaQueue must be emptied by several successive DeltaRequests. We recommend, however, not letting the queue grow to that limit, but triggering the fetching of data by the connected BW systems as soon as the number of LUWs reaches a 5-digit value.


Q) I would like to display on the report the date the data was uploaded. Usually, we load the transactional data nightly. Is there any easy way to include this information on the report for users, so that they know how current the data is?


A: If I understand your requirement correctly, you want to display the date on which data was loaded into the data target from which the report is being executed. If so, configure your workbook to display the text elements in the report. This displays the 'relevance of data' field, which is the date on which the data load took place.

Q) Can we load data directly into an InfoObject without extraction? Is that possible?

A: Yes. We can copy from another InfoObject if it is the same, and we can load the data from the PSA if it is already in the PSA.

Q) How many days can we keep the data in the PSA if loads are scheduled daily, weekly and monthly?

A: We can set the retention time ourselves.

Q) How can you get the data from the client if you are working on offshore projects? Through which network?


A: Through a VPN (Virtual Private Network). A VPN is a network through which we can connect to the client's systems from offshore, via RAS (Remote Access Server).

Q) How do you analyse the project at first?


A:
    Prepare the project plan and environment
    Define project management standards and procedures
    Define implementation standards and procedures
    Testing and go-live + support


Q) There is one ODS and 4 InfoCubes. We send data to all the cubes at the same time; if one cube gets a lock error, how can you rectify the error?

A: Go to transaction SM66 and see which process holds the lock; note its PID, then go to transaction SM12 and release the lock. Such lock errors can occur when the loads are scheduled in parallel.


Q) Can anybody tell me how to add a navigational attribute to the rows of a BEx report?

A: In the Query Designer, expand the dimension in the left-hand panel (the InfoCube panel), select the navigational attribute, and drag and drop it into the Rows panel.


Q) What is a transactional cube?


A: Transactional InfoCubes differ from standard InfoCubes in that the former have an improved write-access performance level. Standard InfoCubes are technically optimized for read-only access and for a comparatively small number of simultaneous accesses. The transactional InfoCube, by contrast, was developed to meet the demands of SAP Strategic Enterprise Management (SEM): data is written to the InfoCube (possibly by several users at the same time) and re-read as soon as possible. Standard BasicCubes are not suitable for this.


Q) Is there any way to delete cube contents within update rules from an ODS DataSource? The reason for this would be to delete (or zero out) a cube record in an "Open Order" cube if the open order quantity is 0. I have tried using 0RECORDMODE but that does not work. Also, would it be easier to write a program that runs after the load and deletes the records with a zero open quantity?

A) In the start routine of the update rules you can write ABAP code to do this.


A) Yes, you can do it. Create a start routine in the update rules (see the sketch below).


It is not "Deleting cube contents with update rules" It is only possible to avoid that some content is updated into the InfoCube using the start routine. Loop at all the records and delete the record that has the condition. "If the open order quantity was 0" You have to think also in before and after images in case of a delta upload. In that case you may delete the change record and keep the old and after the change the wrong information.



Q) I am not able to access a node in a hierarchy directly using variables for reports. When I use transaction RSZV, it gives a message that it does not exist in BW 3.0 and is embedded in BEx. Can anyone tell me the other options to get the same functionality in BEx?
A: Transaction RSZV is used only in releases earlier than 3.0B. From 3.0B onwards, this is possible in the Query Designer (BEx) itself: just right-click on the InfoObject you want to use as a variable and proceed by selecting the variable type and processing type.


Q) In BW we need to write ABAP routines. I wish to know when and what types of ABAP routines we need to write. Also, are these routines written in update rules? I would be glad if this could be clarified with real-time scenarios and a few examples.

A: We write our routines in the start routines of the update rules or in the transfer structure (you can choose between writing them in the start routine or directly behind the individual characteristics). In the transfer structure you just click on the yellow triangle behind a characteristic and choose "routine". In the update rules you can choose "start routine" or click on the triangle with the green square behind an individual characteristic. Usually we only use a start routine when it does not concern one single characteristic (for example, when you have to read the same table for 4 characteristics). I hope this helps.


We use ABAP routines, for example:


To convert values to uppercase (transfer structure); see the sketch after this list.


To convert values from a third-party tool with different keys into the same keys as our SAP system uses (transfer structure).


To select only a part of the data from an InfoSource when updating the InfoCube (start routine), etc.
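For the first example (uppercase conversion), a minimal sketch of a transfer-routine body is shown below. The FORM signature with TRAN_STRUCTURE, RESULT and RETURNCODE is generated by BW, and the source field MATL_DESC used here is purely hypothetical.

* Transfer-routine body sketch (BW 3.x transfer rules). MATL_DESC is a
* hypothetical field of the transfer structure; RESULT and RETURNCODE
* are part of the generated routine interface.
result = tran_structure-matl_desc.
TRANSLATE result TO UPPER CASE.     " convert the value to uppercase
returncode = 0.                     " 0 = the record is passed on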


