Description

Real-time data acquisition (RDA) supports operational reporting. The data is transferred into BI at regular intervals and is then updated to the DataStore objects, which are directly available for reporting and analysis. Background processes (daemons) in the BI system initiate the InfoPackages and the data transfer processes assigned to them (to update the PSA data to the DataStore objects).

To simplify the processing of and reporting on the data, the requests are kept open in the DataStore object and in the PSA. Based on settings in the InfoPackage, the daemons decide when a request is to be closed and when a new request is to be opened.
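How a daemon handles requests can be pictured as a simple loop. The following Python sketch is a conceptual model only: the threshold settings (maximum record count and maximum request age) and all names such as RdaRequest, daemon_cycle, fetch_delta, and update_dtp are assumptions made for illustration, not SAP function modules or BI objects.

import time
from dataclasses import dataclass, field

@dataclass
class RdaRequest:
    """An open RDA request accumulating records in the PSA."""
    opened_at: float = field(default_factory=time.time)
    records: int = 0

def daemon_cycle(fetch_delta, update_dtp, request,
                 max_records=50_000, max_age_seconds=3600):
    """One daemon interval: pull new data, update the targets, and
    decide whether the open request must be closed."""
    rows = fetch_delta()              # new records from the source
    request.records += len(rows)      # appended to the open PSA request
    update_dtp(rows)                  # propagate PSA data to the DataStore object

    too_large = request.records >= max_records
    too_old = time.time() - request.opened_at >= max_age_seconds
    if too_large or too_old:
        return RdaRequest()           # close the request, open a new one
    return request                    # keep the current request open

Each call models one daemon interval; the request stays open across calls until one of the InfoPackage-style thresholds is reached.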
The data can be transferred from the source to the entry layer in BI (the PSA) in two ways:
● Using a Web Service Push

A Web service push can write the data directly from the source to the PSA. The data transfer is not controlled by BI. An InfoPackage (for full upload) is required only to specify request-related settings for RDA.
● Using a Service API

If the data is transferred via a delta queue of an SAP source system, the data is also written to the PSA. This requires the creation of a special InfoPackage; this InfoPackage ensures that data is transferred from the source system and is used for specifying request-related settings. To generate an entry for the DataSource in the delta queue, an InfoPackage with delta initialization must first be created and executed.
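As a rough mental model, both paths end up appending records to the same entry layer; only the control of the transfer differs. The sketch below is a plain Python illustration under that assumption; psa, delta_queue, and the function names are invented for the example and are not SAP interfaces.

psa = []          # entry layer: one list entry per record
delta_queue = []  # delta queue of an SAP source system, one per DataSource

def web_service_push(records):
    """Path 1: the source pushes records straight into the PSA;
    BI does not control the transfer."""
    psa.extend(records)

def service_api_transfer():
    """Path 2: the RDA InfoPackage pulls the accumulated delta queue
    entries from the source system into the PSA."""
    while delta_queue:
        psa.append(delta_queue.pop(0))

# Example: two documents are pushed, two more arrive via the delta queue
# and are collected by the daemon-driven transfer.
web_service_push([{"doc": 1}, {"doc": 2}])
delta_queue.extend([{"doc": 3}, {"doc": 4}])
service_api_transfer()
print(len(psa))  # 4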
Prerequisite

The DataSource has to support real-time data acquisition; this support is a property of the DataSource. Technically, it is available if the BI Service API in the SAP source system has at least the following release: Plug-In-Basis 2005.1 or, for 4.6C source systems, Plug-In 2004.1 SP10.
DataSources used for RDA can no longer be used for standard extraction (scheduling using InfoPackages). This is because a DataSource can have only one extraction mechanism (RDA or scheduled data transfer). Multiple extraction mechanisms cannot be used simultaneously, since the delta queue can contain only one entry for each DataSource and target system at any given time.
More information: Real-Time Data Acquisition
General Practices with Migration
If you want to integrate the extraction of data with real-time data acquisition into an existing data flow, there are two alternatives:
● Using two different DataSources
Up to now, for example, you may have evaluated the information at item level using one DataSource. To do this, data is made available periodically using a scheduled extraction; InfoPackages specify the selection parameters and control the data request.

If you want an overview of new documents more quickly, you can connect a second, RDA-enabled DataSource in parallel, which uses the daemon to write the header information to a separate DataStore object at shorter intervals. The existing data flow does not change. For reporting and analysis, the data can be combined using a MultiProvider. To avoid redundant data in the RDA DataStore object, regularly delete the data from this DataStore object once the standard DataStore object has been loaded (see the sketch after this list).
The following figure outlines this option:
Setting up the data flow in this way allows you to continue using the existing data flow. You can use a separate, and possibly smaller, DataStore object for the data from the additional DataSource. However, you require a MultiProvider for reporting, and the need to regularly delete the data from the RDA DataStore object demands additional administration effort.
● Using a single DataSource
If just one DataSource is to be used, the existing data flow must be switched over completely to real-time data acquisition. This follows from the fact mentioned above that a DataSource can have only one extraction mechanism. In our example, the information about new documents would not be transferred using a separate DataSource for header information; instead, the DataSource that returns the data at item level would be switched over to real-time data acquisition.
The following figure outlines this option:
If the existing data flow is fully replaced by a new RDA data flow, no additional administration effort is required to reconcile the data of different DataSources. Switching to this data flow also allows you to load larger volumes of data by dividing the load process into several smaller load processes.
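To make the first option concrete, the following Python sketch models the two DataStore objects as plain lists, a MultiProvider-style union for reporting, and the periodic cleanup of the RDA DataStore object. All structures and names here are assumptions for illustration, not SAP objects.

standard_dso = []   # item-level data, loaded by scheduled extraction
rda_dso = []        # header-level data, loaded by the RDA daemon

def multiprovider_view():
    """Union of both DataStore objects, analogous to reporting on a
    MultiProvider that combines them."""
    return standard_dso + rda_dso

def clean_rda_dso():
    """Drop RDA records whose documents are already contained in the
    standard DataStore object, to avoid redundant data."""
    loaded_docs = {row["doc"] for row in standard_dso}
    rda_dso[:] = [row for row in rda_dso if row["doc"] not in loaded_docs]

# Example: document 100 arrives as header data via RDA first; the later
# scheduled load brings the full item-level record, and the cleanup
# removes the now-redundant RDA row.
rda_dso.append({"doc": 100, "header_only": True})
standard_dso.append({"doc": 100, "items": 3})
clean_rda_dso()
print(multiprovider_view())   # [{'doc': 100, 'items': 3}]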