Large datasets are usually moved when a planning area version is loaded from an InfoCube (transaction /SAPAPO/TSCUBE; report /SAPAPO/RTSINPUT_CUBE). To optimize performance, consider the following:
- Indexes and statistics should be available for the InfoCube. You can check this in the BW Administrator Workbench and create or repair them there if necessary.
- The dataset transferred should be kept as small as possible:
- Keep the period between the from date and the to date as short as possible
- Use suitable selection conditions to limit the number of characteristics combinations
- In the key figure assignment, use only the key figures that are really necessary. Do not assign key figures that do not need to be loaded or for which no data is available.
- The total duration of the processes can be shortened if you use parallel processing. Note the following:
- The system limits the maximum number of parallel processes. Influential factors include the number of available batch processes and the heap size (in particular, the size of the liveCache heap).
- Parallel processes must not work on the same objects, since this causes lock conflicts. The selection conditions must therefore split the characteristics combinations into disjoint sets.
- Each loading process transfers the same period for all selected key figures. If data exists for period 1 (for example, in the past) for some key figures and for period 2 (for example, in the future) for others, it may be better for performance to split the operation: loading process 1 for all key figures with data in period 1, and loading process 2 for all key figures with data in period 2.
- Example of parallel processing: 500,000 characteristics combinations and 20 key figures are to be loaded for a total period of 3 years (2 years past, 1 year future). Past data exists for only 4 key figures; the remaining 16 have only future data. The system can start 5 batch processes. Option 1: run 5 parallel loading processes over the entire period, each with about 100,000 characteristics combinations and all 20 key figures. Option 2: run 5 parallel loading processes for the past period, each with about 100,000 characteristics combinations and the 4 past key figures, followed by 5 parallel loading processes for the future period, each with about 100,000 characteristics combinations and the remaining 16 key figures. Which option is quicker depends on many factors.
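The trade-off in the example above can be sketched numerically. The following is illustrative Python, not SAP code: it assumes monthly buckets (24 past months, 12 future months) and simply counts how many key-figure/period cells each option touches per batch process. It ignores the other factors the note mentions (process start-up overhead, liveCache behavior, and so on), which is why cell counts alone do not decide which option is quicker.

```python
# Illustrative sketch (not SAP code): per-process workload of the two
# loading strategies from the example above, assuming monthly buckets.

COMBINATIONS = 500_000      # characteristics combinations to load
PROCESSES = 5               # available batch processes
PAST_MONTHS = 24            # 2 years of past periods (assumed monthly)
FUTURE_MONTHS = 12          # 1 year of future periods (assumed monthly)
PAST_KEY_FIGURES = 4        # key figures with past data only
FUTURE_KEY_FIGURES = 16     # key figures with future data only

# Disjoint selections per process avoid lock conflicts.
combos_per_process = COMBINATIONS // PROCESSES  # 100,000

# Option 1: one run over the full horizon with all 20 key figures;
# every key figure is read for every period, even where no data exists.
option1_cells = (combos_per_process
                 * (PAST_MONTHS + FUTURE_MONTHS)
                 * (PAST_KEY_FIGURES + FUTURE_KEY_FIGURES))

# Option 2: two runs per process, each restricted to the
# period/key-figure pairs that actually hold data.
option2_cells = (combos_per_process
                 * (PAST_MONTHS * PAST_KEY_FIGURES
                    + FUTURE_MONTHS * FUTURE_KEY_FIGURES))

print(f"option 1: {option1_cells:,} cells per process")  # 72,000,000
print(f"option 2: {option2_cells:,} cells per process")  # 28,800,000
```

Under these assumptions, option 2 touches far fewer cells, but it pays the fixed overhead of starting each loading process twice; as the note says, many factors determine which option wins in practice.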
- If the loading process is carried out at an aggregated level and the data in the liveCache can be disaggregated, the period for which the time series were created (see transaction /SAPAPO/MSDP_ADMIN) influences the duration of the loading process: the shorter the period, the quicker the load. This does not apply if the loading process is carried out at detail level.
- Loading data from an InfoCube into a planning area can take a very long time because null values are always read and written. Writing null InfoCube values is generally necessary so that any existing values in the planning area are overwritten. In some scenarios you do not need to write the null values, and skipping them improves performance; this is the case, for example, if no data exists yet in the planning area (for instance, during the first load after creating the time series). After implementing Note 705068 (Releases 3.0, 3.1, and 4.0; contained in 4.1 as of SP0), you can use a flag in report /SAPAPO/RTSINPUT_CUBE to control whether null values are written. Set this flag only if the null values can be ignored in the current scenario.
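The effect of that flag can be sketched as follows. This is illustrative Python, not the actual report logic; the function name load_from_cube and the dictionary-based data model are hypothetical stand-ins for what the report does internally.

```python
# Illustrative sketch (not actual SAP report logic): effect of the
# "do not write null values" flag when copying InfoCube values into
# a planning area. All names here are hypothetical.

def load_from_cube(cube_values, planning_area, skip_nulls=False):
    """Copy key-figure values into the planning area; return write count.

    skip_nulls=True is only safe if the planning area holds no data yet
    (e.g. the first load after creating the time series), because the
    zeros would otherwise be needed to overwrite existing values.
    """
    writes = 0
    for key, value in cube_values.items():
        if skip_nulls and value == 0:
            continue  # empty target: there is nothing to overwrite
        planning_area[key] = value
        writes += 1
    return writes

cube = {("P1", "KF1"): 0, ("P1", "KF2"): 5,
        ("P2", "KF1"): 0, ("P2", "KF2"): 7}

# Default behaviour: zeros are written so existing values are overwritten.
full = {}
print(load_from_cube(cube, full))                     # 4 writes

# With the flag set: only non-zero values are written (empty target only!).
sparse = {}
print(load_from_cube(cube, sparse, skip_nulls=True))  # 2 writes
```

The saving grows with the sparsity of the InfoCube data, which is why the note restricts the flag to scenarios where the skipped zeros cannot mask stale planning-area values.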
Implement this note.
|Release Status:||Released for Customer|
|Released on:||24.02.2004 11:10:16|
|Primary Component:||SCM-APO-FCS Demand Planning|
|Secondary Components:||SCM-APO-FCS-BF Basic Functions|
|705068 - Performance: Loading data from an InfoCube|
|568671 - Collective consulting note on versions|