Summary
Symptom
This note describes how to use and manage fixed aggregates in Demand Planning and answers important questions in relation to this area.
Other terms
Aggregate
Performance
Fixing
/sapapo/msdp_admin
- 1. What are fixed aggregates?
Demand Planning requires master data. In Demand Planning, this master data takes the form of characteristic values combinations, which are stored in the master planning object structure. Technically, a master planning object structure is a special InfoCube that includes all of the characteristics required for the planning process. One or more fixed aggregates may be created for a master planning object structure (see transaction /SAPAPO/MSDP_ADMIN); during aggregate creation, you select a subset of the characteristics of the corresponding master planning object structure (each fixed aggregate is assigned to exactly one master planning object structure). An aggregate stores the dataset of a master planning object structure in aggregated form, both redundantly and permanently, on the database; aggregation is performed over the characteristics that are not contained in the aggregate.
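The projection idea above can be sketched in a few lines. This is an illustrative sketch only, not an SAP API: the function and data names are invented, and the real aggregate is a generated database table, not an in-memory set.

```python
# Illustrative sketch: an aggregate's characteristic values combinations are
# the master planning object structure's combinations projected onto a subset
# of characteristics and deduplicated. Not SAP code; all names are invented.

def build_aggregate(combinations, all_chars, aggregate_chars):
    """Project each combination onto aggregate_chars and deduplicate."""
    idx = [all_chars.index(c) for c in aggregate_chars]
    return sorted({tuple(combo[i] for i in idx) for combo in combinations})

# Master planning object structure: product, location, sales organization
chars = ["product", "location", "sorg"]
master = [("P1", "L1", "V1"), ("P1", "L1", "V2"), ("P1", "L2", "V1")]

print(build_aggregate(master, chars, ["product", "location"]))
# [('P1', 'L1'), ('P1', 'L2')]
print(build_aggregate(master, chars, ["product"]))
# [('P1',)]
```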
- 2. The fixed aggregate hierarchy
You can also create aggregates for an aggregate and thus create a hierarchy.
- Example 1: The master planning object structure contains the product, location and sales organization characteristics.
Aggregate 1 contains the product and location characteristics.
Aggregate 2 contains the product characteristic.
In this example, Aggregate 1 is an aggregate for the master planning object structure and Aggregate 2 is an aggregate for Aggregate 1. The aggregate hierarchy therefore appears as follows:
Aggregate 2
|
Aggregate 1
|
Master planning object structure
- Example 2: The master planning object structure contains the product, location and sales organization characteristics.
Aggregate 1 contains the product and location characteristics.
Aggregate 2 contains the product and sales organization characteristics.
In this example, both Aggregate 1 and Aggregate 2 are aggregates of the master planning object structure:
Aggregate 2 Aggregate 1
| |
-----------------
|
Master planning object structure
You can use the /SAPAPO/TS_PSTRU_TOOL report (the 'Technical information' option) to display the aggregate hierarchy.
- 3. Terms used in connection with aggregates
- Activate/active
A corresponding database table (or InfoCube) is only created if an aggregate is activated (see transaction /SAPAPO/MSDP_ADMIN). An active aggregate has the status 'green'. You can only use active aggregates.
If an aggregate is activated, the required database tables are created first and the aggregate is filled afterwards ('roll up').
If an aggregate appears as active (green) in transaction /SAPAPO/MSDP_ADMIN, it also appears in transaction RSA1 (transaction RSA1 => the data target corresponds to the name of the master planning object structure => 'Maintain aggregates').
- Deactivate/inactive
If an active aggregate is deactivated, the underlying InfoCube is deleted and the characteristic values combinations of this aggregate are also deleted as a result.
An inactive aggregate has the status 'red'. A corresponding InfoCube does not exist.
- 4. Relationships between the master planning object structure, aggregate and liveCache time series
Time series objects are created for a specific planning area and a specific planning version. The planning area is assigned to exactly one master planning object structure, which, in turn, may use one aggregate, several aggregates or no aggregates at all. A corresponding time series network is created in the liveCache, based on the characteristic values combinations of both the master planning object structure and the aggregates:
- A corresponding time series is created in the liveCache for each characteristic values combination of the master planning object structure.
- A corresponding time series is created in the liveCache for each characteristic values combination of an aggregate.
- Relations are created between the aggregate time series and the time series of the master planning object structure; these relations define which aggregate time series summarize which time series of the master planning object structure. If several aggregates are used and thus form a hierarchy (an aggregate for an aggregate, see point 2), the time series are also created accordingly in the liveCache.
Example 1: The master planning object structure contains the product, location and sales organization characteristics with the following combinations:
Product Location Sorg
P1 L1 V1
P1 L1 V2
P1 L2 V1
Aggregate 1 contains the product and location characteristics and therefore the following combinations:
Product Location
P1 L1
P1 L2
Aggregate 2 contains the product characteristic and therefore the following combinations:
Product
P1
In this case, the time series network in the liveCache appears as outlined in figure 1 of the attachment to this note.
The time series at the Aggregate 2 level summarizes the two time series at the Aggregate 1 level. Aggregate 1 summarizes the time series of the master planning object structure.
Example 2: The master planning object structure contains the product, location and sales organization characteristics with the following combinations:
Product Location Sorg
P1 L1 V1
P1 L1 V2
P1 L2 V1
Aggregate 1 contains the product and location characteristics and therefore the following combinations:
Product Location
P1 L1
P1 L2
Aggregate 2 contains the product and sales organization characteristics and therefore the following combinations:
Product Sorg
P1 V1
P1 V2
In this case, the time series network in the liveCache appears as outlined in figure 2 of the attachment to this note.
In this example, both aggregates are attached to the master planning object structure and exist more or less parallel to each other (and do not have any hierarchical relationship).
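The relations in the time series network can be illustrated with a small sketch: each aggregate combination summarizes every detail combination that agrees with it on the aggregate's characteristics. This is a conceptual illustration with invented names, not the liveCache implementation.

```python
# Illustrative sketch: derive which detail time series each aggregate time
# series summarizes, as in figure 1 of the note. Not the liveCache internals.

def relations(master, chars, agg_chars):
    """Map each aggregate combination to the detail combinations below it."""
    idx = [chars.index(c) for c in agg_chars]
    rel = {}
    for combo in master:
        key = tuple(combo[i] for i in idx)
        rel.setdefault(key, []).append(combo)
    return rel

chars = ["product", "location", "sorg"]
master = [("P1", "L1", "V1"), ("P1", "L1", "V2"), ("P1", "L2", "V1")]

# Aggregate 1 (product, location): each aggregate series sums its details
for agg_combo, details in relations(master, chars, ["product", "location"]).items():
    print(agg_combo, "summarizes", details)
# ('P1', 'L1') summarizes [('P1', 'L1', 'V1'), ('P1', 'L1', 'V2')]
# ('P1', 'L2') summarizes [('P1', 'L2', 'V1')]
```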
- 5. When does it make sense to use fixed aggregates?
- a) Improved performance in reading liveCache time series data
Performance may be improved to a certain extent if an existing fixed aggregate is used when reading liveCache time series data. Technically, this means that only m values (where m < n) must be read and aggregated instead of reading and aggregating the n values at the detail level.
Example 1: You want to read values for a specific product (product A). A total of 500 characteristic values combinations exist for this product.
Case 1: There are no aggregates.
The data of 500 details (that is, 500 time series) is read, aggregated and output when reading the data of product A.
Case 2: There is one aggregate which contains the exact product characteristic.
The data of product A already exists in aggregated form. This means that an exact aggregated value (that is, a time series) is read when reading the data of product A.
If you then compare the pure read access (including aggregation) of cases 1 and 2, the runtime in case 1 is approximately 500 times longer than in case 2.
Example 2: You want to read values for products A, B and C. A total of 500 characteristic values combinations exist for product A, 750 for product B and 1,000 for product C.
Case 1: There are no aggregates.
The data of 2,250 (500+750+1,000) details is read, aggregated and output when reading the data of products A, B and C.
Case 2: There is one aggregate which contains the exact product characteristic.
The data of three (3) already aggregated time series is read, aggregated and output when reading the data of products A, B and C.
The n:m ratio (that is, the number n of detail time series divided by the number m of time series at the aggregated level) is an important factor in the performance gain for read accesses. In example 1 above, the ratio is 500:1.
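The arithmetic in the two examples can be written out as a back-of-the-envelope sketch. This only counts time series touched; it is not a liveCache benchmark, and the helper names are invented.

```python
# Illustrative comparison of read effort (number of time series touched),
# using the figures from examples 1 and 2 above. Not a benchmark.

detail_combos = {"A": 500, "B": 750, "C": 1000}

def series_read(products, product_aggregate=False):
    """Series read for a product-level selection, with/without an aggregate."""
    if product_aggregate:
        return len(products)  # one pre-aggregated series per product
    return sum(detail_combos[p] for p in products)  # aggregate at runtime

print(series_read(["A"]))                                    # 500
print(series_read(["A"], product_aggregate=True))            # 1
print(series_read(["A", "B", "C"]))                          # 2250
print(series_read(["A", "B", "C"], product_aggregate=True))  # 3
```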
- b) Reducing the liveCache heap consumption in reading liveCache time series data
Problems that occur while data is read because the liveCache heap consumption is too high may be avoided by using fixed aggregates. The heap consumption when reading time series data is influenced by a number of factors, including the number of selected time series. If you reduce the number of time series read by using fixed aggregates, the heap consumption of the read access is reduced as a result.
- c) Only APO 3.0 and 3.1: Storing fixing information at aggregated level
In APO 3.0 and 3.1, fixing information can only be stored at the most detailed level or at the level of fixed aggregates. If you have to store fixing information at aggregated level, you must create a fixed aggregate at the corresponding aggregation level for this.
- d) You are using a key figure that is neither aggregated nor disaggregated
If calculation type 'N' (see transaction /sapapo/msdp_admin), that is, 'No calculation' was set for a key figure but you want to store and read data for this key figure, then you may only do this at the most detailed level or at the fixed aggregate level. In such cases, you must create a fixed aggregate at the corresponding aggregation level.
You should only create an aggregate if the reasons listed under point 5 support this.
If possible, you should use no more than two aggregates so that the time series network does not become too complex.
- 6. When should I not use any fixed aggregates?
You should not use any aggregates if the reasons listed under point 5 do not support this.
- 7. Prerequisites for creating, activating, deactivating and deleting aggregates (it is imperative that you read this section of the note)
Authorizations
APO:C_PLOB authorization object: Change planning object structure
BW: S_RS_ICUBE authorization object: Administrator Workbench - InfoCube
BC: S_CTS_ADMI authorization object: Administration functions in the Change & Transport system
BC: S_BDS_DS authorization object: Authorizations for document set
System change option (see transaction SE06 -> 'Set system change option')
You must set the following namespaces/name ranges to 'Modifiable':
General SAP name range: /0SAP/
APO generation namespace: /1APO/
Business Information Warehouse: SAP namespace: /BI0/
Business Information Warehouse: Customer namespace: /BIC/
You should also set the software component "Local development (no automatic transport)" (LOCAL) to "Modifiable".
- 8. Adding/Activating and deleting/deactivating aggregates
If possible, the decision to use aggregates and if so, which ones and how many, should be made during the Customizing stage, that is, before planning begins. Of course, it is possible to subsequently add/delete aggregates if planning data already exists, but you should avoid this if possible, since some significant system activities are required (these are explained in more detail below).
Case 1: The master planning object structure does not yet contain any characteristic values combinations:
Only DDIC objects (that is, the InfoCube) are created when you create/activate or delete/deactivate an aggregate.
Note the following: there are no special considerations in this case.
Case 2: The master planning object structure already contains characteristic values combinations but no time series exist yet:
The DDIC objects (that is, the InfoCube) are created during creation/activation. The aggregate is filled (see point 3).
Note the following:
If several aggregates are used and these aggregates represent a hierarchy (see point 2), you should create/activate the aggregates sequentially, that is, one after the other, not simultaneously (caution: let any jobs started by the system finish before creating/activating the next aggregate). Otherwise, the aggregates may be filled incorrectly or not at all.
Technical background, using example 1 under point 2: the two aggregates are activated at almost the same time => the DDIC objects are created for both aggregates first => the aggregates are filled afterwards: Aggregate 1 is based on the data of the master planning object structure, Aggregate 2 is based on the data of Aggregate 1. Since at this time Aggregate 1 exists only physically, that is, it is not yet filled, Aggregate 2 is filled incorrectly or not at all.
Case 3: The master planning object structure already contains characteristic values combinations, and time series already exist:
The DDIC objects (that is, the InfoCube) are created during creation/activation. The aggregate is filled (see point 3). Afterwards, the additional time series are created (together with relations) in the liveCache for all planning areas assigned to the master planning object structure (for each planning area, for all planning versions for which time series already exist).
The DDIC objects and therefore the data of the aggregate are deleted during deletion/deactivation. The corresponding time series are deleted in the liveCache afterwards.
Note the following:
While aggregates are being activated, deactivated or deleted, you cannot access the data of the affected planning areas at the same time.
Since this operation may result in a long runtime, we recommend that you use the /SAPAPO/TS_PSTRU_TOOL report, and not transaction /SAPAPO/MSDP_ADMIN, to execute the activation, deactivation or deletion in the background rather than interactively.
When you activate an additional aggregate, you should note that this cannot be an 'intermediate aggregate'. Aggregate 1 (under point 2, example 1) corresponds to an intermediate aggregate. This aggregate cannot be subsequently created if Aggregate 2 is already active. In such cases, you should first deactivate Aggregate 2. Afterwards, you should first activate Aggregate 1 and then Aggregate 2.
Which characteristics should be used for an aggregate? How can I identify useful aggregates?
In addition to the functional reasons specified under point 5 for aggregates of a certain level (fixing; calculation type 'N'), you can identify useful aggregates for improved performance as follows:
1) Determine read accesses or selections and groupings (both interactively and in the background) that are critical for performance (see point 10) => The characteristics used with these accesses are potential candidates for a fixed aggregate
2) Select these characteristics as a grouping condition in transaction /SAPAPO/MC62 and 'Display characteristic values combinations'
=> You will then receive the number of aggregated combinations
3) Check whether the number of aggregated combinations remains practically unchanged when further characteristics are added to the grouping condition. If so, add the additional characteristic to the aggregate as well: the resulting loss in performance is negligible, and the aggregate can then also be used for read accesses other than those determined under 1).
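The counting heuristic in steps 2) and 3) can be sketched as follows. This is an invented illustration of the idea behind the /SAPAPO/MC62 grouping check, not SAP code; the sample data assumes a brand attribute that depends on the product.

```python
# Illustrative sketch of the heuristic: count distinct combinations for a
# candidate grouping, then check whether adding a further characteristic
# barely changes that count. Sample data and names are invented.

def combo_count(rows, chars, grouping):
    """Number of aggregated combinations for the given grouping condition."""
    idx = [chars.index(c) for c in grouping]
    return len({tuple(r[i] for i in idx) for r in rows})

chars = ["product", "location", "brand"]
# brand is functionally dependent on product in this sample
rows = [("P1", "L1", "B1"), ("P1", "L2", "B1"), ("P2", "L1", "B2")]

print(combo_count(rows, chars, ["product"]))             # 2
print(combo_count(rows, chars, ["product", "brand"]))    # 2 -> add brand too
print(combo_count(rows, chars, ["product", "location"])) # 3 -> real increase
```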
- 9. When is an existing fixed aggregate used?
For technical reasons, a fixed aggregate may only be used with a read access if it includes all characteristics used with the read access. This means that the aggregate must contain all of the characteristics used in the corresponding selection and grouping:
In interactive planning, these are all of the characteristics for the selection as well as the characteristics used for the drilldown (see transaction /SAPAPO/SDP94).
In mass processing, these are only the characteristics that were selected as an aggregation level (see transaction /SAPAPO/MC8E).
When you copy a planning version (see transaction /SAPAPO/TSCOPY) or load data into a planning area (see transaction /SAPAPO/TSCUBE), these are the characteristics for which one or several selection conditions were specified, as well as the characteristics that were selected in the grouping condition.
If navigation attributes are used during selection or grouping (in addition to the normal characteristics), an aggregate may only then be used if it includes the characteristic that corresponds to the navigation attribute in addition to the normal characteristics. The navigation attributes themselves are irrelevant.
If several aggregates contain all of the characteristics used in the selection, the aggregate with the smallest number of records is used.
If no aggregate fulfills these prerequisites, then the master planning object structure is used and the data is aggregated from the most detailed level at runtime.
An aggregate is also used to read the data if not all of the characteristics that exist in the aggregate are used in the selection, that is, even if the access occurs at a more aggregated level. If not all key figures exist in an aggregate, the data of certain key figures may not be displayed, even if you access the data above the aggregate level rather than at the aggregate level itself.
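The selection rule described in this section can be sketched as follows. This is a conceptual illustration with invented names and record counts, not SAP code: an aggregate is usable only if it contains every characteristic of the selection/grouping, the usable aggregate with the fewest records wins, and otherwise the master planning object structure is used.

```python
# Illustrative sketch of the aggregate selection rule from point 9.
# aggregates: list of (name, characteristic_set, record_count). Not SAP code.

def choose_aggregate(aggregates, selection_chars):
    """Pick the smallest usable aggregate, or fall back to the detail level."""
    usable = [a for a in aggregates if set(selection_chars) <= a[1]]
    if not usable:
        return "master planning object structure"
    return min(usable, key=lambda a: a[2])[0]

aggs = [
    ("AGG_PROD",     {"product"},             120),
    ("AGG_PROD_LOC", {"product", "location"}, 900),
]

print(choose_aggregate(aggs, ["product"]))              # AGG_PROD (fewest records)
print(choose_aggregate(aggs, ["product", "location"]))  # AGG_PROD_LOC
print(choose_aggregate(aggs, ["product", "sorg"]))      # master planning object structure
```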
- 10. Advantages of fixed aggregates
- a) Improved performance with read accesses
- b) Reducing the heap consumption with read accesses
- 11. Disadvantages of fixed aggregates
- a) Increased complexity of the time series network and increased susceptibility to errors when changes are made such as adding and deleting characteristic values combinations
- b) Creating time series objects adversely affects performance
- c) Saving data adversely affects performance
Example:
Aggregate 1 level: P1
|
------------------
| | |
Basis level: M1/P1 M2/P1 M3/P1
If a value is saved for M1/P1 in this example, then the aggregate value for P1 must also be updated in addition to M1/P1.
If the value for P1 is fixed at the Aggregate 1 level, the M2/P1 and M3/P1 values must also be changed when the data of M1/P1 is changed (in accordance with the aggregation/disaggregation rule). If an aggregate summarizes not three (3) time series, as in this example, but a few hundred or a few thousand time series, and this aggregate is fixed, saving liveCache time series data may perform poorly because a few hundred or a few thousand time series may have to be updated instead of one.
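The fan-out described above can be illustrated with a small sketch. This is an invented example that uses a simple proportional rule; the actual disaggregation in Demand Planning follows the calculation type configured for the key figure.

```python
# Illustrative sketch: if the aggregate value for P1 is fixed, changing one
# detail forces the remaining details to absorb the difference (here via a
# proportional rule). Invented example, not the DP disaggregation engine.

def change_detail_with_fixed_total(values, changed_key, new_value):
    """Change one detail while keeping the (fixed) aggregate total constant."""
    total = sum(values.values())      # fixed aggregate value
    values = dict(values, **{changed_key: new_value})
    others = [k for k in values if k != changed_key]
    rest_old = sum(values[k] for k in others)
    rest_new = total - new_value      # what the other details must now sum to
    for k in others:
        values[k] = values[k] * rest_new / rest_old
    return values

details = {"M1/P1": 10.0, "M2/P1": 20.0, "M3/P1": 30.0}
after = change_detail_with_fixed_total(details, "M1/P1", 40.0)
print(after)                # {'M1/P1': 40.0, 'M2/P1': 8.0, 'M3/P1': 12.0}
print(sum(after.values()))  # 60.0 -> aggregate value unchanged
```

Note that all three detail time series are rewritten even though only one was changed by the user, which is exactly the save-time fan-out this section warns about.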
- d) Lock problems with data change
If fixed aggregates are used, lock problems may result because the aggregate itself is locked when one or more time series of the aggregate are changed. This means that a process may encounter lock conflicts with other processes even though no overlapping data is changed.
Example:
An aggregate summarizes 10 time series.
During process 1, two (2) of these time series are aggregated at runtime and the values are changed. These two (2) time series and the aggregate itself are locked as a result.
Process 2 aggregates two (2) other time series of the same aggregate at runtime. When the values are changed, a lock conflict occurs with process 1: process 1 holds the lock on the aggregate, so process 2 cannot acquire it.
Header Data
Release Status: Released for Customer
Released on: 07.03.2006 10:11:31
Priority: Recommendations/additional info
Category: Consulting
Primary Component: SCM-APO-FCS Demand Planning
Secondary Components: SCM-APO-FCS-BF Basic Functions