General Information About liveCache
This article gives you an introduction to SAP liveCache technology and its main application areas.
Working with liveCache
Production planning in Supply Chain Management (SCM) requires a large amount of calculation work, particularly when determining the appropriate use of resources and controlling the flow of materials. This is the main job of liveCache.
Data Model
A small set of objects forms the basis of a flexible order network in the data model. This network can be used to model the planning requirements of various SCM applications, such as PP/DS, SNP, TP/VS, SPP, MSP, and Alert Monitor.
liveCache Interface
The interface between ABAP and liveCache consists of a collection of function modules. These are known as OM modules and are responsible for calling liveCache in the required way. However, they should not be used directly by the application.
Error Handling and Troubleshooting
Errors in liveCache are communicated using function module exceptions and a return code table.
Consistent View
Within a transaction, the application program sees all the data saved in liveCache in the state it had at the start of the transaction. This means that the program can extract data from liveCache at any time without being affected by transactions running in parallel (that is, without seeing the changes made in these parallel transactions). There are, however, some drawbacks to this consistent view.
Simulation Version (also known as a simsession)
Simulation versions (or simsessions) prevent the lock conflicts that can occur when liveCache data is accessed. For this reason, objects in liveCache should only be created, changed, or deleted within a simulation version.
Performance
liveCache is designed to process very large quantities of data as efficiently as possible. However, liveCache can slow down if the interfaces are called in an inefficient way, or if too much data is extracted. This needs to be considered carefully if liveCache is to handle mass data in complex calculations.
Working with liveCache
The liveCache Data Model
Two master data objects (the resource and the pegging area) and three transaction data objects (the order, the activity, and the I/O node) form the flexible basis of the liveCache data model.
These objects work together with the two most important liveCache modules: liveCache Scheduler is used to schedule activities on resources, and the Pegging functions are used to assign appropriate quantities of materials from the I/O nodes.
Master Data
Resource
In an APO business scenario, you use a resource to model an object in a supply chain, when the object has a restricted capacity and is required to perform process steps. A resource can model, for example, a person, a machine, a tool, a method of transportation, or some other type of device. The transaction data assigned to a resource consists of the activities planned on this resource.
Pegging Area (or Location Product)
A pegging area is a combination of a product and a location. The transaction data assigned to a pegging area consists of the input and output nodes (known as I/O nodes for short). Each I/O node is assigned to exactly one pegging area. All I/O nodes in a single pegging area belong to the same product and are in the same location.
Transaction Data
Activity
An activity is an elementary work step within a process (such as production or transport). An activity is known as a dummy activity if it does not need a resource and is therefore not assigned to a resource. Non-dummy (regular) activities can be sequence-dependent setup activities or processing activities. A processing activity can also be a setup activity with a constant (non-sequence-dependent) duration.
Mode and Capacity Requirement (or capreq)
Activities that require resources can have production variants, which means they can be dispatched to alternative resources. Such a production variant is known as a mode. Each mode in turn requires capacity on one or more resources. Each non-dummy activity requires at least one mode, and each mode needs at least one capacity requirement.
Input and Output Nodes (I/O Nodes)
An I/O node models a product either required by a process step (in the case of an input node or requirement) or created by a process step (in the case of an output node or receipt). The process step itself is modeled by an activity. The node saves the quantity of the product and its date/time (a requirement date/time for input nodes and an availability date/time for output nodes). An I/O node is assigned to exactly one activity, but an activity can be assigned to any number of I/O nodes.
Order
An order combines multiple activities (or suborders) in a single processing unit. The following rules apply:
1. An order must contain at least one suborder or activity.
2. An order can contain only suborders or only activities, not both.
3. A suborder cannot itself contain further suborders.
The order that contains a suborder is known as its parent order. An order that is not itself contained in a parent order is known as a top order. Rule 3 means that a parent order must always be a top order.
Constraint
A constraint defines a time relationship between the start or end of two activities. liveCache Scheduler must respect this constraint when planning. For example, a particular activity may only begin once another activity has been completed. If both activities in a constraint belong to different top orders, the constraint is external; if not, the constraint is internal.
Material Flow (Pegging Edge)
A pegging edge joins the output node of a top order to the input node of another top order. This means it defines the receipt that covers a requirement. liveCache creates dynamic pegging edges automatically; however, it is also possible to use fixed pegging edges to fix a receipt to one or more requirements or to cover a requirement with one or more receipts.
Scheduler and Pegging
The Scheduler
liveCache Scheduler is the central component of liveCache Applications. It determines the start and end times for activities scheduled on resources or joined to other activities by constraints. liveCache Scheduler can employ various strategies and respect a range of different conditions.
Pegging
Pegging is the process by which available materials are distributed to requirements. The pegging is recalculated dynamically each time the planning situation changes. However, it is also possible to assign a fixed receipt to a fixed requirement.
Working with liveCache
The liveCache Interface
The interface between ABAP and liveCache consists of a collection of function modules. These function modules are in the namespace /SAPAPO/ and are prefixed with OM (which stands for Object Manager). These OM modules have the following tasks:
Connecting to liveCache
Calling liveCache using EXEC SQL
Converting errors communicated by liveCache into ABAP exceptions
The OM modules are developed and maintained in liveCache development systems and the source code cannot be modified by the application.
Direct liveCache calls that skip the OM modules (direct EXEC SQL calls from an application program) are also not permitted. Calls of this nature will at some point cause serious problems, since any changes to the liveCache interface will only be reflected in the OM modules.
The DM Layer
We recommend that you do not call OM modules directly from an application program. For most scenarios, function modules are provided that package and standardize liveCache accesses in a form suitable for the application in question. These modules are usually prefixed with /SAPAPO/DM (Data Manager), which is why this outer liveCache wrapper is known as the DM Layer.
Instead of calling an OM function module in an application program, we therefore recommend that you call up a where-used list for the module. This enables you to check whether a suitable DM module exists for the function you require. If not, and the new function covers a general requirement, we suggest that you consider creating a new DM function module.
Working with liveCache
Error Handling and Troubleshooting
Various errors are possible when liveCache is called, for example errors that occur when calling an OM module. They fall into the following error classes:
Application errors or errors when calling liveCache
Internal liveCache errors (programming errors in liveCache, which usually cause short dumps)
A short dump with an appropriate error message is usually also generated if the connection to liveCache is lost.
Application Errors
Any application errors that occur when liveCache is called are generally caused by programming errors at or near the code section where the OM modules are called. liveCache checks the data it receives and communicates any inconsistencies or invalid values in the return code table ET_RC.
There are two categories of application errors. The OM module raises the appropriate exception and fills the return code table ET_RC:
LC_COM_ERROR (sy-subrc 2)
The error is so serious that no processing is possible in liveCache and none of the received data can be handled.
This exception is generally caused by a programming error in the ABAP application program. If raised, the current transaction must be rolled back.
LC_APPL_ERROR (sy-subrc 3)
An error occurred when the application tried to process a data record. An entry is created in ET_RC for each error. All other data records (that do not have entries in ET_RC) were processed successfully in liveCache.
It is possible to anticipate errors in the category LC_APPL_ERROR. You can, for example, call the module OM_ORDER_GET_DATA with the GUID of an order to determine whether this order exists in liveCache. If the order does not exist in liveCache, the module raises LC_APPL_ERROR (sy-subrc 3) and generates a return code with the error number 40 (om_invalid_order).
For each error, the return code table ET_RC contains the following information:
Return code (RC): Error number; the transaction /SAPAPO/OM10 provides further information about this error number.
GUID and type of the main object (OBJECTKEY and OBJECT_TYPE): The error occurred when processing this object.
GUID and type of the error object (ERROR_OBJECT_KEY and ERROR_OBJECT_TYPE): This object caused the error.
If an error in the category LC_COM_ERROR occurs, ET_RC only contains a single data record. The data record specifies only an error number, because none of the objects can be processed and the error cannot be allocated to one particular object.
The main objects in a liveCache call are generally the data records in the first import table of the OM module. Any exceptions to this are listed in the documentation of the module in question.
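To make this concrete, the following sketch shows a defensive call of OM_ORDER_GET_DATA. The exception names, sy-subrc values, and the error number 40 (om_invalid_order) are taken from the description above; the table parameter names and the line type of ET_RC are illustrative assumptions and should be checked against the module interface in your system.

  TYPES: BEGIN OF ty_rc,                " illustrative; use the DDIC line type of ET_RC
           rc                TYPE i,    " error number (see transaction /SAPAPO/OM10)
           objectkey         TYPE guid_22,
           object_type       TYPE i,
           error_object_key  TYPE guid_22,
           error_object_type TYPE i,
         END OF ty_rc.

  DATA: lt_order_guids TYPE STANDARD TABLE OF guid_22,  " orders to read
        lt_rc          TYPE STANDARD TABLE OF ty_rc,
        ls_rc          TYPE ty_rc.

  CALL FUNCTION '/SAPAPO/OM_ORDER_GET_DATA'
    TABLES
      it_order      = lt_order_guids    " main objects of this call
      et_rc         = lt_rc
    EXCEPTIONS
      lc_com_error  = 2
      lc_appl_error = 3.
  " LC_CONNECT_FAILED is deliberately not caught: if the connection is
  " down, the uncaught exception produces the short dump recommended below.

  CASE sy-subrc.
    WHEN 2.
      " LC_COM_ERROR: nothing was processed; roll back the transaction
      ROLLBACK WORK.
    WHEN 3.
      " LC_APPL_ERROR: all records without an entry in ET_RC were processed
      LOOP AT lt_rc INTO ls_rc WHERE rc = 40.  " om_invalid_order
        " this order GUID does not exist in liveCache
      ENDLOOP.
  ENDCASE.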
Internal liveCache Errors
If a short dump with the runtime error DBIF_DSQL2_SQL_ERROR occurs when liveCache is called, it can usually be traced back to a programming error in liveCache (particularly if the short dump specifies SQL error 60 or 600).
In these cases, contact the liveCache Applications support team on the component BC-DB-LCA. To help in troubleshooting, include any details of how to reproduce the error.
liveCache Connection Errors
If the connection to the liveCache server is broken while a liveCache call is being processed, a short dump is generated, stating that the database connection has been lost. If liveCache is not available at the point when an OM module is called, the module raises the exception LC_CONNECT_FAILED (sy-subrc = 1). This exception should not be handled by the application program; an appropriate short dump should be generated instead.
In most cases, liveCache in a production system should not be stopped for longer than a few minutes (for upgrades or patches). Pay attention to any system messages informing you that liveCache is not available. If liveCache remains unavailable for a longer period of time, contact the system administrator.
Working with liveCache
The Simulation Version (or Simsession)
There are two ways of editing data in liveCache:
The actual data is accessed directly in liveCache. This is the operational, committed data that can be viewed by all transactions.
The changes are made to a copy of the actual data in a simulation version, also known as a simsession or OMS version. The changes can then be applied to the actual data at the end of the simulation version. This step is known as a Merge.
Advantages and Disadvantages of Simulation Versions
Simulation versions are optional, and they have both advantages and disadvantages.
The first advantage is that an application program that uses a simulation version views all data in liveCache in the state it had when the simulation version was generated. Any changes performed and committed by parallel transactions are hidden. This guarantees a consistent view of all persistent data in liveCache, for a freely definable period of time.
A further advantage is that any changes made in a simulation version are temporary (up until the merge): that is, the changes are only simulated, even when committed. The actual data is not changed until the merge. If the merge is skipped and the simulation version deleted instead, all changes made in the simulation version are discarded.
Finally, simulation versions prevent the lock conflicts that can occur when actual data in liveCache is accessed multiple times in parallel.
The disadvantage of a simulation version is the additional runtime needed by the merge step to apply the updates to the actual data. Simulation versions also use more memory in liveCache, since liveCache needs to retain before-images of all objects changed by parallel transactions for as long as the simulation version exists.
Creating and Using a Simulation Version
The central OM module for the administration of a simulation version is called OM_SIMSESSION_CONTROL. To create a new simulation version, you call this module using the method gc_simsession_new ('N') and a newly generated GUID. This simulation version GUID identifies the new simulation version.
All relevant OM modules in the liveCache interface have an import parameter IV_SIMSESSION. This is where you enter the simulation version GUID, so that the required changes are made in the correct version. Even if you are only reading data from liveCache, you should specify the current simulation version GUID, so that the caller views the correct data.
If you leave the parameter IV_SIMSESSION in its initial state (NULL_GUID), liveCache operates on the actual data.
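A minimal sketch of this step follows. The method value 'N' (gc_simsession_new) and the parameter IV_SIMSESSION come from the description above; the method parameter name IV_SIMSESSION_METHOD is an assumption to be checked against the module interface.

  DATA lv_simsession TYPE guid_22.    " simulation version GUID (type illustrative)

  " Generate a new GUID to identify the simulation version
  CALL FUNCTION 'GUID_CREATE'
    IMPORTING
      ev_guid_22 = lv_simsession.

  " Create the simulation version: method 'N' = gc_simsession_new
  CALL FUNCTION '/SAPAPO/OM_SIMSESSION_CONTROL'
    EXPORTING
      iv_simsession        = lv_simsession
      iv_simsession_method = 'N'      " parameter name is an assumption
    EXCEPTIONS
      lc_com_error  = 2
      lc_appl_error = 3.

  " From now on, pass lv_simsession in IV_SIMSESSION of every OM call,
  " including read-only calls, so that the simulated data is seen.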
The Merge
When the simulation version is merged, the simulated changes are applied to the actual data in liveCache. This is done by calling the module OM_SIMSESSION_CONTROL with the current simulation version GUID, followed by one of the following methods:
'M' = gc_simsession_merge: Soft merge
The merge terminates at the first error; either all changes are applied or none.
'H' = gc_simsession_hardmerge: Hard merge
liveCache attempts to apply as many changes to the actual data as possible, order by order. If the changes to one order cannot be merged, liveCache moves to the next order changed in the simulation version and attempts to apply the changes there.
Before it can apply changes to the actual data, the merge function must obtain the required object locks in liveCache. If an object is locked temporarily, it waits (but no longer than 20 seconds). This means that the merge can take some time, depending on how much data was changed in the simulation version.
A merge can fail for one of the following reasons:
A parallel transaction has confirmed an order in the meantime, so planning changes are no longer permitted for this order.
No lock could be obtained for the order (even after waiting), so it could not be modified.
Any errors that cause a merge to fail are listed in the export table ET_INFO_RC of the module OM_SIMSESSION_CONTROL. The other export tables provide information about the changes made to the actual data by the merge.
Deleting the Simulation Version
The simulation version is deleted automatically after every successful soft merge and after every hard merge. To delete a simulation version manually (for example, because the user has canceled the current transaction), call the module OM_SIMSESSION_CONTROL with the method gc_simsession_delete ('D').
A simulation version is not deleted after a failed soft merge. This means that the merge can be repeated. (The simulation version must be deleted manually, if required.)
Other Features of Simulation Versions
liveCache provides a range of further administration functions for simulation versions, documented in the module OM_SIMSESSION_CONTROL.
Simulation Versions and Database Transactions
Once the changes from a simulation version have been merged with the actual data, a COMMIT WORK is required to commit the updates. Correspondingly, a ROLLBACK WORK rolls back the merge and the changes in the actual data.
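Putting these pieces together, here is a hedged sketch of the merge step. The method values, ET_INFO_RC, and the concluding COMMIT WORK are described above; the remaining parameter names and the line type of ET_INFO_RC reuse the illustrative assumptions from the earlier sketches.

  DATA lt_info_rc TYPE STANDARD TABLE OF ty_rc.  " line type illustrative

  " Soft merge: apply all simulated changes to the actual data, or none
  CALL FUNCTION '/SAPAPO/OM_SIMSESSION_CONTROL'
    EXPORTING
      iv_simsession        = lv_simsession
      iv_simsession_method = 'M'      " gc_simsession_merge
    TABLES
      et_info_rc           = lt_info_rc  " filled if the merge fails
    EXCEPTIONS
      lc_com_error  = 2
      lc_appl_error = 3.

  IF sy-subrc = 0 AND lt_info_rc IS INITIAL.
    " The simulation version is deleted automatically after a successful merge
    COMMIT WORK.                      " commit the changes to the actual data
  ELSE.
    " After a failed soft merge the simulation version still exists:
    " either repeat the merge later or delete the version manually
    CALL FUNCTION '/SAPAPO/OM_SIMSESSION_CONTROL'
      EXPORTING
        iv_simsession        = lv_simsession
        iv_simsession_method = 'D'.   " gc_simsession_delete
  ENDIF.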
Consequences of Simulation Versions for liveCache Performance
Overly short and overly long simulation versions can both have a negative effect on liveCache performance:
When you create a simulation version, you also create a consistent view in liveCache. liveCache needs to create and retain a change history for this view. If you have a long-running simulation version (particularly in a long planning board session), you should refresh the open simulation version at regular intervals.
Very short simulation versions can also have a noticeable effect on liveCache performance, due to the work needed to create the version, merge the changes, and delete the version again. Therefore it is best to use one simulation version for a series of connected liveCache calls, rather than create many individual versions.
Working with liveCache
liveCache Performance
liveCache is designed to handle large volumes of data; however, you may occasionally encounter a drop in performance. In some customer scenarios, the sheer volume of data and the accompanying performance problems can only be handled by improved hardware (such as more main memory or faster processors). In exceptional cases, it may even be necessary to ship a patch with internal performance enhancements for liveCache, or to develop a new liveCache interface tailored to the specific problem.
In many cases, though, poor performance can be traced back to bad practices when using liveCache. Studying and implementing the following recommendations can help you avoid making these mistakes, right from the development phase:
Only extract the data you need
Some modules that read data from liveCache have a large number of export tables (to obtain as much data as possible for an object); however, the application rarely needs all this data. Avoid requesting the export tables that are not needed, or use exclude structures to stop them from being filled with data.
In some cases you can even specify that certain fields in a data structure are not recalculated when you extract them: particularly fields such as slacktime and devquantity in the extended I/O node information. These fields take an especially long time to calculate.
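As a sketch of this principle: export tables that are simply not supplied in the call do not have to be filled. The export table names (ET_ORDERS, ET_ACTIVITIES) are illustrative; lv_simsession and lt_order_guids are taken from the earlier sketches.

  " Read only the order header data; the activities are not requested,
  " so liveCache does not have to assemble or transfer them.
  " lt_order_headers: table of the order header structure (DDIC type
  " is system-specific)
  CALL FUNCTION '/SAPAPO/OM_ORDER_GET_DATA'
    EXPORTING
      iv_simsession = lv_simsession   " current simulation version, if any
    TABLES
      it_order      = lt_order_guids
      et_orders     = lt_order_headers  " header data only (name illustrative)
      " et_activities = lt_activities   " deliberately not requested
    EXCEPTIONS
      lc_com_error  = 2
      lc_appl_error = 3.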
Correct use of the mass interface
The OM modules are designed to handle mass data, which means that they can create, change, or query multiple objects in a single liveCache call. Two usage patterns have a negative effect on performance in liveCache: a series of many calls that each involve a small amount of data, and individual calls that each create or change a very large amount of data.
Avoiding Individual Calls
Regardless of the volume of data, runtime is always incurred when connecting the application server (where the ABAP application runs) to liveCache. The smaller the amount of data involved, the greater the proportion of runtime taken up by actually establishing the connection (it can even outweigh the data transfer time).
It is therefore far better, when extracting the data for 1000 orders, for example, to call the module OM_ORDER_GET_DATA once with 1000 order IDs than to call it 1000 times with a single order ID.
We also recommend that you avoid repeat reads of data from liveCache if possible. For example, initially you may want to determine only the order type for a certain order ID and therefore only read the header data of the order using OM_ORDER_GET_DATA. However, later on you could decide that you also want to see the activities of the order and call OM_ORDER_GET_DATA again to get them. If you know from the start that you will also need the activities at some point, it is best to include them in the initial liveCache call.
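A sketch of the difference between the two calling patterns, with error handling omitted for brevity and parameter names following the earlier, illustrative sketches:

  DATA: lv_guid   TYPE guid_22,
        lt_single TYPE STANDARD TABLE OF guid_22.

  " Anti-pattern: 1000 liveCache calls, each reading a single order
  LOOP AT lt_order_guids INTO lv_guid.
    CLEAR lt_single.
    APPEND lv_guid TO lt_single.
    CALL FUNCTION '/SAPAPO/OM_ORDER_GET_DATA'
      TABLES
        it_order = lt_single
        et_rc    = lt_rc.
  ENDLOOP.

  " Recommended: one mass call reading all 1000 orders at once
  CALL FUNCTION '/SAPAPO/OM_ORDER_GET_DATA'
    TABLES
      it_order = lt_order_guids
      et_rc    = lt_rc.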
Creating Packages
Creating or changing large numbers of objects in liveCache can affect performance, partially because of the need to retain history data for the consistent view.
Large volumes of data can also create bottlenecks in the main memory on the liveCache server. This can even be caused by read modules, if data is requested for a large number of objects, and particularly when all objects of a certain type in liveCache need to be read (for example, all orders in liveCache).
If your applications process large volumes of data, we recommend that you separate it into packages and distribute the processing across multiple liveCache calls. For example, if you want to create 50000 orders, it is best to split them into, say, 100 packages of 500 orders. The liveCache resources do not then have to handle 50000 orders (and their subobjects) simultaneously.
We recommend that you use parameters to define the size of packages and so achieve an optimum balance between performance and resource use when handling any potential customer scenario.
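A sketch of such a package loop follows. The creation module /SAPAPO/OM_ORDER_CREATE is an assumption (use the module appropriate to your scenario), and in practice the package size would be read from a parameter rather than hard-coded.

  CONSTANTS lc_package_size TYPE i VALUE 500.  " in practice: configurable

  " lt_all_orders contains all 50000 orders to be created (type system-specific)
  DATA: lv_from    TYPE i VALUE 1,
        lt_package LIKE lt_all_orders,
        ls_order   LIKE LINE OF lt_all_orders.

  WHILE lv_from <= lines( lt_all_orders ).
    " Copy the next package of at most 500 orders
    CLEAR lt_package.
    LOOP AT lt_all_orders INTO ls_order
         FROM lv_from TO lv_from + lc_package_size - 1.
      APPEND ls_order TO lt_package.
    ENDLOOP.

    " One liveCache call per package (module name is an assumption)
    CALL FUNCTION '/SAPAPO/OM_ORDER_CREATE'
      TABLES
        it_order = lt_package
        et_rc    = lt_rc.

    lv_from = lv_from + lc_package_size.
  ENDWHILE.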
Analyzing Performance Problems
Trying to trace the causes of performance problems is often difficult and time-consuming. We recommend, therefore, that you include appropriate analysis options when you develop (or revise) your liveCache applications and implement logging of performance-relevant data and actions (see the sketch after this list), particularly for:
the data processed, categorized by data type
the runtimes of long-running processing blocks
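For the runtime part, a minimal sketch using the ABAP statement GET RUN TIME; where the measurement is stored (application log, custom table, trace) is application-specific.

  DATA: lv_t0      TYPE i,
        lv_t1      TYPE i,
        lv_runtime TYPE i.

  GET RUN TIME FIELD lv_t0.

  " ... long-running processing block, e.g. a mass liveCache call ...

  GET RUN TIME FIELD lv_t1.
  lv_runtime = lv_t1 - lv_t0.         " runtime in microseconds

  " Log lv_runtime together with the processed data volume (e.g. number
  " of orders), categorized by data type, in your application log.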