ABAP – Why parallelization through DIA processes is a bad idea

Today I would like to talk about how program parallelization fits into the bigger concept of ABAP system design and configuration strategy.

For this I assume you know that an ABAP system offers multiple work process types that can be used for parallelization.

Whenever I see programs that can function in a parallel fashion, I look at what kind of processes those programs use. Often I see a simple function call with STARTING NEW TASK added: the report is pumping out dialog (DIA) processes to distribute the workload. And this is precisely what I do not want to see as a solution. The reason is not so much the code itself as the consequences it has for the overall system behavior and the configuration strategy of the ABAP system.
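For illustration, the pattern usually looks something like this (a sketch only; Z_MY_PARALLEL_UNIT is a hypothetical RFC-enabled function module, and the packet logic is assumed):

```abap
" Sketch of the anti-pattern: a report fanning out work packets.
" Every task started this way occupies a DIA work process on the
" target server, even when the report itself runs as a batch job.
DATA lv_task TYPE char8.

DO 10 TIMES.
  lv_task = |T{ sy-index }|.
  CALL FUNCTION 'Z_MY_PARALLEL_UNIT'
    STARTING NEW TASK lv_task
    EXPORTING
      iv_packet = sy-index.
ENDDO.
```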

Let us take a step back and imagine for a moment that we are not developers; let us change our perspective. You now have the task of configuring your ABAP system to meet a certain business requirement. This business requirement is simple:

“Up to X Users per Minute must be able to use the reporting functions, apart from that we have these jobs which need to run every hour. Those jobs must run. If they do not, we lose money and clients. Budget is X€ per month.”

Now let us assume that the sizing of the machine has already been taken care of and the budget limit has been respected. Your job now is to make sure that those jobs can indeed run every hour with a minimal chance of failure. How do you do that?

Your only reliable solution is to adapt the ABAP server's memory configuration. Your ABAP instance has multiple types of memory, but we will focus on just two:


HEAP memory is mostly allocated by BTCH (background) processes. A BTCH process starts by allocating HEAP until it hits either its individual limit or the global HEAP limit (both set via profile parameters). If it hits one of those limits, it starts allocating EM memory (I am skipping roll memory here, but it is insignificant for this case) until it hits one of the EM limits (individual or global).



EM memory is mostly allocated by DIA (dialog) processes. A DIA process starts by allocating EM until it hits either its individual limit or the global EM limit (both set via profile parameters). If it hits one of those limits, it starts allocating HEAP memory (again skipping roll memory) until it hits one of the HEAP limits (individual or global). A DIA process that allocates HEAP memory goes into the so-called private (PRIV) mode. In that mode the work process is not freed when the program finishes: to hold on to the memory allocated in the HEAP, it has to keep occupying the work process. If too many processes do this, you have no spare DIA work processes left and your system becomes unusable.
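Summarized, the allocation order of the two process types looks roughly like this (simplified as above, with roll memory omitted):

```text
BTCH:  HEAP --(individual or global HEAP limit)--> EM               --(EM limit)--> memory dump
DIA:   EM   --(individual or global EM limit)----> HEAP (PRIV mode) --(HEAP limit)--> memory dump
```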


Those are the basics in a very simplified form. Your DIA and your BTCH processes each have a memory area where they feel at home. Apart from the split having an interesting technical background (for example, EM memory handles fast context switches better and is therefore better equipped for masses of short workloads), it also gives you the possibility of sealing off user-driven workloads from batch-driven workloads by restricting each to its starting memory type. In that case your DIA processes would be prevented from allocating more than a minimal amount of HEAP, and your BTCH processes could not allocate EM beyond a technical minimum.
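In profile terms, such a seal could be sketched with the standard SAP memory parameters along the following lines. The parameter names are real, but the values are purely illustrative and must come from your own sizing, not from this post:

```text
# Illustrative instance profile fragment (values are NOT recommendations)
abap/heap_area_dia         = 200000000    # ~200 MB: tight HEAP cap per DIA work process
abap/heap_area_nondia      = 4000000000   # ~4 GB: generous HEAP per BTCH work process
abap/heap_area_total       = 8000000000   # global HEAP ceiling for the instance
ztta/roll_extension_dia    = 2000000000   # ~2 GB EM quota per dialog user context
ztta/roll_extension_nondia = 100000000    # small EM quota keeps BTCH out of EM
em/initial_size_MB         = 16384        # size of the Extended Memory pool
```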

This is what enables you to make sure that those jobs are never short on memory and are always in a position to start. Even if your DIA users start to demand an extreme amount of memory, your BTCH processes are shielded. Although you cannot compensate for an undersized machine, you can protect your important business processes.

After watching our core processes dump in the night because a user tried to allocate 40 GB of memory, I am a big fan of sealing those workloads off from each other.

And this is precisely the point where parallelization of programs through DIA processes becomes a big problem. If I, as a “run oriented” person, start BTCH jobs, I expect them to run in batch, not to pump out DIA processes. A batch job that spawns its workers as DIA tasks places their memory consumption in EM, the very area you reserved for your users, so you cannot seal it off and you will therefore fail to deliver. Your batch workload has to be immune to dialog-user-driven actions. When it gets serious, I would rather have a few dialog users receive a memory dump than have my core jobs fail.
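If the workload really has to be split, the batch-friendly alternative is to schedule the packets as separate background jobs, so that every worker runs in a BTCH process and stays inside the sealed-off HEAP area. A sketch using the standard job API, assuming a hypothetical worker report ZMY_WORKER_REPORT with a packet parameter:

```abap
" Sketch: parallelize via background jobs instead of DIA tasks.
" ZMY_WORKER_REPORT and p_packet are assumptions for illustration.
DATA: lv_jobname  TYPE btcjob VALUE 'ZPARALLEL_WORKER',
      lv_jobcount TYPE btcjobcnt.

DO 4 TIMES.  " one job per work packet
  CALL FUNCTION 'JOB_OPEN'
    EXPORTING
      jobname  = lv_jobname
    IMPORTING
      jobcount = lv_jobcount.

  " Each job runs in a BTCH work process and therefore draws on HEAP first.
  SUBMIT zmy_worker_report
    WITH p_packet = sy-index
    VIA JOB lv_jobname NUMBER lv_jobcount
    AND RETURN.

  CALL FUNCTION 'JOB_CLOSE'
    EXPORTING
      jobname   = lv_jobname
      jobcount  = lv_jobcount
      strtimmed = abap_true.  " start immediately
ENDDO.
```

Whether you fan out four jobs or forty is then a question for the batch scheduler and the number of configured BTCH work processes, not for the dialog users.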


Take care



