DD03L remap performance

Today I would like to show you how I improved the performance of a DD03L-based remap into CSV format. I will start with a simple setup and tune it step by step, showing you different tuning possibilities along the way.

Starting setup

Our starting setup is pretty basic. We have an lcl_file_service class which performs the reformat operation for us. We can feed it any table we want, as long as it is contained in DD03L.

So this is where we start. From the outside, I feed it the request to remap the whole content of the DD02L table.
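The original code is not shown here, but a minimal sketch of such a naive remap might look like the following. All class, method, and variable names are assumptions; the point is that it exhibits every weakness discussed below.

```abap
" Hypothetical sketch of the naive starting setup (names are assumptions).
METHOD remap_to_csv.
  DATA: ls_dd03l  TYPE dd03l,
        lv_line   TYPE string,
        lv_buffer TYPE string.
  FIELD-SYMBOLS: <lv_field> TYPE any.

  LOOP AT it_data ASSIGNING FIELD-SYMBOL(<ls_row>).
    CLEAR lv_line.
    LOOP AT it_dd03l INTO ls_dd03l.              " work area copy on every pass
      ASSIGN COMPONENT ls_dd03l-fieldname        " lookup by name, not position
        OF STRUCTURE <ls_row> TO <lv_field>.
      lv_buffer = <lv_field>.                    " detour through a buffer
      IF sy-tabix < lines( it_dd03l ).           " branch inside the hot loop
        CONCATENATE lv_line lv_buffer ';' INTO lv_line.
      ELSE.                                      " last field: no separator
        CONCATENATE lv_line lv_buffer INTO lv_line.
      ENDIF.
    ENDLOOP.
    INSERT lv_line INTO TABLE rt_csv.            " INSERT instead of APPEND
  ENDLOOP.
ENDMETHOD.
```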

Initial runtime:

Stage 1

There are a few things that catch my attention here right away:

  • Looping over it_dd03l into a work area – that is a common sight, but really slow. I want to change this to a field symbol.
  • ASSIGN COMPONENT is used with the field name instead of the position. The statement can also work with the position of the target field, which is usually faster.
  • We are inserting our result instead of appending it. If inserting is not strictly necessary, we should not do it. Switching to APPEND here.
  • We have a branch which checks whether the end of the file line was reached, in order to skip the last separator. I do not like branches in hot loops, I really don’t. If you have any possibility to pull branches out of your hot loops, do it. Although your processor is likely able to predict the outcome of this branch most of the time, I would not bet on it. Therefore I will only loop up to the next-to-last field and process the last field separately. This way I avoid the branch.

So let us implement these conclusions.
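A sketch of the Stage 1 loop with all four changes applied could look like this (again, all names are assumptions, not the original code):

```abap
" Hypothetical Stage 1 sketch: field symbols, ASSIGN by position,
" APPEND instead of INSERT, and no branch inside the hot loop.
METHOD remap_to_csv.
  DATA: lv_line   TYPE string,
        lv_buffer TYPE string,
        lv_last   TYPE i,
        lv_upto   TYPE i.
  FIELD-SYMBOLS: <ls_dd03l> TYPE dd03l,
                 <lv_field> TYPE any.

  lv_last = lines( it_dd03l ).
  lv_upto = lv_last - 1.

  LOOP AT it_data ASSIGNING FIELD-SYMBOL(<ls_row>).
    CLEAR lv_line.
    " Loop only up to the next-to-last field: the branch is gone
    LOOP AT it_dd03l ASSIGNING <ls_dd03l> TO lv_upto.
      ASSIGN COMPONENT sy-tabix OF STRUCTURE <ls_row> TO <lv_field>. " by position
      lv_buffer = <lv_field>.
      CONCATENATE lv_line lv_buffer ';' INTO lv_line.
    ENDLOOP.
    " Handle the last field separately, without the trailing separator
    ASSIGN COMPONENT lv_last OF STRUCTURE <ls_row> TO <lv_field>.
    lv_buffer = <lv_field>.
    CONCATENATE lv_line lv_buffer INTO lv_line.
    APPEND lv_line TO rt_csv.
  ENDLOOP.
ENDMETHOD.
```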

Now let us check our performance.

Stage 1 runtime:

We improved our performance by 125%.


But I am still not happy. I actually do not like the whole approach of the solution. Why do I have to do things over and over again?

For example, the mapping of the individual fields: I perform an ASSIGN COMPONENT every single time I touch a field in a data row.

And every time I have to check whether the assignment actually worked.

Then I have to go through that buffer variable, so I do not blow myself up when I touch data types which are not compatible with a simple CONCATENATE operation.

And if that was not enough, I have to keep track of where I am in the file line, so I do not destroy the output format. 🙁

Stage 2

I do not want to do all of those things above over and over again. If I really have to do them, I want to do them only once.

In order to reach that goal, we have to change our approach. Let me present my ideal remap solution:

Now I know that is quite abstract, but it is essentially all the effort I am willing to invest. I want to enter a magical loop where everything has already been taken care of. The input is just waiting to be mapped to the correct output. Both are on one line, so I do not have to jump around like a fool. Every necessary input check has also been taken care of; after all, everything needed for those checks is contained in the DD03L table and a single row of input data.

I also have no more stupid branches to worry about, and the formatting has also been taken care of.

Unfortunately the real solution is not that simple or lean, but it still follows the same approach:

As you can see, we have a new method involved – build_remap_customizing – which takes care of all the work I do not want to keep repeating.

The result table has a structure with only 4 components:

  • source_offset type i
  • source_width type i
  • target_offset type i
  • target_width type i

Nothing more is needed.
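In ABAP, that customizing structure could be declared as follows (the type names are my own assumptions, not taken from the original code):

```abap
" Possible declaration of the remap customizing described above.
TYPES: BEGIN OF ty_remap_customizing,
         source_offset TYPE i,   " where the field starts in the input buffer
         source_width  TYPE i,   " how many characters to read
         target_offset TYPE i,   " where the field starts in the output line
         target_width  TYPE i,   " how many characters to write
       END OF ty_remap_customizing,
       ty_t_remap_customizing TYPE STANDARD TABLE OF ty_remap_customizing
                              WITH DEFAULT KEY.
```

build_remap_customizing fills one such row per DD03L field, so the hot loop later degenerates into pure offset/length copies.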

This is what our remap loop looks like now. The whole assigning and input-to-output matching has already been taken care of.

Now maybe you have noticed that I do not work on the input and output directly, but through buffer variables. The reason is that I want my source and target to be character-like, so I can work with parts of them. When I copy the data row into my input structure (well, it is a long char field actually…), I make sure that every field is right where I expect it to be. The same applies to the output field – here the separators have also been precomputed.
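Putting it together, the Stage 2 hot loop might be sketched like this. All variable names are assumptions; the real implementation is not shown in full.

```abap
" Hypothetical Stage 2 hot loop: pure offset/length copies, no ASSIGN,
" no checks, no branches. lv_source and lv_target are long char fields;
" the separators in lv_target were precomputed by build_remap_customizing.
LOOP AT it_data ASSIGNING FIELD-SYMBOL(<ls_row>).
  lv_source = <ls_row>.                          " copy row into the char buffer
  LOOP AT lt_customizing ASSIGNING FIELD-SYMBOL(<ls_cust>).
    lv_target+<ls_cust>-target_offset(<ls_cust>-target_width) =
      lv_source+<ls_cust>-source_offset(<ls_cust>-source_width).
  ENDLOOP.
  APPEND lv_target TO rt_csv.
ENDLOOP.
```

Because the field slots in lv_target are fixed, each pass only overwrites the field positions; the precomputed separators between them stay untouched.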

So, what’s the performance?

We have increased performance by 98% compared to stage 1 and 350% compared to the initial solution.


Basic improvements do help and can provide you with solid performance gains. But sometimes we need a little change of perspective to come up with different and better solutions.

Do take care of the basics, but also try to precompute and optimize your whole solution approach from time to time – it pays off.

Take care,

