I have another problem concerning this topic:
I tried to integrate the python/snappy code, which until now I had only tested on a single image by calling it directly from the terminal, into my bulk-processing bash script. In that script I first replace the GeoCoding, then use gpt for some further operations, all inside a loop over my image files.
The problem is that the python code is now very slow: it needs approx. 1 hour to process the same image that took only ~10 minutes when I ran the code from the terminal. When I check the python process, it uses only 0 to 1 % of the CPU but is reported as running. The step that takes so long is the writeProduct part (I use the BEAM-DIMAP format). I only write the product because I need it as input for the subsequent gpt processing; saving the images may not be necessary, but I don't know another way to pass the result of the snappy part to gpt in the bash script.
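For context, this is roughly how my loop is structured (a minimal sketch; the script name fix_geocoding.py, the graph file graph.xml, and the directory names are placeholders, not my actual files):

```shell
#!/bin/bash
# Sketch of the per-image bulk loop. All file and script names below
# are hypothetical stand-ins for my actual setup.

dim_path() {
    # Derive the BEAM-DIMAP intermediate path for an input scene.
    local base
    base=$(basename "$1" .zip)
    echo "intermediate/${base}_geocoded.dim"
}

shopt -s nullglob   # skip the loop body cleanly if no inputs match
for infile in input/*.zip; do
    dim=$(dim_path "$infile")

    # Step 1: python/snappy replaces the GeoCoding and writes the
    # intermediate product in BEAM-DIMAP format (the slow writeProduct step).
    python fix_geocoding.py "$infile" "$dim"

    # Step 2: gpt runs the remaining operations, reading the .dim file.
    gpt graph.xml -Pinput="$dim" -Poutput="output/$(basename "$dim" .dim)_processed.dim"
done
```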
I have already checked the advice given, for example, in https://forum.step.esa.int/t/slower-snappy-processing/6354, but my configuration is as recommended there.
Is it possible that the processing speed is influenced by the way I start the python code?