Gpt commands run slower than GUI

Hi,
I am running SNAP on a Mac (macOS 10.15) with 40 GB of memory, and have been using it to process S-1 images for DInSAR.

I have found that running SNAP operations by calling the gpt command (I am doing this via a subprocess in Python) takes longer than using point-and-click in the graphical user interface.
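
To be concrete, the calls look something like this; the file names here are placeholders rather than my actual products:

import subprocess

# A sketch of the kind of call made from Python (file names are placeholders).
subprocess.run(
    ["gpt", "Back-Geocoding",
     "-t", "target.dim",                 # output product
     "S1_master.zip", "S1_slave.zip"],   # the two S-1 inputs
    check=True,
)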

As a specific example, I tested this with the gpt Back-Geocoding command, varying the number of threads with the -q option (a rough sketch of how such a sweep can be timed is included after the results). The results were:
Default: 130 seconds
-q 1: 836 seconds
-q 3: 293 seconds
-q 5: 182 seconds
-q 12: 131 seconds
-q 15: 130 seconds
-q 20: 134 seconds
GUI: 26 seconds
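
For reference, a sweep like the one above can be timed with something along these lines (again with placeholder file names):

import subprocess, time

# Hypothetical sketch: time the same Back-Geocoding call for different -q values.
sources = ["S1_master.zip", "S1_slave.zip"]   # placeholder inputs
for q in [1, 3, 5, 12, 15, 20]:
    start = time.perf_counter()
    subprocess.run(
        ["gpt", "Back-Geocoding", "-q", str(q), "-t", f"target_q{q}.dim"] + sources,
        check=True,
    )
    print(f"-q {q}: {time.perf_counter() - start:.0f} seconds")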

So it would seem that the GUI is about five times faster at this operation, and increasing the number of threads beyond the default doesn’t help. The -x option also appeared to have no effect on how long the gpt command takes to run.

I am aware that there are other options available that configure the amount of cache/memory available to the gpt command, but I have difficulty understanding what they do and haven’t found anything that brings it up to speed with the GUI.
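
For what it’s worth, this is roughly how I have been passing them; the values are examples only, and I may be misreading what they control:

import subprocess

# Options I have been looking at (values below are examples, not recommendations):
#   -c : tile cache size, e.g. "8G"; as I understand it, it should stay below the JVM heap
#   -q : number of parallel threads
# The JVM heap itself (-Xmx) seems to be set in snap/bin/gpt.vmoptions rather than
# on the command line.
subprocess.run(
    ["gpt", "Back-Geocoding",
     "-c", "8G",
     "-q", "12",
     "-t", "target.dim",
     "S1_master.zip", "S1_slave.zip"],
    check=True,
)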

Thank you for any suggestions!

Hi @harry.carstairs,

I have some questions about the GUI workflow:
Do the 26 seconds cover only the Back-Geocoding operator execution, or do they also take into account the reading and writing of the products?
Could you share the graph?

Thank you in advance!

Cheers,
Martino

Hi Martino,

This is without actually using any graph. So the comparison is between:
(1) Hitting Run on the Back-Geocoding operator dialog in the GUI

and (2) calling gpt Back-Geocoding source1 source2 -t target …

Does that clarify things? Is it possible that the time difference is simply down to the reading and writing steps?

Yes, sorry, I hadn’t understood; now it is clear.

I am no expert on S-1 products, but that is a possibility, in particular if you opened and visualized the input products in SNAP (and by SNAP I mean the GUI) before running the operator.

Also note that when running most of the operators in SNAP without writing the output product, the product that appears in the product list is actually just a “dummy” product; the operator computation is only done when you visualize, save, or use it in another operation (this behavior is notified with a dialog pop-up).
[screenshot: snap_dialog pop-up]

However, an S-1 expert would probably have more insight!

Cheers,
Martino

Thanks for these ideas, Martino.

However, I think I can rule out the reading/writing: I just ran Back-Geocoding through the GUI on two images that were not already open, and definitely wrote the result out to file. It took 20 seconds, so again much faster than I can get with the gpt tool.

Hi Harry,

Thank you for your test. There is maybe one last thing to try: you could create a graph (using the Graph Builder or writing it directly in XML) that does the same operation, then run it through both the GUI and gpt and see whether the performance difference is still there.
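
Something along these lines should be enough as a starting point; the file names are placeholders, and the exact parameters are best copied from a graph exported with the Graph Builder:

import subprocess
from pathlib import Path

# Hedged sketch of a minimal Read -> Back-Geocoding -> Write graph; the
# Back-Geocoding parameters are left at their defaults and should be filled in
# from a Graph Builder export.
GRAPH = """<graph id="BackGeocodingGraph">
  <version>1.0</version>
  <node id="Read">
    <operator>Read</operator>
    <sources/>
    <parameters>
      <file>S1_master.zip</file>
    </parameters>
  </node>
  <node id="Read(2)">
    <operator>Read</operator>
    <sources/>
    <parameters>
      <file>S1_slave.zip</file>
    </parameters>
  </node>
  <node id="Back-Geocoding">
    <operator>Back-Geocoding</operator>
    <sources>
      <sourceProduct refid="Read"/>
      <sourceProduct.1 refid="Read(2)"/>
    </sources>
    <parameters/>
  </node>
  <node id="Write">
    <operator>Write</operator>
    <sources>
      <sourceProduct refid="Back-Geocoding"/>
    </sources>
    <parameters>
      <file>backgeocoded.dim</file>
      <formatName>BEAM-DIMAP</formatName>
    </parameters>
  </node>
</graph>
"""

Path("back_geocoding_graph.xml").write_text(GRAPH)
subprocess.run(["gpt", "back_geocoding_graph.xml"], check=True)
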
Thank you for your efforts!

Cheers,
M.

There is a significant startup time for Java applications. When you run a simple task in the GUI you have already paid the startup cost, but each time you run gpt you are starting a new Java process. From your data it looks like the startup time for gpt is about 100 seconds on your system. Can you show us the output for:

$ time gpt Back-Geocoding -h > /dev/null

or much better (if it is available on macOS):

/usr/bin/time -v gpt Back-Geocoding -h > /dev/null

You should run this several times, as the first run doesn’t benefit from caching. On my Windows 10 laptop it takes about 30 s for the first execution and 5 s for subsequent runs. A slightly faster Linux desktop takes half as long, probably due in part to anti-virus overhead on Windows. I have often used gpt in a POSIX shell loop on macOS and Linux, and I did find that Linux was significantly faster after taking the difference in hardware into account.

This consistently takes around 5 seconds.
Your first command gives:

gpt Back-Geocoding -h > /dev/null 4.68s user 0.32s system 193% cpu 2.583 total

And your second (without the -v option which doesn’t exist for me):

2.61 real 4.78 user 0.31 sys

Yes, normally only the first gpt execution should have a large overhead, as it initializes the user configuration (and that can take 20-40 seconds on my machine); I would not expect a big overhead after that.