gpt and SNAP performance parameters - exhaustive manual needed!

Is this still an issue?
To which value have you set the tile cache (the -c option of gpt)?
It could be that 15 GB is not sufficient for this type of processing.
You could try to split your graph into three: two graphs for applying the orbit files and a third for the remaining part.

We need to review and optimise the memory usage and performance of the S1TBX operators, as they seem to be causing the most problems.


When setting snap.jai.tileCacheSize in snap.properties,

gpt --diag shows 0 B for most of the settings. I’ve tried many different combinations…

e.g. setting snap.jai.tileCacheSize=20000 (which would be 20 GB) shows:
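For reference, this is how the property looks in the configuration file (the value is interpreted in megabytes; 20000 is the example from this post):

```
# <SNAP install dir>/etc/snap.properties
snap.jai.tileCacheSize=20000
```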

[screenshot: gpt --diag output showing 0 B]

Is that just a bug in how gpt displays these settings, with processing actually using a 20 GB cache, or is the tileCacheSize disabled by mistake?

(Same thing happens when using SNAP GUI Performance tuning for setting the variables.)

When trying different values, gpt --diag sometimes shows e.g. 1.5 GB for snap.jai.tileCacheSize=22000. I can’t see any pattern in the way the tileCacheSize is set.

Unfortunately, you are right, this is a bug.
Due to a numerical overflow, the cache size is set to zero when using values higher than 2000. Because of multiple overflows, a value of 22000 gives you a cache of 1.5 GB. This will be fixed with a module update in the coming weeks.
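The numbers reported above are consistent with the megabyte value being multiplied into bytes in 32-bit signed integer arithmetic. A minimal sketch simulating that wraparound (Python; `buggy_cache_bytes` is a hypothetical model of the bug, not actual SNAP code):

```python
def to_int32(x):
    """Wrap an integer the way Java's 32-bit signed int arithmetic does."""
    x &= 0xFFFFFFFF
    return x - (1 << 32) if x >= (1 << 31) else x

def buggy_cache_bytes(size_mb):
    """Hypothetical model of the bug: convert MB to bytes in 32-bit math.
    A negative result behaves like an empty (0 B) cache."""
    return max(to_int32(size_mb * 1024 * 1024), 0)

print(buggy_cache_bytes(2000))   # 2097152000 -- still below 2^31, reported correctly
print(buggy_cache_bytes(20000))  # 0 -- wraps to a negative value, shown as 0 B
print(buggy_cache_bytes(22000))  # 1593835520 -- about 1.5 GB, matching gpt --diag
```

This matches the reports in the thread: 20000 MB shows up as 0 B, while 22000 MB happens to wrap around to roughly 1.5 GB.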

You can work around this by using the -c option on the command line. That way the cache is set correctly.
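For example (graph and file names are placeholders; per gpt -h, the -c value can be suffixed with K, M, or G):

```
# Set a 15 GB tile cache directly on the command line,
# bypassing the affected snap.jai.tileCacheSize property
gpt myGraph.xml -c 15G -t target.dim source.zip
```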


Thanks, good to know ;).

Hi @marpet,

Thanks for the information about setting the Java parameters.

You mention that -J-Xmx16G will not override the -Xmx8G from the gpt.vmoptions file. Is this really impossible? My problem is that I’ve installed SNAP on a cluster with two types of machines (one with more RAM than the other). So I would like to launch gpt on each machine with optimal parameters, but I don’t see how if -J-Xmx is not taken into account. Is it perhaps possible to specify which gpt.vmoptions file my gpt should use depending on the machine? Or should I try to install SNAP twice (but the cluster has a single front end)?

Hi Marco,

Would you please give your opinion? I’m a bit confused: on the old machine with 16 GB RAM and a 1 TB HDD, the coherence graph for two SLC images with multilooking took from a few seconds to one or two minutes.

Now the machine has 32 GB RAM and a 2 TB SSD,

and the same process takes more than 10 minutes. SNAP has been reinstalled, and this is the snap.properties from SNAP,

and the following is the snap.properties from SNAP\etc,

and this is the gpt.vmoptions.

You see me puzzled, too.

One thing I recently noticed is that setting the properties

snap.dataio.reader.tileWidth

and

snap.dataio.reader.tileHeight

can slow down the processing.

Delete them from snap.properties and try again.
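That is, if snap.properties contains entries like the following (the 512 values are only an example), delete or comment them out:

```
# snap.properties -- remove these two entries if present:
snap.dataio.reader.tileWidth=512
snap.dataio.reader.tileHeight=512
```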


Hi Marco,

Thanks a lot for this very precise technical note. For the same graph, the difference in time is 19 minutes after deleting those two lines versus 29 minutes with the two lines in place.

But is this duration (19 min) reasonable for 32 GB RAM and a 2 TB SSD? (using gpt)

I think in your case it would be useful to increase snap.jai.tilecache; perhaps you could try 12000 or 15000 (if you are using gpt, you can use the -c parameter).


Okay, that’s good. But you say processing previously finished within 2 minutes.
This means it is still much slower.
Are you sure about the timings for your previous memory configuration?
Somehow I doubt that this graph would run that fast, judging from what users have reported.

The 19 minutes seems reasonable to me considering the processing graph. It should take almost the same time with both configurations you have mentioned.

And increasing the tile cache might help, as Omar suggested.


Now I’m really confused about the timing, but most probably it was a few minutes.

Hi Marco and Omar @obarrilero

Using gpt,

After deleting those two lines and increasing the cache size to 15000, the same graph now takes only 3 minutes.

Tremendous thanks.

But Omar, I’m not quite sure I got this point properly; would you please clarify it?


@obarrilero shouldn’t the configuration optimiser suggest that automatically?

@falahfakhri
I meant that you can modify the snap.jai.tilecache property in the properties files or directly on the command line when running gpt, using the -c option. You can run gpt -h to get more information about gpt’s command-line options.


@mengdahl For the configuration optimiser we decided to compute it as 70% of the memory dedicated to SNAP, but it depends a lot on the processing… Perhaps we should think more about it…

@marpet can I use variables in gpt.vmoptions e.g.

-include-options $HOME/custom.vmoptions ?

Yes, this should be possible.
At least it is documented for the installer we use:
install4j Help - VM parameters (ej-technologies.com)
But I haven’t tried it myself yet.
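For reference, a sketch of the per-machine setup this would enable (the contents are an assumption based on the install4j documentation, untested here):

```
# gpt.vmoptions (shared SNAP installation)
-Xmx8G
-include-options $HOME/custom.vmoptions

# $HOME/custom.vmoptions (different on each machine type)
-Xmx16G
```

Assuming the later -Xmx takes precedence, each machine type can then keep its own custom.vmoptions with the heap size that suits its RAM.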


Thanks! I’ll run a few tests and drop the outcome here. It definitely looks like it’s supported.

@marpet all good, gpt reads the custom options from $HOME/custom.vmoptions
Thanks for your support!
