Gpt performance

Hi,

Sorry, I found that I forgot to update gpt.vmoptions. I guess I was looking for this file …

Cheers

Hi,

I still have problems:
In the gpt.vmoptions file I included the same settings I chose in the GUI under Preferences --> Performance:
-Xverify:none
-XX:+AggressiveOpts
-Xmx16384m
-Xms512m
-Djava.io.tmpdir=/Volumes/data1/snap_cache/tmp

In the GUI the execution of a graph takes 30 min. The same graph processed using the command-line gpt takes more than 4 h. What can I do?

I have Mac OS X 10.10.5 with 32 GB RAM.
By the way, what settings would you recommend for Xmx, Xms, tile size and cache size?

hope you can help

Many thanks


I see the same thing. I usually run the full SNAP on Windows and it performs fine, and I run gpt on Linux on a similar computer, but it performs 10x slower. Both machines have 16 GB RAM and similar processors.


Make sure you have gpt configured properly by setting the Java VM heap size large enough.

Hi,

What exactly do you mean?
I have no idea about Java at all. Where can I change the gpt configuration?

When I call
gpt graph.xml
does it automatically read the file gpt.vmoptions, or do I have to add the Java options like this:

gpt graph.xml -J-XX:+AggressiveOpts -J-Xverify:none -J-Xms512M -J-Xmx16384M …

I made changes in
/Applications/snap/etc/snap.properties:

# Tile cache size [Mb]
snap.jai.tileCacheSize = 4096

# Default tile size in pixels
snap.jai.defaultTileSize = 2048

# Number of CPU cores used for image rendering and graph processing.
# Allow this to default to Runtime.getRuntime().availableProcessors().
snap.parallelism = 12

Is this also considered by gpt?
When I call gpt -h, exactly those values from the snap.properties file
are shown as defaults.

Again the question: what are good settings for cache and tile size, Xms, Xmx, etc.?
I have 32 GB RAM.

In another forum some more Java options are discussed. What should be used to get good performance?

The options entered in gpt.vmoptions are automatically considered.
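For example, gpt.vmoptions lists one VM option per line. A minimal sketch using the values quoted earlier in this thread (the tmp path is just the one you used; adjust all values to your own machine):

-Xms512m
-Xmx16384m
-Xverify:none
-XX:+AggressiveOpts
-Djava.io.tmpdir=/Volumes/data1/snap_cache/tmp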

To find good settings you should try the tool SNAP offers.
In the menu select Tools --> Options and then select Performance.
Use the Compute button to find good settings on your computer.

Hi,

Thanks. I computed the settings by choosing different combinations.
Then I saved the changes. When I closed SNAP and reopened it, I got some strange messages. It took me some time to find that in snap.conf a slash after jdkhome was now missing, which caused the messages.
The uncommented line was the new one which caused the error messages. I changed this and it worked.

#default_options="--jdkhome "/Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home" --branding snap …
default_options="--jdkhome "/Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home" --branding snap …

  1. If I understand correctly:
     the SNAP GUI uses the VM settings defined in snap.conf;
     gpt uses the VM settings defined in gpt.vmoptions.

Both are the same, but I get large performance differences.

lveci mentioned setting the Java VM heap size large enough.
But to my understanding this is done with the -Xmx option defined in gpt.vmoptions and snap.conf.
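If I understand it right, that means for example (heap value taken from my gpt.vmoptions above; the '…' stands for the rest of the default_options line, which I left untouched):

gpt.vmoptions:  -Xmx16384m
snap.conf:      default_options="… -J-Xmx16384m …"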

Hope you can help

Hello vhelm,

Thanks for all your patience.
Regarding the wrongly changed snap.conf, I think that @NicolasDucoin will have a look.
Regarding the big performance difference, we need to investigate further why this happens. Do you experience this with every operator or only with a few specific ones? Is it possible that it depends on the source product? Or maybe it depends on the format of the target product.
Do you have any observation to share?

Thanks

Hi,

I see this definitely with the georeference operator.
It seems not to be the case with Multilook.

The other graph I was talking about consists of TOPSAR-Split, Apply-Orbit, Interferogram, Back-Geocoding, Topo Phase Removal and a filter.

I just tried Sentinel-1 input with BEAM-DIMAP output.
With the georeference example I output ENVI format.

By the way, what is the temporary directory for SNAP?
When I define this in snap.conf: -Djava.io.tmpdir=/Volumes/data1/snap_cache/tmp

then SNAP doesn't use this directory, but gpt does.

Thanks for the report. I created a Jira issue for this, which will be fixed ASAP: https://senbox.atlassian.net/browse/SNAP-238

Hi,

Meanwhile I found a way to get the same performance with gpt as with the GUI.
I just have to run
gpt job.xml -c 8192M -q 8

These are the settings I used in the snap.properties file; gpt seems not to look into this properties file.

Best wishes

Thanks for the trick, I will try it.

However, what do -c and -q mean?

Best

Hi,

Type gpt -h and you get this information:

-c   Sets the tile cache size in bytes. Value can be suffixed with 'K', 'M'
     and 'G'. Must be less than maximum available heap space. If equal to
     or less than zero, tile caching will be completely disabled. The
     default tile cache size is '4,096M'.
-q   Sets the maximum parallelism used for the computation, i.e. the
     maximum number of parallel (native) threads. The default parallelism
     is '8'.
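So, for example, a call that sets both explicitly (using the values from my earlier post; the cache size has to stay below the -Xmx heap set in gpt.vmoptions) looks like this:

gpt job.xml -c 8192M -q 8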


Many thanks for this trick.

Indeed, after some tests I tried to use -x ("Clears the internal tile cache after writing a complete row of tiles to the target product file. This option may be useful if you run into memory problems") and decreased the maximum memory in the gpt options to 3G, and it works very well. However, the counterpart seems to be more hard disk access (I am not really sure).
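As a rough sketch of the kind of call I mean (assuming the heap is reduced to 3G via -Xmx3G in gpt.vmoptions, which is only how I understand it, and the graph file name is just a placeholder):

gpt graph.xml -x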

Do you know if there is an equivalent parameter in SNAP to -x? I think it could be an important way to solve the memory problems I have shown in the forum.

I will do other tests on Windows on a standard laptop with 4 GB RAM to validate these settings.

Hope it helps people.


Hi
I use Windows 7 and Java v8.0.11.
When I open one more image in SNAP, it shows these errors: Java heap space, GC overhead limit exceeded.
Would you please help me?

How much RAM does your PC have? Probably it is not sufficient.
What kind of images are you trying to open?

My laptop has 4 GB RAM.
I'm trying to open S1A SLC images.

Then the 4 GB of RAM is the problem. This is not enough to handle the amount of data.
Probably your OS is also still 32-bit? Then it is really not sufficient. If it is already 64-bit, then you can try to tweak the memory settings a bit.

In the 'etc' folder of the installation directory of SNAP, you'll find a file named snap.conf. Open it in a text editor.
There is a line which starts with 'default_options='.
In this line, you'll find an option like -J-Xmx2G. Increase the value. You could use something like -J-Xmx3G.
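For example, only the -J-Xmx part of that line needs to change; the '…' below stands for the other options already in the line, which should stay as they are:

default_options="… -J-Xmx3G …"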

Hi all,
I am using SNAP gpt on Ubuntu 18.04, but it does not finish processing when I try a Sentinel-1 SLC graph. My computer has 32 GB RAM. What can I do? Please help me solve this problem! Has anyone run an SLC graph on Ubuntu successfully?

Thank you

Please review the many previous reports by searching on "GPT SLC". If you don't find a post that matches your problem, please start a new thread with a subject that mentions GPT and SLC. You should describe the processing you want to perform in more detail. Do you get an error, or is the processing just very slow? Have you adjusted the memory settings in gpt.vmoptions? If you run bashtop or (recommended) bpytop in a terminal, you can monitor CPU and RAM usage while gpt is running.
