I found some settings in the SNAP GUI preferences to speed up Sentinel-1 interferometric processing.
However, when I run the same graph.xml using gpt, it takes ages.
I used the same snap.properties, but I can't use the Java settings from snap.conf (-J-Xms256M -J-Xmx4G) …
I still have problems:
In the gpt.vmoptions file I included the same settings I chose in the GUI under Preferences --> Performance:
-Xverify:none
-XX:+AggressiveOpts
-Xmx16384m
-Xms512m
-Djava.io.tmpdir=/Volumes/data1/snap_cache/tmp
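As far as I know, gpt also has a --diag flag that prints the effective settings, which can be used to check that these options are really picked up:

gpt --diag
# should print, among other things, the maximum JVM heap size
# and the tile cache size currently in effect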
In the GUI the execution of a graph takes 30 min. The same graph processed with the command-line gpt takes more than 4 h. What can I do?
I am on Mac OS X 10.10.5 with 32 GB RAM.
By the way, what settings would you recommend for Xmx, Xms, tile size and cache size?
I see the same thing. I usually run the full SNAP on Windows and it performs fine; I run gpt on Linux on a similar computer, but it performs 10x slower. Both machines have 16 GB RAM and similar processors.
The options entered in /gpt.vmoptions/ are automatically considered.
To find good settings, you should try the tool SNAP offers.
In the menu select /Tools --> Options/ and then select /Performance/.
Use the /Compute/ button to find good settings for your computer.
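The values computed there are stored in snap.properties. As an illustration, the relevant lines could look like this (the numbers are just example values for a 16 GB machine, not a recommendation):

# etc/snap.properties (or the user copy in ~/.snap/etc)
snap.parallelism = 8
snap.jai.defaultTileSize = 512
# tile cache size in MB
snap.jai.tileCacheSize = 4096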
Thanks. I computed the settings by choosing different combinations.
Then I saved the changes. When I closed SNAP and reopened it, I got some strange messages. It took me some time to find that a slash after jdkhome was now missing in snap.conf; the uncommented line was the new one that caused the error messages. I changed this and it worked.
If I understand correctly:
The SNAP GUI uses the VM settings defined in snap.conf.
gpt uses the VM settings defined in gpt.vmoptions.
Both are the same, but I get large performance differences.
@lveci mentioned setting the Java VM heap size large enough.
But to my understanding this is done with the -Xmx option defined in gpt.vmoptions and snap.conf.
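To make sure I am comparing the right things, this is how I understand the two files (a sketch; the other flags in default_options vary by installation and are abbreviated with '…'):

# etc/snap.conf — used by the SNAP GUI; JVM options carry a -J prefix
default_options="--branding snap … -J-Xms512m -J-Xmx16G"

# bin/gpt.vmoptions — used by gpt; plain JVM options, one per line
-Xms512m
-Xmx16G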
Thanks for all your patience.
Regarding the wrongly changed snap.conf, I think @NicolasDucoin will have a look.
Regarding the big performance difference, we need to investigate further why this happens. Do you experience this with every operator or only with a few specific ones? Is it possible that it depends on the source product? Or maybe it depends on the format of the target product.
Do you have any observations to share?
-c   Sets the tile cache size in bytes. Value can be suffixed with ‘K’,
     ‘M’ and ‘G’. Must be less than maximum available heap space. If
     equal to or less than zero, tile caching will be completely
     disabled. The default tile cache size is ‘4,096M’.
-q   Sets the maximum parallelism used for the computation, i.e. the
     maximum number of parallel (native) threads. The default
     parallelism is ‘8’.
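For example, to run a graph with a 4 GB tile cache and 8 threads, the call could look like this (graph and product names are placeholders):

# -c sets the tile cache, -q the number of parallel threads,
# -t the target product file
gpt myGraph.xml -c 4096M -q 8 -t target.dim source.zip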
Indeed, after some tests, I have tried to use -x ("Clears the internal tile cache after writing a complete row of tiles to the target product file. This option may be useful if you run into memory problems") and decreased the maximum memory in the gpt options to 3G, and it works very well. However, the downside seems to be more hard disk access (I am not really sure).
Do you know if there is an equivalent parameter in SNAP to -x? I think it could be an important way to solve memory problems, as I have shown in the forum.
I will do another test on Windows on a standard laptop with 4 GB RAM to validate these settings.
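For reference, the combination described above could look like this (file names are placeholders):

# bin/gpt.vmoptions: lower the maximum heap to 3 GB
-Xmx3G

# run the graph with -x so the tile cache is cleared after
# each completed row of tiles
gpt myGraph.xml -x -t target.dim source.zip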
Then the 4 GB of RAM is the problem. This is not enough to handle the amount of data.
Probably your OS is also still 32-bit? Then it is really not sufficient. If it is already 64-bit, then you can try to tweak the memory settings a bit.
In the ‘etc’ folder of the installation directory of SNAP, you’ll find a file named snap.conf. Open it in a text editor.
In it, there is a line which starts with ‘default_options=’.
In this line, you’ll find an option like -J-Xmx2G. Increase the value. You could use something like -J-Xmx3G.
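For illustration, the change would look like this (the other options in the line vary by version and are abbreviated with '…'):

# before:
default_options="--branding snap … -J-Xmx2G"
# after:
default_options="--branding snap … -J-Xmx3G"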
Hi all,
I am using SNAP gpt on Ubuntu 18.04, but it does not finish processing when I try a Sentinel-1 SLC graph. My computer has 32 GB RAM. What can I do? Please help me solve this problem! Has anyone run an SLC graph on Ubuntu successfully?