The problem:
When I process it in the SNAP GUI, it completes in less than 6 hours.
When I process it with gpt from the command line (launched manually as a bash process), it is still not finished after more than 8 days and has written nothing to the output file.
Hardware/software used:
Multiple trials on different software/machines resulted in the same problem.
Windows Server 2008, 64-bit, 128 GB RAM, no HDD limit.
Linux 64-bit, 64 GB RAM, no HDD limit.
Linux 64-bit, 512 GB RAM, no HDD limit.
First, make sure you have all the latest updates. In update 5.0.2 or 5.0.3 we introduced a new gpt option, --diag, which prints some diagnostic information.
It will help you check whether the memory settings you have applied are actually taking effect in gpt.
You may also try breaking the graph down into multiple steps, for example coregistration in one graph and terrain correction in a separate graph. Depending on the processing involved, this sometimes performs better.
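As a rough shell sketch of that two-step approach (all graph and file names here are hypothetical placeholders, not files from this thread): each step runs as its own gpt process with a fresh JVM heap and tile cache, and the intermediate product is written to disk instead of the whole chain being held in memory.

```shell
# Sketch only: graph and product names are hypothetical placeholders.
# Step 1: coregistration graph -> intermediate product on disk.
STEP1='gpt coregister_graph.xml -t coreg.dim S1_master.zip S1_slave.zip'
# Step 2: terrain-correction graph reading the intermediate product.
STEP2='gpt terrain_correction_graph.xml -t final_TC.dim coreg.dim'
printf '%s\n%s\n' "$STEP1" "$STEP2"   # the two commands of the sketch
```

The cost is the disk space for the intermediate product; the gain is that neither gpt run has to keep the full processing chain's tiles in memory at once.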
It is now updated (today's version check is OK). I tried the --diag option; it just stops before processing starts and writes the following to the log:
INFO: org.esa.snap.core.gpf.operators.tooladapter.ToolAdapterIO: Initializing external tool adapters
SEVERE: org.esa.s2tbx.dataio.gdal.GDALInstaller: The GDAL library is available only on Windows operation system.
SNAP Release version 5.0
SNAP home: PATH/…
SNAP debug: null
SNAP log level: null
Java home: PATH/snap/jre
Java version: 1.8.0_102
Processors: 48
Max memory: 1 GB
Cache size: 1024 MB
Tile parallelism: 48
Tile size: 512 x 512 pixels
To configure your gpt memory usage:
Edit snap/bin/gpt.vmoptions
To configure your gpt cache size and parallelism:
Edit .snap/etc/snap.properties or gpt -c {cachesize-in-GB}G -q {parallelism}
What's strange about your configuration is that it shows only 1 GB of memory assigned to gpt.
It may actually be almost 2 GB: because of a scaling bug, the value is always rounded down to the next lower integer (fixed in the next update).
So I think something is wrong with your configuration. By default the value should be higher on your machine.
Check the gpt.vmoptions
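As a minimal sketch of that check (the local file path and the 32 GB value are illustrative assumptions; the real file lives under your SNAP install, e.g. snap/bin/gpt.vmoptions): the -Xmx line in gpt.vmoptions is what drives the "Max memory" figure in the --diag output.

```shell
# Sketch: uses a local example file; edit the real one under your install.
VMOPTIONS=./gpt.vmoptions              # stand-in for snap/bin/gpt.vmoptions
printf -- '-Xmx32G\n' > "$VMOPTIONS"   # example: allow gpt a 32 GB JVM heap
grep '^-Xmx' "$VMOPTIONS"              # confirm the line gpt will pick up
# Cache size and parallelism can also be given per run, as the diag
# output says, e.g.:  gpt my_graph.xml -c 24G -q 16 -t out.dim
```

After changing the real file, rerun gpt with --diag and check that "Max memory" now reflects the new heap size.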
Hi, I am trying to generate coherence from S1 SLC data via the SNAP GUI, but it has run for several days and is stuck at 21%.
I am using Windows 10 with an i7 CPU, 32 GB RAM and a 512 GB SSD. I think the machine should be capable of S1 data processing, but I still have problems. Does anybody know what exactly the problem is and how I can solve it?
When a process works using the GUI but fails using GPT you need to look at what is different about the GPT processing. Have you adjusted the memory settings in gpt.vmoptions?
You can use Windows Task Manager to compare the memory usage of the GUI with that of your GPT process. You may need to close the SNAP GUI and other memory-intensive programs to free up memory for GPT. An easy way to maximize the memory available to GPT is to reboot the system before starting your GPT processing.
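If you prefer the command line over Task Manager, a quick sketch for Linux (with the rough Windows equivalent noted in a comment) is to list Java processes and their resident memory; the numbers you see depend entirely on what is running on your machine.

```shell
# Linux/macOS: show PID, resident memory (RSS, in KB) and command name
# for any Java process; this is the process backing the SNAP GUI or gpt.
# Rough Windows equivalent:  tasklist /FI "IMAGENAME eq java.exe"
ps -eo pid,rss,comm | grep -i java || echo "no java process running"
```

Comparing the RSS of the GUI-launched process with that of your gpt run shows quickly whether gpt is actually getting the memory you configured.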