gpt Speckle-Filter using too much memory

Hello,

I’m having trouble with the gpt Speckle-Filter operator. I downloaded a Sentinel-1 GRD zip and pre-processed it with gpt’s radiometric calibration operator. That worked fine.
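
For reference, the calibration step looked roughly like this (the product name is a placeholder, and I’ve left out the operator parameters):

gpt Calibration -t calibrated.tif -f GeoTIFF S1_GRD_product.zip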

Now I’m trying to run Speckle-Filter on the calibration output, which is a 1.6 GB .tif. I have about 10 GB of free RAM and allocated 9 GB to the heap. However, I get the following errors:

Waiting thread received a null tile.
Java heap space
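
The failing command looks roughly like this (file names are placeholders, and the filter choice is just an example):

gpt Speckle-Filter -Pfilter='Lee' -t filtered.tif -f GeoTIFF calibrated.tif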

I tried tweaking the parameters in gpt.vmoptions, snappy.ini, etc. I also checked that my JVM is 64-bit.
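
Concretely, the only memory-related change I made in gpt.vmoptions is the maximum heap size:

-Xmx9G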

Anyway, shouldn’t 9 GB of heap be enough to process a 1.6 GB .tif? What is the expected ratio between product size and memory consumption for the Speckle-Filter operator?

Thanks in advance!

Try running

gpt --diag

This prints the current configuration, so you can see whether your memory changes are actually being picked up. Changing snappy.ini is not needed when you run gpt; it only matters when you run Python scripts.
In general, I also think that 9 GB should be sufficient. But the 1.6 GB GeoTIFF could be compressed on disk; uncompressed, and held in memory during processing, the data can be considerably larger.
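
If the heap setting turns out to be correct and you still run out of memory, you could also try lowering gpt’s tile cache size (-c) and parallelism (-q) on the command line; the values below are just examples. Keep -c well below -Xmx, because the tile cache lives inside the Java heap:

gpt Speckle-Filter -c 4096M -q 2 -t filtered.tif calibrated.tif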