GPT out of memory

I am processing Sentinel-1 (S1A, S1B) IW GRDH data downloaded from the ESA hub.

The processing is initiated from an IDL program, where the following snap_call is made:

  "C:\Applications\SNAP5\bin\gpt.exe"
  "C:\Radar\AUX_DATA\Water_bodies_st1_graph_calibration_snap.xml" 
  -x 
  -c 6144M
  -q 16
  -Pinput="C:\Radar\Sentinel-1\S1B_IW_GRDH_1SDV_20170206T050119_20170206T050144_004175_0073A5_E424.SAFE\manifest.safe" 
  -Poutput="C:\Processed\Sentinel-1\S1B_IW_GRDH_1SDV_20170206T050119.dim"
  -Pbands="VV,VH"
  -Psigma="Sigma0_VV,Sigma0_VH"
  -Psigmaout="Sigma0_VV,Sigma0_VH,layover_shadow_mask"

… and then executed with the spawn command:

spawn, snap_call, snap_result, snap_error, /noshell, /hide

This always produces one or more of the following errors:

  • cannot construct DataBuffer
  • Java heap space
  • GC overhead limit exceeded

The files SNAP processes are around 800 MB each, which I believe should not exhaust the available memory.

Additionally, I have:

  1. Increased the amount of memory allocated in the snap_call (the -c value; see the note after this list).

  2. Changed gpt.vmoptions in the SNAP installation to include:

    -Xverify:none
    -XX:+AggressiveOpts
    -Xmx8192m
    -Xms1024m

  3. Tested multiple software & hardware combinations:

    ESA SNAP 5.0 on a PC with 16 GB RAM
    ESA SNAP 5.0 on a PC with 32 GB RAM
    ESA SNAP 4.0 on a PC with 32 GB RAM
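
For what it's worth, my understanding from the gpt help is that -c only sizes gpt's internal tile cache, which lives inside the Java heap set by -Xmx in gpt.vmoptions, so the cache has to stay well below the heap limit, e.g.:

    -Xmx8192m    (heap limit, in gpt.vmoptions)
    -c 4096M     (tile cache, on the gpt command line)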

None of these changes resolved the memory issue, and I am now out of ideas on how to proceed.

The IDL program itself is not the issue; things get stuck only in the SNAP call. Sometimes it does actually output the images, but SNAP evidently manages to process only roughly the top 10% of the image (something is visible in the top part, while the rest is completely black).

Any ideas about what could be done? Thanks!

Depending on the graph you use, a lot of memory can be needed; S1 processing in particular is known to be memory-hungry.
As you have enough memory, try specifying:

-Xmx12g
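
That line belongs in gpt.vmoptions next to gpt.exe (with the install layout from your call, that would be C:\Applications\SNAP5\bin\gpt.vmoptions). Note that JVM options are case-sensitive, so the capital X matters.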

Thanks for the reply!
I tried increasing the memory several more times, and it finally worked when using 22 GB.

I am not sure what could have changed to make memory use so intensive all of a sudden.

However, in previous months I could easily execute this code on a PC with 16 GB RAM, which is no longer possible.
Is there a workaround that would still let it run on a 16 GB RAM machine? Perhaps some combination of tile size and number of threads, or some other mix of settings, along the lines of the sketch below?
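
For example, as an untested sketch: fewer parallel threads and a smaller tile cache on the gpt call,

    "C:\Applications\SNAP5\bin\gpt.exe" <graph.xml> -x -c 2048M -q 4 …

and perhaps a smaller default tile size in etc\snap.properties (assuming that property is the right knob):

    snap.jai.defaultTileSize=256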

As I have no real experience with processing S1 data, maybe someone else has?

What is in the graph?
With the new 5.0.2 update there is a new --diag option that shows some diagnostic information, so you can check whether your memory settings are actually taking effect.
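
For example, run from a command prompt (it reports, among other things, the memory settings gpt is actually running with):

    "C:\Applications\SNAP5\bin\gpt.exe" --diag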