Gpt unable to process large graphs

I’ve been testing gpt with large graphs that coregister a pair of Sentinel-1 SLCs. Regardless of the input images, gpt seems to enter an infinite loop. This is the command I use:

$ gpt /home/esteban/s1-graphs/S1-Graph.xml -c 8G -q 12

My current workaround is to split the graph into two smaller ones, which forces me to save intermediate results. I’ve attached the graph’s XML and a screenshot.

S1-Graph.xml (10.5 KB)

I’m using Ubuntu with 128 GB of RAM and 12 processors.

Esteban Aguilera
SkyGeo

It depends on what’s in the graph, but usually the more operators a graph contains, the more memory it will need. If gpt runs out of tile cache, some tiles may be triggered for recalculation.

Hi Luis,

Thanks for your prompt reply. Do you mean that if the cache is too small, then gpt will run indefinitely? If so, what cache size would you recommend? I was hoping to use gpt to get a coregistered pair in one go.

Note that I attached the graph’s XML to my original post, so that’s exactly what’s in the problematic graph.

Esteban
SkyGeo

You have a lot of memory available, so make sure both your JVM heap size and your tile cache size are large.

The problem may be caused by the range-shift and azimuth-shift operators, which are currently not very efficient when placed in the same graph because of how they request data.
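For example, something along these lines, assuming a default install under /opt/snap (the 64G/32G values are only illustrative; the tile cache set with -c must fit inside the JVM heap):

# one way to raise the maximum JVM heap used by gpt
$ echo "-Xmx64G" >> /opt/snap/bin/gpt.vmoptions

# then request a tile cache that fits inside that heap
$ gpt /home/esteban/s1-graphs/S1-Graph.xml -c 32G -q 12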

Thanks Luis. I’ll give it a try and report back.

Esteban
SkyGeo

Hi again,

I tried adding -x (which, according to gpt’s help text, clears the internal tile cache after each row of target tiles is written) without increasing the heap size, as follows:

$ gpt /home/esteban/s1-graphs/S1-Graph.xml -x -c 8G -q 12

It works now, although it takes about 3 hours to finish.

Esteban
SkyGeo

Using azimuth-shift and range-shift in the same graph makes it very inefficient. It’s better to split the graph into two and run them sequentially, as sketched below. I’d be interested in hearing how much this improves performance.
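As a minimal sketch, assuming the graph has been split into two hypothetical files, the first ending in a Write step (or a -t target) and the second reading that product back in (BEAM-DIMAP is gpt’s default output format):

# run the first half and save the intermediate product
$ gpt /home/esteban/s1-graphs/S1-Graph-part1.xml -c 8G -q 12 -t /tmp/coreg_intermediate.dim

# run the second half; its Read node points at /tmp/coreg_intermediate.dim
$ gpt /home/esteban/s1-graphs/S1-Graph-part2.xml -c 8G -q 12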

Update: these are the best settings I’ve found to date

  • Heap size: 100 GB (I added -Xmx100G to /opt/snap/bin/gpt.vmoptions; see the one-line snippet below)
  • Command: gpt /home/esteban/s1-graphs/S1-Graph.xml -c 8G -q 12
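
For reference, the heap change is a single extra line at the end of /opt/snap/bin/gpt.vmoptions:

-Xmx100G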

Esteban
SkyGeo