Slow performance of snap for TOPS ALL swath ifg

Hi,
I tried to do interferometry for a full swath of a TOPSAR pair. I used the standard graph provided with installation and just added enhanced-spectral-diversity after back-geocoing and before interferogram formation into the graph. The processing is fine, and the result looks good but the processing took a lot of time and was finished after a few days even in our good server. The same pair was processed by Gamma or SARscape in about an hour. I wonder if anybody knows a solution for this problem? Is it a problem of SNAP program for processing full-swath interferogram? Is there any way to make the processing much faster? For a burst processing of a single swath using SNAP I did not have such a problem
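For reference, adding the ESD step to the standard graph amounts to wiring one extra node between Back-Geocoding and Interferogram. A sketch of the relevant part of such a gpt graph (node ids are illustrative and parameters are left at defaults, not the exact graph shipped with SNAP):

```xml
<!-- Sketch: Enhanced-Spectral-Diversity wired between Back-Geocoding
     and Interferogram; ids and (empty) parameters are illustrative. -->
<node id="Enhanced-Spectral-Diversity">
  <operator>Enhanced-Spectral-Diversity</operator>
  <sources>
    <sourceProduct refid="Back-Geocoding"/>
  </sources>
  <parameters/>
</node>
<node id="Interferogram">
  <operator>Interferogram</operator>
  <sources>
    <sourceProduct refid="Enhanced-Spectral-Diversity"/>
  </sources>
  <parameters/>
</node>
```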

With kind regards

Mahdi Motagh

I faced a similar issue a long time ago. Look at @marpet’s answer below:

Dear Mahdi,

You can also check whether the -Xmx variable is set so that SNAP can use an ideal amount of RAM:

Some users reported (for large processing clusters) that using 60% of the available RAM (instead of 75%) actually resulted in faster processing (at least it was reported here).
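For example, on a 16 GB machine that guideline would mean capping the JVM heap at roughly 10 GB. The cap is set in gpt.vmoptions; the path below assumes a typical install layout:

```
# <snap-install-dir>/bin/gpt.vmoptions  -- one JVM option per line
# roughly 60% of 16 GB:
-Xmx10G
```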

Generally, I believe this is related to the memory management of larger graphs, which makes long processing chains slow; at least this is what is reported here: Problems obtaining an interferogram from two product sets, and here: Coregistrating more than two Sentinel-1 SLC IW products.
Of course, this makes them quite ineffective, because they don’t save a lot of time.

If you are working on Linux you could also create a shell script which uses gpt and writes intermediate products (memory is then released), such as here: S1_preprocessing.sh (1.0 KB) This one does the preprocessing of a Sentinel-1 product to terrain-corrected Gamma0, but creating an interferogram should work as well, and you can delete intermediate products within the same script.
Not very elegant, just a work-around.
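Along those lines, a minimal sketch of such a wrapper (not the attached S1_preprocessing.sh; graph and product names are hypothetical placeholders):

```shell
#!/bin/sh
# Sketch: run each SNAP step as its own gpt call so memory is released
# in between, and delete the intermediate product once it is consumed.
# Graph and file names below are hypothetical placeholders.
set -e
OUT=./insar_work
mkdir -p "$OUT"

# Step 1: split + orbits + back-geocoding -> intermediate coreg product
STEP1="gpt coreg_graph.xml -t $OUT/coreg.dim"
# Step 2: ESD + interferogram formation, reading step 1's output
STEP2="gpt esd_ifg_graph.xml $OUT/coreg.dim -t $OUT/ifg.dim"

# Only execute if SNAP's gpt is actually on the PATH
if command -v gpt >/dev/null 2>&1; then
  $STEP1
  $STEP2
  rm -rf "$OUT/coreg.dim" "$OUT/coreg.data"  # free the intermediate
fi
```

Writing to BEAM-DIMAP (.dim plus a .data directory) between steps is what lets SNAP release the memory that a single long graph would keep holding.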

Thank you for the hint. I will try it

Dear Andreas,
Thank you for the detailed answer. I will try the options that you suggested
All the best
Mahdi

Dear Mahdi,
you are very welcome. You patiently introduced me to the world of PSI/SBAS with StaMPS a couple of years ago, so I am happy to offer some help here in return.

Slow performance of “complex” graphs is quite an issue when running on cloud instances. With a 4 vCPU, 16 GB RAM VM, I can’t even get beyond the TOPSAR-Split - Calib - TOPSAR-Deburst - TOPSAR-Merge stage (as part of HAAlpha processing). It typically crashes out of memory after some 20% of the processing (10 mins). If you htop the process, it seems that it simply continues stuffing everything into memory. Also, it looks like the more steps you include in the graph, the worse the parallel-CPU use becomes, with most of the processing done by 1 or 2 processors only. This looks a lot like a software design issue, but I could be mistaken :slight_smile:

I am switching to step-wise processing with intermediate storage for now. Getting larger VMs would be an option, but prices scale quickly with more exotic configs. Disk space is less costly.

1 Like

Guido, could you share your graph so we can use it as a test-case?

The full process is in TOPSAR_SLC_HAAlpha.xml, but that never completes.

I have split into several steps now. The first one (step1.sh) does split-calib-deburst and writes out each subswath. That completes in about 20 mins.

The second one (step2.sh) does TOPSAR-Merge and Multilook (only) and takes more than 90 mins, for most of which the CPUs appear to be idle. The same happens if I only do TOPSAR-Merge (which seems to be the real bottleneck). I would expect TOPSAR-Merge to be a simple read - combine - write operation (maybe with some interpolation).

Guido

step1.sh (825 Bytes) step2.sh (247 Bytes) TOPSAR_Merge_ML.xml (1.7 KB) TOPSAR_SLC_Split_Calib_Deburst.xml (1.9 KB) TOPSAR_SLC_HAAlpha.xml (9.3 KB)
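step2.sh is attached rather than shown inline; a gpt call of that shape (a sketch with assumed file names, not the attached script) would be roughly:

```shell
#!/bin/sh
# Sketch of a merge + multilook step: feed the three debursted subswath
# products written by step 1 into a TOPSAR-Merge + Multilook graph.
# Graph and product names are assumed placeholders, not the attachments.
set -e
CMD="gpt TOPSAR_Merge_ML.xml iw1_deb.dim iw2_deb.dim iw3_deb.dim -t merged_ml.dim"
if command -v gpt >/dev/null 2>&1; then
  $CMD
fi
```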

1 Like

Thanks. BTW do your VMs have SSDs? (highly recommended)

That is very kind of you!

@glemoine

We’ve looked at your graph and it appears to be unnecessarily complex, which kills the performance. Calibration is able to handle bursts and subswaths, so it’s not necessary to split, deburst, calibrate and then merge. @lveci has more details.

Thanks Markus. So you suggest putting calibration first?

I did not study your graph personally (Luis did) - can’t you just drop the splitting and merging operations?

Calibrate should handle bursts and swaths. Therefore you would only need:
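(The original reply showed the reduced graph as a screenshot.) A sketch of what such a gpt graph might look like, skipping TOPSAR-Split and TOPSAR-Merge entirely; node names are real SNAP operators, but file names and the (default) parameters are assumptions:

```xml
<!-- Sketch of the reduced chain: Calibration runs on the full SLC,
     so no TOPSAR-Split / TOPSAR-Merge is needed. -->
<graph id="ReducedChain">
  <version>1.0</version>
  <node id="Read">
    <operator>Read</operator>
    <sources/>
    <parameters><file>S1_IW_SLC.zip</file></parameters>
  </node>
  <node id="Calibration">
    <operator>Calibration</operator>
    <sources><sourceProduct refid="Read"/></sources>
    <parameters/>
  </node>
  <node id="TOPSAR-Deburst">
    <operator>TOPSAR-Deburst</operator>
    <sources><sourceProduct refid="Calibration"/></sources>
    <parameters/>
  </node>
  <node id="Write">
    <operator>Write</operator>
    <sources><sourceProduct refid="TOPSAR-Deburst"/></sources>
    <parameters><file>calibrated_deburst.dim</file></parameters>
  </node>
</graph>
```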

However, terrain correction also has a problem causing it to request a lot of source tiles. This has been fixed and should be in an update later this month. Currently this graph takes around 10 min to process.
The next bottleneck to tackle, both in speed and memory consumption, is in the read.

2 Likes

As I expected, this does not work on a 16 GB VM, even if I drop split and merge.

Native memory allocation (mmap) failed to map 2959605760 bytes for committing reserved memory.

It would be perfect to not have to split and merge, but that’s the only way to get it to run.

I will wait for the new update. Good to know, though, that it is indeed poor memory management, because that seems solvable.

Guido

Are you using FileCache in the readers to conserve memory (in the S1TBX options)?

We are going to tackle excessive memory usage. Meanwhile you can try running terrain correction in a separate graph or use bigger VMs.

A bigger VM is not an option. There is no logical reason why this graph does not run in 16 GB RAM, other than bad memory management. I’ll stick to the split - merge 3-step approach for now and wait for further updates on memory management.