I tried to run interferometry on a full swath of a TOPSAR pair. I used the standard graph provided with the installation and just added Enhanced-Spectral-Diversity after Back-Geocoding and before interferogram formation. The processing works and the result looks good, but it took several days to finish, even on our good server. The same pair was processed with Gamma or SARscape in about an hour. Does anybody know a solution for this? Is it a problem with SNAP when processing a full-swath interferogram? Is there any way to make the processing much faster? For burst processing of a single swath with SNAP I did not have this problem.
If you are working on Linux you could also create a shell script that calls gpt and writes intermediate products (memory is then released), such as here: S1_preprocessing.sh (1.0 KB) This one covers the preprocessing of a Sentinel-1 product to terrain-corrected Gamma0, but creating an interferogram should work the same way, and you can delete the intermediate products within the same script.
Not very elegant and just a work-around.
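To illustrate, here is a minimal sketch of that step-wise idea (not the attached S1_preprocessing.sh): one operator per gpt call, so the JVM exits and releases its memory after each step, and intermediates are deleted at the end. Operator and parameter names should be checked with `gpt -h` and `gpt -h <OperatorName>`; the file names and parameters here are placeholders.

```shell
#!/bin/sh
# One operator per gpt invocation: each call runs in its own JVM, which
# exits and frees memory before the next step, unlike a monolithic graph.
set -eu

GPT=${GPT:-gpt}              # path to SNAP's gpt launcher
IN=${IN:-S1_IW_SLC.zip}      # placeholder input product
DRY_RUN=${DRY_RUN:-1}        # default: only print the commands; DRY_RUN=0 executes

step() {                     # print or run one command, depending on DRY_RUN
  if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi
}

step "$GPT" TOPSAR-Split       -Ssource="$IN"        -Psubswath=IW1 -t 01_split.dim
step "$GPT" Apply-Orbit-File   -Ssource=01_split.dim -t 02_orbit.dim
step "$GPT" Calibration        -Ssource=02_orbit.dim -t 03_cal.dim
step "$GPT" TOPSAR-Deburst     -Ssource=03_cal.dim   -t 04_deb.dim
step "$GPT" Terrain-Correction -Ssource=04_deb.dim   -t 05_tc.dim

# Drop intermediates once the final product exists (BEAM-DIMAP writes a
# .dim header plus a .data directory per product).
for n in 01_split 02_orbit 03_cal 04_deb; do
  step rm -rf "$n.dim" "$n.data"
done
```

Run it with DRY_RUN=0 once the operator parameters match your data; by default it only prints the commands it would execute.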
Slow performance of “complex” graphs is quite an issue when running on cloud instances. With a 4 vCPU / 16 GB RAM VM, I can't even get past the TOPSAR-Split - Calibration - TOPSAR-Deburst - TOPSAR-Merge stage (as part of HAAlpha processing). It typically crashes out of memory after about 20% of the processing (10 minutes). If you htop the process, it seems to simply keep stuffing everything into memory. Also, it looks like the more steps you include in the graph, the worse the parallel-CPU utilisation becomes, with most of the processing done by only one or two cores. This looks a lot like a software design issue, but I could be mistaken.
I am switching to step-wise processing with intermediate storage for now. Larger VMs would be an option, but prices scale quickly for the more exotic configurations, and disk space is less costly.
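Before resizing the VM it may also be worth capping gpt's tile cache and thread count for a single run. As far as I can tell from `gpt -h`, the launcher accepts -c (tile cache size) and -q (parallelism), while the JVM heap itself is set in <snap-install>/bin/gpt.vmoptions (e.g. -Xmx12G), not on the command line. The graph name below is a placeholder:

```shell
#!/bin/sh
# Cap the tile cache (-c) and worker threads (-q) for one graph run.
# graph.xml is a placeholder; GPT defaults to `echo gpt` here so the
# command is printable on machines without SNAP installed.
GPT=${GPT:-"echo gpt"}
$GPT graph.xml -c 4096M -q 4
```

A smaller cache plus fewer threads trades speed for a bounded memory footprint, which can be enough to get a graph through on a 16 GB VM.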