Multilooking causes 'thick stripe' artifact in S1 interferograms

Hi, I am processing S-1 images for differential InSAR, but I keep having a problem with a missing band of data in the results:

Unwrapped Interferogram

I think I have traced the issue to the Multilooking step, but I will demonstrate the full process in case there are other issues. I am calling SNAP operators as subprocesses from Python via gpt commands. Note that the coherence is low, as the images are taken around a year apart over a heavily vegetated area (I am aiming for an implementation of the ISBAS technique).

TOPSAR-Split > Orbit Files > TOPS Back Geocoding
Back-Geocoded Stack of 2 images (Red, Green for intensities of two images)

(zoom in)

TOPSAR-Deburst
Deburst stack

Interferogram

TopoPhaseRemoval

Multilook
gpt Multilook inputfile -PnRgLooks=20 -PnAzLooks=6 -t outputfile
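Since I am calling gpt from Python, the Multilook step looks roughly like this (a minimal sketch, assuming subprocess; "inputfile"/"outputfile" are placeholders):

```python
import subprocess

def multilook_cmd(src, dst, rg_looks=20, az_looks=6):
    """Assemble the gpt Multilook call above (parameter names as in the post)."""
    return ["gpt", "Multilook", src,
            f"-PnRgLooks={rg_looks}", f"-PnAzLooks={az_looks}",
            "-t", dst]

cmd = multilook_cmd("inputfile", "outputfile")
# subprocess.run(cmd, check=True)  # uncomment when gpt is on the PATH
```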

GoldsteinPhaseFiltering > SnaphuExport > SNAPHU unwrapping > SnaphuImport
This brings us back to the image at the top!

I cannot replicate this problem when multilooking through the GUI - it seems to work fine there. But I need this to run reliably from a script.

Thank you so much for any advice - and please let me know if I can provide any other useful information.
Harry

Looks like a writing problem to me. Have you tried running it again to check whether the error persists?

Depending on the incidence angle, 20/6 might not be the ideal ratio between range and azimuth looks. But since you said you cannot reproduce the error in the GUI, you have probably already checked there whether the same number of looks is suggested for this scene.
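As a rough sanity check on the looks ratio, one can estimate the range looks needed for approximately square ground pixels. This sketch assumes the nominal IW SLC spacings of about 2.3 m in slant range and 14 m in azimuth (assumed values - read the actual spacings from your product's metadata):

```python
import math

# Assumed nominal Sentinel-1 IW SLC pixel spacings in metres
SLANT_RANGE_SPACING = 2.3
AZIMUTH_SPACING = 14.0

def range_looks_for_square_pixels(az_looks, incidence_deg):
    """Range looks giving roughly square ground pixels for a given azimuth-looks count."""
    # Project slant-range spacing onto the ground using the incidence angle
    ground_range = SLANT_RANGE_SPACING / math.sin(math.radians(incidence_deg))
    return round(az_looks * AZIMUTH_SPACING / ground_range)

print(range_looks_for_square_pixels(6, 39))
```

With 6 azimuth looks, the suggested range looks vary noticeably across the swath as the incidence angle changes, so 20 may be fine mid-swath but not at the near or far edge.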

I think you could be right - the error is not persistent. It seems to occur most commonly when processing a large number of images.

I recommend closing SNAP from time to time to completely clear the RAM before applying large graphs.

Ok, is there a way to clear RAM in between gpt operations from the command line?

My code that does this section of the processing is of the form:

import subprocess

for image1, image2 in pairs_of_S1_imgs:
    subprocess.run(["gpt", "Back-Geocoding", image1, image2,
                    "-t", "intermediateProduct1"], check=True)
    subprocess.run(["gpt", "TOPSAR-Deburst", "intermediateProduct1",
                    "-t", "intermediateProduct2"], check=True)
    ...
    subprocess.run(["gpt", "GoldsteinPhaseFiltering", "intermediateProduct5",
                    "-t", "outputProduct"], check=True)
    # (delete intermediate products)

I chose not to combine these operations in a graph because it seemed to use too much memory and take a lot longer. Also, I am not creating a single stack of images because I need interferograms from all pairs of images, not just pairs that include a single master image. Feedback on these decisions is welcome, too!
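For what it's worth, the all-pairs list I loop over is generated along these lines (a sketch; the filenames are hypothetical placeholders):

```python
from itertools import combinations

# Hypothetical list of co-registered S-1 scenes; any iterable of paths works
s1_images = ["S1_20190101.dim", "S1_20190113.dim", "S1_20190125.dim"]

# Every unordered pair, rather than only pairs sharing a single master image
pairs_of_S1_imgs = list(combinations(s1_images, 2))
print(len(pairs_of_S1_imgs))  # n*(n-1)/2 pairs
```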

Oh, I see. I think this is more memory-efficient than having one very long graph.

Could someone explain to me what multilooking does to the SAR image? (I have only noticed that it visually stretches the image so the represented area takes its proper land shape.) Also, do Sentinel-1 images require raw data focusing?

Because of the side-looking geometry of the SAR system, pixel resolution differs between the azimuth and range directions. To get square pixels, multilooking is applied: it averages one or more pixels in the range direction with a multiple of pixels in the azimuth direction, based on their size ratio. Please also see here: Multilooking necessary when geocoding?
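Conceptually, the averaging can be sketched as non-overlapping block means over the image (a simplified illustration with NumPy, ignoring edge handling and the complex-valued case):

```python
import numpy as np

def multilook(img, rg_looks, az_looks):
    """Average non-overlapping az_looks x rg_looks blocks (simplified multilooking)."""
    rows, cols = img.shape
    rows -= rows % az_looks   # trim edges that do not fill a whole window
    cols -= cols % rg_looks
    blocks = img[:rows, :cols].reshape(rows // az_looks, az_looks,
                                       cols // rg_looks, rg_looks)
    return blocks.mean(axis=(1, 3))

tile = np.arange(24, dtype=float).reshape(4, 6)
print(multilook(tile, rg_looks=2, az_looks=2).shape)  # (2, 3)
```

Besides squaring the pixels, the averaging also reduces speckle at the cost of spatial resolution.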

This depends on the product level. Level 0 products are not focused, so SNAP cannot read them. SLC or GRD products (both Level 1) are already focused and can be processed in SNAP. More on this Sentinel-1 Data Products.


Thanks for your explanation