S1 Interferometric Coherence - Processing Workflow & Interpretation

Hi everyone,

I’ve created an interferometric coherence map using the following workflow:
Apply-Orbit-File > Back-Geocoding > Coherence > Deburst > Subset > Terrain-Correction

To calculate the coherence for approximately square pixels, I used a range window of 20 pixels and an azimuth window of 5 pixels in the Coherence step.
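As a sanity check on the window choice, one can estimate the ground footprint of a 20 x 5 window. The pixel spacings and incidence angle below are nominal Sentinel-1 IW values assumed for illustration; the actual values should be read from the product metadata:

```python
import math

# Nominal Sentinel-1 IW values (assumptions; check your product's metadata).
slant_range_spacing = 2.3   # m, slant-range pixel spacing
azimuth_spacing = 14.0      # m, azimuth pixel spacing
incidence_deg = 39.0        # mid-swath incidence angle

# Project the slant-range spacing onto the ground.
ground_range_spacing = slant_range_spacing / math.sin(math.radians(incidence_deg))

rg_win, az_win = 20, 5
footprint = (rg_win * ground_range_spacing, az_win * azimuth_spacing)
print(f"window footprint: {footprint[0]:.0f} m (range) x {footprint[1]:.0f} m (azimuth)")
```

With these assumed values the window covers roughly 73 m x 70 m on the ground, i.e. approximately square, which matches the intent of the 20 x 5 choice.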

As I’m not yet familiar with interferometric coherence processing, I’m unsure about the correctness of my workflow and the obtained results. In particular, using such a large 20 x 5 window makes me sceptical about the following:

  1. Why are there so many discrete patches of high coherence with relatively sharp boundaries to adjacent regions? I understand that these patches tend to occur in urban areas, which makes sense to me, but shouldn’t the transitions to the surrounding areas be much smoother, given that coherence is averaged in sliding windows?

  2. Why are there a few dark pixels with distinct coherence values distributed over the scene?

Furthermore, I am wondering whether I should add a Multilook step to my processing chain before Terrain-Correction. Does it make a difference whether I use multilooking or the simple resampling already included in Terrain-Correction to obtain square pixels in the end? As I understand it, the Multilook operator is implemented as a space-domain multilook, which leads me to conjecture that its result should be similar to resampling. Is that correct?
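For what it’s worth, here is a toy comparison of the two ideas on synthetic speckle (this is only an illustration, not SNAP’s implementation): space-domain multilooking averages non-overlapping blocks of intensities, while nearest-neighbour resampling merely picks single pixels, so it keeps the full single-look variance:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy single-look intensity image with speckle-like exponential statistics.
img = rng.exponential(scale=1.0, size=(200, 200))

def multilook(a, ry, rx):
    """Box-car multilook: average non-overlapping ry x rx blocks."""
    ny, nx = a.shape[0] // ry * ry, a.shape[1] // rx * rx
    return a[:ny, :nx].reshape(ny // ry, ry, nx // rx, rx).mean(axis=(1, 3))

def nearest_resample(a, ry, rx):
    """Nearest-neighbour decimation to the same output grid size."""
    return a[ry // 2::ry, rx // 2::rx]

ml = multilook(img, 4, 4)
nn = nearest_resample(img, 4, 4)
# Averaging 4x4 = 16 looks reduces the speckle variance roughly 16-fold;
# decimation leaves the variance unchanged.
print(ml.var(), nn.var())
```

Both outputs have the same grid, but only the multilooked one has reduced speckle, which is the practical difference between the two for radiometric products like coherence.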

Thanks for any clarification!

Hello Felix,
I don’t know exactly what calculations the SNAP software uses for interferometric coherence, but coherence estimates can be strongly affected by small bright reflectors such as buildings. I expect SNAP uses a sliding window; because the bright reflection has a highly non-linear effect on the estimate, the coherence jumps as soon as the bright object enters the window. Those patches are therefore the size of your coherence estimation window.
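This effect is easy to reproduce with the standard sample coherence estimator, |⟨s₁ s₂*⟩| / √(⟨|s₁|²⟩⟨|s₂|²⟩), computed in a boxcar window (a minimal sketch on synthetic data, not SNAP’s actual code): two uncorrelated images have near-zero true coherence everywhere, yet a single bright phase-stable scatterer drives the estimate towards 1 in every window that contains it.

```python
import numpy as np

rng = np.random.default_rng(1)

def circ_gauss(shape):
    """Complex circular Gaussian samples (unit mean power)."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

# Two uncorrelated SLC-like images: true coherence is zero ...
n = 64
s1 = circ_gauss((n, n))
s2 = circ_gauss((n, n))
# ... plus one bright, phase-stable point scatterer in the middle.
s1[n // 2, n // 2] = 30.0
s2[n // 2, n // 2] = 30.0

def coherence(a, b, win):
    """Boxcar coherence estimate |sum(a b*)| / sqrt(sum|a|^2 sum|b|^2)."""
    out = np.zeros((a.shape[0] - win + 1, a.shape[1] - win + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            wa = a[i:i + win, j:j + win]
            wb = b[i:i + win, j:j + win]
            num = np.abs(np.sum(wa * np.conj(wb)))
            den = np.sqrt(np.sum(np.abs(wa) ** 2) * np.sum(np.abs(wb) ** 2))
            out[i, j] = num / den
    return out

g = coherence(s1, s2, win=9)
# Every window containing the bright pixel jumps towards 1, producing a
# sharp-edged patch the size of the estimation window.
print(g.max(), np.median(g))
```

The bright pixel dominates both the numerator and the normalisation, so the estimate changes abruptly as the window slides on or off the target, which is exactly why the patches have sharp boundaries rather than smooth transitions.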


You can test different window sizes; smaller ones tend to produce smoother transitions and suppress the observed square-like patches of high coherence. But this is always related to the spatial resolution of the sensor, so, as @EJFielding said, dominant targets within a pixel will always produce local patterns superimposed on the actual land cover.
The single black pixels might be invalid or zero; you can check them in the Pixel Info tab and edit the definition of valid and no-data pixels in the band properties.
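If you export the coherence band for inspection, a quick way to confirm the dark pixels are no-data rather than genuine low coherence is to mask the no-data value and compare statistics (a toy sketch with an assumed no-data value of 0; the real value is whatever the band properties define):

```python
import numpy as np

# Toy coherence band with a few no-data pixels (0 here; the actual
# no-data value is defined in the band properties in SNAP).
coh = np.array([[0.8, 0.7, 0.0],
                [0.6, 0.9, 0.75],
                [0.0, 0.65, 0.85]])

nodata = 0.0  # assumed no-data value
mask = coh == nodata
print("invalid pixels:", np.argwhere(mask).tolist())

# Exclude them before computing statistics so they don't bias the mean.
valid_mean = coh[~mask].mean()
print(f"mean coherence over valid pixels: {valid_mean:.3f}")
```

If the dark pixels all share one exact value, they are almost certainly no-data; genuine low-coherence areas show a spread of values instead.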