I’ve created an interferometric coherence map using the following workflow:
Apply-Orbit-File > Back-Geocoding > Coherence > Deburst > Subset > Terrain-Correction
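For reference, this is roughly the chain as a headless `gpt` graph (typed from memory, so treat it as a sketch: the operator names are SNAP’s, but parameter names such as `cohWinRg`/`cohWinAz` may not match your SNAP version exactly and should be checked against a graph saved from the Graph Builder):

```xml
<!-- Sketch only: parameter names (e.g. cohWinRg/cohWinAz) are assumptions
     and should be verified against a graph saved from SNAP's Graph Builder. -->
<graph id="CoherenceChain">
  <version>1.0</version>
  <node id="Read-Ref">
    <operator>Read</operator>
    <sources/>
    <parameters><file>${reference}</file></parameters>
  </node>
  <node id="Read-Sec">
    <operator>Read</operator>
    <sources/>
    <parameters><file>${secondary}</file></parameters>
  </node>
  <node id="Orbit-Ref">
    <operator>Apply-Orbit-File</operator>
    <sources><sourceProduct refid="Read-Ref"/></sources>
  </node>
  <node id="Orbit-Sec">
    <operator>Apply-Orbit-File</operator>
    <sources><sourceProduct refid="Read-Sec"/></sources>
  </node>
  <node id="Back-Geocoding">
    <operator>Back-Geocoding</operator>
    <sources>
      <sourceProduct refid="Orbit-Ref"/>
      <sourceProduct.1 refid="Orbit-Sec"/>
    </sources>
  </node>
  <node id="Coherence">
    <operator>Coherence</operator>
    <sources><sourceProduct refid="Back-Geocoding"/></sources>
    <parameters>
      <cohWinRg>20</cohWinRg>
      <cohWinAz>5</cohWinAz>
    </parameters>
  </node>
  <node id="Deburst">
    <operator>TOPSAR-Deburst</operator>
    <sources><sourceProduct refid="Coherence"/></sources>
  </node>
  <node id="Subset">
    <operator>Subset</operator>
    <sources><sourceProduct refid="Deburst"/></sources>
    <!-- subset region parameters omitted -->
  </node>
  <node id="Terrain-Correction">
    <operator>Terrain-Correction</operator>
    <sources><sourceProduct refid="Subset"/></sources>
  </node>
  <node id="Write">
    <operator>Write</operator>
    <sources><sourceProduct refid="Terrain-Correction"/></sources>
    <parameters><file>${output}</file><formatName>BEAM-DIMAP</formatName></parameters>
  </node>
</graph>
```

Invoked with placeholder file names, e.g. `gpt coherence_graph.xml -Preference=ref.zip -Psecondary=sec.zip -Poutput=coh_TC.dim`.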
To obtain approximately square pixels, I used a window of 20 pixels in range and 5 pixels in azimuth in the coherence calculation step.
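For clarity, what I understand this step to compute is the textbook boxcar (moving-window) coherence estimate; here is a toy NumPy sketch on synthetic data (not SNAP’s actual implementation, and the function and variable names are my own):

```python
# Toy sketch of a boxcar coherence estimator on synthetic data:
# |<s1 s2*>| / sqrt(<|s1|^2> <|s2|^2>) over a sliding window.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def boxavg(x, win):
    """Mean over all win[0] x win[1] sliding windows ('valid' output size)."""
    return sliding_window_view(x, win).mean(axis=(-2, -1))

def coherence(s1, s2, win=(5, 20)):  # (azimuth, range) window
    """Sample coherence of two co-registered complex images."""
    num = boxavg(s1 * np.conj(s2), win)
    den = boxavg(np.abs(s1) ** 2, win) * boxavg(np.abs(s2) ** 2, win)
    return np.abs(num) / np.sqrt(den)

rng = np.random.default_rng(0)
shape = (50, 100)
s1 = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
print(np.allclose(coherence(s1, s1), 1.0))  # identical signals -> coherence 1
```

Note that even for two completely independent noise fields this estimator returns small but nonzero values (the well-known finite-window coherence bias), which I assume matters when reading low-coherence areas.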
As I’m not yet familiar with interferometric coherence processing, I’m wondering whether my workflow and the obtained results are correct. The following points make me sceptical, especially given such a large 20 x 5 window:
Why are there so many discrete patches of high coherence with relatively sharp boundaries to the adjacent regions? I understand that these patches tend to occur in urban areas, which makes sense to me, but shouldn’t the transitions to the surrounding areas be much smoother, given that coherence is averaged over sliding windows?
Why are a few dark pixels with distinctly different coherence values scattered across the scene?
Furthermore, I am wondering whether I should include an additional Multilook step in my processing chain prior to Terrain-Correction. Does it make a difference whether I use multilooking or the simple resampling already included in Terrain-Correction to obtain square pixels in the end? I understand that the Multilook operator is implemented as a spatial-domain multilook, which leads me to the conjecture that its result should be similar to resampling?!
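To make that question concrete, here is a small synthetic comparison I put together (my own toy code, not the SNAP operators): block-averaging 1-look intensity versus nearest-neighbour decimation to the same output grid.

```python
# Toy comparison (not the SNAP operators): spatial multilook vs.
# plain nearest-neighbour resampling to the same output grid.
import numpy as np

def multilook(img, looks=(4, 1)):
    """Average non-overlapping blocks of looks[0] x looks[1] pixels."""
    la, lr = looks
    h, w = img.shape
    img = img[: h - h % la, : w - w % lr]
    return img.reshape(h // la, la, w // lr, lr).mean(axis=(1, 3))

def nearest_resample(img, factors=(4, 1)):
    """Pick every factors[0]-th / factors[1]-th pixel (no averaging)."""
    fa, fr = factors
    return img[fa // 2 :: fa, fr // 2 :: fr]

rng = np.random.default_rng(1)
intensity = rng.exponential(1.0, size=(400, 100))  # synthetic 1-look speckle
ml = multilook(intensity)
nn = nearest_resample(intensity)
# Averaging 4 looks roughly halves the std; decimation leaves it unchanged.
print(round(float(ml.std()), 2), round(float(nn.std()), 2))
```

Both outputs have the same (square-ish) pixel grid, but only the multilook reduces the per-pixel variance, which makes me suspect the two are not statistically equivalent after all?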
Thanks for any clarification!