Coherent Change Detection: setting coherence rg/az window size

Hi all,
I am trying to use S1 data for change detection (coherent and non-coherent) of man-made features, in particular buildings. I have already read through many posts here regarding best practices and the SNAP workflow. However, I am still a bit unclear about the coherence range and azimuth window size. As I read in a paper, it is probably hard to detect any structure smaller than the spatial resolution, which for single-look S1 IW data is about 5x20 m. I am wondering whether it would be possible at all to detect, for example, a completely destroyed building with a footprint of 10x10 m using CCD? As far as I understand, the noise will be very high unless the coherence is estimated over a set of N independent pixels, which corresponds to the coherence window size.
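For reference, the coherence over a set of pixels is usually estimated with the standard sample estimator |γ̂| = |Σ s₁s₂*| / √(Σ|s₁|² Σ|s₂|²). A minimal NumPy sketch (the patch arrays here are made-up test data, not a real S1 product):

```python
import numpy as np

def coherence(s1, s2):
    """Sample coherence magnitude of two co-registered complex
    SLC patches, estimated over all pixels in the patch."""
    num = np.abs(np.sum(s1 * np.conj(s2)))
    den = np.sqrt(np.sum(np.abs(s1) ** 2) * np.sum(np.abs(s2) ** 2))
    return num / den

# Sanity check: a patch compared with itself gives coherence 1
# (up to floating-point rounding).
rng = np.random.default_rng(0)
patch = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
print(coherence(patch, patch))
```

The more independent pixels go into the sums, the lower the estimator's variance and bias, which is exactly the resolution-versus-noise trade-off the window size controls.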

Are there any recommendations for setting these values so as not to degrade the resolution too much? And just for my understanding: this would be like a moving-average window, i.e. the coherence is computed over an NxN window, which then moves over by one pixel and the computation repeats? So in this sense it is different from multi-looking, where the image size is also reduced.
Thanks for any insights!
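To illustrate the distinction in the question above: a boxcar coherence window slides one pixel at a time and keeps the full output grid, while multi-looking averages non-overlapping blocks and decimates. A sketch under those assumptions (window sizes are arbitrary examples, and the complex parts are filtered separately because `uniform_filter` expects real input):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def _boxcar_complex(x, win):
    # uniform_filter works on real arrays; filter re/im separately.
    return uniform_filter(x.real, win) + 1j * uniform_filter(x.imag, win)

def sliding_coherence(s1, s2, win=(3, 10)):
    """Boxcar coherence: the window moves one pixel at a time,
    so the output has the same shape as the input."""
    num = np.abs(_boxcar_complex(s1 * np.conj(s2), win))
    den = np.sqrt(uniform_filter(np.abs(s1) ** 2, win) *
                  uniform_filter(np.abs(s2) ** 2, win))
    return num / den

def multilook(img, looks=(4, 1)):
    """Multi-look an intensity image by block-averaging:
    the output is smaller by the number of looks per dimension."""
    r, c = looks
    h, w = img.shape[0] // r * r, img.shape[1] // c * c
    return img[:h, :w].reshape(h // r, r, w // c, c).mean(axis=(1, 3))
```

Either way the effective number of looks sets the estimation noise; the boxcar version just trades that against correlated (oversampled) output pixels instead of a smaller image.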


Concerning the detection of small structures:

It's not possible to detect any object within S-1 IW data that is smaller than its resolution.

Source

Concerning the coherence and window size:

Source

Multi-looking:

Source

Source of the post


Well, since this is supposed to be for change detection, I did think that sub-pixel detection should be possible: even when only part of the scattering properties within one pixel changes, the speckle characteristics, and hence the coherence, before and after the damage should differ.

So my question was aiming in particular at how to set the coherence window size in range and azimuth so as to still have decent resolution without too much noise. Depending on the coherence window size, what would be the smallest changed feature I could detect?
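One way to reason about this is to convert the window size into a ground footprint. The pixel spacings below are approximate typical values for S1 IW SLC (roughly 2.3 m slant range x 14.1 m azimuth) and the incidence angle is an assumed mid-swath value; the real numbers should come from the product metadata. A rough sketch:

```python
import math

# Approximate S-1 IW SLC pixel spacing (assumed; check your
# product's metadata for the actual values).
RG_SPACING_M = 2.3    # slant-range pixel spacing
AZ_SPACING_M = 14.1   # azimuth pixel spacing

def window_footprint(rg_win, az_win, incidence_deg=39.0):
    """Rough ground footprint of a coherence window.
    Slant-range spacing is projected to ground range via the
    (assumed) incidence angle."""
    ground_rg = RG_SPACING_M / math.sin(math.radians(incidence_deg))
    return rg_win * ground_rg, az_win * AZ_SPACING_M

# e.g. a hypothetical 10x3 (rg x az) window:
rg_m, az_m = window_footprint(10, 3)
print(f"~{rg_m:.0f} m (ground range) x {az_m:.0f} m (azimuth)")
```

A change much smaller than this footprint only shifts a fraction of the pixels inside the window, so its coherence drop gets diluted accordingly.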

I think such a scenario cannot always be detected, since in many cases the collapse debris of a building stays within the same spot, and consequently a similar signal from these objects could be detected in the passes before and after the damage.

I'd refer to this answer from our colleague @mengdahl in this post:

Source of the post

Yes, you are right. It will also strongly depend on what kind of damage occurred to the structure. For example, if just the roof is gone and the building walls are intact, it could still give a strong double-bounce signal that dominates the speckle, and thus the coherence would nevertheless remain rather high.
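On the noise point raised earlier in the thread: the sample coherence estimator is biased high for small windows, so even fully decorrelated areas show non-zero coherence when few looks are averaged. A quick Monte Carlo sketch with simulated uncorrelated speckle (the look counts are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(42)

def circular_gaussian(shape):
    """Unit-power complex circular Gaussian speckle."""
    return (rng.standard_normal(shape)
            + 1j * rng.standard_normal(shape)) / np.sqrt(2)

def sample_coherence(n_looks, trials=20000):
    """Mean estimated |coherence| of two *uncorrelated* speckle
    signals, averaged over n_looks independent pixels."""
    s1 = circular_gaussian((trials, n_looks))
    s2 = circular_gaussian((trials, n_looks))
    num = np.abs(np.sum(s1 * np.conj(s2), axis=1))
    den = np.sqrt(np.sum(np.abs(s1) ** 2, axis=1)
                  * np.sum(np.abs(s2) ** 2, axis=1))
    return (num / den).mean()

for n in (5, 15, 45):
    print(f"{n:3d} looks: mean |coh| of pure noise ~ {sample_coherence(n):.2f}")
```

With only a handful of looks, pure noise can easily show a "coherence" of around 0.4, which is worth keeping in mind when picking a small window to preserve resolution.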

I guess I will just try different window sizes and see what the results look like.

It also depends on the building's intensity within the pixel.

It would be great if you could share the results, if possible!

Well, I'm still hoping to get any results :) but happy to share if I do.