After searching the forum I have not seen this issue being asked. Any help will be appreciated.
Whenever I compute the coherence of two images, I get many isolated NaN pixels in the coherence image. Is there a mathematical explanation for this?
As an example:
Input images (Atacama desert)
S1A_IW_SLC__1SSV_20161203T230604_20161203T230626_014221_016FD2_5DAA
S1A_IW_SLC__1SSV_20161227T230603_20161227T230625_014571_017ACF_F95F
Thanks for your reply.
In the coherence band properties, I can see:
No-Data value used: checked
No-Data value: 0.0
Tracing the NaN pixels back through the graph, they seem to originate from pixels in the input images that have a 0 in either the I or Q component. Those pixels are converted to NaN, and the NaNs are then propagated forward through the graph. Could this be the case? If so, is there a way to make sure that gpt treats zeros as zeros and not as NaNs?
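To illustrate the mechanism I suspect, here is a minimal NumPy sketch (not SNAP's actual implementation) of a windowed coherence estimate. A single zero-valued sample, masked to NaN the way a no-data value of 0.0 would be, turns every estimation window that touches it into NaN, which would explain isolated NaN pixels in the output:

```python
import numpy as np

def coherence(s1, s2, win=3):
    """Windowed coherence estimate of two co-registered SLC patches.
    A NaN in either input propagates to every window that contains it."""
    h = win // 2
    rows, cols = s1.shape
    out = np.full((rows, cols), np.nan)
    for r in range(h, rows - h):
        for c in range(h, cols - h):
            w1 = s1[r - h:r + h + 1, c - h:c + h + 1]
            w2 = s2[r - h:r + h + 1, c - h:c + h + 1]
            num = np.sum(w1 * np.conj(w2))
            den = np.sqrt(np.sum(np.abs(w1) ** 2) * np.sum(np.abs(w2) ** 2))
            # NaN here if any window pixel is NaN (and 0/0 if a whole window is zero)
            out[r, c] = np.abs(num) / den
    return out

s1 = np.ones((5, 5), dtype=complex)
s2 = np.ones((5, 5), dtype=complex)
s1[2, 2] = 0            # one sample with zero I and Q components
s1[s1 == 0] = np.nan    # what treating 0.0 as no-data would do
g = coherence(s1, s2)   # every window touching (2, 2) is now NaN
```

Note that even without the NaN masking, a window containing only zeros would give 0/0 in the estimator, which is itself NaN, so a mathematical path to NaN exists either way.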