I am reducing some S1 data. After multi-temporal speckle filtering I am seeing some very odd results depending on which algorithm I run. Comparison below (same colour scale).
The gamma filter leaves NaNs across the bands.
The IDAN filter produces pixels with extreme values (-32000000000000000).
I have been told that filtering works better on additive noise (dB) than on multiplicative noise (i.e. before taking the log), but perhaps this is misleading, as I do not know the intricacies of the algorithms. Maybe taking ratios of images should not be done until after filtering? At what point in the data reduction should I apply the speckle filter?
I would first check whether the filter results look different (i.e. without the error pixels) after terrain correction.
Some people would argue that these filters were originally designed for SLC data (and that filtering should be done directly after calibration).
But for your case a result without errors would be a first step.
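As a quick sanity check before comparing filter outputs, you could mask out the NaNs and the obviously broken extreme values. This is just a NumPy sketch with illustrative thresholds (the -50/+30 dB limits are assumptions; adjust them to whatever range is physically plausible for your calibrated data):

```python
import numpy as np

# Toy filtered band containing the kinds of error pixels described above:
# a NaN (gamma filter) and an extreme negative value (IDAN filter).
band = np.array([[-0.5, 12.3, np.nan],
                 [-3.2e16, 8.1, 5.7]])

# Hypothetical sanity limits for backscatter in dB -- not an official range.
valid = np.isfinite(band) & (band > -50.0) & (band < 30.0)

# Keep good pixels, mask the rest, so both filter results can be
# compared on the same colour scale without the outliers dominating.
cleaned = np.where(valid, band, np.nan)

print(int(valid.sum()), "valid pixels of", band.size)
```

Comparing the histograms of the cleaned outputs side by side should make it much easier to see whether the two filters actually disagree, or whether the difference is driven entirely by the error pixels.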
It is certainly true that filters behave differently depending on the distribution of the values (raw intensities vs. dB intensity values), because the statistics are different (the relative radiometric distance between the central pixel and its neighbouring values).
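That difference in statistics is easy to demonstrate. In the sketch below (my own toy simulation, not from any SNAP operator), fully developed speckle is modelled as gamma-distributed multi-look intensity: in the linear domain the noise is multiplicative, so the standard deviation scales with the mean while the coefficient of variation stays fixed at 1/sqrt(looks); after converting to dB the multiplicative term becomes additive, so the absolute spread is the same for dark and bright regions:

```python
import numpy as np

rng = np.random.default_rng(0)
looks = 4        # assumed equivalent number of looks
n = 200_000      # samples per simulated region

def speckled(mean_intensity):
    # Fully developed speckle: gamma-distributed intensity with
    # shape = number of looks, scaled to the true mean intensity.
    return mean_intensity * rng.gamma(looks, 1.0 / looks, n)

dark = speckled(0.05)    # low-backscatter region
bright = speckled(0.50)  # region 10x brighter

# Linear domain: std grows with the mean (multiplicative noise),
# but the coefficient of variation std/mean is the same (~1/sqrt(looks)).
cv_dark = dark.std() / dark.mean()
cv_bright = bright.std() / bright.mean()

# dB domain: the log turns the multiplicative factor into an additive
# offset, so the absolute std is (nearly) identical for both regions.
std_db_dark = (10 * np.log10(dark)).std()
std_db_bright = (10 * np.log10(bright)).std()

print(f"CV (linear): {cv_dark:.3f} vs {cv_bright:.3f}")
print(f"std (dB):    {std_db_dark:.3f} vs {std_db_bright:.3f}")
```

This is why a filter tuned around local mean/variance statistics sees a very different picture depending on whether it is fed linear or dB values, and why the recommended point in the chain matters.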