Apparent severe divergences between S1A and S1B time series

Dear all,

We are using S1 time series over the Amazonian forest to detect deforestation. We use the GRD images available on Google Earth Engine, which were produced using SNAP algorithms (see https://developers.google.com/earth-engine/sentinel1). One way we check the consistency of these series is to compare Sentinel-1A and Sentinel-1B backscatter values over homogeneous forest areas covered by acquisitions from both satellites.
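
For anyone who prefers the Python API, this is roughly what that extraction looks like; the AOI coordinates, date range and scale below are placeholders, not the values from our actual script (which is linked further down):

```python
import ee

ee.Initialize()

# Placeholder patch of homogeneous forest (not our actual study site).
aoi = ee.Geometry.Rectangle([-60.5, -3.5, -60.4, -3.4])

s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterBounds(aoi)
      .filterDate('2016-01-01', '2019-01-01')
      .filter(ee.Filter.eq('instrumentMode', 'IW'))
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
      .select('VH'))  # this collection is already log-scaled (dB)

def mean_vh(img):
    # Mean VH backscatter over the AOI, tagged with date and platform.
    stat = img.reduceRegion(ee.Reducer.mean(), aoi, scale=100)
    return ee.Feature(None, {
        'date': img.date().format('YYYY-MM-dd'),
        'platform': img.get('platform_number'),  # 'A' or 'B'
        'VH_dB': stat.get('VH'),
    })

series = ee.FeatureCollection(s1.map(mean_vh))
print(series.limit(5).getInfo())
```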

We observed serious divergences in the gamma naught values of Sentinel-1A and Sentinel-1B, but only in the VH polarization; the VV channel looks much better. These divergences seem to be related to changes in the version of the processing software. The 2.84 to 2.90 upgrade, in March 2018, looks especially impactful.
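
To make that check concrete, here is a rough pandas sketch of the before/after comparison, assuming the series above has been exported to a CSV with date/platform/VH_dB columns (the file name and the exact deployment date are placeholders):

```python
import pandas as pd

# Placeholder export of the per-image series from the sketch above.
df = pd.read_csv('vh_series.csv', parse_dates=['date'])

upgrade = pd.Timestamp('2018-03-13')  # approximate IPF 2.90 deployment date
for label, part in (('before', df[df['date'] < upgrade]),
                    ('after', df[df['date'] >= upgrade])):
    means = part.groupby('platform')['VH_dB'].mean()
    print(f"{label} upgrade: S1A - S1B = {means['A'] - means['B']:+.2f} dB")
```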

Before going further with our research, I’d like to ask the S1 production team and fellow users what might be causing these differences, and what would be the best way to mitigate them.

To reproduce those results, please use the following GEE script:

https://code.earthengine.google.com/4d2511b5b8807f16aa928d3271541a4c

I’m attaching some charts to illustrate our findings. The third chart shows the different versions of the processing software:

Please display the values in dB. Also, it’s not clear what the third plot shows.

The third plot showed the version number of the software used to compute each GRD product.
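
That series can be pulled straight from the image metadata; a minimal sketch, assuming the GRD_Post_Processing_software_version property name listed in the GEE catalog for COPERNICUS/S1_GRD (check propertyNames() if your assets differ):

```python
import ee

ee.Initialize()

aoi = ee.Geometry.Rectangle([-60.5, -3.5, -60.4, -3.4])  # placeholder patch

versions = ee.FeatureCollection(
    ee.ImageCollection('COPERNICUS/S1_GRD')
    .filterBounds(aoi)
    .filterDate('2016-01-01', '2019-01-01')
    .map(lambda img: ee.Feature(None, {
        'date': img.date().format('YYYY-MM-dd'),
        # Property name as listed in the GEE catalog for this collection;
        # verify with img.propertyNames() if it differs on your assets.
        'ipf_version': img.get('GRD_Post_Processing_software_version'),
    })))
print(versions.limit(5).getInfo())
```
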
In fact, investigating a little further, I conclude that this effect is site-dependent.
I have studied the S1A/S1B differences at some other sites, and there is no clear correlation between the differences observed at different sites.
Just a little R chart to show it:
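
For anyone reproducing this without R, the same comparison sketched in Python; the input file and its layout are placeholders (one column of monthly S1A-minus-S1B VH differences, in dB, per site):

```python
import pandas as pd

# Placeholder table: one column of monthly S1A-minus-S1B VH differences (dB)
# per site, indexed by month.
diffs = pd.read_csv('site_differences.csv', index_col='month')
print(diffs.corr())      # pairwise correlation between the sites' series
print(diffs.describe())  # spread of the differences per site
```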


The differences seem to swing around ±0.2 dB, which is a range of variation we can live with. However, it’s curious that S1A values are frequently slightly higher than S1B’s, at least over the last year…