Sentinel 1 GRD sigma0 calibration wrong output values

Dear S1TBX developers,

I have some issues with the values computed by the SNAP/S1TBX S1 Calibration operator when I compare them to other tools or to manual computation.

For example, with the following S1 GRD product, S1A_IW_GRDH_1SDV_20210202T060035_20210202T060100_036407_0445F3_B8EA.SAFE, I computed the calibration for three different points (in VV polarization, but it is the same for VH):

  • pixel A (l=667, c=8720) with DN=228: I read A_sigma0=6.14E+02 directly from the LUT. When I compute DN²/A_sigma0², I get 0.137914. SNAP gives me 0.137245.
  • pixel B (l=1202, c=8782) with DN=925: I perform bi-linear interpolation manually and get A_sigma0=613.66432. When I compute DN²/A_sigma0², I get 2.27207. SNAP gives me 2.26102.
  • pixel C (l=1024, c=1024) with DN=143: I perform bi-linear interpolation manually and get A_sigma0=655.85186. When I compute DN²/A_sigma0², I get 0.0475402. SNAP gives me 0.0475402.

Computations are available in computation.ods (30.8 KB).
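For reference, the manual check described above can be sketched as follows. This is a minimal illustration of the DN²/A_sigma0² formula with bi-linear interpolation of the LUT gain; the helper names and the LUT layout are illustrative only, not the SNAP API or the actual annotation structure.

```java
// Sketch of the manual sigma0 calibration check (illustrative names,
// not the S1TBX API). The gain A is tabulated on a sparse grid of
// (line, pixel) positions in the calibration annotation and must be
// bi-linearly interpolated to the pixel of interest.
public class Sigma0Check {

    // Linear interpolation between two samples, t in [0, 1].
    static double lerp(double v0, double v1, double t) {
        return v0 + t * (v1 - v0);
    }

    // Bi-linear interpolation of the sigmaNought gain A at (line, col),
    // given the two bracketing calibration-vector lines (line0, line1)
    // and the two bracketing LUT columns (col0, col1).
    static double interpolateGain(double line, double col,
                                  double line0, double line1,
                                  double col0, double col1,
                                  double a00, double a01,   // gains at line0, col0/col1
                                  double a10, double a11) { // gains at line1, col0/col1
        final double tc = (col - col0) / (col1 - col0);
        final double tl = (line - line0) / (line1 - line0);
        return lerp(lerp(a00, a01, tc), lerp(a10, a11, tc), tl);
    }

    // sigma0 = DN^2 / A^2
    static double sigma0(double dn, double gain) {
        return (dn * dn) / (gain * gain);
    }

    public static void main(String[] args) {
        // Pixel A from the post: DN = 228, gain read directly from the LUT
        // (6.14E+02 is the rounded value, hence the slight difference
        // from the 0.137914 obtained with the full-precision gain).
        System.out.println(sigma0(228, 6.14e2));
    }
}
```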

When I compare globally with the OTB 7.3 implementation, the SARCalibration app (with the noise parameter set to true to avoid noise removal) gives me the right results over the three pixels.
Moreover, two patterns appear in the difference image |cal_snap - cal_otb|.

Do you have any idea about these differences? Could it be an issue with the computeTile implementation?



I was just searching the forums to see if anyone had reported this issue, as I see the same pattern of differences (with an SLC dataset). The pattern seems to match the tile height and width.

I did some code debugging, and it seems to me that the binary search upper index calculation goes wrong.

In computeTile()

pixelIdx = getPixelIndex(calVec, pixelIdx, subsetOffsetX + x);

If I change the pixelIdx argument to calVec.pixels.length, the results look correct.
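To illustrate the suspected failure mode: getPixelIndex presumably binary-searches the calibration vector's pixel array for the LUT segment bracketing column x. If the upper bound of that search is a stale pixelIdx carried over from a previous call instead of calVec.pixels.length, columns beyond that bound can never be bracketed and a wrong LUT segment is returned. This is a sketch only; the signature mimics the snippet above but is not the actual S1TBX source.

```java
// Sketch of a bounded binary search for the LUT segment containing
// column x. "pixels" plays the role of calVec.pixels: the ascending
// column indices at which calibration gains are tabulated. The search
// only considers indices in [0, hi).
public class PixelIndexSketch {

    // Returns i such that pixels[i] <= x < pixels[i + 1],
    // restricted to the range [0, hi).
    static int getPixelIndex(int[] pixels, int hi, int x) {
        int low = 0, high = hi - 1;
        while (low <= high) {
            int mid = (low + high) >>> 1;
            if (pixels[mid] <= x && (mid + 1 >= hi || x < pixels[mid + 1])) {
                return mid;
            } else if (pixels[mid] > x) {
                high = mid - 1;
            } else {
                low = mid + 1;
            }
        }
        return hi - 1; // clamp as a "not found" fallback
    }

    public static void main(String[] args) {
        int[] pixels = {0, 40, 80, 120, 160};

        // Correct call: search the whole vector -> segment 3
        // (pixels[3] = 120 <= 130 < 160 = pixels[4]).
        int ok = getPixelIndex(pixels, pixels.length, 130);

        // Buggy call: upper bound limited to a stale index (2), so the
        // search is clamped to segment 1 and the wrong gain is used.
        int bad = getPixelIndex(pixels, 2, 130);

        System.out.println(ok + " " + bad); // prints "3 1"
    }
}
```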

Obviously it would be better if a SNAP developer commented on this.

Thanks @priit123 for your feedback. I confirm that my tile size parameter in the SNAP GUI performance options is set to 512.

This is an interesting observation. If it really is a bug in such an important SNAP process, it should be addressed. @lveci or @mengdahl, any comments from your side?

I will add that the best way to recreate the issue is to change the SNAP tile size under the performance settings. I think you need to restart SNAP for this option to take effect, and I did that just in case.

Here is the resulting image, where I computed (pixel1 - pixel2)/pixel1 to get the relative error. In this case, tile size 128 vs 1024.

This pattern should only emerge with gamma and sigma, not with beta (because the beta calibration values are always the same?).
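That reasoning can be checked directly: if the LUT is constant across range (as the betaNought vectors of a GRD product typically are), interpolating in the wrong segment still returns the same gain, so an index bug is invisible for beta but not for sigma or gamma. A toy illustration with made-up values:

```java
// Demonstrates why a wrong LUT-segment index does not affect a
// constant gain vector (beta) but does affect a varying one (sigma).
public class ConstantLutDemo {

    // Linear interpolation between the gains bracketing segment i.
    static double interp(double[] gains, int i, double t) {
        return gains[i] + t * (gains[i + 1] - gains[i]);
    }

    public static void main(String[] args) {
        double[] beta  = {474.0, 474.0, 474.0, 474.0}; // constant, like betaNought
        double[] sigma = {610.0, 612.5, 614.0, 615.2}; // varies across range

        // Interpolate in the correct segment (2) and a wrong one (0):
        // the constant LUT gives identical gains, the varying one does not.
        System.out.println(interp(beta, 2, 0.3) + " vs " + interp(beta, 0, 0.3));
        System.out.println(interp(sigma, 2, 0.3) + " vs " + interp(sigma, 0, 0.3));
    }
}
```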

@junlu @lveci could you check what is causing this? Tile-size should not affect the end-results of processing.

We will look into the issue. Thank you all for the information. A JIRA ticket has been created by @mengdahl to track the issue.

A bug has been fixed in getPixelIndex(). The fix will be included in the next patch release. Thank you all for pointing out the problem.


Thanks a lot for fixing it.