I’m working with Sentinel-1 GRD data in EOPF Zarr format, and I’m comparing the calibrated products with those computed with SNAP. I noticed a systematic difference: the calibrated values obtained from the Zarr product are, on average, lower than the SNAP ones.
Looking into the SNAP docs, I noticed this particular sentence: "For GRD products, a constant offset is also applied."
However, I couldn’t find any other info on how this offset is computed or where to find it in the metadata.
I think I found where this offset is applied in the code of the Sentinel-1 Toolbox, but it’s not clear to me how to reproduce the same computation in Python.
Can someone clarify how to compute this constant offset so that I can calibrate the data without using SNAP?
Hi @piyushrpt, I am actually comparing sigma nought values. I leave a minimal code example below to reproduce my current status. What I have already checked:
SAFE and EOPF Zarr values are the same
SAFE and EOPF LUT values are the same
I don’t know how SNAP interpolates the LUT, so there might also be a difference coming from that. In any case, I still don’t know whether the aforementioned constant offset exists and, if so, how to apply it.
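For reference, this is roughly how the loading and the LUT check look on my side. The Zarr store name and the group/variable paths below are placeholders for illustration, not the actual EOPF layout, and I am only showing the checks, not the sigma0 computation itself:

```python
# Sketch of the LUT comparison: SAFE calibration annotation vs EOPF Zarr.
import xml.etree.ElementTree as ET

import numpy as np
import xarray as xr

# --- SAFE side: sigma0 LUT from the calibration annotation XML ---
cal_xml = "S1A_IW_GRDH.SAFE/annotation/calibration/calibration-vh.xml"  # placeholder name
root = ET.parse(cal_xml).getroot()
vectors = root.findall(".//calibrationVector")
lines = np.array([int(v.find("line").text) for v in vectors])
pixels = np.array([int(p) for p in vectors[0].find("pixel").text.split()])
sigma_lut_safe = np.array(
    [[float(x) for x in v.find("sigmaNought").text.split()] for v in vectors]
)

# --- EOPF Zarr side: DN array and sigma0 LUT (hypothetical paths) ---
dt = xr.open_datatree("S1A_IW_GRDH.zarr", engine="zarr")            # placeholder name
grd_vh = dt["measurements/grd_vh"].values                           # hypothetical path
sigma_lut_zarr = dt["conditions/calibration/sigma_nought"].values   # hypothetical path

print(np.allclose(sigma_lut_safe, sigma_lut_zarr))  # the LUTs match
```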
You can check that you have wired things up correctly by checking beta0 first. This is a constant value even though it is provided as a LUT. If you see a difference in beta0, the error is pre-calibration or in interpreting the calibration factor. Simple bilinear interpolation is recommended for the other LUTs: sigma0 / gamma0 depend on the incidence angle, which in turn depends on the terrain height used, which is also provided as a 1D LUT as a function of along-track time.
The best way would be to take ratios of the data on the same grid for comparison.
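Something along these lines, as a sketch (the names are assumptions: grd_vh is the uint16 DN array, beta_lut the betaNought LUT loaded like the sigma0 LUT in your post, and snap_beta0 a Beta0 band exported from SNAP on the same pixel grid):

```python
import numpy as np

def beta0_check(grd_vh: np.ndarray, beta_lut: np.ndarray, snap_beta0: np.ndarray):
    """Compare beta0 computed from the DNs and the LUT against SNAP's Beta0 band."""
    # betaNought is distributed as a LUT but should hold a single constant value
    assert np.allclose(beta_lut, beta_lut.flat[0])
    k_beta = float(beta_lut.flat[0])

    # cast the DNs to float before squaring (see the note on wraparound below)
    beta0 = np.abs(grd_vh.astype(np.float32)) ** 2 / k_beta**2

    # compare on the same grid via ratios rather than differences
    ratio = beta0 / snap_beta0
    return float(np.nanmean(ratio)), float(np.nanstd(ratio))
```

If the mean ratio is not essentially 1, the problem is upstream of the LUT interpolation.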
If you read Sec. 9.19 of the algorithm description document, you will see that the absolute calibration constant K_{abs} is already accounted for in the LUTs.
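In other words, as far as I recall the application formula in that document has no separate constant for GRD: each calibrated quantity is simply |DN_i|^2 / A_i^2, where A_i is the interpolated value of the corresponding LUT (betaNought, sigmaNought, gammaNought or dn).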
You may also want to check your math on abs(grd_vh)**2. I’m not familiar with how this data is distributed, but I suspect that the GRD TIFFs are UInt16, and that operation can wrap around if the array is not converted to float first.
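A quick way to see the wraparound:

```python
import numpy as np

dn = np.array([3000], dtype=np.uint16)
print(np.abs(dn) ** 2)                     # [21568]  -- wrapped around modulo 2**16
print(np.abs(dn.astype(np.float32)) ** 2)  # [9000000.]
```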
@piyushrpt thanks for the valuable suggestions. You are right, casting to float is necessary: abs(grd_vh)**2 becomes abs(grd_vh.astype(np.float32))**2. This leads to an almost perfect match of the Beta0 values.
However, the Sigma0 values are still off. I don’t see the LIA or a DEM mentioned in the documentation for the calibration, but I can find the terrainHeight 1D LUT you mentioned in the metadata. How should I include it in the computation? Can you point me to the right documents?
You don’t need to use them in the computation, as their effect is already included in the LUTs. If your beta0 lines up and sigma0 is off, then the difference is in the LUT interpolator.
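For reference, a sketch of a simple bilinear interpolation of a calibration LUT onto the full pixel grid with scipy (the function and argument names are mine; lines, pixels and the LUT array are the calibration-vector grid as read in the earlier posts, dn the uint16 DN array):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def calibrate(dn, lines, pixels, lut):
    """Apply a calibration LUT (e.g. sigmaNought) annotated at sparse
    (line, pixel) grid points to a GRD DN array, via bilinear interpolation."""
    interp = RegularGridInterpolator(
        (lines, pixels), lut, method="linear", bounds_error=False, fill_value=None
    )
    # evaluate the LUT at every (row, col) of the image grid
    rows, cols = np.meshgrid(
        np.arange(dn.shape[0]), np.arange(dn.shape[1]), indexing="ij"
    )
    a = interp(np.stack([rows, cols], axis=-1))
    # cast the DNs to float before squaring to avoid uint16 wraparound
    return np.abs(dn.astype(np.float32)) ** 2 / a**2
```

Usage would be something like sigma0 = calibrate(grd_vh, lines, pixels, sigma_lut_safe) with the arrays from the earlier sketch; for a full GRD scene you would evaluate this block-wise (or through dask) to keep memory under control.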