S1 Processing product vs. MSPC S1-RTC product

I am struggling with a problem of understanding here and I was hoping you may be able to offer some insight.
I have a S1-GRD processing chain that I run in SNAP. It looks like this:

The outputs of this chain are Sigma0_VH_db, Sigma0_VV_db, and LIA_db, which is great and works for the intended purpose. However, this is very costly to process at scale, so I am looking at Microsoft Planetary Computer, as they offer a processed S1-GRD product known as S1-RTC. You can see their product here
Their product output is COG files containing “terrain-corrected gamma naught values with radiometric terrain correction applied”. They describe their methodology as follows:
"The Sentinel-1 GRD product is converted to calibrated intensity using the conversion algorithm described in the ESA technical note ESA-EOPG-CSCOP-TN-0002, Radiometric Calibration of S-1 Level-1 Products Generated by the S-1 IPF. The flat earth calibration values for gamma correction (i.e. perpendicular to the radar line of sight) are extracted from the GRD metadata. The calibration coefficients are applied as a two-dimensional correction in range (by sample number) and azimuth (by time). All available polarizations are calibrated and written as separate layers of a single file. The calibrated SAR output is reprojected to nominal map orientation with north at the top and west to the left.

The data is then radiometrically terrain corrected using PlanetDEM as the elevation source. The correction algorithm is nominally based upon D. Small, “Flattening Gamma: Radiometric Terrain Correction for SAR Imagery”, IEEE Transactions on Geoscience and Remote Sensing, Vol 49, No 8., August 2011, pp 3081-3093. For each image scan line, the digital elevation model is interpolated to determine the elevation corresponding to the position associated with the known near slant range distance and arc length for each input pixel. The elevations at the four corners of each pixel are estimated using bilinear resampling. The four elevations are divided into two triangular facets and reprojected onto the plane perpendicular to the radar line of sight to provide an estimate of the area illuminated by the radar for each earth flattened pixel. The uncalibrated sum at each earth flattened pixel is normalized by dividing by the flat earth surface area. The adjustment for gamma intensity is given by dividing the normalized result by the cosine of the incident angle. Pixels which are not illuminated by the radar due to the viewing geometry are flagged as shadow.

Calibrated data is then orthorectified to the appropriate UTM projection. The orthorectified output maintains the original sample sizes (in range and azimuth) and was not shifted to any specific grid.

RTC data is processed only for the Interferometric Wide Swath (IW) mode, which is the main acquisition mode over land and satisfies the majority of service requirements."
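If I understand their normalization step correctly, it boils down to dividing the calibrated backscatter of each pixel by the locally illuminated area (projected perpendicular to the radar line of sight) instead of the flat-earth reference area. A toy sketch of that idea, with entirely hypothetical values and not their actual implementation:

```python
import numpy as np

# One image line of calibrated intensity (linear power) and, per pixel,
# the DEM-derived locally illuminated area versus the flat-earth
# reference area. All values are made up for illustration.
beta0 = np.array([0.12, 0.30, 0.25, 0.08])          # calibrated intensity
local_area = np.array([100.0, 180.0, 60.0, 100.0])  # DEM facet area, m^2
flat_area = 100.0                                   # flat-earth pixel area, m^2

# Normalize: slopes facing the sensor (large illuminated area) are
# darkened, slopes facing away (small illuminated area) are brightened.
gamma0_t = beta0 * flat_area / local_area

# Pixels not illuminated at all would be flagged as shadow instead.
shadow = local_area <= 0.0
```

This is only the area-normalization idea from Small (2011); the real processor estimates the local area from triangulated DEM facets per scan line, as the quote describes.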

So how different is their RTC product from the one I generate by using the xml graph in SNAP?
Well… I see no mention of updated orbit files being used, so they are probably using the native ones. I also don’t see any mention of ThermalNoiseRemoval, GRD Border Noise Removal, or the linearTodB function. I do, however, see information about calibration and terrain correction, and since I apply a dB scale myself when I load the COG file into Python, the linearTodB function can be disregarded.
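For reference, the dB scaling I apply after loading the COG is just a log transform. A minimal sketch (the file-reading part, e.g. via rasterio, is omitted; the array below is synthetic):

```python
import numpy as np

def to_db(linear):
    """Convert linear-power backscatter (e.g. the gamma0 values in the
    S1-RTC COGs) to decibels, masking non-positive pixels
    (nodata / radar shadow) as NaN."""
    linear = np.asarray(linear, dtype="float64")
    out = np.full(linear.shape, np.nan)
    valid = linear > 0
    out[valid] = 10.0 * np.log10(linear[valid])
    return out

# In practice the array would come from something like
# rasterio.open("...").read(1) on the RTC COG; here a tiny
# synthetic patch stands in.
patch = np.array([[0.5, 0.01],
                  [0.0, 1.0]])
print(to_db(patch))
```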
Now, besides these apparent discrepancies in processing (the missing noise removal), it is also the different use of the sigma/gamma terms that I seek to understand. I thought that once we calibrate, the product “becomes” sigma0. I found this statement in a guide somewhere: “It [read: Calibration] produces calibrated backscatter values, known as sigma0, which represent the radar reflectivity of the observed area.” So what is what here? The SNAP chain outputs Sigma0_VV_dB, while S1-RTC says it outputs gamma0. The processing, apart from the noise removal and the final linearTodB, seems to be similar.

I found this bit as I think it may be correct. But please feel free to comment on it!

Gamma0 represents the radar backscattering coefficient or radar cross-section in natural (linear) units.
It is typically used to describe the radar reflectivity of an observed area without any specific reference to the terrain or viewing geometry.
Gamma0 values are linear and not in decibels (dB).

Sigma0, often referred to as sigma naught, represents the radar backscattering coefficient or radar cross-section that has been converted to decibels (dB).
It is obtained by applying a logarithmic transformation (in decibels) to the gamma0 values.
Sigma0 in dB provides a more convenient scale for visualizing and analyzing radar data, as it compresses the dynamic range of values. This makes it easier to distinguish features with varying radar reflectivity.

There are basically six different conventions in which backscatter is expressed:

  • the raw digital numbers saved in the product, e.g. scaled to be saved as integer
  • beta naught \beta^0: the backscatter per image pixel area, e.g. 10x10 m² GRD pixel size
  • sigma naught \sigma^0_E: backscatter projected to the ellipsoid with the covered ellipsoid area as reference
  • gamma naught \gamma^0_E: backscatter projected to the ellipsoid with the area perpendicular to the line of sight as reference
  • sigma naught RTC \sigma^0_T: terrain corrected sigma naught
  • gamma naught RTC \gamma^0_T: terrain corrected gamma naught

Projection to ellipsoid does not refer to geocoding however, it just refers to the pixel area projection, i.e. how much backscatter is received per which area.
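On the ellipsoid (i.e. before any terrain correction), those conventions are related through the incidence angle. A quick sketch of the standard relations (illustrative values; the per-pixel angles would come from the product annotation):

```python
import numpy as np

# Standard ellipsoid-based relations between the conventions above,
# using the ellipsoid incidence angle theta:
#   sigma0_E = beta0 * sin(theta)        (ground area on the ellipsoid)
#   gamma0_E = sigma0_E / cos(theta)     (area perpendicular to line of sight)
#            = beta0 * tan(theta)
theta = np.deg2rad(35.0)   # a typical IW incidence angle
beta0 = 0.2                # linear power, illustrative value

sigma0_e = beta0 * np.sin(theta)
gamma0_e = sigma0_e / np.cos(theta)
```

Note that gamma0_E is always larger than sigma0_E for the same pixel, since the reference area perpendicular to the line of sight is smaller than the ground area.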

The difference in reference areas is well explained by the figure in Small (2011):

[figure from Small 2011 illustrating the β⁰, σ⁰, and γ⁰ reference areas]

Thank you for this excellent explanation and figure.
