Can someone please explain the bit depth (radiometric resolution?) of Sentinel-1 data to me, and how it changes as the data are processed, i.e.:
1: What is the bit depth of S1 GRD data as downloaded?
2: What is the bit depth after calibrating using SNAP?
3: What is the bit depth after speckle filtering - the same?
When I inspect the data after calibration and filtering, I get ‘intensity’ (even though the data might have been selected for processing as amplitude only) values between 0 and 0.99999 (5 decimal places). Hitting ‘Analysis/Information’ tells me the data is float32 (which is huuuuge).
Basically I’m trying to difference two S1 images and am trying to understand the DN values I’m getting.
Calibration converts the dimensionless intensity to the normalized radar cross-section (Sigma0). Normalized means that a perfectly isotropic scatterer corresponds to a value of 1.
So if you want to compare two images, calibration is essential and conversion to a float data type is unavoidable.
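For reference, a hedged sketch of what that calibration step computes (the array values and variable names here are illustrative, not SNAP's API): per ESA's radiometric calibration note for S-1 Level-1 products, sigma0 = DN² / A², where A is the sigma-nought gain interpolated per pixel from the product's calibration annotation LUT:

```python
import numpy as np

# Hedged sketch of S1 GRD radiometric calibration (illustrative values,
# not SNAP's API): sigma0 = DN^2 / A^2, with A the sigma-nought gain
# interpolated per pixel from the calibration LUT.
dn = np.array([[120, 355], [87, 940]], dtype=np.uint16)               # 16-bit amplitudes
a_sigma0 = np.array([[652.1, 650.8], [651.4, 649.9]], dtype=np.float32)  # LUT gains

sigma0 = dn.astype(np.float32) ** 2 / a_sigma0 ** 2
print(sigma0.dtype)  # float32 -- integer DNs become continuous-valued floats
```

Because the gain A varies continuously from pixel to pixel, the discrete 16-bit DNs no longer map onto a small fixed set of output values, which is why the calibrated band has to be floating point.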
1: How do you get more than 100% of the signal returning?
2: 32 bit = 4,294,967,296 values. So are you saying that after calibration the normalised sigma zero range of 0 to 1 is divided into 4,294,967,296 steps (or values)?
3: When looking at pixel values using the ‘Pixel information’ panel, why are the values shown to only 5 significant places if 0-1 divided into 4,294,967,296 steps?
In the case of corner reflection (buildings) or volume scattering (vegetation).
No, don’t confuse these bits with the bins of a histogram. A bit depth of 32 just means that the data can potentially store numbers within the given range; the raster itself could still consist of nothing but 1.0s. An 8-bit raster, for example, cannot store values larger than 255. Good explanation: Bit depth capacity for raster dataset cells—Help | ArcGIS for Desktop
The data type (integer, float, double…) defines whether, and with what precision, decimals can be stored per value.
This matters for the storage reserved for these pixels on your hard drive: an 8-bit integer raster consisting only of 1s is far smaller than a 32-bit float raster also consisting only of 1.0s.
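The container-versus-contents point can be demonstrated with a small numpy sketch (nothing SNAP-specific; the arrays are illustrative):

```python
import numpy as np

# Bit depth is capacity, not content: both rasters below hold a single
# distinct value, but they reserve very different amounts of storage.
ones_u8 = np.ones((1000, 1000), dtype=np.uint8)     # 8-bit integer raster of 1s
ones_f32 = np.ones((1000, 1000), dtype=np.float32)  # 32-bit float raster of 1.0s

print(ones_u8.nbytes, ones_f32.nbytes)       # 1000000 4000000 -- 4x the storage
print(np.unique(ones_f32).size)              # 1 distinct value despite 32 bits

# An 8-bit unsigned integer cannot represent values above 255 (300 wraps to 44):
print(np.array([300]).astype(np.uint8)[0])   # 44

# And float32 carries only ~7 significant decimal digits, so a 5-decimal
# display in the Pixel Info panel loses very little:
print(np.float32(0.123456789))               # 0.12345679
```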
Radar cross-section is normalised in such a way (for historical reasons) that Sigma0 = 1 = 0 dB corresponds to perfectly isotropic scattering, like what one would get from, for example, a metal sphere. If Sigma0 > 0 dB, more energy is directed back at the radar compared with the sphere; if Sigma0 < 0 dB, more energy is reflected away.
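The linear/dB relationship described above is just a log transform; a minimal sketch (the `to_db` helper is illustrative, not a SNAP function):

```python
import math

# 0 dB is the isotropic reference: 10 * log10(1.0) == 0.
def to_db(sigma0_linear):
    return 10.0 * math.log10(sigma0_linear)

print(to_db(1.0))   # 0.0    -> isotropic scatterer (e.g. a metal sphere)
print(to_db(4.0))   # ~6.02  -> more energy directed back at the radar
print(to_db(0.25))  # ~-6.02 -> more energy reflected away
```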
Thanks again ABraun for your detailed reply and links, which were informative. I now understand that the 32-bit format can just be a container for a variety of data types, and does not necessarily reflect the precision of the data it contains.
Once S1 data are calibrated, although they end up in a 32-bit container, how many discrete values are there, e.g. is Sigma0 data in fact 8-bit with 256 values, 16-bit…?
To ABraun and mengdahl:
I’m sorry, I still don’t understand how more than 100% of the radar signal can be reflected, even with multipath. Is this a cumulative result? Sorry for being thick.
To help me understand, do you know what the actual calculation is that’s made when calibrating the 16-bit GRD data, such that it ends up with an ‘almost infinite’ number of discrete values? I’m sorry, but there’s still something I’m not understanding…
ABraun’s explanation is not correct. As I tried to explain above, what you call “>100% backscatter” actually means “the pixel is reflecting more energy back towards the radar than an isotropic (omnidirectional) scatterer would”.
Hmmm… my maths isn’t good enough to make sense of that, but I now understand why each pixel might be different, and why storing in 32 float is appropriate. So thanks for getting me over that hurdle.
The reason I’m getting hung up on the number of values is that I’m trying to make sense of the results of differencing two S1 images bracketing Hurricane Maria over Dominica. After calibrating, filtering and coregistering the two images, the results are not intuitive, so I’m trying to understand the data itself, but became confused by the DN values.
Many thanks again for all your help.
PS: Why does SNAP show the ‘raw’ GRD data as 32 bit INT for amplitude (32 bit float for intensity), when the ESA notes you copied above say 16-bit?
Thank you for the clarification. So volume scattering or double bounce does play a role here, but the energy received at a pixel is only emitted from that location (and not from neighbouring scatterers)?
Largely so, but a full explanation gets into the technicalities of SAR focusing, antenna beam sidelobes etc. Perhaps @peter.meadows could give a more comprehensive response…