Sentinel-1A GRD images are delivered as `.tiff` files in which the pixel square-root intensities (amplitudes) are represented using 16-bit unsigned integers (`uint16`). When reading the `.tiff` file directly, SNAP reports the correct type; however, opening the image via the `manifest.safe` file causes the values to be interpreted as signed integers (`int16`). As a result, very large intensity values are reported as negative (in reverse order with respect to absolute value, due to two's complement wrap-around).
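For anyone wondering why the values flip sign, here is a minimal, self-contained sketch (plain Java, not SNAP code) of what happens when the same 16 bits are reinterpreted as signed:

```java
// How a large uint16 amplitude turns negative when its bits are read as int16.
public class SignednessDemo {
    public static void main(String[] args) {
        int storedAmplitude = 65_000;            // a valid uint16 pixel value
        short asInt16 = (short) storedAmplitude; // same 16 bits, read as signed
        System.out.println(asInt16);             // -536 (two's complement wrap)
        System.out.println(Short.toUnsignedInt(asInt16)); // 65000 recovered
    }
}
```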
I see no problem here. Storing data as unsigned integers helps reduce disk space. The manifest file ensures that the data are interpreted correctly, for example by applying a scaling factor. The same is done for Sentinel-2 data, which are stored as integers but displayed as floats (reflectance 0-1) inside SNAP.
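To illustrate the kind of scaling meant here, a sketch of mapping a stored integer digital number (DN) to a display float (the 1/10000 quantification factor is the one commonly used for Sentinel-2 reflectance; treat it as an example, not a spec reference):

```java
// Illustrative only: converting an integer DN stored on disk to a physical
// float value on the fly, as a reader can do when metadata defines a scaling.
public class ScalingDemo {
    public static void main(String[] args) {
        int dn = 4321;                     // integer value stored on disk
        float reflectance = dn / 10_000f;  // assumed Sentinel-2-style scaling
        System.out.println(reflectance);   // 0.4321, displayed as float in [0, 1]
    }
}
```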
Thanks for replying so quickly.
From what I can see (unless I’m doing something wrong), SNAP does not display the amplitudes as floats, but rather as signed integer values. Below is a screenshot showing what happens when an S1A GRD product is imported into SNAP using default settings, resulting in some pixels having negative amplitudes.
I’m not saying that there is necessarily something wrong with SNAP, but something seems to be up with either the contents of the manifest (or calibration) file, or the way in which it is parsed.
Maybe @Jan has an explanation
In the `addBands` function (in `Sentinel1Level1Directory`), the `dataType` parameter of the `Band` constructor is set to `ProductData.TYPE_INT16` for both SLC and GRD images (see lines 156 and 184, respectively). Per the Sentinel-1 Product Specification, this is correct for the former, but the latter should use the `uint16` data type. Rebuilding the toolbox with the correct data type resolves the issue, as shown in the screenshot below.
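For reference, a sketch of the change (variable and method names here are placeholders, not the actual s1tbx code; see the pull request for the real diff):

```java
import org.esa.snap.core.datamodel.Band;
import org.esa.snap.core.datamodel.ProductData;

final class BandTypeFixSketch {
    // Derive the band data type from the product type instead of
    // hard-coding TYPE_INT16 for both SLC and GRD.
    static Band createBand(String name, boolean isSLC, int width, int height) {
        final int dataType = isSLC
                ? ProductData.TYPE_INT16    // SLC: signed complex I/Q samples
                : ProductData.TYPE_UINT16;  // GRD: unsigned detected amplitudes
        return new Band(name, dataType, width, height);
    }
}
```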
I’ve submitted a pull request with the fix here.