Default Processing Bit Depth

I’m not sure if this is true, but it seems to me that SNAP operators convert incoming data products to float32 by default, so the size of the underlying data grows considerably through the processing chain. Operations like writing take much longer because of this.

Is there a way to set the default bit depth of SNAP operators to unsigned 16-bit integer (UInt16), or something close to the data type that Sentinel-1 is delivered in? Would a Convert-Datatype operation early in the graph hold throughout the rest of the operations? Is that dangerous (regarding data integrity)?
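For illustration, here is roughly the kind of early conversion I have in mind, driving gpt from Python. I haven’t verified the Convert-Datatype parameter names and values (`targetDataType`, `targetScalingStr`), so treat this as a sketch of the idea rather than a working recipe:

```python
import subprocess
import textwrap

# Hypothetical minimal graph with Convert-Datatype placed right after Read.
# The parameter names and values below are my best guess at the operator's
# interface, not verified against "gpt Convert-Datatype -h".
graph = textwrap.dedent("""\
    <graph id="Graph">
      <version>1.0</version>
      <node id="Read">
        <operator>Read</operator>
        <parameters>
          <file>S1A_IW_GRDH_example.zip</file>
        </parameters>
      </node>
      <node id="Convert-Datatype">
        <operator>Convert-Datatype</operator>
        <sources><sourceProduct refid="Read"/></sources>
        <parameters>
          <targetDataType>uint16</targetDataType>
          <targetScalingStr>Linear (slope and intercept)</targetScalingStr>
        </parameters>
      </node>
      <node id="Write">
        <operator>Write</operator>
        <sources><sourceProduct refid="Convert-Datatype"/></sources>
        <parameters>
          <file>converted.tif</file>
          <formatName>GeoTIFF</formatName>
        </parameters>
      </node>
    </graph>
""")

with open("convert_early.xml", "w") as f:
    f.write(graph)

# Run the graph with SNAP's Graph Processing Tool (assumes gpt is on PATH).
subprocess.run(["gpt", "convert_early.xml"], check=True)
```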

My alternative now is to leave my graphs as they are and use gdal_translate after the fact to reduce the file sizes of my S1 pre-processed data. As of now, processing with GPT takes about 1.5 hours, so gdal_translate would only add to that. If I could shrink SNAP processing time AND not have to run gdal_translate afterward, that would be ideal.
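This is the post-processing step I’d like to drop, shown here through GDAL’s Python bindings instead of the CLI. The source range of 0–1 for linear Sigma0 is just an illustrative assumption, as are the file names:

```python
from osgeo import gdal

gdal.UseExceptions()

# Rescale float32 Sigma0 output from the SNAP graph into UInt16.
# The source range [0.0, 1.0] is an assumption for typical linear
# Sigma0 values; adjust it to the actual data range.
gdal.Translate(
    "s1_preprocessed_uint16.tif",        # output (hypothetical name)
    "s1_preprocessed_float32.tif",       # input (hypothetical name)
    outputType=gdal.GDT_UInt16,
    scaleParams=[[0.0, 1.0, 0, 65535]],  # src_min, src_max, dst_min, dst_max
    creationOptions=["COMPRESS=DEFLATE"],
)
```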

Any ideas about this? Thank you!

Generally this is done at the end of processing, just before you export to GeoTIFF or a visual image.
A data type like UInt16 represents scaled data, for the purpose of reducing the amount of data to write.
However, within a processing chain you want to work with float values, whether calibrated Sigma0 for SAR or an index like NDVI for optical. The values are expected to fall within a certain range, and they have physical meaning within that range.

When you scale the data linearly, it loses that meaning unless you also keep the linear gain and offset needed to return it to the float representation.
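A minimal numpy sketch of what that means, with made-up numbers: once Sigma0 is quantized to UInt16, the physical values are only recoverable if the gain and offset are stored alongside the data.

```python
import numpy as np

# Fake calibrated Sigma0 values in dB (illustrative range only).
sigma0_db = np.array([-25.0, -18.3, -10.7, -2.1], dtype=np.float32)

# Choose a linear mapping from the expected physical range to UInt16.
src_min, src_max = -30.0, 5.0
gain = (src_max - src_min) / 65535.0   # dB per digital number
offset = src_min                       # dB at DN = 0

# Forward: float dB -> UInt16 digital numbers (quantization loses precision).
dn = np.round((sigma0_db - offset) / gain).astype(np.uint16)

# Inverse: only possible if gain and offset travel with the product.
recovered_db = dn.astype(np.float32) * gain + offset

print(dn)            # quantized digital numbers
print(recovered_db)  # close to the originals, within quantization error
```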


Sounds good. I will keep the data type conversion to the very end and make sure to record the gain and offset (slope and intercept), so that people know what has been done to the physical backscatter values. Thank you!