I am wondering how to exactly replicate the ‘visual’ asset (an 8-bit RGB image) of Sentinel-2 items from the [b2, b3, b4] bands (which are 16-bit).
I’ve tried different normalizations, like (x / 65_535) * 255 and (x / 10_000) * 255, but failed to exactly match the values in ‘visual’.
These normalizations give images that are way too bright or way too dark.
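For illustration, here is a small numpy sketch (with made-up DN values) of why full-range scaling comes out dark: Sentinel-2 L2A DNs are reflectance × 10 000, so typical pixel values sit far below the 16-bit maximum of 65 535.

```python
import numpy as np

# Hypothetical 16-bit band values; L2A DNs are reflectance * 10_000,
# so most land/water pixels are well below 65_535.
band = np.array([120, 800, 1500, 3000, 4500], dtype=np.uint16)

# Naive full-range scaling: almost everything maps near black.
naive = (band / 65_535 * 255).astype(np.uint8)

# /10_000 scaling is closer, but there is no clipping, so the image
# stays dim and bright outliers dominate the display range.
ref = np.clip(band / 10_000 * 255, 0, 255).astype(np.uint8)
```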
What you might need to consider is skipping the 2.5% brightest and darkest pixels of the histogram when mapping the values to RGB. This is at least what SNAP does.
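A minimal numpy sketch of that kind of percentile stretch (the function name and the exact 2.5% cutoffs are just illustrative; SNAP's actual implementation may differ):

```python
import numpy as np

def stretch_to_uint8(band, lo_pct=2.5, hi_pct=97.5):
    """Clip the darkest/brightest tails of the histogram, then
    linearly map the remaining range to 0-255."""
    lo, hi = np.percentile(band, [lo_pct, hi_pct])
    scaled = (band.astype(np.float64) - lo) / (hi - lo)
    return (np.clip(scaled, 0, 1) * 255).astype(np.uint8)

# Synthetic 16-bit band standing in for a real B04/B03/B02 array.
rng = np.random.default_rng(0)
band = rng.integers(0, 4000, size=(100, 100), dtype=np.uint16)
img8 = stretch_to_uint8(band)
```

Applied per band (B04, B03, B02) and stacked, this gives a reasonable-looking RGB, but note the stretch is scene-dependent, so it will generally not reproduce the ‘visual’ asset value-for-value.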
For more context, I am actually using a STAC API to download Sentinel-2 images in Python.
The Sentinel-2 STAC item has an assets dictionary, which has a ‘visual’ key,
i.e. item.assets[‘visual’] gives us a decent 8-bit (0–255) image for representation purposes.
So I am looking for the preprocessing step that would let me generate the same 8-bit image from the 16-bit RGB bands I have.
From which service are you using the STAC API? I think the generation of the quicklook depends on the service provider, so it is probably better to ask that provider.
Maybe the image is the same one provided with the S2 data and generated by the ESA ground segment. In that case you might ask ESA directly at eosupport@copernicus.esa.int.
The visual asset in the Planetary Computer links to the TCI_10m file output by sen2cor, converted to COG (note that the COG conversion causes the issue described in What is the valid range for the TCI product?). But the actual values come straight out of sen2cor, so we’re using whatever algorithm it uses.
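For what it's worth, a commonly cited approximation of sen2cor's TCI is a fixed clip at reflectance 0.2 (DN 2000) rather than a per-scene stretch. This sketch is an assumption to verify against ESA's product documentation, not sen2cor's documented algorithm; the 1000 BOA offset for processing baseline ≥ 04.00 is likewise something to double-check:

```python
import numpy as np

def tci_like(dn, boa_offset=0):
    """Approximate TCI rendering: subtract the BOA offset (assumed
    1000 for processing baseline >= 04.00, 0 before), clip DNs to
    [0, 2000] (reflectance 0-0.2), and scale linearly to 8 bits.

    Approximation only -- not sen2cor's documented algorithm."""
    dn = dn.astype(np.int32) - boa_offset
    return (np.clip(dn, 0, 2000) / 2000 * 255).round().astype(np.uint8)

b04 = np.array([0, 500, 1000, 2000, 4000], dtype=np.uint16)
out = tci_like(b04)
```

Because the clip point is fixed rather than scene-dependent, this kind of mapping can be reproduced exactly from the 16-bit bands, which is what the question asks for.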