This is my first post, so I’m sorry if I’m asking something that has already been covered (though I haven’t found it). I am using Sentinel-2 Level-2A images because I’m interested in “at-ground reflectance.” However, the image I am using also has visible cloud cover. In that case, what do the pixel values at the clouds represent? I could speculate, but I think it is better not to, as I’m not too familiar with Sentinel yet. I hope to use an image in which every pixel represents an at-ground reflectance.
The pixel values simply represent what the sensor measured, so they are not necessarily “at ground level.”
If there is a cloud between the sensor and the surface, the cloud is what gets measured.
Level-2 data includes cloud detection: the scene classification lets you distinguish cloud pixels from land and water pixels, etc. But the clouds themselves cannot be removed.
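To illustrate how that classification is typically used, here is a minimal sketch of masking cloud-affected pixels with the Scene Classification Layer (SCL) that ships with Level-2A products. The class codes are the standard Sen2Cor SCL values; the tiny arrays are made-up stand-ins for a real tile, which you would normally read from the JP2 files with a library such as rasterio.

```python
import numpy as np

# Sen2Cor SCL classes treated here as "cloud-affected":
CLOUD_CLASSES = [
    3,   # cloud shadows
    8,   # cloud, medium probability
    9,   # cloud, high probability
    10,  # thin cirrus
]

def mask_clouds(reflectance, scl):
    """Return reflectance with cloud-affected pixels set to NaN."""
    cloudy = np.isin(scl, CLOUD_CLASSES)
    out = reflectance.astype(float).copy()
    out[cloudy] = np.nan
    return out

# Toy 2x2 "tile": reflectance values plus the matching SCL band.
refl = np.array([[0.12, 0.30], [0.25, 0.08]])
scl = np.array([[4, 9], [8, 6]])  # vegetation, cloud, cloud, water
masked = mask_clouds(refl, scl)
print(masked)  # cloud pixels become NaN, clear pixels stay unchanged
```

This only hides the clouds (as NaN); it does not recover the surface beneath them, which is why merging multiple acquisitions is needed for a truly cloud-free image.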
If you want a cloud-free image you need to go to Level 3, where several images are merged into one.
There are several approaches to doing this. You could do it yourself in SNAP using the Mosaic or the Binning operator, but that would be quite a bit of work.
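The merging idea can be sketched as a per-pixel median over a stack of cloud-masked acquisitions of the same area. This is a simplified stand-in for what tools like SNAP’s Mosaic/Binning operators or WASP actually do; the dates and values below are made up.

```python
import numpy as np

def composite(stack):
    """Per-pixel median over a (time, rows, cols) stack, ignoring NaN
    (i.e. ignoring pixels that were masked out as cloud)."""
    return np.nanmedian(stack, axis=0)

# Three toy acquisitions of the same 2x2 area; NaN marks masked clouds.
t0 = np.array([[0.10, np.nan], [0.20, 0.30]])
t1 = np.array([[0.12, 0.40], [np.nan, 0.32]])
t2 = np.array([[0.11, 0.42], [0.22, np.nan]])
result = composite(np.stack([t0, t1, t2]))
print(result)  # every pixel now has a value, drawn from cloud-free dates
```

A median (rather than a mean) is a common choice because it is robust to residual undetected clouds or shadows that slip past the classification.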
There are also services available.
One example is the Sentinel-2 Global Mosaic. Click on “Mosaic Hub,” and after a free registration you can order the data.
There is also Theia, which provides mosaics. Alternatively, you can run the software Theia uses locally:
To use MAJA: https://github.com/CNES/Start-MAJA
To use WASP: https://github.com/CNES/WASP
Thank you; this is helpful!