I am working on a research project in which I want to use ground images. What I want to do is take a lot of images, in an automated way, and select them based on what can be seen in them, for instance whether an image shows vegetation or an urban zone. I am completely new to all this software, but I have read the Sentinel-2 handbook and it seems to be possible. Does anyone know if it is? And how would I use the Sentinel-2 Toolbox to make these queries based on ground-theme selections?
What you describe sounds like a semantic data cube to me: https://www.mdpi.com/2306-5729/4/3/102
Sentinel-2 Level-2A products contain an automated pre-classification band, the scene classification (SCL) layer (not very accurate, but still usable): https://earth.esa.int/web/sentinel/technical-guides/sentinel-2-msi/level-2a/algorithm
If you stack a lot of Sentinel-2 Level-2A products, you could use the Mask Manager to, for instance, search for pixels that are classified as snow in all of the images versus pixels that are snow in only a small proportion of them. Just a first thought on this.
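To make the stacking idea concrete, here is a minimal NumPy sketch of the same per-pixel frequency logic, outside the Toolbox. It assumes you have already read the SCL band of several co-registered Level-2A products into arrays (in practice you would load them with a raster library); the tiny 2x2 arrays here are made-up stand-ins, but the class code 11 = snow/ice comes from the L2A scene classification definition:

```python
import numpy as np

# Hypothetical stack of SCL (scene classification) arrays from three
# co-registered Sentinel-2 Level-2A acquisitions of the same 2x2 area.
# In the real SCL band, 4 = vegetation, 5 = not vegetated, 6 = water,
# 11 = snow/ice (per the L2A algorithm documentation).
SNOW = 11
scl_stack = np.array([
    [[11, 4], [11, 5]],   # acquisition 1
    [[11, 4], [ 5, 5]],   # acquisition 2
    [[11, 6], [ 5, 5]],   # acquisition 3
])

# Fraction of acquisitions in which each pixel is classified as snow.
snow_fraction = (scl_stack == SNOW).mean(axis=0)

# Pixels that are snow in every image vs. snow only occasionally.
always_snow = snow_fraction == 1.0
rarely_snow = (snow_fraction > 0) & (snow_fraction < 0.5)

print(snow_fraction)   # per-pixel snow frequency over the stack
```

The same boolean-mask approach works for any SCL class (e.g. 4 for vegetation), which is essentially what a band-maths expression in the Mask Manager would compute for you.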