Multitemporal series classification

Hello,
I want to perform a classification in SNAP based on multitemporal images, using Random Forest for example.
But I'm a bit confused about the input for the classifier:
Should I load a single file with all the multitemporal images' bands stacked together, or classify the images separately but in a single classification run?
I'm not sure I have explained my issue clearly. What I mean is: should I use the temporal series to increase the feature dimension, or should I use it to enlarge the training set?

Thanks in advance for the help.

To my understanding, you can enter images of multiple dates as predictive rasters for the training, but they need to have different names. For example, you cannot add “blue” ten times when you have images of ten days. These should be renamed to “blue1”, “blue2”, “blue3”, and so on, so that the classifier can handle them as different features.
Does this answer your question?

Well, I have to admit I'm not sure I understood what you mean, and I'm also not sure I explained my point clearly.
I'll try to explain better.

I have the same area I want to classify, and I have, for example, 3 S2 images taken on different dates.
For example, each image contains 5000 pixels, and I want to perform the classification using all 12 bands as features.
So what should I do?

Case 1:
I create a stack of the 3 images, so as input I have a 5000-pixel dataset with 36 features (12×3).

Case 2:
I input the 3 images separately (with different names) in a single RF run, so I have a 15000-pixel (5000×3) dataset with 12 features each.
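
Outside of SNAP, just to make the shapes explicit, a rough scikit-learn sketch of the two cases could look like this (all arrays and values below are placeholders, not real data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

n_pixels, n_bands = 5000, 12

# three dates of the same area, each flattened to (n_pixels, n_bands); placeholder values
img1 = np.random.rand(n_pixels, n_bands)
img2 = np.random.rand(n_pixels, n_bands)
img3 = np.random.rand(n_pixels, n_bands)
labels = np.random.randint(0, 4, n_pixels)   # placeholder reference classes per pixel

# Case 1: stack along the feature axis -> 5000 samples, 36 features
X_case1 = np.hstack([img1, img2, img3])      # shape (5000, 36)
y_case1 = labels

# Case 2: concatenate along the sample axis -> 15000 samples, 12 features
X_case2 = np.vstack([img1, img2, img3])      # shape (15000, 12)
y_case2 = np.tile(labels, 3)                 # assumes the reference labels hold for each date

rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_case1, y_case1)                     # or rf.fit(X_case2, y_case2)
```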

Thanks for your help.

I would prefer Case 1, because you then get full control over the names of all 36 features, while in the second case SNAP might automatically rename rasters that have the same name.

But I guess the two cases may lead to different results; which one do you consider more accurate?

I don’t think there will be a considerable difference.
What matters more is the size and number of the training samples, and the number of trees.

I have another doubt: if I stack all the images (with their respective bands) in a single file and then run the RF on it, to which image does the final classification refer?
If there are 3 images but the classification output is a single map, which one of the images is actually being classified?

Because I want to use this classification to spot changes in land cover, I think I need every image to have its own classification rather than a single output, right?

The classifier no longer refers to one image, but rather makes use of the dynamics between the images. For example, an urban area might show the same reflectance over the whole year, while the infrared band of a forest or cropland will show considerable dynamics which help to discriminate it from permanent vegetation (e.g. grassland).
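
A toy sketch of that idea (with made-up reflectance values, not real S2 data): in the stacked case each pixel's feature vector contains the same band at several dates, so the classifier can learn from the temporal profile rather than from a single snapshot.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200

# NIR reflectance at 3 dates (synthetic): urban stays flat, cropland varies strongly
urban    = rng.normal([0.20, 0.20, 0.20], 0.02, size=(n, 3))
cropland = rng.normal([0.15, 0.45, 0.25], 0.02, size=(n, 3))

X = np.vstack([urban, cropland])   # shape (400, 3): one NIR value per date
y = np.array([0] * n + [1] * n)    # 0 = urban, 1 = cropland

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))             # the temporal profile separates the two classes
```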

But you are right: if you want to detect changes, this does not work, and you will have to use only one image at a time. Maybe you should have mentioned that earlier :wink:
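
In that per-image setup, a rough sketch (again outside SNAP, with placeholder arrays) could be: train one model on labelled pixels, classify each date separately, and then compare the label maps pixel by pixel.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rows, cols, bands = 100, 100, 12

# two dates of the same area as (rows, cols, bands) arrays; placeholder values
image_t1 = np.random.rand(rows, cols, bands)
image_t2 = np.random.rand(rows, cols, bands)

# labelled training pixels with the same 12 features; placeholder values
X_train = np.random.rand(500, bands)
y_train = np.random.randint(0, 4, 500)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

labels_t1 = clf.predict(image_t1.reshape(-1, bands)).reshape(rows, cols)
labels_t2 = clf.predict(image_t2.reshape(-1, bands)).reshape(rows, cols)

change_map = labels_t1 != labels_t2   # True where the predicted class changed between dates
```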