View azimuth, sun zenith, collocation flags in Random Forest classification

I am just wondering which variables I should include in my Random Forest classification after collocation. Apart from the bands, there are also: tile_ids, view_zenith, sun_zenith and collocation flags. Should I select these too, alongside the bands?



No, you should only include rasters that have explanatory value for your classes. I don’t think the sun angle or the tile ID is related to the surfaces you want to classify. You can open all bands and check whether they show a pattern that helps you or not.
But please consider that two bands are probably not enough for a Random Forest classification. Please have a look at my comment here: Classification of GRD product
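One quick numeric way to check whether a layer has explanatory value is to train a Random Forest on all candidate layers and inspect the feature importances. A minimal sketch with scikit-learn, using synthetic data (the band values, the sun-zenith layer, and the class rule are all made up for illustration; in practice the columns would be pixel samples from your rasters):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000
# Two "spectral bands" that actually separate the classes...
band1 = rng.normal(0.2, 0.05, n)
band2 = rng.normal(0.4, 0.05, n)
labels = (band1 + band2 + rng.normal(0, 0.02, n) > 0.6).astype(int)
# ...and a sun-zenith-like layer that varies but is unrelated to the class.
sun_zenith = rng.uniform(30, 60, n)

X = np.column_stack([band1, band2, sun_zenith])
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

for name, imp in zip(["band1", "band2", "sun_zenith"], rf.feature_importances_):
    print(f"{name}: {imp:.3f}")
# The sun-zenith-like layer should receive a much lower importance than the
# two informative bands, which is a quick check for "no explanatory value".
```

A layer with consistently near-zero importance is a good candidate to drop from the stack.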

Thanks Braun. I used wet and dry season, so I have 4 bands. With these I managed a fair classification accuracy. However, when I included texture variables from GLCM, the accuracy decreased, probably because the features I am trying to classify (woody vegetation, grasslands, hedges) have very small intra-class variation.

Another thing that’s a bit strange: with Sentinel-2 alone I get good overall classification accuracy, but low user’s and producer’s accuracies for grassland and hedges. Sentinel-1 dry and wet season give me better user’s and producer’s accuracy for these classes (though lower overall accuracy). Therefore I would expect that fusing the two images should give me higher user’s and producer’s accuracies for these two classes, right? Yet I now get 0% for both. Could it be the fusion technique (collocation)? Is stacking better? What options do I have for fusion?
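For reference, per-class user’s accuracy is the same as precision and producer’s accuracy is the same as recall, both read directly off the confusion matrix. A minimal sketch with scikit-learn (the labels and class names are invented for illustration):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

classes = ["woody", "grassland", "hedge"]
y_true = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 0])  # reference samples
y_pred = np.array([0, 0, 1, 0, 1, 2, 2, 1, 2, 0])  # classifier output

cm = confusion_matrix(y_true, y_pred)       # rows = reference, cols = predicted
producer = np.diag(cm) / cm.sum(axis=1)     # recall per class
user = np.diag(cm) / cm.sum(axis=0)         # precision per class

for c, p, u in zip(classes, producer, user):
    print(f"{c}: producer={p:.2f}, user={u:.2f}")
```

If a class drops to 0% for both measures after fusion, it never appears correctly in the predictions at all, which is worth distinguishing from a merely lower score: it often points to a data problem (misalignment, NaN bands) rather than a weak classifier.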



You can visually check the quality of your stack with an RGB image that contains data from both sensors. Both stacking and collocation simply overlay all products, so their geolocation must already be exact beforehand.
Another way of merging them would be a PCA, but the same applies here: the geocoding of both products must be good.

Which bands go into the RGB with the stack of 14 bands (10 S2 and 4 S1)?

Also, for the geolocation, I used a subset polygon to obtain the AOI for both Sentinels. Should this be enough to ensure exact overlays, or should I do some co-registration maybe?

Which bands go into the RGB with the stack of 14 bands (10 S2 and 4 S1)?

At least one S1 and one S2 image so you can see if the chosen scenes of both sensors spatially match or if there is a shift.
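One way to build such a check outside SNAP is to put an S1 band in the red channel and an S2 band in green and blue: a spatial shift then shows up as red/cyan fringes along edges. A sketch with numpy, using random arrays as stand-ins for the exported bands (the band names in the comments are assumptions):

```python
import numpy as np

def stretch(band):
    """Linear 2-98 percentile stretch to [0, 1] for display."""
    lo, hi = np.percentile(band, [2, 98])
    return np.clip((band - lo) / (hi - lo), 0, 1)

s1_vv = np.random.default_rng(2).normal(size=(100, 100))   # e.g. S1 VV backscatter
s2_red = np.random.default_rng(3).normal(size=(100, 100))  # e.g. S2 B4

# Red = S1, Green = Blue = S2: well-aligned scenes blend to grey tones,
# while a geolocation shift produces red/cyan doubling along features.
rgb = np.dstack([stretch(s1_vv), stretch(s2_red), stretch(s2_red)])
print(rgb.shape)
```

The resulting array can be displayed with any image viewer (e.g. matplotlib's `imshow`) to inspect the overlay visually.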

Coregistration is designed for SAR data, but it is surely the most accurate option. You can try to enter both products into the tool.

Ok. Let me try the RGB combination and coregistration.