Classification of GRD product

OK, thanks. But do you know any citation reference for the sentence below?
The good thing with random forest classifiers is that they select the layers which are most useful.

The work of Leo Breiman can be seen as a foundation:

And in practice:
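If it helps to see this in practice outside of SNAP, here is a hedged sketch using scikit-learn's Random Forest: its feature_importances_ attribute is normalized to sum to 1, so it can be read as a percentage per input layer. The band names and data below are made-up placeholders, not SNAP output.

```python
# Sketch only: illustrates Random Forest feature importance with scikit-learn,
# not SNAP's own Random Forest output. Band names and samples are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
feature_names = ["Sigma0_VV", "Sigma0_VH", "GLCM_Contrast", "Elevation"]  # hypothetical stack
X = rng.normal(size=(500, len(feature_names)))   # placeholder training samples
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)    # placeholder class labels

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# feature_importances_ sums to 1, so multiplying by 100 gives percent
for name, imp in zip(feature_names, rf.feature_importances_):
    print(f"{name}: {imp * 100:.1f} %")
```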


Thanks a lot for the information, but I am a bit confused about the second, third and fourth points you suggested.

Please tell me what exactly is unclear and I'll try to help.

The aforementioned points which you suggested.

These are just suggestions on what data can be added to increase the feature space.

I just want to thank @ABraun for the support and the reading material he has provided and linked for forum readers. I have learnt a lot today.


Did you find the answer? I was wondering the same, but I'm not able to do it.
As you posted, what I need to know is exactly what the feature importance is, in percent.

Thanks in advance!

I never figured it out. I used one of the measures within the output and graphed it.

OK, thank you! I think it would be good to try again with a new post, just in case someone has found something.
I will post it tomorrow.
Regards.

Hello,

I am trying to classify a Sentinel-1 image. Which software did you use for the classification in your project? If it is SNAP, how can I import a shapefile made in ArcGIS to use as training data?

Thank you in advance.

Best regards,

Hey @ABraun, I have used "S1A_IW_GRDH_1SDV_20180911T001147_20180911T001212_023643_02939E_9162" and classified it into a binary image showing water and non-water. I classified the data using two different methods:

  1. Raw data > Apply Orbit File > Calibration (to beta0) > Speckle Filtering > Terrain Flattening > Range Doppler Terrain Correction > Supervised Classification (Maximum Likelihood) (see the sketch after this list)

  2. Raw data > Apply Orbit File > Calibration (to sigma0) > GLCM (all available texture products derived) > Random Forest Classifier
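For reference, chain 1 could also be scripted outside the Graph Builder with SNAP's Python interface (snappy). This is only a sketch: the operator names follow SNAP's GPF conventions, but the parameter values and the .zip extension are assumptions that should be checked against your SNAP version (e.g. with gpt -h Terrain-Flattening).

```python
# Sketch only: pre-processing chain 1 via SNAP's snappy/GPF interface.
# Operator and parameter names follow SNAP conventions but should be
# verified for your SNAP version.
from snappy import ProductIO, GPF, HashMap

def run(op_name, source, **params):
    p = HashMap()
    for key, value in params.items():
        p.put(key, value)
    return GPF.createProduct(op_name, p, source)

# Assuming the GRD product was downloaded as a .zip archive
product = ProductIO.readProduct(
    "S1A_IW_GRDH_1SDV_20180911T001147_20180911T001212_023643_02939E_9162.zip")

orb  = run("Apply-Orbit-File", product)
cal  = run("Calibration", orb, outputBetaBand="true", outputSigmaBand="false")
spk  = run("Speckle-Filter", cal, filter="Refined Lee")
flat = run("Terrain-Flattening", spk)
tc   = run("Terrain-Correction", flat, demName="SRTM 3Sec")

ProductIO.writeProduct(tc, "preprocessed_chain1", "BEAM-DIMAP")
```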

What I observed is that the output of both methods was almost the same. What I mean to say is that you said classifying a SAR image while preserving texture properties with GLCM gives much better results than other classification methods, but here the results from both methods were visually similar. I didn't find image 2 superior to image 1. May I know why this happened?

This strongly depends on your data. Sometimes textures enhance structures you wouldn't see otherwise, and sometimes texture is just a blurred version of your original data.

Especially for a binary classification (water/non-water): if, in some places, the presence of water is ambiguous (shallow water or roughened by waves), texture won't help you much unless you include these areas in your training set.
Also make sure that your training samples are large enough. Collect many of them, covering various situations.

@ABraun Thanks for your kind reply. I want to generate a flood inundation map from the given SAR image.

  1. Can I achieve this accurately just by classifying the image with one of the supervised classification algorithms, or should I go with a thresholding technique like Otsu's or the Kittler-Illingworth approach and then generate the map?

  2. Can I include Matlab code in the Graph Builder in S1TBX to automate the whole process of pre-processing, analysis/processing and post-processing? What would be the alternative for automation in S1TBX if incorporating external Matlab code is not possible in the Graph Builder?

If you have the chance to implement it in Matlab, I would suggest using thresholding approaches.
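If Matlab is not at hand, the same thresholding idea can be sketched in Python on a sigma0 band exported from SNAP (e.g. as GeoTIFF). The file names are hypothetical, and rasterio and scikit-image are assumptions of this example, not part of SNAP.

```python
# Sketch only: Otsu thresholding of a calibrated sigma0 band exported from SNAP.
# File names and the dB conversion are assumptions.
import numpy as np
import rasterio
from skimage.filters import threshold_otsu

with rasterio.open("Sigma0_VV_terrain_corrected.tif") as src:  # hypothetical export
    sigma0 = src.read(1).astype("float32")
    profile = src.profile

valid = sigma0 > 0                      # mask no-data / zero pixels
db = 10.0 * np.log10(sigma0[valid])     # work in dB, where water and land separate well

thresh = threshold_otsu(db)             # global Otsu threshold
water = np.zeros(sigma0.shape, dtype="uint8")
water[valid] = (db < thresh).astype("uint8")  # low backscatter -> water

profile.update(dtype="uint8", count=1, nodata=None)
with rasterio.open("water_mask.tif", "w", **profile) as dst:
    dst.write(water, 1)
```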

Some tutorials on radar and Matlab:

@ABraun Can I import Matlab or Python code into SNAP/S1TBX and place it in the Graph Builder to make a continuous processing chain? If yes, how? If no, what is the best alternative for doing the whole pre-processing with the tools available in SNAP/S1TBX, plus my own analysis part, as a continuous and fully automatic processing chain?

Including Matlab or Python code is not possible, but you have several options to automate processing chains:
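One common option, sketched here: save the chain from the Graph Builder as an XML graph and call SNAP's gpt command-line tool in a loop. The file names and the ${input}/${output} placeholders are assumptions you would define in your own graph.

```python
# Sketch only: batch processing with SNAP's gpt command-line tool.
# "my_graph.xml" is a graph saved from the Graph Builder that uses
# ${input} and ${output} placeholders (an assumption of this example).
import glob
import subprocess

for scene in glob.glob("scenes/S1A_IW_GRDH_*.zip"):
    output = scene.replace("scenes/", "processed/").replace(".zip", ".dim")
    subprocess.run(
        ["gpt", "my_graph.xml",
         "-Pinput=" + scene,
         "-Poutput=" + output],
        check=True,
    )
```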

@ABraun In the above-mentioned paper of yours ( http://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XL-7-W3/777/2015/isprsarchives-XL-7-W3-777-2015.pdf ), you added all the layers together to check their combined effect on the classification results. But if I may ask, what window size would you suggest for the GLCM (3, 5 or 9) for classification?
One more thing: for better results, is it OK to give these generated GLCM products, together with elevation data, as input for SVM classification (in ENVI or ERDAS) as well, or should we do as with Maximum Likelihood and other supervised classifications and simply give the intensity band as input to the SVM classifier?

Hard to say, to be honest. None of the textures was among the 15 most important features, yet they are helpful for the delineation of fine patterns. So I'd say combining both small and large texture measures gives the best result. If you only want to integrate one size, I'd recommend 3 as a direct measure of pixel neighbors. However, this works best if no speckle filtering, multi-looking, or resampling during terrain correction and geocoding was applied beforehand.
If you want to integrate texture or elevation, you have to be careful because they have different units and value ranges. A key requirement for SVM classifiers is therefore standardization of the input parameters (a sketch follows below). In contrast, Random Forest classifiers are based on thresholds and don't need standardization.
Maximum Likelihood is not very suitable because it relies on input data with a Gaussian distribution (plus standardization). It is rather used for multispectral imagery with homogeneous data ranges.
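A hedged sketch of what that standardization could look like before an SVM, using scikit-learn; the feature stack, value ranges and labels are placeholders, and this is not the ENVI/ERDAS implementation.

```python
# Sketch only: z-score standardization of a mixed feature stack (backscatter,
# GLCM texture, elevation) before an SVM; a Random Forest would skip the scaler.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Placeholder training samples: columns stand for Sigma0 (dB), GLCM contrast, elevation (m)
X_train = np.column_stack([
    rng.normal(-12, 3, 300),
    rng.uniform(0, 50, 300),
    rng.uniform(100, 2000, 300),
])
y_train = rng.integers(0, 2, 300)  # placeholder class labels

# SVM: standardize so all features share a comparable value range
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_train, y_train)

# Random Forest: threshold-based, works on the raw value ranges
rf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
```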

  1. Raw data > Apply Orbit File > Calibration (sigma0 in dB) > Speckle Filtering > Terrain Correction > SVM classification
  2. Raw data > Apply Orbit File > Calibration (sigma0 in dB) > GLCM > Terrain Correction > SVM classification

Which one of these two would you suggest?