Classification of GRD product

OK, thank you! I think it might be good to try again with a new post, in case someone has found something.
I will post it tomorrow.
Regards.

Hello,

I am trying to classify a Sentinel-1 image. Which software did you use in your project for the classification? If it is SNAP, how can I import a shapefile made in ArcGIS to use as training data?

Thank you in advance.

Best regards,

Hey @ABraun, I used “S1A_IW_GRDH_1SDV_20180911T001147_20180911T001212_023643_02939E_9162” and classified it into a binary image showing water and non-water. I classified the data using two different methods:

  1. Raw data > Apply Orbit File > Calibration (to beta0) > Speckle Filtering > Terrain Flattening > Range Doppler Terrain Correction > Supervised Classification (Maximum Likelihood)

  2. Raw data > Apply Orbit File > Calibration (to sigma0) > GLCM (derived all the products available) > Random Forest classifier
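For reference, the GLCM step in chain 2 builds a gray-level co-occurrence matrix and derives texture measures from it. A minimal pure-NumPy sketch (one pixel offset only, simplified quantization — SNAP's GLCM operator handles multiple offsets and window sizes) could look like this:

```python
import numpy as np

def glcm_features(img, levels=8, offset=(0, 1)):
    """Gray-level co-occurrence matrix for a single pixel offset,
    plus three common texture measures (contrast, energy, homogeneity)."""
    # Quantize the image into a small number of gray levels
    if img.max() > 0:
        q = np.floor(img / img.max() * (levels - 1)).astype(int)
    else:
        q = np.zeros(img.shape, dtype=int)
    dy, dx = offset
    h, w = q.shape
    glcm = np.zeros((levels, levels))
    # Count co-occurring gray-level pairs for the chosen offset
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    glcm /= glcm.sum()  # normalize counts to probabilities
    i, j = np.indices(glcm.shape)
    return {
        "contrast":    np.sum(glcm * (i - j) ** 2),
        "energy":      np.sum(glcm ** 2),
        "homogeneity": np.sum(glcm / (1.0 + np.abs(i - j))),
    }
```

In practice this is computed per moving window to produce a texture band for each measure.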

What I observed is that the output of both methods was almost the same. What I mean to say is: you said that classifying a SAR image while preserving texture properties with GLCM gives much better results than other classification methods, but here the results from both methods were visually similar. I didn't find image 2 superior to image 1. May I know why this happened?

this strongly depends on your data. Sometimes textures enhance structures you wouldn't see without them, and sometimes texture is just a blurred version of your original data.

Especially for a binary classification (water/non-water): if, at some places, the presence of water is ambiguous (shallow water, or water roughened up by waves), texture won't help you much unless you include these areas in your training set.
Also make sure that your training samples are large enough. Collect many and of various situations.

@ABraun Thanks for your kind reply. I want to generate a flood inundation map from the given SAR image.

  1. Can I achieve this accurately just by classifying the image with one of the supervised classification algorithms, or should I go with a thresholding technique such as Otsu's or the Kittler–Illingworth approach and then generate the map?

  2. Can I include MATLAB code in the Graph Builder in S1TBX to automate the whole process of pre-processing, analysis/processing and post-processing? What would be the alternative for automation in S1TBX if incorporating external MATLAB code is not possible in the Graph Builder?

If you have the chance to implement it in MATLAB, I would suggest using thresholding approaches.
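Otsu's method mentioned above is straightforward to implement yourself. A minimal sketch in NumPy (rather than MATLAB), picking the threshold that maximizes between-class variance of the histogram, might look like:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's method: choose the gray level that maximizes the
    between-class variance of the image histogram."""
    hist, bin_edges = np.histogram(img.ravel(), bins=nbins)
    hist = hist.astype(float) / hist.sum()
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2.0
    w0 = np.cumsum(hist)            # probability of class 0 (below threshold)
    w1 = 1.0 - w0                   # probability of class 1 (above threshold)
    mu = np.cumsum(hist * centers)  # cumulative mean
    mu_t = mu[-1]                   # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        var_between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    var_between = np.nan_to_num(var_between)
    return centers[np.argmax(var_between)]
```

Applied to a calibrated backscatter band, pixels below the returned threshold would be labeled water and the rest non-water.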

Some tutorials on radar and Matlab:

@ABraun Can I import MATLAB or Python code into SNAP S1TBX and place it in the Graph Builder to make a continuous processing chain? If yes, how? If no, what is the best alternative to do the whole pre-processing with the tools available in SNAP S1TBX, plus my own analysis part, as a continuous and fully automatic processing chain?

including MATLAB or Python code is not possible, but you have several options to automate processing chains:

@ABraun In the above-mentioned paper of yours ( http://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XL-7-W3/777/2015/isprsarchives-XL-7-W3-777-2015.pdf ), you added all the layers together to check their combined effect on the classification results. But if I may ask, which window size would you ultimately suggest for the GLCM (3, 5 or 9) for classification?
One more thing: for better results, is it OK to give these generated GLCM products, together with elevation data, as input to an SVM classification (in ENVI or ERDAS) as well? Or should we do as with Maximum Likelihood and other supervised classifications and simply give the intensity band as input to the SVM classifier?

hard to say, to be honest. None of the textures was among the 15 most important features, yet they are helpful for the delineation of fine patterns. So I'd say combining both small and large texture measures gives the best result. If you only want to integrate one size, I'd recommend 3, as a direct measure of pixel neighbors. However, this works best if no speckle filtering, multi-looking, or resampling during terrain correction and geocoding was applied beforehand.
If you want to integrate texture or elevation, you have to be careful because they have different units and value ranges. A key requirement for SVM classifiers is therefore standardization of the input parameters. In turn, Random Forest classifiers are based on thresholds and don't need standardization.
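The standardization mentioned here is typically a per-band z-score. A small sketch (assuming the stack is already co-registered into a single array of shape (bands, rows, cols)):

```python
import numpy as np

def standardize_bands(stack):
    """Z-score each band so that intensity, texture and elevation
    layers share a comparable value range before SVM training.
    stack shape: (bands, rows, cols)."""
    means = stack.mean(axis=(1, 2), keepdims=True)
    stds = stack.std(axis=(1, 2), keepdims=True)
    stds[stds == 0] = 1.0  # avoid division by zero for constant bands
    return (stack - means) / stds
```

After this, each band has zero mean and unit variance, so no single layer (e.g. elevation in meters vs. backscatter in dB) dominates the SVM's distance computations.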
Maximum Likelihood is not very suitable because it relies on input data with a Gaussian distribution (plus standardization). It is rather used for multispectral imagery with homogeneous data ranges.

  1. Raw data > Apply Orbit File > Calibration (sigma0 in dB) > Speckle Filtering > Terrain Correction > SVM classification
  2. Raw data > Apply Orbit File > Calibration (sigma0 in dB) > GLCM > Terrain Correction > SVM classification

Which one of these two would you suggest?

Well, one prepares the image and the other its texture. Both are complementary information and important for classification.

Thank You for your valuable suggestions :grinning:

Hey @ABraun, as per your suggestion I have been using GPT. I was able to do the pre-processing as a processing chain, but so far I could not understand how to extend it to my own analysis part (i.e. how to include my own code for further processing).

I don’t think GPT can directly call code from other software, but you can write a routine in GPT which processes the data to a certain point and writes an intermediate output file. The routine then calls the external code to process that file, and afterwards proceeds with GPT commands again.

On Windows, this could be done with batch scripting.

How do I write a routine in GPT? Do you have any example code of this kind?

One more thing: is it easy for a beginner like me to add a plugin to SNAP Desktop using Python?

It is nicely described here: https://www.computerhope.com/jargon/b/batchfil.htm
You simply list the steps in the correct order, something like this:

gpt graph1.xml -t outfile1.tif
external_command.exe outfile1.tif outfile2.tif
gpt graph2.xml -t final.tif outfile2.tif

@marpet Is running a process through GPT faster, or is processing an image via the snappy interface faster?
And which one is more convenient if I want to include my own Python script in a few steps of the processing for my analysis?

The performance should be the same, but Python is known to struggle with memory allocation and clearing. While GPT fully clears the cache after each processed product in the list (the Java virtual machine is restarted), Python accumulates temporary variables and becomes slow. One reason for this is that Python often uses only one processor core (instead of parallel processing), which also makes it potentially slower if you have a strong machine with multiple cores.
As far as I know, there are plans to tackle these problems in the next releases of SNAP, but there is no date for this yet.

Automating the calling of the XML graphs by GPT from Python scripts seems a good trade-off. It is nicely described here: https://senbox.atlassian.net/wiki/spaces/SNAP/pages/70503590/Creating+a+GPF+Graph
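As a sketch of that trade-off, a Python script can assemble and launch gpt calls with the standard subprocess module. The `-t` (target file) and `-P` (graph parameter) options are regular gpt command-line flags; the path to the gpt executable depends on your SNAP installation:

```python
import subprocess

def gpt_command(gpt_path, graph_xml, target, *sources, properties=None):
    """Assemble a SNAP gpt call of the form:
    gpt <graph.xml> -t <target> [-Pkey=value ...] <sources>"""
    cmd = [gpt_path, graph_xml, "-t", target]
    for key, value in (properties or {}).items():
        cmd.append(f"-P{key}={value}")  # parameters referenced in the graph XML
    cmd.extend(sources)
    return cmd

def run_gpt(cmd):
    # check=True raises CalledProcessError if gpt exits with an error code
    subprocess.run(cmd, check=True)
```

Between two such calls you are free to run any of your own Python analysis on the intermediate file, which is exactly the "process to a certain point, call external code, continue" pattern described earlier in the thread.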

You can also write a batch script for this (instead of a python script) as described here:
https://senbox.atlassian.net/wiki/spaces/SNAP/pages/70503475/Bulk+Processing+with+GPT

Also make sure that gpt makes use of your computational resources (~80% of your RAM is usually suggested). It is described here:


https://senbox.atlassian.net/wiki/spaces/SNAP/pages/15269950/SNAP+Configuration


@ABraun 1. Here, in the batch file, I couldn’t understand how to pass the input (which I got as output from gpt) to external_command.exe. I mean, in the external_command.exe Python script, how should I write the code to take it as input? Kindly show me an example of such external code.
2. Here we are reading and writing the product two times (first in gpt and second in external_command). Because of this, I thought the whole process would take much longer than running one full script, with pre-processing and analysis included, using snappy. Is this true?