GLCM worsening accuracy results?

Are there instances when GLCM does not improve classification results? I added texture parameters to my Sentinel-1 bands and actually got a worse result than using the dual-polarized bi-temporal Sentinel-1 bands alone.

Anyone experienced this?


What are you actually trying to do?
Are you trying to solve a classification problem?
What type of features are shown in your image?

Do you have screenshots of your results to post on here?

Different features on the Earth's surface have different textures (e.g. trees, roads, buildings, water), so texture analysis can add value to your final results.

Maybe we first need to clarify which accuracy you mean. The accuracy reported by SNAP is the training accuracy. It tells you how well your model represents the training data.
If you later validate your classification against actual validation areas (which should be different from your training samples), you get the classification accuracy. It tells you how well the model predicts the classes outside the trained areas.
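To make the distinction concrete, here is a generic sketch with scikit-learn on synthetic data (not SNAP's implementation; the feature and label arrays are hypothetical stand-ins for band stacks and class samples):

```python
# Illustration of training accuracy vs. classification (validation) accuracy.
# Synthetic data only; real workflows would extract samples from rasters.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                          # e.g. 6 band/texture layers
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

# Hold out separate validation samples, distinct from the training set
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

train_acc = accuracy_score(y_tr, clf.predict(X_tr))    # "training accuracy"
val_acc = accuracy_score(y_val, clf.predict(X_val))    # "classification accuracy"
print(train_acc, val_acc)
```

The training accuracy is typically the higher of the two, so it is the validation figure that tells you how the map will behave outside your sample polygons.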

I think it strongly depends on the classifier used: Random Forest, for example, automatically favors the input layers with the highest information content. The more layers you use, the better your training accuracy becomes (in most cases), and it doesn't matter much if some of the input rasters are redundant in their information content (which is often the case for texture measures).

I could however imagine that MaxLike or MinDist have problems with larger numbers of input rasters and training accuracy may indeed go down.
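Random Forest's tolerance of redundant layers can be sketched with synthetic data (assuming scikit-learn; the "band", "texture", and "noise" columns below are hypothetical stand-ins, not real Sentinel-1 layers):

```python
# Sketch: Random Forest spreads importance over informative/redundant columns
# and assigns little to uninformative ones. Synthetic data only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
informative = rng.normal(size=(400, 2))                # stand-in for VV, VH
redundant = informative[:, :1] + 0.01 * rng.normal(size=(400, 1))  # near-copy "texture"
noise = rng.normal(size=(400, 3))                      # uninformative layers
X = np.hstack([informative, redundant, noise])
y = (informative[:, 0] + informative[:, 1] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
print(np.round(clf.feature_importances_, 3))
```

Inspecting `feature_importances_` after training is one quick way to check whether your GLCM layers actually contribute information or are merely duplicating the backscatter bands.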


Thanks. I am trying to carry out a land cover classification which captures fine-scaled habitats like hedges and grasslands in a fairly heterogeneous African landscape. My image consists of interspersed woodlands, croplands, bare soil, grasslands, etc. My main aim is to capture the under-represented but important habitats like hedges and grassland for ecological purposes.
Could texture analysis help in discriminating between hedges, grassland and woody land covers, or are they too similar?

Thanks. I am talking about accuracy based on validation areas (a 70-30 partitioning of training and validation data). I carried this out in SCP in QGIS.
I have also used Random Forest for both classifications. Thanks for the information on RF's ability to deal with redundancy, since I used all the texture variables within GLCM.

Alright, thank you. So you are saying that the classification accuracy went down when using more input layers in a Random Forest classifier?
It is true that at some point more input rasters no longer increase the accuracy, but it actually shouldn't decrease…

One point regarding textures: I have found them quite helpful in my studies in African landscapes so far:

Thanks for the response and the articles. These will be very useful to me.
Most applications that I have seen use SAR to discriminate between fairly distinct land cover classes e.g. croplands, water, woody vegetation and urban, and texture analysis seems to work very well in these cases.
Could the high similarity between the classes I am looking at (woody, grassland, hedges) be the issue here, such that their textures are too alike to be distinguished by SAR?

There is a paper which you authored: "Combined use of SAR and optical data for environmental assessments around refugee camps in semiarid landscapes"

It can be observed that S1 or SRTM data alone are not suitable for a classification. Even though they provide more predictor layers, their TAs are noticeably below the one of the L8-only

Doesn’t this observation corroborate my findings?

This was the case in my study but I would not generally say that you can’t get a proper classification without optical data.

Thanks. In my case it seems that optical Sentinel-2 gives the best results.

Now, something interesting that I have noticed.

  1. When I classify using the dual polarization only, the percent cover for woody vegetation is 45%, which seems fairly realistic.

  2. When I derive the texture parameters and use them as input to the classification, the percent cover for woody vegetation goes up to 76%, which is totally off.

  3. Finally, when I stack the VV+VH bands together with the texture bands (so more variables), the percent cover for woody vegetation goes up to 92%.

The results seem to be biased towards woody vegetation as the number of texture variables increases.
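For comparing runs like the three above, per-class percent cover can be tallied directly from the classified raster. A minimal sketch with NumPy (the class codes and the toy 3x3 map are hypothetical; a real map would be loaded with e.g. rasterio):

```python
# Sketch: per-class percent cover from a classified raster array.
# Class codes are assumed: 1 = woody, 2 = grassland, 3 = cropland.
import numpy as np

classified = np.array([[1, 1, 2],
                       [3, 1, 2],
                       [1, 2, 3]])  # toy 3x3 classification map

values, counts = np.unique(classified, return_counts=True)
percent = dict(zip(values.tolist(),
                   (100 * counts / classified.size).round(1).tolist()))
print(percent)  # {1: 44.4, 2: 33.3, 3: 22.2}
```

Running this on each of the three classified maps makes the drift in woody-vegetation cover (45% → 76% → 92%) easy to track as layers are added.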

If you are looking at vegetation, 6-day InSAR coherence may help, if available. The best 12-day pairs should also help, depending on where your test site is.

Thanks. Can I compute the 6-day InSAR coherence in SNAP? Where can I get these data?