I am currently trying to calculate GLCM matrices for Sentinel-2 data. However, the results are not satisfactory.
Usually, the GLCM images look something like this.
E.g. the input parameters for the GLCM calculation above were the following:
It seems like something went wrong… Any suggestions?
EDIT: The input bands in the picture above were typed wrongly, which was just for visualisation.
try all angles instead of 0.
Have you set a NoData value in the file band properties?
Alright so I tried to enable the use of a NoData value in the product explorer.
Unfortunately, it didn’t result in the expected image using the enabled NoData value and the all angles option. Now I only see a white scene with one value…
hm, I just tried it with a band from S2 AWS and it worked fine.
Where did you get the data?
Was it somehow pre-processed?
Can you post a screenshot of the full GLCM module window?
@ABraun Thanks for your help so far!
I downloaded the data from the Sentinel Hub and applied some preprocessing (L2A processing, resampling, subsetting). Processing was done entirely in SNAP, and a screenshot of the full GLCM window was given above.
Are there any differences between data downloaded from Amazon and the S2 Hub?
yes, AWS allows the download of single tiles and bands (instead of full products); they are also in integer format instead of float. This could actually be the reason.
I just tried B9 from an original S2 image (after sen2cor) and this is what I got:
It seems that the GLCM can’t handle float values properly. The values are all too similar, so the algorithm recognizes no pattern/texture between the different pixels. I played around with the parameters a bit (quantization levels, window size, quantizers…) but didn’t find a solution yet. However, I remember using these textures with dB SAR data, which was also in float format.
So, a kind of workaround would be to multiply your source bands with 100 and then perform the texture analysis:
This is not a perfect solution but maybe brings you closer to your desired textures.
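To illustrate why the workaround helps, here is a minimal numpy sketch (not the SNAP implementation, and the sample values are hypothetical): casting float reflectance straight to integer gray levels collapses everything into one level, while multiplying by 100 first preserves distinct levels for the quantizer.

```python
import numpy as np

# Hypothetical float reflectance values in the typical 0..1 range
band = np.array([[0.125, 0.25], [0.375, 0.5]], dtype=np.float32)

# Naive integer cast: all pixels collapse to a single gray level,
# so no texture can be detected between them
naive = band.astype(np.int32)
print(np.unique(naive))    # [0] - one gray level only

# Workaround: multiply by 100 first, then cast
scaled = (band * 100).astype(np.int32)
print(np.unique(scaled))   # [12 25 37 50] - distinct levels preserved
```

In SNAP itself the same scaling can be done with a Band Maths expression before running the GLCM operator.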
Just a remark: there is no difference between AWS S2 data and SciHub data, except the packing/grouping of files. From an “original” S2 product, the SNAP S2 reader knows how to convert DN to reflectance (of type float), whilst if you open just a jp2 file (a single band), the SNAP JP2 reader doesn’t know how to do that and you only have DN.
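The DN-to-reflectance conversion described above can be sketched like this (assuming the standard Sentinel-2 quantification value of 10000 and no radiometric offset; this is an illustration, not the SNAP reader code):

```python
# Sentinel-2 products store reflectance as integer digital numbers (DN).
# The SNAP S2 reader divides by the quantification value to get float reflectance.
QUANTIFICATION_VALUE = 10000  # standard S2 value; no offset assumed here

def dn_to_reflectance(dn):
    """Convert an integer DN to float surface reflectance."""
    return dn / QUANTIFICATION_VALUE

print(dn_to_reflectance(1250))  # 0.125
```

Opening a single jp2 band skips this step, which is why you only see integer DN there.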
thanks for clarification, @kraftek! I always wondered why there are sometimes integers and sometimes floats.
@c3po: Did it work?
@ABraun : Thanks for that workaround! It actually works but I think it would be good to implement a solution like yours into the GLCM tool in SNAP.
btw: Since I am going to segment my data, I think it does make more sense to calculate textures after the segmentation right?
what do you mean by ‘segmentation’? Extracting vectors of homogenous areas (OBIA approach)?
Yes, in order to classify land cover classes, I thought it would be more useful to use objects instead of single pixels.
then I personally would base the segments on the image data only and then use the texture per object for the classification.
Right, exactly that was my idea, thanks for supporting the decision!
What do the Quantizer (probabilistic/equal distance) and Quantization level (8/16/32/64/128) processing parameters of the GLCM operator mean?
Thanks in advance
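If it helps, equal-distance quantization can be sketched roughly like this (an illustration only, not the SNAP implementation): the band's value range is split into N equally wide bins, and the quantization level is that N. A probabilistic quantizer instead chooses bin edges so that each bin receives roughly the same number of pixels.

```python
import numpy as np

def equal_distance_quantize(band, levels):
    """Map values into `levels` equally wide bins over the band's range."""
    lo, hi = band.min(), band.max()
    idx = ((band - lo) / (hi - lo) * levels).astype(np.int32)
    # clip so the maximum value falls into the last bin
    return np.clip(idx, 0, levels - 1)

band = np.array([0.0, 0.1, 0.5, 0.9, 1.0])
print(equal_distance_quantize(band, 8).tolist())  # [0, 0, 4, 7, 7]
```

Higher levels preserve more detail but make the co-occurrence matrix larger and sparser.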