Classification of Sentinel-1 images with deep learning methods

Hello,
I need to classify Sentinel-1 images using deep learning methods.
I am not sure whether Sentinel-1 images are suitable inputs for deep learning, so is it appropriate to use them for deep learning classification?
What are some suggested ways to do this?

This is quite a huge topic (and not implemented in SNAP), so maybe you can first clarify what you already know about deep learning and Sentinel-1 images, so there is a basis for discussion.

For example, have you already classified Sentinel-1 images based on more traditional methods?
Did you already apply deep learning on other data before?
Also, what’s the application or expected outcome for the classification (e.g. image recognition, image segmentation, object detection, instance segmentation…)?


To Andreas’s questions I would add a question about training data:

Remote sensing coverage extends to many isolated areas where in situ data sets are sparse, so it would be helpful to describe the training data for your use case.


Good point, @gnwiii. Compared to traditional methods, deep learning requires considerably more effort on training data to produce results of good quality.

I have already classified Sentinel-1 images, but I have not used deep learning methods.
I intend to classify and extract built-up areas using the intensity, coherence, and texture of Sentinel-1 images.
Is this an appropriate use of Sentinel-1 data?

In terms of data input, Sentinel-1 backscatter intensity, texture and coherence are suitable inputs for mapping urban areas. Sentinel-1 has, for example, been included in the Global Human Settlement Layer (GHSL), which has a spatial resolution of 30 m. As that product is of global coverage, you could aim for a higher spatial resolution in order to provide a method that is more accurate at the local scale.
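
To illustrate what such an input stack could look like, here is a minimal sketch in Python. The file names, band choice, dB scaling and min-max normalisation are placeholders for illustration only (not a prescribed workflow), and it assumes four co-registered single-band GeoTIFFs on a common grid plus rasterio and numpy being installed:

```python
import numpy as np
import rasterio  # any raster reader would do; rasterio is assumed here

# Hypothetical, co-registered single-band GeoTIFFs on a common grid
paths = {
    "vv_intensity": "S1_VV_gamma0.tif",
    "vh_intensity": "S1_VH_gamma0.tif",
    "coherence":    "S1_VV_coherence.tif",
    "glcm_texture": "S1_VV_glcm_contrast.tif",
}

layers = []
for name, path in paths.items():
    with rasterio.open(path) as src:
        band = src.read(1).astype("float32")
    if "intensity" in name:
        # convert linear backscatter to dB before scaling
        band = 10.0 * np.log10(np.clip(band, 1e-6, None))
    # scale each layer to roughly [0, 1] so the network trains stably
    band = (band - band.min()) / (band.max() - band.min() + 1e-6)
    layers.append(band)

# (channels, height, width) stack, ready to be cut into training patches
stack = np.stack(layers, axis=0)
print(stack.shape)
```

From a stack like this you would then cut fixed-size patches and pair them with reference labels of built-up / not built-up areas for training.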

There is a nice review paper (Deep Learning Meets SAR) which discusses why SAR data brings new challenges (e.g. the imaging geometry or the non-normal shape of the histogram). For example, traditional DL approaches use RGB images as inputs; if you have more than 3 layers, you have to find ways to make use of them inside a CNN (see the sketch below).
So it would probably be a good start to find a suitable technique to apply DL on traditional images first and then move forward to Sentinel-1 data.
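
As a concrete example of the "more than 3 layers" point, here is a minimal PyTorch sketch. The channel count of 4 (e.g. VV, VH, coherence, texture) and the tiny architecture are assumptions for illustration, not a recommended model:

```python
import torch
import torch.nn as nn

class SmallSARCNN(nn.Module):
    """Minimal CNN whose first convolution accepts 4 input channels
    instead of the usual 3 RGB channels."""
    def __init__(self, in_channels: int = 4, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

# one 4-band 64x64 patch, e.g. built-up vs. not built-up
model = SmallSARCNN()
dummy_patch = torch.randn(1, 4, 64, 64)
print(model(dummy_patch).shape)  # torch.Size([1, 2])
```

If you want to reuse a pretrained RGB backbone instead, a common approach is to replace only its first convolution with one that accepts four channels and fine-tune from there.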

If you are new to deep learning in general, a fantastic overview has been presented by @thho.
