Steps for fusion of optical and SAR


I am working on the steps required to fuse optical data with SAR.

  1. I subset both images using geo-coordinates; however, the results differ between the two datasets: the SAR image is cropped to a different extent than the optical image.

  2. I need to bring both to the same resolution and then coregister them.
    What steps should I follow?

  3. In what ways can I use ground control points?

Thank you.

Depending on your data, both images need to be projected into the same coordinate reference system.
For example, Sentinel-2 (optical) comes in UTM, while Sentinel-1 (SAR) is not yet projected. It therefore makes sense to project the SAR image into the same coordinate reference system as the optical image; this can be selected during Range Doppler Terrain Correction.
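The same step can be scripted with SNAP's `gpt` command-line tool. A minimal sketch assembling the call in Python; the file names are placeholders, the EPSG code should match your Sentinel-2 tile, and the parameter names (`mapProjection`, `pixelSpacingInMeter`) should be checked against your SNAP version with `gpt Terrain-Correction -h`:

```python
# Sketch: build a gpt command that terrain-corrects a Sentinel-1 scene
# into the optical image's coordinate reference system.
cmd = [
    "gpt", "Terrain-Correction",
    "-PmapProjection=EPSG:32635",   # UTM zone of the Sentinel-2 tile (example)
    "-PpixelSpacingInMeter=10.0",   # match the optical pixel size
    "-t", "S1_TC.dim",              # output product (placeholder name)
    "S1_GRD_product.zip",           # input SAR product (placeholder name)
]
print(" ".join(cmd))
```

Running this command (via `subprocess` or a shell) produces a SAR product on the same CRS as the optical scene, ready for collocation.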

Afterwards, you can use the Collocation module to bring both images into one product. It lets you choose which one is the master and how the pixels are resampled to a common resolution (that of the master).
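Geometrically, what collocation does can be sketched in a few lines. This is not SNAP's implementation, just a nearest-neighbour illustration assuming both images are already in the same CRS and each grid is described by its top-left origin and pixel size:

```python
# Resample a "slave" grid onto the "master" grid by nearest neighbour.
def resample_to_master(slave, s_origin, s_size, m_shape, m_origin, m_size):
    """slave: 2D list; origins are (x, y) map coordinates of the top-left
    corner; sizes are pixel sizes in map units (y treated as positive-down
    for brevity)."""
    out = []
    for row in range(m_shape[0]):
        out_row = []
        for col in range(m_shape[1]):
            # map coordinate of the master pixel centre
            x = m_origin[0] + (col + 0.5) * m_size
            y = m_origin[1] + (row + 0.5) * m_size
            # index of the nearest slave pixel
            sc = int((x - s_origin[0]) / s_size)
            sr = int((y - s_origin[1]) / s_size)
            if 0 <= sr < len(slave) and 0 <= sc < len(slave[0]):
                out_row.append(slave[sr][sc])
            else:
                out_row.append(None)  # outside the slave footprint
        out.append(out_row)
    return out

# A 20 m slave resampled onto a 10 m master grid: each slave pixel
# covers a 2x2 block of master pixels.
slave = [[1, 2], [3, 4]]
master = resample_to_master(slave, (0, 0), 20.0, (4, 4), (0, 0), 10.0)
```

This also shows why the master choice matters: resampling the coarser image to the finer grid duplicates values, while the reverse direction discards detail.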

Maybe you can clarify your first point with a screenshot.


To address your points

  1. If you have the chance, try to get Level 1.1 data, then apply radiometric calibration and Range Doppler Terrain Correction, where you can select the coordinate reference system.
  2. If you can’t get L1.1 data, you can apply the Mosaic operator (described here) to geocode your product (terrain effects will not be eliminated) and reproject it to UTM.
  3. SNAP cannot correctly display two products of different pixel sizes side by side. They show the same area, and during stacking they will be resampled to the resolution of the master. This is just a visual issue at the moment; you can proceed.
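On the ground-control-point question from the first post: one common use of GCPs is estimating a geometric transform between the two images from matched point pairs, which can then warp one image onto the other. A minimal sketch solving a 2D affine transform exactly from three GCP pairs (the coordinates below are made up for illustration):

```python
# Solve x' = a*x + b*y + c and y' = d*x + e*y + f from three GCP pairs.
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial
    pivoting (enough for this illustration)."""
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0] * 3
    for i in range(2, -1, -1):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

def affine_from_gcps(src, dst):
    """src, dst: lists of three (x, y) pairs (e.g. SAR -> optical pixels)."""
    A = [[x, y, 1.0] for x, y in src]
    a, b, c = solve3(A, [x for x, _ in dst])
    d, e, f = solve3(A, [y for _, y in dst])
    return (a, b, c, d, e, f)

# Hypothetical GCPs: a pure shift of +5 in x and +10 in y
src = [(0, 0), (100, 0), (0, 100)]
dst = [(5, 10), (105, 10), (5, 110)]
a, b, c, d, e, f = affine_from_gcps(src, dst)
```

With more than three pairs you would fit the same model by least squares, which averages out GCP picking errors.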

I had some problems with the reference system of S2 as well. You should reproject S1 to UTM zone 35 WGS84 (EPSG:32635), which is the coordinate system of my S2 tiles.
I could overlay the S1 and S2 images in QGIS as two raster layers to compare some ground features. However, this is helpful only if you don’t need any further SNAP processing, and it might not be suitable for your study.
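Note that EPSG:32635 is specific to that study area; for other regions, the matching WGS84 UTM code can be computed from the scene centre coordinates (northern-hemisphere codes are 32600 + zone, southern 32700 + zone). A small sketch:

```python
def utm_epsg(lon, lat):
    """EPSG code of the WGS84 UTM zone containing (lon, lat).
    Ignores the Norway/Svalbard zone exceptions."""
    zone = int((lon + 180.0) // 6) + 1
    return (32600 if lat >= 0 else 32700) + zone

# A point around 25E, 60N falls in UTM zone 35N, i.e. EPSG:32635
epsg = utm_epsg(25.0, 60.0)
```

The result can be passed straight to the Terrain Correction map projection or to a GIS reprojection tool.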

We need to define the recommended method or methods for SAR & optical fusion with SNAP. @lveci @obarrilero

Well, data fusion can be done using the satellite data themselves or their derived features, and depending on the application it can be pixel-based or feature-based.

Indeed, I have worked with AI-based satellite data fusion, which works very nicely for land cover mapping, for example.

Curious to see the new methods that will be included in SNAP, but I believe the existing classifiers could already work with fused SAR and optical data. I’ve never tried it in SNAP, though.

I meant precision co-location of optical & SAR data.


We published a new tutorial on the fusion of Sentinel-1 and Sentinel-2:

Synergetic use of radar and optical data


Thank you very much!

How can I fuse Landsat 8 and Sentinel-1 data?

The workflow is the same: Synergetic use of radar and optical data