How to evaluate the co-registration between Landsat 8 and Sentinel 1?

Hello, I have co-registered a Landsat 8 image with a Sentinel-1 image. Is there any algorithm in SNAP, or in another open-source program, to evaluate the displacement of the co-registration? I read that cross-correlation via the Fourier transform can estimate the displacement in pixels. Could someone suggest how to evaluate the co-registration? Any guidance, please?

Thank you,
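The Fourier-based cross-correlation mentioned in the question can be sketched in a few lines of NumPy. This is a generic phase-correlation sketch, not a SNAP function; the function name and the synthetic test image are illustrative assumptions:

```python
import numpy as np

def fft_shift_estimate(reference, moved):
    """Estimate the (row, col) pixel shift between two equally sized
    images via phase correlation (cross-correlation in the Fourier
    domain). Returns the integer pixel offsets as floats."""
    F1 = np.fft.fft2(reference)
    F2 = np.fft.fft2(moved)
    cross_power = np.conj(F1) * F2
    cross_power /= np.abs(cross_power) + 1e-12  # keep only the phase
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shifts = np.array(peak, dtype=float)
    # offsets larger than half the image size wrap around to negative shifts
    for i, n in enumerate(corr.shape):
        if shifts[i] > n // 2:
            shifts[i] -= n
    return shifts

# demo: circularly shift a synthetic image by (3, -5) and recover the offset
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (3, -5), axis=(0, 1))
print(fft_shift_estimate(img, shifted))  # recovers the applied shift (3, -5)
```

This only gives integer-pixel offsets; for sub-pixel accuracy the correlation peak would have to be interpolated (scikit-image's `phase_cross_correlation` does this out of the box).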

For coregistered radar products you can open the InSAR Stack tool, which shows some error measures based on the residuals, but I am not sure whether this also works for combinations of optical and radar data.


Thank you @ABraun, but when I open the InSAR Stack tool a warning appears: “this tool window requires a coregistered stack product to be selected”. However, I have opened the co-registered product as a stack. Do you know what this is about? In fact, I can see you have opened a stack of Ikonos and you do not have the problem that I do.

How did you achieve this coregistration?

Actually, I used two COSMO-SkyMed images; only the RGB composite is (falsely) named Ikonos.

I used SNAP’s co-registration:

The master was the optical image (a small subset of the whole scene) and the slave was the SAR image (the same subset as the optical image). Did I do this correctly?

A stack is not really a coregistration (although it appears in that menu). It simply uses the geocoding of both rasters to put them into one product: no control points are used and no adjustment of the slave image is performed.

Yes, in fact the tool only needed the geocoded data from both images, and both images had the same projection system. I didn’t really find another tool that does the co-registration in SNAP; according to the recommendations of this forum (Co-registration of Landsat-8 and sentinel-1 data), that is the tool for co-registration of an optical and a SAR image. Am I right, or is there another tool?

It does the job, but it won’t adjust the data based on their image patterns.

There is a Sentinel-2 coregistration tool in the Plugins section, but I haven’t tried it so far.

And the GeFolki coregistration might also be worth a try.

Do you have any idea how to run it (under which menu it is available)?

Thanks for the suggestions @ABraun. I could not run the GeFolki co-registration and I also couldn’t find the S2 tool. However, I was able to run the co-registration using Radar / Coregistration / Coregistration, and that let me activate the InSAR Stack tool.

How can I interpret the co-registration residuals? Are they in pixel units, i.e. 0.0115 × 30 m = 0.345 m? Is that so?

Thanks for your time!
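If the residuals are indeed reported in pixels, converting them to ground distance is just a multiplication by the pixel spacing of the coregistered product. A minimal sketch (the variable names and the 30 m Landsat pixel size are assumptions taken from the question above):

```python
# Convert a coregistration residual from pixel units to metres by
# multiplying with the pixel spacing of the coregistered product.
residual_px = 0.0115    # residual reported by the InSAR Stack tool (pixels)
pixel_size_m = 30.0     # pixel spacing of the product, e.g. Landsat 8 (m)

residual_m = residual_px * pixel_size_m
print(round(residual_m, 3))  # 0.345 m, i.e. well below one pixel
```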

You can terrain-correct S-1 into your map projection of choice and map-project Landsat 8 into the same projection, using the same pixel size for L8 (i.e. oversampling it to account for the higher resolution of S-1).

Case study: ERS-1/2 & ENVISAT (co-registration)

Long and short term monitoring of ground deformation in Thessaly basin using space-based SAR Interferometry.

SAR is much more precise, geometrically speaking, than optical imagery. I would use the terrain-corrected SAR image as the master, provided a good DEM is available for the terrain correction.

@obarrilero @lveci I think we need a tutorial & sample graphs for SAR & optical co-location.

Thank you for that @mengdahl.

@falahfakhri thank you so much! I think it can also be applied to S1. In fact, my co-registration errors are very low, so everything has run smoothly.

Thank you again.

@marpet @lveci Could you please open an issue for this? The RGB composite dialogue no longer suggests combinations (they are also not selectable in the dropdown menu). The only exception is Sentinel-2, where the band combinations are still correctly suggested. But since the latest update something changed, and all RGBs are called “Ikonos”, with bands 1, 2 and 3 preset as colors.


Dear @mengdahl, do you have a specific paper covering this topic? It is interesting to know that SAR is much more precise, geometrically speaking, than optical images. Could you explain to me why, please?


I want to share my experience, though it may not be a good idea. Recently, I tried to georeference Sentinel-1 GRD and optical imagery. All of the images were transformed into the WGS 1984 / UTM system and were georeferenced against digitized thematic layers (e.g. rivers, coastlines, roads); the output errors were within 1 pixel. By the way, I did these procedures with ESRI ArcGIS. Perhaps someone who prefers the free QGIS or GRASS GIS can easily do the same.

Well, radar fundamentally measures distance, and when the necessary corrections are made, the absolute locational accuracy of modern SAR systems is on the order of several centimetres (for corner-reflector localisation), which is a small fraction of the system resolution/pixel size. This is 1–2 orders of magnitude better than what optical satellites can achieve.
