Co-registration

Hello all;

I have S1A_IW_SLC products, and I want to calculate the difference in backscatter between the images that I have.
I start with the preprocessing steps as follows:

  1. TOPSAR-Split (select only the sub-swath and bursts that cover my AOI)
  2. Apply Orbit File
  3. Calibration
  4. TOPSAR Deburst
  5. Speckle Filter
  6. Terrain Correction
  7. Subset to AOI

Are the steps above correct?
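For reference, the steps above can be expressed as a SNAP GPF graph and run in batch with `gpt`. The operator names below are the standard SNAP GPF operators; everything else (parameters, I/O paths) is a placeholder you would fill in for your own data — this is a sketch of the chain's structure, not a ready-to-run graph.

```python
# Sketch: generate a SNAP GPF graph (graph.xml) encoding the step order above,
# for batch processing with SNAP's gpt tool. Operator names are standard SNAP
# GPF operators; all parameters here are placeholders to adapt to your data.
import xml.etree.ElementTree as ET

# Processing chain in the order discussed in the thread.
OPERATORS = [
    "TOPSAR-Split",       # select sub-swath / bursts covering the AOI
    "Apply-Orbit-File",   # refine orbit state vectors
    "Calibration",        # radiometric calibration
    "TOPSAR-Deburst",     # merge bursts into a continuous image
    "Speckle-Filter",     # skip this step if you need the phase later!
    "Terrain-Correction", # Range-Doppler terrain correction
    "Subset",             # crop to the AOI
]

def build_graph(operators):
    """Build a minimal gpt graph: a Read node, one node per operator
    (each chained to the previous one), and a Write node."""
    graph = ET.Element("graph", id="Graph")
    ET.SubElement(graph, "version").text = "1.0"

    def add_node(node_id, operator, source):
        node = ET.SubElement(graph, "node", id=node_id)
        ET.SubElement(node, "operator").text = operator
        if source is not None:
            sources = ET.SubElement(node, "sources")
            ET.SubElement(sources, "sourceProduct", refid=source)
        ET.SubElement(node, "parameters")  # placeholder: fill per operator
        return node_id

    prev = add_node("Read", "Read", None)
    for op in operators:
        prev = add_node(op, op, prev)
    add_node("Write", "Write", prev)
    return graph

graph = build_graph(OPERATORS)
print(ET.tostring(graph, encoding="unicode"))
# Run with:  gpt graph.xml   (after filling in parameters and I/O paths)
```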

In addition, now that the pre-processing is done, I want to co-register the images that I have into a stack, and I am confused between Coregistration → Coregistration and Coregistration → DEM-Assisted Coregistration.

What is the difference between the two methods to coregister the images?

Just a side note: why are you using SLC products if you want to run your analysis on backscatter?

M


Because I need the phase information for another step, and as I understood from the tutorial, these pre-processing steps are only for intensity.

You should not speckle-filter if you need the phase afterwards. You can always filter the phase later if necessary.

From SNAP help:

Also, the refinement of the coregistration offsets is done in a fully automatic way, including downloading and interpolation of the a-priori digital elevation model.

The DEM is used to improve the result, but it comes at a cost in processing time.

M

Hello guys,
I have a question about the basic theory and concept of coregistration in processing.
In DInSAR processing using Sentinel-1 and ALOS, we use the SRTM DEM in coregistration and topographic phase removal. As we know, the pixel size (resolution) of Sentinel-1 and ALOS differs from that of SRTM.
I would like to know from you all the theory and concept behind this combination: how can two images with different resolutions be combined in coregistration/geocoding/topographic phase removal, and how does this work?
I would appreciate your teaching and information.
Thank you,

I’m not sure what you mean by combining Sentinel and ALOS. DInSAR between two sensors won’t give you good results.
As for the resolution of SRTM, the DEM is interpolated to the resolution of your SAR image on the fly prior to using it.
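The on-the-fly interpolation idea can be illustrated with a toy example: a coarse DEM grid is resampled (here with bilinear interpolation, which is a common default for DEM resampling in SNAP operators) at the finer SAR pixel positions. This is a pure-Python conceptual sketch, not SNAP's actual implementation.

```python
# Sketch of on-the-fly DEM resampling: a coarse DEM grid is bilinearly
# interpolated at the (finer) SAR pixel positions. Toy example only.
def bilinear(dem, row, col):
    """Sample a 2-D DEM (list of lists) at a fractional (row, col)."""
    r0, c0 = int(row), int(col)
    r1 = min(r0 + 1, len(dem) - 1)
    c1 = min(c0 + 1, len(dem[0]) - 1)
    fr, fc = row - r0, col - c0
    top = dem[r0][c0] * (1 - fc) + dem[r0][c1] * fc
    bot = dem[r1][c0] * (1 - fc) + dem[r1][c1] * fc
    return top * (1 - fr) + bot * fr

# 2x2 coarse DEM (think 90 m SRTM cells) sampled on a 3x3 "SAR" grid:
coarse = [[100.0, 200.0],
          [300.0, 400.0]]
fine = [[bilinear(coarse, r * 0.5, c * 0.5) for c in range(3)]
        for r in range(3)]
print(fine)  # centre pixel is the mean of the four corners: 250.0
```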

Basically, DEM-based coregistration relies on the fact that topography influences the variation of backscatter. Slopes facing towards the sensor show higher backscatter because of a smaller local incidence angle, while slopes facing away from the sensor are generally darker.

Using a DEM and the acquisition geometry of a SAR image, a simulated SAR image is generated which only describes these variations (based on the local topography, the look direction and the local incidence angle). The result looks like this:


Image source: http://www.airgmap.com/a_sar_nav.html
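A heavily simplified version of that simulation step can be sketched in a few lines: from a DEM, derive the terrain slope along the look direction and map a smaller local incidence angle to higher brightness. This is a conceptual toy only; a real simulator uses the full acquisition geometry (orbit, look direction, slant range), and the geometry assumptions below (sensor to the west, flat incidence model) are mine, not SNAP's.

```python
# Toy simulated SAR image from a DEM: pixels get brighter as the local
# slope tilts towards the sensor (smaller local incidence angle).
# Sensor assumed to the west (-x), looking east; conceptual sketch only.
import math

def simulate_backscatter(dem, incidence_deg=35.0, cell=1.0):
    """Return a brightness grid derived from terrain slope only."""
    rows, cols = len(dem), len(dem[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # terrain slope along the look direction (central difference)
            c0, c1 = max(c - 1, 0), min(c + 1, cols - 1)
            dzdx = (dem[r][c1] - dem[r][c0]) / ((c1 - c0) * cell)
            slope_deg = math.degrees(math.atan(dzdx))
            # slope facing the sensor -> smaller local incidence angle
            local_inc = incidence_deg - slope_deg
            out[r][c] = max(math.cos(math.radians(local_inc)), 0.0)
    return out

# A simple ridge, 10 m cells: west flank faces the sensor, east flank away.
dem = [[0.0, 10.0, 20.0, 10.0, 0.0]]
sim = simulate_backscatter(dem, cell=10.0)
print([round(v, 2) for v in sim[0]])  # sensor-facing flank is brightest
```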

The simulated image is resampled to the pixel size of the corresponding SAR image. If you want to coregister two images of different sensors, you select one master product which defines the spatial resolution of the simulated image and the resulting product.

The simulated image is then compared to the original SAR images to find similar patterns (caused by the terrain). This helps to locate both radar images on the earth’s surface and align them within one stack.
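That matching step can be illustrated with a tiny correlation search: slide one image over the other and keep the offset that agrees best. Real coregistration works with windowed, sub-pixel matching; this integer-pixel toy (all names and data are made up) just shows the principle.

```python
# Toy matching step: slide the "simulated" pattern over the acquired image
# and keep the integer offset with the highest overlap correlation.
def best_offset(ref, img, max_shift=3):
    """Return the (dy, dx) shift of img w.r.t. ref that maximizes the
    sum of products over the overlapping region."""
    rows, cols = len(ref), len(ref[0])
    best, best_dydx = float("-inf"), (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0.0
            for r in range(rows):
                for c in range(cols):
                    rr, cc = r + dy, c + dx
                    if 0 <= rr < rows and 0 <= cc < cols:
                        score += ref[r][c] * img[rr][cc]
            if score > best:
                best, best_dydx = score, (dy, dx)
    return best_dydx

# "Simulated" pattern, and the same pattern shifted by (1, 2) in the "image"
ref = [[0, 0, 0, 0, 0],
       [0, 9, 1, 0, 0],
       [0, 1, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0]]
img = [[0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 9, 1],
       [0, 0, 0, 1, 0],
       [0, 0, 0, 0, 0]]
print(best_offset(ref, img))  # → (1, 2)
```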

I mean one of them; say, Sentinel-1 with SRTM. How can SRTM be used to remove the topographic phase in Sentinel-1 data, or for the geometric correction in coregistration, when their resolutions are different? As we know, software such as SNAP resamples the DEM. I want to know the theory or concept behind this.
Thank you.