Is there a way to check coregistration accuracy? And what does "ESD" do?

Dear all,

I wonder if there is a way to check the accuracy of coregistration (i.e., of “S-1 TOPS Coregistration”).
I found an “InSAR Stack” window. I can see content under “Stack Information” (left), but there is no information under “Coregistration Residuals” (right), which I think is what I need…

Besides, I am trying to find out if there is any way to improve the accuracy of coregistration. I learned that “Enhanced Spectral Diversity (ESD)” was made for this purpose, so I applied “S-1 TOPS ESD Coregistration” and compared the results with “S-1 TOPS Coregistration”. But I can barely see any difference with my eyes, and the statistics also show no difference.
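For reference, the chain behind “S-1 TOPS ESD Coregistration” is essentially Read → TOPSAR-Split → Apply-Orbit-File → Back-Geocoding → Enhanced Spectral Diversity. Below is a minimal snappy sketch of what I ran, with placeholder file names and default operator parameters — please double-check the parameter names with gpt -h for each operator:

```python
# Minimal sketch of the S-1 TOPS ESD coregistration chain in snappy.
# File names are placeholders; all operators run with default parameters.
from snappy import ProductIO, GPF, HashMap

def split_and_orbit(path, subswath='IW1', pol='VV'):
    product = ProductIO.readProduct(path)
    params = HashMap()
    params.put('subswath', subswath)
    params.put('selectedPolarisations', pol)
    split = GPF.createProduct('TOPSAR-Split', params, product)
    return GPF.createProduct('Apply-Orbit-File', HashMap(), split)

master = split_and_orbit('S1A_master.zip')  # placeholder paths
slave = split_and_orbit('S1A_slave.zip')

# Back-Geocoding stacks both products; ESD then refines the azimuth
# (and range) shifts. Note that ESD needs more than one burst.
stack = GPF.createProduct('Back-Geocoding', HashMap(), [master, slave])
esd = GPF.createProduct('Enhanced-Spectral-Diversity', HashMap(), stack)
ProductIO.writeProduct(esd, 'coreg_esd', 'BEAM-DIMAP')
```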

Intensity after coregistration:
RGB window (master in red and slave in green)
“S-1 TOPS Coregistration” (left) vs. “S-1 TOPS ESD Coregistration” (right)

Coherence map after Interferogram generation:
“S-1 TOPS Coregistration” (left) vs. “S-1 TOPS ESD Coregistration” (right)

Statistics and histogram of coherence:
“S-1 TOPS Coregistration” (left) vs. “S-1 TOPS ESD Coregistration” (right)

So, does anyone know how ESD works for TOPS?
And if ESD cannot help, is there any other way to improve the accuracy of coregistration?

Thanks a lot and really appreciate your time!
Regards,
YY

Please check the SNAP help for a description of the operator.

A more technical explanation is given in this article: https://ieeexplore.ieee.org/abstract/document/7390052

The ESD operator has been extended since SNAP 8, and I agree that its parameters should be documented more precisely so that they can be tuned to increase coregistration quality.

About the residuals: I just tested it and can confirm that the ESD measure is not working at the moment.

@lveci can you please briefly confirm whether this is a bug and, if so, open a ticket for it and assign it to Sensar?

Thanks @ABraun
May I ask whether “Coregistration Residuals” doesn’t work on your machine either?

Yes, none of them work. But I guess the traditional residuals can only be calculated for Stripmap products; the S1 IW mode is different in this sense.

Thanks for your reply @ABraun.
So do you have any suggestions on how to improve and validate the coregistration accuracy in SNAP (for S1A IW-mode products)?

Not without the RMSE values from the InSAR Stack tool.
Is there a reason why you think it should be improved in your case?

@ABraun thank you for asking! It’s a bit of a long story…
I am doing my master thesis, and my topic is the error analysis of DEM generation using the InSAR technique. (I am reading Hanssen, 2001 and Module 2202 provided by TUM/DLR as my references. I don’t comprehensively understand all the equations yet, but I am working on it.)

To simplify my research topic, I tried to filter out low-coherence areas before phase unwrapping in order to avoid phase unwrapping errors. I tried many possible approaches that I found on this forum, but unfortunately none of them worked out for me.
Thus, I took some subsets of my study area and tried to avoid low-coherence areas as far as I could. The elevation of my subsets is actually quite low (0–50 m), so I suppose the topographic errors produced by layover, foreshortening and shadowing can be temporarily ignored. Yet the final results still differ by some 10–70 meters from the ground truth (SRTM 30 m). I personally don’t think tropospheric and ionospheric delays could cause such a large error. So, before I continue further, I hope I can refine the coregistration and reduce its error.
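For context, the kind of coherence masking I attempted corresponds roughly to the following (a minimal numpy sketch with synthetic arrays; the 0.3 threshold is just a placeholder that would need tuning per scene):

```python
# Sketch of masking low-coherence pixels before unwrapping.
# Synthetic arrays stand in for the real phase and coherence bands.
import numpy as np

rng = np.random.default_rng(0)
phase = rng.uniform(-np.pi, np.pi, size=(512, 512))  # wrapped phase
coherence = rng.uniform(0.0, 1.0, size=(512, 512))   # coherence band

masked_phase = np.where(coherence > 0.3, phase, np.nan)
print(f"kept {np.isfinite(masked_phase).mean():.1%} of pixels")
```

(In SNAP this would correspond to a BandMaths expression along the lines of coh > 0.3 ? phase : NaN, though unwrappers may not handle the resulting gaps gracefully.)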

Any opinion is highly appreciated 🙂

I see little contribution of coregistration to the final DEM quality. The following parameters have a much higher impact:

  • perpendicular baseline (ideally above 150 m)
  • temporal baseline (6 or 12 days)
  • rain-free conditions for both images
  • little vegetation cover (causes phase decorrelation which makes these parts unusable)
  • looking direction (depending on the orientation of a hill, ridge or valley, this can also make a large difference)

There are some approaches that suggest weighted combinations of several input DEMs, which I consider the most promising techniques at the moment.
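The core idea, as a minimal sketch (per-pixel weighted averaging of DEMs that are already resampled to a common grid; the weights here are purely illustrative, in practice coherence or another quality measure could be used):

```python
# Sketch of a weighted combination of several input DEMs.
# Assumes all DEMs and weights share the same grid; NaN-aware.
import numpy as np

def fuse_dems(dems, weights):
    """Per-pixel weighted average of stacked DEMs, ignoring NaN gaps."""
    dems = np.stack(dems)                 # shape: (n, rows, cols)
    weights = np.stack(weights)
    weights = np.where(np.isfinite(dems), weights, 0.0)
    dems = np.nan_to_num(dems)
    wsum = weights.sum(axis=0)
    return np.divide((weights * dems).sum(axis=0), wsum,
                     out=np.full(wsum.shape, np.nan), where=wsum > 0)

# toy example: two noisy DEMs, the second trusted twice as much
rng = np.random.default_rng(1)
truth = np.linspace(0, 50, 100).reshape(10, 10)
dem_a = truth + rng.normal(0, 5, truth.shape)
dem_b = truth + rng.normal(0, 2.5, truth.shape)
fused = fuse_dems([dem_a, dem_b],
                  [np.ones_like(dem_a), 2 * np.ones_like(dem_b)])
```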

Sentinel-1 is simply not designed for DEM generation in the first place. I am currently writing a review paper on DEM derivation from S1 data, and there are only a few studies that can be considered successful.

Thanks for your insight and all of the provided materials @ABraun!
I understand that Sentinel-1 is not designed for DEM generation, so I am not interested in how to make a high accuracy DEM but to find out the error sources, and possibly quantify them.
For example, I downloaded 8 pairs of S1A data with perpendicular baselines ranging from 96 to 155 meters. I processed these image pairs up to phase filtering and observed their patterns, shown in the following figure.


It’s quite apparent that there is a topographic ramp in the lower-left pair, while the others are noisy. I suppose most of the speckle is caused by tropospheric effects, because the pairs in which we can hardly see any information were taken in summer (the rainy season in my study area).
All in all, I hope to find the relationship between each error source and the accuracy of the DEM products. I’ll inspect the relationships pixel by pixel and apply regression analysis.
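To illustrate, the regression step would look roughly like this (synthetic data only; in practice the DEM error and each candidate error source would be co-registered rasters flattened to 1-D):

```python
# Rough sketch: regress per-pixel DEM error against a candidate
# error source (here coherence). Data are fabricated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
coherence = rng.uniform(0.1, 0.9, 10_000)
# fabricate an error that shrinks with coherence, plus noise
dem_error = 40 * (1 - coherence) + rng.normal(0, 5, coherence.size)

result = stats.linregress(coherence, dem_error)
print(f"slope = {result.slope:.1f} m per unit coherence, "
      f"r^2 = {result.rvalue**2:.2f}")
```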

thank you very much for your kind reply as always!


this is a very interesting topic and I fully agree that this should be investigated more.

As suggested in these documents, the ideal perpendicular baseline should lie between 150 and 300 m. Fringes of such pairs lie closer together and allow the derivation of more detailed surface information than fringes representing elevation differences of up to 100 m.
Unfortunately, I cannot publicly share my findings yet, but I did very similar investigations and can confirm that the selection of large perpendicular baseline pairs from the dry period brings the most promising results.
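For reference, the relation behind this is the height of ambiguity, i.e. the elevation difference corresponding to one fringe: h_a = λ · R · sin(θ) / (2 · B⊥). A quick sanity check with rough Sentinel-1 IW values (slant range and incidence angle vary across the swath, so these numbers are only approximate):

```python
# Height of ambiguity h_a = wavelength * slant_range * sin(theta) / (2 * B_perp)
# with rough Sentinel-1 IW values; R and theta vary across the swath.
import math

wavelength = 0.0556           # C-band wavelength in metres
slant_range = 850e3           # approximate slant range in metres
incidence = math.radians(39)  # approximate mid-swath incidence angle

for b_perp in (100, 150, 300):
    h_a = wavelength * slant_range * math.sin(incidence) / (2 * b_perp)
    print(f"B_perp = {b_perp:3d} m  ->  height of ambiguity ~ {h_a:5.1f} m")
```

With these values, a 150 m baseline gives a height of ambiguity of roughly 100 m, and 300 m roughly 50 m, which is why larger baselines resolve finer elevation differences.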

It’s not so easy to find suitable pairs, especially ones with a short temporal baseline (6 or 12 days). You can use the ASF Search Baseline tool to identify image pairs with these configurations for your area.

Besides that, you are right that temporal decorrelation has the strongest impact on phase quality. So it is always a trade-off.

@jun_lu
I have tested the ESD Histogram tool and could not get it to run in any of SNAP 6, 7, or 8. Do you have a recommendation on what is required to display the shifts in this tool?


I have created an RGB color image and found signs of failed co-registration between S1 SLC pairs separated by a 12-day temporal baseline. The master image is assigned to the red and green channels, while the slave image is assigned to the blue channel.
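In case it is useful, such a composite can be reproduced roughly like this (synthetic arrays in place of the real intensity bands):

```python
# Sketch of the RGB check described above: master intensity on the red
# and green channels, slave on blue, so residual misalignment shows up
# as yellow/blue fringing along edges.
import numpy as np
import matplotlib.pyplot as plt

def stretch(band, pmin=2, pmax=98):
    """Percentile contrast stretch to the [0, 1] range."""
    lo, hi = np.percentile(band, (pmin, pmax))
    return np.clip((band - lo) / (hi - lo), 0, 1)

rng = np.random.default_rng(4)
master = rng.random((256, 256))
slave = np.roll(master, 2, axis=0)  # fake 2-pixel misalignment

rgb = np.dstack([stretch(master), stretch(master), stretch(slave)])
plt.imshow(rgb)
plt.title('master = R+G, slave = B')
plt.show()
```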

This probable misalignment may be a result of tidal action, but I am not exactly sure, because in areas away from the shore the co-registration seems successful.

This is even more noticeable along the boundaries between bursts.

I have tried splitting only one burst prior to the co-registration, but I received exactly the same results as with co-registering 3 bursts.
I do not know if this is relevant, but the perpendicular baseline between those pairs, according to ASF Data Search, is 0 meters.

So my question is, did the co-registration actually fail?
I saw no difference whatsoever after performing ESD. So, does ESD improve the co-registration results in terms of accuracy?

I don’t think so; your screenshots look alright. The edges at your bursts are corrected later with TOPS Deburst.
ESD can make it more precise, but its effect is not visible to the eye. You need it for InSAR applications, but you have to be careful with offset tracking, for example. Also, you don’t need it with only one burst.
And I think the baseline data from ASF are faulty in your case.

Hello,

I was wondering the same, but I noticed that the missing “ESD Measure” for ESD co-registered products is still a problem in the latest SNAP release.

I applied Back-Geocoding + ESD to a set of 3 products (using the ProductSet-Reader), but I have no way to check the results. An RGB image could be a good compromise with only 2 images, but it is not a valuable check for stacks of 3 or more products.

Is there any way to generate a numeric evaluation of the procedure?
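One DIY check that could serve as a numeric evaluation is to estimate the residual sub-pixel offset between the master and each coregistered slave intensity band, e.g. with scikit-image’s phase cross-correlation. Here is a sketch with synthetic arrays standing in for the real bands; a well-coregistered stack should report shifts well below one pixel:

```python
# Estimate the residual shift between master and slave intensity bands
# with phase cross-correlation (upsampled for sub-pixel precision).
import numpy as np
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(3)
master = rng.random((512, 512))
slave = np.roll(master, shift=1, axis=1)  # fake 1-pixel residual offset

shift, error, _ = phase_cross_correlation(master, slave,
                                          upsample_factor=100)
print(f"residual offset (rows, cols): {shift}")  # expect about (0, -1)
```

For a stack of three or more products, the same estimate can simply be repeated for every slave band against the master.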