S-1 co-registration works in SNAP so either:
- you are doing something wrong or
- the scene has changed too much between the acquisitions or
- some of the products in question are defective.
Are you following the tutorial for S-1 TOPSAR Interferometry?
I think the problem is not with what you suggested. The thing is that if you want to coregister many SLC IW images with the S1 TOPS Coregistration procedure, it is not possible to coregister more than two images at once (as described in the tutorial). You can simply choose one master image and ONE slave image. So it’s only possible to coregister two images. But it’s not possible to use two or more slave images. This only works with the “Automatic Coregistration” process. But that is not executable with Sentinel-1 SLC data. So I hope someone can help.
You can achieve what you need to do by using the batch-processing functionality in SNAP. Or alternatively scripting everything and using the command-line.
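For example, a minimal sketch of the batch approach in Python, assuming a graph file (here `coreg_graph.xml`) with `${master}`, `${slave}` and `${output}` placeholders; the file name and parameter names are illustrative, and you would adapt them to your own back-geocoding graph:

```python
# Sketch of batch coregistration: call SNAP's gpt once per master/slave pair.
# Graph file name, parameter names and paths are assumptions, not SNAP defaults.
import subprocess
from pathlib import Path

def build_gpt_command(graph, master, slave, output):
    """Assemble a gpt call that feeds the master and one slave into a graph."""
    return [
        "gpt", str(graph),
        f"-Pmaster={master}",   # substituted into the graph via ${master}
        f"-Pslave={slave}",     # substituted via ${slave}
        f"-Poutput={output}",   # target .dim product
    ]

def batch_coregister(graph, master, slaves, out_dir):
    """Run one coregistration per slave, sequentially (low memory footprint)."""
    out_dir = Path(out_dir)
    for slave in slaves:
        output = out_dir / f"{Path(slave).stem}_coreg.dim"
        subprocess.run(build_gpt_command(graph, master, slave, output),
                       check=True)  # one gpt process at a time
```

Running the pairs sequentially rather than in one giant graph keeps the memory footprint of each gpt process small.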
Yes, we are using the same tutorial you mentioned, but the problem is still there.
I have different sets of images of 2015, but could not succeed with any pair.
Thanks for the reply.
Do you have an image-pair acquired 12 days apart from the same track & over terrain you expect to stay coherent for that period of time?
Does your data stack contain scenes from before March 2015? In that case you still need to perform the EAP correction on your data. The automatic EAP correction was introduced in March 2015, so for all scenes before this date you still need to perform it manually. Here is the link to ESA’s newsfeed with this information:
Maybe this causes the difficulties with coregistration.
Hi @virk_rana ,
Was this problem ever resolved? I have been looking for a way to coregister multiple (4+) S1 SLC scenes (over the same area and track, 12 days apart, etc.) for the purposes of multi-temporal speckle filtering (MTSF), and while I have been struggling slightly, I have managed, using the command-line gpt function, to coregister individual polarisations (by coregistering each scene individually to the first using back geocoding, and recombining them before the MTSF). It’s a bit of a pain, and it makes automation awkward as I have different numbers of scenes across different locations (i.e. one time series at site A, another at site B, etc., before anyone thinks I’ve been trying to coregister different areas).
All my scenes are post March 2015, so the EAP correction issue is not an…issue. As far as I’m aware, it is possible to use both polarisations with MTSF, but since the back geocoding graph requires a single channel to be selected, I get geocoding issues when I stack both channels, meaning there is an offset between VV and VH.
I’m continuing to work on it, but I would appreciate any advice, or the pointing out of obvious errors in my chain below.
I have tried TOPSAR Split, Deburst and Merge (without Back Geocoding), then Create Stack, Cross-Correlation, Warp and MTSF, with Terrain Correction before the coregistration step and, in another attempt, after it, with no success… just a lot of Java null errors.
So far, my graphs (I made two to reduce the memory load and overall complication) look like this:
3 times the below, for each polarisation:
I should also address the above reply to you, @asterios_papas
Handling multiple polarizations in Back Geocoding has been solved. Where do you get the null errors?
You were totally right, I think I was using the same graph XML that I started with for a coherence estimation… It was still strange, though: once I managed to coregister all images using Back Geocoding (two scenes at a time), I found a lot of banding issues that appeared to be related to phase, which I solved by skipping that step and using Create Stack -> Cross-Correlation -> Warp after multilooking.
One reason for the null errors was that I was testing on my local machine, with a limit of around 10 GB of virtual memory, which is why I was splitting up the polarisations.
Sorry for the slow reply,
I need to prepare a time series of interferograms for creating DEMs. The goal is vegetation height estimation using S-1 SLC images. This task is not easy.
My boss says that I have to coregister all 6 images in one stack, split them and create interferogram pairs from that. I chose the last date’s image as master and I can coregister all 6 images in one stack … and here my problems start.
Hi guys, could I ask what the lessons learned are? I’m trying to coregister a year of SLC data (say 30 scenes for each relative orbit). Isn’t there any option to take one master and coregister all remaining 29 scenes to it at once? I read here that you need to coregister each scene separately to that master; is that still the general workflow?
And even so, can you specify directly which scene to use as master? It looks like, if I try this, the software determines by itself which image will be master and which will be slave.
It would be easier to build a giant stack and coregister everything at once against one master…
On one hand, I would suggest coregistering one by one, as this option is more attractive for operational work, say when you need to coregister a new scene every 6 days, or when you decide to continue your work by adding one more year of data. On the other hand, creating giant stacks also requires a more powerful computing environment.
Depending on the application, the selection of the master image may or may not matter much. For interferometric applications it matters, but for amplitude analysis it is not that relevant. You could select the first image as master and use all the others as slaves.
This is only my personal point of view, as I said it depends on the desired application.
From a computational perspective one should do one co-registration at a time, sequentially (in batch). Generating monster-graphs uses huge amounts of memory and accessing many files at the same time makes I/O much more inefficient.
@mdelgado and @mengdahl, thanks to both of you for the swift replies. I’m creating time series of coherences, so I’m not sure on what basis I should choose the master image.
Although processing power is not too much of an issue here, I agree with your advice and I’ll process each scene separately against the master. With regard to that, I use S1-TOPS coregistration tool. The tool has two “read” functions, but it doesn’t talk explicitly about master and slave. Do I have to assume that the first read will always be the master, and the second one the slave?
In addition, as I’ve read earlier in this thread, is it still necessary to do the complete coregistration for each subswath separately and merge them again afterwards?
And finally, let’s say you want to process into different products: coherence, Sigma0 amplitude and Gamma0 amplitude. Coregistration is beneficial for all these products. When is generally a good time? My first thought:
SLC -> TOPSAR-Split -> Apply precise orbit -> back-geocoding -> write to coregistered image
That output file can then be used for:
-> thermal noise removal -> calibration -> debursting -> multilooking -> TC -> output Sigma0
-> thermal noise removal -> calibration -> debursting -> TF -> multilooking -> TC -> output Gamma0
-> coherence estimation -> multilooking -> TC -> output coherence
Or do I make a mistake somewhere?
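To keep track of it, the branching workflow above can be sketched as plain data, with the shared coregistration chain feeding each product chain. This is only an illustration of the structure; the exact SNAP operator IDs may differ slightly from the shorthand used in the post:

```python
# Sketch of the branching workflow: one shared coregistration chain,
# then three product-specific chains. Operator names follow the post;
# the exact SNAP operator IDs in your graphs may differ slightly.
COREG = ["TOPSAR-Split", "Apply-Orbit-File", "Back-Geocoding"]

BRANCHES = {
    "sigma0": ["ThermalNoiseRemoval", "Calibration", "TOPSAR-Deburst",
               "Multilook", "Terrain-Correction"],
    "gamma0": ["ThermalNoiseRemoval", "Calibration", "TOPSAR-Deburst",
               "Terrain-Flattening", "Multilook", "Terrain-Correction"],
    "coherence": ["Coherence", "Multilook", "Terrain-Correction"],
}

def full_chain(product):
    """Complete operator sequence for one product, reusing the coregistered stack."""
    return COREG + BRANCHES[product]
```

The point is that the expensive coregistration output is written once and reused by all three branches.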
If you plan to use coherences, it is better not to take the first image as master (as mentioned in my last post), since coherence decorrelates with time.
The master is Read 1 and the slave is Read 2.
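For illustration, a stripped-down two-Read graph can even be generated programmatically. This sketch only shows the wiring (first Read node = master, Read(2) = slave, both feeding Back-Geocoding with the master listed first) and omits the many parameters a real TOPS coregistration graph carries:

```python
# Minimal sketch of a two-Read gpt graph. Real TOPS coregistration graphs
# contain more operators and parameters; this only illustrates the wiring.
import xml.etree.ElementTree as ET

def make_coreg_graph(master_path, slave_path):
    graph = ET.Element("graph", id="Graph")
    ET.SubElement(graph, "version").text = "1.0"

    read1 = ET.SubElement(graph, "node", id="Read")          # master
    ET.SubElement(read1, "operator").text = "Read"
    p1 = ET.SubElement(read1, "parameters")
    ET.SubElement(p1, "file").text = master_path

    read2 = ET.SubElement(graph, "node", id="Read(2)")       # slave
    ET.SubElement(read2, "operator").text = "Read"
    p2 = ET.SubElement(read2, "parameters")
    ET.SubElement(p2, "file").text = slave_path

    bg = ET.SubElement(graph, "node", id="Back-Geocoding")
    ET.SubElement(bg, "operator").text = "Back-Geocoding"
    sources = ET.SubElement(bg, "sources")
    # Source order matters: master first, slave second.
    ET.SubElement(sources, "sourceProduct", refid="Read")
    ET.SubElement(sources, "sourceProduct.1", refid="Read(2)")
    return ET.tostring(graph, encoding="unicode")
```
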
I have made available some scripts that could help you do so; you only need to change the XML used in one of the steps. Not sure you need them. They can be found in the thread: Snap2stamps package: a free tool to automate the SNAP-StaMPS Workflow.
If you would like me to help you customise the processing for your particular case, contact me.
Regarding the procedure you mentioned, at first sight it seems fine to me.
I hope this helps.
From my point of view, you could avoid the merging afterwards, as the coherence must be computed at subswath level and there is no real need to merge. It is a personal decision, as you can work perfectly well at subswath level.
Great! I’ll definitely have a look at those resources. So if you planned to look at coherence time series, what’s the best way? If you coregister each image pair separately, that would yield the highest coherences, but the coregistration of all the different coherence images would not be perfect… If you choose one master and use it for coregistration along the entire series, there’s the temporal decorrelation problem?
And regarding merging, it depends a bit. If the final coherence output of the subswaths is converted to GeoTIFF and these subswaths are nicely aligned, I guess it’s indeed not needed. But what if there are artefacts at the borders…
Well, probably this would need a deeper discussion, as it is quite specific.
Regarding the master, an easy/fast solution is to take a master in the middle of the time series and coregister all the other images with it. If you plan to do short-term coherence analysis, you can create separate pairs and verify afterwards whether all images are aligned after the Terrain Correction. From my experience, in my area of interest the images were nicely aligned after the Terrain Correction.
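The middle-master idea can be sketched as a small helper; the date strings below are illustrative, and any sortable acquisition identifier works:

```python
# Sketch of middle-master selection: pick the acquisition in the middle of
# the time series as master and pair every other scene with it.
def middle_master_pairs(scenes):
    """Return (master, [(master, slave), ...]) using the temporal-middle master."""
    ordered = sorted(scenes)
    master = ordered[len(ordered) // 2]
    pairs = [(master, s) for s in ordered if s != master]
    return master, pairs
```

This halves the worst-case temporal baseline compared with taking the first scene as master.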
There is still another solution which, if I am right, is not yet available within SNAP: the multi-master interferogram. This is probably what you need.
Regarding artefacts at the borders, there will probably be some. Please check this as well. For GRD you could apply the Border Noise Removal, but then you will probably need to merge the subswaths. It will be interesting to see whether that is required or not. Maybe @lveci can say something more precise in this regard.
It looks like this issue is still open, in terms of creating a stack of S-1 interferograms coregistered to one master (what @Tomcater was attempting and I think what @mdelgado refers to as a multi-master interferogram?). I ran the python wrapper scripts from the snap2stamps package, which runs very nicely and smoothly (thanks). This creates a series of interferograms with one single master. I do not have access to Matlab or Gamma to continue with StaMPS, unfortunately.
Failing the ability to run back-geocoding/ESD on more than two images, I created a batch of back-geocoded/ESD-registered image pairs from a time series (all in the same relative orbit, location, etc.), all with the same master (i.e. master-slave pairs as mst-slv1; mst-slv2; mst-slv3, etc.). I thought I could be clever and replace the .img and .hdr files such that the data from slv1 in pair one becomes mst in pair two. I encountered some problems with the datatypes, so I converted the slv1 data from float32 to int16 and modified the .dim file.
I managed to ‘trick’ SNAP into generating an interferogram from slv1-slv2 in this way, but I end up with some very strange results (image below). On the left is the first mst-slv1 image, in the middle is slv1-slv2 and on the right slv2-slv3. I am almost certain that this is because the flat-earth phase removal is using the wrong metadata. However, on the second row are the interferograms without flat earth removal and the results are not as expected either. Do people have any suggestions for which parts of the metadata I need to adapt to overcome this, or am I overlooking some fundamental aspects of SNAP/S-1 data/InSAR that would make this a pointless exercise?