I’m trying to compute a differential interferogram in SNAP version 5.0 to measure surface deformation after a flood, using one Sentinel-1 SLC image from before the flood and one from after.
I’m using a Windows 10 laptop, 64-bit, with 16 GB RAM and an Intel i7 CPU.
Now to the question: when I use the Graph Builder with the “TOPSAR Coreg Interferogram” graph, the first step asks me to load the two images.
Logically, I would have thought to use the image from before the flood as the first input (master) and the image from after the flood as the second input (slave).
After computing everything I need for the displacement measurement (Read -> Apply Orbit File -> Back-Geocoding -> Interferogram -> TOPSAR Deburst -> TopoPhaseRemoval -> 2x Goldstein Filtering -> SNAPHU Export -> all the steps in SNAPHU -> SNAPHU Import -> Phase to Displacement), I get quite a nice picture with everything I wanted, except that when I compared it to an orthophotograph, my displacements were the exact opposite of what really happened: where there had been uplift, my displacement map showed subsidence, and vice versa.
I then tried loading the image from AFTER the flood first and the one from before second, and now it works: the displacements are displayed correctly.
Can anyone tell me if that is normal? Maybe I just have a logical error in my mind, but shouldn’t the image acquired BEFORE an event be the FIRST image to be read, as in my first try? And if so, can anyone tell me why my deformations are the exact opposite of what happened?
SNAP shows the information relative to your master image. So an image taken after the event should be your master, and an image taken before the event should be your slave.
Alternatively, it would also be possible to multiply your result image by -1 in Raster -> Band Maths.
I am somewhat confused by this answer. Is this always the case, or is it special to this situation? From my knowledge, the master should be the image from before the event and the slave the image from after. Just like @Mia, I did the same, and in the output the master turned out to be the image from after the event. It’s confusing, because on this same forum it is said that the master should be the earlier image. Please shed more light if possible.
I think your knowledge is right; this is an issue of how SNAP works. If you need information relative to a certain date, you have to have that date as your master. Unfortunately, SNAP doesn’t provide a way to choose the master from a stack.
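For intuition, here is a minimal numpy sketch (not SNAP code, and with made-up pixel values) of the usual convention that the interferometric phase is arg(master · conj(slave)). It shows why swapping master and slave negates the phase, and hence flips the sign of the derived displacement map:

```python
import numpy as np

# Two simulated single-pixel SLC values (unit amplitude, different phases).
pre  = np.exp(1j * 0.3)   # acquisition before the event
post = np.exp(1j * 0.8)   # acquisition after the event

# Interferometric phase: arg(master * conj(slave)).
phase_pre_master  = np.angle(pre  * np.conj(post))  # pre as master
phase_post_master = np.angle(post * np.conj(pre))   # post as master

# Swapping master and slave only changes the sign of the phase,
# which is why the displacement map comes out inverted.
print(phase_pre_master, phase_post_master)  # -0.5 and +0.5
```

So neither ordering is "wrong" per se; the sign convention of the result just depends on which acquisition you designate as master.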
I have analyzed the interferometry of 5 pairs of S1A SLC IW images
(2 January–14 January / 14 January–26 January / 26 January–7 February / 7 February–19 February / 19 February–3 March).
I performed all the steps and at the end calculated the vertical displacement for each pair.
As a cross-check, I repeated the same steps using the first and the last image from the previous pairs (2 January–3 March).
I thought that each pixel would show the sum of the values obtained in the individual analyses, but it did not turn out that way.
For each analysis I used the first image as master and the second as slave.
I have read your post, which says I should do it in reverse.
Please tell me whether I should treat each first image as slave and each second image as master, and why.
Can you please help me and tell me why my calculations are wrong?
Have a nice evening, and please give me feedback.
Theoretically, the sum of the short pairs should equal the total displacement between the first and the last image. But there are many error sources that distort both measurements. Have a look at these slides: https://saredu.dlr.de/unit/insar_errors
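A toy numpy sketch of the point above, with made-up numbers: in the error-free case the short pairs sum exactly to the long pair, but once each measurement carries its own independent error (unwrapping, atmosphere, decorrelation), the two totals no longer match:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical true displacement at one pixel for each 12-day interval (m).
true_steps = np.array([0.002, -0.001, 0.003, 0.000, -0.002])

# Error-free case: the short pairs sum exactly to the long-pair total.
assert np.isclose(true_steps.sum(), 0.002)

# Realistic case: every pair measurement has its own independent error.
measured_pairs = true_steps + rng.normal(0.0, 0.001, size=5)
measured_long  = true_steps.sum() + rng.normal(0.0, 0.001)

# The sum of the short pairs and the single long pair now disagree.
print(measured_pairs.sum(), measured_long)
```

Note also that the errors of the five short pairs accumulate when summed, so the mismatch against the long pair can easily exceed the error of any single measurement.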
Something very strange is happening.
I have analyzed the same 2 images twice, performing identical steps.
The two results have different values.
If I consider the same pixels, the displacement values are different.
How is that possible?
Please help me.
Thank you for your answer, but I cannot understand.
If I take the same images and do the same processing, how is it possible that the result is different?
So based on when I do the processing, does the result change?
This way the result is not reliable.
In addition, my values are not slightly different but very different (even the sign changes).
Considering the same two pixels, the results are as follows:
pixel 1, old result: -0.00158
pixel 1, new result: +0.00002
pixel 2, old result: -0.00157
pixel 2, new result: +0.00042
The only explanation I can think of is that the pixels do not correspond to exactly the same point, so there could be a difference.
But it still seems very strange that the difference is so substantial.
It seems too strange, because I performed the exact same steps on the exact same images.
Can you please clarify this concept?
InSAR measurements are relative, so only differences like (pixel 1 - pixel 2) are meaningful. And if unwrapping starts at a random location, that can explain the rest. Unwrapping is a messy and imperfect process on noisy interferograms, with layover and shadow areas for example.
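One practical consequence: if two runs of the same pair differ only by a constant offset from the unwrapping start, referencing both displacement maps to the same (assumed stable) pixel makes them comparable again. A minimal numpy sketch with made-up values:

```python
import numpy as np

# Displacement maps from two runs of the same image pair (hypothetical, m).
# Run B equals run A plus an arbitrary constant from the unwrapping start.
run_a = np.array([[0.010, 0.012],
                  [0.008, 0.011]])
run_b = run_a + 0.0283

# Reference both maps to the same pixel, assumed stable, here (0, 0).
ref = (0, 0)
rel_a = run_a - run_a[ref]
rel_b = run_b - run_b[ref]

# The relative (pixel-to-pixel) values now agree between the runs.
print(np.allclose(rel_a, rel_b))  # True
```

This only removes a constant offset, of course; spatially varying unwrapping errors or atmospheric contributions will still differ between runs.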
OK, but the difference between two pixels corresponds to a geolocation.
So if I obtain a different difference between the same two pixels, using the same images and the same process, that is a problem.
How can I solve the problem and obtain the same result?
If there is a way to define the starting pixel in SNAPHU and always use the same one, that should solve your problem. Still, the fact remains that unwrapping errors are stochastic, so you cannot treat your result as the “one and only true result” but as an estimate of what actually took place. Note also that interferograms inherently contain signal from the variable tropospheric delay, and in some cases also the ionosphere.
Can nobody help me with the starting pixel of SNAPHU?
Furthermore, I need your help with another question:
I made a subset of an image using the coordinates of a city (N, S, E, W), and at the end I exported the final result to Google Earth (.kmz).
I noticed that there is an error in the coordinates; in fact, I don’t get the full city but only a part of it.
I have repeated the operations several times, but the problem persists.
Can somebody help me, please?
Can nobody help me with the starting pixel of SNAPHU, so that I can get the same results from different runs on the same images, and with the coordinates that do not correspond to the real coordinates?
Please help me.