Fault Deformation Steps

Hi everyone,

I am trying to detect deformation on faults using Sentinel-1 SLC IW data, but I am a little confused about the steps I should apply:

  1. My first step for every image is splitting, debursting, and merging the data; afterwards I create a subset with only the VV or HH band (because I am only interested in those bands).

  2. After doing the steps above for all my images, do I need to apply:

  • Radiometric Calibration
  • Multilooking (I don’t know what to choose for “Number of Range and Azimuth looks”)
  • Speckle Filtering, Single Product (after I apply this, SNAP says “this product is no longer SLC data” when I attempt coregistration)
  • Terrain Correction (Range-Doppler TC)

  3. Do I need to apply these steps after step 2, or do I start the coregistration directly?

  4. And finally, what is the difference between “Goldstein Filtering” and “Spectral – Azimuth and Range Filtering”?

Thanks for your replies.

You should use the TOPS coregistration rather than cross-correlation and warp; otherwise you may get phase jumps between bursts in some scenes.
Speckle filtering is applied only to the intensity, so your output will no longer be complex data.
The Goldstein filter makes the phase less noisy. The azimuth and range filtering removes the non-overlapping parts of the spectra when the Doppler centroid frequencies of the master and slave products differ.
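For intuition, the core idea of the Goldstein filter can be sketched in a few lines. This is a deliberately simplified, hypothetical single-patch version, not SNAP's implementation: real implementations smooth the spectrum magnitude first and blend overlapping patches.

```python
import numpy as np

def goldstein_patch(ifg_patch, alpha=0.5):
    """Simplified Goldstein-style filter for one complex interferogram patch:
    weight the patch spectrum by its own magnitude raised to the power alpha,
    which boosts the dominant fringe frequency relative to broadband noise."""
    spec = np.fft.fft2(ifg_patch)
    weight = np.abs(spec) ** alpha  # real implementations smooth this first
    return np.fft.ifft2(spec * weight)

# A pure fringe (single spatial frequency) passes through with phase unchanged.
x = np.arange(8)
fringe = np.exp(2j * np.pi * np.outer(x, np.ones(8)) / 8)
filtered = goldstein_patch(fringe)
print(np.allclose(filtered / np.abs(filtered), fringe))  # True
```

The exponent alpha controls the filter strength: alpha = 0 leaves the patch unchanged, while larger values suppress weak spectral components (noise) more aggressively.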


First of all, thanks for your reply @lveci, it was really helpful on some points,
but after I split, deburst, and merge the image and create a subset from it,
that data can no longer be used for TOPS coregistration, right?

For TOPS coregistration I have to use the original (unsplit, not debursted) IW data,
but the problem is that working with all three IW sub-swaths is enormous and hard to process.

Instead, I split the IWs, deburst and merge them, and create a subset which is small compared to the original image.

The TOPS coregistration graph applies orbit correction, splits, and applies backgeocoding. You can then generate the interferogram. Do this for all 3 swaths and then deburst and merge the swaths.
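As a rough sketch, that per-swath chain could be written as a GPT graph like the one below. The file names, sub-swath, and polarisation are placeholders, and operator parameters left empty fall back to defaults; treat this as an illustration of the node order, not a ready-made recipe.

```xml
<graph id="TOPS_Coreg_Ifg">
  <version>1.0</version>
  <node id="ReadMaster">
    <operator>Read</operator>
    <sources/>
    <parameters><file>MASTER_SLC.zip</file></parameters>  <!-- placeholder -->
  </node>
  <node id="ReadSlave">
    <operator>Read</operator>
    <sources/>
    <parameters><file>SLAVE_SLC.zip</file></parameters>   <!-- placeholder -->
  </node>
  <node id="OrbitMaster">
    <operator>Apply-Orbit-File</operator>
    <sources><sourceProduct refid="ReadMaster"/></sources>
    <parameters/>
  </node>
  <node id="OrbitSlave">
    <operator>Apply-Orbit-File</operator>
    <sources><sourceProduct refid="ReadSlave"/></sources>
    <parameters/>
  </node>
  <node id="SplitMaster">
    <operator>TOPSAR-Split</operator>
    <sources><sourceProduct refid="OrbitMaster"/></sources>
    <parameters><subswath>IW1</subswath><selectedPolarisations>VV</selectedPolarisations></parameters>
  </node>
  <node id="SplitSlave">
    <operator>TOPSAR-Split</operator>
    <sources><sourceProduct refid="OrbitSlave"/></sources>
    <parameters><subswath>IW1</subswath><selectedPolarisations>VV</selectedPolarisations></parameters>
  </node>
  <node id="BackGeocoding">
    <operator>Back-Geocoding</operator>
    <sources>
      <sourceProduct refid="SplitMaster"/>
      <sourceProduct.1 refid="SplitSlave"/>
    </sources>
    <parameters/>
  </node>
  <node id="Interferogram">
    <operator>Interferogram</operator>
    <sources><sourceProduct refid="BackGeocoding"/></sources>
    <parameters/>
  </node>
  <node id="Deburst">
    <operator>TOPSAR-Deburst</operator>
    <sources><sourceProduct refid="Interferogram"/></sources>
    <parameters/>
  </node>
  <node id="Write">
    <operator>Write</operator>
    <sources><sourceProduct refid="Deburst"/></sources>
    <parameters><file>ifg_IW1.dim</file></parameters>     <!-- placeholder -->
  </node>
</graph>
```

A graph like this can be run from the command line with `gpt graph.xml` and repeated per sub-swath (IW1–IW3) before merging.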


How does this process look for analyzing deformation between 2 S1 passes?

Can I skip the first write/read and run all the processes to snaphu export?



The overall order of steps seems fine to me.
Although automating such tasks is highly desirable, I don’t recommend executing them as graphs, especially for InSAR processing, where you have to check the results of the individual steps for correctness, possibly adjust parameters, and re-run, etc.


Thanks, Andreas, more brains are better. I think the Multilook step is unnecessary when Terrain Correction is applied (per the S1TBX basic tutorial).

Chain processes in graphs where adjustment is not necessary; write out results where adjustments can be made to improve the analysis for particular conditions. I see 2 uses for the Graph Processing Tool (GPT): constructing process chains, and serving as a flow chart (note that in my flow chart above, the lines were drawn outside SNAP in an editable chart).

I noted that, but wanted to be sure. Some people here have already wanted to fully automate this process, which is neither a good idea nor possible with GPT.

Multi-looking makes sense if your data is too large and you have clear fringe patterns. The azimuth-to-range sampling ratio also plays a role. If you have good processing capacity and want to keep the maximum information content, you can skip it.
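On choosing the number of looks: a common target is roughly square ground pixels. The sketch below illustrates the arithmetic; the spacing and incidence-angle values are nominal assumptions for Sentinel-1 IW SLC, not values read from any particular product.

```python
import math

def looks_for_square_pixels(slant_rg_spacing_m, az_spacing_m, incidence_deg):
    """Estimate (range looks, azimuth looks) for roughly square ground pixels.
    Slant-range spacing is projected to ground range via the incidence angle."""
    ground_rg = slant_rg_spacing_m / math.sin(math.radians(incidence_deg))
    ratio = az_spacing_m / ground_rg  # azimuth-to-ground-range sampling ratio
    if ratio >= 1:
        return round(ratio), 1
    return 1, round(1 / ratio)

# Nominal (assumed) Sentinel-1 IW SLC spacings and a mid-swath incidence angle
rg_looks, az_looks = looks_for_square_pixels(2.33, 14.0, 39.0)
print(rg_looks, az_looks)  # 4 1
```

This is consistent with the 4/1 multilook setting mentioned later in the thread; with more looks you trade resolution for reduced phase noise.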

A token of appreciation for your help: a terrain-corrected displacement product from the workflow above. In the lower left is Anak Krakatau, surrounded by remnants of Krakatau. The two Sentinel-1 images used are dated 19 and 31 December 2018. The image is shown at actual pixel resolution in SNAP. Visual images tell a somewhat different story. This is my first displacement image; iterations of the process might help improve it. Any suggestions are very welcome.

Hello guys, sorry for my question, but what is the vertical accuracy or vertical resolution of Sentinel-1A?
Thank you.

Here I made displacement maps, following the steps in the flow chart above (not with GPT), for 2 sub-swaths (one of 4 bursts, the other of 3 bursts), then applied Radar > Geometric > SAR Mosaic. Although I used the same processing parameters (10/2 coherence window size in interferogram formation, 4/1 multilooks), the 2 images appear to be at different scales.


Do I have to use the same number of bursts to get the same scale?

The image spans October 10 – November 21, 2016, covering the Kaikoura earthquake, South Island, New Zealand.

Maybe I answered my own palette-match question? When working with multiple displacement images, from different swaths, different image products, or a time series, find the minimum and maximum data values of all of the images. Then create a palette using the overall minimum and overall maximum values for the palette range. My palette goes red (uplift) to white (no displacement) to blue (subsidence). Place the file in the .snap>auxdata>color_palettes sub-directory. The file (attached) reads:

#BEAM Color Palette Definition File
#February 23, 2019 palette for InSAR displacement
#edit the next line to the minimum value of the set
#edit the next line to the maximum value of the set

0_displacement.cpd (310 Bytes)
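The min/max bookkeeping described above can be sketched in a few lines. The arrays here are dummy stand-ins for real displacement rasters exported from SNAP; only the reduction over all images is the point.

```python
import numpy as np

# Dummy stand-ins for exported displacement rasters (values in metres)
displacement_maps = [
    np.array([[-0.12, 0.03], [0.25, -0.07]]),  # e.g. swath 1
    np.array([[-0.30, 0.10], [0.18, 0.02]]),   # e.g. swath 2
]

# Use the overall minimum and maximum so one palette range covers every image
vmin = min(float(np.nanmin(a)) for a in displacement_maps)
vmax = max(float(np.nanmax(a)) for a in displacement_maps)
print(vmin, vmax)  # -0.3 0.25
```

These two numbers are what you would paste into the palette file's minimum and maximum lines, so all images share one colour scale.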

I used a text editor to create the palette, using an included palette as a template. I think it can also be done in the SNAP GUI, in the colour manipulation tool window. There, with the palette selected, select the basic pane, then the Get Data from (palette) File button (as opposed to the image data range).

In the upper right of this pane there is also a button to apply the palette to other image files; a check list is presented.

I still don’t get a perfect match with Radar > Geometric > SAR Mosaic, but it is much better than the image above. Is there a better way to match images? Are there processing parameters that can be set to maintain the scale of displacement between images?

The cause of discontinuities between processed blocks is briefly described in the ESA InSAR Principles manual. Part B, page 30, says:

"Mosaicking is required when several interferograms (each, say,
30 × 100 km) are joined together to make a long strip. The need for block
processing arises not only for computational efficiency, but to reduce the
error due to the many approximations made so far (for example: the
co-registering model, the DEM vs. SAR image alignment, the Doppler
Centroid variation with azimuth etc.).

When overlapping adjacent blocks, a phase offset could arise due to small
errors in image co-registering. This bias can be avoided if the image
mapping is estimated over the whole strip. In some cases, the bias can be
estimated, e.g. by cross-correlating the interferograms in the overlap area;
however such techniques may lead to poor results in cases of low SNR."

To merge burst swaths, deburst the splits, then open the Radar > Sentinel-1 TOPS menu and select S-1 TOPS Merge. Then return to the interferometric processing flow at TopoPhaseRemoval.

The Radar > Geometric > SAR Mosaic operator in SNAP is not suitable for merging TOPSAR swaths; subsequent operations expect 2 coregistered input products.
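The deburst-and-merge flow described above might look like the graph sketch below. The input file names are placeholders for the three per-swath debursted interferograms, and empty parameter blocks fall back to operator defaults; this is an illustration of the node order only.

```xml
<graph id="TOPS_Merge_Swaths">
  <version>1.0</version>
  <node id="ReadIW1">
    <operator>Read</operator>
    <sources/>
    <parameters><file>ifg_IW1_deb.dim</file></parameters>  <!-- placeholder -->
  </node>
  <node id="ReadIW2">
    <operator>Read</operator>
    <sources/>
    <parameters><file>ifg_IW2_deb.dim</file></parameters>  <!-- placeholder -->
  </node>
  <node id="ReadIW3">
    <operator>Read</operator>
    <sources/>
    <parameters><file>ifg_IW3_deb.dim</file></parameters>  <!-- placeholder -->
  </node>
  <node id="Merge">
    <operator>TOPSAR-Merge</operator>
    <sources>
      <sourceProduct refid="ReadIW1"/>
      <sourceProduct.1 refid="ReadIW2"/>
      <sourceProduct.2 refid="ReadIW3"/>
    </sources>
    <parameters/>
  </node>
  <node id="TopoPhaseRemoval">
    <operator>TopoPhaseRemoval</operator>
    <sources><sourceProduct refid="Merge"/></sources>
    <parameters/>
  </node>
  <node id="Write">
    <operator>Write</operator>
    <sources><sourceProduct refid="TopoPhaseRemoval"/></sources>
    <parameters><file>ifg_merged.dim</file></parameters>   <!-- placeholder -->
  </node>
</graph>
```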

Maybe the Graph Processing Tool (GPT) would serve to preserve processing parameters when comparing images of different temporal baselines? Interferograms using radar images immediately prior to and after the earthquake are common (the time span includes the time of the earthquake). Pre- and post-seismic deformation is also interesting: deformation in the months before and after the earthquake. The earthquake occurs at critical stress, stress that has been accumulating; likewise, deformation continues after the displacement associated with the earthquake. The deformation of the July 4th and 6th earthquakes at Ridgecrest, California, produces nice interferograms, as humidity and vegetation density are very low.