Some explanations about the concepts of interferogram fringes and coherence

Dear all,

  1. The phase difference can have contributions from five different sources:
    Δφflat is the flat Earth phase, the phase contribution due to the Earth's curvature (the reference surface).
     Δφelevation is the topographic contribution to the interferometric phase.
     Δφdisplacement is the surface deformation contribution to the interferometric phase.
     Δφatmosphere is the atmospheric contribution to the interferometric phase. It is
    introduced by changes in atmospheric humidity, temperature and pressure between
    the two acquisitions.
     Δφnoise is the phase noise introduced by temporal change of the scatterers, different
    look angles, and volume scattering.
    We can remove the flat Earth phase, part of the noise, and the elevation term (by Topographic Phase Removal) following the `TOPS Interferometry Tutorial´, but the atmospheric phase remains. What can we do to remove it?

  2. Please look at the example in the `TOPS Interferometry Tutorial´ about the 2014 eruption of the
    Fogo volcano in Cape Verde.
    In the end, we produced an interferogram with fringes, but I do not know what these fringes and colours mean. Do they indicate surface displacement? How can one interpret them? Is there any material that describes these fringes? Please explain this for me.

  3. We made a coherence image with white and black areas. I do not know what they mean for the scenes before and after the eruption. Please explain this for me.

  4. The `TOPS Interferometry Tutorial´ explains that the coherence band shows how similar each pixel is between the slave and master images on a scale from 0 to 1. Areas of high coherence will appear bright; areas with poor coherence will be dark. In the image, vegetation shows poor coherence and buildings very high coherence.
    What does the above paragraph mean?
    Does it mean that vegetation changed a lot after the eruption and buildings did not change?
    Please explain these points for me…
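The five-term phase model listed in question 1 can be sketched numerically. This is a hedged illustration with made-up arrays, not SNAP code: after removing the modelled flat-Earth and topographic terms, the residual phase still contains displacement, atmosphere and noise.

```python
import numpy as np

# Hedged sketch (not SNAP code): the five-term phase model from question 1,
# with made-up small grids of phase values (radians).
rng = np.random.default_rng(0)
shape = (3, 3)

phi_flat = rng.uniform(-np.pi, np.pi, shape)   # flat-Earth phase (modelled from orbits)
phi_topo = rng.uniform(-np.pi, np.pi, shape)   # topographic phase (modelled from a DEM)
phi_disp = np.full(shape, 0.5)                 # displacement phase (what we want)
phi_atmo = np.full(shape, 0.1)                 # atmospheric phase (hard to model)
phi_noise = rng.normal(0.0, 0.05, shape)       # decorrelation noise

# Measured interferometric phase: the wrapped sum of all contributions.
phi_total = np.angle(np.exp(1j * (phi_flat + phi_topo + phi_disp + phi_atmo + phi_noise)))

# Removing the modelled flat-Earth and topographic terms
# leaves displacement + atmosphere + noise.
phi_residual = np.angle(np.exp(1j * phi_total) * np.exp(-1j * (phi_flat + phi_topo)))
```

This makes explicit why the atmospheric term is the hard part: it stays mixed with the displacement signal after all modelled terms are subtracted.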

  1. The atmospheric phase cannot be removed by a SNAP module because, unlike the flat-Earth or topographic phase, it cannot be modelled directly. If you had reference measurements of water vapour, or dense time-series data (example), you could theoretically remove the atmospheric phase distortions. This is advanced, however (example 2).

  2. You calculate an interferogram between a pair of images. It shows the differences between the two images in their phase signal. Because the satellite acquired the images from slightly different positions, with a known distance between them (the perpendicular baseline), you perceive these fringes. They reveal the shape (and also movement) of the terrain. Please read this document first to understand why an interferogram is created and what it can explain: InSAR principles. There are also nice slides on the SAR EDU page for InSAR:

  3. You only have one coherence image for the before/after pair. Generally, the coherence tells us how consistently the two signals are ‘in phase’ with each other, i.e. how well the received waves align (somebody please correct me if I’m wrong).

    If the signals are out of phase, you can’t retrieve reliable information from your interferogram at those locations. You therefore usually only work with pixels with a coherence of 0.3 or above (or some other threshold).

  4. Low coherence values are caused by decorrelation, which has several causes. For example, temporal decorrelation occurs when objects move or change between the first and the second image. Vegetation is strongly affected: even tall grass moved by the wind decorrelates within seconds. Scattering from within the vegetation canopy additionally causes volume decorrelation.
    If the incidence angles of the two images are too different, you get low coherence due to baseline decorrelation. You can’t do much about that.
    In your example, neither buildings nor vegetation changed location between the two images, but the vegetation may have grown or otherwise changed its shape and structure.
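The coherence threshold mentioned in point 3 above can be applied as a simple mask before interpreting the phase. A minimal sketch with made-up arrays (the names `coherence` and `phase` are not SNAP band names):

```python
import numpy as np

# Hedged sketch: mask out low-coherence pixels before interpreting the phase.
# "coherence" and "phase" are made-up 2x2 arrays, not SNAP band names.
coherence = np.array([[0.9, 0.2],
                      [0.5, 0.1]])
phase = np.array([[1.0, 2.0],
                  [3.0, -1.0]])

threshold = 0.3  # common rule of thumb; pick a value per scene
masked_phase = np.where(coherence >= threshold, phase, np.nan)
# Pixels below the threshold become NaN and are excluded from interpretation.
```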

I highly recommend reading the above-mentioned InSAR principles. Please also have a look at the SAR EDU page, which provides excellent material for free:


Thank you. Regarding 4: if buildings have very poor coherence, does it mean they changed location?

It means they were damaged, for example collapsed

Thanks. I read the materials you suggested. They were very useful, but I still did not find any material on the interpretation of fringes.
I found the picture below, but there is something I did not understand about it. How can people measure this kind of displacement?

This conclusion is crucial for the interpretation of fringes:


Accordingly, one colour cycle means a change of half the sensor's wavelength along the line of sight.
To get absolute change values, these fringes need to be unwrapped (subsequently explained with ALOS data [24 cm wavelength]).
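The quoted rule (one colour cycle = half the wavelength along the line of sight) translates into a one-line conversion. A minimal sketch; the wavelength values are approximate:

```python
# One full colour cycle (fringe) corresponds to a line-of-sight change of
# half the sensor wavelength. Wavelengths below are approximate.
ALOS_WAVELENGTH_CM = 23.6   # L-band
S1_WAVELENGTH_CM = 5.55     # C-band

def los_change_cm(n_fringes, wavelength_cm):
    """Line-of-sight change implied by a count of fringes."""
    return n_fringes * wavelength_cm / 2.0

# e.g. 4 fringes in an ALOS interferogram:
change = los_change_cm(4, ALOS_WAVELENGTH_CM)  # roughly 47.2 cm along the line of sight
```

Note how the longer L-band wavelength of ALOS means each fringe represents much more motion than a Sentinel-1 (C-band) fringe.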

This is also a nice text written in ‘normal’ words.

It originates from a straight-to-the-point FAQ about interferometry:


Thank you so much for your links.
Question about fringes again.
I am using SNAP and can make fringes, but I do not know how to add a fringe colour bar like in the picture below, or how to measure displacements. Is there software for that, or do I have to do it manually?

  1. As you can see in Figure 1, there is a fringe bar for ALOS, but my fringes (Figure 2) have other colours. I do not know why, although I am working with Sentinel-1A, not ALOS.
  2. The fringe pattern may be displayed in black-and-white gradations or in red and blue only, but it is easier to recognise when multiple colours are used.
    I do not know how to make and interpret black-and-white gradations, or red and blue only.

I saw one example from the ALOS satellite that says:
Coseismic displacements in the line-of-sight (LOS) direction were measured as changes of range between the ground surface and the satellite in the western sky. The maximum displacement is about -2 m, which is displacement toward the satellite.
In this picture, I can count fringes from two sides (side 1 and side 2). As I can see (based on the description of the picture at its link), they use side 2. Why? How can we know from which side we should start?

As I informed you before via our messages and emails, and as ABraun has made many comments to you, the issue for you and other colleagues is the expectation that InSAR is just a set of steps anyone can follow to get results. This is not true at all. I'd like to tell you how I started, as you asked me in your email to post about it.

In 2009, when I started my PhD, I began by reading Ramon Hanssen's book, interpreting the physics and building my conceptual understanding. Then I read the PDF manuals A, B and C from ESA. After six months I started reading papers, and later I began using GAMMA Remote Sensing software to apply DInSAR to pairs of images.

You are luckier than me, because now SNAP is available and many experts answer right away on this great STEP forum. So I'd ask you to go step by step; it takes a few months, but you will succeed. Following your posts and those of our colleagues, most say "I'm a beginner and I need help" and that they don't want to go through the theory anymore, and that is not the right approach.


I am reading some books and materials, and if you look closely, you can see my questions are about those same basic books. You are an expert in interferometry, but please keep in mind that some students work in related fields like mine (snow, ice and remote sensing). That means we need to know our own field plus remote sensing. We have no background in interferometry, we are learning step by step during our projects, and we definitely need some help.
Anyway, thanks for your comment.


I have one question. If we take two Sentinel-1 images, one ascending and one descending, will it affect the interferogram in any way?

I can’t imagine that this will give you a proper interferogram, as the looking directions are different. But it’s worth a try.

Some guys tried it with ERS data:


That is not possible; the result will be total decorrelation and no useful phase signal at all. The interfering images must have the same geometry (same orbit track and a perpendicular baseline smaller than the critical baseline).


Fringes appear at an intermediate step, once you have a debursted, topographic-phase-removed and filtered interferogram; at this stage the interferogram is in wrapped-phase form. You can get a preliminary estimate by counting the number of fringes between two points of interest and multiplying that number by half the Sentinel-1 wavelength (the wavelength is about 5.55 cm, so each fringe corresponds to roughly 2.8 cm of line-of-sight change).
To get the deformation spatially, you have to unwrap the phase in SNAPHU; the result is the deformation in metres or centimetres.
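The step from unwrapped phase to deformation described above is a fixed scaling: 2π of phase corresponds to λ/2 of line-of-sight change. A minimal sketch (the sign convention is an assumption here; processors differ):

```python
import numpy as np

WAVELENGTH_M = 0.0555  # Sentinel-1 C-band, approximate

def phase_to_los_m(unwrapped_phase_rad):
    # 2*pi of phase = lambda/2 of line-of-sight change,
    # so displacement = phase * lambda / (4 * pi).
    # The sign (toward/away from the satellite) is an assumption here.
    return np.asarray(unwrapped_phase_rad) * WAVELENGTH_M / (4 * np.pi)

# Two full fringes (4*pi radians) of unwrapped phase give one wavelength of change:
d = phase_to_los_m(4 * np.pi)
```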

I am working with TOPS data over a large area of mixed coherence values. I am at the unwrapping stage and it's not going so well.
I am looking for deformation, but what confuses me slightly is that deformation seems to imply a lack of coherence.
I know InSAR can determine deformation quite accurately in some circumstances, but if lack of coherence is a problem, how is it used for landslides or the large movements caused by earthquakes? Surely these lead to a lack of coherence?

My master thesis is about atmospheric correction, it can be accessed here:

I asked myself the exact same question, and I managed to do an atmospheric correction based on ECMWF weather data. I processed the S1A data with SNAP and did a DEM-based interpolation of the atmospheric values. In the end I corrected the relative unwrapped phase values with the modelled path delay.
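The correction described can be sketched as a phase subtraction. A hedged sketch with placeholder numbers; in the real workflow the differential path delay would come from the interpolated ECMWF weather data, not from hard-coded arrays:

```python
import numpy as np

# Hedged sketch of the correction described above: convert a modelled
# differential path delay to phase and subtract it from the unwrapped phase.
# "path_delay_m" is a placeholder; in practice it would be interpolated
# from ECMWF weather data onto the DEM.
WAVELENGTH_M = 0.0555  # Sentinel-1, approximate

unwrapped_phase = np.array([[10.0, 12.0],
                            [11.0, 13.0]])    # radians
path_delay_m = np.array([[0.020, 0.021],
                         [0.020, 0.022]])     # one-way differential delay, metres

# A one-way path delay of d metres adds 4*pi*d/lambda radians of phase
# (the signal travels the path twice).
atmo_phase = 4 * np.pi * path_delay_m / WAVELENGTH_M
corrected_phase = unwrapped_phase - atmo_phase
```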

Have a look
Kind regards



very interesting - thank you for sharing!

Hello Philippe, is the required weather data freely available globally? If so, it would be possible to develop a tool for SNAP that performs this correction…



Sure! A "one-click option" is exactly what I was aiming for, so this could be implemented in the future with one click in SNAP. But I also think the SNAP research and development team is already doing the research and setup for this task.

The data are globally available; the only drawback is that they are released to the public with a 3-month delay. This means the whole month of January 2017 is the newest dataset right now. That said, the weather data can be made available earlier (with 1-2 days of delay) if you have an official billing account.

Here the link, just create an account and download some sets. I always went with the Net-CDF output:

I also checked a meteo station at my study site; the offsets are pretty small, so the modelled ECMWF weather data are credible.