INSAR / DINSAR - Perpendicular baseline calculation

It would be a very welcome feature to have, already in SNAP 7 or 8.

Is this possible? @marpet @lveci

Probably not for 7, because we are currently working on v8, but it could be requested for future versions.

1 Like

Dear ABraun
my Intensity SLV band is missing in the stack after coregistration. I have chosen the interferogram as required, but the result is still the same. What should I do? Thank you in advance.

As a first step, I recommend reducing the bursts of both inputs so that the same bursts are covered. Then apply orbit files and back geocoding.

Sorry, could you tell me how to reduce the bursts? Thank you. I tried Subset, but it doesn't work.

the Split operator lets you define the sub-swath and the bursts, please check here:
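If you prefer to script the splitting, the same burst selection can be driven through gpt on the command line. A minimal sketch that just assembles the command (the file name and burst indices are placeholders, and the parameter names are assumed from the S1TBX TOPSAR-Split operator; check `gpt TOPSAR-Split -h` on your install):

```python
# Build a gpt command line for TOPSAR-Split (parameter names assumed
# from the S1TBX TOPSAR-Split operator; adjust to your product).
def topsar_split_cmd(source_zip, subswath="IW2", first_burst=3,
                     last_burst=6, polarisations="VV",
                     target="split_output.dim"):
    return ["gpt", "TOPSAR-Split",
            f"-Psubswath={subswath}",
            f"-PselectedPolarisations={polarisations}",
            f"-PfirstBurstIndex={first_burst}",
            f"-PlastBurstIndex={last_burst}",
            "-t", target,
            source_zip]

# Print the command so it can be copy-pasted into a shell
print(" ".join(topsar_split_cmd("S1A_IW_SLC.zip")))
```

Running the resulting command once per scene, with burst indices chosen so that both scenes cover the same ground, is what the burst-reduction advice above amounts to.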

Thank you, I tried that, but it also doesn't work.


I would like to compute the perpendicular baseline between two S1-IW products.
I have already read various information from the .xml metadata in the zipped S1 product, but I don't know how the PerpBaseline is computed.

There are 17 orbit positions (x, y, z) in the metadata of one IW with one polarisation. Should I average them? Take the first one and compute my own geometry?

Any info is very welcome!

1 Like

you can simply load both products into SNAP and use Menu > Radar > Interferometric > InSAR Stack Overview

Yes I saw that, but I need to get this value in my python scripts.
Thank you for the quick answer!

sorry, this was not clear from your question.

Yes, sorry for that. Actually I would like to know how SNAP computes the Bperp from the metadata; then I can pick the right orbit metadata (because there is plenty of it) and apply the same formula :slight_smile:
Question for those who worked on this? @lveci? (Very sorry, I don't know much about Java.)

I take the opportunity to ask something

Reviewing more articles on the perpendicular baseline, I find that a small Bperp makes the phase "more sensitive" and can increase errors due to atmospheric and other conditions, whereas with longer perpendicular baselines these errors decrease.

But other articles mention that for estimating displacements/deformations it is better to use small perpendicular baselines, while for generating DEMs by interferometry long perpendicular baselines are needed. (SARf pg.16)

Gomes (2020) says: "The smaller the perpendicular baseline, the higher the ambiguity height, which means the topographic component of the interferogram phase will be smaller. This component is subtracted in differential interferometry, but possible errors in the digital elevation model used can generate phase differences that are erroneously interpreted as terrain subsidence. The diameter of the orbital tube of the Sentinel-1 mission, around 100 meters, keeps the baseline values small, sometimes close to zero, resulting in ambiguity heights often greater than 500 meters."

So I want to ask you, the experts: for subsidence studies using Sentinel-1, which Bperp is better?
What are the critical Bperp values for Sentinel-1 that make this kind of study with InSAR PSI techniques difficult?
Within what range should the Bperp of the images to download fall?
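For a rough feel of the trade-off being asked about, the height of ambiguity and the perpendicular baseline are linked (for repeat-pass interferometry) by h_a = λ R sin(θ) / (2 B⊥). A minimal sketch with nominal Sentinel-1 IW mid-swath values (the wavelength, slant range, and incidence angle defaults are assumed ballpark figures, not taken from this thread):

```python
import math

def height_of_ambiguity(b_perp, wavelength=0.05546, slant_range=900e3,
                        incidence_deg=39.0):
    """Repeat-pass height of ambiguity h_a = lambda * R * sin(theta) / (2 * B_perp).
    Defaults are rough Sentinel-1 IW mid-swath values (assumed)."""
    theta = math.radians(incidence_deg)
    return wavelength * slant_range * math.sin(theta) / (2.0 * b_perp)

# A small |B_perp| (tight orbital tube) gives a large height of ambiguity,
# i.e. weak topographic sensitivity: good for deformation, poor for DEMs.
for b in (10, 30, 63.7, 150):
    print(f"B_perp = {b:6.1f} m -> h_a ~ {height_of_ambiguity(b):7.1f} m")
```

With these assumed values, a Bperp of a few tens of metres already pushes h_a toward several hundred metres, which is consistent with the Gomes quote above.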


Topography impact on coregistration (based on 70 images):
Given a 4963 m height range and a 41 m baseline range, the maximum topographic range shift is estimated at 5 percent of the resolution cell.
— In this case, reviewing the plot of temporal baseline vs. perpendicular baseline, I observe that all the images fall between -21 m and 21 m. The intention is to determine displacements with PSI.

Why is it not possible to co-register the images in my case?

1 Like

For anyone who still wants to compute the baseline with snappy, I ended up using the getBaselines method of the CreateStackOp operator (there may be a better way, but it is the only one that works for me at the moment):

import snappy as snap
# read products
product1 = snap.ProductIO.readProduct('/path/to/')
product2 = snap.ProductIO.readProduct('/path/to/')
# import the stack operator
create_stack = snap.jpy.get_type('org.esa.s1tbx.insar.gpf.coregistration.CreateStackOp')
# Use the getBaselines method.
# 1st argument: list of products between which you want to compute the baseline
# 2nd argument: a product that will receive the baselines as new metadata
create_stack.getBaselines([product1, product2], product1)
# Now there is a new piece of metadata in product one called 'Baselines'
baseline_root_metadata = product1.getMetadataRoot().getElement('Abstracted_Metadata').getElement('Baselines')
# You can now display all the baselines between all master/slave configurations
master_ids = list(baseline_root_metadata.getElementNames())
for master_id in master_ids:
    slave_ids = list( baseline_root_metadata.getElement(master_id).getElementNames())
    for slave_id in slave_ids:
        print(f'{master_id}, {slave_id}')
        baseline_metadata = baseline_root_metadata.getElement(master_id).getElement(slave_id)
        for baseline in list(baseline_metadata.getAttributeNames()):
            print(f'{baseline}: {baseline_metadata.getAttributeString(baseline)}')

The for loop will return something like:

Master: 06Oct2015, Slave: 06Oct2015
Perp Baseline: 0.0
Temp Baseline: 0.0
Modelled Coherence: 1.0
Height of Ambiguity: Infinity
Doppler Difference: 0.0

Master: 06Oct2015, Slave: 18Oct2015
Perp Baseline: 63.730323791503906
Temp Baseline: -12.000000953674316
Modelled Coherence: 0.9332881569862366
Height of Ambiguity: -245.80726623535156
Doppler Difference: -4.754251956939697

Master: 18Oct2015, Slave: 06Oct2015
Perp Baseline: -63.72906494140625
Temp Baseline: 12.000000953674316
Modelled Coherence: 0.933289110660553
Height of Ambiguity: 245.8077392578125
Doppler Difference: 4.754251956939697

Master: 18Oct2015, Slave: 18Oct2015
Perp Baseline: 0.0
Temp Baseline: 0.0
Modelled Coherence: 1.0
Height of Ambiguity: Infinity
Doppler Difference: 0.0

And if you want to get only a specific number:

baseline_root_metadata.getElement('Master: 06Oct2015').getElement('Slave: 18Oct2015').getAttributeDouble('Perp Baseline')

Dear Mr. Braun!
My name is Gadel Bakhtigareev. I am a postgraduate student at the Russian State University of Oil and Gas named after I.M. Gubkin. I study at the Department of General and Oil and Gas Field Geology, in the programme "Geology, prospecting and exploration of oil and gas fields."
I am now working on the development of a new method for identifying residual oil reserves, including the use of the InSAR method. A PhD thesis is being prepared on this topic. InSAR data is in demand for the development of oil and gas fields in Russia.
I wrote a request about this, and I was sent the website address.
Please tell me: do I need to download the SNAP software and use the data from that site in order to determine the degree of deformation for the terrain that I am studying as part of my work?

The address you mentioned is where you download Sentinel-1 data.

You can download SNAP here: SNAP Download – STEP

To study deformation, please check these sources:

InSAR Principles: Guidelines for SAR Interferometry Processing and Interpretation (ESA TM-19)

Sentinel-1 TOPS interferometry

Thank you, but I would like to get rid of snappy…

If someone figures out where in the S1 metadata SNAP gets the information to compute the Bperp, it is still very welcome!
Thanks a lot :slight_smile:

I took a look at the SNAP InSAR stack code on GitHub and I think I have a rough idea of how it is implemented. Note: the code is in Java but I'm a Python person, so my interpretation may be wrong.

  1. Bperp for each pair is calculated here

  2. Then the model method within the Baseline class is used to calculate Bperp. Within the model method there is the line final double bPerp = baselineComponents.getBperp();

  3. In the getBperp method, polyVal is calculated. The data is normalized before calculating polyVal using this method. And I think at this point you have the Bperp value.

All of this refers to line, pixel, and height, so maybe it is calculating Bperp based on the geolocationGridPoint data in the XML metadata. Though I'm not sure whether the height is set to zero in the Bperp calculations.
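My reading of that fit-then-polyVal flow, sketched in Python (this is an interpretation, not the actual SNAP implementation; the function names, the polynomial order, and the normalization formula are all assumptions): sample Bperp on a coarse grid of (line, pixel) points, normalize the coordinates for numerical stability, fit a low-order polynomial, then evaluate it at any requested pixel.

```python
import numpy as np

def normalize(x, x_min, x_max):
    # Map coordinates into roughly [-2, 2] so the least-squares fit
    # stays well conditioned (the exact formula in SNAP may differ).
    return (x - 0.5 * (x_min + x_max)) / (0.25 * (x_max - x_min))

def fit_bperp(lines, pixels, bperp_samples):
    """Fit a 2D polynomial surface to Bperp samples on a (line, pixel) grid."""
    ln = normalize(lines, lines.min(), lines.max())
    px = normalize(pixels, pixels.min(), pixels.max())
    # Design matrix for a bilinear-plus-quadratic surface
    A = np.column_stack([np.ones_like(ln), ln, px, ln * px, ln**2, px**2])
    coeffs, *_ = np.linalg.lstsq(A, bperp_samples, rcond=None)
    return coeffs

def eval_bperp(coeffs, line, pixel, lines, pixels):
    """The 'polyVal' step: evaluate the fitted surface at one (line, pixel)."""
    ln = normalize(line, lines.min(), lines.max())
    px = normalize(pixel, pixels.min(), pixels.max())
    basis = np.array([1.0, ln, px, ln * px, ln**2, px**2])
    return basis @ coeffs
```

If this reading is right, the geolocationGridPoint entries would only supply the sample grid; the Bperp values at the samples still come from the orbits.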

This is just a really rough interpretation of the SNAP code.


Thank you for taking the time to dig into this!

I tried to understand the Java code too, and it seems that <geolocationGridPoint> refers to the geolocation of a pixel at the sensor level; there are usually 210 of them.

The line final double bPerp = baselineComponents.getBperp(); is inside a triple loop:

    // Loop over heights (k), lines (i), pixels (j) to estimate baseline
    for (long k = 0; k < N_HEIGHTS; ++k) {...        // 1. height levels
      for (long i = 0; i < N_POINTS_AZI; ++i) {...   // 2. azimuth direction
        for (long j = 0; j < N_POINTS_RNG; ++j) {... // 3. range direction

where .getXYZ(mTazi) is used to read x, y, z at the master azimuth time, and the same for the slave.
So unfortunately, I don't think we can solve this using only the line, pixel, and height information…
Maybe @lveci can help with that? Is it that complicated?

Thanks a lot!

I tried to keep it simple with the <orbitList>:

    <orbitList count="17">
        <frame>Earth Fixed</frame>

I computed the baseline as mean(slave(x, y, z)) - mean(master(x, y, z)), using the Earth-fixed frame.
But the right answer does not come out…
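For what it's worth: differencing mean orbit positions only gives the total baseline vector. Bperp additionally depends on the look direction to a ground target, and the master and slave positions must first be interpolated to the same azimuth time (the .getXYZ(mTazi) calls mentioned above). A minimal sketch of the geometric decomposition (positions here are placeholders in ECEF metres; SNAP also applies a sign convention that this unsigned version omits):

```python
import numpy as np

def baseline_components(master_pos, slave_pos, target_pos):
    """Split the baseline vector into components parallel and perpendicular
    to the master's line of sight to a ground target. All positions are
    assumed to be ECEF metres at the same azimuth time."""
    master_pos, slave_pos, target_pos = (
        np.asarray(p, dtype=float) for p in (master_pos, slave_pos, target_pos))
    baseline = slave_pos - master_pos          # total baseline vector B
    los_hat = target_pos - master_pos
    los_hat /= np.linalg.norm(los_hat)         # unit line-of-sight vector
    b_par = baseline @ los_hat                 # B_parallel (along the LOS)
    b_perp = np.linalg.norm(baseline - b_par * los_hat)  # unsigned B_perp
    return b_perp, b_par
```

So averaging the 17 state vectors is not enough by itself: the decomposition needs a line of sight, which is why SNAP evaluates it per line/pixel.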