Questions about the basic principles of SAR and InSAR

  1. I don’t understand why we can’t measure the distance to a specific point (surface) with just one SAR image.
  • If SAR knows the time when the microwaves were transmitted and received, and the location of the satellite, can’t we know the absolute distance between the satellite and the point (surface) without phase information?

(The content below is what I understand about SAR.)

  • SAR imagery is data visualized as a raster, produced by transmitting microwaves in pulse form and detecting the received signal.
  • Since each pixel of the SLC includes signal strength (amplitude) and phase information, I understand that the phase carries the distance information.
  • But since SAR records the time of transmission and reception for each pixel, and the location of the satellite is known, shouldn’t we be able to compute the distance from the satellite to that point (surface, pixel)? Can’t we get the distance from the satellite to the point (surface) without using phase information, just by multiplying the travel time by the speed of light? (I suspect I have this question because I don’t fully understand SAR data… I’m curious what elements I’m missing.)

  2. If the baseline of the two images used when applying InSAR is not zero, that is, if the images are not taken from exactly the same location, can we still apply InSAR?
  • Even if the wavelength used in SAR is as long as 70 cm, even a slight difference between the two acquisition positions will have a big impact on the results. Isn’t this error accounted for?
  • The information contained in the interferogram is simply the phase difference between the two acquisitions. Isn’t it impossible to know how many extra full cycles the slave has traveled compared to the master?
  • Therefore, how do we know whether a pixel that the interferogram shows as farther from the satellite is actually a result of the surface subsiding (eroding) or of the satellite moving farther away?
  • Of course, Sentinel-1 adjusts its orbit to stay within a tube of up to 100 m for InSAR, but if the baseline differs by about 10 m, wouldn’t it be impossible to determine whether the satellite has moved away or subsidence has occurred?

For example, suppose no subsidence occurred at point A, the master satellite is 100 m away from point A, and the slave satellite is 110 m away from point A (the wavelength is 20 cm).

  • If InSAR is applied using the two SAR images, it will determine that point A is 10 m farther away, and fringes will be created based on that determination.
  • However, the interferogram only shows the wrapped phase difference (−π to π), not the absolute distance (10 m), so only the wrapped remainder of the 10 m appears on the interferogram.
  • But if you just look at the value of a fringe without knowing the answer, how can you tell whether it is the phase difference after one cycle, or after 100 cycles (10 m / (20 cm / 2)) as in this example? (See the worked example after this list.)
  • Also, how do we know whether actual erosion has occurred or whether the satellite is simply farther away?
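
As a worked illustration of the wrap-around in this example (using the standard two-way phase relation from the InSAR literature; the numbers are the ones above):

```latex
\Delta\phi = \frac{4\pi}{\lambda}\,\Delta R
           = \frac{4\pi}{0.2\ \mathrm{m}} \times 10\ \mathrm{m}
           = 200\pi
\quad\Rightarrow\quad
\Delta\phi \bmod 2\pi = 0
```

Since 10 m happens to be exactly 100 half-wavelengths, the wrapped phase is 0, and the interferogram alone cannot distinguish this from a path difference of 0 m, 10 cm, 20 cm, or any other integer multiple of λ/2.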

I’ve worked hard to write down these questions, but in fact there is a step that corrects the orbits when applying InSAR with SNAP, and I suspect such errors might be removed in that step.


In summary,

  1. Why can’t we calculate the absolute distance from the SAR satellite to the surface with just one SAR image? Don’t we know the position of the satellite and the time it took for the microwave signal to travel out and back? Isn’t it just a matter of multiplying that time by the speed of light?
  2. When analyzing ground subsidence with InSAR, how do we distinguish whether subsidence has actually occurred or whether it’s simply a result of an increased distance between the two satellites?
  3. In the interferogram, how do we know how many cycles the slave’s wave has completed compared to the master’s, based only on the phase difference that is shown? The method of counting the number of fringes and multiplying it by the wavelength to calculate the total displacement assumes that the slave and the master have completed the same number of cycles, but how do we know that this is the case?

Sorry for my English and long questions.

4 Likes

You have very good questions; I wish I had some more time right now to give you some answers. Meanwhile, please consult some literature, for example part A of:

1 Like

SAR imaging is quite complex: the radar sends pulses over the length of the synthetic aperture (kilometers), and in the received echoes the returns from all the scatterers within the antenna footprint (also kilometers) are mixed together, in modulo-2π phase. So even though it is known when and where the pulses were sent, and when and where the mixed echoes were received, it’s not possible to derive absolute (or even relative) distances to pixels from single images.
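
A small numerical sketch of the modulo-2π point (plain NumPy, purely illustrative; the 700 km slant range is an assumed value):

```python
import numpy as np

WAVELENGTH = 0.056  # meters, roughly Sentinel-1 C-band

def wrapped_phase(slant_range_m: float) -> float:
    """Two-way echo phase for a given slant range, wrapped to (-pi, pi]."""
    phase = 4 * np.pi * slant_range_m / WAVELENGTH
    return float(np.angle(np.exp(1j * phase)))  # wrap via complex exponential

# Ranges differing by exact multiples of half a wavelength
# yield identical wrapped phases:
base = 700_000.0  # assumed 700 km slant range
for r in (base, base + WAVELENGTH / 2, base + 100 * WAVELENGTH / 2):
    print(f"range = {r:,.4f} m -> wrapped phase = {wrapped_phase(r):+.6f} rad")
```

All three ranges print the same wrapped phase, which is why the phase of a single image cannot be inverted into an absolute distance.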

1 Like

The effects of imaging geometry are removed with the help of orbit information and a DEM. If there are fringes present after that, they are due to deformation (and atmosphere/ionosphere).
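
In textbook notation (added here for reference; this decomposition is standard in the InSAR literature), the interferometric phase is commonly written as:

```latex
\Delta\phi = \Delta\phi_{\mathrm{flat}} + \Delta\phi_{\mathrm{topo}}
           + \Delta\phi_{\mathrm{defo}} + \Delta\phi_{\mathrm{atmo}} + \Delta\phi_{\mathrm{noise}}
```

The orbit information removes the flat-earth term, the DEM removes the topographic term, and what is left is deformation plus atmospheric/ionospheric effects and noise.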

1 Like

Thank you for your response. @mengdahl
I understand the first answer clearly.
Based on my understanding, the second answer can be summarized as follows:

When creating an interferogram using two images, the orbital information is taken into account. Therefore, if fringes appear in the resulting interferogram, it indicates changes in the surface rather than differences in satellite distance.

Is that correct?

@mengdahl
Regarding the first answer, if I want to understand in more detail why it can’t be done with a single SAR image, what kind of material should I refer to? (e.g., signal processing?) Of course, this is a bit different from SAR image analysis, but I’m personally very curious and want to know.

Here are some useful references to read and familiarize yourself with:

  1. Imaging Geodesy—Toward Centimeter-Level Ranging Accuracy With TerraSAR-X (IEEE Xplore)
  2. https://authors.library.caltech.edu/115107/1/Range_Geolocation_Accuracy_of_C_L-band_SAR_and_its_Implications_for_Operational_Stack_Coregistration.pdf

There are a number of unknown systematic terms that can only be determined to the order of a few centimeters, which is why relative measurements are used: many of these systematic terms drop out when differencing.
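
Schematically (my own notation, just to illustrate the differencing argument):

```latex
R_1 = R_1^{\mathrm{true}} + \epsilon_{\mathrm{sys}} + \epsilon_1, \qquad
R_2 = R_2^{\mathrm{true}} + \epsilon_{\mathrm{sys}} + \epsilon_2
\;\Rightarrow\;
R_1 - R_2 = \left(R_1^{\mathrm{true}} - R_2^{\mathrm{true}}\right) + \left(\epsilon_1 - \epsilon_2\right)
```

The common systematic term cancels in the difference, leaving only the (much smaller) residual errors.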

2 Likes

Thank you. I will try to read those.

Ok. First things first… never, NEVER, apologize for writing in English - unless it’s your mother tongue :grinning:. The scientific community couldn’t function if we didn’t have a common language in which to communicate.

The answer to your first question is simple. The position of the satellite is very well known. I think DLR knows the position of TerraSAR-X to within a few centimeters. However, the distance from the satellite to the “ground” changes constantly as the satellite moves along its orbital path. By ground, I mean a particular pixel. An image is not captured instantaneously (not like a photograph). The satellite moves in space, and this movement is what allows the radar image to be captured (a huge simplification, but it’ll have to do).

The information contained in the first band of an SLC image is the intensity of the radar signal reflected from the ground to the satellite. There is no distance information there. The distance information is derived from the phase band. If you didn’t have the phase information, you couldn’t estimate the distance.
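
Concretely, the complex SLC samples are usually stored as real (i) and imaginary (q) parts, from which intensity and phase are derived (a generic NumPy sketch; actual band names and file access differ per product and toolbox):

```python
import numpy as np

# Hypothetical i/q samples as they might be read from an SLC product
# (in SNAP these correspond to the i and q bands of the product).
i = np.array([[0.7, -1.2], [0.1, 2.3]])
q = np.array([[0.3, 0.4], [-1.1, -0.5]])

slc = i + 1j * q                 # complex SLC image
intensity = np.abs(slc) ** 2     # what the "Intensity" band shows
phase = np.angle(slc)            # wrapped phase in (-pi, pi]

print(intensity)
print(phase)
```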

Question 2 (the answer is extremely generalized): this is why two (usually a lot more) images are needed to detect subsidence (or rise, for that matter). The position of the satellite in space is very well known, so the geometry used to calculate the distances is valid. After correcting for the geometrical differences, the phase difference allows one to determine any change in elevation with respect to a control point. One fringe of subsidence/rise captured this way typically corresponds to one-half of a wavelength (~2.8 cm for S1). The solution gets more complicated if the subsidence is huge (like after a volcano explodes).
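
As a quick sanity check on the half-wavelength figure (simple arithmetic with the Sentinel-1 C-band wavelength of about 5.55 cm; the 10 cm subsidence is an assumed example):

```python
WAVELENGTH_M = 0.0555            # Sentinel-1 C-band wavelength (~5.55 cm)

fringe_m = WAVELENGTH_M / 2      # line-of-sight motion per fringe (two-way path)
print(f"one fringe = {fringe_m * 100:.2f} cm of line-of-sight motion")

subsidence_m = 0.10              # assumed 10 cm of subsidence
print(f"{subsidence_m / fringe_m:.1f} fringes for {subsidence_m * 100:.0f} cm of motion")
```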

Question 3: the simplest answer is that we don’t know. Assumptions are made. If a volcano explodes and one does a before-and-after comparison, the answer might be crazy if you don’t have any on-ground or reference data to help with the calculations. With “normal” subsidence, due to water withdrawal or the like, the movements are slow and one can assume that the ground hasn’t subsided more than 3 cm during a six-day period (in urban areas, more than that would cause obvious cracks in housing, so that would also be valid reference data).

It’s great that you’re interested in radar technology. I agree with the others and think that some introductory courses in SAR would be of great help for you. Good luck!

1 Like

Thank you for providing a detailed explanation.
It was very helpful.

Almost correct. Fringes can also be created by changes in the atmosphere/ionosphere, which affect the apparent distance between the satellite and the ground (due to refraction, i.e., changes in propagation delay). That is not a surface change.

1 Like

What about real aperture radar? A longer antenna is needed to improve the azimuth resolution, to the point that an antenna kilometers in length would be required to achieve an acceptable resolution. How is the physical antenna length related to the width of the beam? An illustration that elaborates those equations would be much appreciated.
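
For reference, the standard small-angle relations from antenna theory (added here as illustration, with assumed example numbers):

```latex
\theta \approx \frac{\lambda}{L}, \qquad
\delta_{\mathrm{az}} \approx R\,\theta = \frac{R\,\lambda}{L}
```

Here L is the physical antenna length, λ the wavelength, R the slant range, and δaz the azimuth resolution. With λ ≈ 5.6 cm (C-band) and R ≈ 700 km, an azimuth resolution of 5 m would require L ≈ Rλ/δaz ≈ 7.8 km, which is why a real aperture is impractical and a synthetic aperture is used instead.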

[figure: Young’s double-slit interference pattern]

Is the beamwidth of the main lobe inversely proportional to the slit spacing, as in the figure above illustrating Young’s double-slit experiment, so that a narrower beam requires a longer antenna?

1 Like

Yes, we can measure the total radar path length between the satellite and the ground, if the radar system has accurate timing. The radar propagation path length will have some variations because the path is not a vacuum for the whole distance, so the propagation won’t be at the speed of light in a vacuum along the whole path. In particular, the ionosphere can cause major changes in the propagation speed, and path-length changes of several meters for L-band data. The delay is less for C-band. The variations in air pressure and water vapor in the troposphere also cause changes in the radar propagation that are the same for all radar wavelengths.
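
A toy calculation of that point (illustrative numbers only; the round-trip time and delay are assumed values):

```python
C = 299_792_458.0                 # speed of light in vacuum, m/s

round_trip_s = 5.5e-3             # assumed round-trip echo time
apparent_range_m = C * round_trip_s / 2
print(f"apparent slant range: {apparent_range_m / 1000:.1f} km")

# An extra one-way propagation delay of 20 ns (plausible order of
# magnitude for the ionosphere at L-band) maps directly into a
# range error of several meters:
delay_s = 20e-9
print(f"range error from that delay: {C * delay_s:.1f} m")
```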

2 Likes