Choosing data for long-term subsidence monitoring

After trawling through lots of topics I haven't really found any concrete answers, so I thought I'd make a new thread.

If you wanted to monitor subsidence across a longer time frame (approximately 3 years), how would you go about choosing data (in terms of dates, not type) and adapting your methodology for it?

Background:
I've been using S1 SLC IW data and following the S1TBX TOPS interferometry tutorial (with the added step of multilooking) to produce stacks of single image pairs to try to measure displacement. When doing this across a 4-month period with images taken 12 days apart, most of the interferograms form correctly and there are no errors, but when I stretch the pairs out to 24 or 36 days apart I begin to get decorrelation errors in the interferogram formation. So is there a better way to do long-term displacement monitoring without using every single data point taken 12 days apart?

Whenever you increase the temporal baseline, phase decorrelation will increase. So you can either stick to 6- or 12-day intervals or lose coherence (depending on your study site).

To reduce this effect, time-series approaches can be used which combine the interferograms of consecutive image pairs (AB, BC, CD, …) and of other pairs with a suitable baseline configuration.
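Just to make the daisy-chain idea concrete, here is a minimal numpy sketch of the principle only (not of what PyRate actually does internally; the map shapes, units and the assumption of a common reference point are mine): once each consecutive pair is unwrapped and converted to LOS displacement, the cumulative motion at each date is just a running sum of the pairwise maps.

```python
import numpy as np

def chain_displacement(pair_maps):
    """Accumulate unwrapped LOS displacement maps of consecutive pairs
    (AB, BC, CD, ...) into cumulative displacement per acquisition date,
    relative to the first date. All maps must share the same reference point."""
    cumulative = [np.zeros_like(pair_maps[0])]
    for pair in pair_maps:
        cumulative.append(cumulative[-1] + pair)
    return cumulative  # cumulative[-1] = total displacement, first to last date

# Toy usage: three synthetic 100 x 100 pair maps of -2 mm subsidence each,
# standing in for exported AB, BC and CD displacement bands.
pairs = [np.full((100, 100), -2.0) for _ in range(3)]
total = chain_displacement(pairs)[-1]   # about -6 mm everywhere
```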

You might have a look at PyRate, which does exactly what you need: https://github.com/GeoscienceAustralia/PyRate

More advanced techniques are based on persistent scatterer interferometry; see the "StaMPS - Detailled instructions" thread.

"Whenever you increase the temporal baseline, phase decorrelation will increase. So you can either stick to 6- or 12-day intervals or lose coherence (depending on your study site)." - Makes sense; we only have 12-day intervals, but that seems to work in the short term.

If I avoid phase decorrelation by keeping 12-day intervals and use approximately 30 images per year for 3 years, would you expect the time-series approach to give accurate results, or is it not worth trying and better to move straight to the more advanced techniques you linked above?

Thanks for linking PyRate, I'll be sure to look into it.

This can work, yes, but only if the other requirements are also fulfilled: most importantly, that the interferograms look good, that you have a common reference point, and that there is a reasonable chance that unwrapping works. If the high-coherence areas are only small and scattered, unwrapping will introduce random results.
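As a rough check of that last point before investing time in unwrapping, you can look at what fraction of the scene exceeds a coherence threshold. A small numpy sketch (the 0.3 threshold and the synthetic coherence band are placeholders; in practice you would read the coherence band exported from SNAP):

```python
import numpy as np

def coherent_fraction(coherence, threshold=0.3):
    """Fraction of valid pixels whose coherence exceeds the chosen threshold."""
    valid = np.isfinite(coherence)
    return np.count_nonzero(coherence[valid] > threshold) / np.count_nonzero(valid)

# Placeholder: a random coherence band; replace with the real band read from
# the SNAP output (e.g. via rasterio/gdal after exporting to GeoTIFF).
coh = np.random.rand(500, 500)
if coherent_fraction(coh) < 0.2:
    print("High-coherence area is small or scattered - unwrapping may give random results.")
```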

I've somewhat confused myself with the data processing, so apologies if this doesn't make sense.

My data set sits over Sydney, Australia, so as far as I know DInSAR methodology should work since it's a built-up area (interferograms are coming out well with good coherence), but the success rate of the final products showing displacement is only around 10-15% (3-4 successful interferograms per year) after products were removed for various reasons (large ramps across the area, large patches, etc.).

Is there a way to mathematically fill in the gaps between the successful interferograms? How would you assess the long-term total displacement when you don't have a complete set of sequential interferograms?
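The only crude approach I can think of is to assume roughly linear motion and fit a rate through the cumulative displacement at the dates that did survive, something like the sketch below (the dates and values are made up, and the linearity assumption may well not hold for my site):

```python
import numpy as np

# Made-up example: cumulative LOS displacement (mm) at the acquisition dates
# whose interferograms survived QC, given in days since the first image.
# The gaps between these dates are the missing epochs.
days = np.array([0, 48, 132, 288, 420, 612, 900], dtype=float)
disp_mm = np.array([0.0, -1.1, -3.0, -6.8, -9.5, -14.2, -21.0])

# Least-squares linear rate (mm/day) through the surviving points.
rate, offset = np.polyfit(days, disp_mm, 1)

# Interpolate missing dates or extrapolate a 3-year total, under the strong
# assumption that the motion is linear in time.
print(f"rate = {rate * 365.25:.1f} mm/yr, 3-year estimate = {offset + rate * 3 * 365.25:.1f} mm")
```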

Over an urban area with many persistent scatterers that stay coherent over years, the preferred method is Persistent Scatterer Interferometry (PSI), which is what StaMPS can achieve using SNAP-processed images. Unfortunately, StaMPS requires a MATLAB licence.