Snap2stamps vs. manual processing + StaMPS

Following a downgrade from Ubuntu 20.04 to 18.04 LTS, upgrading the RAM to 32 GB, and switching to SNAP v6, snap2stamps has been fully executed.

After running mt_prep_snap, however, I received an initial message after the opener, “matlab: permission denied,” and then the process continued. It’s still underway, but amplitude estimates have now been produced for all 114 images, as have their interferograms. Does that message indicate the results will be erroneous?

No actual error messages have appeared yet (3/9 patches processed). I went ahead and changed the permissions of the MATLAB installation folder’s contents to read/write; hopefully that doesn’t break anything!

Good luck then! It’s good to plot the intermediate results every now and then to check whether the data look alright.

This only means that MATLAB is not in your environment variables, so mt_prep_snap cannot find it and the starting file will not be created.
However, this file can be created within MATLAB when running StaMPS itself.
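If I remember correctly, something like this inside MATLAB creates the starting file (a sketch, assuming ps_parms_default from the standard StaMPS distribution is on your MATLAB path and you run it from the INSAR master-date directory):

```matlab
% Run from the INSAR_<master_date> directory produced by mt_prep_snap.
% ps_parms_default writes parms.mat with the default StaMPS parameters
% (the "starting file" that mt_prep_snap could not create because the
% matlab executable was not found).
ps_parms_default

% List the current parameter values to confirm the file exists:
getparm

% Then start the normal processing chain:
stamps(1, 1)   % step 1 only: load the PS candidates
```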

Ok, and could any of the resulting files have been affected?

Also, on a slightly unrelated matter, does applying TRAIN to snap2stamps PSI output make much difference, given that only the master APS is estimated (dataset of 114 images, including the master)?

No.

TRAIN cannot be applied to snap2stamps.
TRAIN is applied together with StaMPS processing, not to snap2stamps. Please do not mix terms.

TRAIN does not estimate the APS only for the master image. I believe you are confused.
Please read the TRAIN manual and tutorial; you will see that TRAIN estimates an APS for each of the images, and the different options for doing so.

Ok, yes, I seem to have misread that post.

There seems to be an issue with this link, however:

http://gmt.soest.hawaii.edu/gmt/gmt_download.html

Is there any other way to access that software?

Can you try to get GMT5SAR?
I think they moved the website some time ago.
I found this repo using Google, but I’m not sure it’s the official one; you’d better check it:

After I run the python coreg_ifg_topsar.py project.conf script, my interferograms are very noisy and have low coherence. What ideas do you have for improving their coherence? In addition, my study area does not have Sentinel-1B images.

This is okay (to a certain degree), because PS InSAR will throw away most of the pixels and only continue with those with stable phase information. Please have a look at these slides, which compare a traditional interferogram with PS InSAR phase analysis: Persistent Scatterer InSAR and StaMPS

Coherence cannot really be increased technically; it all depends on the amount of vegetation and other surfaces which cause decorrelation.

I’d say so, yes. Most of the interferograms show the same pattern, which indicates that the remaining PS are largely free of noise. The interferograms which strongly differ from the rest are probably affected by atmospheric phase contributions, but as long as these are not imposing wrong patterns on the overall result, it should be fine. You can also identify outliers quite well in the time series plots.

Good job!

Thanks for your good and clear explanation, but how can I identify outliers in the time series plots? I did install the TRAIN package; how can I use it to reduce noise effects?

I have never used TRAIN, sorry.

You can plot the time series of selected points with the ts option. If one date systematically differs from the temporal trend, it’s probably because of atmospheric disturbance.
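Something like this (if I remember the ps_plot syntax correctly; ‘v-d’ should be the mean LOS velocity with the DEM error contribution removed):

```matlab
% Mean LOS velocity with DEM error removed; the extra 'ts' argument
% lets you click a point in the figure and opens its displacement
% time series, where single outlying dates are easy to spot.
ps_plot('v-d', 'ts')
```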

I’ll look into that, thank you!

It seems my patches (9) are incomplete after running mt_prep_snap; they only contain 10 items each and no psver.mat file (as described here: SNAP - StaMPS Workflow Documentation).

No error messages appear (other than the initial ‘matlab: command not found’, which, if I’ve not misunderstood the above, is harmless).

I’m unsure whether this could be an issue similar to the one discussed here (How to prepare Sentinel-1 images stack for PSI/SBAS in SNAP), because my pscands.1.ij files do not turn out empty.

I’m not sure whether this would help either… https://groups.google.com/g/mainsar/c/iuje3LSAOfg/m/IISJQ0z6KAMJ

What do you think?

Hard to tell. I would proceed as long as you don’t get an error message.

I did not. However, after running stamps in MATLAB, I receive this message during step 3:
"Warning: Not enough random phase pixels to set gamma threshold - using default threshold of 0.3
In ps_select (line 184)
In stamps (line 356) "

And then at the end:

"Index exceeds the number of array elements (0).

Error in llh2local (line 41)
dlambda=llh(1,z)-origin(1);

Error in ps_merge_patches (line 485)
xy=llh2local(lonlat',ll0)*1000;

Error in stamps (line 473)
ps_merge_patches"

I set the ‘density_rand’ parameter to 50, and then 90, but still got the same errors.

Apparently none of the PS candidates are being selected, although step 2 consistently indicates multiple tens of thousands of candidates.

I’ve tried using setparm(‘percent_rand’, 60) instead of the density parameter,
and setparm(‘filter_grid_size’, 10), as suggested here (Linux Installation using StaMPS and S-1 data), but the error persists.
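In case it helps, this is roughly what I have been running to check and change the selection parameters (if I have the parameter names right — as I understand the StaMPS manual, ‘select_method’ decides whether density_rand or percent_rand is actually used):

```matlab
% Which threshold is in effect depends on select_method:
getparm('select_method')    % 'DENSITY' uses density_rand,
                            % 'PERCENT' uses percent_rand
getparm('density_rand')
getparm('percent_rand')

% Change a parameter and re-run from PS selection onwards:
setparm('density_rand', 20)
stamps(3, 5)                % steps 3 (select) to 5 (merge patches)
```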

My study area (2 bursts) does contain lake areas; however, it is surrounded by densely urbanized regions on all sides except the north:

LONMIN=-97.1843
LATMIN=33.0559
LONMAX=-96.2829
LATMAX=33.3571

I’ve also noticed that in some patch folders the pscands.1.ij.int files are much smaller (a few hundred kB) and have what appears to be an erroneous icon (black rectangle), whereas the rest are ~2 MB in size and have a white sheet icon.

Assuming the interferograms are correct, what else could this be related to?

That is not an error. It just means that the preselection of PS candidates worked quite well, so there are not many pixels with random phase.

However, this one is an error, though I am not sure what it means. How large are your patches and what overlap did you select?

They vary in size, but most are just under 1 GB, a few are < 150 MB, and they all contain 29 items. I used the default 50/200 (range/azimuth) overlap settings.

If one patch is smaller than 200 pixels, it might be a problem, but I am not sure if that is the case here. Maybe run mt_prep_snap again with only 6 patches (3 2).

Same results with 6 patches (tried the same parameter settings as above again).

Would continuing to reduce the number of patches do anything at this point?

I’ve also been considering re-processing using 3 or 4 bursts, although I had hoped to process this dataset all the way through to obtain preliminary results. Could that potentially resolve this?