Snap2stamps vs. manual processing + StaMPS

Given the specifications of the computer (link), would it be possible to use the snap2stamps procedure to process 100+ S1 SLC IW images for six study sites, each covered by four bursts? Following the Graph Builder-based procedure of Foumelis et al. (2018) (link), I was unable to run part B as instructed, but I was able to obtain apparently correct outputs by running each tool individually on a subset of the 100+ image dataset.

Knowing now that processing in SNAP cannot be carried out even in a semi-automated workflow, should I expect that snap2stamps will not work (on the machine linked above) for a dataset of this size? Even if I process in SNAP manually, will the StaMPS procedure even work?

The working directory will be located on a WD easystore 8 TB external hard drive.

8 GB of RAM is probably critical for four bursts, but worth a try. I see no problem with the number of input products, because snap2stamps starts a new process for each input and is therefore quite memory efficient.

Of course, it depends on the writing speed of your external drive as well.
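
To illustrate the per-product pattern (the paths, graph name, and graph parameters here are placeholders, not the actual snap2stamps internals), the idea is essentially:

```python
import glob
import subprocess

# Hypothetical paths -- adjust to your own layout; the graph XML is assumed
# to expose $master and $slave as parameters.
GPT = "/usr/local/snap/bin/gpt"
GRAPH = "graphs/coreg_ifg.xml"
MASTER = "master/20161021_IW1.dim"

for slave in sorted(glob.glob("slaves/*.dim")):
    # One gpt process per pair: when it exits, its RAM is released
    # before the next pair starts, which keeps peak memory low.
    subprocess.run(
        [GPT, GRAPH, f"-Pmaster={MASTER}", f"-Pslave={slave}"],
        check=True,
    )
```

Because each pair runs in its own short-lived gpt process, peak RAM stays close to what a single pair needs, regardless of how many inputs you have.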


Well, I’ve arrived at the point of running the Stamps Export script, but either I’m seeing the same result described here (Processing time for stamps export), perhaps due to the external hard drive (~150 MB/s write speed) and computer I’m using (specs linked in the OP), or something else is wrong.

Processing begins without any apparent issue after running the script, and the resultant folders are populated for the first export (18 items, 3 GB, in the INSAR_master folder; see below), but nothing else happens. The same thing occurs if I try saving the outputs to my computer (rather than to the external drive). I’m going to leave the script running overnight, but so far it has been 2+ hours without further progress.

[four screenshots from 2020-12-16 attached]
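
To distinguish a genuine stall from very slow writing, something like this little watcher could log whether the export folder is still growing (the path is just an example):

```python
import time
from pathlib import Path

EXPORT = Path("/media/easystore/INSAR_master")  # hypothetical export path

def folder_bytes(root: Path) -> int:
    # Total size of all files below the export folder.
    return sum(f.stat().st_size for f in root.rglob("*") if f.is_file())

last = folder_bytes(EXPORT)
while True:
    time.sleep(300)  # check every five minutes
    now = folder_bytes(EXPORT)
    print(f"{time.ctime()}: {now - last:+,} bytes written in the last interval")
    last = now
```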

I’ve set -Xmx8G in gpt.vmoptions, and the project.conf computing resource parameters are as follows:

CPU=19
CACHE=7G
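
As far as I understand the snap2stamps scripts, those two values are handed to gpt as its parallelism (-q) and tile-cache (-c) options; a rough sketch of the idea (the config parsing is simplified and the graph path is hypothetical):

```python
import subprocess

# Naive parse of the computing-resource lines in project.conf.
conf = {}
with open("project.conf") as fh:
    for line in fh:
        if "=" in line and not line.startswith("#"):
            key, _, value = line.partition("=")
            conf[key.strip()] = value.strip()

# gpt's -q flag sets the parallelism and -c the tile-cache size,
# so CPU=19 and CACHE=7G end up as "-q 19 -c 7G".
subprocess.run(
    ["gpt", "graphs/export.xml", "-q", conf["CPU"], "-c", conf["CACHE"]],
    check=True,
)
```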

It strikes me as further odd because, as you might recall, I had already gotten to this point previously in Windows using only the SNAP Graph Builder (following the workflow in Foumelis et al. (2018) linked above, with -Xmx6G in addition), and then the Stamps Export seemed to process faster (usually taking ~45 minutes).

Could I at this point just execute the Stamps Export in Windows again (using the SNAP Graph Builder)?

(Apologies for the question barrage)

Update: I’ve just noticed that my interferograms come out erroneous (as in Problem with Interferogram Creation). Something appears to be going wrong with coregistration; when I try to back-geocode + ESD (or even simply back-geocode alone) using the Graph Builder, the process goes on indefinitely. I’m not sure what it could be, though, as all split images appear to be OK. Moreover, back-geocoding in the Graph Builder works on Windows (taking ~5 minutes), although with a rather low write speed (8 MB/s).

Does any of this indicate a problem with the Linux installation? Or does it seem more likely to be related to hardware performance?

There are currently problems reported with the SRTM download, so this might affect the Back-Geocoding.
Is it an option to use SRTM 1Sec instead? You could also manually process the pairs in the SNAP GUI to get past this step until the error is solved.

SRTM 1Sec does work; what exactly is the difference between it and SRTM 3Sec?

I’ve also now had my computer’s RAM upgraded to 32 GB. In Windows, using the SNAP GUI with -Xmx22G, the TOPSAR Coregistration with ESD works without issue (~100 seconds); part B of Foumelis et al. (2018), however, still does not appear to work (it moved from 1% to 2% in about an hour), although I’ll try adjusting the -Xmx parameter. These runs used SRTM 1Sec too.

I’m still a bit confused about the performance difference (if there is any) between running snap2stamps in Ubuntu and using all the same tools in SNAP’s GUI in Windows… Is the main difference the amount of manual input involved?

SRTM 1Sec has a spatial resolution of 30 m; SRTM 3Sec is the same data but resampled to 90 m.
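
The 30 m / 90 m figures follow from the angular posting: at the equator, one arcsecond spans roughly 31 m of ground. A quick check:

```python
import math

EARTH_RADIUS_M = 6_378_137  # WGS84 equatorial radius

def arcsec_to_meters(arcsec: float) -> float:
    # Arc length = radius * angle in radians; 1 degree = 3600 arcseconds.
    return EARTH_RADIUS_M * math.radians(arcsec / 3600)

print(arcsec_to_meters(1))  # ~30.9 m -> SRTM 1Sec posting
print(arcsec_to_meters(3))  # ~92.7 m -> SRTM 3Sec posting
```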

Performance differences are indeed dependent on the configuration of the machine. Things like cache management, temporary files, release of RAM, reading/writing speed of the hard disk… all of these can play a role.

Is your data located on an external drive maybe?

It is: a WD easystore 8 TB. I’ve already run checks on it and it’s working properly. The write speed is supposed to be ~130-200 MB/s.

What should I check for regarding temporary files?

Please try to run the process with the data on your local drives. Much of the reading/writing speed is limited by the speed of the USB port.

Is it possible to run the scripts with a portion of the images (storing them on local drives, then transferring to the external drive), or do they need to be run on the entirety of the dataset to produce correct results?

snap2stamps can process the data in parts, as long as the reference image is always available and stays the same.
The export to StaMPS (the last script) should probably be executed on the entire folder of coregistered images and interferograms.

StaMPS also needs all data at once then, but once the export has been conducted, you can remove all preparatory files from the drive.
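
A sketch of that batched workflow (the folder layout and staging logic here are assumptions; coreg_ifg_topsar.py is the snap2stamps coregistration/interferogram script):

```python
import shutil
import subprocess
from pathlib import Path

POOL = Path("local/slave_pool")   # hypothetical staging area for all slaves
SLAVES = Path("local/slaves")     # folder the coreg/ifg script works through
EXTERNAL = Path("/media/easystore/processed")
CHUNK = 10                        # how many slaves fit on the local drive

batch_source = sorted(POOL.iterdir())
for i in range(0, len(batch_source), CHUNK):
    # Stage one batch; the master configured in project.conf stays the same.
    for s in batch_source[i : i + CHUNK]:
        shutil.move(str(s), SLAVES / s.name)
    subprocess.run(["python", "coreg_ifg_topsar.py", "project.conf"], check=True)
    # Move finished products to the external drive to free local space.
    for product in Path("local/coreg").iterdir():
        shutil.move(str(product), EXTERNAL / product.name)
    # Clear the staged slaves so the next batch starts fresh.
    for s in SLAVES.iterdir():
        if s.is_dir():
            shutil.rmtree(s)
        else:
            s.unlink()
```

The key point is that the master never changes between batches.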

Alright, well, I finally managed to execute all scripts up to the export to Stamps (coregistered and interferogram image outputs all seem correct), storing everything on the external drive; no unusual processing times there. However, the export to Stamps still appears to be taking quite long, both with the script and with the SNAP Graph Builder. I set the config file to allow full (32 GB) RAM use and 25% CPU usage, and tried exporting only one pair at a time; the script still ran for several hours without any change, and similarly the Graph Builder export remained at 0% for hours (with -Xmx set to 32G in the gpt.vmoptions file).
I’ve now changed the cache folder to one on the external drive; however, I noticed that previously the path was “/home/user/.snap/var/cache”, and I’ve been unable to locate that folder. Does it matter that there was a “.” in the path name? There is only a “snap” folder, but it does not contain the subfolders that appeared in the SNAP GUI when I navigated to the new cache location (on the external drive).
Trying again to export one image pair (located on the local drive now), with full RAM use and snap.properties adjusted according to Snap2stamps package: a free tool to automate the SNAP-StaMPS Workflow, this time using the SNAP Graph Builder, it has now been over an hour and it remains at 1%. I realize the export can sometimes take a long time, but could anything else be the issue at this point? Does there even seem to be an issue? It strikes me as such because Stamps Export took only about 45 minutes when I went through the process manually (as per Foumelis et al. (2018)), and that was before the RAM upgrade (8 GB at that time).

Also, are the coregistered and interferogram image files supposed to be named identically (i.e. the outputs for both are named 20181104_20161021_IW1.data and 20181104_20161021_IW1.dim)? That’s how they were output by the scripts, but I wondered if maybe that could cause an error while running the Stamps Export.

Thanks (and I hope your new year is off to a decent start)!

This is the standard location, and the dot just indicates that this is a hidden folder, which was automatically placed there during the installation. To find it, enable the display of hidden folders in your user directory (C:\Users\yourname\.snap).
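
On Linux the equivalent folder sits under your home directory; a quick way to confirm it exists (ls -a ~/.snap in a terminal works too):

```python
from pathlib import Path

# Dot-prefixed folders are ordinary directories, just hidden by default;
# this prints the default SNAP cache location.
cache = Path.home() / ".snap" / "var" / "cache"
print(cache, "exists:", cache.exists())
```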

Each BEAM DIMAP product consists of two parts: one .dim file and one .data folder of the same name. So there is no problem with this.
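
If you want to verify the pairing programmatically, a small check like this (folder names taken from your description) flags any .dim without its .data companion:

```python
from pathlib import Path

# Check that every .dim header has its same-named .data companion folder.
for folder in (Path("coreg"), Path("ifg")):
    for dim in folder.glob("*.dim"):
        data = dim.with_suffix(".data")
        if not data.is_dir():
            print(f"missing companion for {dim}")
```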

Ah, OK: I meant that the outputs for the coregistered image and the interferogram (that is, four outputs total in this case) all have the same name; i.e. the outputs in the coreg folder are named 20181104_20161021_IW1.data and 20181104_20161021_IW1.dim, and the outputs in the ifg folder are also named 20181104_20161021_IW1.data and 20181104_20161021_IW1.dim. Is that still OK?

Sorry to reiterate- just trying to make sure I explained the question clearly!

As for the long processing time for Stamps Export: apart from the Cache/CPU parameters (in the snap2stamps config file), the -Xmx parameter (in the gpt.vmoptions file for the SNAP GUI), the /temp folder files, and using only local drives, what else could help resolve the issue? Perhaps notably, while the Stamps Export script is running, CPU and memory usage remain very low (less than 10% and ~3-4 GB respectively), despite the parameter specifications. Could that be an indication?

(also, is it ok to just delete the files in the /temp folder?)
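
One way to confirm that low utilisation is to watch the gpt/java process directly, for example with psutil (matching by process name is an assumption; SNAP’s gpt runs inside a JVM):

```python
import time
import psutil  # pip install psutil

while True:
    for p in psutil.process_iter(["name", "cpu_percent", "memory_info"]):
        # SNAP's gpt runs inside a JVM, so match java/gpt process names.
        if p.info["name"] in ("java", "gpt"):
            rss_gb = p.info["memory_info"].rss / 1e9
            print(f"{p.info['name']}: cpu={p.info['cpu_percent']}% "
                  f"rss={rss_gb:.1f} GB")
    time.sleep(60)
```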

Yes, the products get the same names, but they are stored in different folders (coreg, ifgs).

I can’t think of a reason why the export script runs so long. I recently tested it and experienced a similar problem: the process started but never got past the first product pair. I don’t know if it is the same issue as yours.

Would you happen to know what the following issue (“WARNING: org.esa …” in the attached image) could be related to? It has been occurring when I run the ifg/coreg script. I left all settings as previously set, only now I’m using one burst (rather than four) and a different AOI, which I specified in the conf file.

The only similar threads I’ve found seemed to suggest the AOI is the issue, but I have no clue whether that could be the case here…

I think you can ignore the warning, but still visually check the output of the coregistration.

I’m unable to create an RGB window for the coreg image, nor can I open the image view for either the ifg or coreg images. The “SNAP - Error” box reads, “java.lang.RuntimeException: Waiting thread received a null tile.”

And when you open both intensities separately?

Trying to open the coreg master intensity results in this error box: “A java.util.concurrent.ExecutionException exception has occurred”; for the slave intensity, “java.lang.NullPointerException” (this latter error also results from trying to open the ifg intensity).

That means the coregistration was not successful. Which DEM is used during the coregistration?