Snap2stamps vs. manual processing + StaMPS

SRTM 1Sec has a spatial resolution of 30 m; SRTM 3Sec is the same data resampled to 90 m.

Performance differences are indeed dependent on the configuration of the machine. Things like cache management, temporary files, release of RAM, and the read/write speed of the hard disk can all play a role.

Is your data located on an external drive maybe?

It is - a WD easystore 8 TB. I’ve already run checks on it and it’s working properly. The write speed is supposed to be ~130-200 MB/s.

What should I check for regarding temporary files?

please try to run the process with the data on your local drives. Much of the read/write speed is limited by the speed of the USB port.
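If you want to check whether the USB link is the bottleneck, a quick sequential write test with dd gives a rough number. This is only a sketch: /tmp stands in for the drive's mount point, and the MB/s figure dd prints is indicative only.

```shell
# Rough sequential write benchmark; point the output file at the drive you
# want to test (/tmp used here as a stand-in for the mount point).
# conv=fdatasync forces the data to disk before dd reports its throughput.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=64 conv=fdatasync
rm -f /tmp/ddtest.bin
```

Run it once against the local drive and once against the external drive; a large gap between the two numbers would point at the USB connection.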

Is it possible to run the scripts with a portion of the images (storing on local drives, then transferring to the external drive) or do they need to be run with the entirety of the dataset to produce correct results?

snap2stamps could be processed in parts, as long as the reference image is always available and stays the same.
The export to StaMPS (last script) should probably be executed for the entire folder of coregistered images and interferograms.

StaMPS also needs all data at once then, but once the export has been conducted, you can remove all preparatory files from the drive.

Alright, well I finally managed to execute all scripts up until the StaMPS export (the coregistered and interferogram outputs all seem correct), storing everything on the external drive - no unusual processing times there. However, the StaMPS export still appears to take very long, both using the script and the SNAP Graph Builder. I set the config file to allow full (32 GB) RAM use and 25% CPU usage, and tried exporting only one pair at a time - the script still ran for several hours without any change; similarly, the Graph Builder export remained at 0% for hours (with -Xmx set to 32G in the gpt.vmoptions file).
I’ve now changed the cache folder to one on the external drive. However, I noticed that the previous path was “/home/user/.snap/var/cache” and I’ve been unable to locate that folder; does it matter that there was a “.” in the path name? There is only a “snap” folder, and it does not contain the subfolders that appeared in the SNAP GUI when I navigated to the new cache location (on the external drive).
Again trying to export one image pair (now located on the local drive) with full RAM use and snap.properties adjusted according to “Snap2stamps package: a free tool to automate the SNAP-StaMPS Workflow”, this time using the SNAP Graph Builder, it has now been over an hour and it remains at 1%. I realize the export can sometimes take a long time, but could anything else be the issue at this point? Does there even seem to be an issue? It strikes me as such because the StaMPS export took only about 45 minutes when I went through the process manually (as per Foumelis et al. (2018)), and that was before the RAM upgrade (8 GB at that time).

Also, are the coregistered and interferogram image files supposed to be named identically (i.e. the outputs of both are named 20181104_20161021_IW1.data and 20181104_20161021_IW1.dim)? That’s how the scripts output them, but I wondered if that could cause an error while running the StaMPS export.

Thanks (and I hope your new year is off to a decent start)!

This is the standard location, and the dot just indicates that it is a hidden folder, which was automatically placed there during the installation. To find it, enable the display of hidden folders in your user directory (C:\Users\yourname\.snap).
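The same applies on Linux, where your /home/user/.snap path lives: a leading dot hides the folder from a plain `ls`, but `ls -a`/`-A` shows it. A small demo with a throwaway directory (path is illustrative only):

```shell
# Folders whose names start with "." are hidden from a plain `ls`;
# add -a (or -A) to list them. Demo with a throwaway directory:
mkdir -p /tmp/demo_home/.snap/var/cache
ls /tmp/demo_home        # prints nothing: .snap is hidden
ls -A /tmp/demo_home     # now .snap is listed
rm -rf /tmp/demo_home
```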

each BEAM-DIMAP product consists of two parts: one .dim file and one .data folder of the same name. So there is no problem with this.

Ah, ok- I meant that the outputs for the coregistered image and interferogram (that is, four outputs total in this case) all have the same name; i.e. the outputs in the coreg folder are named 20181104_20161021_IW1.data and 20181104_20161021_IW1.dim and the outputs in the ifg folder are also named 20181104_20161021_IW1.data and 20181104_20161021_IW1.dim - is that still ok?

Sorry to reiterate- just trying to make sure I explained the question clearly!

As for the long processing time of the StaMPS export: apart from the cache/CPU parameters (in the snap2stamps config file), the -Xmx parameter (in the gpt.vmoptions file for the SNAP GUI), the /temp folder files, and using only local drives, what else could help resolve the issue? Perhaps notably, while the StaMPS export script is running, CPU and memory usage remain very low (less than 10% and ~3-4 GB respectively), despite the parameter specifications. Could that be an indication?

(also, is it ok to just delete the files in the /temp folder?)

yes, the products get the same names, but are stored in different folders (coreg, ifgs)

Can’t think of a reason why the export script runs so long. I recently tested it and experienced a similar problem: the process started but never got past the first product pair. I don’t know if it is the same issue as yours.

Would you happen to know what the following issue (“WARNING: org.esa …” in the attached image) could be related to? It has been occurring when I run the ifg/coreg script. I left all settings as previously set, only now I’m using one burst (rather than four) and a different AOI, which I specified in the conf file.

The only similar threads I’ve found seemed to suggest the AOI is the issue, but I have no clue whether that could be the case here…

I think you can ignore the warning, but still visually check the output of the coregistration

Unable to create RGB window for the coreg image, nor can I open image view for either ifg or coreg images. The SNAP - Error box reads, “java.lang.RuntimeException: Waiting thread received a null tile.”

and when you open both intensities separately?

Trying to open the coreg master intensity results in this error box, “A java.util.concurrent.ExecutionException exception has occurred,” for the slave intensity, “java.lang.NullPointerException” (this latter error message also results from trying to open the ifg intensity).

that means the coregistration was not successful. Which DEM is used during the coregistration?

SRTM 3Sec - that is the default, right?

yes, the default, but it is currently unavailable: “SRTM ZIP-files are corrupted or not found”

Please try switching to SRTM 1Sec HGT (AutoDownload) until the error is fixed.


Hmm, ok. Do you know if that has been happening intermittently over the past couple of weeks, or has it been continuous?
I wonder if the StaMPS export could be affected by the same issue (I processed the previous AOI last week).

actually, the stamps export should not be affected, because the DEM data is already part of the interferograms. But if the DEM was not written correctly at the interferogram creation step, it will not be exported.

This happened a couple of days ago when the link to the SRTM 3Sec data changed. It will be fixed with the next SNAP update, but until then, SRTM 1Sec is the only option.

Ok, and by ‘will not be exported’, do you mean that neither the snap2stamps script nor the SNAP GUI will begin the export process? Or will they just run indefinitely?
It’s strange because the coreg and ifg images for the first AOI (processed using snap2stamps) seem correct (I just can’t seem to export them).

Can the snap2stamps scripts be modified to use SRTM 1Sec?
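(I’m guessing the DEM is set in the graph XML files shipped with the package - something like a demName element that could be swapped in place. A sketch of what I mean, using a dummy file since I’m not sure of the real graph file name or the exact element/value strings:)

```shell
# Sketch only: assuming the snap2stamps graph XML sets the DEM via a
# <demName> element, it could be switched with sed. A dummy file is used
# here; the real graph file name/path and DEM name string may differ.
printf '<parameters><demName>SRTM 3Sec</demName></parameters>\n' > /tmp/coreg_ifg_demo.xml
sed -i 's|<demName>SRTM 3Sec</demName>|<demName>SRTM 1Sec HGT</demName>|' /tmp/coreg_ifg_demo.xml
cat /tmp/coreg_ifg_demo.xml
rm -f /tmp/coreg_ifg_demo.xml
```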

I’ve just tried both SRTM 1Sec HGT and Grid in TOPS+ESD Coregistration, and neither seems to work (I get the same “java.lang.NullPointerException” error).