Snap2stamps package: a free tool to automate the SNAP-StaMPS Workflow

It is indeed needed for the StaMPS export, but as you say, not in every product.


Unfortunately, it is a trade-off between the RAM and the disk space needed for the processing.

Originally, you could do the StaMPS export on stacks of interferograms (one master with several slaves), for which only one band each of lat/lon and DEM was saved.

Unfortunately, that solution requires a very powerful computer, able to load the whole stack into RAM (especially difficult when working with long S-1 series), so I adopted the approach of working on single master-slave pairs, at the cost of duplicating the extra information such as the lat/lon and DEM bands.

Maybe in the future the StaMPS export operator could be modified to also accept products without lat/lon and DEM, so our solution would not have to keep them until the end.

Another solution could be to replace these extra bands with soft links to the real information, which could be saved in only one pair. It should be worth trying, as in the end 3 bands for 150 pairs… could be removed from the disk space needed. But again, the repercussions should be analysed.

For the moment the important fact is that, as it stands, SNAP + snap2stamps + StaMPS PSI works perfectly; how to optimise the resources needed for this kind of processing deserves a somewhat deeper analysis.

That is true. It is good as it is now, because RAM is surely the more important factor, I would say.


Hi mdelgado,
Could you please help with some processing errors in snap2stamps?
I have the following errors:
Java heap space (java.lang.NullPointerException)
GC overhead limit exceeded
Cannot construct DataBuffer

I use an Ubuntu workstation with 12 cores and 24 GB of memory to process a stack of 17 Sentinel-1 SLC products.
A full log from the terminal and coreg_ifg2run.xml are included.
Thanks in advance.

snap2stamps.txt (26.8 KB)
coreg_ifg2run.xml (7.0 KB)

That means not enough RAM was available to this process. Did you adjust the memory variable in project.conf?

Given your system, I would suggest:

# COMPUTING RESOURCES TO EMPLOY
CPU=12
CACHE=20G

Hi ABraun,

I have the following settings:
CPU=24
CACHE=20G

As I understood it, it is necessary to specify the logical cores (hyper-threading) of the processor instead of the physical ones, or is it not?
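If you are unsure which number your machine has, the logical count is the one the OS reports. A quick check in plain Python (just an illustration; `os.cpu_count()` returns logical CPUs, which appears to be what the CPU= setting refers to here):

```python
import os

# os.cpu_count() returns the number of logical CPUs -- with
# hyper-threading enabled, each hardware thread counts individually.
print("logical CPUs:", os.cpu_count())
```

On a 12-core machine with hyper-threading this would print 24, matching the value used below.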

You're probably right. But then I wonder why SNAP returns this error…

Very interesting, but in the SNAP GUI all steps are processed (manually) without any errors.

Hi @Stoorm,

Probably you forgot to set the maximum memory for SNAP's gpt, which is found in a different file than the SNAP GUI's.
Please check the -Xmx memory assigned in $SNAPFOLDER/bin/gpt.vmoptions (maybe I should include this in the user manual)
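For reference, the relevant line in that file might look like this (the 20G value is only an example; pick a value that fits your machine's RAM):

```text
# $SNAPFOLDER/bin/gpt.vmoptions -- one JVM option per line
-Xmx20G
```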

Let me know, as the errors you mentioned are SNAP logs saying directly, as @ABraun mentioned, that you need more memory (or that the memory specified in the gpt configuration is not enough).

I guess this may help you solve your issue, but please keep me posted.

Hi @mdelgado, thank you for the reply.
My gpt.vmoptions settings:
-Xmx15G
Is it bad that the amount of allocated memory does not match the project.conf file?

In fact there are 2 types of memory: the -Xmx value is the memory allocated to the Java Virtual Machine that SNAP uses, while the value in the project file is the cache memory.

Thanks for the comment. In the future we can also include this one in project.conf. :wink:

Please allocate more memory for the JVM than for the cache. I normally use 16G for the cache and 25G for the -Xmx (JVM) and it has worked fine, but I normally process interferograms with few bursts.
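To illustrate the rule of thumb above (JVM heap larger than cache), using the example values from this post; the file names follow the ones mentioned in this thread:

```text
# $SNAPFOLDER/bin/gpt.vmoptions  (JVM heap -- keep this the larger value)
-Xmx25G

# project.conf  (SNAP tile cache)
CACHE=16G
```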


Ok I will restart processing with the corrected parameters.
Thank you.

In my latest test, I receive a SNAP-related error message at the coregistration step.
Does anyone have an idea why this happens? I checked all slaves; they were computed correctly and match the extent of the master. The bounding box is also fine.

D:\Andy\S1_BSG/split\20170430\20170430_IW1.dim
[5] Processing slave file: 20170430_IW1.dim

SNAP STDOUT:INFO: org.esa.snap.core.gpf.operators.tooladapter.ToolAdapterIO: Initializing external tool adapters
Executing processing graph
INFO: org.hsqldb.persist.Logger: dataFileCache open start
======
Master: 08Dec2017
Slave: 08Dec2017 prep baseline: 0.0 temp baseline: 0.0
Slave: 30Apr2017 prep baseline: -72.1761 temp baseline: 222.00076

======
Master: 30Apr2017
Slave: 08Dec2017 prep baseline: 72.534164 temp baseline: -222.00076
Slave: 30Apr2017 prep baseline: 0.0 temp baseline: 0.0

IFG: isTOPSARBurstProduct = true
-- org.jblas INFO Starting temp DLL cleanup task.
-- org.jblas INFO Deleted 4 unused temp DLL libraries from D:\TEMP
0
0
Waiting thread received a null tile.
java.lang.NullPointerException
Waiting thread received a null tile.
Waiting thread received a null tile.
java.lang.NullPointerException
90% done.
org.esa.snap.core.gpf.OperatorException: 0
        at org.esa.snap.core.gpf.graph.GraphProcessor$GPFImagingListener.errorOccurred(GraphProcessor.java:363)
        at com.sun.media.jai.util.SunTileScheduler.sendExceptionToListener(SunTileScheduler.java:1646)
 ...

Hi @ABraun

Again, there is an error message from SNAP, and hence it would be very much appreciated if the error message were understandable (which operator failed, and why).
Looking at it, I am wondering:

  1. if you have checked the slave image visually
  2. if you got the same error for all the slaves
  3. could it be a DEM issue? For a while SNAP had some issues while trying to download the DEM
  4. could you please check the xml used by gpt for running? Maybe it can help us identify the issue. If it is a pure SNAP error, the developers might be needed to help.

Let me know

Dear @mdelgado, thank you for the response. I see that it is probably not related to snap2stamps, but I wonder why there is an error when the past 3 cases worked fine.

The error happens for all slaves, although they were correctly produced by the split.
I changed the DEM to SRTM 3Sec but the error persisted.
I also checked coreg_ifg_run.xml and it looks alright. When I execute it in SNAP, I get the same error. I am currently trying some settings within the graph to see where the error could come from. Obviously, the Back Geocoding doesn't produce a usable result for the Enhanced Spectral Diversity operator.

For sure I will try to solve the issue you got, do not doubt it, no matter who produced it!

Can you please check the master image? Or have you already done that?
Please check the xml file that gpt runs… coreg_ifg_computation.xml, saved in the GRAPH folder defined in project.conf

PS: have you already used it 3 times? Really nice!

Yes, but each time was a little different :slight_smile: But I'm getting the hang of it.

The master image is also correct. When I open coreg_ifg_computation.xml in SNAP and input the master and slave image, I get a more helpful error:

Error [NodeId: Enhanced-Spectral-Diversity] Registration window width should not be grater (sic!) than burst width 0.

Indeed, this is my first case with a single-burst master image. But no matter what value I set for the registration window, the error persists. Maybe I should try a master with at least two bursts…

Are you working with a single burst?

If so, remove the ESD operator in the template after backing it up.

It should work. Let me know
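Removing the ESD node from the graph template can also be scripted rather than done by hand. The sketch below is an illustration, not part of snap2stamps: it assumes a SNAP-style graph XML in which each `<node>` has an `id`, an `<operator>` name, and `<sources>` entries whose `refid` points at another node. It deletes the Enhanced-Spectral-Diversity node and rewires anything that consumed its output to the node's own source (here, Back-Geocoding).

```python
import xml.etree.ElementTree as ET

def drop_operator(graph_xml: str, op_name: str = "Enhanced-Spectral-Diversity") -> str:
    """Remove the node running op_name and rewire its consumers to its source."""
    root = ET.fromstring(graph_xml)
    # Find the node whose <operator> matches op_name
    target = None
    for node in root.findall("node"):
        if node.findtext("operator") == op_name:
            target = node
            break
    if target is None:
        return graph_xml  # nothing to remove
    # Remember this node's own input; its consumers will be re-pointed to it
    upstream = target.find("sources/sourceProduct").get("refid")
    target_id = target.get("id")
    root.remove(target)
    # Re-point every consumer of the removed node to its upstream source
    for node in root.findall("node"):
        for src in node.findall("sources/sourceProduct"):
            if src.get("refid") == target_id:
                src.set("refid", upstream)
    return ET.tostring(root, encoding="unicode")
```

Node ids and the overall graph layout vary between templates, so back up the original xml first, as suggested above.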


The template is designed for a minimum of 2 bursts; otherwise ESD must be removed from the graph.

Probably I should add a warning in the manual, or provide a template for when the master has only 1 burst. Thanks for testing the scripts so intensively! That always helps to improve them.


That was it, thank you so much!
I now have two versions of coreg_ifg_computation.xml: one for multiple bursts and one for a single-burst master. I rename them according to the processed master, but a note in the manual would probably be good. In section 3.1.3 it is stated:

This process can take less than 3 minutes of time for processing compressing a single burst

This can probably be misread to mean that processing a single burst would be possible.

Never mind, I am happy to have a solution now.
I was never aware of what the Enhanced Spectral Diversity was really for, but now I am :slight_smile:
