In my latest test, I receive a SNAP-related error message at the coregistration step.
Does anyone have an idea why this happens? I checked all slaves and they were computed correctly and match the extent of the master. The bounding box is also fine.
D:\Andy\S1_BSG/split\20170430\20170430_IW1.dim
[5] Processing slave file: 20170430_IW1.dim
SNAP STDOUT:INFO: org.esa.snap.core.gpf.operators.tooladapter.ToolAdapterIO: Initializing external tool adapters
Executing processing graph
INFO: org.hsqldb.persist.Logger: dataFileCache open start
======
Master: 08Dec2017
Slave: 08Dec2017 prep baseline: 0.0 temp baseline: 0.0
Slave: 30Apr2017 prep baseline: -72.1761 temp baseline: 222.00076
======
Master: 30Apr2017
Slave: 08Dec2017 prep baseline: 72.534164 temp baseline: -222.00076
Slave: 30Apr2017 prep baseline: 0.0 temp baseline: 0.0
IFG: isTOPSARBurstProduct = true
-- org.jblas INFO Starting temp DLL cleanup task.
-- org.jblas INFO Deleted 4 unused temp DLL libraries from D:\TEMP
0
0
Waiting thread received a null tile.
java.lang.NullPointerException
Waiting thread received a null tile.
Waiting thread received a null tile.
java.lang.NullPointerException
90% done.
org.esa.snap.core.gpf.OperatorException: 0
at org.esa.snap.core.gpf.graph.GraphProcessor$GPFImagingListener.errorOccurred(GraphProcessor.java:363)
at com.sun.media.jai.util.SunTileScheduler.sendExceptionToListener(SunTileScheduler.java:1646)
...
Again, there is an error message from SNAP, and hence it would be very much appreciated if the error message could be made understandable (which operator failed, and why).
Looking at it, I am wondering:
whether you have checked the slave image visually
whether you get the same error for all the slaves
could it be a DEM issue? For a while SNAP had some issues when trying to download the DEM
could you please check the xml used by gpt for the run? Maybe it can help us to identify the issue. If it is a pure SNAP error, the developers might be needed to help.
Dear @mdelgado, thank you for the response. I see that it is probably not related to snap2stamps but I wonder why there is an error when the past 3 cases worked fine.
The error happens for all slaves, although they were correctly produced by the split.
I changed the DEM to SRTM 3Sec but the error persisted.
I also checked coreg_ifg_run.xml and it looks alright. When I execute it in SNAP I get the same error. I am currently trying some settings within the graph to see where the error could come from. Obviously, the Back Geocoding doesn’t produce a usable result for the Enhanced Spectral Diversity operator.
For sure I will try to solve the issue you got, do not doubt it, no matter what produces it!
Can you please check the master image? Or had you already done it?
Please check the xml file that gpt runs… coreg_ifg_computation.xml, saved in the GRAPH folder defined in project.conf
Yes, but each time it was a little different. But I’m getting the hang of it.
Master image is also correct. When I open the cor_ifg_computation.xml in SNAP and input the master and slave image, I get a more helpful error:
Error [NodeId: Enhanced-Spectral-Diversity] Registration window width should not be grater (sic!) than burst width 0.
Indeed, this is my first example with a single burst master image. But no matter what value I set for the Registration window, the error persists. Maybe I should try a master with at least two bursts…
The template is ready for a minimum of 2 bursts, otherwise ESD must be removed from the graph.
Probably I should add a warning in the manual, or a template for when the master has only 1 burst. Thanks for testing the scripts intensively! That always helps to improve them.
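For reference, removing ESD from the graph can be done by hand in a text editor, but a small script works too. Below is a minimal sketch using Python's standard XML tools; the toy graph here is an assumption about the layout (SNAP GPT graphs are `<graph>` documents with `<node id>` elements wired together via `<sourceProduct refid>`), not the actual snap2stamps graph file:

```python
import xml.etree.ElementTree as ET

# hypothetical miniature GPT graph, just enough structure to demonstrate
GRAPH = """<graph id="Graph">
  <node id="Back-Geocoding">
    <operator>Back-Geocoding</operator>
    <sources/>
  </node>
  <node id="Enhanced-Spectral-Diversity">
    <operator>Enhanced-Spectral-Diversity</operator>
    <sources><sourceProduct refid="Back-Geocoding"/></sources>
  </node>
  <node id="Interferogram">
    <operator>Interferogram</operator>
    <sources><sourceProduct refid="Enhanced-Spectral-Diversity"/></sources>
  </node>
</graph>"""

def remove_esd(xml_text, esd_id="Enhanced-Spectral-Diversity"):
    """Drop the ESD node and rewire its consumers to ESD's own source."""
    root = ET.fromstring(xml_text)
    esd = root.find(f"node[@id='{esd_id}']")
    if esd is None:
        return xml_text  # nothing to do
    upstream = esd.find("sources/sourceProduct").get("refid")
    root.remove(esd)
    # any node that read ESD's output now reads from ESD's input instead
    for src in root.iter("sourceProduct"):
        if src.get("refid") == esd_id:
            src.set("refid", upstream)
    return ET.tostring(root, encoding="unicode")

print(remove_esd(GRAPH))
```

After the rewiring, Interferogram consumes the Back-Geocoding output directly, which is the single-burst graph described above.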
that was it, thank you so much!
I now have two versions of cor_ifg_computation.xml, one for a multi-burst and one for a single-burst master. I rename them according to the processed master, but a note in the manual would probably be good. Section 3.1.3 states:
This process can take less than 3 minutes of time for processing compressing a single burst
which could be misread as implying that processing a single burst is possible.
Nevermind, I am happy to have a solution now.
I was never aware of what Enhanced Spectral Diversity really is for, but now I do.
Indeed you are right, I need to correct that. Actually this timing was with 2 bursts.
I will gather your comments and introduce them in the next release.
Do not hesitate to tell me or ask about useful things, and to point out situations which might be changed in future releases!
One more question: does it make sense to include the data of the master image in the slaves folder, or can I simply leave it out? As expected, it produces no interferogram, but until now I always had it included (I don’t know why, actually).
Well, in fact I never put the master image in the slaves folder; otherwise, indeed, the script produces an empty interferogram that later, during the StaMPS export, is not considered.
For the moment it is in the user’s hands to decide where to put the master. As I have mentioned, I normally put the master image in a separate folder, as it is logical that in the slaves folder you only keep the slaves; otherwise it could have been named the SLC folder, for example.
I could include a check on the acquisition date to avoid that interferogram computation; that is always an option. But it is not a real problem to leave the master inside, I guess, even if it makes no sense.
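That acquisition-date check could be as simple as the sketch below; the function name and paths are hypothetical, and it assumes the yyyymmdd_IW1.dim naming used throughout this thread:

```python
import os

def should_make_ifg(master_path, slave_path):
    """Skip the pair when master and slave are the same acquisition,
    comparing the leading yyyymmdd date of each file name."""
    master_date = os.path.basename(master_path)[:8]  # e.g. '20171208'
    slave_date = os.path.basename(slave_path)[:8]
    return master_date != slave_date

# hypothetical paths following the yyyymmdd_IW1.dim convention
print(should_make_ifg("/data/master/20171208_IW1.dim",
                      "/data/slaves/20171208_IW1.dim"))  # -> False (skip)
print(should_make_ifg("/data/master/20171208_IW1.dim",
                      "/data/slaves/20170430_IW1.dim"))  # -> True (compute)
```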
Me again, sorry. Do you suggest a specific name for the master? I noticed that the products in ifg and coreg are called “b.dim_201800101_IW1.dim”, which probably comes from my master named “20171208_split_Orb.dim”, so there might be some incorrect string handling. This later causes problems when the output files are not named as expected.
Edit: I noticed that if I add anything in the MASTER= tag before the coregistration step, the script tries to produce weird output names, such as name_of_split_product_full_path_of_master.dim.dim
Maybe this is Windows-only, but some of the strings appear where they shouldn’t.
It must be somewhere around lines 87-90. I couldn’t find why the name of the master appears as an output name for the split process.
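A guess at the cause, as a hedged sketch (the variable names are hypothetical, not the actual script's): if the full MASTER path were concatenated into the output name, it would produce exactly the name_of_split_product_full_path_of_master.dim.dim pattern described above. Stripping the directory and extension first avoids it:

```python
import ntpath  # handles Windows-style paths on any OS

MASTER = r"D:\Andy\S1_BSG\master\20171208_split_Orb.dim"  # hypothetical path

# naive concatenation reproduces the broken pattern described above
broken = "20170430_IW1" + "_" + MASTER + ".dim"

# keeping only the file name, without directories or extension, avoids it
master_tag = ntpath.splitext(ntpath.basename(MASTER))[0]  # '20171208_split_Orb'
fixed = "20170430_IW1" + "_" + master_tag + ".dim"
print(fixed)  # -> 20170430_IW1_20171208_split_Orb.dim
```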
Mmm I see…
Indeed, I thought that the master image would keep the master image name + _Orb_Split.dim or _split_Orb.dim or similar, which SNAP adds at the end.
Could you please keep the master splitted name similar to:
My script in fact takes 8 characters starting from the 19th character of the master file name. That decision might seem a bit weird, but it seemed quite logical and intuitive to get the master acquisition date from the master name after the split and apply-orbit steps, assuming you do them using SNAP and hence follow its naming convention. Next time I will just get the date from the metadata itself, whatever name the master may have.
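To illustrate why a fixed-offset slice is fragile: it only works for one exact naming convention, whereas pulling the date out with a pattern survives renaming. A minimal sketch, with hypothetical file names:

```python
import re

def acquisition_date(name):
    """Pull the yyyymmdd acquisition date out of a product name,
    regardless of where it sits in the string."""
    m = re.search(r"(20\d{6})(?:T\d{6})?", name)
    return m.group(1) if m else None

# the fixed-offset slice name[18:26] works for only one naming scheme;
# the regex handles all of these hypothetical variants
for n in ("20171208_split_Orb.dim",
          "S1A_IW_SLC__1SDV_20171208T000000_split_Orb.dim",
          "master_20171208_IW1.dim"):
    print(acquisition_date(n))  # -> 20171208 each time
```

Reading the date from the product metadata, as suggested above, would be even more robust, since it does not depend on the file name at all.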
CPU=5 * From a total of 6
CACHE=80G * I have 112G
##################################
Nevertheless, when I run the python script, this error appears:
pmatuser@DataScienceSAR3:/media/datadrive/SAR/PSI_SBAS/CABILDO$ python slaves_prep.py project.conf
python: can’t open file ‘slaves_prep.py’: [Errno 2] No such file or directory
Is there a path structure for snap2stamps to be installed properly? (This could be an improvement for the manual, which is kind of cryptic.)
Did you navigate to the directory where the python scripts are located?
Second, it should be: IW1=IW2
Besides that, you need to prepare the master as defined here (TOPS Split and Apply Orbit) and name it as described here. Use the full path to the prepared master product in the MASTER= tag.
I guess I will copy a snap2stamps folder for each project I run in the future, so I would be able to keep the scripts in the same directory as the files. So, I understand from your answer that I need to run the python scripts from the snap2stamps bin/ folder in the terminal.
About the slaves folder, where is the option to fill in the path?
You need to install the python module pathlib.
It should work with: pip install pathlib
From what I see in your project.conf, your GRAPH variable should point to the folder with the graphs provided by snap2stamps, so maybe you need to check that.
Regarding the slaves, that folder should be created inside your project folder, with the already downloaded Sentinel-1 images all together in the same folder, each of them a zip file.
The script will sort them so that the splitting can run correctly.
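That sorting step could be sketched as follows, assuming standard Sentinel-1 SLC zip names in which the first yyyymmddThhmmss timestamp is the sensing start (the file names below are hypothetical, with placeholder orbit/ID fields):

```python
import re

# hypothetical zip names with zeroed-out timestamp/orbit/ID fields
zips = [
    "S1A_IW_SLC__1SDV_20171208T000000_20171208T000027_000000_000000_0000.zip",
    "S1A_IW_SLC__1SDV_20170430T000000_20170430T000027_000000_000000_0000.zip",
]

def sensing_start(name):
    """First timestamp in the name = sensing start of the acquisition."""
    m = re.search(r"\d{8}T\d{6}", name)
    return m.group(0) if m else ""

# chronological order, oldest acquisition first
for z in sorted(zips, key=sensing_start):
    print(sensing_start(z)[:8])  # -> 20170430, then 20171208
```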
Why? I mean, I believe you, but if you explain a bit more, I’ll learn. Since I have 112G, I’ve used less than 80% of the RAM. Please answer only if you have time.
Maybe the explanation is quite similar to the one I have for the DEM, amplitude and lat/lon images that SNAP created when I used it in High Priority, though!