Quite regularly, when processing S1 products, we receive the following error.
Can somebody provide a clue on how to avoid or resolve this type of issue?
Many thanks in advance.
org.esa.snap.core.gpf.OperatorException: Variable size in bytes 10154724216 may not exceed 4294967292
Caused by: java.lang.IllegalArgumentException: Variable size in bytes 10154724216 may not exceed 4294967292
… 3 more
Error: Variable size in bytes 10154724216 may not exceed 4294967292
Thanks for your quick reply!
We don’t think it is related to too much data. We write many NetCDF files, always containing the same amount of data, and only occasionally do we get this error.
Our NetCDF outputs are typically less than 2 MB. We suspect this error occurs when gpt allocates disk space for a new file based on wrongly estimated/calculated area bounds. Maybe the error occurs in cases where it cannot correctly determine the bounds?
Honestly… I forgot it. Sorry for this.
Can one of you provide a complete example, including the input product, the graph XML file, and the command-line call? As this does not always happen, it would be good to have an example that reliably forces the problem.
Unfortunately I won’t be in the office any more this week to prepare the info you request. After Easter, if an example is still wanted, I will prepare it for you. I hope in the meantime you will still give it a shot…
I managed to get some additional insight into the issue. Recently we also installed the SNAP 3.0 Alpha 01 version. Re-running our scripts that would throw the variable-byte-size error in SNAP 2.0, we now get a different error in SNAP 3.0 Alpha 01 (see error info below). So it appears that changes have been made related to the reported issue. In fact, the error message makes more sense now, stating that an exception is raised because the requested subset does not intersect with the data. Apparently, in SNAP 2.0 this was not caught and caused the variable-byte-size exception in the subsequent NetCDF write operator.
So far so good, but unfortunately this still does not solve our issues. For us it is very undesirable behavior for the toolbox to raise an exception and/or quit when a subset is requested that does not intersect the data. This is because we cut and write many subsets from a product using a single XML. For areas that we cut near the product boundaries we can, due to small changes in S1 product coverage, never be 100% sure that a given subset has some overlap with the product. Instead we strongly prefer the behavior of SNAP 2.0, where it would simply return the data/image as consisting completely of no-data values; it should then only be made sure that the NetCDF-CF writer correctly handles the case of “writing” no-data values only. In this respect we don’t care whether it writes a NetCDF-CF file in which the bands are filled with no-data values, or writes no file at all, as long as it continues to process the remainder of the XML file.
I hope you now have all the info needed to address the issue. If not, I can still prepare a GRAPH for you that re-creates the error, but then we should also agree on how I can send you the large (pre-processed) input data required to run the GRAPH.
Error message raised by SNAP 3.0 Alpha 01:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/s1tbx-snap3.0-alpha-01/snap/modules/ext/org.esa.snap.snap-netcdf/org-slf4j/slf4j-simple.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/s1tbx-snap3.0-alpha-01/bin/…/snap/modules/ext/org.esa.snap.snap-netcdf/org-slf4j/slf4j-simple.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
org.esa.snap.core.gpf.graph.GraphException: Subset: No intersection with source product boundary S1A_IW1_SLC__1SDV_20160107T054958_20160107T055026_009384_00D956_251C_VH_Stack_Deb_TC
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Caused by: org.esa.snap.core.gpf.OperatorException: Subset: No intersection with source product boundary S1A_IW1_SLC__1SDV_20160107T054958_20160107T055026_009384_00D956_251C_VH_Stack_Deb_TC
… 28 more
Error: Subset: No intersection with source product boundary S1A_IW1_SLC__1SDV_20160107T054958_20160107T055026_009384_00D956_251C_VH_Stack_Deb_TC
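For context, the graph we run has roughly this shape: a single Read node feeding many Subset/Write pairs. This is only a hedged sketch; the file names, node ids, coordinates, and geo-regions below are illustrative placeholders, not our actual values:

```xml
<graph id="MultiSubsetGraph">
  <version>1.0</version>
  <node id="Read">
    <operator>Read</operator>
    <sources/>
    <parameters>
      <file>stack_deb_tc.dim</file>
    </parameters>
  </node>
  <node id="Subset-1">
    <operator>Subset</operator>
    <sources><sourceProduct refid="Read"/></sources>
    <parameters>
      <geoRegion>POLYGON((4.10 52.00, 4.11 52.00, 4.11 52.01, 4.10 52.01, 4.10 52.00))</geoRegion>
      <copyMetadata>true</copyMetadata>
    </parameters>
  </node>
  <node id="Write-1">
    <operator>Write</operator>
    <sources><sourceProduct refid="Subset-1"/></sources>
    <parameters>
      <file>subset_1.nc</file>
      <formatName>NetCDF4-CF</formatName>
    </parameters>
  </node>
  <!-- ... Subset-2/Write-2, Subset-3/Write-3, and so on,
       all taking the same Read node as source ... -->
</graph>
```

Run with `gpt graph.xml`; if any one Subset node has no intersection with the product, the whole run is aborted.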
We do multiple subsets using multiple SubsetOp and WriteOp pairs. We got the alpha release via Luis Veci, as there was some specific functionality implemented that he wanted us to test. This is not relevant though, because we also have the same issue with the official SNAP 3.0 release from yesterday.
Any idea when this might reach the top of the priority list? It is an issue we have been experiencing since the 1.x versions. Back then you did not look into it because we had to switch to the current version of SNAP. Now that we are on the current versions and still see the same/similar issue, it would be nice if it were fixed/resolved. The issue arises regularly in the data processing of our monitoring service based on S1 data, and when it does it costs us really a lot of resources to work around it. A first small step that would help us a lot would be to report the node number/name at which the error occurs in the error message!
I hope I can look at it tomorrow morning. I think that it will not be easy to fix. Not sure yet.
What about not using multiple SubsetOps and WriteOps in one graph, but putting them into separate graph files instead? Wouldn’t this solve your problem?
But maybe I’m wrong and it is easy to fix. Let’s see.
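Generating the per-subset graph files could at least be automated. A minimal sketch, assuming each subset is defined by a WKT geo-region (the node ids, parameter values, and file names here are illustrative, not from the actual setup):

```python
import xml.etree.ElementTree as ET


def build_subset_graph(input_path, geo_region, output_path):
    """Build a minimal Read -> Subset -> Write gpt graph as an XML string."""
    graph = ET.Element("graph", id="SubsetGraph")
    ET.SubElement(graph, "version").text = "1.0"

    # Read node: load the (pre-processed) source product
    read = ET.SubElement(graph, "node", id="Read")
    ET.SubElement(read, "operator").text = "Read"
    ET.SubElement(read, "sources")
    params = ET.SubElement(read, "parameters")
    ET.SubElement(params, "file").text = input_path

    # Subset node: cut the requested geo-region
    subset = ET.SubElement(graph, "node", id="Subset")
    ET.SubElement(subset, "operator").text = "Subset"
    sources = ET.SubElement(subset, "sources")
    ET.SubElement(sources, "sourceProduct", refid="Read")
    params = ET.SubElement(subset, "parameters")
    ET.SubElement(params, "geoRegion").text = geo_region
    ET.SubElement(params, "copyMetadata").text = "true"

    # Write node: store the subset as NetCDF4-CF
    write = ET.SubElement(graph, "node", id="Write")
    ET.SubElement(write, "operator").text = "Write"
    sources = ET.SubElement(write, "sources")
    ET.SubElement(sources, "sourceProduct", refid="Subset")
    params = ET.SubElement(write, "parameters")
    ET.SubElement(params, "file").text = output_path
    ET.SubElement(params, "formatName").text = "NetCDF4-CF"

    return ET.tostring(graph, encoding="unicode")
```

One graph file per subset means one gpt invocation per subset, so a failing subset only aborts its own run.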
We regularly cut and write hundreds or even thousands of subsets (approx. 650 x 650 m) from a single S1 subswath, so the overhead this would introduce is immense and a no-go for us. Please be in touch with us tomorrow when you are looking into it. Perhaps we can find a temporary fix together that works for us, if a decent fix turns out to be difficult!
Adding the NodeId to the exception text, as you suggested, is not a problem. That’s done and will be released soon. Maybe this afternoon or later this week. Some other things need to be fixed and clarified before the update.
If you need more control over the process, I would suggest writing your own little program using the SNAP API. You could also write a special operator for your task. You can use either Java or Python.
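Short of using the SNAP API directly, a driver script on the calling side can already provide the failure isolation: run one gpt process per graph file and record failures instead of aborting the whole batch. A sketch (the `gpt` command name and graph file naming are assumptions):

```python
import subprocess


def run_graphs(graph_files, gpt_cmd="gpt"):
    """Run each graph in its own gpt process; collect failures instead of aborting.

    Returns (succeeded, failed) lists of graph file names.
    """
    succeeded, failed = [], []
    for graph in graph_files:
        result = subprocess.run([gpt_cmd, graph])
        if result.returncode == 0:
            succeeded.append(graph)
        else:
            # e.g. "Subset: No intersection with source product boundary ...":
            # note the failure and continue with the next subset
            failed.append(graph)
    return succeeded, failed
```

A non-intersecting subset then costs one failed process, and the remaining subsets are still written.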
Ok, thanks! Just out of curiosity, did you find out what caused the exception in the NetCDF writer? Is it, as I expect, an attempt to write a file without valid data pixels? If so, it should not be too hard to catch this case in the operator itself, right? I will have one of my developers look at it if you are unable to provide additional support at this time… Perhaps some pointers on where to look in the source code?
True, the NetCDF exception is circumvented by the new test in SubsetOp, so an exception is now thrown earlier with, indeed, a more sensible error message. In practice, however, this doesn’t change anything for us: the processing of our GRAPH file is still interrupted because one subset does not intersect the source product. Do you see any mechanism by which we can make gpt ignore this exception (turn it into a warning) and simply skip the write if there is no intersection (or write an empty NetCDF, which is also fine)?
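As a stopgap on the calling side, one could also pre-filter the requested regions against the product footprint before building the graph, so that non-intersecting subsets never reach SubsetOp. A minimal sketch using plain lat/lon bounding boxes (an axis-aligned box is only an approximation of the real product footprint, so borderline cases can still slip through):

```python
def boxes_intersect(a, b):
    """Axis-aligned bounding-box overlap test.

    Boxes are (lon_min, lat_min, lon_max, lat_max) tuples.
    """
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])


def keep_intersecting(product_box, subset_boxes):
    """Return only the subset boxes that overlap the product footprint."""
    return [box for box in subset_boxes if boxes_intersect(product_box, box)]
```

Only the surviving boxes would then be turned into Subset/Write node pairs.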
I am looking forward to hearing about the support options.
In the latest update I just published, the Subset operator no longer throws the exception.
Instead, no file is written; just a warning is shown on the console that there was no intersection.