Variable size in bytes OperatorException

Dear developers,

Quite regularly when processing S1 products we receive the following error.
Can somebody provide a clue on how to avoid or resolve this type of issue?
Many thanks in advance.

org.esa.snap.core.gpf.OperatorException: Variable size in bytes 10154724216 may not exceed 4294967292
at org.esa.snap.core.gpf.graph.GraphProcessor$GPFImagingListener.errorOccurred(GraphProcessor.java:373)
at com.sun.media.jai.util.SunTileScheduler.sendExceptionToListener(SunTileScheduler.java:1646)
at com.sun.media.jai.util.SunTileScheduler.scheduleTile(SunTileScheduler.java:921)
at javax.media.jai.OpImage.getTile(OpImage.java:1129)
at com.sun.media.jai.util.RequestJob.compute(SunTileScheduler.java:247)
at com.sun.media.jai.util.WorkerThread.run(SunTileScheduler.java:468)
Caused by: java.lang.IllegalArgumentException: Variable size in bytes 10154724216 may not exceed 4294967292
at ucar.nc2.NetcdfFileWriteable.addVariable(NetcdfFileWriteable.java:455)
at ucar.nc2.NetcdfFileWriteable.addVariable(NetcdfFileWriteable.java:420)
at org.esa.snap.dataio.netcdf.nc.N3FileWriteable.addVariable(N3FileWriteable.java:69)
at org.esa.snap.dataio.netcdf.metadata.profiles.cf.CfBandPart.defineRasterDataNodes(CfBandPart.java:214)
at org.esa.snap.dataio.netcdf.metadata.profiles.cf.CfBandPart.preEncode(CfBandPart.java:143)
at org.esa.snap.dataio.netcdf.NetCdfWriteProfile.writeProduct(NetCdfWriteProfile.java:48)
at org.esa.snap.dataio.netcdf.DefaultNetCdfWriter.writeProductNodesImpl(DefaultNetCdfWriter.java:62)
at org.esa.snap.core.dataio.AbstractProductWriter.writeProductNodes(AbstractProductWriter.java:109)
at org.esa.snap.core.gpf.common.WriteOp.doExecute(WriteOp.java:287)
at org.esa.snap.core.gpf.internal.OperatorContext.executeOperator(OperatorContext.java:1272)
at org.esa.snap.core.gpf.internal.OperatorImage.computeRect(OperatorImage.java:65)
at javax.media.jai.SourcelessOpImage.computeTile(SourcelessOpImage.java:137)
at com.sun.media.jai.util.SunTileScheduler.scheduleTile(SunTileScheduler.java:904)
… 3 more

Error: Variable size in bytes 10154724216 may not exceed 4294967292

Hi,

I think the amount of data is too much for NetCDF3. If you choose NetCDF4-CF as the output format, it should work.
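For illustration only (graph and file names are placeholders): if gpt itself writes the target product, you can select the format on the command line, e.g.

gpt my_graph.xml -t target.nc -f NetCDF4-CF source_product.zip

If your graph contains its own Write node, the same format name goes into the Write node’s formatName parameter instead.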

Hi Marco,

Thanks for your quick reply!
We don’t think it is related to too much data. We write many NetCDF files, always containing the same amount of data, and only occasionally do we get this error.
Our NetCDF files are typically less than 2 MB. We suspect this error occurs when gpt allocates disk space for a new file based on wrongly estimated/calculated area bounds. Maybe the error occurs when it cannot correctly determine the bounds?

Best Regards

@marpet Hi Marco, is this issue still on your radar? It is something we see quite regularly in our processing and should not be too hard to fix…

Honestly… I forgot about it. Sorry for that.
Can one of you provide a complete example, including the input product, the graph XML file and the command-line call? As this does not always happen, it would be good to have an example which reliably triggers the problem.

cheers

Hi Marco,

Unfortunately I won’t be in the office any more this week to prepare the info you requested. After Easter, if an example is still wanted, I will prepare it for you. I hope that in the meantime you will still give it a shot…

Best regards,
Sven.

The example would still be helpful.

Hi Marco,

I managed to get some additional insight into the issue. Recently we also installed the SNAP 3.0 Alpha 01 version. Re-running the scripts that would throw the variable byte size error under SNAP 2.0, we now get a different error in SNAP 3.0 Alpha 01 (see the error info below). So it appears that changes have been made related to the reported issue. In fact, the error message makes more sense now, stating that an exception is raised because the requested subset does not intersect with the data. Apparently this case was not caught before in SNAP 2.0 and caused the variable byte size exception in the subsequent NetCDF write operator.

So far so good, but unfortunately this still does not solve our issue. For us it is very undesirable that the toolbox raises an exception and/or quits when a subset is requested that does not intersect the data. This is because we cut and write many subsets from a product using a single XML graph. For areas that we cut near the product boundaries we can, due to small changes in the S1 product coverage, never be 100% sure that a given subset has some overlap with the product. Instead we strongly prefer the behaviour of the SNAP 2.0 version, where it would simply return the data/image as consisting completely of no-data values; it should then only be made sure that the NetCDF-CF writer correctly handles the case of “writing” only no-data values. In this respect we don’t care whether it writes a NetCDF-CF file in which the bands are filled with no-data values or writes no file at all, just as long as it continues to process the remainder of the XML file.

I hope you now have all the info to address the issue. If not, I can still prepare a graph for you that recreates the error, but then we should also agree on how I can send you the large (pre-processed) input data required to run the graph.

Best regards,
Sven.

Error message raised by SNAP 3.0 Alpha 01:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/s1tbx-snap3.0-alpha-01/snap/modules/ext/org.esa.snap.snap-netcdf/org-slf4j/slf4j-simple.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/s1tbx-snap3.0-alpha-01/bin/…/snap/modules/ext/org.esa.snap.snap-netcdf/org-slf4j/slf4j-simple.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
org.esa.snap.core.gpf.graph.GraphException: Subset: No intersection with source product boundary S1A_IW1_SLC__1SDV_20160107T054958_20160107T055026_009384_00D956_251C_VH_Stack_Deb_TC
at org.esa.snap.core.gpf.graph.NodeContext.initTargetProduct(NodeContext.java:78)
at org.esa.snap.core.gpf.graph.GraphContext.initNodeContext(GraphContext.java:195)
at org.esa.snap.core.gpf.graph.GraphContext.initNodeContext(GraphContext.java:178)
at org.esa.snap.core.gpf.graph.GraphContext.initOutput(GraphContext.java:162)
at org.esa.snap.core.gpf.graph.GraphContext.<init>(GraphContext.java:91)
at org.esa.snap.core.gpf.graph.GraphContext.<init>(GraphContext.java:64)
at org.esa.snap.core.gpf.graph.GraphProcessor.executeGraph(GraphProcessor.java:130)
at org.esa.snap.core.gpf.main.DefaultCommandLineContext.executeGraph(DefaultCommandLineContext.java:84)
at org.esa.snap.core.gpf.main.CommandLineTool.executeGraph(CommandLineTool.java:502)
at org.esa.snap.core.gpf.main.CommandLineTool.runGraph(CommandLineTool.java:350)
at org.esa.snap.core.gpf.main.CommandLineTool.runGraphOrOperator(CommandLineTool.java:249)
at org.esa.snap.core.gpf.main.CommandLineTool.run(CommandLineTool.java:150)
at org.esa.snap.core.gpf.main.CommandLineTool.run(CommandLineTool.java:122)
at org.esa.snap.core.gpf.main.GPT.run(GPT.java:54)
at org.esa.snap.core.gpf.main.GPT.main(GPT.java:34)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.esa.snap.runtime.Launcher.lambda$run$13(Launcher.java:55)
at org.esa.snap.runtime.Engine.runClientCode(Engine.java:183)
at org.esa.snap.runtime.Launcher.run(Launcher.java:51)
at org.esa.snap.runtime.Launcher.main(Launcher.java:31)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.exe4j.runtime.LauncherEngine.launch(LauncherEngine.java:62)
at com.install4j.runtime.launcher.UnixLauncher.main(UnixLauncher.java:57)
Caused by: org.esa.snap.core.gpf.OperatorException: Subset: No intersection with source product boundary S1A_IW1_SLC__1SDV_20160107T054958_20160107T055026_009384_00D956_251C_VH_Stack_Deb_TC
at org.esa.snap.core.gpf.common.SubsetOp.initialize(SubsetOp.java:206)
at org.esa.snap.core.gpf.internal.OperatorContext.initializeOperator(OperatorContext.java:485)
at org.esa.snap.core.gpf.internal.OperatorContext.getTargetProduct(OperatorContext.java:272)
at org.esa.snap.core.gpf.Operator.getTargetProduct(Operator.java:382)
at org.esa.snap.core.gpf.graph.NodeContext.initTargetProduct(NodeContext.java:76)
… 28 more

Error: Subset: No intersection with source product boundary S1A_IW1_SLC__1SDV_20160107T054958_20160107T055026_009384_00D956_251C_VH_Stack_Deb_TC

How are you doing multiple subsets with one graph?
How did you get the alpha-01 release?

We do multiple subsets using multiple SubsetOp and WriteOp pairs. We got the alpha release via Luis Veci, as there was some specific functionality implemented that he wanted us to test. This is not relevant though, because we also have the same issue with the official SNAP 3.0 release from yesterday.
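For reference, a stripped-down sketch of how such a graph is structured (node ids, file names and the geo region are made up for illustration; the real graphs contain hundreds of these Subset/Write pairs):

<graph id="multiSubsetExample">
  <version>1.0</version>
  <node id="Read">
    <operator>Read</operator>
    <parameters>
      <file>S1_input_product.dim</file>
    </parameters>
  </node>
  <node id="Subset_1">
    <operator>Subset</operator>
    <sources>
      <sourceProduct refid="Read"/>
    </sources>
    <parameters>
      <!-- WKT polygon of the area to cut; near the product boundary it may not intersect at all -->
      <geoRegion>POLYGON ((5.10 52.10, 5.11 52.10, 5.11 52.11, 5.10 52.11, 5.10 52.10))</geoRegion>
      <copyMetadata>true</copyMetadata>
    </parameters>
  </node>
  <node id="Write_1">
    <operator>Write</operator>
    <sources>
      <sourceProduct refid="Subset_1"/>
    </sources>
    <parameters>
      <file>subset_1.nc</file>
      <formatName>NetCDF4-CF</formatName>
    </parameters>
  </node>
  <!-- Subset_2/Write_2, Subset_3/Write_3, ... follow the same pattern -->
</graph>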

Hi Marco,
Were you able to locate the source of the issue discussed here yet? Perhaps a quick status update?

Thanks!

I have not had a look into it. Busy with other things.

Any idea when this might reach the top of the priority list? It is an issue we have been experiencing since the 1.x versions. Back then you did not look into it because we first had to switch to the current version of SNAP. Now that we are on the current versions and still see the same/similar issue, it would be nice if it got fixed/resolved. The issue arises regularly in the data processing of our S1-based monitoring service, and when it does it costs us a lot of resources to work around it. A first small step that would help us a lot would be to report the number/name of the node at which the error occurs in the error message!

Thanks!

I hope I can look at it tomorrow morning. I think that it will not be easy to fix. Not sure yet.
What about the idea that you don’t use multiple SubsetOps and WriteOps in one graph but put them into separate graph files? Wouldn’t this solve your problem?
But maybe I’m wrong and it is easy to fix. Let’s see.

Marco

We are regularly cutting and writing hundreds or even thousands of subsets (approx. 650 x 650 m) from a single S1 subswath, so the overhead this would introduce is immense and a no-go for us. Please be in touch with us tomorrow when you are looking into it. Perhaps we can find a temporary fix together that works for us if a decent fix turns out to be difficult!

Best regards,
Sven.

Adding the NodeId to the exception text, as you suggested, is not a problem. That’s done and will be released soon. Maybe this afternoon or later this week. Some other things need to be fixed and clarified before the update.

If you need more control over the process, I would suggest writing your own little program using the SNAP API. You could also write a dedicated operator for your task. You can use either Java or Python.
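A minimal sketch of that idea in Java, just to illustrate (class name, file names, the WKT region and the output format are placeholders, and whether the WKT string is accepted directly for the geoRegion parameter may depend on the SNAP version): create each subset via GPF, catch the OperatorException of a non-intersecting region and continue with the next one instead of aborting the whole run.

import java.io.File;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.esa.snap.core.dataio.ProductIO;
import org.esa.snap.core.datamodel.Product;
import org.esa.snap.core.gpf.GPF;
import org.esa.snap.core.gpf.OperatorException;

public class SubsetAndWrite {

    public static void main(String[] args) throws IOException {
        // Depending on the SNAP version, the operator SPIs may need to be loaded explicitly.
        GPF.getDefaultInstance().getOperatorSpiRegistry().loadOperatorSpis();

        Product source = ProductIO.readProduct(new File("S1_input_product.dim"));

        // Placeholder WKT region; in practice you would loop over your list of regions.
        String wktRegion = "POLYGON ((5.10 52.10, 5.11 52.10, 5.11 52.11, 5.10 52.11, 5.10 52.10))";

        Map<String, Object> params = new HashMap<>();
        params.put("geoRegion", wktRegion);
        params.put("copyMetadata", true);

        try {
            // Throws an OperatorException if the region does not intersect the source product.
            Product subset = GPF.createProduct("Subset", params, source);
            ProductIO.writeProduct(subset, "subset_1.nc", "NetCDF4-CF");
        } catch (OperatorException e) {
            // e.g. "Subset: No intersection with source product boundary ..."
            // Log it and continue with the next region instead of stopping the processing.
            System.err.println("Skipping region: " + e.getMessage());
        }
    }
}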

Ok, thanks! Just out of curiosity, did you find out what caused the exception in the NetCDF writer? Is it, as I expect, an attempt to write a file without valid data pixels? If so, it should not be too hard to catch this case in the operator itself, right? I will have one of my developers look at it if you guys are unable to provide additional support at this time… Perhaps some pointers on where to look in the source code?

The NetCDF error doesn’t happen any more. The Subset operator already checks whether there is an intersection or not.
That’s the OperatorException: Subset: No intersection with source product boundary

Sure, we can give some support to your developers (we are already discussing it via mail).
As an entry point, the developer guide might be of interest to them.

True, the NetCDF exception is circumvented by the new check in the SubsetOp, so an exception is now thrown earlier, with a more sensible error message indeed. However, in practice this doesn’t change anything for us: the processing of our graph file is still interrupted because one subset does not intersect the source product. Do you see any mechanism by which we can make gpt ignore this exception (turn it into a warning) and just skip the write if there is no intersection (or write an empty NetCDF, that is also fine!)?

I am looking forward to hearing about the support options.
Sven.

FINALLY! :slight_smile:

In the latest update I just published, the Subset operator does not throw the exception any more.
Instead, no file will be written; just a warning is shown on the console that there was no intersection.