Help needed understanding GPT error


I have a script in a Docker container that calls GPT via Python's subprocess library. The Docker image works fine on my machine, but fails when deployed to Kubernetes.

Of course, I don't expect to get help with Kubernetes here. The problem is that I can't figure out what is going wrong from the output GPT provides:

INFO: org.esa.snap.core.gpf.operators.tooladapter.ToolAdapterIO: Initializing external tool adapters
INFO: org.esa.snap.core.util.EngineVersionCheckActivator: Please check regularly for new updates for the best SNAP experience.
INFO: org.hsqldb.persist.Logger: dataFileCache open start

Exception: Executing processing graph.

I don't see anything here that points to the cause. Did it run out of memory? It's a 4 GB product with a rather complex graph that required increasing the memory available to Java on my machine, but I don't see any memory-related messages in the output.
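For context, the invocation looks roughly like this (a minimal sketch; `graph.xml`, the output path, and the `gpt_bin` parameter are placeholders, not details from the original post):

```python
import subprocess
from typing import List

def build_gpt_cmd(graph_path: str, *extra: str, gpt_bin: str = "gpt") -> List[str]:
    """Assemble the command line: gpt <graph.xml> [options]."""
    return [gpt_bin, graph_path, *extra]

def run_gpt(graph_path: str, *extra: str, gpt_bin: str = "gpt") -> subprocess.CompletedProcess:
    """Run the processing graph, capturing both stdout and stderr as text."""
    return subprocess.run(
        build_gpt_cmd(graph_path, *extra, gpt_bin=gpt_bin),
        capture_output=True,
        text=True,
    )

# Hypothetical usage:
# result = run_gpt("graph.xml", "-t", "output.dim")
# print(result.returncode)
# print(result.stderr)
```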


Indeed these messages do not tell much.

Actually, the only occurrence in the SNAP code of “Executing processing graph” is not related to an exception.
So this is strange.

What you could try, to find the reason for the problem, is to execute gpt directly, without Python's subprocess. Run gpt from the command line with the graph and see what happens. You might get better error messages; they might get lost on the way.

Executing gpt directly didn't provide additional info, as I was already capturing both stdout and stderr.

It seems it was an out-of-memory error. At some point the JVM tried to allocate more memory than was available on the system. For a brief moment the top command shows a spike, then it crashes. It's either more resources or splitting the graph in two.
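A quick way to confirm this from the Python side is to inspect the return code and stderr of the subprocess call (a sketch; the exit-code convention assumes a Linux container, where the kernel OOM killer sends SIGKILL, while a heap-limited JVM prints `OutOfMemoryError` before dying):

```python
def looks_like_oom(returncode: int, stderr: str) -> bool:
    """Heuristic: was the gpt run killed for memory reasons?

    - returncode -9 (SIGKILL, as subprocess reports it) or 137 (128 + 9,
      as a shell reports it) usually means the kernel/Kubernetes OOM
      killer terminated the process.
    - 'OutOfMemoryError' in stderr means the JVM itself hit its heap limit.
    """
    return returncode in (-9, 137) or "OutOfMemoryError" in stderr
```

The distinction matters: a JVM `OutOfMemoryError` means the Java heap limit is too small, while a SIGKILL with no Java message means the container itself ran out of memory, which matches seeing only a spike in top before the crash.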