GPT mosaic OutOfMemoryError

I tried to mosaic daily images to get a yearly average map over a large area, and received the following error:

INFO: org.esa.snap.core.gpf.operators.tooladapter.ToolAdapterIO: Initializing external tool adapters
SEVERE: org.esa.s2tbx.dataio.gdal.activator.GDALDistributionInstaller: The environment variable LD_LIBRARY_PATH is not set. It must contain the current folder '.'.
Executing processing graph
INFO: org.hsqldb.persist.Logger: dataFileCache open start
 done.
java.lang.OutOfMemoryError: Java heap space
        at org.esa.snap.core.datamodel.ProductData$Byte.<init>(ProductData.java:1073)
        at org.esa.snap.core.datamodel.ProductData$ASCII.<init>(ProductData.java:2806)
        at org.esa.snap.core.datamodel.ProductData$ASCII.createDeepClone(ProductData.java:2871)
        at org.esa.snap.core.datamodel.MetadataAttribute.createDeepClone(MetadataAttribute.java:76)
        at org.esa.snap.core.datamodel.MetadataElement.createDeepClone(MetadataElement.java:633)
        at org.esa.snap.core.datamodel.MetadataElement.createDeepClone(MetadataElement.java:637)
        at org.esa.snap.core.util.ProductUtils.copyMetadata(ProductUtils.java:1643)
        at org.esa.snap.core.util.ProductUtils.copyMetadata(ProductUtils.java:1611)
        at org.esa.snap.core.gpf.common.reproject.ReprojectionOp.initialize(ReprojectionOp.java:267)
        at org.esa.snap.core.gpf.internal.OperatorContext.initializeOperator(OperatorContext.java:486)
        at org.esa.snap.core.gpf.internal.OperatorContext.getTargetProduct(OperatorContext.java:273)
        at org.esa.snap.core.gpf.Operator.getTargetProduct(Operator.java:387)
        at org.esa.snap.core.gpf.GPF.createProductNS(GPF.java:318)
        at org.esa.snap.core.gpf.GPF.createProduct(GPF.java:293)
        at org.esa.snap.core.gpf.GPF.createProduct(GPF.java:272)
        at org.esa.snap.core.gpf.common.MosaicOp.createReprojectedProducts(MosaicOp.java:377)
        at org.esa.snap.core.gpf.common.MosaicOp.initialize(MosaicOp.java:145)
        at org.esa.snap.core.gpf.internal.OperatorContext.initializeOperator(OperatorContext.java:486)
        at org.esa.snap.core.gpf.internal.OperatorContext.getTargetProduct(OperatorContext.java:273)
        at org.esa.snap.core.gpf.Operator.getTargetProduct(Operator.java:387)
        at org.esa.snap.core.gpf.graph.NodeContext.initTargetProduct(NodeContext.java:77)
        at org.esa.snap.core.gpf.graph.GraphContext.initNodeContext(GraphContext.java:195)
        at org.esa.snap.core.gpf.graph.GraphContext.initNodeContext(GraphContext.java:178)
        at org.esa.snap.core.gpf.graph.GraphContext.initOutput(GraphContext.java:162)
        at org.esa.snap.core.gpf.graph.GraphContext.<init>(GraphContext.java:91)
        at org.esa.snap.core.gpf.graph.GraphContext.<init>(GraphContext.java:64)
        at org.esa.snap.core.gpf.graph.GraphProcessor.executeGraph(GraphProcessor.java:128)
        at org.esa.snap.core.gpf.main.DefaultCommandLineContext.executeGraph(DefaultCommandLineContext.java:86)
        at org.esa.snap.core.gpf.main.CommandLineTool.executeGraph(CommandLineTool.java:534)
        at org.esa.snap.core.gpf.main.CommandLineTool.runGraph(CommandLineTool.java:388)
        at org.esa.snap.core.gpf.main.CommandLineTool.runGraphOrOperator(CommandLineTool.java:287)
        at org.esa.snap.core.gpf.main.CommandLineTool.run(CommandLineTool.java:188)

Error: Java heap space

My gpt settings, according to gpt --diag:

INFO: org.esa.snap.core.gpf.operators.tooladapter.ToolAdapterIO: Initializing external tool adapters
SNAP Release version 6.0
SNAP home: /home/user/Programs/snap/bin//..
SNAP debug: null
SNAP log level: null
Java home: /home/user/Programs/snap/jre
Java version: 1.8.0_102
Processors: 56
Max memory: 18.7 GB
Cache size: 1024.0 MB
Tile parallelism: 56
Tile size: 512 x 512 pixels

To configure your gpt memory usage:
Edit snap/bin/gpt.vmoptions

To configure your gpt cache size and parallelism:
Edit .snap/etc/snap.properties or gpt -c ${cachesize-in-GB}G -q ${parallelism}
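For reference, the heap limit mentioned in that hint lives in the gpt.vmoptions file, one JVM option per line. A minimal sketch (the 16G value is only an example, not a recommendation for this machine):

```shell
# snap/bin/gpt.vmoptions -- one JVM option per line.
# -Xmx caps the Java heap available to gpt; 16G below is an
# illustrative value, not a tuned recommendation.
-Xmx16G
```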

The computer configuration is:

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
CPU(s):                56
CPU family:            6
Total Memory:               32G
Swap memory:               15G

I have tried the following strategies from previous posts in this forum, but none of them worked.

  1. Edited snap/etc/snap.conf as this post suggested, changing -J-Xmx21G in the default_options= line to -J-Xmx10G or -J-Xmx25G. The computer has 32G of physical memory, so 21G (about 70%) should be ideal according to this post.

  2. Added performance settings at run time: gpt -c 1G -q 30 -x ............, combined or separately, according to the same post.
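For anyone comparing notes, the second strategy looks roughly like this on the command line (mosaic_graph.xml and the target name are placeholders; the flag meanings are as documented by gpt -h):

```shell
# -c 1G : tile cache size
# -q 30 : tile-scheduler parallelism (worker threads)
# -x    : clear the internal tile cache after a node is processed
# mosaic_graph.xml and yearly_mean.dim are placeholder names.
gpt mosaic_graph.xml -c 1G -q 30 -x -t yearly_mean.dim
```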

Any suggestions?

You have quite a powerful system, so you should be able to process the data.
Can you tell us what kind of data you are trying to mosaic, which area it covers (how big, also in terms of pixels), and how many bands the final mosaic should have? This would be helpful for analyzing the problem.
As a suggestion, I would recommend L3 Binning instead of Mosaic. It also gives you more freedom in how the data are aggregated, and I expect it will handle memory better.
Alternatively, you could split the mosaic computation: compute the first quarter of the year, then use the result as the updateProduct for the next quarter, and so on, for a total of four runs. The updateProduct can be specified on the command line as an additional source product; in the GUI you can select it as the target product.
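A sketch of that split on the command line, assuming the update product is passed to the graph as -SupdateProduct (the graph and product names are placeholders, and the graph must be written to accept that source):

```shell
# Run 1: mosaic the first quarter of the images normally.
gpt mosaic_graph.xml -t mosaic_q1.dim
# Runs 2-4: feed the previous result back in as the update
# product, so each run only adds one quarter's images.
gpt mosaic_graph.xml -SupdateProduct=mosaic_q1.dim -t mosaic_q2.dim
gpt mosaic_graph.xml -SupdateProduct=mosaic_q2.dim -t mosaic_q3.dim
gpt mosaic_graph.xml -SupdateProduct=mosaic_q3.dim -t mosaic_q4.dim
```

Each run then only needs to hold one quarter's worth of source products in memory alongside the accumulated result.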

Thank you for the reply.

I am mosaicking ~150 raster images derived from Sentinel-3 OLCI data over the Lake of the Woods. Image size: 446x491; data type: float32; bands mosaicked: 4. All images have the same extent and data type.

I have previously run the same script successfully over the same area, and over even larger lakes/areas, probably because the computer was less busy then (many users have access to this machine). So the Java heap error was quite possibly caused by a resource shortage during the run.

During the failed runs, I saw this process consume a lot of CPU and memory, though the amounts seem acceptable to me:

 PID USER   PR  NI    VIRT    RES    SHR S  %CPU %MEM  TIME+ COMMAND
 9523 user  20   0 27.266g 0.019t  32428 S  2067 68.4  23:48.72 java

(I tracked the failing process; its memory usage kept growing until it crashed at %MEM = 68.4.)

I tried Binning instead of Mosaic at the very beginning (~2 years ago), but noticed it had difficulty handling NoData areas. I could try it again, given the advantages you mentioned.

If I want to stick with Mosaic in my current processing flow, is there a way to limit the run to use fewer resources (CPU/memory)? I assume my computer is powerful enough for such a run even if I set gpt to use only half its resources.
Thx.