Java memory leak using S1 operators


#1

Hi all,

I have developed a Java web application which downloads Sentinel-1 images and then applies a chain of SNAP operators to pre-process the data and perform a change detection between two S1 images.

I am facing a Java memory leak problem (i.e. the Java heap keeps growing without being released by the garbage collector) while running the aforementioned SNAP operators over time. I will try to detail the problem further…

I am using SNAP version 4.0.1 and S1tbx version 4.0.0 and I am importing the following:

  • s1tbx-io
  • s1tbx-op-sar-processing
  • s1tbx-op-insar
  • s1tbx-op-calibration
  • s1tbx-op-sentinel1
  • s1tbx-op-utilities

Before calling any SNAP operators I perform the initialization by calling:

GPF.getDefaultInstance().getOperatorSpiRegistry().loadOperatorSpis();

and each operator is called as follows:

GPF.createProduct(opName, parameters, prods);

On each pair of images, for each image, I call the following pre-processing SNAP operators:

  • Subset
  • Apply-Orbit-File
  • ThermalNoiseRemoval
  • Calibration
  • Terrain-Correction

and then I perform the change detection by applying the “BandMaths” operator.
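For reference, the chain can be sketched with the GPF API roughly as below. This is only a sketch: the empty parameter maps are placeholders, and a real run needs proper parameters for each operator (subset region, orbit settings, a DEM for terrain correction, etc.); the final BandMaths step is elided.

```java
// Sketch of the pre-processing chain described above.
// NOTE: the empty parameter maps are placeholders; each operator
// needs real parameters (subset region, orbit type, DEM, ...).
import org.esa.snap.core.datamodel.Product;
import org.esa.snap.core.gpf.GPF;
import java.util.HashMap;
import java.util.Map;

public class S1PreprocessChain {
    static Product preprocess(Product source) {
        final String[] ops = {
            "Subset", "Apply-Orbit-File", "ThermalNoiseRemoval",
            "Calibration", "Terrain-Correction"
        };
        Product current = source;
        for (String opName : ops) {
            Map<String, Object> parameters = new HashMap<>(); // placeholder
            current = GPF.createProduct(opName, parameters, current);
        }
        return current; // input to the BandMaths change-detection step
    }
}
```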

Note that each execution of the above mentioned processing chain is run in a separate thread.
The Java heap memory keeps growing after each execution until it reaches a point where the application crashes with an OutOfMemoryError.

Analyzing a crash memory dump, it is possible to see that most of the Java heap is occupied by instances of “javax.imageio.stream.MemoryCache”:

Any idea why these cache objects are not released? Can anyone help me with this issue?

Thank you in advance!

Paulo Nunes


#2

Let me ask you some questions.
Which data format are you using for the input and output files? Maybe Big-GeoTiff?
Do you dispose the products (Product.dispose()) when processing is finished?


#3

I am using the .dim format.

Yes, at the end of the processing I call the dispose() method.


#4

But you are reading from the original S1 data, right? This might cause the problem.
I found usages of MemoryCacheImageInputStream in several S1TBX readers. This input stream uses the class MemoryCache.
I think those streams are not closed when the product is disposed, which would explain the leak. I haven’t followed this to the end. @JunLu can you have a look?


#5

Yes, I am reading from the original S1 data (zip file).

I am using the Read operator in the same way as the other operators; previously I read the products by calling ProductIO.readProduct() directly, but the behavior was the same.
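In case it helps, the read/dispose pattern in use is roughly the following (the path handling is illustrative and the operator chain itself is elided):

```java
// Sketch of reading an original S1 product and disposing it afterwards.
// The actual processing chain is elided; file handling is illustrative.
import org.esa.snap.core.dataio.ProductIO;
import org.esa.snap.core.datamodel.Product;
import java.io.File;
import java.io.IOException;

public class ReadDisposeExample {
    static void processOne(File s1Zip) throws IOException {
        Product source = ProductIO.readProduct(s1Zip); // opens the original S1 zip
        try {
            // ... apply the operator chain and write the result ...
        } finally {
            source.dispose(); // releases bands/tiles; should also release reader resources
        }
    }
}
```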


#6

Any news on this issue?


#7

No news till now. Currently other things need to be addressed first.


#8

When I searched for MemoryCacheImageInputStream in s1tbx, I found only two places where it is created:

  • CEOSProductDirectory.getCEOSFile()
  • ImageIOFile.createImageInputStream()

I have traced through the code and I believe MemoryCacheImageInputStream is closed in each case.

In the first case, MemoryCacheImageInputStream is passed to BinaryFileReader and is closed when BinaryFileReader.close() is called.

In the second case, MemoryCacheImageInputStream is closed when XMLProductDirectory.close() is called.

So we are not able to find a memory leak from MemoryCacheImageInputStream.
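As a side note, the key point in the trace above is that MemoryCacheImageInputStream only releases its backing MemoryCache when close() is called. A small self-contained illustration of that contract (unrelated to the S1TBX readers themselves, which wrap real files):

```java
// Demonstrates that MemoryCacheImageInputStream is an in-memory-cached
// stream whose cache is flushed when close() runs; try-with-resources
// guarantees the close even on error paths.
import javax.imageio.stream.MemoryCacheImageInputStream;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

public class MemoryCacheDemo {
    static int readFirstByte(byte[] data) {
        try (MemoryCacheImageInputStream in =
                 new MemoryCacheImageInputStream(new ByteArrayInputStream(data))) {
            return in.read(); // cached blocks are released when close() runs
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(readFirstByte(new byte[]{42, 7})); // prints 42
    }
}
```

If a reader keeps such a stream open after Product.dispose(), the cache blocks stay reachable and the heap grows exactly as described in the first post.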