Snappy doesn't clear memory cache

Hi, I need to process some S-1 images and I've written some code using the snappy module in Python which performs several operations on those images (the code I'm using is at the bottom of this post).
I basically loop through all the images in a directory, execute (almost) the same operations for each image, and write the output products to another directory.
The problem I'm facing, though, is that the memory is not cleared between iterations, so the code runs slower and slower (I have not run into any memory heap error yet because my system has 64 GB of memory and I set java_max_mem in the snappy.ini file to 45 GB).
I also tried calling the garbage collector and disposing of the source products after writing them to file, but neither seemed to solve the issue.
The real problem is that even if I reset the workspace (by calling %reset, since I'm using the IPython console in Spyder), I get the same memory usage as if I still had all the objects stored in the workspace.
Has anybody run into the same issue who can give me some advice on this?

Here is the code I’m using:

import os
import snappy
import numpy as np
import pandas as pd
import re
import datetime as dt
import csv
import sys
import time
from zipfile import ZipFile as zipf
from snappy import HashMap, ProductIO, GPF, jpy
# Function to get full list of Operator parameters with their descriptions
#def Op_help(op):
#        op_spi = snappy.GPF.getDefaultInstance().getOperatorSpiRegistry().getOperatorSpi(op)
#        print('Op name: {}'.format(op_spi.getOperatorDescriptor().getName()))
#        print('Op alias: {}\n'.format(op_spi.getOperatorDescriptor().getAlias()))
#        print('PARAMETERS:\n')
#        param_Desc = op_spi.getOperatorDescriptor().getParameterDescriptors()
#        for param in param_Desc:
#            print('{}: {}\nDefault Value: {}\nPossible other values: {}\n'.format(param.getName(),param.getDescription(),param.getDefaultValue(),list(param.getValueSet())))
# Function to convert  meters in geo degrees 
def m2geodeg(m):
        fact = (6378137*np.pi*2)/360  # meters per degree of longitude at the equator
        geodeg = m/fact
        return geodeg
# Function to get the resolution of the product bands from its Metadata
def getResolution(source):
        bnames = source.getBandNames()
        pixres = np.zeros(len(bnames))
        # NOTE: this function was partly lost when the post was copied --
        # the definition of 'els' (the metadata elements) and the per-band
        # resolution lookup inside the loop are missing.
        els = source.getMetadataRoot().getElements()
        for el in els:
                if el.getAttributeString('physicalBand') in bnames:
                        pass  # original per-band resolution assignment lost
        return pixres

#System = jpy.get_type('java.lang.System')
wdir = r'C:\ProgramData\Anaconda\Spyder_DATA\Lavoro\Soil_Moisture_Monitoring' #predefined working directory

InputFold = r'E:\Users\Davide_Marchegiani\SoilMoisture\Data\REMEDHUS\Sentinel_data\Sentinel_1\20170101-20170630_Ascending' #Input Data folder
OutputFold = InputFold + '_PROCESSED'

HashMap = jpy.get_type('java.util.HashMap')
#Get snappy operators
GPF.getDefaultInstance().getOperatorSpiRegistry().loadOperatorSpis()

# Get subset geometry from WKT file
wktfile = r'E:\Users\Davide_Marchegiani\SoilMoisture\Data\REMEDHUS\qgis_data\Remedhus_stations_subset_off1000m.csv'
with open(wktfile[:-4]+'_PLUS.csv') as csvfile:
        csv_read = csv.reader(csvfile)
        for row in csv_read:
                wkt = (row[0])
geom = wkt  # NOTE: reconstructed -- the original definition of 'geom' was lost; the Subset 'geoRegion' parameter accepts a WKT string
files = sorted(os.listdir(InputFold))  # NOTE: reconstructed -- the original definition of 'files' was lost
st = time.time()  # NOTE: reconstructed -- 'st' is used by the timing printouts below
#num = len(files)
num = 2
count = 1
for i in range(0,num):
        print('Iter = {} -- Begin algorithm'.format(i+1))
        if count > 1:
                count -= 1
        ind = [m.start() for m in re.finditer('_',files[i])]
        datestart = files[i][ind[3]+1:ind[4]]
#        count = 1
        if i+count != num:
                while files[i][:ind[4]-7] == files[i+count][:ind[4]-7]:
                        count += 1
                        if i+count == num:
                                break  # NOTE: reconstructed -- the loop exit was lost when the post was copied
        if count > 1:
                products = jpy.array('org.esa.snap.core.datamodel.Product', count)
                for k in range(0,count):
                        products[k] = ProductIO.readProduct(os.path.join(InputFold,files[i+k]))
                parameters = HashMap()
                source = GPF.createProduct('SliceAssembly', parameters, products)
#                prod_mosaic = source
#                ProductIO.writeProduct(prod_mosaic,'prova_MOSAIC', 'BEAM-DIMAP')
        else:
                # NOTE: 'else' reconstructed -- without it, the single read would overwrite the assembled product
                source = ProductIO.readProduct(os.path.join(InputFold,files[i]))
        dateend = files[i+count-1][ind[4]+1:ind[5]]
#        print(i)
### SUBSET (coarse)
        parameters = HashMap()
        parameters.put('geoRegion', geom)
        source = GPF.createProduct('Subset', parameters, source)
#        prod_subin = source
#        ProductIO.writeProduct(prod_subin,files[i][:-4], 'BEAM-DIMAP')

        parameters = HashMap()
        source = GPF.createProduct('Apply-Orbit-File', parameters, source)
#        prod_orb = source
#        ProductIO.writeProduct(prod_orb,'prova_ORBIT', 'BEAM-DIMAP')

        bandnames = source.getBandNames()
        parameters = HashMap()
        source = snappy.GPF.createProduct('Calibration', parameters, source)
#        prod_calib = source
#        ProductIO.writeProduct(prod_calib,'prova_CALIB', 'BEAM-DIMAP')

        nRg = 4
        nAz = nRg
        bandnames = source.getBandNames()
        parameters = HashMap()
        source = snappy.GPF.createProduct('Multilook', parameters, source)
#        prod_multi = source
#        ProductIO.writeProduct(prod_multi,'prova_MULTI', 'BEAM-DIMAP')                             
#        pix_sp_m = float(nRg*10)
        bandnames = source.getBandNames()
        parameters = HashMap()
#        parameters.put('demName', 'SRTM 1Sec HGT')
        parameters.put('imgResamplingMethod', 'NEAREST_NEIGHBOUR')
#        parameters.put('pixelSpacingInMeter', pix_sp_m)
        parameters.put('saveLocalIncidenceAngle', True)
        parameters.put('saveSelectedSourceBand', True)
#        parameters.put('applyRadiometricNormalization', True)
        parameters.put('saveSigmaNought', True)
        source = snappy.GPF.createProduct('Terrain-Correction', parameters, source)
#        prod_tercorr = source
#        ProductIO.writeProduct(prod_tercorr,'prova_TERCORR2', 'BEAM-DIMAP')

        # Get subset geometry from WKT file
        with open(wktfile) as csvfile:
                csv_read = csv.reader(csvfile)
                for row in csv_read:
                        wkt = (row[0])
        # Set SUBSET parameters
        parameters = HashMap()
        parameters.put('geoRegion', geom)
        source = GPF.createProduct('Subset', parameters, source)
#        prod_subfin = source
#        ProductIO.writeProduct(prod_subfin,'prova_SUBSET_FIN', 'BEAM-DIMAP')

        print('Iter = {} -- Writing product'.format(i+1))
        ProductIO.writeProduct(source,os.path.join(OutputFold,files[i][:ind[4]]+'_'+dateend+files[i][ind[5]:-4]), 'BEAM-DIMAP')
        print('Product written. -- t = {:.2f}s'.format(time.time()-st))
#        System.gc()
print('Done! -- t = {:.2f}s'.format(time.time()-st))
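For illustration, the look-ahead logic in the loop above (grouping consecutive slice files that share the same name up to the acquisition date, so they can be fed to SliceAssembly together) can be sketched standalone. The filenames below are made up for the example:

```python
import re

def group_slices(files):
    """Group consecutive S-1 filenames sharing the same name up to the
    acquisition date (the prefix before the 'THHMMSS' start time),
    mirroring the count/look-ahead logic in the loop above."""
    groups = []
    i = 0
    while i < len(files):
        ind = [m.start() for m in re.finditer('_', files[i])]
        prefix = files[i][:ind[4] - 7]  # product name up to the date, minus 'THHMMSS'
        count = 1
        while i + count < len(files) and files[i + count][:ind[4] - 7] == prefix:
            count += 1
        groups.append(files[i:i + count])  # these slices belong to one acquisition
        i += count
    return groups

# Hypothetical filenames: two slices of one acquisition, then a later product.
files = [
    'S1A_IW_GRDH_1SDV_20170101T054355_20170101T054420_A.zip',
    'S1A_IW_GRDH_1SDV_20170101T054420_20170101T054445_B.zip',
    'S1A_IW_GRDH_1SDV_20170113T054354_20170113T054419_C.zip',
]
groups = group_slices(files)
```

Each inner list would then either be read individually (length 1) or assembled with SliceAssembly (length greater than 1).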

By looking more deeply into this problem I found out that:

1) the step with the largest memory consumption seems to be ProductIO.writeProduct, since, considering only 2 iterations (I set num = 2), running all the lines above ProductIO.writeProduct(…) produces a memory consumption of approximately 750 MB, whereas including the ProductIO.writeProduct(…) line it reaches about 8.5 GB of memory usage.
(Now, for example, if I execute the command again without closing the console, I get a memory usage of 15.5 GB, so the memory seems to stack up);
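One way to track this kind of growth per iteration is to log the process's peak memory after each write. A minimal standard-library sketch (Unix-only; note that ru_maxrss is reported in KB on Linux but in bytes on macOS):

```python
import resource

def peak_rss_mb():
    """Peak resident set size of the current process, in MB (on Linux,
    where getrusage reports ru_maxrss in kilobytes)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024.0

# Inside the processing loop one would log this after each write, e.g.:
# print('Iter = {} -- peak RSS = {:.0f} MB'.format(i + 1, peak_rss_mb()))
mb = peak_rss_mb()
```

If the value keeps climbing across iterations, memory from previous products is indeed not being released.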

2) if by some chance I open a new IPython console and just run

import snappy

and then


it behaves as described above: it won't clear the cached memory, which remains at about 300 MB.
So it looks like an issue of the snappy module itself, which somehow doesn't clear the memory cache?!

Any help would be much appreciated! Thank you very much!

@marpet hello and happy new year :slight_smile: sorry to bother you again but can you help me out with this issue?
Thank you so much!

The major part of the memory is allocated during writeProduct(…) because all the computation is triggered there.
Actually, the processing should not slow down as long as there is memory available. As soon as it reaches the limit, old data should be replaced with new data.

Maybe you can also edit the tile cache size. See here:
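For reference, the tile cache size is a SNAP configuration property. A sketch of where it can be set (the path and the 10000 MB value are illustrative, not a recommendation):

```
# <SNAP install dir>/etc/snap.properties  (value in MB)
snap.jai.tileCacheSize = 10000
```

A larger cache lets more tiles stay resident between operator calls; it does not by itself release memory after processing.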

What you observe in point 2) is just the memory the JVM needs right after starting up.

This leads me to an idea. You could try to stop and restart the JVM after each product. I haven't really tried this myself,
but maybe you can give it a try.
Have a look at the source of snappy (on GitHub).
There are the calls:

jpy.create_jvm(options=_get_snap_jvm_options()) # line 235


jpy.destroy_jvm() # line 265

Maybe you can call it from your script. First, destroy it and then restart.

I changed the property 'snap.jai.tileCacheSize' as you suggested (it was already about 35000 MB and I set it to 45000, so I don't think that will change things a lot anyway).
By JVM you mean the Java Virtual Machine, right? I'm not a Java programmer so I don't really know much about it.
I’ve tried to apply what you suggested for point 2) but I have some questions:

1) First of all, by calling

jpy.destroy_jvm()

should I be able to free the memory?
Because I've tried doing so after launching a script (where I got a memory usage of about 6-7 GB), but the memory didn't free up.

2) By calling

jpy.create_jvm(options=...)

I should be able to recreate the JVM, right? Where can I get the options?
I’ve tried to return them through the snappy.jpyutil.get_jvm_options() command, but if I run something like this:

opt = snappy.jpyutil.get_jvm_options()

It raises the error RuntimeError: jpy: failed to create Java VM

I also noticed that the options I get from snappy.jpyutil.get_jvm_options() are different from those in the snappy source; mine are the following:

opt = ['-Djpy.pythonPrefix=C:\\ProgramData\\Anaconda\\envs\\SNAP',

What am I doing wrong?

I'm encountering the same issue: snappy does not clear the memory cache. The code below is a simple example, similar to that shared by atteggiani, which illustrates the problem: I simply loop through S1 products and plot subsets of the intensity bands to PNG files. I always experience this issue, no matter the complexity of the processing pipeline, and no matter the product type (S1 or S2).

Several posts indicate how to tweak the various memory settings (post) and the tile cache size (post); however, I would like to know how to properly release the memory, which is the only way to process a large number of files sequentially.

I have tried various approaches discussed elsewhere, with no success:
(1) disposing the product to release all resources used by the object (post)
(2) explicitly calling the Java garbage collector (post)
(3) trying to restart the Java Virtual Machine (post); I obtain the same error described by atteggiani

import os
import sys
import time
import snappy
from snappy import ProductIO
from snappy import HashMap
from snappy import GPF
from snappy import jpy

# (memory settings)
# pico /usr/local/lib/python2.7/dist-packages/snappy/snappy.ini
#   java_max_mem: 15G
# pico ~/.snap/snap-python/snappy/
#   jvm_maxmem = '15G'
# (tile cache setting)
# pico ~/snap/etc/
#   snap.jai.tileCacheSize = 10000

JAI = jpy.get_type('javax.media.jai.JAI')  # NOTE: class name reconstructed from the JAI.create(...) call below
ImageManager = jpy.get_type('org.esa.snap.core.image.ImageManager')
System = jpy.get_type('java.lang.System')
System.setProperty('', 'true')  # NOTE: the property name was lost when the post was copied

zipfiles = ['']  # NOTE: the list of input product paths was lost when the post was copied

for f in zipfiles:
    # --- read product
    p = ProductIO.readProduct(f)

    # --- apply orbit file
    parameters = HashMap()
    p = GPF.createProduct('Apply-Orbit-File', parameters, p)

    # --- deburst
    parameters = HashMap()
    p = GPF.createProduct('TOPSAR-Deburst', parameters, p)

    # --- terrain correction
    sourcebands = ['Intensity_VH', 'Intensity_VV']
    sourceBands_str = ','.join(sourcebands)
    parameters = HashMap()
    parameters.put('sourceBands', sourceBands_str)
    p = GPF.createProduct('Terrain-Correction', parameters, p)

    # --- subset
    geoRegion = 'POLYGON((40.63 13.64, 40.735 13.64, 40.735 13.53, 40.63 13.53, 40.63 13.64))'
    parameters = HashMap()
    parameters.put('copyMetadata', 'true')
    parameters.put('geoRegion', geoRegion)
    p = GPF.createProduct('Subset', parameters, p)

    # --- get metadata
    acqstart = p.getMetadataRoot().getElement('Abstracted_Metadata').getAttributeString('first_line_time')

    # --- plot bands
    for bname in sourcebands:
        print('- plotting band "' + bname + '"')
        band = p.getBand(bname)
        im = ImageManager.getInstance().createColoredBandImage([band], band.getImageInfo(), 0)
        f_out = '{}_{}.png'.format(acqstart[0:10], bname)
        JAI.create("filestore", im, f_out, 'png')

    # --- FREE MEMORY
    # - attempt 1: DISPOSE product
    print('- dispose product')
    p.dispose()

    # - attempt 2: GARBAGE COLLECTOR
    System = jpy.get_type('java.lang.System')
    System.gc()

    # - attempt 3: JVM restart
    print('- create JVM')
    opt = snappy.jpyutil.get_jvm_options()
    jpy.destroy_jvm()
    jpy.create_jvm(options=opt)

Am I doing something wrong? Are there any recommendations on how to proceed?

Many thanks in advance,


Seconding this post, there doesn't seem to be an easy way to force this to happen. My API is rendered useless after a few calls:

Sorry to keep bringing this one up, but are there any other suggestions of ways to clear this cache? It seems each method in the posts above is ineffective.

Appreciate the help a lot!

Same issue here.


bump. I’m having the same issue of running out of memory when using ProductIO.writeProduct in a loop.

Do you have something similar to this situation ?

I'm trying to loop over several products as well, and the loop stops unexpectedly without an error.

I’ve just replied to another post:

So you can expect progress soon.


@gbaier and @geoagr2003 it is worth saying that the post @marpet linked also has a workaround that will unblock your development for now while the issue is addressed by the team!

You mean the workaround about moving the jpy.get_type() calls out of the loop?
Do you know if the operator parameters can be configured as nested dictionaries in the current version of snappy?

No sorry, it’s the issue where running multiple processing jobs inevitably means you’ll run out of memory.

Do you have an example of what you’re trying to do with the operator parameters?

Here is my piece of code:

I believe that too many class instantiations happen within the loop, and the generated objects are not destroyed after use.
What do you think?

It’s a bit of everything really. Any time you run a loop with snappy, anything you instantiate or perform in memory is not garbage collected.

For your example, I'd use my workaround and run read_cloud_band() in a separate script. You'll find that each time the script terminates, the memory will be freed.

This is taken from my workaround post:

The line of code I use to spawn my processing pipeline is:
pipeline_out = subprocess.check_output(['python', 'src/', location_wkt], stderr=subprocess.STDOUT)

Note: pipeline_out is the STDOUT from the script, so in my case, to find out which file has just been processed, I have print("filepath: " + path_to_file) in the script under src/

So for your code I would try

for file_name in file_list:
    print (file_name)
    pipeline_out = subprocess.check_output(['python', '',file_name], stderr=subprocess.STDOUT)
    #Your old call CheckClouds(file_name).read_cloud_band()

Here the script wraps the read_cloud_band() method of your CheckClouds class, and file_name is a parameter passed in to the script.
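To make the pattern concrete, here is a self-contained sketch of the subprocess workaround. The inline -c script is a trivial stand-in for a real pipeline script; in practice the child would import snappy, process one product, and exit, taking its JVM's memory with it:

```python
import subprocess
import sys

def process_in_child(file_name):
    """Run one processing job in a child interpreter; all memory
    (including the child JVM's, when snappy is used) is released
    when the child process exits."""
    # Stand-in for a real pipeline script such as the CheckClouds example.
    child_script = "import sys; print('processed: ' + sys.argv[1])"
    out = subprocess.check_output(
        [sys.executable, '-c', child_script, file_name],
        stderr=subprocess.STDOUT,
    )
    return out.decode().strip()

# Hypothetical product names, processed one child process at a time.
results = [process_in_child(f) for f in ['product_A.zip', 'product_B.zip']]
```

The parent only ever holds the child's STDOUT, so its own memory footprint stays flat no matter how many products are processed.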


The Python subprocess documentation is a good reference on this.

Thanks for the advice @Ciaran_Evans, I'll come back to this page with feedback after I've tried the proposed solutions :slight_smile:

No problem, hopefully it helps!

A post was split to a new topic: Failed to create Java VM