Snappy error - Memory is cached

Hi Everyone,

I am processing Sentinel-1 SLC images in Python. Unfortunately, I have to use snappy to calibrate the images before I can use them, and I run my script over multiple SLC images in sequence.

I am not exactly sure how it works, but it seems to me there is some sort of issue with the allocation and clearing of memory in snappy. When I look at the memory usage in Windows, I notice that with every image processed, the Standby (cached) memory increases. This happens specifically in the part of the script where I am using the snappy module to calibrate the SLC images.

This memory is not cleared or released, even after the Python script has started processing the next SLC image. I have tried several tricks in Python (garbage collection, multiprocessing, etc.), but none of them clears this cached memory.
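For example, the multiprocessing attempt looked roughly like this (a simplified sketch; calibrate_one is a placeholder wrapping the snappy code shown below, and slc_locations is a placeholder list of image paths):

import multiprocessing

def calibrate_one(loc):
    # placeholder: the snappy calibration code shown further down, moved
    # into a function so each image runs in a child process and all JVM
    # memory is returned to the OS when that process exits
    ...

if __name__ == '__main__':
    for loc in slc_locations:  # placeholder list of SLC paths
        p = multiprocessing.Process(target=calibrate_one, args=(loc,))
        p.start()
        p.join()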

Could anyone familiar with snappy help me with this issue? So far it has been driving me nuts.

This is the part of the code responsible for reading and calibrating the SLC images:

import datetime

import numpy as np
import jpy
from snappy import ProductIO, GPF, HashMap

print('Reading...')
time_1 = datetime.datetime.utcnow()
# ~~~~~input data~~~~~
# meta, burst, swath and loc are defined earlier in the script
tl_x = 0
width = meta['samplesPerBurst']
tl_y = 0
height = meta['linesPerBurst'] * len(burst['azimuthTime'])
# ~~~~~Snappy Calibration~~~~~
GPF.getDefaultInstance().getOperatorSpiRegistry().loadOperatorSpis()
satdata = ProductIO.readProduct(loc)
#    Calibration
parameters = HashMap()
parameters.put('outputSigmaBand', True)
parameters.put('selectedPolarisations', 'VV')
parameters.put('outputImageInComplex', True)
product = GPF.createProduct("Calibration", parameters, satdata)
print(datetime.datetime.utcnow())
# read bands
bi_VV = product.getBand('i_' + swath + '_VV')
bq_VV = product.getBand('q_' + swath + '_VV')
# pre-allocate numpy arrays; readPixels fills them and returns them
i_VV = np.zeros((height, width), dtype=np.float32)
q_VV = np.zeros((height, width), dtype=np.float32)
i_array = bi_VV.readPixels(tl_x, tl_y, width, height, i_VV)
q_array = bq_VV.readPixels(tl_x, tl_y, width, height, q_VV)
slc = i_array + 1j * q_array
# dispose of the products and ask the JVM to collect garbage
satdata.dispose()
product.dispose()
System = jpy.get_type('java.lang.System')
System.gc()

Here is a screen capture of the memory usage while running the script.

For those of you interested: I solved the problem by not using snappy at all. I now load the SLC images with GDAL in Python and calibrate them directly using the sigmaNought values from the calibration LUT, with bilinear interpolation for pixels that fall between the LUT grid points.
I can share the code if more people are interested.
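For reference, the relation applied is the one from the ESA radiometric calibration note linked in the code below: sigma0(i) = |DN(i)|^2 / A(i)^2, where DN(i) is the complex pixel value and A(i) is the sigmaNought LUT value bilinearly interpolated to pixel i. Since I want to keep the complex image, I scale the amplitude by the square root of that expression and re-attach the phase.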


If you do share the code, please verify that your calibration results agree with what is implemented in SNAP.

Below you'll find a plot of the azimuth Fourier domain of a 1024x1024 subset of an SLC image calibrated in Python and in SNAP, as well as the image statistics.

{'mean': 0.017828778599654531, 'variance': 1.3269059890074035, 'skewness': 657.55982652486739, 'curtosis': 4023.8421537990412}
{'mean': 0.017828667302725656, 'variance': 1.326906561458109, 'skewness': 657.562957020299, 'curtosis': 4023.8558936153108}

And below is the code I implemented to calibrate the measurement file (as input you have to define a subswath and the location of the unzipped .SAFE folder).

"""
Created on Wed Sep 13 16:16:51 2017

@author: kwant
"""

import glob
from xml.etree import ElementTree
import numpy as np
from scipy import interpolate
from osgeo import gdal
import os


#~~~~Input~~~~~
swath = 'IW1'
loc = r'd:\data\unzipped\portugal\S1A_IW_SLC__1SDV_20141125T183457_20141125T183524_003442_00405B_3F04.SAFE'


#~~~~ Execute~~~~

# map the subswath to the annotation/measurement file suffixes
if swath == 'IW1':
    str2 = '-004.xml'
    datastr = '-004.tiff'
elif swath == 'IW2':
    str2 = '-005.xml'
    datastr = '-005.tiff'
elif swath == 'IW3':
    str2 = '-006.xml'
    datastr = '-006.tiff'

dataloc = glob.glob(loc + r'\measurement\*' + datastr)[0]

name = glob.glob(loc + r'\annotation\*' + str2)[0]
# open the image annotation and read the image dimensions
tree = ElementTree.parse(name)

no_pixels = int(tree.find(".//imageAnnotation/imageInformation/numberOfSamples").text)
no_lines = int(tree.find(".//imageAnnotation/imageInformation/numberOfLines").text)


name = glob.glob(loc + r'\annotation\calibration\calibration*' + str2)[0]
# open the calibration annotation and read the calibration vectors
tree = ElementTree.parse(name)

# line (azimuth) coordinate of each calibration vector
line = []
olp = tree.findall(".//calibrationVectorList/calibrationVector")
for ele in olp:
    line.append(int(ele.find('line').text))
y = np.asarray(line)

# pixel (range) coordinates of the LUT; these are assumed identical for
# every calibration vector, so keeping the last one read is fine
olp = tree.findall(".//calibrationVectorList/calibrationVector/pixel")
for t in olp:
    x = np.asarray([float(v) for v in t.text.split()])

# stack the sigmaNought vectors into a (vectors x pixels) LUT array
olp = tree.findall(".//calibrationVectorList/calibrationVector/sigmaNought")
rows = [[float(v) for v in t.text.split()] for t in olp]
t_arr = np.asarray(rows)


# bilinearly interpolate the sigmaNought LUT onto the full image grid
f = interpolate.interp2d(x, y, t_arr, kind='linear')
xnew = np.arange(0, no_pixels, 1)
ynew = np.arange(0, no_lines, 1)
znew = f(xnew, ynew)

    
# read the complex SLC measurement with GDAL
ds = gdal.Open(dataloc)
slc = np.array(ds.GetRasterBand(1).ReadAsArray())

ints = np.abs(slc)      # amplitude
phase = np.angle(slc)   # phase

s0 = np.sqrt(ints**2 / (znew**2)) #https://sentinel.esa.int/web/sentinel/radiometric-calibration-of-level-1-products
slc_new = s0 * np.exp(1j * phase)
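To compare with SNAP you can do something like this (a sketch; slc_snap is assumed to be the SNAP-calibrated band loaded as a numpy array, and taking the statistics over the amplitude is just one choice):

from scipy import stats

def summary(arr):
    # same fields as the statistics printed earlier in the thread
    d = stats.describe(np.asarray(arr).ravel())
    return {'mean': d.mean, 'variance': d.variance,
            'skewness': d.skewness, 'curtosis': d.kurtosis}

print(summary(np.abs(slc_new)))   # this implementation
print(summary(np.abs(slc_snap)))  # hypothetical SNAP result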

Many thanks for these contributions; I intend to do it the same way.

What does the difference image between the SNAP-calibrated image and your implementation look like?

There's an S-1 processor upgrade on the horizon, so the products and the treatment of noise will change. The important thing is not to spread incorrect calibration methods in the community.


Hi there!

I am facing the same problem and haven't been able to resolve it yet. I am new to programming and have tried several tricks, but none of them has worked so far. Not using snappy is not exactly the solution I am looking for, since I use several processing steps apart from calibration.

I have already changed the snappy.ini file, tried the garbage collector, and tried deleting and resetting all variables.
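For reference, the setting I changed in snappy.ini is the maximum Java heap (the install path and the value below are just my setup, not a recommendation):

[DEFAULT]
# where SNAP is installed (my machine)
snap_home = C:\Program Files\snap
# maximum heap the embedded JVM may use
java_max_mem: 8G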

The error I get is:
RuntimeError: java.lang.OutOfMemoryError: Java heap space

And this is the memory usage while running the script:

Thanks