Snappy not freeing memory

Further to this, it seems to be an issue with caching after multiple calls.

The process seems to hang but does eventually write out the .tiff, yet the error still occurs.

It seems caching only works a few times; after that I effectively have to process the whole image again without it…

Bumping this with some more investigating.

I bumped my VM up to 8 GB of RAM and also raised the java_max_mem property in snappy.ini to 8G.
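
For reference, this is roughly what the relevant part of snappy.ini looks like after the change (the snap_home path is illustrative; only java_max_mem was edited):

[DEFAULT]
snap_home = /home/ciaran/snap
java_max_mem: 8G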

I’m processing some S1 imagery and subsetting it to POLYGON ((-4.136685132980347 50.36611236042063, -4.12860631942749 50.36611236042063, -4.12860631942749 50.37036895441764, -4.136685132980347 50.37036895441764, -4.136685132980347 50.36611236042063)), so it’s very small in terms of an S1 image.
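
For context, the subset is produced with a GPF call along these lines (a minimal sketch of standard snappy usage rather than my exact pipeline; the input file name is illustrative):

import snappy  # importing snappy starts the JVM
from snappy import GPF, HashMap, ProductIO

# the WKT polygon quoted above
wkt = 'POLYGON ((-4.136685132980347 50.36611236042063, -4.12860631942749 50.36611236042063, -4.12860631942749 50.37036895441764, -4.136685132980347 50.37036895441764, -4.136685132980347 50.36611236042063))'

product = ProductIO.readProduct('S1A_IW_GRDH.zip')  # illustrative file name
params = HashMap()
params.put('geoRegion', wkt)       # clip to the polygon
params.put('copyMetadata', True)
subset = GPF.createProduct('Subset', params, product)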

Running the process once, Python hogs 16% of the VM’s memory; making several calls, it idles at progressively more memory each time:

16%, 28%, 42%, 52%, 81%, 81%, 95%, 95%, 90%, crashed.

I’m assuming this is on the Java side, because memory just isn’t being freed. I’m also removing all files in /tmp every time I run the process. Is there any way to stop this continuous build-up of memory until the eventual crash?

Many thanks,

Ciaran

Further to this, I’ve noticed posts mentioning forcing garbage collection via something such as jpy.get_type('java.lang.System').gc(), and this doesn’t seem to do anything. It actually seems to increase the memory usage…
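
For completeness, the pattern those posts suggest (and which I tried) is just:

import snappy  # importing snappy starts the JVM
import jpy

System = jpy.get_type('java.lang.System')
System.gc()  # ask the JVM to run garbage collection; in my case memory use did not drop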

Does anyone have any suggestions regarding this? It renders my pipeline useless after a few calls.

@marpet sorry to pester! Do you know of any fixes/workarounds to this?

Snappy is known to have memory issues, especially with clearing memory after running a script. What might be worth a try is running your Python script from a command window instead of Spyder (I particularly experienced problems with Spyder).

The script is being run through a Flask application (http://flask.pocoo.org), so it’s not an issue with Spyder.

I’m hosting this as an API to call, so having it effectively bomb out after a few calls isn’t handy at all. So far my only option is to write a cron job that kills the process and restarts it, but that basically makes the API useless, as it could cut out while it’s running a job.

Frustrating that trying to force garbage collection doesn’t seem to help.

No, I have no clue, sorry. I also don’t see meaningful improvements coming very soon.
I still think that the major cause is the S1 data, or rather the S1 reader. But I’m not sure.

That’s unfortunate to hear; I know it’s affecting a few users.

@seb is also experiencing issues with this. It makes creating apps hard/unmanageable as they eventually die from running out of memory.

Wonder if there’s anyone else who has suggestions for this?

Hi Ciaran,

Garbage collection did not help me either. Maybe something related to this? I had a similar memory issue because I was invoking jpy.get_type() inside a loop. It is better to move it out of the loop; that solved the issue for me.
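
Schematically, the change was this (illustrative file list and parameters):

import snappy  # starts the JVM so jpy can resolve Java types
import jpy

files = ['a.dim', 'b.dim']  # illustrative

# Before: resolving the Java type on every iteration kept memory growing
# for f in files:
#     HashMap = jpy.get_type('java.util.HashMap')
#     params = HashMap()
#     ...

# After: resolve the type once, outside the loop
HashMap = jpy.get_type('java.util.HashMap')
for f in files:
    params = HashMap()  # reuse the cached type; memory stays flat
    # ... process f with params ...
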
Cheers
jose

Thanks for the suggestion!

I managed to solve my issues with a workaround: launching each processing job as its own independent process, so all memory is released when the process exits.

I’ll give yours a go sometime to see if it saves me doing my workaround!
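
For reference, my workaround is essentially this (a sketch; snap_job.py stands in for whatever script does the actual snappy processing):

import subprocess
import sys

def run_snap_job(in_path, out_path):
    # Run the snappy processing in a separate interpreter; when the child
    # process exits, the JVM and any memory it failed to free go with it.
    subprocess.run([sys.executable, 'snap_job.py', in_path, out_path], check=True)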

At least it closes the file and enables Windows preview.

Hi,

Thank you very much Ciaran and Marco.

I have the same problem. The only effective solution that worked in my case was the one from @Ciaran_Evans (launching an independent process).

@marpet is there any open ticket or similar about the issue?

Same issue here, especially when using the S2sampler

No, there is no dedicated ticket for the memory issue in snappy, but there are some related ones:
https://senbox.atlassian.net/browse/SNAP-869
https://senbox.atlassian.net/browse/SITBX-562
https://senbox.atlassian.net/browse/SITBX-482

I’ve created one now (SNAP-960). For me it is not yet clear where the memory issue comes from. Maybe it is within SNAP, maybe in jpy or in snappy. This needs to be investigated.
However, I can say that further development of SNAP and the toolboxes will start again soon, and then we will tackle such memory and performance issues. Finally… 🙂
But it might still take a few months until the fixes are released.

@lumaro - glad it fixed your issue (for now)

Thanks @marpet for raising the issue! Hopefully the workaround can be useful to people in the meantime; I know our remote sensing folk here at work have hit the issue too.

@harmel I’d recommend giving the workaround a go. It’s by no means pretty, but it allows for bulk processing. Give me a shout if you need a hand!

Thanks a lot @Ciaran_Evans.
I noticed that image data are loaded in chunks of 512 pixel rows, so for me the workaround is to run the process for each 512-row chunk independently, as sketched below.
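
Schematically (a sketch; read_chunk.py is a hypothetical script that processes one 512-row chunk and exits):

import subprocess
import sys

CHUNK = 512      # snappy loads image data in chunks of 512 pixel rows
height = 16000   # illustrative scene height

for y in range(0, height, CHUNK):
    rows = min(CHUNK, height - y)
    # one independent process per chunk, so memory is freed between chunks
    subprocess.run([sys.executable, 'read_chunk.py', str(y), str(rows)], check=True)
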
Cheers,
Tristan

Any news about the memory leaks and/or possibilities of freeing memory from snappy with SNAP v7?

Hi Marco, just providing some diagnostic info that I hope can help you solve the issue. In my experience, the problem occurs with GPF.createProduct(). When using ProductIO.readProduct() and ProductIO.writeProduct(), the memory issues do not appear, provided you call the .dispose() method on every product you read. After product = GPF.createProduct(), the product.dispose() method is ineffective: the memory is not released. So perhaps the problem has to do with calling dispose() on products created using GPF. Hope it helps!
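
In other words, this pattern does release memory for me (file names illustrative):

from snappy import ProductIO

product = ProductIO.readProduct('input.dim')
ProductIO.writeProduct(product, 'output.tif', 'GeoTIFF')
product.dispose()  # effective here: memory is released

# ...whereas after product = GPF.createProduct(...), the same
# product.dispose() call does not release the memory.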

Dear @MarkWilliamMatthews, @marpet
Do you confirm that you can release memory with product.dispose() after product = ProductIO.readProduct(file) and product.getBand('bandname').readPixels(...)?

From my side, it is still ineffective.

For our activities, it would be miraculous to have a working dispose() function like this:

import numpy as np
from snappy import ProductIO

product = ProductIO.readProduct(file)
bands = product.getBandNames()
w, h = product.getSceneRasterWidth(), product.getSceneRasterHeight()
arr = np.empty((len(bands), h, w), np.float32)  # one (h, w) raster per band
for i, band in enumerate(bands):
    # load the raster for this band
    product.getBand(band).readPixels(0, 0, w, h, arr[i, ...])
    # release the memory held by this band
    product.getBand(band).dispose()

Best,
Tristan
