I’m not sure, as I’m not very familiar with snappy, but I think you have to set a value in the jpyconfig.py script in the snappy folder.
Try changing jvm_maxmem = None
to jvm_maxmem = '6G'
Another place to change the memory limit is snappy.ini in the snappy folder.
Change the line # java_max_mem: 4G
to e.g. java_max_mem: 6G (removing the leading # so the setting takes effect)
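For reference, the jpyconfig.py change would look roughly like this (the 6G figure is just an example; pick a value that fits your machine):

```python
# jpyconfig.py (in the snappy folder) -- raise the JVM heap limit.
# None lets jpy pick a default; a string like '6G' requests a 6 GB
# maximum heap for the Java VM that snappy starts.
jvm_maxmem = '6G'
```

The snappy.ini change is the equivalent setting for the other startup path: uncomment the line so it reads `java_max_mem: 6G`.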
Hello @marpet, I hope you are doing well.
I have a challenge in Sentinel-1 preprocessing with snappy (Python).
I used the “Apply-Orbit-File” tool both manually in SNAP Desktop and through snappy, and both runs complete fine.
But when I compare the two outputs (manual and Python) in the QGIS software, the pixel values do not match at all. I just need the output that matches the snappy Python output.
Can you please let me know where I should focus?
The source code is given below for your reference:
You are saying the values do not match at all.
Have you also stored the data as GeoTIFF-BigTIFF when using the Desktop, and have you reloaded it?
It could be that the scaling factor of the bands, if they are scaled, makes the difference.
In GeoTIFF only raw values are stored. Maybe you can also work with NetCDF4-CF or store the data in BEAM-DIMAP?
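To illustrate the scaling point: a band in SNAP can carry a scaling factor and offset, and GeoTIFF keeps only the raw stored numbers. A minimal sketch of the relationship (the 0.01 factor and 10.0 offset are made-up example values, not real Sentinel-1 metadata):

```python
def to_geophysical(raw, scaling_factor=1.0, scaling_offset=0.0):
    """Convert a raw stored pixel value to the geophysical value,
    the way SNAP applies band scaling: geo = raw * factor + offset."""
    return raw * scaling_factor + scaling_offset

# The same raw number gives a different value once scaling is applied,
# which is one way two exports of "the same" band can disagree in QGIS.
raw = 1234
unscaled = to_geophysical(raw)            # raw value, no scaling
scaled = to_geophysical(raw, 0.01, 10.0)  # with example factor/offset
```

If I remember the snappy API correctly, you can inspect a band's `getScalingFactor()` and `getScalingOffset()` to see whether scaling is in play, but please verify that against your SNAP version.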
Please suggest a solution for the problem I have mentioned.
I made a mistake in my first post when I said “I just need the output that matches the snappy Python output,”
but actually I need the output to be the same as the SNAP Desktop output.
I wanted to know how I can write an image in chunks with the help of snappy.ProductIO.writeProduct,
because the full image occupies all of the RAM, so I want to process it in chunks with multiprocessing.
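I don't think writeProduct itself supports partial writes, but one workaround sketch is to split the scene into tile windows and process each window as its own subset product. The tiling helper below is plain Python; the snappy calls are commented out because they need a SNAP install, and the `Subset` operator's `region` parameter format is an assumption to check against the SNAP docs:

```python
def tile_windows(width, height, tile_w, tile_h):
    """Split a width x height raster into (x, y, w, h) windows so
    each chunk can be read, processed, and written independently."""
    windows = []
    for y in range(0, height, tile_h):
        for x in range(0, width, tile_w):
            windows.append((x, y,
                            min(tile_w, width - x),
                            min(tile_h, height - y)))
    return windows

# Hypothetical snappy usage (untested sketch, assumes a SNAP install):
# from snappy import GPF, HashMap, ProductIO
# product = ProductIO.readProduct('input.dim')
# w = product.getSceneRasterWidth()
# h = product.getSceneRasterHeight()
# for i, (x, y, tw, th) in enumerate(tile_windows(w, h, 2048, 2048)):
#     params = HashMap()
#     params.put('region', '%d,%d,%d,%d' % (x, y, tw, th))
#     tile = GPF.createProduct('Subset', params, product)
#     ProductIO.writeProduct(tile, 'tile_%d' % i, 'GeoTIFF')
```

Each tile could then be handed to a separate worker process, since the tiles are independent.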
This looks like a new topic, so you should start a new thread with a title chosen so others with relevant experience will respond.
Snappy may not be the most memory-efficient way to process large images. Java can do some types of image processing in smaller “tiles”. Do you need to manipulate the image using Python code? If not, you may be able to modify your existing Python code to generate an XML graph and then use Python subprocess.run() to run gpt. The snapista conda package uses this approach; it is available for Linux but needs a VM on Windows, so it may not be practical on a system with limited RAM.