No, these settings won't help to improve the performance of Python code.
Please note that Python, by default, is not the fastest language, but there are ways to improve the speed.
For example, you can use numpy, especially when working with arrays.
Also numexpr may be worth a look.
Just to give you an indication of what is possible: recently we improved the runtime of one algorithm from 3:40 h down to 5 minutes, mainly by using numpy.
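To illustrate the kind of change that makes such a difference (the arrays and the expression here are made up, just to show the pattern):

import numpy as np
import numexpr as ne

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

# pure-Python loop: a million interpreter-level iterations (slow)
slow = [x * y + 2.0 for x, y in zip(a, b)]

# numpy: the same arithmetic as a single vectorized expression (much faster)
fast = a * b + 2.0

# numexpr: compiles the expression and evaluates it in multi-threaded chunks
faster = ne.evaluate("a * b + 2.0")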
I have been looking into the SNAP configuration options and have tried setting snap.parallelism to the number of CPUs. This seems to be set (by default) in the "snap.properties" file from the GUI. I have tried copying the settings over to the "snappy.properties", but it doesn't seem to have any effect.
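For reference, this is roughly what the copied line looks like (8 stands in for my CPU count, and the file locations may differ per installation):

# in ~/.snap/etc/snap.properties (written by the GUI), copied
# verbatim into the snappy.properties next to the snappy package
snap.parallelism = 8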
This seems like a significant problem for process automation. Is the Python interface to SNAP really restricted to a single core, as you suggest? If so, are there any plans to work around this, or is there another approach I could be using?
Would it be possible to use the dask package to solve this issue?
I have no experience with this kind of thing and don't know how to apply it to snappy functions, since they are basically a black box to me.
Any advanced Python user around who can help?
(PS: very good introduction to dask on YouTube by Jim Crist of Continuum Analytics)
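To make the question concrete, here is a minimal sketch of what I have in mind, assuming each product can be processed independently (process_one and the file names are hypothetical placeholders for the actual snappy chain):

import dask
from dask import delayed

@delayed
def process_one(path):
    # hypothetical placeholder: open the product with snappy, run the
    # operator chain, and write the result, all inside this function so
    # that every worker process starts its own JVM
    return path

paths = ["scene1.zip", "scene2.zip", "scene3.zip"]
results = dask.compute(*[process_one(p) for p in paths],
                       scheduler="processes")

I picked the processes scheduler because, as far as I understand, threads would all share the single JVM that snappy starts. Is this a sensible direction?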
I am processing Sentinel-1 images…
In the end I want to save the result as GeoTIFF-BigTIFF (about 14 GB per image).
When I do it in SNAP it takes just 5-10 min to export, but when I use Python and snappy and call
ProductIO.writeProduct(x, y, 'GeoTIFF-BigTIFF')
it takes up to 2 hours to create it.
Is there a way to solve this and speed it up?
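One workaround I have seen suggested is to let SNAP's gpt command-line tool do the final write instead of ProductIO.writeProduct, since gpt uses the same multi-threaded engine as the GUI. Would something like this be a reasonable approach (input.dim and output.tif are placeholders, and I am assuming gpt is on the PATH)?

import subprocess

# run the GPF "Write" operator from Python; -q sets gpt's parallelism
subprocess.run(
    ["gpt", "Write",
     "-PformatName=GeoTIFF-BigTIFF",
     "-Pfile=output.tif",
     "-q", "8",
     "input.dim"],
    check=True,
)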