Our IT team and I are trying to get snappy working on a high-performance cluster (Linux). The plan is to use a shared SNAP install to supply the Python bindings to each user who wants them. However, we have run into a number of issues, detailed below. I assume ESA or another institution has solved this already? If so, it would be great to hear what the solution is. I copy below the feedback from our scientific computing lead for context on the issues.
(1) Snappy-conf fails when trying to request a large amount of memory on our system
… --jvm_max_mem 176G
(2) Any attempt to bypass this issue, either by editing snappyutil.py or by setting java_max_mem in snappy.ini, fails because both snappyutil.py and snappy.ini are re-downloaded on every run of snappy-conf, so any changes made are lost.
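For reference, this is the kind of edit we were attempting. The java_max_mem key is the one mentioned above; the real file location (e.g. under $HOME/.snap/snap-python/snappy/) is an assumption for a typical per-user install, so the sketch below operates on a stand-in file:

```shell
# Real file would be something like $HOME/.snap/snap-python/snappy/snappy.ini
# (path is an assumption); demonstrated on a stand-in file so it is safe to run
SNAPPY_INI="$(mktemp)"
cat > "$SNAPPY_INI" <<'EOF'
[java]
java_max_mem: 8G
EOF

# Raise the JVM heap limit in place; this edit only persists until
# snappy-conf re-downloads the file, which is exactly the problem
sed -i 's/^java_max_mem:.*/java_max_mem: 176G/' "$SNAPPY_INI"
grep '^java_max_mem' "$SNAPPY_INI"
```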
(3) Finally, if you manually extract the install line and change the --jvm_max_mem setting by hand, you can get snappy to install. However, it then gives the following error when importing the snappy module:
SEVERE: org.esa.s2tbx.dataio.gdal.activator.GDALDistributionInstaller: The environment variable LD_LIBRARY_PATH does not contain the current folder '.'. Its value is '/cm/local/apps/torque/6.1.3/lib'.
Most importantly: adding '.' to LD_LIBRARY_PATH is bad practice. Firstly, LD_LIBRARY_PATH is consulted by every binary program in your environment, every single time one is invoked. Secondly, '.' means the current directory, which changes as you move around the filesystem. All it takes is a malicious user placing a doctored copy of, e.g., libm.so in some directory you happen to be in, and you may find yourself loading the doctored libm.so rather than the system one. Having '.' in LD_LIBRARY_PATH is generally considered bad practice anywhere.
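If the GDAL installer genuinely needs its library directory on LD_LIBRARY_PATH, a less dangerous pattern is to prepend the absolute path of that directory, and only for the one process that needs it, rather than exporting '.' session-wide. A minimal sketch (the GDAL path is an assumption; adjust for your install):

```shell
# Assumption: absolute path to the GDAL native libraries bundled with the toolbox
GDAL_LIB_DIR="/opt/snap/auxdata/gdal/lib"
# Existing value, taken from the error message above
LD_LIBRARY_PATH="/cm/local/apps/torque/6.1.3/lib"

# Prepend the absolute directory instead of '.', preserving any existing value
NEW_PATH="$GDAL_LIB_DIR${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$NEW_PATH"
```

Usage would then be per-invocation rather than a session-wide export, e.g. `LD_LIBRARY_PATH="$NEW_PATH" python -c 'import snappy'`, so other binaries in the session are unaffected.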
I have seen some other users get around such issues by loading individual toolboxes into Docker containers, but Docker is not usable on our system.
Any help, or direct contacts to email on this subject, would be much appreciated!