S1TBX GPT tool randomly killing process?


I am running SNAP with GPT and I am getting an error I’ve not seen before, shown below:

org.esa.snap.core.gpf.operators.tooladapter.ToolAdapterIO: Initializing external tool adapters
INFO: org.esa.s2tbx.dataio.gdal.GDALVersion: Incompatible GDAL 3.5.0 found on system. Internal GDAL 3.0.0 from distribution will be used.
INFO: org.esa.s2tbx.dataio.gdal.GDALVersion: Internal GDAL 3.0.0 set to be used by SNAP.
INFO: org.esa.snap.core.util.EngineVersionCheckActivator: Please check regularly for new updates for the best SNAP experience.
INFO: org.esa.s2tbx.dataio.gdal.GDALVersion: Internal GDAL 3.0.0 set to be used by SNAP.
INFO: org.hsqldb.persist.Logger: dataFileCache open start
Executing operator…
20%INFO: org.esa.snap.core.dataop.dem.ElevationFile: http retrieving http://step.esa.int/auxdata/dem/SRTMGL1/N37E043.SRTMGL1.hgt.zip
INFO: org.esa.snap.engine_utilities.download.DownloadableContentImpl: http retrieving http://step.esa.int/auxdata/dem/egm96/ww15mgh_b.zip
INFO: org.esa.snap.core.dataop.dem.ElevationFile: http retrieving http://step.esa.int/auxdata/dem/SRTMGL1/N37E042.SRTMGL1.hgt.zip
INFO: org.esa.snap.core.dataop.dem.ElevationFile: http retrieving http://step.esa.int/auxdata/dem/SRTMGL1/N36E043.SRTMGL1.hgt.zip
INFO: org.esa.snap.core.dataop.dem.ElevationFile: http retrieving http://step.esa.int/auxdata/dem/SRTMGL1/N36E042.SRTMGL1.hgt.zip
....30%....40%....50%....60%....70%....80%Killed

It just kills the process without any indication as to why, and the job never completes.


You should mention the details of your OS. Recent Linux kernels have an Out-of-Memory (OOM) killer. Normally, Java applications should be configured to avoid critical memory shortages, but you may have other applications (e.g. a web browser) consuming a lot of memory.

I’m running it on an Ubuntu 20.04 VM from Windows. 200 GB of RAM is allocated to the VM, and the min and max heap settings for the GPT Java VM are 10 GB and 90 GB respectively.
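For reference, the heap limits I mean are the ones set in the gpt.vmoptions file in the SNAP installation’s bin directory (the exact path and these values are just illustrative of my setup):

```
# <snap-install>/bin/gpt.vmoptions (illustrative values)
-Xms10G
-Xmx90G
```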

You should use OS tools to watch memory usage. 200 GB is a lot of RAM – it is possible that you have found a bug in some memory management/allocation. The system logs may give a reason for the process being killed. Rackspace has some examples of using Linux tools to detect/understand OOM killer activity.
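On the Ubuntu side, the kernel log records OOM-killer activity, so something like the following would confirm whether the kernel killed gpt (assumes a systemd-based Ubuntu; both commands may need sudo):

```shell
# Search the kernel ring buffer for OOM-killer messages
dmesg -T | grep -iE "killed process|out of memory"

# Or search the kernel messages in the systemd journal
journalctl -k --no-pager | grep -iE "killed process|out of memory"
```

If the OOM killer was involved, you should see a line naming the java process and the memory totals at the time it was killed.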

Some hypervisors allocate RAM to the guest on demand up to the configured limit, so if the host machine has already committed a lot of RAM, the VM may not get the full configured amount.
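You can check inside the guest how much memory the VM kernel actually sees versus how much is available at the moment gpt runs:

```shell
# Total, used, and available memory as the guest kernel sees it
free -h

# The same figures straight from the kernel
grep -E "^(MemTotal|MemAvailable)" /proc/meminfo
```

If MemTotal is well below the 200 GB you configured, the hypervisor is not giving the guest the full allocation, and a 90 GB Java heap could easily push it into the OOM killer.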