Hey there!
I implemented the gpt routine in a Python script in order to run it on dozens of scenes.
Now I have realized that it takes ages for gpt to actually compute the results.
If I run gpt (“gpt myxml.xml”) from the Linux command line, everything works just fine…
You should show us the Python script. If you are moving data between Python and Java (e.g., using snappy), that could be a bottleneck. If you are simply looping through a list of files and running gpt on each file, a big slowdown on Linux may mean the system is tight on memory. There are tools, starting with top, to monitor the resources used by a process, which may show you where the bottleneck occurs. You may need to tweak the settings in gpt.vmoptions. You may want to consult a distro-specific forum for help finding appropriate tools.
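For reference, a .vmoptions file holds one JVM option per line; a common tweak is raising the maximum Java heap. A sketch of what such an entry might look like (the 8G value is only an illustration, size it to your machine):

```
# allow the gpt JVM up to 8 GB of heap
-Xmx8G
```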
Yes, indeed, I am using a Python script that calls Popen to start gpt for each file.
For some reason os.system was much slower than subprocess, so I replaced it, which increased the processing speed considerably.
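A minimal sketch of that pattern, using subprocess.run (which blocks until each gpt process finishes, like Popen plus wait()); the gpt command name and passing the scene as a positional argument after the graph XML are assumptions that depend on your setup:

```python
import subprocess

def run_gpt(graph_xml, scenes, gpt_cmd="gpt"):
    """Run the given gpt graph once per scene, one process at a time.

    `gpt_cmd` is assumed to be on PATH; how the scene file is handed to
    the graph depends on how the XML is written. Returns one return
    code per scene.
    """
    codes = []
    for scene in scenes:
        # subprocess.run blocks until gpt exits, so only one gpt/JVM
        # instance is alive at a time, which keeps memory pressure low.
        result = subprocess.run([gpt_cmd, graph_xml, scene])
        codes.append(result.returncode)
    return codes
```

Running the scenes strictly one after another like this avoids spawning dozens of JVMs at once, which is one common way such a loop ends up thrashing memory.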
Where can I find the gpt.vmoptions file?