GPT options to reduce disk I/O

Hi,

I’m trying to run my (Sentinel 1) processing chain on a big cluster which is shared with many users.
It seems to me that disk I/O is a bottleneck, causing it to run orders of magnitude slower than on my local machine with only 4 cores.
On our cluster a single node has 28 cores and about 62GB of available memory.

GPT operators are called from a bash script to execute xml processing graphs.

The options I’m using so far are, in gpt.vmoptions:
-Xverify:none -Xmx58g -Xms2048m -XX:+AggressiveOpts -Djava.io.tmpdir=/gpfs/scratch/gpt_temp
and in the gpt call:
$GPT -c 2G -q 28 -x p0.xml ...

Are there any further measures I could take to reduce disk I/O?
Or should I ramp up the cache size and reduce the number of cores?

Yes, what you suggest in your last sentence is probably the best option.
You have 58GB of memory available, but you are only using 2GB for the cache.
You can drop the -q option, because by default the parallelism is equal to the number of cores.
So, try with -c 50G.
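
For example, assuming your graph is still in p0.xml, the call would just become something like:
$GPT -c 50G -x p0.xml ...
i.e. bump the cache to ~50GB and drop -q, since the default parallelism already matches the core count.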

Ah, ok. I had intuitively (and wrongly) always multiplied the cache size by the number of cores.
I’ve set it to 50G and now it is at least in the same ballpark as my local machine.

Thanks!