Hi,
I have no experience with AWS, but we run our own cluster, so I can give some general advice for running gpt.
I think it is best to run one processing chain on one instance. If you split it, you need to transfer intermediate results from one instance to the other, and this will slow down the processing.
In gpt.vmoptions you should set the Xmx value for the heap space.
If you have 16 GB of RAM you can set
-Xmx13G
Also important is the cache size gpt uses. You can set it with the -c parameter of gpt, or you can change the default value in etc/snap.properties; the property is called snap.jai.tileCacheSize. I think a good value is 70%-80% of the heap space.
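For example, with -Xmx13G a cache of about 70% of the heap is roughly 9 GB. As far as I remember the property value is given in megabytes, but please treat that as an assumption and check the comments in your own snap.properties:

```
# etc/snap.properties -- tile cache size (value assumed to be in MB)
snap.jai.tileCacheSize=9216
```

Or equivalently per run on the command line, e.g. `gpt mygraph.xml -c 9216M` (mygraph.xml is just a placeholder for your own graph file).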
The snap.jai.defaultTileSize property also influences performance. It lives in the snap.properties file as well, but it can also be specified on the command line or in gpt.vmoptions as a system property.
This property defines the size of the tiles which are computed on each core, and therefore it influences the memory usage, and the performance if you run out of memory.
A huge tile size can result in a single tile covering the whole product; then only one core is used and the computation is not performed in parallel.
A too small size can result in too many threads, and this causes thread management overhead.
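To illustrate, the tile size (the tile edge length in pixels, as far as I know) can be set in any of the three places mentioned above:

```
# etc/snap.properties -- default tile size in pixels (512 is just an example value)
snap.jai.defaultTileSize=512
```

or as a system property on the command line, e.g. `gpt mygraph.xml -Dsnap.jai.defaultTileSize=512` (mygraph.xml is a placeholder); the same `-Dsnap.jai.defaultTileSize=512` line could also go into gpt.vmoptions.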
You just need to play with these settings to find the best ones for your use case.