Processing time and memory settings for SNAP GPT?

I am running two bulk processing jobs:

Both produce master-slave interferograms from Sentinel-1 TOPSAR data, each using a single subswath and 3 bursts.

One stack runs quite smoothly with -Xmx 5G set for gpt, taking around 6 min per interferogram (Back-Geocoding → ESD → Interferogram → Topo Phase Removal → Deburst → Subset). An illustrative version of the gpt call is below.
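For context, each interferogram comes from a single gpt call on a graph XML, roughly like this (the graph file name, product paths and -P parameter names are just placeholders for illustration, not my exact command):

```
gpt insar_graph.xml \
    -Pmaster=S1A_IW_SLC_master.zip \
    -Pslave=S1A_IW_SLC_slave.zip \
    -t output/ifg_pair.dim \
    -e
```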

The second stack crashes with that -Xmx 5G setting, so I increased it to 15G. With 15G it takes about 20 min per interferogram (same workflow).

If I then re-run the first stack with the 15G -Xmx configuration, it takes about 4x longer, reaching the 20 min per interferogram of the second stack.

Do you think this is normal?

Can you please help me with the proper gpt memory settings, so that it uses as little memory as possible while running as fast as possible? My current understanding of the relevant knobs is sketched below. @abraun @marpet @lveci @junlu @mengdahl please help!
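This is how I understand the settings involved; the values are examples only, and please correct me if I have the knobs or their defaults wrong:

```
# JVM heap for gpt: set in <SNAP install>/bin/gpt.vmoptions
-Xmx5G           # first stack (works)
-Xmx15G          # second stack (needed to avoid the crash)

# Per-run options on the gpt command line:
#   -c <cache-size>    tile cache size (I think the default is 1024M), e.g. -c 4096M
#   -q <parallelism>   max parallel tile computations (defaults to the number of CPU cores, I believe), e.g. -q 8
```

In particular, I am not sure whether a larger -Xmx by itself can slow processing down, or whether the tile cache (-c) and parallelism (-q) also need to be adjusted together with the heap size.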