GPT Sticky Needed in the SNAP Forum

Hi Everyone,

I hope the following query will be useful both for those who have already figured out how to use gpt effectively and for new users.

I would like to batch process more than 500 S1 scenes using gpt. The processing involves three basic steps: radiometric correction, terrain correction and speckle filtering. Could you please share your steps for running graphs on several files? I am particularly unsure about the following points, so I would be grateful for some examples and pointers. I have yet to find a post or tutorial that gives definitive instructions on these.

1. How do I get the best performance out of gpt?
2. How do I specify inputs, i.e. the various S1 file names and operator parameters, in the graph XML? Do I need to unzip the .zip files downloaded from the portal before using them in the gpt XML file?
3. How do I specify intermediate and final output file names, or do I even need to specify them?
4. What is the best way to batch gpt, e.g. using a plain DOS batch file or Python?
5. How do I minimise the disk and RAM overhead from intermediate outputs?

Thank you very much in advance,

1. How do I get the best performance out of gpt?

For best performance it is good to have a lot of RAM available. The amount gpt can use can be adjusted in the gpt.vmoptions file.
It seems there are still some performance issues in the gpt code or in certain operators, but we, the developers, will look into them and try to provide the best possible performance. One known issue, for example, is that some operations run faster within the SNAP desktop application than with gpt.
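As a rough sketch, the gpt.vmoptions file in the SNAP bin directory takes one JVM option per line. The value below is purely illustrative; choose a heap size that fits your machine (a common rule of thumb is around 70-80% of physical RAM):

```
# Lines starting with # are comments; one JVM option per line.
# Maximum Java heap size available to gpt (illustrative value):
-Xmx16G
```

After editing the file, the new heap size applies to subsequent gpt runs.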

2. How do I specify inputs, i.e. the various S1 file names and operator parameters, in the graph XML? Do I need to unzip the .zip files downloaded from the portal before using them in the gpt XML file?

Using the zipped S1 files should work when you specify them as source. I must confess that I don't fully understand your question regarding setting the inputs and parameters. As an example, I attach a graph XML file which chains two operators (FlhMci and BandMaths). FlhMci gets its source from the command line, and the BandMaths operator references FlhMci as its input.
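In the same spirit, a minimal sketch of a graph for the S1 steps you mention could look like the following. The operator names are from the S1 Toolbox, but all parameters are omitted here (defaults assumed); please verify each operator and its parameters with `gpt -h <OperatorName>` before relying on this. The `${sourceFile}` variable is filled in from the command line via `-PsourceFile=...`:

```xml
<graph id="S1Preprocessing">
  <version>1.0</version>
  <node id="read">
    <operator>Read</operator>
    <parameters>
      <file>${sourceFile}</file>
    </parameters>
  </node>
  <node id="calibration">
    <operator>Calibration</operator>
    <sources>
      <sourceProduct refid="read"/>
    </sources>
  </node>
  <node id="speckle">
    <operator>Speckle-Filter</operator>
    <sources>
      <sourceProduct refid="calibration"/>
    </sources>
  </node>
  <node id="terrain">
    <operator>Terrain-Correction</operator>
    <sources>
      <sourceProduct refid="speckle"/>
    </sources>
  </node>
</graph>
```

Each node references the previous one as its source, so the three steps run as one chained graph per scene.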

3. How do I specify intermediate and final output file names, or do I even need to specify them?

To write intermediate results to disk you need to add a Write operator to the graph. For the final result, a Write operator is added to the graph automatically by gpt, so all you need to do is specify the target on the command line.
By the way, have you read the help in SNAP? If not, open SNAP and press Ctrl+I; this activates the search box in the upper right corner. Type "SNAP GPF" and press Enter to get the help pages for the GPF.
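For example, an invocation could look roughly like this (the graph and scene file names are hypothetical; `-t` sets the target file and `-f` the output format):

```shell
gpt myGraph.xml -t S1A_scene1_processed.dim -f BEAM-DIMAP S1A_scene1.zip
```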

4. What is the best way to batch gpt, e.g. using a plain DOS batch file or Python?

For batch processing there is an example available in our SNAP wiki.
For Python, there are meanwhile several examples available in this forum; reading this thread (Example script for multiple operations?) will probably help you.
You can also do batch processing from the Desktop application: see Tools --> Batch Processing in the menu.
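To give a flavour of the Python route, here is a minimal sketch of a batch driver. It assumes gpt is on your PATH and that your graph file reads its input from a `${sourceFile}` variable; the graph and directory names are hypothetical:

```python
import subprocess
from pathlib import Path

# Hypothetical graph file that takes its input via -PsourceFile=...
GRAPH = "myGraph.xml"

def build_command(scene: Path, out_dir: Path) -> list[str]:
    """Build the gpt command line for one zipped S1 scene."""
    # Derive the target name from the scene name, e.g.
    # S1A_abc.zip -> S1A_abc_processed.dim
    target = out_dir / (scene.stem + "_processed.dim")
    return ["gpt", GRAPH,
            "-PsourceFile=" + str(scene),
            "-t", str(target)]

def run_batch(in_dir: Path, out_dir: Path) -> None:
    """Run the graph once per zipped S1 scene found in in_dir."""
    out_dir.mkdir(parents=True, exist_ok=True)
    for scene in sorted(in_dir.glob("S1*.zip")):
        # check=True stops the batch if gpt fails on a scene.
        subprocess.run(build_command(scene, out_dir), check=True)

# Usage (hypothetical directories):
# run_batch(Path("scenes"), Path("processed"))
```

Running the scenes sequentially like this keeps only one gpt process in memory at a time, which is usually what you want given gpt's RAM appetite.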

5. How do I minimise the disk and RAM overhead from intermediate outputs?

The simplest way to minimise disk usage is to not write intermediate results at all. Processing can be done entirely in memory with gpt, but of course this increases RAM usage. If you do write intermediate results, you can use a format which supports compression; 'NetCDF4-CF' is well suited for this.
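As a sketch, a Write node for a compressed intermediate result could look like this inside the graph (the node ids and file name are hypothetical; check the Write operator's parameters with `gpt -h Write`):

```xml
<node id="writeIntermediate">
  <operator>Write</operator>
  <sources>
    <sourceProduct refid="speckle"/>
  </sources>
  <parameters>
    <file>intermediate.nc</file>
    <formatName>NetCDF4-CF</formatName>
  </parameters>
</node>
```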
