A way to inject data into a graph with GPT/Python?


My goal is to create a simple workflow to process S2 data in GPT.

I have created a graph that loads multiple datasets (ProductSet-Reader), makes a mosaic of them (Multi-size Mosaic), and then calculates certain values (BandMaths). It works well when run from GPT on the dataset it was created for.

But when I want to process a different dataset with the same graph, I have to change the input filenames and mosaic bounds manually in the XML file. The filenames are easy enough to manage, but to check the mosaic bounds I have to open SNAP each time, which defeats the point of keeping the workflow simple. Is there any way to “inject” the mosaic bounds automatically from the loaded datasets (at the GPT/Python level), as happens in the Graph Builder in SNAP?

You can define variables in your graph which you can set from the command line.
This is explained here:
Bulk Processing with GPT - SNAP - Confluence (atlassian.net)
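A sketch of how such graph variables might look (the parameter names `westBound` etc. and the node layout are assumptions, not taken from the original graph; check `gpt Mosaic -h` for the exact names your SNAP version uses):

```xml
<node id="mosaic">
  <operator>Mosaic</operator>
  <!-- sources omitted -->
  <parameters>
    <westBound>${west}</westBound>
    <eastBound>${east}</eastBound>
    <southBound>${south}</southBound>
    <northBound>${north}</northBound>
  </parameters>
</node>
```

The variables are then filled on the command line, e.g. `gpt my_graph.xml -Pwest=10.0 -Peast=11.5 -Psouth=45.0 -Pnorth=46.5`.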

In Python you can open the products you want to process, calculate the bounding box, and then use it as a parameter.

For doing this you can have a look at this example:
How to use the SNAP API from Python - SNAP - Confluence (atlassian.net)
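As a minimal, library-free sketch of that idea (the corner coordinates below are invented placeholders; in practice you would read them from each product's geocoding or metadata):

```python
# Corner coordinates (lat, lon) per product -- placeholder values standing in
# for what you would read from each product's geocoding or metadata.
products_corners = [
    [(46.0, 10.0), (46.0, 11.0), (45.0, 10.0), (45.0, 11.0)],
    [(46.5, 10.5), (46.5, 11.5), (45.5, 10.5), (45.5, 11.5)],
]

# Union bounding box over all products.
lats = [lat for corners in products_corners for lat, _ in corners]
lons = [lon for corners in products_corners for _, lon in corners]
bounds = {
    "south": min(lats), "north": max(lats),
    "west": min(lons), "east": max(lons),
}

# Format the values as gpt -P arguments (the names must match the
# variables defined in your graph XML).
gpt_args = " ".join(f"-P{name}={value}" for name, value in bounds.items())
print(gpt_args)  # -Psouth=45.0 -Pnorth=46.5 -Pwest=10.0 -Peast=11.5
```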

There is also snapista, which lets you generate a graph and execute it as a command-line process. But you would still need snappy to retrieve the geo-information, or read it from the metadata.


Thanks! I will try those.

I’ve managed to recreate my workflow in snappy, and it works great, but I am still having trouble with the boundaries. Could you point me toward a way to calculate the bounding box for multiple S2 images, please? I didn’t manage to find it under the link you provided.

I have no Python code at hand, but in general it should be doable like this for your use case:

import snappy
from snappy import ProductIO

# The original sketch mentioned Rectangle/GeoUtils; here the assumption is that
# ProductUtils.createGeoBoundary (SNAP Java API) yields a product's outline coordinates.
ProductUtils = snappy.jpy.get_type('org.esa.snap.core.util.ProductUtils')

min_lat = min_lon = float('inf')
max_lat = max_lon = float('-inf')
for path in product_paths:  # product_paths: your list of input products
    product = ProductIO.readProduct(path)
    for geo_pos in ProductUtils.createGeoBoundary(product, 50):  # sample outline every 50 px
        min_lat, max_lat = min(min_lat, geo_pos.getLat()), max(max_lat, geo_pos.getLat())
        min_lon, max_lon = min(min_lon, geo_pos.getLon()), max(max_lon, geo_pos.getLon())


I created a ticket in our issue tracker: SIITBX-495

I have already managed to figure out a workaround with geopandas, but thank you nonetheless. It will be useful to know another method.
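The geopandas code itself isn't shown above; as one sketch of that family of workaround, here is the union of footprints done with shapely (which geopandas builds on). The footprint rectangles are invented placeholders; in practice they would come from each product's metadata:

```python
from shapely.geometry import box
from shapely.ops import unary_union

# Invented footprint rectangles box(west, south, east, north), standing in for
# the real per-product footprints read from the S2 metadata.
footprints = [
    box(10.0, 45.0, 11.0, 46.0),
    box(10.5, 45.5, 11.5, 46.5),
]

# Union of all footprints; .bounds gives (west, south, east, north).
west, south, east, north = unary_union(footprints).bounds
print(west, south, east, north)  # 10.0 45.0 11.5 46.5
```

With geopandas the same result comes from putting the footprints in a GeoDataFrame and reading its total_bounds.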