Reproject raster image: How to choose geolocation band?

Hi there,

I want to reproject Suomi-NPP/VIIRS/DNB raster images with SNAP gpt. The data includes two sets of geolocation bands (one without and one with terrain correction). Because the results fit poorly, I suspect that SNAP uses the bands without terrain correction. That’s why I am trying to find out how to select the other geolocation bands as the reference for the reprojection, but I couldn’t find an option to do so (I tried it with the desktop version too).
Maybe someone here knows whether it is possible to change the geolocation bands that are used, and how to do it?

Thank you very much in advance!

Maybe the ‘Attach Pixel Geo-Coding…’ tool can help. You can find it in the menu under Tools.
There you can select the bands with the terrain-corrected data.

Thank you, this tool seems to do what I need. But I couldn’t find an option to use it with gpt; did I miss something?
I need to process several time series of data, so I need to integrate the change of the geocoding reference into my gpt workflow. Changing the reference manually for every image wouldn’t be feasible.

Indeed, this operation is missing when using gpt from the command line.
You could do it with a Python script. Would that be an option for you?

Yes, this would be an option. But since I have never worked with Python before, I am wondering if there is some kind of example of something similar that I could use as a starting point?
(I also wasn’t familiar with bash before, but with the help of your tutorial “Bulk processing with GPT” I was able to adapt it to my problem really quickly, so thanks for that.)


When you install snappy (the snap-python bridge), some examples are installed along with it.
Some general links are given in this post:

For your specific problem there is no example, but in general you would need to do the following.

# snappy must be available on your Python path (see the SNAP Python configuration)
from snappy import ProductIO, jpy

# Get a handle on the Java PixelGeoCoding class
PixelGC = jpy.get_type('org.esa.snap.core.datamodel.PixelGeoCoding')

product = ProductIO.readProduct(input_file_path)

# The terrain-corrected geolocation bands (names depend on your product)
lat_tc = product.getBand('lat_corrected')
lon_tc = product.getBand('lon_corrected')

valid_mask = None    # optional valid-pixel mask expression
search_radius = 3    # search radius in pixels for the inverse (geo -> pixel) lookup

pixel_gc = PixelGC(lat_tc, lon_tc, valid_mask, search_radius)
product.setSceneGeoCoding(pixel_gc)

ProductIO.writeProduct(product, output_file_path, 'NetCDF4-CF')

That’s roughly the code. It reads the product, creates and sets the new GeoCoding, and writes the result as NetCDF.


Thank you very much for your help, it seems to work!

I would just like to verify that it really did exactly what I wanted by checking which bands a product’s GeoCoding uses, but I can’t find a way to display them. I searched for an option to show the GeoCoding details in SNAP Desktop, but I didn’t find one. The metadata I can see in SNAP Desktop contains no information about it. I also tried to print the result of product.getSceneGeoCoding() before and after setting the new one, but I can’t interpret the following result (I guess that was maybe not the right way to print it):
I am especially wondering why the original PixelGeoCoding has a 2 appended to its name, and whether there is a way to translate the “code” of numbers and letters into some readable GeoCoding info?

In SNAP Desktop you can see information about the GeoCoding when you select Analysis / Geo-Coding from the menu. This will show the following dialog with the information about the current Geo-Coding.

PixelGeoCoding and PixelGeoCoding2 are just two different implementations. Unfortunately, both have their issues: both are slow, and in some edge cases they don’t give good results.
The plan is to replace both with a third implementation which will do a better job. This should happen by the end of the year.

There is no single method which writes out all the GeoCoding information.
You can have a look here:

That’s how the information is written in SNAP Desktop. Maybe you can grab some snippets for your Python script, even though it is Java.


Thank you, the Analysis / Geo-Coding tool is exactly what I was searching for.

I have another problem concerning this topic:
I tried to integrate the Python/snappy code, which until now I had only tested on one image by calling it directly from the terminal, into my bulk-processing bash script. So I first replace the GeoCoding and then use gpt for some other operations, everything inside a loop over my image files in one bash script.
The problem is that the Python code is now very slow: it needs approx. 1 hour to process the same image that took ~10 minutes when I ran the code from the terminal. When I check the Python process, it only uses 0 to 1 % of the CPU but is reported as running. The step that takes so much time is the writeProduct part (I used the BEAM-DIMAP format). (I only write the product because I need it as input for the following gpt processing; saving the images is maybe not necessary, but I don’t know another way to pass the result of the snappy part to gpt in the bash script.)

I already checked the advice, for example in , but the configuration is as it should be.
Is it possible that the processing speed is influenced by the way I start the Python code?

The slowness you see is probably a mixture of several things: the GeoCoding, the writing, and probably others (e.g. bugs in SNAP).
In general it might be better if you stay within Python and don’t use the bash script to start gpt.
In the forum you can find some examples of how to invoke operators from Python.
This way you can skip the writing step.

This post from @johngan gives a pretty good example:

In general you invoke an operator by calling

target_product = GPF.createProduct('operatorName', parameter_map, input_product)

The target_product can then be used as input for the next operator.