Need help verifying coherence snappy code

Good day,

I need to ask if someone can help me assess whether the code below is correct and whether the workflow is logical for producing a coherence file from SLC images. I am using snappy.

I already tried running the code and although it generates a file, the file only shows a black tiff. I am not sure whether the problem lies in the workflow or in the code itself.

Here is the code I used:

import os
import snappy
from snappy import GPF
from snappy import ProductIO
from snappy import HashMap
from snappy import jpy
from glob import iglob
from os.path import join
from termcolor import colored

# HashMap is used to give us access to all Java operators
HashMap = jpy.get_type('java.util.HashMap')
parameters = HashMap()


product_path = 'C:/ProgramData/Anaconda3/envs/RADAR/S1SLC/'
input_S1_files = sorted(list(iglob(join(product_path, '**', '*.zip'), recursive=True)))

file_1 = snappy.ProductIO.readProduct(input_S1_files[1])
file_2 = snappy.ProductIO.readProduct(input_S1_files[0])

def topsar_split(product):
    parameters = HashMap()  # fresh parameter map so operators do not share keys
    parameters.put('subswath', 'IW3')
    parameters.put('selectedPolarisations', 'VV')
    parameters.put('firstBurstIndex', 9)
    parameters.put('lastBurstIndex', 9)
    return GPF.createProduct('TOPSAR-Split', parameters, product)


def apply_orbit_file(product):
    parameters = HashMap()  # fresh parameter map so operators do not share keys
    parameters.put('orbitType', 'Sentinel Precise (Auto Download)')
    parameters.put('polyDegree', 3)
    parameters.put('continueOnFail', True)
    return GPF.createProduct('Apply-Orbit-File', parameters, product)

# The split and orbit-file operators defined above are never called;
# Back-Geocoding below needs their outputs.
orbit_file_1 = apply_orbit_file(topsar_split(file_1))
orbit_file_2 = apply_orbit_file(topsar_split(file_2))

parameters = snappy.HashMap()
parameters.put('demName', 'SRTM 1Sec HGT (Auto Download)')
parameters.put('demResamplingMethod', 'BICUBIC_INTERPOLATION')
parameters.put('resamplingType', 'BISINC_5_POINT_INTERPOLATION')
parameters.put('maskOutAreaWithoutElevation', True)
parameters.put('outputDerampDemodPhase', False)
geocoding_file_1_2 = snappy.GPF.createProduct('Back-Geocoding', parameters, [orbit_file_1, orbit_file_2])


parameters = snappy.HashMap()
parameters.put('singleMaster', True)
parameters.put('subtractTopographicPhase', False)
coh = snappy.GPF.createProduct('Coherence', parameters, geocoding_file_1_2)


parameters = snappy.HashMap()
parameters.put('selectedPolarisations', 'VV')
topsar_deb = snappy.GPF.createProduct('TOPSAR-Deburst', parameters, coh)


parameters = snappy.HashMap()
parameters.put('grSquarePixel', True)
parameters.put('nRgLooks', '1')
parameters.put('nAzLooks', '1')
parameters.put('outputIntensity', False)
multilook_data = snappy.GPF.createProduct('Multilook', parameters, topsar_deb)

# Terrain correction

proj = '''PROJCS["UTM Zone 29 / World Geodetic System 1984",
  GEOGCS["World Geodetic System 1984",
    DATUM["World Geodetic System 1984",
      SPHEROID["WGS 84", 6378137.0, 298.257223563, AUTHORITY["EPSG","7030"]]],
    PRIMEM["Greenwich", 0.0, AUTHORITY["EPSG","8901"]],
    UNIT["degree", 0.017453292519943295],
    AXIS["Geodetic longitude", EAST],
    AXIS["Geodetic latitude", NORTH]],
  PROJECTION["Transverse_Mercator"],
  PARAMETER["central_meridian", -9.0],
  PARAMETER["latitude_of_origin", 0.0],
  PARAMETER["scale_factor", 0.9996],
  PARAMETER["false_easting", 500000.0],
  PARAMETER["false_northing", 0.0],
  UNIT["m", 1.0],
  AXIS["Easting", EAST],
  AXIS["Northing", NORTH]]'''

parameters = snappy.HashMap()
parameters.put('demName', 'SRTM 3Sec')
parameters.put('imgResamplingMethod', 'BILINEAR_INTERPOLATION')
parameters.put('pixelSpacingInMeter', 10.0)
parameters.put('mapProjection', proj)
parameters.put('nodataValueAtSea', False)  # do not mask areas without elevation
parameters.put('saveSelectedSourceBand', True)
terrain_correction = snappy.GPF.createProduct('Terrain-Correction', parameters, multilook_data)

outpath_name = 'C:/Users/V00cbgcs/Desktop/ficheiros_Sentinel/Coherence/coh_flow'
snappy.ProductIO.writeProduct(terrain_correction, outpath_name, 'GeoTIFF')
print(colored('Product successfully saved in:', 'green'), outpath_name)
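As a side note on the code above: the indexing of input_S1_files relies purely on the glob's sort order. Since Sentinel-1 product names embed the sensing date, sorting by that date explicitly is safer. A stdlib-only sketch (the filenames below are hypothetical examples, not my actual products):

```python
from datetime import datetime
import re

def acquisition_date(filename):
    """Extract the sensing start date embedded in a Sentinel-1 product name."""
    match = re.search(r"(\d{8})T\d{6}", filename)
    if match is None:
        raise ValueError(f"no acquisition date found in {filename}")
    return datetime.strptime(match.group(1), "%Y%m%d")

# Hypothetical filenames following the Sentinel-1 naming convention
files = [
    "S1A_IW_SLC__1SDV_20220216T183917_example.zip",
    "S1A_IW_SLC__1SDV_20220123T183918_example.zip",
]
files_by_date = sorted(files, key=acquisition_date)
# files_by_date[0] is now the earlier (reference) acquisition
```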


When building a complex workflow, it is best to start by coding each step independently so you can be confident that each step works properly. Other things to check:

  • unexpected look-alike non-ASCII characters (often inserted by overly helpful editors). Some editors can highlight non-ASCII characters.

  • typos in the proj variable could easily result in the generated data falling outside the area of your image
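On the proj point, one cheap sanity check is to recompute the UTM zone and central meridian from the scene longitude and compare them with what the WKT declares. A stdlib sketch (the longitude here is a hypothetical scene centre):

```python
def utm_zone(longitude):
    """UTM zone number (1-60) for a longitude in degrees east."""
    return int((longitude + 180) // 6) + 1

def central_meridian(zone):
    """Central meridian in degrees east for a given UTM zone."""
    return zone * 6 - 183

lon = -8.3                 # hypothetical scene-centre longitude (western Iberia)
zone = utm_zone(lon)       # 29 for this longitude
# must agree with PARAMETER["central_meridian", -9.0] in the WKT
assert central_meridian(zone) == -9
```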


First of all, thanks for your reply.

I also ran the code with each step separately and found that the Enhanced Spectral Diversity step (performed after geocoding and before coherence) had issues. I couldn't fix it, so I removed it to see whether I could move on with the remaining steps. In that case I got an error message: RuntimeError: org.esa.snap.core.gpf.OperatorException: Unparseable date: "16Feb2022"

It seems there is a problem in the geocoding step. When I perform the next step, coherence (with the Enhanced Spectral Diversity step removed), I get a black tiff with no error message. Could it be the geocoding output? Or the parameters set in each step?

Could it also be a RAM problem (16 GB in my case), since this workflow is very demanding in terms of memory?

As far as I could see, non-ASCII characters are not the problem; I checked for them.

I compared the proj variable with other examples and it seemed OK.

Any suggestions on this?


You may get some insights into the geocoding issue by examining the geolocation metadata of the "black" tiff. GIS packages often have logic to deal with inconsistent geolocation metadata. Have you tried loading the GeoTIFF in a GIS? (QGIS is available to all, so a good choice.) You can also use gdalinfo in a terminal window to display the metadata.

This is a common error when working with dates in Java; you can find many discussions of it on the internet. Others have reported "Unparseable date" errors with SNAP, but the reports lack the detail needed to reproduce the issue. Are you using SNAP 9? What OS? What input data?
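As an illustration of why "16Feb2022" can become unparseable: abbreviated month names are locale-dependent. SNAP parses this in Java (SimpleDateFormat), so this Python sketch only mirrors the mechanism, but the principle is the same — the pattern matches only when "Feb" is a valid month abbreviation in the active locale:

```python
from datetime import datetime

# "%d%b%Y" == day, abbreviated month name, year; this succeeds in an
# English-style locale but the equivalent Java parse can fail in others.
parsed = datetime.strptime("16Feb2022", "%d%b%Y")
print(parsed.date())
```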

RAM problems generally reveal themselves in very slow performance or out-of-memory errors.


I have been trying to overcome the problem in snappy following your tips. Again, it looks like the problem is in geocoding. The process stops abruptly, saving a file (a "black" tiff) with no error message. I reproduced the whole procedure in SNAP and it worked fine.
Here is just a small part of the snappy error message:


at org.esa.snap.core.gpf.Operator.getSourceTile(

at org.esa.s1tbx.insar.gpf.CoherenceOp.computePartialTile(

… 45 more

Caused by: java.lang.NullPointerException

at org.esa.s1tbx.sentinel1.gpf.BackGeocodingOp.computeExtendedAmount(

at org.esa.s1tbx.sentinel1.gpf.BackGeocodingOp.computeTileStack(

… 56 more

Product successfully saved in: C:/Users/V00cbgcs/Desktop/ficheiros_Sentinel/Coherence/coh_flow

I just noticed there's already version 9; I have been using SNAP 8. Also, I am using Windows 10 and Anaconda (Python 3.6), and as input data I used two SLC files from 23 January and 16 February:



I also tried files from 23 January and 28 June:



I carefully selected pairs of SLC files over my AOI that overlap, so that would not cause any error.
I had already loaded the tiff files in QGIS to try to see something other than a "black" tiff. I tried changing the color palette, but nothing. I made many attempts changing the parameters; some files seemed to have strange values and others no values at all. The same happened in SNAP. I am not sure what I can do differently.

I am not sure what to do about the unparseable date problem. It seems to be a problem with the date format, but I can't trace it. The only thing I can think of is the date in the file name.

I would really appreciate any other tips or ideas on how to solve this.


You mention a bunch of issues, but for each one you need to provide enough detail to allow others to reproduce the problem. The black tiff issue might be a good place to start. You may be writing data to locations outside the specified region, or reading data from outside the region of interest. One trick that is useful in such cases is to create a file with global extent that has one band filled with latitude and the other filled with longitude, using a lonlat projection. Then you can experiment with gpt Subset and Reproject using your UTM Zone 29 proj settings.

With your "black" tiff you should look at the metadata in the GeoTIFF to verify that the geolocation is what you expect. QGIS can do that, but QGIS uses GDAL, so for a forum post, running gdalinfo (%USERPROFILE%\.snap\auxdata\gdal\gdal-3-N-N\gdalinfo.exe) in a terminal will give text output.


Thanks for your suggestions.

I have been using just one burst (IW3, VV, 9, 9) for both files, to make sure they overlap perfectly and cover my area of interest.

In QGIS the "black" tiff is located where it should be.

When I run gdalinfo I get something very different; something is definitely wrong.

Size is 8678, 3503

ERROR 1: PROJ: pj_obj_create: Cannot find proj.db

ERROR 1: PROJ: createGeodeticReferenceFrame: Cannot find proj.db

ERROR 1: PROJ: proj_as_wkt: Cannot find proj.db

ERROR 1: PROJ: createGeodeticReferenceFrame: Cannot find proj.db

ERROR 1: PROJ: pj_obj_create: Cannot find proj.db

ERROR 1: PROJ: proj_as_wkt: Cannot find proj.db

ERROR 1: PROJ: proj_create_from_wkt: Cannot find proj.db

ERROR 1: PROJ: proj_create_from_wkt: Cannot find proj.db

ERROR 1: PROJ: pj_obj_create: Cannot find proj.db

ERROR 1: PROJ: proj_as_wkt: Cannot find proj.db

ERROR 1: PROJ: proj_as_wkt: Cannot find proj.db

ERROR 1: PROJ: proj_create_from_wkt: Cannot find proj.db

ERROR 1: PROJ: proj_create_from_database: Cannot find proj.db

Origin = (514542.646872786455788,4239959.020412807352841)

Pixel Size = (10.000000000000000,-10.000000000000000)

Image Structure Metadata:


Corner Coordinates:

Upper Left ( 514542.647, 4239959.020)

Lower Left ( 514542.647, 4204929.020)

Upper Right ( 601322.647, 4239959.020)

Lower Right ( 601322.647, 4204929.020)

Center ( 557932.647, 4222444.020)

Band 1 Block=8678x3503 Type=Float32, ColorInterp=Gray

Computed Min/Max=0.000,0.000

I am not sure how to implement your suggestion of creating bands filled with latitude and longitude and then applying those; I still have to dig into it.

proj.db is GDAL’s projection database. If you are using the GDAL included with SNAP 9, it should be %USERPROFILE%\.snap\auxdata\gdal\gdal-3-2-1\projlib\proj.db.

If you are using some other GDAL installation you may need to set GDAL_DATA to the location of proj.db for that version.

I used gdalinfo again. I couldn't resolve the proj.db issue with my old setup, but it worked using SNAP 9. I had updated the SNAP 8 I was using to 9, but SNAP 8 was apparently still running; not sure what happened. This time I got this info from the "black" tif:

Size is 8678, 3503
Coordinate System is:
PROJCRS["WGS 84 / UTM zone 29N",
DATUM["World Geodetic System 1984",
ELLIPSOID["WGS 84",6378137,298.257223563,
CONVERSION["Transverse Mercator",
METHOD["Transverse Mercator",
PARAMETER["Latitude of natural origin",0,
PARAMETER["Longitude of natural origin",-9,
PARAMETER["Scale factor at natural origin",0.9996,
PARAMETER["False easting",500000,
PARAMETER["False northing",0,
Data axis to CRS axis mapping: 1,2
Origin = (514542.646872786455788,4239959.020412807352841)
Pixel Size = (10.000000000000000,-10.000000000000000)
Image Structure Metadata:
Corner Coordinates:
Upper Left ( 514542.647, 4239959.020) ( 8d50' 1.19"W, 38d18'27.40"N)
Lower Left ( 514542.647, 4204929.020) ( 8d50' 3.77"W, 37d59'30.83"N)
Upper Right ( 601322.647, 4239959.020) ( 7d50'28.31"W, 38d18' 7.22"N)
Lower Right ( 601322.647, 4204929.020) ( 7d50'46.25"W, 37d59'10.88"N)
Center ( 557932.647, 4222444.020) ( 8d20'19.78"W, 38d 8'52.85"N)
Band 1 Block=8678x3503 Type=Float32, ColorInterp=Gray

The point of using gdalinfo was to check the corner coordinates to make sure they match your AOI.
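To automate that check, the corner block from gdalinfo can be parsed with a few lines of stdlib Python. A sketch, using the coordinates quoted above and a hypothetical AOI bounding box (easting/northing in the raster's CRS):

```python
import re

# Sample corner block, modelled on the gdalinfo output quoted earlier
GDALINFO_SNIPPET = """\
Upper Left  (  514542.647, 4239959.020)
Lower Left  (  514542.647, 4204929.020)
Upper Right (  601322.647, 4239959.020)
Lower Right (  601322.647, 4204929.020)
"""

def corner_coords(text):
    """Return {corner_name: (x, y)} parsed from gdalinfo corner lines."""
    pattern = re.compile(r"(Upper|Lower) (Left|Right)\s*\(\s*([-\d.]+),\s*([-\d.]+)\)")
    return {f"{m[0]} {m[1]}": (float(m[2]), float(m[3]))
            for m in pattern.findall(text)}

def inside(coords, xmin, ymin, xmax, ymax):
    """True if every corner falls inside the AOI bounding box."""
    return all(xmin <= x <= xmax and ymin <= y <= ymax
               for x, y in coords.values())

corners = corner_coords(GDALINFO_SNIPPET)
print(inside(corners, 500000, 4.2e6, 610000, 4.25e6))
```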

Not sure what this means. Since your SNAP 8 install was incomplete, you should probably start again with a clean install of SNAP 9.


Just a matter of updating SNAP to the latest version; no problem, it's done.

What I still can't figure out is why I am getting a "black" tiff. I opened it in QGIS and it is where it should be. I used just one burst of each image so they match perfectly. It doesn't seem to be a location issue.

Is there a workaround for geocoding, another function I could substitute? I changed all the geocoding parameters and nothing. Could it be some problem with the code itself? I am running out of ideas of what to try next.

I tried with gpt and it worked (without Enhanced Spectral Diversity, which keeps producing an error; something to look at later). But I need to be able to write a script to automate all these procedures so that I can proceed with further coherence analysis.


Any particular reason you want to use snappy? Would it not be enough to run SNAP gpt graphs from Python for example by using the snapista wrapper?


No particular reason other than being able to use Python to go on with coherence analysis and other analyses, mainly RF. The main goal is crop monitoring, and I thought snappy would be the best option for the preprocessing. What is the best way to automatically preprocess radar images for an entire country, i.e. huge volumes of files? What is the common practice to accomplish this?

I've already heard about snapista, but I couldn't really configure it and never used it. It seems very close to snappy, but if snapista is a good option, I will try to install it and give it a try.


Have you seen Bulk Processing with GPT?

I have done volume processing using gpt (mainly with loops in Linux shell scripts). Snappy can be useful when you want to replicate some processing you can do in the SNAP GUI, but there are issues with memory management if you try looping over a list of files (Python tells Java what to do, but doesn't have any control over Java memory objects). Snapista constructs a command line for gpt and can be used for volume processing, with gpt starting fresh on each iteration, but the value-added part is the ability to construct graphs using Python. If you already have the XML graph, you can run gpt on a list of files using a loop in Python that constructs the gpt command line for each iteration (e.g., using different filenames).
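The loop just described can be sketched as follows. The graph name, file names, and -P parameter keys are hypothetical; they must match the placeholders in your own graph XML:

```python
import subprocess
from pathlib import Path

def build_gpt_command(graph_xml, reference, secondary, output):
    """Assemble a gpt command line for one scene pair."""
    return [
        "gpt", str(graph_xml),
        f"-Preference={reference}",   # -P keys must match your graph's placeholders
        f"-Psecondary={secondary}",
        f"-Poutput={output}",
    ]

pairs = [("S1_20220123.zip", "S1_20220216.zip")]  # hypothetical filenames
for ref, sec in pairs:
    out = Path("coherence") / f"coh_{Path(ref).stem}_{Path(sec).stem}.tif"
    cmd = build_gpt_command("coherence_graph.xml", ref, sec, out)
    # subprocess.run(cmd, check=True)  # uncomment to actually invoke gpt
    print(" ".join(cmd))
```

Because each iteration launches a fresh gpt process, the Java memory problems seen when looping inside snappy do not accumulate.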

Snappy processing is single-core only and the performance penalty is often dramatic. You are much better off scripting gpt processing in Python, for example using the snapista wrapper.

No, I was not aware of bulk processing with gpt. Thanks.

I am not really into Linux shell scripts; I will have to see if I can make it work. Maybe this is the fastest solution, as long as I can create a loop? I already ran the XML individually on Windows and it worked fine.

Have been trying to install snapista in my environment in anaconda:

conda install -c terradue snapista
PackagesNotFoundError: The following packages are not available from current channels:

I already tried other channels. According to the Anaconda website this should be the right one.

Does this have anything to do with Spyder or conda versions?

There's also this web page, but it is not working.

Thanks once again for all your support.

I think the conda snapista package is for Linux. The snapista Installation documentation has a section for Windows and macOS using Docker or Visual Studio remote containers.

Thanks for all suggestions.

I have been trying to run and modify my XML file in Python.

Is it just me, or is installing snapista not that easy? Or maybe my approach hasn't been the best. I am not sure how to use Docker; I have never used a Docker image to install a module or anything else.

Is there a tutorial for this? I have been searching the internet but couldn't find anything useful for this purpose.

See Docker’s Get-Started page.