S1tbx installation on docker container

If I read the product with the desktop version of SNAP I obtain this error:

The only thing I have done is: File / Open Product and select the file MTD_MSIL2A.xml

I’m using version 9.0 of SNAP.

From my Python code I’m working against SNAP v9.0 as well, but installed within my Docker container.

If I apply the second option (converting the path to an absolute path), I obtain the same error:

But I have checked the deployment of my container with SNAP and I have updated all the SNAP modules. Could it be an error in these files?

You can check if the S2 Toolbox is enabled by using the plugin manager (Tools / Plugins).
The installed-modules tab should show that the S2 Toolbox is installed and activated.

In your docker you can run:
snap --nosplash --nogui --modules --list --refresh
This will list all installed modules. It is explained on this wiki page: Update SNAP from the command line
Check if the S2 reader is installed and enabled.

Have you tried other S2 products?

@Marco_EOM ,

you were right. The image used by my container had only the S1tbx installed. Both Sentinel-2 and Sentinel-3 were missing. I have found an image that already has all the toolboxes.

Now I have another question. I am trying to retrieve/generate TIFF images from all the products I can, from both Sentinel-2 and Sentinel-3, without processing anything. Looking at what I download from Copernicus Data Space, many of these images are 2D or even 1D. Is it possible, when reprojecting the images and converting them to GeoTIFF, that they become Geo2D images?

I am also analyzing SMOS images, but I don’t know how to read, from Python code, the *.dbl or *.hdr files that I download from Copernicus Data Space. I would like to be able to generate a NetCDF or a TIFF from each of these files.

Thank you very much

Good that this is resolved.

I’m not fully sure what you mean by Geo2D images. I guess you mean that the images are georeferenced. This is possible, even without reprojecting. You can export each band (image) separately.
For Sentinel-3 this would increase the amount of data because it uses bands for providing the geo-location and it would need to be replicated for each exported image.
I once provided a sample graph which can split S2 data into single tif files.
Split product bandwise into single GeoTiff files - GPF Graphs - STEP Forum (esa.int)
Instead of using the simple GeoTIFF format I would suggest using GeoTIFF-BigTIFF, which uses lossless compression.
You can execute this in different ways: you can set up the graph in Python as you already did, you can spawn sub-processes and call the gpt tool, or you can run the GraphProcessor as discussed in this thread.
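For the sub-process route, a minimal sketch (the graph file name and the source/target paths are placeholders; `-Ssource` and `-t` follow the usual gpt conventions):

```python
import subprocess

def build_gpt_command(graph_xml, source, target, gpt='gpt'):
    # Assemble the argument list for a gpt call; -S<name>= binds a source
    # product referenced in the graph and -t sets the target file.
    return [gpt, graph_xml, '-Ssource=' + source, '-t', target]

def run_graph(graph_xml, source, target):
    # Spawn gpt as a sub-process; raises CalledProcessError if gpt fails.
    subprocess.run(build_gpt_command(graph_xml, source, target), check=True)
```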

Read also the subsequent posts in this thread.

Regarding SMOS: there is a special tool which converts to NetCDF.
SNAP Download – STEP (esa.int)

Hi @Marco_EOM ,
what I want is to do the conversion inside my Python container, as with Sentinel-2 and 3. Do you have any example of a dbl/hdr to NetCDF conversion? And how can I reproject these images?

In the manual, section 6 explains the tool.
But I think you don’t even need to use this tool, because you don’t need the format it creates. You can use the graph you already have with a different configuration.
Using the .dbl file as input and selecting the bands you need should work.
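A rough sketch of that configuration in snappy (the band names, paths, and the use of the Subset operator here are illustrative assumptions, not taken from this thread):

```python
def bands_parameter(band_names):
    # The Subset operator expects sourceBands as one comma-separated string.
    return ','.join(band_names)

def export_bands(dbl_path, band_names, out_path, fmt='NetCDF4-CF'):
    # snappy is imported locally so the helper above stays usable without SNAP
    from snappy import GPF, HashMap, ProductIO
    product = ProductIO.readProduct(dbl_path)  # the matching .hdr must sit beside the .dbl
    params = HashMap()
    params.put('sourceBands', bands_parameter(band_names))
    subset = GPF.createProduct('Subset', params, product)
    ProductIO.writeProduct(subset, out_path, fmt)
    product.dispose()
```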


I have tried what you said, using a Graph to read the dbl file and then retrieve the bands. This is my code; very simple. Inside the directory I filter the files to keep only the dbl and read it.

However, I get an error:

I checked that my container had all the necessary toolboxes and it does:
Where is the problem?

It seems that, even though you checked the modules, the reader is not available for some reason. Or the path to the data is not valid.

You can check in your Python code if the reader plugin is available:

ProductIOPlugInManagerType = snappy.jpy.get_type('org.esa.snap.core.dataio.ProductIOPlugInManager')
piopim = ProductIOPlugInManagerType.getInstance()
plugins = piopim.getReaderPlugIns('SMOS-EEF')
# next() on an empty Java iterator raises an exception, so check hasNext() first
if plugins.hasNext():
    reader = plugins.next()
else:
    print('SMOS reader not found')

If you see the ‘SMOS reader not found’ message the smos-box is probably not correctly installed.

@Marco_EOM I have executed your code and I obtain the following:

So, it’s working. The SMOS toolbox is correctly installed. The text ‘Done’ is a print statement that I have in my code.

But I have executed your lines:

Hi @Marco_EOM
problem solved. A mistake on my part. In the same directory where the dbl is generated there is always an hdr. Once the file was unzipped I had just moved the dbl to another directory and deleted the original. Therefore, it is necessary to always keep the hdr and dbl together. Now I do get a list of bands.

Is there any place where the parameters that can be obtained from each band are documented? The band names in this case are not very descriptive.


Great that it now works.
You can also call getDescription() on the bands, which should give a bit more information.
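For example, a small helper like this (a sketch; it works with any snappy Product, and getDescription() may return None for undocumented bands):

```python
def describe_bands(product):
    # Collect (name, description) pairs for every band of a product.
    return [(b.getName(), b.getDescription()) for b in product.getBands()]
```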
You can also look on the help pages: SNAP Online Help

A good list of documents can be found at the end of this page: SMOS L1 and L2 Science data - Earth Online (esa.int). Especially the product specification.

Hi @Marco_EOM ,

The whole process works perfectly. Until now, despite deploying it as a container, my tests were on Windows. I just deployed my container on a server with Linux and I get the following error:

This happens when I try to writeProduct and write the result to my output directory.

What can this be due to?

Not sure. The log doesn’t show an actual error. Actually, it says the loading of the library was successful.
What error do you observe? Is the target file not written at all, or is it broken?
Have you tried another output format?

I am currently working with Sentinel-3 files. What I was doing was reading the bands and then, from each band I am interested in, creating a NetCDF file with writeProduct. I was inside a for loop going through the bands that interest me and generating a NetCDF for each one. That is the moment when the error occurs. What do I notice? That the execution of my API stops.

Could an OutOfMemoryError be the reason? If you read the band data and keep it in memory you might hit the limit. Do you read the whole raster data of a band, or do you read it in chunks or lines?
For bands you don’t use anymore you can call dispose().
The data you keep in your Python arrays you need to handle, too.
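A sketch of reading in lines rather than whole rasters (assuming the standard snappy readPixels call; the band argument is a snappy Band):

```python
import numpy as np

def process_band_by_lines(band, handle_line):
    # Read one raster line at a time instead of the whole band,
    # so only a single row is held in memory at once.
    width = band.getRasterWidth()
    height = band.getRasterHeight()
    row = np.zeros(width, dtype=np.float32)
    for y in range(height):
        band.readPixels(0, y, width, 1, row)
        handle_line(y, row)
```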

The issue is that until now I have worked and done the whole process on my local computer, which is Windows, and now I am on a server that improves my performance in terms of RAM and disk. The only difference is that it is Linux. Anyway, I will try what you tell me.

You can also try to add exception handling to the suspicious code:
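For instance, a guard like this (a sketch; jpy usually surfaces Java exceptions on the Python side as RuntimeError, though a hard crash such as a segmentation fault cannot be caught this way):

```python
import traceback

def safe_write(product, out_path, fmt, write):
    # write is the writer callable, e.g. snappy.ProductIO.writeProduct;
    # returns True on success, False if a (wrapped Java) exception occurred.
    try:
        write(product, out_path, fmt)
        return True
    except RuntimeError as err:
        print('writeProduct failed:', err)
        traceback.print_exc()
        return False
```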

Maybe this helps to find the reason.


This is the code where the error happens:

I have a try-except structure using the type of exception you said, but I don’t capture any error. However, the process always ends at the same point, when it tries to execute writeProduct.

I have changed the output format from NetCDF4-CF to NetCDF4-BEAM but the result is the same.

I have read in the forum that other colleagues obtain the same error, but reading Sentinel-2, and in that case it was a memory problem. This is the link: NetCDF-CF writing: Segmentation fault and metadata issues with SNAP v9.0.4+ - #2 by sakvaka

I don’t know if it’s the same. Reading the thread, it is not clear which solution fixed that problem.

The most curious thing is that the execution works perfectly on a Windows machine but not on a Linux server. I have 16 GB of RAM; in my opinion that should be enough.

Have you tried BEAM-DIMAP as output format? Would be good to know if this works.

Can you try to invoke gpt from the command line?
Maybe there is an issue with the java-python bridge.

testGraph.xml (1.2 KB)
This is an example graph you can adapt. You need to change the region and the output path.
In the header of the file, you can find an example how to call it.

@Marco_EOM ,

I have changed the output format but the result is the same. I mean, I no longer obtain the previous message about the NetCDF loader, but the execution still ends.

Regarding executing gpt from the command line, it is not clear to me how to do it because my whole process has been developed programmatically. I don’t know how to proceed: I have finished the development, but I can’t deploy it because it doesn’t work on that machine.