Is pyroSAR a good substitute for a Python-based SNAP toolbox solution?


I recently came across pyroSAR and am starting to set it up. I am trying to create a fully Python-based preprocessing workflow.

  1. Does pyroSAR provide the same time efficiency as the SNAP toolbox?

  2. Comparing snappy and pyroSAR, is there a major difference between the two in terms of time efficiency when preprocessing a Sentinel-1 product?

  3. Does pyroSAR support batch processing?

I am using Sentinel-1 GRD products in IW and EW swath modes with HH polarization.



Glad you asked, I am the main author. At its core, pyroSAR is just a thin wrapper around the SNAP toolbox: it creates XML workflow files and executes them using SNAP’s GPT command line tool. In my experience this is a lot faster than using snappy, which interfaces directly with Java via the jpy package; this only works for a few Python versions and slows down processing.
Furthermore, pyroSAR can split workflows into smaller steps to execute them in sequence. This way more data is written to temporary files but memory usage and execution time are reduced.

Currently pyroSAR’s SNAP API is limited to backscatter processing because not all workflow nodes are accessible. This is going to change in the next version, which I will release in the coming days. It will also be the first version available via conda-forge.
The function snap.geocode was made for S1 GRD processing, so this might be a good starting point for you. There is also some documentation on how to batch process large numbers of images here.
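As an illustration, a minimal single-scene call might look like the sketch below. The scene path and output directory are hypothetical, and the parameters follow the loop example shown later in this thread; check the geocode docstring for the full set of options.

```python
def process_scene(scene, outdir):
    """Minimal sketch of a single-scene call to pyroSAR's snap.geocode.

    The scene path and output directory passed in are hypothetical;
    parameters follow the example from the pyroSAR documentation.
    Requires pyroSAR and an installed SNAP toolbox.
    """
    from pyroSAR.snap import geocode
    # geocode writes a scene-specific workflow XML and runs it via SNAP's GPT
    geocode(infile=scene, outdir=outdir, tr=20, scaling='db')
```

As far as I recall, geocode processes all available polarizations by default, so HH-only IW/EW scenes should work without extra arguments.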



Hi John,

Thanks for the prompt response. Yes, I am trying to create a fully Python-based workflow for my project, which currently has the Sentinel-1 preprocessing part in the SNAP toolbox. Moving it out and working with pyroSAR would definitely help to create a consistent solution.

However, I did not find the batch processing part in the pyroSAR documentation. Can you point me to where it is, or to the class name I need to look at?


Hi Sid,
to my understanding batch processing is basically just processing a list of scenes with a single workflow. So in SNAP’s graph builder one would create a workflow XML file and load it into the batch processing module together with the list of scenes to then process them in sequence.
pyroSAR takes a slightly different approach in that it creates an individual workflow file for each scene, processes the scene and then moves on to the next one. The idea is to treat these XML files as additional metadata attached to the processing result, so that in the future it is always clear how, i.e. with which nodes and parameters, a particular scene was processed.
So basically this is just a loop iterating over a list of scenes, which is this part in the docs:

from pyroSAR.snap import geocode

for scene in selection_proc:
    geocode(infile=scene, outdir=outdir, tr=resolution, scaling='db', shapefile=site)

The function geocode first writes a scene-specific XML file, i.e. with the same naming convention as the output GeoTiffs and with the input and output products hard-coded in the file, and then passes this file to the GPT command line tool. The output might look something like this:


The naming scheme is consistent for all sensors and products, containing some basic scene metadata and the processing steps.
In its basic configuration, only the workflow and one GeoTiff per polarization are saved. Ancillary products, like the local incident angle map, can be exported with the parameter export_extra.
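For illustration, requesting an ancillary layer might look like the sketch below. The layer identifier 'localIncidenceAngle' is an assumption on my part, so check the geocode docstring for the values export_extra actually accepts.

```python
def geocode_with_extras(scene, outdir):
    """Sketch: export ancillary products alongside the backscatter GeoTiffs.

    'localIncidenceAngle' is an assumed identifier; consult the
    pyroSAR.snap.geocode documentation for the supported export_extra values.
    Requires pyroSAR and an installed SNAP toolbox.
    """
    from pyroSAR.snap import geocode
    geocode(infile=scene, outdir=outdir, tr=20, scaling='db',
            export_extra=['localIncidenceAngle'])
```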



Hi John,

Thanks for the update; that clears up how batch processing works with pyroSAR. I am currently writing the code for the batch script and will update you once it’s done.



FYI…pyroSAR version 0.11 is now available via PyPI and Anaconda.


Hi @johntruckenbrodt,

It’s super handy you’re around on the forums, I found pyroSAR a while back and was keen to know more!

Could you possibly let me know if there’s a benefit to me switching from just calling gpt (via subprocesses) to using pyroSAR?

The processing I’m currently doing to the imagery is Read > Radiometric Calibration > Terrain Correction > Write (for Sentinel-1). The gpt workflow is quite trivial and I’m wondering if I’d gain much benefit from using your library? (I’d love to if it’ll benefit me!)

Cheers and awesome work!

Hi @CiaranEvans,

sure thing, the forum is really the place to be for us SAR enthusiasts :wink:.
Whether you can benefit from pyroSAR of course largely depends on your needs and the way you’d like to work with the data. Calling GPT via subprocesses sure works fine, pyroSAR also does that in the background.
However, pyroSAR goes further so that the whole data processing pipeline becomes easier to use. There are several things to it, but for now I’ll stick to those that might be relevant for your use case, i.e. Sentinel-1 GRD processing with SNAP. I assume that you have the S1 raw data storage figured out and have a working processing chain so I won’t go into raw data management. The question is, how flexible do you need your workflow to be in the long run?
For example, do you always process the whole scene or are you sometimes only interested in a small area? pyroSAR makes it quite easy to load your existing workflow and insert a Subset node with coordinates from some bounding box. Or are you interested in ancillary products like the local incident angle map from time to time and need to output them as well? These kinds of modifications to existing workflows are quite easy to do on the fly with pyroSAR’s SNAP API.
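A rough sketch of what inserting such a Subset node could look like is below. The geoRegion parameter name mirrors SNAP's Subset operator; the Workflow/parse_node names are from pyroSAR's SNAP API, but the exact call signatures may differ between versions, so treat this as a sketch rather than copy-paste code.

```python
def bbox_wkt(xmin, ymin, xmax, ymax):
    """Build a WKT polygon string from bounding-box coordinates (pure Python)."""
    return ('POLYGON (({xmin} {ymin}, {xmax} {ymin}, {xmax} {ymax}, '
            '{xmin} {ymax}, {xmin} {ymin}))').format(
        xmin=xmin, ymin=ymin, xmax=xmax, ymax=ymax)


def add_subset(workflow_xml, xmin, ymin, xmax, ymax):
    """Sketch: load an existing workflow XML and insert a Subset node.

    The node and parameter names mirror SNAP's Subset operator; check the
    pyroSAR.snap.auxil documentation for the current API before relying on
    this. Requires pyroSAR.
    """
    from pyroSAR.snap.auxil import Workflow, parse_node
    wf = Workflow(workflow_xml)
    subset = parse_node('Subset')
    # geoRegion is SNAP's parameter for a WKT-defined area of interest
    subset.parameters['geoRegion'] = bbox_wkt(xmin, ymin, xmax, ymax)
    wf.insert_node(subset, before=wf['Read'].id)
    wf.write(workflow_xml)
```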
On the performance side, the ability to split workflows can significantly speed up processing. Instead of executing the whole workflow at once, pyroSAR can split it and execute the sub-workflows in sequence. In your case this could look like this:

Read > Radiometric Calibration > Write
Read > Terrain Correction > Write

This way the processing is faster and needs less memory. I agree though that this should not be necessary and SNAP will eventually be just as efficient for long workflows as it is for short ones.
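If you want to try the splitting yourself, pyroSAR's gpt wrapper can execute a workflow in groups of nodes; groupbyWorkers is the helper I remember for building those groups, but the function names and signatures here are from memory, so verify them against the version you install.

```python
def run_split(workflow_xml, tmpdir, n=2):
    """Sketch: execute a workflow in sub-groups of n nodes each.

    The groupbyWorkers/gpt names and signatures are assumptions; check
    pyroSAR.snap.auxil in the installed version. Requires pyroSAR and SNAP.
    """
    from pyroSAR.snap.auxil import gpt, groupbyWorkers
    # split the node sequence into groups written to intermediate files
    groups = groupbyWorkers(workflow_xml, n=n)
    gpt(workflow_xml, tmpdir=tmpdir, groups=groups)
```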
On top of that, I have occasionally discovered some output formatting issues. For example, SNAP does not properly write nodata values to the GeoTiff files. Also, I found the pixel ordering in the GeoTiffs quite unusual and cumbersome to read in other software; I addressed this here. Those issues might be fixed by now, but working around them sure saved my day some time ago.

One feature that really gets me excited lately is the ability to directly connect it to my analysis environment. I do a lot of time series analyses and want to treat all processed images of an area as one 3D cube. For this I am developing spatialist, which is pyroSAR’s spatial data handling backend. Here’s a small example:

from pyroSAR.ancillary import find_datasets, seconds
from spatialist import Raster

# some directory containing processed data
directory = 'E:\\DATA'

# collect all files with the pyroSAR naming pattern
files = find_datasets(directory, polarization='VV')

# sort the files by their acquisition time
files = sorted(files, key=seconds)

# connect all files as a 3D object and read a subset into a numpy array
with Raster(files)[0:100, 0:100, :] as ras:
    arr = ras.array()

Here pyroSAR’s consistent naming scheme comes into play, making it easy to keep track of processed scenes. It really excels, however, when using data from different sensors.

Okay, that’s already quite a long message so I’ll leave it here. I hope this sheds some light and you are still interested in using it. I’ll be around to answer more questions of course.



Thanks John!

Awesome, thanks for such a great reply!

It’s interesting to see what you’ve done with pyroSAR, also somewhat difficult to decide whether I will need more flexibility.

I didn’t think about running the Calibration and Correction separately though; it’s an interesting point that it can be quicker. I will have to investigate that!

no worries mate!
In German we have a saying “Mit Kanonen auf Spatzen schießen”. No point in “shooting sparrows with cannons”. Big tools might come with limitations.
Splitting might not make much of a difference in your case as it is quite a short workflow. It sure does with longer workflows such as the one applied by pyroSAR.snap.geocode:

(See the link above for image description)


What a stellar saying! Definitely appreciate that benefit!

I know it’s not related to pyroSAR (but as you have a way of processing using SNAP in a different way to me) - have you by any chance seen Graph Builder Subset and Terrain Correction Error Sentinel 1? I assume the problem I’m having would persist unless your subsetting is something new and fancy?

Hmm, no idea. I directly insert the WKT into the XML and it has never failed me. Maybe this workflow helps to figure it out. It’s an older one though and might not work out of the box.
S1A__IW___D_20180826T053453_tnr_bnr_Orb_Cal_ML_TF_TC_dB_proc.xml (8.7 KB)

Hey everyone,
I just came across this post and was wondering whether it might be possible to put this workflow together with pyroSAR. I’m not quite sure after looking at the documentation; can anyone help?

SNAP steps:

  1. Read
  2. TOPSAR Split
  3. Apply Orbit file
  4. Back Geocoding
  5. Enhanced spectral diversity
  6. Coherence
  7. Deburst
  8. Multilook
  9. Terrain Correction
  10. Subset
  11. Write

Thanks in advance!
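Not an authoritative answer, but with pyroSAR's general SNAP workflow parser a graph like the one listed above might be assembled node by node, roughly as sketched below. The operator names follow SNAP's conventions, all parameters are left at their defaults (so it will need tuning), and a real coherence chain needs two Read nodes for Back-Geocoding; treat the parse_recipe/parse_node/insert_node calls as assumptions to verify against the installed pyroSAR version.

```python
def build_coherence_graph(outname):
    """Sketch: assemble the listed InSAR coherence workflow with pyroSAR.

    Node names follow SNAP's operators; parameters are left at defaults,
    a second Read node for Back-Geocoding is omitted for brevity, and the
    API details should be checked against the installed pyroSAR version.
    Requires pyroSAR.
    """
    from pyroSAR.snap.auxil import parse_recipe, parse_node
    wf = parse_recipe('blank')
    last = None
    for op in ['Read', 'TOPSAR-Split', 'Apply-Orbit-File',
               'Back-Geocoding', 'Enhanced-Spectral-Diversity',
               'Coherence', 'TOPSAR-Deburst', 'Multilook',
               'Terrain-Correction', 'Subset', 'Write']:
        node = parse_node(op)
        # chain each operator onto the previous one
        wf.insert_node(node, before=last.id if last else None)
        last = node
    wf.write(outname)
```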