Pixel misalignment after processing series of Sentinel-1 GRD images

For a crop classification task I was processing hundreds of Sentinel-1 GRD images in Python using snappy, following a processing chain I came across a long time ago:

  1. Calibration
  2. Slice assembly (when necessary, in order to merge images and get full coverage)
  3. Subset (to a complex area border)
  4. Speckle filtering
  5. Terrain correction
  6. Linear to dB
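
For reference, a chain like the one above can also be expressed as a SNAP GPT graph. This is only a sketch: parameters are trimmed to a minimum, the slice-assembly step is omitted for brevity, and `${input}`/`${wktAOI}`/`${output}` are placeholder variables; check operator and parameter names against your SNAP version.

```xml
<graph id="S1_GRD_chain">
  <version>1.0</version>
  <node id="Read">
    <operator>Read</operator>
    <sources/>
    <parameters><file>${input}</file></parameters>
  </node>
  <node id="Calibration">
    <operator>Calibration</operator>
    <sources><sourceProduct refid="Read"/></sources>
    <parameters><outputSigmaBand>true</outputSigmaBand></parameters>
  </node>
  <node id="Subset">
    <operator>Subset</operator>
    <sources><sourceProduct refid="Calibration"/></sources>
    <parameters><geoRegion>${wktAOI}</geoRegion></parameters>
  </node>
  <node id="Speckle-Filter">
    <operator>Speckle-Filter</operator>
    <sources><sourceProduct refid="Subset"/></sources>
    <parameters><filter>Lee</filter></parameters>
  </node>
  <node id="Terrain-Correction">
    <operator>Terrain-Correction</operator>
    <sources><sourceProduct refid="Speckle-Filter"/></sources>
    <parameters><pixelSpacingInMeter>10.0</pixelSpacingInMeter></parameters>
  </node>
  <node id="LinearToFromdB">
    <operator>LinearToFromdB</operator>
    <sources><sourceProduct refid="Terrain-Correction"/></sources>
    <parameters/>
  </node>
  <node id="Write">
    <operator>Write</operator>
    <sources><sourceProduct refid="LinearToFromdB"/></sources>
    <parameters><file>${output}</file><formatName>BEAM-DIMAP</formatName></parameters>
  </node>
</graph>
```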

All the images were from the same relative orbits (both ascending and descending), but after processing, none of the images' pixels aligned with any of the other images.
For the sake of the research, I want the pixels to align perfectly. What is the best way to achieve this?

My guess is that collocation could help me out now, but which resampling method would be most appropriate? Nearest neighbour, perhaps?
I was also wondering (for future processing) what I should have included in the workflow so that the images come out aligned right away after processing.

I did not perform the "Apply Orbit File" step, as I want to use images as soon as they have been acquired rather than wait ~20 days (as I saw in some other post) for the precise orbit file.
Another question I have is whether any other steps are preferred in this kind of workflow, e.g. thermal noise removal, border noise removal, terrain flattening, etc.?

Check whether it works when you enable the Align-to-grid option in the terrain correction step. It is off by default, which (logically) leads to sub-pixel shifts between image acquisitions. Whether sub-pixel shifts are relevant for crop classification is debatable, but you can figure that out afterwards.

This sounds useful for many purposes. But where can this option be found?

This option is "poorly documented". In fact, you won't find it in the help docs for Range Doppler Terrain Correction, and there is no checkbox for it in the UI by default (likely because the default projection is geographic). But if you save the processing chain as a graph, you will find it in the XML. The effect of Align-to-grid is that the upper-left corner of the image is placed on a rounded metric projected (e.g. UTM) coordinate tied to the pixel size setting.
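
For completeness, this is roughly what the relevant node looks like in a saved graph XML. A sketch only: the parameter names (`alignToStandardGrid`, `standardGridOriginX|Y`) are as found in recent SNAP versions and may differ in older ones, and the DEM, spacing, and projection values are placeholders to adapt.

```xml
<node id="Terrain-Correction">
  <operator>Terrain-Correction</operator>
  <sources><sourceProduct refid="Speckle-Filter"/></sources>
  <parameters>
    <demName>SRTM 3Sec</demName>
    <pixelSpacingInMeter>10.0</pixelSpacingInMeter>
    <!-- A metric projection (e.g. auto-UTM) is needed for the
         rounded metric origin to make sense -->
    <mapProjection>AUTO:42001</mapProjection>
    <alignToStandardGrid>true</alignToStandardGrid>
    <standardGridOriginX>0.0</standardGridOriginX>
    <standardGridOriginY>0.0</standardGridOriginY>
  </parameters>
</node>
```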


Yes, "align-to-grid" works perfectly in my code. I have tested it on a set of images and they do align now. Thank you very much for suggesting it.
Regarding the relevance of sub-pixel shifts to crop classification: I plan to do pixel-wise classification using machine learning, so pixel-to-pixel alignment seems preferable.
Any comments on the workflow steps for such a task? Is any other step desirable or necessary?

I would skip the linear-to-dB step. For classification, it's better to work in natural (linear) values, especially if you apply some kind of interpolation first. Terrain flattening would (somewhat) cancel the viewing-geometry variation between ascending/descending and multiple orbits, in case you use mixed-orbit data and have significant slopes in your area.

Great. At the moment I'm considering using some kind of interpolation, so thank you for raising the flag. A while back I wondered whether linear-to-dB is suitable for all kinds of applications, but it seemed that lin2dB is followed blindly by a large part of the community. Any suggestions for further reading?
Regarding terrain flattening: the area is not characterized by significant slopes and currently I will not mix orbits, but I do have it in mind in the long run, so once again thank you for your suggestions.

Yes, good observation! It's amazing how often this is misunderstood. The "literature" does not have to go much beyond secondary-school math: log(a) + log(b) != log(a + b). So any step that involves resampling, filtering, interpolation, etc. should not operate on dB values. Convert only the end result to dB, if needed for display.
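
The point can be shown numerically: averaging two backscatter values in dB space gives you the geometric mean of the linear values, not the arithmetic mean. A minimal sketch (the values are made up):

```python
import math

def to_db(x):
    """Convert a linear backscatter value to dB."""
    return 10.0 * math.log10(x)

def from_db(x):
    """Convert a dB value back to linear."""
    return 10.0 ** (x / 10.0)

a, b = 0.5, 0.05  # two example linear sigma0 values

# Correct: average in linear space, convert once at the end.
mean_linear = (a + b) / 2.0           # 0.275
print(to_db(mean_linear))             # ≈ -5.61 dB

# Wrong: convert first, then average in dB space.
mean_db = (to_db(a) + to_db(b)) / 2.0
print(mean_db)                        # ≈ -8.01 dB
print(from_db(mean_db))               # ≈ 0.158 = sqrt(a*b), the geometric mean
```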


Andreas, doesn't the recommended processing (without snappy) use stacking to get rid of the shifts? This should be in some of the tutorials, but I don't remember which one(s).

Do you mean the time-series analysis tutorial?
There, all data is preprocessed (conversion to dB as the last step, @glemoine :slight_smile: ) and then stacked. But it doesn't make use of the align-to-grid option (yet).


Your dB conversion is the second-to-last step, before subsetting. It would be more orthodox to swap these two steps. Subsetting leads to resampling if the subset geometry is not aligned (which is likely the case).

Looking at the code of the Subset operator, I see no sign of pixel resampling (not a Java expert, though), so I suppose the subset output is snapped to the closest pixel of the input grid.
Maybe @marpet can clarify?

Yes, that's correct. The Subset operator does not resample. That's why it has the subSamplingX and subSamplingY parameters; they define the stepping.
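
A toy illustration of what "no resampling" means here. This is a hypothetical pure-Python stand-in for the Subset operator, not SNAP code: it only slices the existing pixel grid, so no new pixel values are ever computed.

```python
def subset(grid, x0, y0, width, height, sub_x=1, sub_y=1):
    """Slice a region out of a 2-D grid with optional sub-sampling
    steps; every output value is copied verbatim from the input."""
    return [row[x0:x0 + width:sub_x] for row in grid[y0:y0 + height:sub_y]]

# 6x6 grid where the value encodes its own row/column position.
grid = [[10 * r + c for c in range(6)] for r in range(6)]

out = subset(grid, x0=1, y0=2, width=4, height=4, sub_x=2, sub_y=2)
print(out)  # [[21, 23], [41, 43]] – each value exists verbatim in the input
```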


Hi @glemoine,
I'm very pleased to discover this "Align-to-grid" option; it's going to save a lot of time for further stacking, thanks a lot for that!

And just to be sure, what do the OriginX & Y options set? The master's first pixel, right?



Yes. It aligns to an integer pixel location (ceil for eastings, floor for northings), depending on your pixel spacing. If the pixel spacing is 10, it will align to an integer increment of 10 when the origin values are 0.0. You can vary alignToStandardOriginX|Y between 0 and 10 to get an offset from the integer increment of 10. At 10 you are aligned again, but the whole image will be shifted by one pixel. Keeping it at 0 seems the logical choice.
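
My understanding of the arithmetic, sketched in plain Python (a hypothetical helper, not SNAP code; the direction of rounding per axis is debated in the next post, so both are shown; the coordinate values are made up):

```python
import math

def snap_to_grid(coord, pixel_spacing, origin=0.0, mode="floor"):
    """Snap a map coordinate to an integer multiple of the pixel
    spacing, optionally shifted by an origin offset in [0, spacing)."""
    f = math.floor if mode == "floor" else math.ceil
    return f((coord - origin) / pixel_spacing) * pixel_spacing + origin

# With 10 m pixels and origin 0.0, any scene corner lands on the same
# global 10 m grid, so separately processed images line up.
print(snap_to_grid(499123.7, 10))                 # 499120.0
print(snap_to_grid(499123.7, 10, mode="ceil"))    # 499130.0

# A non-zero origin shifts the whole grid by that offset.
print(snap_to_grid(499123.7, 10, origin=5.0))     # 499115.0
```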


More likely floor for eastings, ceil for northings, but you can figure that out experimentally.


I did integrate the AlignToGrid option into my calibration loop, but I still have to resample my products in order to stack them. The GeoTIFFs are not aligned (but that seems normal, as they are terrain-corrected individually?).
So... nothing changed on my side.

I'm curious whether anyone is able to stack their terrain-corrected products without resampling using this option. Or did I do something wrong?


I use the AlignToGrid option together with Subset to produce images that can be directly stacked.