Terrain-Flattening output dependent on the subset size?

Hi,

I have a problem with processing S1 GRD products with SNAP. I believe there is an error in the way resampled DEM values are calculated in the Terrain-Flattening algorithm.

I did a simple test with the Terrain Flattening operator:
I preprocessed a whole S1 product, and also an area inside that product with different bounding boxes around it (200 m, 5 km, 5.1 km, and 6 km).
After that, I collocated these terrain-flattened products and checked the value of gamma0 for some pixels in the center of the selected area. To my surprise, the pixel values were all different (for the whole product and for the subsets with different boxes).
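To make my expectation concrete, here is a toy Python sketch (my own illustration, not SNAP code): any operator whose output at a pixel depends only on a fixed local neighbourhood should return the same value for that pixel regardless of how large the surrounding subset is. That is the behaviour I expected from Terrain Flattening:

```python
# Toy illustration (plain Python, not SNAP): a purely local operator
# (3x3 mean) applied to subsets of different sizes around the same
# pixel must return the same value for that pixel.

def mean3x3(img, r, c):
    """Mean of the 3x3 neighbourhood of (r, c) in a 2D list."""
    vals = [img[r + dr][c + dc] for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
    return sum(vals) / 9.0

def subset(img, r0, r1, c0, c1):
    """Crop rows r0:r1 and columns c0:c1 (like a bounding-box subset)."""
    return [row[c0:c1] for row in img[r0:r1]]

# A synthetic 100x100 "image"
full = [[(i * 37 + j * 11) % 97 for j in range(100)] for i in range(100)]

center = mean3x3(full, 50, 50)
for margin in (5, 10, 25):  # different "bounding boxes" around pixel (50, 50)
    sub = subset(full, 50 - margin, 51 + margin, 50 - margin, 51 + margin)
    assert mean3x3(sub, margin, margin) == center  # same pixel, same value
```

This invariance holds for any operator with a fixed local footprint, which is why the SNAP results surprised me.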

I know that the Terrain-Flattening operator is based on this paper:

I also know that it uses bilinear DEM resampling (which is the default for the TF operator). So my question is: how is it possible that the gamma0 values for pixels in the center of the area are affected by the size of the image processed (different bounding boxes)?

Between the 5 km and 6 km bounding boxes, nothing has changed for that pixel: it still corresponds to the same one or more (or fewer) pixels in the DEM, and the same DEM pixels should influence the gamma0 output for that pixel. Surely pixels far away from that location shouldn't affect the output value.
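To spell out why I expect locality, here is the standard bilinear interpolation formula as I understand it (my own sketch, not the SNAP implementation): the interpolated height at a point depends only on the four surrounding DEM posts, so DEM pixels outside that 2x2 stencil cannot contribute:

```python
def bilinear(dem, x, y):
    """Standard bilinear interpolation: only the 4 DEM posts around
    (x, y) enter the result (dem is a 2D list, x/y in pixel units)."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * dem[y0][x0]
            + fx * (1 - fy) * dem[y0][x0 + 1]
            + (1 - fx) * fy * dem[y0 + 1][x0]
            + fx * fy * dem[y0 + 1][x0 + 1])

# Two DEMs that differ only outside the 2x2 stencil around (0.5, 0.5)
dem_a = [[0, 10, 20], [5, 15, 25], [1, 2, 3]]
dem_b = [[0, 10, 20], [5, 15, 25], [999, 999, 999]]

# Identical results: the changed bottom row is outside the stencil
assert bilinear(dem_a, 0.5, 0.5) == bilinear(dem_b, 0.5, 0.5) == 7.5
```

So if the resampled DEM alone determined the output, subset size should be irrelevant.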

Is there a bug in the Terrain-Flattening algorithm or am I missing something?

Best,
Ognjen

Can you please provide visual examples?

Sure,

here is one pixel from the collocated image in SNAP (the smallest bbox was chosen as the collocation master):

And here is the same pixel from one of the larger images (included only to show one of the larger-bbox images visually):

You can see in the first image how the pixel value for gamma0_VH "jumps" up and down as the image size changes. The particular bbox sizes I chose are not important; I have experimented with many more sizes, but the result is always different (sometimes by more, sometimes by less).

This is also true for the Thermal Noise Removal operator, tested the same way as above:

I can provide the code if needed (I used Python and gpt command-line calls).

I tested this because I want to be able to get the same results for a pixel when I apply preprocessing for the whole S1 image and for some subset of that image.

Thanks,
Ognjen

What makes you sure that you are retrieving data from exactly the same pixel? Especially for slant-range products of different extents there can be shifts at this coordinate precision. So if you want to be sure, you can terrain-correct all of them, then collocate (always with nearest-neighbour resampling), and then select one pixel.

Why do you think there would be shifts if exactly the same product is used and I haven't applied any transformations, just taken a subset of the product? I am using a ground-range product for this test:
S1B_IW_GRDH_1SDV_20180827T163301_20180827T163326_012451_016F5C_7360

I have tried what you suggested (for two subsets); the result is still different. This is the command I used for Terrain Correction (I tried both nearest neighbour and the default bilinear interpolation for the radar image resampling):

gpt Terrain-Correction -PimgResamplingMethod="NEAREST_NEIGHBOUR" -PalignToStandardGrid=true -PstandardGridOriginX=-5 -PstandardGridOriginY=-5 -PsaveSigmaNought=true -PdemName="Copernicus 30m Global DEM" -PmapProjection='EPSG:32634' -PsaveProjectedLocalIncidenceAngle=true -PpixelSpacingInMeter=10 -f GeoTIFF-BigTIFF -Dsnap.dataio.bigtiff.compression.type=LZW -Dsnap.dataio.bigtiff.tiling.width=512 -Dsnap.dataio.bigtiff.tiling.height=512 subset_5000_sh2_tf.dim -t subset_5000_sh2_tf_tc_near.tif

I have always used nearest neighbour for collocation.

Very strange. Are the images correctly aligned after collocation?

Can you please create an RGB image of the different inputs?

I think they are; they seem correctly collocated. Can you suggest a way to check that?

Here is the output RGB:


One more thing: only some tools are dependent on the input image size. For example, Terrain Correction (Range Doppler, a geometric correction) returns the same pixel values. I tested this by creating subsets from a Terrain-Flattened product, correcting them separately, and finally collocating them; the pixel values are the same in all rasters.

I did the same test with the Terrain Flattening and Thermal Noise Removal operators (subsetting from the original product and then applying these operators to the subsets), and they yield different values for each pixel (as I mentioned above). So it seems that these two operators are dependent on the input image size, which I think shouldn't be the case.

If I’m correct, this should be the code for these two operators:


I'm not great with Java, but I'll try to look through the code. It would be nice if one of the developers who wrote the code ("Cecilia Wong, Jun Lu, Luis Veci") could chime in on the discussion, or at least someone who is proficient in Java…

Thank you, the fact that the RGB is grey in the overlapping area indicates that the products are well aligned.
Yet there are small color differences between them, which supports your observation that the same product results in slightly different pixel values (still not sure the extent is the reason).
Maybe @jun_lu and @lveci can comment here?

I’ve taken a look at https://github.com/senbox-org/s1tbx/blob/d82796560d021446a1187a27cb667026135094d4/s1tbx-op-calibration/src/main/java/org/esa/s1tbx/calibration/gpf/Sentinel1RemoveThermalNoiseOp.java#L773

The only two things that cross my mind are that there are two pairs of class methods with the same name (lines 683 and 709, and lines 773 and 818), and that I'm not sure whether the correct noise vectors are taken for each image row. I guess that when retrieving noise vectors for a scene subset, the image rows (and columns) need to be indexed with respect to the whole scene.
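To illustrate that second guess (a hypothetical sketch in plain Python, not the actual Java code): if the operator interpolated the annotated noise LUT at subset-relative line numbers instead of scene-relative ones, the noise removed from a given pixel would differ between the subset and the full scene:

```python
def noise_at_line(noise_lut, line):
    """Linearly interpolate an azimuth noise LUT given as a list of
    (scene_line, noise_value) pairs sorted by scene_line."""
    for (l0, n0), (l1, n1) in zip(noise_lut, noise_lut[1:]):
        if l0 <= line <= l1:
            t = (line - l0) / (l1 - l0)
            return n0 + t * (n1 - n0)
    raise ValueError("line outside LUT range")

lut = [(0, 1.0), (1000, 2.0)]  # noise annotated in scene coordinates
subset_first_line = 300        # hypothetical: subset starts at scene line 300

# correct: subset line 0 maps to scene line 300 before the LUT lookup
correct = noise_at_line(lut, 0 + subset_first_line)
# buggy: forgetting the offset queries the LUT at the wrong position
buggy = noise_at_line(lut, 0)
assert correct != buggy  # 1.3 vs 1.0 -> different noise is subtracted
```

But again, this is only a guess about where such an offset could go missing.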

But these are just wild guesses, I hope the developers will figure it out…

Which DEM?

Copernicus EUDEM 30m, but I think the problem exists for all DEMs; I believe I also tested with SRTM before.

A JIRA ticket (https://senbox.atlassian.net/browse/SITBX-862) has been created to track the problem. We will look into it. Thank you for pointing out the problem!

Hello @jun_lu,
do you know of any news on this issue?

We have looked into the issue. Actually, there are two issues here: both the Thermal Noise Removal operator and the Terrain Flattening operator have problems with subsets, but the causes are different. For thermal noise removal, the cause was that the subset was not properly handled. We have fixed the problem, and the operator now produces identical results for a subset and for the whole image.

For terrain flattening, however, we believe there can be some difference between the gamma0 output for a subset and for the whole image. The kernel of the algorithm is to simulate the projected illuminated area for each pixel and "normalize" beta0 using the simulated image. If, for different bounding boxes, the simulated images differ, then the output gamma0 will differ, and we believe this is the case.

Given a bounding box of the image, we first get the max and min of the latitude and longitude from the geo-positions of the corner points. Then a latitude/longitude grid is created within the area bounded by those min/max values in the map domain, and the projected illuminated area is computed for each grid cell. The computed illuminated area is finally mapped back to the 4 surrounding pixels in the image domain using bilinear distribution. Here we can see that a different bounding box leads to a different latitude/longitude grid, and hence a different simulated result for each pixel. The question now is how much difference is reasonable. We are currently reviewing the code to make sure there are no loose ends.
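The bounding-box dependence described above can be sketched like this (a simplified Python illustration, not the operator code; the spacing and coordinates are made up): because the grid is anchored at the subset's own latitude minimum, two bounding boxes generally produce grids that sample different points, so the simulated areas distributed to a given image pixel differ as well:

```python
def grid_points(lat_min, lat_max, spacing):
    """Latitude grid anchored at the subset's own minimum latitude
    (rounded to avoid float accumulation noise in the comparison)."""
    pts = []
    lat = lat_min
    while lat <= lat_max:
        pts.append(round(lat, 6))
        lat += spacing
    return pts

spacing = 0.001  # arbitrary grid spacing for the illustration

# Same area of interest, two slightly different bounding boxes:
full_grid   = grid_points(44.0000, 44.010, spacing)
subset_grid = grid_points(44.0004, 44.010, spacing)

# The two grids sample entirely different latitudes, so the bilinear
# distribution of simulated area onto image pixels differs too.
assert set(full_grid).isdisjoint(subset_grid)
```

So some difference is inherent in anchoring the simulation grid to the subset extent; the open question is how large that difference should be.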
