Terrain-Flattening output dependent on the subset size?

Can you please provide visual examples?

Sure,

here is one pixel from the collocated image in SNAP (the smallest bbox was chosen as the collocation master):

And here is the same pixel from one of the larger images (I included this only to show one of the images with a larger bbox):

You can see in the first image how the pixel value for gamma0_VH “jumps” up and down as the image size changes. The exact bbox sizes I chose are not important; I have experimented with many more sizes, and the result is always different (sometimes more, sometimes less).

This is also true for the Thermal Noise Removal operator, tested the same way as above:

I can provide the code if needed (I used Python and gpt command-line calls).

I tested this because I want to be able to get the same results for a pixel when I apply preprocessing for the whole S1 image and for some subset of that image.
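For reference, the test loop looked roughly like the sketch below. It is a simplified reconstruction, not my exact script: the operator and parameter names should be double-checked with gpt -h, and the bounding boxes and file names are just placeholders.

```python
# Simplified sketch of the test: subset the same GRD product to several
# bounding boxes, calibrate to beta0 and run Terrain-Flattening on each
# subset, then compare one pixel across the outputs (via collocation in SNAP).
# Bounding boxes and file names are placeholders; verify operator/parameter
# names with "gpt -h <operator>".
import subprocess

PRODUCT = "S1B_IW_GRDH_1SDV_20180827T163301_20180827T163326_012451_016F5C_7360.zip"

# Hypothetical bounding boxes of increasing size around the same test pixel.
BBOXES = {
    "small":  "POLYGON((21.0 44.0, 21.2 44.0, 21.2 44.2, 21.0 44.2, 21.0 44.0))",
    "medium": "POLYGON((20.9 43.9, 21.3 43.9, 21.3 44.3, 20.9 44.3, 20.9 43.9))",
    "large":  "POLYGON((20.8 43.8, 21.4 43.8, 21.4 44.4, 20.8 44.4, 20.8 43.8))",
}

for name, wkt in BBOXES.items():
    subprocess.run(["gpt", "Subset", f"-PgeoRegion={wkt}", "-PcopyMetadata=true",
                    PRODUCT, "-t", f"subset_{name}.dim"], check=True)
    subprocess.run(["gpt", "Calibration", "-PoutputBetaBand=true",
                    f"subset_{name}.dim", "-t", f"subset_{name}_cal.dim"], check=True)
    subprocess.run(["gpt", "Terrain-Flattening",
                    "-PdemName=Copernicus 30m Global DEM",
                    f"subset_{name}_cal.dim", "-t", f"subset_{name}_tf.dim"], check=True)
```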

Thanks,
Ognjen

What makes you sure that you are retrieving data from exactly the same pixel? Especially for slant range products of different extents there can be shifts at this coordinate precision. So, to be sure, you can terrain correct all of them, then collocate (always with nearest neighbour resampling), and then select one pixel.
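For the “select one pixel” step, once everything is terrain corrected to the same projection (ideally aligned to the same standard grid), you could read the value at one map coordinate from each output. A minimal sketch, assuming rasterio and GeoTIFF outputs; the file names and the coordinate are placeholders:

```python
# Read the value at one and the same map coordinate from several
# terrain-corrected GeoTIFFs. With all products projected to the same CRS
# (and aligned to the same standard grid), the coordinate - not the array
# index - identifies the pixel. File names and the coordinate are placeholders.
import rasterio

FILES = ["subset_small_tf_tc.tif", "subset_large_tf_tc.tif", "full_scene_tf_tc.tif"]
X, Y = 500000.0, 4875000.0   # hypothetical easting/northing in the target CRS

for path in FILES:
    with rasterio.open(path) as ds:
        value = next(ds.sample([(X, Y)]))[0]   # first band, e.g. Gamma0_VH
        print(f"{path}: {value:.6f}")
```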

Why do you think there would be shifts if exactly the same product is used and I haven’t done any transformations, only taken a subset of the product? I am using a ground range product for this testing:
S1B_IW_GRDH_1SDV_20180827T163301_20180827T163326_012451_016F5C_7360

I have tried what you suggested (for two subsets), and the result is still different. This is the code I used for Terrain Correction (I tried both nearest neighbour and the default bilinear interpolation for radar image resampling):

gpt Terrain-Correction -PimgResamplingMethod="NEAREST_NEIGHBOUR" -PalignToStandardGrid=true -PstandardGridOriginX=-5 -PstandardGridOriginY=-5 -PsaveSigmaNought=true -PdemName="Copernicus 30m Global DEM" -PmapProjection='EPSG:32634' -PsaveProjectedLocalIncidenceAngle=true -PpixelSpacingInMeter=10 -f GeoTIFF-BigTIFF -Dsnap.dataio.bigtiff.compression.type=LZW -Dsnap.dataio.bigtiff.tiling.width=512 -Dsnap.dataio.bigtiff.tiling.height=512 subset_5000_sh2_tf.dim -t subset_5000_sh2_tf_tc_near.tif

I have always used nearest neighbour for collocation.
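For completeness, this is roughly how I ran the same Terrain-Correction call on each subset from Python (only a sketch: the second subset name is a placeholder, and the -D tiling/compression options from the command above are omitted for brevity):

```python
# Run the Terrain-Correction command shown above on each terrain-flattened
# subset so that all outputs share the same standard grid. The second subset
# name is a placeholder; the -D tiling/compression options are omitted here.
import subprocess

for name in ["subset_5000_sh2_tf", "subset_8000_sh2_tf"]:
    subprocess.run([
        "gpt", "Terrain-Correction",
        "-PimgResamplingMethod=NEAREST_NEIGHBOUR",
        "-PalignToStandardGrid=true",
        "-PstandardGridOriginX=-5",
        "-PstandardGridOriginY=-5",
        "-PsaveSigmaNought=true",
        "-PdemName=Copernicus 30m Global DEM",
        "-PmapProjection=EPSG:32634",
        "-PsaveProjectedLocalIncidenceAngle=true",
        "-PpixelSpacingInMeter=10",
        "-f", "GeoTIFF-BigTIFF",
        f"{name}.dim",
        "-t", f"{name}_tc_near.tif",
    ], check=True)
```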

Very strange. Are the images correctly aligned after collocation?

Can you please create an RGB image of the different inputs?

I think they are; they seem correctly collocated. Can you suggest some way to check that?

Here is the output RGB:


One more thing: only some tools are dependent on the input image size. For example, Terrain Correction (Range Doppler, geometric correction) returns the same pixel values. I tested this by creating subsets from the Terrain-Flattened product, correcting them separately, and finally collocating them - the pixel values are the same for all rasters.

I did the same test with the Terrain Flattening and Thermal Noise Removal operators (subsetting the original product and applying these operators to the subsets), and they yield different values for each pixel (as mentioned above). So it seems that these two operators are dependent on the input image size, which I think shouldn’t be the case.
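To make the comparison concrete, this is roughly how I checked whether the collocated bands are pixel-identical (only a sketch; the stack file name and band layout are placeholders):

```python
# Report, band by band, whether a collocated stack of Gamma0_VH rasters (one
# band per subset size) is pixel-identical to the first band. The stack file
# name and the band order are placeholders.
import numpy as np
import rasterio

with rasterio.open("collocated_gamma0_vh_stack.tif") as ds:
    names = [d or f"band_{i + 1}" for i, d in enumerate(ds.descriptions)]
    bands = [ds.read(i + 1).astype("float64") for i in range(ds.count)]

reference = bands[0]
for name, band in zip(names[1:], bands[1:]):
    identical = np.allclose(band, reference, equal_nan=True)
    max_diff = np.nanmax(np.abs(band - reference))
    print(f"{name} vs {names[0]}: identical={identical}, max |diff|={max_diff:.6g}")
```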

If I’m correct, this should be the code for these two operators:


I’m not great with Java, but I’ll try to look through the code. It would be nice if one of the developers who wrote the code (Cecilia Wong, Jun Lu, Luis Veci) could chime in on the discussion, or at least someone who is proficient in Java…

Thank you, the fact that the RGB is grey in the overlapping area indicates that the products are well aligned.
Yet there are small colour differences between them, which supports your observation that the same product results in slightly different pixel values (still not sure if the extent is the reason).
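To answer your question about checking the alignment more rigorously: besides the RGB, you could compare the georeferencing directly and then difference the overlapping bands. A rough sketch, assuming the collocated bands were exported as GeoTIFFs (file names are placeholders):

```python
# Quick programmatic alignment check for two collocated outputs exported as
# GeoTIFFs: the CRS, geotransform and raster size should match exactly; only
# then is a pixel-by-pixel difference meaningful. File names are placeholders.
import numpy as np
import rasterio

with rasterio.open("collocated_small_vh.tif") as a, rasterio.open("collocated_large_vh.tif") as b:
    print("same CRS:      ", a.crs == b.crs)
    print("same transform:", a.transform == b.transform)
    print("same size:     ", (a.width, a.height) == (b.width, b.height))
    if a.transform == b.transform and (a.width, a.height) == (b.width, b.height):
        diff = np.abs(a.read(1).astype("float64") - b.read(1).astype("float64"))
        print("max |difference| :", np.nanmax(diff))
        print("mean |difference|:", np.nanmean(diff))
```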
Maybe @jun_lu and @lveci can comment here?

I’ve taken a look at https://github.com/senbox-org/s1tbx/blob/d82796560d021446a1187a27cb667026135094d4/s1tbx-op-calibration/src/main/java/org/esa/s1tbx/calibration/gpf/Sentinel1RemoveThermalNoiseOp.java#L773

Only two things cross my mind: there are two pairs of class methods with the same name (lines 683 and 709, and lines 773 and 818), and I am not sure whether the correct noise vectors are taken for each image row. I guess that when retrieving noise vectors for a scene subset, the image rows (and also the columns) need to be indexed with respect to the whole scene.
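To illustrate the second guess (purely schematic, in Python rather than Java, and not the actual S1TBX logic):

```python
# Schematic illustration of the "subset offset" guess - not the actual S1TBX
# code. The noise vectors are annotated at absolute scene lines/pixels, so a
# subset starting at (subset_x0, subset_y0) must convert its local row/column
# to absolute coordinates before interpolating the noise LUT.
import numpy as np

def noise_for_pixel(noise_lines, noise_pixels, noise_lut,
                    local_row, local_col, subset_x0=0, subset_y0=0):
    """Bilinearly interpolate the annotated noise LUT at one image pixel."""
    abs_row = local_row + subset_y0          # absolute scene line
    abs_col = local_col + subset_x0          # absolute scene pixel
    # Interpolate along the line (azimuth) axis for every annotated pixel...
    per_pixel = np.array([
        np.interp(abs_row, noise_lines, noise_lut[:, j])
        for j in range(noise_lut.shape[1])
    ])
    # ...then along the pixel (range) axis.
    return np.interp(abs_col, noise_pixels, per_pixel)

# If subset_x0/subset_y0 were dropped (i.e. local indices used directly), the
# interpolation would sample the wrong part of the LUT, and the removed noise
# would differ between the subset and the full scene.
```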

But these are just wild guesses, I hope the developers will figure it out…

Which DEM?

Copernicus EUDEM 30m, but I think the problem exists for all DEMs; I believe I also tested with SRTM before.

A JIRA ticket (https://senbox.atlassian.net/browse/SITBX-862) has been created to track the problem. We will look into it. Thank you for pointing out the problem.


Hello @jun_lu,
do you know of any news on this issue?

We have looked into the issue. There are actually two issues here: both the thermal noise removal operator and the terrain flattening operator have a problem with subsets, but the causes are different. For thermal noise removal, the cause was that the subset was not handled properly. We have fixed the problem, and the operator now produces identical results for a subset and for the whole image.

For terrain flattening, however, we believe there can be some difference between the gamma0 output for a subset and the gamma0 output for the whole image. The kernel of the algorithm is to simulate the projected illuminated area for each pixel and to “normalize” beta0 using the simulated image. If the simulated images differ for different bounding boxes, the output gamma0 will differ as well, and we believe this is what happens. Given a bounding box of the image, we first get the min and max of the latitude and longitude from the geo-positions of the corner points. A latitude/longitude grid is then created within the area bounded by those min/max values in the map domain, and the projected illuminated area is computed for each grid cell. The computed illuminated area is finally mapped back to the 4 surrounding pixels in the image domain using the bilinear distribution method. A different bounding box therefore leads to a different latitude/longitude grid, and hence to a slightly different simulated result for each pixel. The question now is how much difference is reasonable. We are currently reviewing the code to make sure there are no loose ends.
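Schematically (this is only an illustration, not the operator code), the sensitivity comes from the fact that the simulation grid is anchored at the corners of the bounding box:

```python
# Schematic illustration (not the actual S1TBX code) of why the simulation
# grid depends on the bounding box: the grid is anchored at the lat/lon
# min/max of the box, so changing the box shifts every grid node.
import numpy as np

def simulation_grid(lat_min, lat_max, lon_min, lon_max, step_deg):
    lats = np.arange(lat_min, lat_max, step_deg)
    lons = np.arange(lon_min, lon_max, step_deg)
    return lats, lons

step = 30.0 / 3600.0   # roughly a 30 m DEM posting in degrees (assumption)

# Two bounding boxes around the same area, differing only slightly in extent.
lats_a, lons_a = simulation_grid(44.000, 44.300, 20.000, 20.400, step)
lats_b, lons_b = simulation_grid(43.998, 44.302, 19.997, 20.403, step)

# The grid nodes do not coincide, so the illuminated area simulated at each
# node - and the bilinear weights used to distribute it onto image pixels -
# differ between the two runs, even over the common area.
print("first latitude node, box A:", lats_a[0])
print("first latitude node, box B:", lats_b[0])
print("offset between the grids  :", abs(lats_a[0] - lats_b[0]) * 3600, "arcsec")
```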


Has there been a solution to this issue?

I just encountered the same thing: different VH and VV backscatter values depending on whether a subset or the full GRDH image is processed. On the one hand, the backscatter values of the subset are around 1 dB lower across the whole subset area; on the other hand, the simulated area values are around 0.5 higher in the subset across the whole area. These higher simulated area values in the subset might also be the reason for the lower backscatter values.

I applied the following processing steps with S1TBX 9.0.3 using the internal Copernicus 30 m DEM for terrain flattening and correction:

Full GRDH image

Subset

To make it more illustrative, below you find the VH polarisation, with the same colour bar range (-20 to -10 dB), for the full GRDH image and for the subset of S1A_IW_GRDH_1SDV_20170401T161416_20170401T161441_015953_01A4E4_0A10:

Full GRDH image

Subset (POLYGON((19.63 64.14, 19.96 64.14, 19.96 64.37, 19.63 64.37, 19.63 64.14, 19.63 64.14, 19.63 64.14)))

Colours
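For reference, the dB images above were produced roughly like this (only a sketch; the file names are placeholders and the conversion assumes linear gamma0 values):

```python
# Render Gamma0_VH in dB with a fixed colour range (-20 to -10 dB) so that the
# full-scene and subset results are directly comparable. File names are
# placeholders.
import matplotlib.pyplot as plt
import numpy as np
import rasterio

for path, title in [("full_grdh_gamma0_vh.tif", "Full GRDH image"),
                    ("subset_gamma0_vh.tif", "Subset")]:
    with rasterio.open(path) as ds:
        gamma0 = ds.read(1).astype("float64")
    gamma0_db = 10.0 * np.log10(np.where(gamma0 > 0, gamma0, np.nan))
    plt.figure()
    plt.imshow(gamma0_db, vmin=-20, vmax=-10, cmap="viridis")
    plt.colorbar(label="Gamma0_VH [dB]")
    plt.title(title)
plt.show()
```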

Would clipping the bounding box limits to the input DEM grid, i.e. to a multiple of the demResolution, not help address the discrepancy to a large degree here?

With the limits aligned, a subset should iterate over the same oversampled DEM locations as the full scene, and processing different images should also iterate over the same oversampled DEM locations (see the sketch below). In our opinion this jitter effect of the bounding boxes significantly impacts the results in steep terrain.
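Something along these lines is what we mean by clipping to the DEM grid (just a sketch; the DEM posting and the coordinates are illustrative):

```python
# Sketch of the proposed fix: snap the lat/lon limits of the simulation grid
# outward to multiples of the DEM posting, so every bounding box (and every
# scene) samples the DEM at the same oversampled node locations.
import math

def snap_to_dem_grid(lat_min, lat_max, lon_min, lon_max, dem_resolution_deg):
    return (math.floor(lat_min / dem_resolution_deg) * dem_resolution_deg,
            math.ceil(lat_max / dem_resolution_deg) * dem_resolution_deg,
            math.floor(lon_min / dem_resolution_deg) * dem_resolution_deg,
            math.ceil(lon_max / dem_resolution_deg) * dem_resolution_deg)

dem_res = 1.0 / 3600.0   # ~30 m DEM posting in degrees (assumption)

# Two slightly different subset boxes now start and end on DEM grid nodes, so
# the grid nodes of both runs coincide over their common area.
print(snap_to_dem_grid(64.140, 64.370, 19.630, 19.960, dem_res))
print(snap_to_dem_grid(64.1403, 64.3698, 19.6302, 19.9597, dem_res))
```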


What is the latest on this issue, @jun_lu? Is any solution to it planned? As mentioned in the post above from the 23rd of March, the differences in the resulting backscatter values are quite substantial!

Hi,
I am facing the same challenge at my research site! In my case too, the data values drop when creating the subset (I added two screenshots, one using a subset and one without). It would be great if we could solve this!


Thanks!


I see that quite a few of us have come across this issue. Should we maybe move this thread to the “problem reports” section? @ABraun @mengdahl

What is the latest on this issue? Will this be solved in the coming release of SNAP 10? @jun_lu