Sentinel-1 TF/TC errors - square patches not corrected

Hello,

I am using SNAP v8 on Windows. I have been trying to terrain correct/terrain flatten Sentinel-1 GRD data.

I am noticing, in some data, many square patches in the output image. The problem appears in both the terrain correction and terrain flattening steps. I use the DEM provided by ESA, but the problem does not appear to be in the DEM itself.
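For reference, my chain is essentially the standard Apply-Orbit-File → Calibration → Terrain-Flattening → Terrain-Correction sequence. A minimal snappy sketch of that kind of chain (just a sketch, not my exact settings; the input path and DEM name are placeholders):

```python
# Minimal sketch of an S1 GRD TF/TC chain via snappy (SNAP 8).
# The input path and DEM name are placeholders.
import snappy
from snappy import ProductIO, GPF

HashMap = snappy.jpy.get_type('java.util.HashMap')

src = ProductIO.readProduct('S1A_IW_GRDH_example.zip')

# 1) Apply orbit file (default parameters)
orb = GPF.createProduct('Apply-Orbit-File', HashMap(), src)

# 2) Calibrate to beta0, which Terrain-Flattening expects as input
cal_params = HashMap()
cal_params.put('outputBetaBand', True)
cal_params.put('outputSigmaBand', False)
cal = GPF.createProduct('Calibration', cal_params, orb)

# 3) Terrain flattening
tf_params = HashMap()
tf_params.put('demName', 'SRTM 3Sec')
tf = GPF.createProduct('Terrain-Flattening', tf_params, cal)

# 4) Range-Doppler terrain correction
tc_params = HashMap()
tc_params.put('demName', 'SRTM 3Sec')
tc = GPF.createProduct('Terrain-Correction', tc_params, tf)

ProductIO.writeProduct(tc, 'S1_TF_TC', 'BEAM-DIMAP')
```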

Here is an example from TF (no TC):

Here is an example from TC (no TF) - harder to see, but you will notice the artificial boundary:


Did you select SRTM 3Sec or SRTM 1Sec?

Either way, please test with the other one and see whether the problem persists.

Hello,

I tried the default setting (SRTM 3Sec) and SRTM 1Sec - both had the same problem. I am now trying another DEM setting, GETASSE30.

But when I did the TC step (and saved the DEM output), the DEM looked fine.

Strange indeed. Maybe it is worth testing an external DEM, e.g. AW3D30, or the Copernicus DEM if your area is in Europe.
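For a quick test, something along these lines should work (continuing the snappy sketch above; the DEM path is a placeholder, and 'External DEM' plus the externalDEMFile parameter is, as far as I know, how the terrain operators take an external file):

```python
# Sketch: Terrain-Correction with an external DEM file instead of
# an auto-downloaded one. The GeoTIFF path is a placeholder.
tc_params = HashMap()
tc_params.put('demName', 'External DEM')
tc_params.put('externalDEMFile', 'C:/dems/aw3d30_mosaic.tif')
tc_params.put('externalDEMNoDataValue', 0.0)
tc = GPF.createProduct('Terrain-Correction', tc_params, tf)
```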

Actually, the problem goes deeper. I am seeing these boxes prior to any terrain processing, already in the calibration step. They are much harder to see in the calibration images, but they are there. For example, a chip from the original product:

Same area after orbit correction and calibration:

Notice the duplication of the dark linear feature:

Actually, I take that back - I am seeing this same block in the orbit-corrected image alone. I did not think that step actually changed any image data; I assumed it just updated the metadata. But it clearly changes the image, at least in spots.
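A quick numeric check along these lines can confirm it (a sketch; the band name and window coordinates are placeholders for a region around the artifact):

```python
# Sketch: compare the same pixel window in the original and the
# orbit-corrected product. Band name and window are placeholders.
import numpy as np
from snappy import ProductIO

def read_window(path, band_name, x, y, w, h):
    product = ProductIO.readProduct(path)
    band = product.getBand(band_name)
    buf = np.zeros(w * h, np.float32)
    band.readPixels(x, y, w, h, buf)   # fills buf with the window
    product.dispose()
    return buf.reshape(h, w)

a = read_window('original.dim', 'Amplitude_VV', 12000, 8000, 512, 512)
b = read_window('orbit_corrected.dim', 'Amplitude_VV', 12000, 8000, 512, 512)
print('max abs difference in window:', float(np.abs(a - b).max()))
```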

@marpet could this somehow be related to the tiling of products?

Yes, it seems to be related to the tiling in some way.
@pmallas Do you save to BEAM-DIMAP? Have you tried NetCDF4? I’m wondering if this is related to a bug we want to release a fix for by tomorrow. If so, the problem is in the writing.
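For a quick test, a minimal sketch of switching the writer (format names as they appear in my SNAP 8 install; please check the export menu if 'NetCDF4-CF' is not available in yours):

```python
# Sketch: write the same product with two different writers to see
# whether the block artifacts follow the format. 'tc' is the final
# product from the chain above.
from snappy import ProductIO

ProductIO.writeProduct(tc, 'S1_TF_TC_dimap', 'BEAM-DIMAP')
ProductIO.writeProduct(tc, 'S1_TF_TC_nc', 'NetCDF4-CF')
```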

@marpet could this be related to the DIMAP-writing bug you fixed? In any case, the issue should be resolved in the next module update. @lveci

Yes, could be. That’s what I was writing in the post before :slight_smile:

This kind of serious data quality issue should not get past release testing. @MartinoF - any ideas on how to better control the quality of the gpt-test output products?

Hi @mengdhal, right now we test the GPT output by comparing a few samples to the reference values we expect. However, the number of samples is limited, not least because adding reference samples is a long manual process.

In my opinion, a good way to solve this issue and improve test quality is to use a perceptual hash: store the reference hash in our testing system and compare it to the hash computed from the GPT output. We can then adjust the error tolerance (e.g. the approximation error) by choosing an appropriate threshold on the hash distance.
I have already run some tests in Python on S2 images, and the system worked fine using a random matrix as the kernel of the hashing system; a rough sketch is below. More testing and discussion is needed, though.
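A minimal sketch of the idea, in pure NumPy (the grid size, bit count, and random seed are arbitrary illustration choices, not tuned values):

```python
import numpy as np

def perceptual_hash(img, n_bits=256, grid=64, seed=42):
    """Hash an image into n_bits by block-averaging it to a fixed
    grid, normalizing, projecting onto a fixed random matrix, and
    keeping the signs. Assumes img is at least grid x grid pixels."""
    h, w = img.shape
    ys = np.linspace(0, h, grid + 1).astype(int)
    xs = np.linspace(0, w, grid + 1).astype(int)
    small = np.empty((grid, grid))
    for i in range(grid):
        for j in range(grid):
            small[i, j] = img[ys[i]:ys[i + 1], xs[i]:xs[i + 1]].mean()
    # Normalize so the hash ignores global gain/offset changes
    small = (small - small.mean()) / (small.std() + 1e-12)
    # Fixed-seed random matrix as the kernel of the hashing system
    rng = np.random.default_rng(seed)
    kernel = rng.standard_normal((n_bits, grid * grid))
    return kernel @ small.ravel() > 0   # boolean bit vector

def hamming_distance(h1, h2):
    return int(np.count_nonzero(h1 != h2))

# Usage: fail the test if the output drifts too far from the
# stored reference hash, e.g.
# assert hamming_distance(perceptual_hash(output), reference_hash) < 16
```

Small numerical differences between runs flip only a few bits, while a block artifact like the one above flips many at once, so the Hamming threshold gives a tunable tolerance.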

Cheers,
Martino


Hello Marco @marpet

I was thinking that yesterday afternoon - maybe the problem has nothing to do with any of the processing I was doing, but simply with the process of importing the data. And yes, I am using the BEAM-DIMAP format, since this is the native/default format.

I will repeat everything with NetCDF4 and report back.

Thanks,
Paul

Hello,

Well, I tried my first step - orbit correction with output to NetCDF-CF. It took about 10x longer than processing to BEAM-DIMAP and produced no usable results. There are items listed in the Product Explorer, but displaying the images shows nothing: the tab opens, but the entire image area is blank.

Hello,
Do you suspect this is a bug introduced in v8? Would v7 have the same problem? I don’t have the older version installed, but I do have data processed with v7. I will try and look for myself, but I thought I would ask.

Thanks,
Paul

Hello,

I downloaded the updates today. At first glance and with the example I used above, I am seeing a better result:
