GRD Border Noise and Thermal Noise removal have not been working since March 13, 2018

I also have trouble getting my processing chain working because of memory issues.
Even 100 GB on a cluster machine is not enough for the SliceAssembly step of 4 or more tiles. (Noise removal also requires much more memory than before March.) Changing the -Xms and -Xmx settings to sensible values in gpt.vmoptions does not help either.
I’d appreciate it if this issue could be addressed soon.


I’m also having issues with GPT at the moment. I have a polarimetry workflow that works in the SNAP GUI but not in GPT.
GPT gives no errors but hangs, so I assume it’s a memory issue? Maybe there is a better way to debug GPT (I will try the -e option).
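For what it’s worth, here is a sketch of invoking GPT with verbose error output; the paths, graph file name, and cache/thread values below are placeholders rather than anything from this thread:

```shell
# Hypothetical install path and graph name; adjust to your setup.
SNAP_HOME="$HOME/snap"
GRAPH="polarimetry_graph.xml"
# -e asks gpt to print the full stack trace on failure instead of exiting
# silently; -c caps the tile cache and -q limits parallel threads, which
# can help distinguish a genuine hang from memory exhaustion.
CMD="$SNAP_HOME/bin/gpt $GRAPH -e -c 24G -q 8"
echo "$CMD"
```

Running with a deliberately small -c value is a cheap way to test whether the hang is memory-related before touching gpt.vmoptions.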

My machine has 64GB RAM and I have allocated up to 40GB.

Trying to run Sentinel-1 ESD coregistration. Using the Graph Builder in the GUI works fine, but when running the same graph in GPT I get a java.lang.NullPointerException. I am working on Ubuntu 16.04 with two S1 SLC IW images.

M

Similar failure with noise removal on IPF 2.9 IW GRD scenes.
SNAP v6.0.2 does not produce valid sigma0 after noise removal.

Granule (from after March 13, 2018) S1A_IW_GRDH_1SDV_20180314T152304_20180314T152333_021013_02414B_59E7.zip downloaded from the Hub (and tested on many other granules globally)

[1] OS: CentOS 6.8 Linux
SNAP version 6.0.2 GUI, updated now
Procedure: Radar radiometric calibration --> S1 thermal noise removal.
Calibration works well, producing sigma0. But the output of noise removal has blank sigma0 over the whole scene for both VH and VV (min(sigma0) = 0, max(sigma0) = 0).

[2] OS: macOS 10.12.6
SNAP version 6.0.2 GUI, updated now
“java.lang.NullPointerException” is what I got. The process fails to complete.

[3] SNAP v5 produces calibrated, noise-corrected sigma0 on IW GRD from before March 2018

[4] Mixed success. SNAP v6.0.2 on CentOS Linux with 250 GB RAM. Started from SLC. Noise removal on IPF 2.9 data (March 13, 2018) works. One scene, however, has many artifacts unrelated to noise removal (granule names are provided below).

S1A_IW_GRDH_1SDV_20180323T001321_20180323T001347_021135_02452C_426C_Cal_Deb_ML_Spk_SRGR
S1A_IW_GRDH_1SDV_20180314T152304_20180314T152334_021013_02414B_1C0A_Cal_Deb_ML_Spk_SRGR


@lveci could you check it?

I think I have a similar issue - S-1 Thermal Noise removal introduces Artefacts in the image
Was a solution found to this problem?
Thanks, Sanjay.

We are currently looking into this issue. Sorry for the inconvenience.

Thanks Magda. I have only seen this issue in that single image taken on 13th March (i.e. S1A_IW_GRDH_1SDV_20180313T180532_20180313T180557_021000_0240E0_114C), though I must admit that I have not checked every single image in detail, as I process several hundred images in batch. They look okay as such, but I have no way of checking whether the values are correct after the latest S1TBX fix. Sanjay.

Hi, @mfitrzyk @lveci - any update on my query, please? I noticed there was an update, which I applied. However, I am still seeing the artefacts in S1A_IW_GRDH_1SDV_20180313T180532_20180313T180557_021000_0240E0_114C. Thanks, Sanjay.

I’ve managed to work out what the problem is here: the java_max_mem value in snappy.ini needs to be greater than or equal to roughly 12 GB for the IW SAFE archives to be processed. My development machine only has 8 GB, hence I was running into this problem (getting NullPointerExceptions). After testing on a machine with more memory, I found that I could reproduce the problem by setting java_max_mem to 10 GB and trying to process an IW SAFE archive. This would explain why I could process EW scenes on my development box, since they don’t require as much memory, and possibly why others haven’t seen this issue, since one would assume they are using SNAP in an environment with 16 GB or more of RAM.
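For reference, a minimal sketch of the relevant snappy.ini section; the snap_home path is a placeholder and the exact layout may differ between installs:

```ini
[DEFAULT]
snap_home = /path/to/snap
# Needs to be >= ~12G for IW GRD SAFE archives, per the tests above.
java_max_mem = 12G
```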

That’s interesting, @paultcochrane - I would be interested to see whether your installation can replicate the data-quality issue after thermal noise removal with the following image: S1A_IW_GRDH_1SDV_20180313T180532_20180313T180557_021000_0240E0_114C
My steps are as follows:

  1. Apply Restituted Orbits
  2. Border Noise removal
  3. Thermal Noise removal - this step produces the bad image shown in S-1 Thermal Noise removal introduces Artefacts in the image. I think this is related to issue number 4 raised by @biocpu
    Thanks,
    Sanjay.
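The steps above expressed as a GPT graph might look roughly like this; the operator aliases and the orbitType parameter value are my best guess at the S1TBX names, so treat this as a sketch rather than the exact graph used:

```xml
<graph id="NoiseRemovalChain">
  <version>1.0</version>
  <node id="Read">
    <operator>Read</operator>
    <sources/>
    <parameters>
      <file>S1A_IW_GRDH_1SDV_20180313T180532_20180313T180557_021000_0240E0_114C.zip</file>
    </parameters>
  </node>
  <node id="Apply-Orbit-File">
    <operator>Apply-Orbit-File</operator>
    <sources>
      <sourceProduct refid="Read"/>
    </sources>
    <parameters>
      <orbitType>Sentinel Restituted (Auto Download)</orbitType>
    </parameters>
  </node>
  <node id="Remove-GRD-Border-Noise">
    <operator>Remove-GRD-Border-Noise</operator>
    <sources>
      <sourceProduct refid="Apply-Orbit-File"/>
    </sources>
  </node>
  <node id="ThermalNoiseRemoval">
    <operator>ThermalNoiseRemoval</operator>
    <sources>
      <sourceProduct refid="Remove-GRD-Border-Noise"/>
    </sources>
  </node>
  <node id="Write">
    <operator>Write</operator>
    <sources>
      <sourceProduct refid="ThermalNoiseRemoval"/>
    </sources>
    <parameters>
      <file>output.dim</file>
      <formatName>BEAM-DIMAP</formatName>
    </parameters>
  </node>
</graph>
```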

Does anyone have any news on this problem?

This should be fixed with the update from May 20th

Thanks lveci!

Just FYI, in case someone is interested.

For the new Sentinel-1 images (after 2018.03.18), the “Thermal Noise Removal” procedure uses about twice as much memory as for the previous images (before 2018.03.18). So enlarging the memory limit will resolve the “NullPointerException” error.

According to my tests, it should be 15 GB or larger.

Find the file <SNAP installation directory>/snap/bin/gpt.vmoptions

and add the line (note the leading dash; the JVM option is -Xmx):
-Xmx15G
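As a concrete sketch, the tail of gpt.vmoptions would then contain something like the following (one JVM option per line; 15G per the tests above, and the -Xms line is optional):

```
-Xms2G
-Xmx15G
```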


I can confirm the memory issue with Sentinel1RemoveThermalNoiseOp. The method buildNoiseLUTForTOPSGRD creates very large matrices, for instance:
noiseMatrix = new double[numLines][numSamples]; // numLines: 16694, numSamples: 8807

This makes the memory requirement rather impractical. Perhaps the LUT could be created in a more memory-efficient manner?
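A quick back-of-the-envelope check of those dimensions shows why: a single double-precision matrix of that shape is already over a gigabyte, and the operator allocates more than one.

```shell
# Dimensions quoted above from Sentinel1RemoveThermalNoiseOp.
numLines=16694
numSamples=8807
# One double-precision matrix of that shape, 8 bytes per element.
bytes=$((numLines * numSamples * 8))
echo "$bytes bytes (~$((bytes / 1024 / 1024)) MiB) per noise matrix"
```

That works out to roughly 1.1 GiB per matrix, which goes some way toward explaining the jump in memory requirements for the post-IPF-2.9 products.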


Hi,

I am using SNAP 6.0 with S1TBX (v6.0.3). I am trying to co-register two images acquired after March 13, 2018 with GPT, performing Thermal Noise Removal and other pre-processing steps. The process stops after 40% as shown below, producing an output image with half of the cells set to no-data values:

…10%…20%…30%…40%. Sentinel1RemoveThermalNoiseOp: ERROR: i = 9, y = 13527, burstBlocks[i].azimuthNoise.length = 13527
java.lang.NullPointerException

The same graph works if I use the SNAP user interface, which suggests that in my case the issue is not related to memory.

Could you give me some information about this error?

Thanks,
Giuseppe

So the issue is still not fixed? Do you have a recommendation about how to get around this problem?
VHR

It’s been working fine for my graphs. Maybe you should state exactly which products you used and which preprocessing you did, so that the problem can be reproduced.

I would suggest you include the image ID so that they can easily locate the problem. Just FYI.

LOL

For me, the problem seems to be fixed as long as we have sufficient memory. I have already processed about 40 images without any problem.

As for Giuseppe’s question, I think it may be related to the image itself.