GRD-Border-Noise and Thermal Noise removal are not working anymore since March 13, 2018

I think I have a similar issue: “S-1 Thermal Noise removal introduces Artefacts in the image”.
Was a solution found to this problem?
Thanks, Sanjay.

We are currently looking into this issue. Sorry for the inconvenience.

Thanks Magda. I have only seen this issue in that single image taken on 13th March (i.e. S1A_IW_GRDH_1SDV_20180313T180532_20180313T180557_021000_0240E0_114C), though I must admit that I have not checked every single image in detail, as I process several hundred images in batch. They look okay as such, but I have no way of checking whether the values are correct after the latest S1TBX fix. Sanjay.

Hi, @mfitrzyk @lveci - any update on my query please. I noticed there was an update, which I did apply. However, I am still seeing the artefacts in S1A_IW_GRDH_1SDV_20180313T180532_20180313T180557_021000_0240E0_114C. Thanks, Sanjay.

I’ve managed to work out what the problem is here: the java_max_mem value in snappy.ini needs to be greater than or equal to roughly 12GB in order for the IW SAFE archives to be processed. My development machine only has 8GB, hence I was running into this problem (getting NullPointerExceptions). After testing on a machine with more memory, I found that I could reproduce the problem by setting java_max_mem to 10GB and trying to process an IW SAFE archive. This would explain why I could process EW scenes on my development box, since they don’t require as much memory, and would possibly explain why others haven’t seen this issue, since one would assume they are using SNAP in an environment with 16GB or more of RAM.
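For reference, a minimal sketch of what the relevant snappy.ini entry would look like with the ~12 GB value mentioned above (the snap_home path is a placeholder for your own install):

```ini
[DEFAULT]
snap_home = /opt/snap
java_max_mem = 12G
```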

That’s interesting @paultcochrane - I would be interested to see if your installation can replicate the data-quality issue after thermal noise removal with the following image: S1A_IW_GRDH_1SDV_20180313T180532_20180313T180557_021000_0240E0_114C
My steps are as follows:

  1. Apply Restituted Orbits
  2. Border Noise removal
  3. Thermal Noise removal - this step produces the bad image described in “S-1 Thermal Noise removal introduces Artefacts in the image”. I think this is related to issue number 4 raised by @biocpu
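For reference, the three steps above can be sketched as individual gpt calls; the operator and parameter names here are assumed from standard S1TBX conventions and the output file names are placeholders:

```shell
# Sketch only: operator/parameter names assumed from standard S1TBX.
gpt Apply-Orbit-File -PorbitType="Sentinel Restituted (Auto Download)" \
    -t 01_orbit.dim S1A_IW_GRDH_1SDV_20180313T180532_20180313T180557_021000_0240E0_114C.zip
gpt Remove-GRD-Border-Noise -t 02_border.dim 01_orbit.dim
gpt ThermalNoiseRemoval -t 03_noise.dim 02_border.dim
```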

Does anyone have any news on this problem?

This should be fixed with the update from May 20th

Thanks lveci!

Just FYI, in case someone is interested.

For the new Sentinel-1 images (after 2018.03.18), the “Thermal Noise Removal” procedure uses about twice as much memory as for the previous images (before 2018.03.18). So enlarging the memory limit will resolve the “NullPointerException” error.

According to my tests, it should be 15GB or larger.

Find the file SNAPINSTALLATIONDIRECTORY/snap/bin/gpt.vmoptions
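For illustration, the memory cap in that file is the -Xmx line; following the 15 GB finding above, one would set it to something like:

```
-Xmx16G
```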



I can confirm the memory issue with Sentinel1RemoveThermalNoiseOp. The method buildNoiseLUTForTOPSGRD creates very large matrices, for instance:
noiseMatrix = new double[numLines][numSamples];//numLines: 16694, numSamples: 8807

This makes the memory requirement somewhat impractical. Perhaps the LUT could be built in a more memory-efficient manner?
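As a back-of-the-envelope check (the class and method below are illustrative, not from the S1TBX source), a single double[16694][8807] matrix already costs about 1.18 GB of heap, before counting any per-polarisation or interpolated copies:

```java
public class NoiseLutMemory {

    /**
     * Approximate heap cost of a double[numLines][numSamples] matrix,
     * ignoring the small per-row object header overhead.
     */
    public static long matrixBytes(int numLines, int numSamples) {
        return (long) numLines * numSamples * Double.BYTES; // 8 bytes per double
    }

    public static void main(String[] args) {
        long bytes = matrixBytes(16694, 8807);
        // 16694 * 8807 * 8 = 1,176,192,464 bytes, i.e. roughly 1.18 GB per matrix
        System.out.printf("%.2f GB%n", bytes / 1e9);
    }
}
```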



I am using SNAP 6.0 with s1tbx (v. 6.0.3). I am trying to co-register two images acquired after March 13, 2018 with GPT, performing Thermal Noise removal and other pre-processing steps. The process stops at 40% as shown below, producing an output image with half of the cells set to no-data values:

…10%…20%…30%…40%.
Sentinel1RemoveThermalNoiseOp: ERROR: i = 9 y = 13527 burstBlocks[i].azimuthNoise.length = 13527

The same graph works if I use the SNAP user interface, suggesting that in my case this issue is not related to the available memory.

Could you give me some information about this error?


So the issue is still not fixed? Do you have a recommendation about how to get around this problem?

It’s been working fine for my graphs. Maybe you should include exactly which products you used and which preprocessing you did, so others can reproduce the problem.

I would suggest you include the image ID so that the developers can easily locate the problem. Just FYI.


For me, I think the problem is fixed as long as there is sufficient memory. I have already processed about 40 images without any problem.

As for giuse’s question, I think it may be related to the image itself.

I posted this a couple of days ago (one of the scenes that I’m having problems with is S1A_IW_GRDH_1SDV_20180425T115326_20180425T115359_021623_025464_D0DE):

For some time now I have been unable to complete a batch preprocessing sequence even if I have the latest version and update plugins every time I start SNAP. My graph contains:

  1. Apply Orbit File
  2. Thermal Noise Removal
  3. Calibration
  4. Multilook

I try to process 5 S1A_IW_GRDH_1SDV dual-polarization scenes dating from between March and April 2018 that cover northern Guatemala.
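A sketch of a four-operator GPT graph like the one described above, assuming standard S1TBX operator names and using placeholder file paths:

```xml
<graph id="Graph">
  <version>1.0</version>
  <node id="Read">
    <operator>Read</operator>
    <parameters><file>input.SAFE.zip</file></parameters>
  </node>
  <node id="Apply-Orbit-File">
    <operator>Apply-Orbit-File</operator>
    <sources><sourceProduct refid="Read"/></sources>
  </node>
  <node id="ThermalNoiseRemoval">
    <operator>ThermalNoiseRemoval</operator>
    <sources><sourceProduct refid="Apply-Orbit-File"/></sources>
  </node>
  <node id="Calibration">
    <operator>Calibration</operator>
    <sources><sourceProduct refid="ThermalNoiseRemoval"/></sources>
  </node>
  <node id="Multilook">
    <operator>Multilook</operator>
    <sources><sourceProduct refid="Calibration"/></sources>
  </node>
  <node id="Write">
    <operator>Write</operator>
    <sources><sourceProduct refid="Multilook"/></sources>
    <parameters><file>output.dim</file><formatName>BEAM-DIMAP</formatName></parameters>
  </node>
</graph>
```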

Some of the scenes (the older ones and usually just 2 of them) are processed well, but the others just don’t have any meaningful data. The process creates a huge file (17 GB) in the log directory (file created = heapdump.hprof.old). I’m attaching the resultant log file.

Can someone help me correct this issue? Thanks.

And my computer has 32 GB of RAM… would that be a problem? How much do you have?


Would you please check SNAPINSTALLATIONDIRECTORY/snap/bin/gpt.vmoptions?

Did you set a memory limit for the software there? Enlarge it (or add it if it is missing) to see if that solves the problem.

You can also use a memory monitoring tool to check whether the program actually gets enough memory.


You are right; we will need to fix this with a module update.

Thanks for your suggestion. I checked and I have -Xmx20G, which I assume means I have 20 GB configured. Should I raise this number to 32? Am I limited to just the RAM I physically have? Thanks again.