The thermal noise removal procedure for the new Sentinel-1 products (acquired after 2018.03.18) uses about twice as much memory as it did for the previous products (before 2018.03.18), so enlarging the memory limit will resolve the "NullPointerException" error.
According to my tests, it should be 15 GB or larger.
Find the file SNAP_INSTALLATION_DIRECTORY/snap/bin/gpt.vmoptions
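The memory limit is the -Xmx line in that file. A sketch of what it would look like after the change (the 15G value follows the tests mentioned above; adjust it to your machine):

```
# SNAP_INSTALLATION_DIRECTORY/snap/bin/gpt.vmoptions
-Xmx15G
```

Restart gpt after editing the file so the new limit takes effect.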
I can confirm the memory issue with Sentinel1RemoveThermalNoiseOp. The method buildNoiseLUTForTOPSGRD creates very large matrices, for instance:
noiseMatrix = new double[numLines][numSamples]; // numLines: 16694, numSamples: 8807
This makes the memory requirement rather impractical. Perhaps the LUT could be built in a more memory-efficient manner?
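To put a number on it, a single matrix of doubles at the dimensions quoted above already needs over a gigabyte of heap, and the operator allocates such a LUT per burst block. A quick back-of-the-envelope check (the class name is just for illustration; the dimensions are taken from the comment above):

```java
public class NoiseLutMemoryEstimate {
    public static void main(String[] args) {
        // Dimensions reported for one GRD product (see the snippet above).
        int numLines = 16694;
        int numSamples = 8807;
        // A Java double is 8 bytes; the per-row array headers add a bit more.
        long bytes = (long) numLines * numSamples * 8L;
        System.out.printf("~%.2f GB per noiseMatrix%n", bytes / 1e9); // ~1.18 GB
    }
}
```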
I am using SNAP 6.0 with S1TBX (v6.0.3). I am trying to co-register two images acquired after March 13, 2018 with GPT, performing thermal noise removal and other pre-processing steps. The process stops after 40%, as shown below, producing an output image with half of the cells set to no-data values:
…10%…20%…30%…40%
Sentinel1RemoveThermalNoiseOp: ERROR: i = 9 y = 13527 burstBlocks[i].azimuthNoise.length = 13527
java.lang.NullPointerException
The same graph works if I use the SNAP user interface, which suggests that, in my case, the issue is not related to memory.
Could you give me some information about this error?
I posted this a couple of days ago (one of the scenes I'm having problems with is S1A_IW_GRDH_1SDV_20180425T115326_20180425T115359_021623_025464_D0DE):
For some time now I have been unable to complete a batch preprocessing sequence, even though I have the latest version and update the plugins every time I start SNAP. My graph contains:
I am trying to process 5 S1A_IW_GRDH_1SDV dual-polarization scenes acquired between March and April 2018, covering northern Guatemala.
Some of the scenes (the older ones, usually just 2 of them) are processed fine, but the others contain no meaningful data. The process creates a huge file (17 GB) in the log directory (heapdump.hprof.old). I'm attaching the resulting log file.
Thanks for your suggestion. I checked and I have -Xmx20G, which I assume means 20 GB is configured. Should I raise this number to 32? Am I limited to the amount of physical RAM I have? Thanks again.
org.esa.s2tbx.dataio.gdal.activator.GDALPlugInActivator: Illegal char < at index 23: C:\WINDOWS\System32\Wbem
at sun.nio.fs.WindowsPathParser.normalize(Unknown Source)
at sun.nio.fs.WindowsPathParser.parse(Unknown Source)
at sun.nio.fs.WindowsPathParser.parse(Unknown Source)
at sun.nio.fs.WindowsPath.parse(Unknown Source)
at sun.nio.fs.WindowsFileSystem.getPath(Unknown Source)
at java.nio.file.Paths.get(Unknown Source)
at org.esa.s2tbx.dataio.gdal.activator.GDALDistributionInstaller.findFolderInPathEnvironment(GDALDistributionInstaller.java:213)
at org.esa.s2tbx.dataio.gdal.activator.GDALDistributionInstaller.processInstalledWindowsDistribution(GDALDistributionInstaller.java:179)
at org.esa.s2tbx.dataio.gdal.activator.GDALDistributionInstaller.install(GDALDistributionInstaller.java:67)
at org.esa.s2tbx.dataio.gdal.activator.GDALPlugInActivator.start(GDALPlugInActivator.java:22)
at org.esa.snap.runtime.Engine.informActivators(Engine.java:222)
at org.esa.snap.runtime.Engine.lambda$start$10(Engine.java:121)
at org.esa.snap.runtime.Engine.runClientCode(Engine.java:189)
at org.esa.snap.runtime.Engine.start(Engine.java:121)
at org.esa.snap.runtime.Engine.start(Engine.java:90)
at org.esa.snap.runtime.Launcher.run(Launcher.java:51)
at org.esa.snap.runtime.Launcher.main(Launcher.java:31)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at com.exe4j.runtime.LauncherEngine.launch(LauncherEngine.java:62)
at com.exe4j.runtime.WinLauncher.main(WinLauncher.java:101)
at com.install4j.runtime.launcher.WinLauncher.main(WinLauncher.java:16)
Executing processing graph
INFO: org.hsqldb.persist.Logger: dataFileCache open start
java.lang.NullPointerException
done.
I had to remove ThermalNoiseRemoval from my graphs. With ProductSet-Reader and SliceAssembly, ThermalNoiseRemoval errors out with "Error: Noise removal should be applied prior to slice assembly"; however, placing ThermalNoiseRemoval directly after ProductSet-Reader is not allowed either.
What is your use case? It sounds like you are trying to generate a series of multi-slice GRD acquisitions, correct?
P.S. Your processing does not include calibration, which is the most important step when processing GRD data. Also, with S-1 you should be able to use back-geocoding instead of SAR simulation for the terrain correction.
My mistake in my previous post: the summary I showed was of a graph used to create layover/shadow images only.
I am terrain-correcting ~1000 Sentinel-1 images for each 1-degree geocell. Some cells lie on the boundary between S1 slices in one orbit. Since the March updates that followed the format change, I can no longer use ThermalNoiseRemoval in my ProductSet-Reader > SliceAssembly graphs.
Below is the correct summary of my original terrain-correction graph used prior to March 2018; I now run the same graph without ThermalNoiseRemoval.
A very similar graph for single images has worked fine for the past couple of years and still uses ThermalNoiseRemoval:
Read > Apply-Orbit-File > Remove-GRD-Border-Noise > ThermalNoiseRemoval > Calibration > Terrain-Flattening > Terrain-Correction > Subset > Write
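For anyone reproducing this, a chain like the one above is expressed in GPT's graph XML roughly as follows. This is only a sketch: the file names are placeholders, the operator parameters are omitted, and most of the middle nodes are summarized in a comment; each node simply takes the previous one as its sourceProduct.

```xml
<graph id="SingleImageGraph">
  <version>1.0</version>
  <node id="Read">
    <operator>Read</operator>
    <sources/>
    <parameters>
      <file>input_scene.zip</file> <!-- placeholder input -->
    </parameters>
  </node>
  <node id="Apply-Orbit-File">
    <operator>Apply-Orbit-File</operator>
    <sources>
      <sourceProduct refid="Read"/>
    </sources>
  </node>
  <!-- Remove-GRD-Border-Noise, ThermalNoiseRemoval, Calibration,
       Terrain-Flattening, Terrain-Correction and Subset are chained
       the same way, each referencing the previous node -->
  <node id="Write">
    <operator>Write</operator>
    <sources>
      <sourceProduct refid="Subset"/>
    </sources>
    <parameters>
      <file>output_scene.dim</file> <!-- placeholder output -->
      <formatName>BEAM-DIMAP</formatName>
    </parameters>
  </node>
</graph>
```

Running `gpt mygraph.xml` then executes the chain end to end, which makes it easy to test whether moving or dropping ThermalNoiseRemoval changes the outcome.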