Stack averaging filtered data

I have a problem with stack averaging that follows multi-temporal speckle filtering in snappy.

I want to perform multi-temporal speckle filtering on two back-geocoded Sentinel-1 SLC images to get the average backscatter. My processing chain is: calibrate and save complex -> Back-Geocoding -> TOPSAR-Deburst -> Multi-Temporal-Speckle-Filter -> Stack-Averaging -> multilooking, terrain correction, conversion to dB and saving as GeoTIFF. It always ends with a NullPointerException.

As far as I have tracked the issue, snappy is unable to do stack averaging without writing the previous step to disk. The whole chain works fine in SNAP Desktop, and it also works in snappy when the multi-temporal speckle-filtered product is written and then reopened for stack averaging.
Maybe it is somehow connected with the source bands? When I watch stack averaging in the Desktop version, I can see that SNAP creates a temporary virtual band "Intensity" for the multi-temporal speckle-filtered file (which contains two intensity bands) and finally writes it to a new file. Perhaps snappy can't work that way?
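In code, the write-and-reopen variant that does work looks roughly like this (a sketch: the function name, the temporary path, the format and the deferred import are my own choices, not anything required by snappy):

```python
def write_and_reload(product, tmp_path="spk.dim"):
    """Sketch of the workaround: materialise the speckle-filtered
    product on disk and read it back, so that Stack-Averaging gets a
    fully written product instead of a lazily computed one."""
    import snappy  # deferred: the function can be defined without a running JVM
    snappy.ProductIO.writeProduct(product, tmp_path, "BEAM-DIMAP")
    return snappy.ProductIO.readProduct(tmp_path)
```

Stack-Averaging is then applied to the product returned by this function instead of the in-memory speckle-filtered target.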

Can somebody suggest a workaround besides writing the product and reading it again?
Or is the order of my processing steps incorrect?

I have a very similar problem. Any help would be greatly appreciated.

As snappy is just the bridge between Java and Python, I don't think it is to blame for this error.
Maybe the configuration of the processing chain is not yet right, or the configuration of snappy (in terms of memory usage).

Can you provide the stack trace of the NullPointerException? Maybe you need to enable the debug mode for snappy in the snappy directory.
Set line 54 to
debug = True
Then you should see more output.
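For orientation, the relevant section of the configuration file (presumably snappy.ini in the snappy directory; exact line numbers and defaults vary between versions) looks something like this:

```
[DEFAULT]
snap_home = C:\Program Files\snap
java_max_mem: 4G
# snap_start_engine: False
debug: True
```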

I assume you have already increased the memory settings for snappy, otherwise you would probably have other issues.

But it can be that the configuration is still not valid. Maybe you can show us the critical part of your script.

Thanks for your reply.
Yes, it might be something in the processing chain, as it is actually quite difficult to understand which tools should be used in which order. The processing chains in different forum topics also vary, so there is a lot of experimenting with what happens when something is changed, and whether it works or not.

The output didn't change with or without debugging; the problematic part of it was:

Wed Apr 12 16:27:38 2017, Calculating Maximum backscatter: d:\file1.dim
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/C:/Program%20Files/snap/snap/modules/ext/org.esa.snap.snap-netcdf/org-slf4j/slf4j-simple.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/C:/…/SNAP/modules/ext/org.esa.snap.snap-netcdf/org-slf4j/slf4j-simple.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]
Wed Apr 12 16:27:40 2017 Writing: statsname.tif
Traceback (most recent call last):
  File "", line 395, in <module>
    statsbackscatter(file1, 'Maximum')
  File "", line 311, in statsbackscatter
    snappy.ProductIO.writeProduct(targetDB, productpath, 'GeoTIFF-BigTIFF')
RuntimeError: org.esa.snap.core.gpf.OperatorException: java.lang.NullPointerException

The exception is thrown inside the function statsbackscatter(file1, stat).
This part works well if I write the speckle-filtered product and open it again (the commented lines between ###test###):

print("\n", time.asctime(time.localtime()), "\nCalculating", stat, "backscatter", file1)
source1 = snappy.ProductIO.readProduct(file1)
parameters = HashMap()
targetTOPSARDeburst = GPF.createProduct("TOPSAR-Deburst", parameters, source1)
# multi-temporal speckle filter
parameters = HashMap()
targetSpk = GPF.createProduct("Multi-Temporal-Speckle-Filter", parameters, targetTOPSARDeburst)
###test###
#testpath = "spk.dim"
#print(time.asctime(time.localtime()), "Writing:", testpath)
#snappy.ProductIO.writeProduct(targetSpk, testpath, "BEAM-DIMAP")
#targetSpk = snappy.ProductIO.readProduct(testpath)
###test###
# stack averaging
parameters = HashMap()
parameters.put("statistic", stat)
targetStat = GPF.createProduct("Stack-Averaging", parameters, targetSpk)
# multilooking
parameters = HashMap()
targetMultilook = GPF.createProduct("Multilook", parameters, targetStat)
# terrain correction with an external DEM
parameters = HashMap()
parameters.put("demName", "External DEM")
parameters.put("externalDEMFile", "dtm.tif")
parameters.put("nodataValueAtSea", "false")
targetTerrain = GPF.createProduct("Terrain-Correction", parameters, targetMultilook)
# linear to dB
parameters = HashMap()
targetDB = GPF.createProduct("LinearToFromdB", parameters, targetTerrain)
productpath = path + statsname
print(time.asctime(time.localtime()), "Writing:", statsname)
snappy.ProductIO.writeProduct(targetDB, productpath, "GeoTIFF-BigTIFF")
# clean memory
# clean memory

I hope these parts help to find out what is going wrong.

It would be interesting to know at which point exactly the NullPointerException is thrown.
It must be in one of the operators. Maybe it is caused by a memory issue.
Have you changed the memory settings for snappy?
Have you followed these threads?

It is strange that the output did not change.
What happens if you also modify line 75 to

jpy.diag.flags = jpy.diag.F_ALL

I changed the memory settings using some lines I also found in this forum:

# To avoid RuntimeError: java.lang.OutOfMemoryError: Java heap space
print("Current _JAVA_OPTIONS: '" + os.environ.get('_JAVA_OPTIONS', 'Not Set') + "'")
print("will be changed to '-Xmx4096m' to avoid OutOfMemoryError")
os.environ["_JAVA_OPTIONS"] = "-Xmx4096m"
os.system('export _JAVA_OPTIONS=-Xmx4096m')
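One caveat with those lines (my own observation, not from the forum thread): os.system starts a separate child shell, so its export never reaches the current Python process. Only the os.environ assignment matters, and it must run before snappy is first imported, because the JVM reads _JAVA_OPTIONS at startup. A minimal sketch:

```python
import os

# Must happen before `import snappy`: the JVM reads _JAVA_OPTIONS when
# it is launched, and snappy launches the JVM on first import.
os.environ["_JAVA_OPTIONS"] = "-Xmx4096m"

# By contrast, this spawns a child shell whose `export` dies with it,
# so it has no effect on the current process:
# os.system('export _JAVA_OPTIONS=-Xmx4096m')

print(os.environ["_JAVA_OPTIONS"])  # -Xmx4096m
```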

Is it worse than changing
Also, changing memory settings in or changed nothing for this exception.
Even more: jpy.diag.flags = jpy.diag.F_ALL also did not give more detailed information about the error. Argh.

I couldn't imagine a better way to see which step threw the NullPointerException before posting here, so I tried writing the target product after different processing steps, starting from the end: there was a NullPointerException for every operator that follows Multi-Temporal-Speckle-Filter. That's why I guessed that the problem somehow appears when stack-averaging the multi-temporal-filtered data. Everything also works fine if I skip the speckle filtering (deburst -> averaging -> and so on).
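The bisecting I did can be wrapped in a small helper (hypothetical names, my own sketch; assumes a configured snappy): write each intermediate product to disk and see which write raises the exception.

```python
def dump_intermediate(product, name, out_dir="debug_dumps"):
    """Write one intermediate product to BEAM-DIMAP so the failing
    operator can be pinned down: the first write whose chain includes
    the broken operator raises the NullPointerException."""
    import os
    import snappy  # deferred import: defining the helper needs no JVM
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, name + ".dim")
    snappy.ProductIO.writeProduct(product, path, "BEAM-DIMAP")
    return path

# usage sketch:
# dump_intermediate(targetSpk, "after_speckle")    # works
# dump_intermediate(targetStat, "after_stackavg")  # raises the exception
```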

Probably 4 GB of memory is not enough. What I have learned in the meantime is that you should have at least 8 GB if you want to process S1 data.