StampsExport Op with snappy

Dear @marpet, maybe you can help me using the StampsExportOp with snappy.

Here is a test dataset, which was successfully exported using the SNAP GUI.

What I tried in snappy:

# Imports needed for this snippet (HashMap comes from the JVM via jpy):
from snappy import ProductIO, GPF, jpy
HashMap = jpy.get_type('java.util.HashMap')

stack_path = '/home/thho/SBAS/snap_prepro/subset/stack/20150113_20141126.dim'
ifg_path = '/home/thho/SBAS/snap_prepro/subset/ifg/20150113_20141126_ifg.dim'

stack = ProductIO.readProduct(stack_path)
ifg = ProductIO.readProduct(ifg_path)

stack_ifg = []
stack_ifg.append(ifg)
stack_ifg.append(stack)

parameters = HashMap()
parameters.put('targetFolder', '/home/thho/SBAS/export/20150113_20141126/')
parameters.put('psiFormat', True)
stamps_exp = GPF.createProduct('StampsExport', parameters, stack_ifg)

I can run this code without an error, but it only creates the target folder '20150113_20141126', which is nice, but the folder stays empty. It also prints an output in Python:

org.esa.snap.core.datamodel.Product(objectRef=0x5630e76401d0)

But I think it should export the data to the target folder and create the subfolders described in the Op… When it is successful, the target folder (top left) and the subfolders look like this (result with the SNAP GUI):

Am I using this special Op the right way? Or is it impossible to use this Op with snappy? I also tried

ProductIO.writeProduct(output of GPF.createProduct(), ...)

But this just gave me the data in a format of my choice, not in the export structure described in the Op.

Thanks for any advice :slight_smile:

I used gpt and it worked like a charm. My change from snappy to gpt is a bit late, but it is so much faster!

Did someone find a solution for this?

Apparently, using snappy with GPF.createProduct('StampsExport', parameters, stack_ifg) doesn't write the output to the folder; it only creates the target folder. This means it runs successfully until line 120 here StampsExportOp.java#L120, but we don't know what goes wrong after that, since no error appears in the output.

parameters = HashMap()
parameters.put('targetFolder', '/home/thho/SBAS/export/20150113_20141126/')
parameters.put('psiFormat', True)
stamps_exp = GPF.createProduct('StampsExport', parameters, stack_ifg)

@marpet do you have any idea of what is happening here? If I run the same with gpt through the graph in snap2stamps, it works fine. Doesn't gpt also call the same createProduct method?

Apparently the initialize() method doesn't actually write to the output folder. The only method that could be helpful and actually writes to the output folder is writeProjectedDEM(), but it is private and can't be called from jpy. Is there any alternative? Or does anyone have an idea of how this method is supposed to be called?

The GPF.createProduct() method creates only an in-memory representation of the product.
No data is calculated up to this point; the processing only takes place when the data is accessed or written.
If you want to write the product to disk you need to do:

ProductIO.writeProduct(target_product, '<your/out/directory>', write_format)
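Putting the steps above together, here is a minimal sketch of the read → createProduct → writeProduct pattern. The paths and the operator name ('Subset') are placeholders, not from the original post, and the import guard only lets the sketch be pasted on a machine without a SNAP/snappy install:

```python
# Sketch of the generic GPF pattern marpet describes; assumes a snappy install.
try:
    from snappy import ProductIO, GPF, jpy
    HAVE_SNAPPY = True
except ImportError:
    HAVE_SNAPPY = False  # allows reading/pasting the sketch without SNAP

def run_and_write(in_path, out_path, op_name, write_format='BEAM-DIMAP'):
    """Run a single GPF operator and write the result to disk."""
    HashMap = jpy.get_type('java.util.HashMap')
    source = ProductIO.readProduct(in_path)
    # createProduct only builds the in-memory product; nothing is computed yet.
    target = GPF.createProduct(op_name, HashMap(), source)
    # writeProduct is what pulls the data through the operator chain.
    ProductIO.writeProduct(target, out_path, write_format)

if HAVE_SNAPPY:
    run_and_write('/path/to/input.dim', '/path/to/output', 'Subset')
```

Note that for most operators this is enough; as the thread goes on to show, StampsExport is special because its output is not the target product itself.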

You can find more information here:
How to use the SNAP API from Python - SNAP - Confluence (atlassian.net)

You can also find some examples in this forum:
Search results for ‘ProductIO.writeProduct #development:python’ - STEP Forum (esa.int)

Thanks for your reply. Of course I know that; the problem is that writeProduct doesn't actually trigger the StampsExport behaviour that gpt does. writeProduct just writes out the product that comes from targetProduct in the Operator subclass StampsExport.java.

As you can see from here, StampsExport takes a list of sourceProducts (it needs 2). Once createProduct is called from GPF, it triggers the initialize() method in StampsExport, which sets targetProduct to the second sourceProduct; this is not the actual output that StampsExport produces via the gpt command line.

What actually triggers StampsExport's export of files for StaMPS is computeTileStack(Map<Band, Tile> targetTiles, Rectangle targetRectangle, ProgressMonitor pm), which never gets called by createProduct() alone.

Analyzing the gpt code in snap-engine, I noticed that it calls executor.execute() on an OperatorExecutor instance, which is the only way to trigger that computeTileStack, which actually writes the files for StaMPS.

I have wrapped the part of the gpt code that does this for StampsExport with jpy.

OperatorSpiRegistry = jpy.get_type('org.esa.snap.core.gpf.OperatorSpiRegistry')
OperatorExecutor = jpy.get_type('org.esa.snap.core.gpf.internal.OperatorExecutor')
System = jpy.get_type('java.lang.System')
PrintWriterConciseProgressMonitor = jpy.get_type('com.bc.ceres.core.PrintWriterConciseProgressMonitor')
File = jpy.get_type('java.io.File')

products = HashMap()
products.put('sourceProduct', coreg)
products.put('sourceProduct.1', ifg)

params = HashMap()
targetFolder = File(output_folder)
params.put("targetFolder", targetFolder)



gpf_instance = GPF.getDefaultInstance()
operator_spi_registry = gpf_instance.getOperatorSpiRegistry()
operator_spi = operator_spi_registry.getOperatorSpi("StampsExport")

operator = operator_spi.createOperator(params, products)


executor = OperatorExecutor.create(operator)
executor.execute(PrintWriterConciseProgressMonitor(System.out))

Now the problem is a different one: there is a huge memory spike of around 4000 MB when the executor.execute(PrintWriterConciseProgressMonitor(System.out)) line is executed, and it never gets cleared. I have tried the same via Java code and it doesn't take this much memory. Do you have any idea what causes it?
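Not an answer to the root cause, but two standard knobs worth checking first (file names and keys are the usual SNAP/snappy defaults; the values below are purely illustrative): snappy starts its own JVM whose heap is configured in snappy.ini, and SNAP keeps computed tiles in a JAI tile cache, which can make memory look like it is never released.

```ini
; snappy.ini (in the snappy package directory) - heap of the JVM snappy starts
[DEFAULT]
java_max_mem: 6G

; etc/snap.properties (in the SNAP install) - cap on the JAI tile cache, in MB
; snap.jai.tileCacheSize = 1024
```

If the spike persists after capping the tile cache, calling dispose() on the products and the operator once the export finishes is also worth trying.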