I am trying to use snappy to preprocess S1 SLC data. I have many files to process and don't want to sit there running multiple graphs by hand, even as batch processes. Is this idea realistic?
Currently I am trying to implement the TOPSAR-Split operator with

def module1_swath_n(product, swath_number):
    band_names = product.getBandNames()
    swath_bands = [band for band in band_names if f"IW{swath_number}" in band]
    parameters = snappy.HashMap()
    parameters.put('selectedPolarisations', swath_bands)
    split_swath = snappy.GPF.createProduct('TOPSAR-Split', parameters, product)
    return split_swath
and am running into the
ValueError Traceback (most recent call last)
Cell In[56], line 1
----> 1 module1_swath_n(product, 1)
Cell In[55], line 11, in module1_swath_n(product, swath_number)
6 #TOPSARSplitOp = snappy.jpy.get_type('org.esa.s1tbx.sentinel1.gpf.TOPSAR-Split')
7 #split_product = snappy.TopSARSplitOp(product, subswath=f"IW{swath_number}", selectedPolarisations=swath_pol_bands)
8
9 # TOPSAR Split
10 parameters = snappy.HashMap()
---> 11 parameters.put('selectedPolarisations', swath_bands)
13 split_swath = snappy.GPF.createProduct('TOPSAR-Split', parameters, product)
ValueError: cannot convert a Python 'list' to a Java 'java.lang.Object'
problem, as detailed (ultimately with no answer) in this thread.
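For what it's worth, I suspect the conversion error comes from jpy not knowing how to turn a Python list into a `java.lang.Object`, and SNAP operator parameters are ultimately strings, so passing a comma-separated string instead of a list might sidestep it. A minimal sketch of that idea (the helper `polarisations_param` is my own, and I'm assuming `TOPSAR-Split` wants polarisations like "VH,VV" rather than full band names, plus a separate 'subswath' parameter):

```python
def polarisations_param(swath_bands):
    """Collapse S1 SLC band names like 'i_IW1_VH' into a 'VH,VV'-style
    comma-separated parameter string.

    Assumes the polarisation is the trailing underscore-separated token
    of each band name, which is how S1 SLC bands appear to be named.
    """
    pols = []
    for band in swath_bands:
        pol = band.split('_')[-1]
        if pol not in pols:          # keep order, drop duplicates
            pols.append(pol)
    return ','.join(pols)

# Intended usage with snappy (untested sketch, not the confirmed API usage):
# parameters = snappy.HashMap()
# parameters.put('subswath', f'IW{swath_number}')
# parameters.put('selectedPolarisations', polarisations_param(swath_bands))
# split_swath = snappy.GPF.createProduct('TOPSAR-Split', parameters, product)
```

I don't know whether a comma-separated string is the intended way to pass multi-valued parameters, but it at least avoids the list-to-Object conversion.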
A solution to the above problem would be great, but I'd also like an answer to the broader question: is it realistic for someone who has some coding experience, but is definitely not a software engineer, to fully implement S1 SLC preprocessing, from the SAFE.zip all the way to dual-pol eigen decomposition and so on?
Edit: just to clarify, the end of the linked thread says something like "passing lists as parameters is meant to be implemented in the future, but that hasn't happened yet".