Export Object Detections vector data

Hello,

For what I’m doing, I cannot use the SNAP desktop or a graph-based option. I can only use python and the snappy module.

I"m using the snappy module in Python (2.7) and have successfully ran through a tutorial on ocean object (ship) detection. However, I want to export the ship detections as a vector just like the ESA SNAP desktop application does. I’m not seeing any documentation on vector export and the only possible option in File/Export I see is CSV for this purpose. Ideally, a geojson or ESRI ShapeFile would be ideal, but even if CSV is all I can get, that is fine.

Right now, I’ve adapted the code from the ship detection graph tutorial. It references ${outputproduct1} and ${outputproduct2}, with the second one being the GeoTIFF. For the first one (which I am assuming is the vector), I write it as ProductIO.writeProduct(write_1, write_1_prefix, 'CSV'), where write_1 is the product from GPF.createProduct and write_1_prefix is the file path to save to. The second one is a GeoTIFF, which writes fine.

When I do this, I get a > 30 GB file with no extension for the write_1 output. This surely isn’t a CSV of vector data; I’m not sure what it is.

Any idea on how I can export the ship detections?

One other possible option is to somehow export the ship detection mask raster in the Object-Discrimination folder (titled Sigma0_VH_shp_bit_msk.img). Is that possible?

Update: Code is below:

##############################################################################

import snappy

from snappy import ProductIO
from snappy import HashMap

import os, gc   
from snappy import GPF


GPF.getDefaultInstance().getOperatorSpiRegistry().loadOperatorSpis()
HashMap = snappy.jpy.get_type('java.util.HashMap')

import time
start = time.time()
path = "C:\\Users\\username\\Downloads\\S1A_IW_GRDH_1SDV_20190313T145036_20190313T145101_026321_02F14F_426A\\"
for folder in os.listdir(path):
    gc.enable()
    
    print(folder)
    
    output = path + folder + "\\"  
    timestamp = folder.split("_")[4] 
    date = timestamp[:8]
    #Then, read in the Sentinel-1 data product:

    sentinel_1 = ProductIO.readProduct(output + "manifest.safe")
    print(sentinel_1)

    pols = ['VH','VV'] 
    for p in pols:  
        polarization = p    
      
        # APPLY ORBIT FILE
        
        parameters = HashMap()
        parameters.put('orbitType', 'Sentinel Precise (Auto Download)')
        parameters.put('polyDegree', '3')
        parameters.put('continueOnFail', 'false')
        s1_orbit_applied = GPF.createProduct('Apply-Orbit-File', parameters, sentinel_1)
        file_prefix=output + date + "_orbit_applied_"+polarization
        ProductIO.writeProduct(s1_orbit_applied,file_prefix,'BEAM-DIMAP')
        
        del parameters
        
        # LAND-SEA-MASK
        
        orbit = ProductIO.readProduct(file_prefix + ".dim")
        parameters = HashMap()
        parameters.put('useSRTM', True)
        parameters.put('landMask', True)
        parameters.put('shorelineExtension','10')
        parameters.put('sourceBands', 'Intensity_' + polarization) 
        s1_landseamask = GPF.createProduct('Land-Sea-Mask', parameters, orbit)
        mask_prefix=output + date + "_land-sea-mask_"+polarization
        ProductIO.writeProduct(s1_landseamask,mask_prefix,'BEAM-DIMAP')
        
        del parameters
        
        # CALIBRATION
        
        masked = ProductIO.readProduct(mask_prefix + ".dim")
        parameters = HashMap() 
        parameters.put('outputSigmaBand', True) 
        parameters.put('sourceBands', 'Intensity_' + polarization) 
        parameters.put('selectedPolarisations', polarization) 
        #parameters.put('outputImageScaleInDb', True)  

        calib_applied = output + date + "_calibrate_" + polarization 
        s1_calibrated = GPF.createProduct("Calibration", parameters, masked) 
        ProductIO.writeProduct(s1_calibrated, calib_applied, 'BEAM-DIMAP')
        del parameters
        

        # ADAPTIVETHRESHOLDING

        calibration = ProductIO.readProduct(calib_applied + ".dim")
        
        parameters = HashMap()
        parameters.put('targetWindowSizeInMeter','30')
        parameters.put('guardWindowSizeInMeter','500.0')
        parameters.put('backgroundWindowSizeInMeter','800.0')
        parameters.put('pfa','12.5')

        adapt_thresh_prefix = output + date + "_adapt-thresh_" + polarization
        adapt_thresh = GPF.createProduct("AdaptiveThresholding", parameters, calibration)
        ProductIO.writeProduct(adapt_thresh, adapt_thresh_prefix, 'BEAM-DIMAP')
        
        del parameters

        # Object-Discrimination
        
        adaptThreshold = ProductIO.readProduct(adapt_thresh_prefix + ".dim")
        
        parameters = HashMap() 
        parameters.put('minTargetSizeInMeter','30')
        parameters.put('maxTargetSizeInMeter','600')
        objdisc_prefix = output + date + "_obj-discrimination_" + polarization 
        objdisc = GPF.createProduct("Object-Discrimination", parameters, adaptThreshold) 
        ProductIO.writeProduct(objdisc, objdisc_prefix, 'BEAM-DIMAP')
        del parameters
        
        # Write 1
        
        obj_disc = ProductIO.readProduct(objdisc_prefix + ".dim")
        
        parameters = HashMap()
        parameters.put('file','${outputproduct1}')
        #parameters.put('formatName','CSV')
        write_1_prefix = output + date + "_prod_1_" + polarization 
        write_1 = GPF.createProduct("Write", parameters, obj_disc) 
        ProductIO.writeProduct(write_1, write_1_prefix, 'BEAM-DIMAP')
        del parameters
        
        # Write 2
        
        obj_disc = ProductIO.readProduct(objdisc_prefix + ".dim")
        
        parameters = HashMap()
        parameters.put('file','${outputproduct2}')
        #parameters.put('formatName','Geotiff')
        write_2_prefix = output + date + "_prod_2_" + polarization 
        write_2 = GPF.createProduct("Write", parameters, obj_disc) 
        ProductIO.writeProduct(write_2, write_2_prefix, 'GeoTIFF')
        del parameters

I was able to extract the ship detections via the detection mask provided in the Object-Discrimination results and create a vector file that way.
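The mask-to-vector approach above could be sketched roughly as follows. This is a minimal sketch, not the exact code used: the band name Sigma0_VH_shp_bit_msk and the readPixels call are assumptions about the Object-Discrimination output, and the connected-component labeling is plain NumPy so it runs without SNAP installed.

```python
import numpy as np

def read_mask_band(dim_path, band_name='Sigma0_VH_shp_bit_msk'):
    """Read a mask band from a BEAM-DIMAP product into a 2-D numpy array.

    Requires snappy; imported lazily so the labeling helper below
    works on its own. The band name is assumed from the
    Object-Discrimination output and may differ in your product."""
    from snappy import ProductIO
    product = ProductIO.readProduct(dim_path)
    band = product.getBand(band_name)
    w, h = band.getRasterWidth(), band.getRasterHeight()
    data = np.zeros(w * h, np.float32)
    band.readPixels(0, 0, w, h, data)
    return data.reshape(h, w)

def mask_to_detections(mask):
    """Label 4-connected components of a binary mask and return the
    (row, col) centroid of each component -- one per detected ship."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for r in range(mask.shape[0]):
        for c in range(mask.shape[1]):
            if mask[r, c] and not labels[r, c]:
                current += 1
                labels[r, c] = current
                stack = [(r, c)]
                while stack:  # flood-fill this component
                    y, x = stack.pop()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and not labels[ny, nx]):
                            labels[ny, nx] = current
                            stack.append((ny, nx))
    centroids = []
    for i in range(1, current + 1):
        ys, xs = np.nonzero(labels == i)
        centroids.append((ys.mean(), xs.mean()))
    return centroids
```

To get lat/lon per ship, the pixel centroids can then be pushed through the product's geocoding (e.g. product.getSceneGeoCoding().getGeoPos(...) in recent SNAP versions) before writing the rows out as CSV or GeoJSON.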

Another question, is the snappy API able to read non-SAFE Sentinel-1 data, or does it need that XML to run?

Via the snappy API you can read any data which SNAP is able to read, e.g. GeoTIFF, NetCDF, JP2, HDF, …
You just need to use ProductIO.readProduct(…)

Just a hint regarding the script you provided above:
you import HashMap twice, once via

from snappy import HashMap

and via

HashMap = snappy.jpy.get_type('java.util.HashMap')

You can remove the second one.

Also, the call to load the operator SPIs has not been necessary since SNAP 5 or so:

GPF.getDefaultInstance().getOperatorSpiRegistry().loadOperatorSpis()

@aesarnut Can you please explain how you converted the ship detection mask to a vector file using snappy?

I ran the code above successfully up to the AdaptiveThresholding step, but the Object-Discrimination step is not generating a CSV file. Can you suggest any modifications to the code above to get a CSV or XML file?

I am unable to generate a CSV file while coding in snappy. Kindly help.

Hi. I understand we can access the mask of detected ships via the band ship_bit_msk, as @aesarnut mentioned. However, obtaining ship locations and sizes from that would require more processing to find the segments, when this information is already in the ShipDetections vector file (if we open this file in SNAP, it shows a table containing geometry, size, and lat/lon for every ship detected). Does anyone know a way to access the data inside the ShipDetections vector file from snappy?
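For what it’s worth, SNAP keeps vector data in the product’s vector data group, so something along these lines might work. This is a sketch only: the node name ShipDetections and the Detected_* attribute names are assumptions based on what the desktop table shows, and the feature iteration goes through the underlying GeoTools collection.

```python
def read_ship_detections(dim_path):
    """Return a list of per-ship attribute dicts from the ShipDetections
    vector node of an Object-Discrimination output product.

    Requires snappy (imported lazily). Attribute names are assumed from
    the SNAP desktop table and may differ between SNAP versions."""
    from snappy import ProductIO
    product = ProductIO.readProduct(dim_path)
    vdn = product.getVectorDataGroup().get('ShipDetections')
    if vdn is None:
        return []
    ships = []
    it = vdn.getFeatureCollection().features()  # GeoTools FeatureIterator
    while it.hasNext():
        f = it.next()
        ships.append({
            'geometry': str(f.getDefaultGeometry()),
            'lat': f.getAttribute('Detected_lat'),
            'lon': f.getAttribute('Detected_lon'),
            'width': f.getAttribute('Detected_width'),
            'length': f.getAttribute('Detected_length'),
        })
    return ships

def ships_to_csv(ships):
    """Format the dicts returned above as CSV text (pure Python)."""
    lines = ['lat,lon,width,length']
    for s in ships:
        lines.append('%s,%s,%s,%s' % (s['lat'], s['lon'],
                                      s['width'], s['length']))
    return '\n'.join(lines)
```

The resulting CSV string can then be written to disk, or the dicts converted to GeoJSON features with any standard library.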

I might have found a solution to it. I wasn’t getting any ShipDetections.csv via snappy either when I was saving products with snappy.ProductIO.writeProduct().

I tried everything I could think of to see why it didn’t work like it does in SNAP Desktop. Then I suddenly got it to work by using snappy.GPF.writeProduct instead. I don’t really know how different these are, but I have been getting the CSVs almost every time.
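For anyone hitting the same thing, here is a minimal sketch of that GPF.writeProduct call. The java.io.File and ProgressMonitor lookups via jpy are assumptions about the usual snappy bridging; incremental is set to False to force a full write.

```python
def write_with_gpf(product, path, fmt='BEAM-DIMAP'):
    """Write a product via GPF.writeProduct instead of
    ProductIO.writeProduct.

    Requires snappy (imported lazily). With affected SNAP versions this
    also flushed the ShipDetections CSV that ProductIO sometimes missed
    (see SNAP-1498 below)."""
    import snappy
    File = snappy.jpy.get_type('java.io.File')
    ProgressMonitor = snappy.jpy.get_type('com.bc.ceres.core.ProgressMonitor')
    # incremental=False: write the whole product, including the header
    snappy.GPF.writeProduct(product, File(path), fmt, False,
                            ProgressMonitor.NULL)
```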

It’s a very strange issue to reproduce.

I hope it still helps… 2 years later hahaha

Thanks for the report @javiruizs.
This is indeed an issue.
I’ve created a ticket for this and fixed the problem.
[SNAP-1498] ProductIO does not rewrite header if product has changed during writing - JIRA (atlassian.net)
In the next release it will work also with ProductIO.