Handling Sentinel data in Python (working with snappy)


#21

You should call GPF to invoke the operator. After reading the products, do:

from snappy import GPF, HashMap, ProductIO, jpy

parameters = HashMap()
parameters.put('selectedPolarisations', 'hh,vv')  # or whatever is necessary here
products = jpy.array('org.esa.snap.core.datamodel.Product', 2)
products[0] = sentinel1
products[1] = sentinel2
result = GPF.createProduct('SliceAssembly', parameters, products)
ProductIO.writeProduct(result, r'C:\data\temp\result.nc', 'NetCDF4-CF')

#22

This is really interesting. I have never needed such a structure so far… but I will take note… for future tutorials :wink:


#23

Hi @marpet,

Sorry for disturbing you again! I have tried the code you mentioned, but the result.dim only included Metadata, Vector Data and Tie-Point Grids. The Bands had disappeared.

I’m new to snappy, so I don’t really know how to deal with this problem.

My script now is as below:

from snappy import ProductIO
from snappy import GPF
from snappy import jpy

path_1 = r"L:\Sentinel images\track_69\test\S1A_IW_SLC__1SSV_20141022T100021_20141022T100051_002941_003570_A0CE.zip"
sentinel_1 = ProductIO.readProduct(path_1)

path_2 = r"L:\Sentinel images\track_69\test\S1A_IW_SLC__1SSV_20141022T100049_20141022T100119_002941_003570_8A60.zip"
sentinel_2 = ProductIO.readProduct(path_2)

from snappy import HashMap

parameters = HashMap()
parameters.put('selectedPolarisations', 'hh,vv')  # or whatever is necessary here
products = jpy.array('org.esa.snap.core.datamodel.Product', 2)
products[0] = sentinel_1
products[1] = sentinel_2
result = GPF.createProduct('SliceAssembly', parameters, products)
ProductIO.writeProduct(result, r'L:\Sentinel images\track_69\result.dim', 'BEAM-DIMAP')

Many thanks!


#24

Hi @marpet,

I have solved this problem!
I changed java_max_mem to 12G in snappy.ini, and everything works fine!
Now I can run SliceAssembly on my images automatically with Python!
Thank you very very much!!


#25

Hi @marpet,

I’m trying to run Slice Assembly with more than 70 images. The script runs successfully, but I have another problem with the large temp files generated while processing with snappy.

The temp files appear in .snap\var\cache\temp, and I can’t delete them while my Python script is running. I need to close Python before I can delete the temp files, because the system reports that Python is using all of them, even after I have stopped the script.

Is there any method to solve this problem?

Thank you for your kind help!

Best,
Sharon


#26

No, this is not possible. There is no good control over the cache.
Maybe it helps if you disable the FileCache in the settings.
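For completeness, the setting can also be toggled outside the GUI by editing SNAP's properties file. The sketch below is a generic Java-style properties editor; the file location (`~/.snap/etc/snap.properties`) and the property name `snap.gpf.disableTileCache` are assumptions you should verify against your own SNAP installation.

```python
import os

def set_property(path, key, value):
    """Set 'key = value' in a Java-style .properties file, replacing an
    existing entry for the key or appending a new one."""
    lines = []
    if os.path.exists(path):
        with open(path) as f:
            lines = f.read().splitlines()
    replaced = False
    for i, line in enumerate(lines):
        stripped = line.strip()
        # skip comments; match on the text before the '=' separator
        if not stripped.startswith('#') and stripped.split('=')[0].strip() == key:
            lines[i] = '%s = %s' % (key, value)
            replaced = True
    if not replaced:
        lines.append('%s = %s' % (key, value))
    with open(path, 'w') as f:
        f.write('\n'.join(lines) + '\n')

# Hypothetical usage -- adjust the path and property name to your setup:
# set_property(os.path.expanduser('~/.snap/etc/snap.properties'),
#              'snap.gpf.disableTileCache', 'true')
```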


#27

Hi @marpet,

I have tried what you said, and the temp files are no longer generated during processing.
But another error appeared:
"RuntimeError: java.lang.OutOfMemoryError: Java heap space"
or
"RuntimeError: java.lang.OutOfMemoryError: GC overhead limit exceeded"

I also increased java_max_mem from 16G to 20G in snappy.ini, but nothing changed.
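One quick sanity check is to read the setting back from snappy.ini before starting, to confirm the file you edited is the one being used. This sketch assumes the common `java_max_mem: <size>` line format; adjust the parsing if your file differs:

```python
def get_java_max_mem(ini_path):
    """Return the java_max_mem value from a snappy.ini file, or None
    if the key is absent. Accepts 'java_max_mem: 20G' or '... = 20G'."""
    with open(ini_path) as f:
        for line in f:
            line = line.strip()
            if line.startswith('java_max_mem'):
                # split on the first ':' or '=' delimiter found
                for sep in (':', '='):
                    if sep in line:
                        return line.split(sep, 1)[1].strip()
    return None
```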

Thank you for your help!


#28

Hello,
I would like to save the Sentinel-2 bands as GeoTIFF files. Can you help me, please?

#!/usr/bin/env python
""" 
This puthon code is designed to process Sentinel-2 imgages 
"""
import numpy
import numpy as np
import snappy
from snappy import Product
from snappy import ProductData
from snappy import ProductIO
from snappy import ProductUtils
from snappy import FlagCoding
from snappy import HashMap
import os, gc   
from snappy import GPF
import matplotlib.pyplot as plt


GPF.getDefaultInstance().getOperatorSpiRegistry().loadOperatorSpis()
HashMap = snappy.jpy.get_type('java.util.HashMap')

#Now loop through all Sentinel-2 data sub folders that are located within a super folder (of course, make sure, that the data is already unzipped):

outputpath= "/home/ndjamai/job/S2A_L2A/"
file_name='/home/ndjamai/job/S2A_L2A/S2A_USER_MTD_SAFL2A_PDMC_19700101T000000_R055_V20160918T173952_20160918T173954.xml'
    
print("Reading...")
product = ProductIO.readProduct(file_name)
width = product.getSceneRasterWidth()
height = product.getSceneRasterHeight()
name = product.getName()
description = product.getDescription()
band_names = product.getBandNames()

print("Product:     %s, %s" % (name, description))
print("Raster size: %d x %d pixels" % (width, height))
print("Start time:  " + str(product.getStartTime()))
print("End time:    " + str(product.getEndTime()))
print("Bands:       %s" % (list(band_names)))

red = product.getBand('B7')
NIR = product.getBand('B8')

# SMAPVEX16 site
WKTReader = snappy.jpy.get_type('com.vividsolutions.jts.io.WKTReader')
wkt = "POLYGON((-98.16 49.83, -97.75 49.83,-97.75 49.33,-98.16 49.33,-98.16 49.83))"   
geom = WKTReader().read(wkt)
parameters = HashMap()
parameters.put('geoRegion', geom)
parameters.put('outputImageScaleInDb', False)

subset = outputpath + "band_subset"
target_1 = GPF.createProduct("Subset", parameters, red)
ProductIO.writeProduct(target_1, subset, 'Geotiff')

#29

There are two problems I currently see.
The Subset operator does not have the parameter outputImageScaleInDb; you should remove it.
Instead, add a parameter specifying which bands you would like to have in your target product.

parameters.put('sourceBands', 'B7')

The call to GPF.createProduct() should get the source product as the last parameter, not a band.

target_1 = GPF.createProduct("Subset", parameters, product)

This way it might work.

Actually, you don’t even need to use Python for this (of course you can if you like); it can also be achieved by writing a graph XML file.
I published an example here:


#30

Thanks, it works!!
I’m using a Sentinel-2 L2A product, and I need the SCL, WVP and AOT maps.
Using PCI I can see and export these maps easily (see Fig.1).
But with SNAP, I can not find them (Fig.2).
Even when I read the available bands using snappy, these bands are absent.
############################################################################
file_name = '/home/ndjamai/job/S2A_L2A/S2A_USER_PRD_MSIL2A_PDMC_20160902T183652_R098_V20160901T173902_20160901T174728.SAFE//S2A_USER_MTD_SAFL2A_PDMC_20160902T183652_R098_V20160901T173902_20160901T174728.xml'

print("Reading...")
product = ProductIO.readProduct(file_name)
width = product.getSceneRasterWidth()
height = product.getSceneRasterHeight()
name = product.getName()
description = product.getDescription()
band_names = product.getBandNames()

print("Product: %s, %s" % (name, description))
print("Raster size: %d x %d pixels" % (width, height))
print("Start time: " + str(product.getStartTime()))
print("End time: " + str(product.getEndTime()))
print("Bands: %s" % (list(band_names)))

Product: S2A_USER_MTD_SAFL2A_PDMC_20160902T183652_R098_V20160901T173902_20160901T174728, None
Raster size: 10980 x 10980 pixels
Start time: 01-SEP-2016 17:39:02.026000
End time: 01-SEP-2016 17:47:28.547000
Bands: ['B2', 'B3', 'B4', 'B5', 'B6', 'B7', 'B8', 'B8A', 'B11', 'B12', 'quality_aot', 'quality_wvp', 'quality_cloud_confidence', 'quality_snow_confidence', 'quality_scene_classification', 'view_zenith_mean', 'view_azimuth_mean', 'sun_zenith', 'sun_azimuth', 'view_zenith_B1', 'view_azimuth_B1', 'view_zenith_B2', 'view_azimuth_B2', 'view_zenith_B3', 'view_azimuth_B3', 'view_zenith_B4', 'view_azimuth_B4', 'view_zenith_B5', 'view_azimuth_B5', 'view_zenith_B6', 'view_azimuth_B6', 'view_zenith_B7', 'view_azimuth_B7', 'view_zenith_B8', 'view_azimuth_B8', 'view_zenith_B8A', 'view_azimuth_B8A', 'view_zenith_B9', 'view_azimuth_B9', 'view_zenith_B10', 'view_azimuth_B10', 'view_zenith_B11', 'view_azimuth_B11', 'view_zenith_B12', 'view_azimuth_B12']
######################################################################
I don’t know what the problem is.

Fig.1

Fig.2


#31

Fig.2


#32

Yes, you have asked this already in the other thread:
Need help for opening Sentinel-2 data.
I will answer there.


#33

Hello,

marpet,

I have one question. I need to unzip Sentinel-1 and Sentinel-2 images (using only Python code), but the files are very big. I can do it when the size is around one gigabyte, but I have problems when it is larger than that.

Do you know of any Python code for unzipping Sentinel files?

Thanks.


#34

Sorry, no idea. And I guess you have already tried something like the following:

But you could also invoke system commands to unzip the data. Maybe this works better.
Were you able to unzip the data on the command line?
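As a sketch of that idea: the standard library's shutil.unpack_archive sometimes copes where a hand-rolled zipfile loop fails, and falling back to an external tool is easy via subprocess. The '7z' command name here is an assumption; substitute whatever unzip tool is installed on your system.

```python
import shutil
import subprocess

def unzip(zip_path, out_dir):
    """Extract a zip archive, falling back to an external system tool
    if the standard library cannot handle it."""
    try:
        shutil.unpack_archive(zip_path, out_dir, 'zip')
    except Exception:
        # '7z' is a hypothetical fallback -- replace with the tool you
        # actually have (e.g. 'unzip' on Linux, 7-Zip on Windows).
        subprocess.check_call(['7z', 'x', '-o' + out_dir, zip_path])
```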


#35

If you are on Linux, there are a couple of ways to mount a .zip file as a read-only filesystem.


#36

I’m using this code:

import os
import zipfile

dir_name = 'path_to_images'
extension = '.zip'
namfil1 = 'S2A_MSIL1C_'
namfil2 = 'S1A_IW_SLC_'
namfil3 = 'S2A_OPER_'
for item in os.listdir(dir_name):  # loop through items in dir
    # check for the ".zip" extension and one of the Sentinel name prefixes
    if item.endswith(extension) and (item.startswith(namfil1) or item.startswith(namfil2) or item.startswith(namfil3)):
        file_name = os.path.join(dir_name, item)
        zip_ref = zipfile.ZipFile(file_name)  # create zipfile object
        zip_ref.extractall(dir_name)  # extract into dir
        zip_ref.close()  # close file

I can unzip files of 5 GB, but for Sentinel-2 files in the S2A_OPER_PRD_MSIL1C_PDMC format I can’t unzip them at all, whether the image is 700 MB or 5 GB.

Any idea?
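One thing worth trying (a sketch, not a guaranteed fix) is extracting the members one at a time with Zip64 support explicitly enabled. That keeps memory use low and, if one entry is the culprit, tells you which member fails instead of aborting the whole extractall:

```python
import zipfile

def extract_large_zip(zip_path, out_dir):
    """Extract a (possibly >4 GB) zip member by member, so a failure
    points at the offending entry rather than the whole archive."""
    with zipfile.ZipFile(zip_path, 'r', allowZip64=True) as zf:
        for member in zf.infolist():
            try:
                zf.extract(member, out_dir)
            except Exception as e:
                print('failed on %s: %s' % (member.filename, e))
                raise
```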


#37

I’m on Windows; the problem requires using this OS. :frowning:

Any other idea?


#38

Try it without unzipping. If you want to open standard S1 or S2 data, just do:

product = ProductIO.readProduct(productPath)

#39

A post was split to a new topic: Intersection between geoJson and product


#40

Hi Marco,

I’m resampling a Sentinel-2 product using the suggested operator, result = snappy.GPF.createProduct('Resample', parameters, source), and I realize that it fills the hard drive with data. I use the resampled imagery to create subsets for more than 10 image sets; once the subsets are created I no longer need the full resampled imagery. What I realized is that createProduct is, by default, writing the resampled imagery to the folder C:\Users\The Specialist\AppData\Local\Temp\snap-The Specialist. I integrated the suggested method in a function that is called in a loop to create the image subsets.
If I run the script several times, the HDD fills up.
Can you help me with this issue? I would like to free the hard-drive space after resampling and subsetting each image.
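A workaround sketched below is to sweep the SNAP temp folder between loop iterations. The folder path is the one reported above and may differ per machine, and entries still held open by the JVM cannot be removed until the product is released (disposing products you are done with, e.g. result.dispose(), should free their file handles first):

```python
import os
import shutil

def clear_snap_temp(temp_dir):
    """Delete everything inside SNAP's temp folder, silently skipping
    entries that are still locked by a running JVM."""
    if not os.path.isdir(temp_dir):
        return
    for name in os.listdir(temp_dir):
        path = os.path.join(temp_dir, name)
        try:
            if os.path.isdir(path):
                shutil.rmtree(path)
            else:
                os.remove(path)
        except OSError:
            pass  # still in use -- removable after dispose() or exit
```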

Looking forward to your reply.

George