Snappy: Where to start?

You can do the following:

sceneUL = PixelPos(0 + 0.5f, 0 + 0.5f);
sceneUR = PixelPos(product.getSceneRasterWidth() - 1 + 0.5f, 0 + 0.5f);
sceneLL = PixelPos(0 + 0.5f, product.getSceneRasterHeight() - 1 + 0.5f);
sceneLR = PixelPos(product.getSceneRasterWidth() - 1 + 0.5f, product.getSceneRasterHeight() - 1 + 0.5f);

this way you have the corner pixels. Afterwards you can do:

gp_ul = geoCoding.getGeoPos(sceneUL, None);
gp_ur = geoCoding.getGeoPos(sceneUR, None);
gp_ll = geoCoding.getGeoPos(sceneLL, None);
gp_lr = geoCoding.getGeoPos(sceneLR, None);

Passing None (null) makes getGeoPos allocate a fresh GeoPos each time; reusing a single gp object would leave all four variables pointing at the same position.

From these geo-positions you can retrieve lat and lon.


Compute the min and max of lat and lon and you have the geographic bounds.
Then divide the lon extent by the raster width in pixels, and the lat extent by the raster height in pixels.
This way you get pixelSizeX/Y.
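The steps above can be sketched in plain Python once the four corner geo-positions have been read out. The helper name and the (lat, lon) tuple layout are my own assumptions, not part of the snappy API:

```python
# Hypothetical helper: derive the geographic pixel size from the four
# corner geo-positions (lat, lon) and the raster size in pixels.
def pixel_sizes(corners, raster_width, raster_height):
    """corners: iterable of (lat, lon) tuples for UL, UR, LL, LR."""
    lats = [lat for lat, lon in corners]
    lons = [lon for lat, lon in corners]
    lon_extent = max(lons) - min(lons)   # east-west extent in degrees
    lat_extent = max(lats) - min(lats)   # north-south extent in degrees
    # divide the geographic extent by the raster size in pixels
    return lon_extent / raster_width, lat_extent / raster_height
```

For example, a 100 x 100 px product spanning 1 degree in each direction yields a pixel size of 0.01 degrees in both directions.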

Good afternoon,

Thank you once again. I could successfully implement it.

But now I have two questions regarding this pixel size and mosaicking.

As you mentioned above, I can read the S2 data with their different UTM zones and afterwards create a mosaic of both of them. Now I would like to keep the same resolution (e.g. 10, 20 or 60 m). I have a test file in the area of the Siachen Glacier (~36°N, India-Pakistan). If I did not make a huge mistake, then 10 m at this latitude should be ~0.0001° (pixelSizeY, east-west) and ~0.00009° (pixelSizeX, north-south). But if I run the mosaicking with such values I receive the error RuntimeError: java.lang.RuntimeException: Cannot construct DataBuffer. If I run the same script with 0.001° (~90 m, which is also the minimum value in SNAP) it works fine. Is there a reason why a better resolution is not possible?



How much memory do you have? It sounds like you do not have enough.


I tested it on my desktop (8 GB RAM) and on my laptop (16 GB RAM) with 4 DEMs (GeoTIFF, each ~25 MB) and I got the same result:

Using 0.0010 as input: it works fine
Using 0.0009 as input: RuntimeError: java.lang.RuntimeException: Cannot construct DataBuffer.

You can try to increase the JVM memory.
Have a look into the snappy directory. In the jpyconfig.py file there, change ‘jvm_maxmem = None’ to jvm_maxmem = '6G', for example.
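For illustration, the relevant line in jpyconfig.py would then look like this (the value '6G' is just an example; pick what your machine allows):

```python
# jpyconfig.py, in the snappy installation directory
jvm_maxmem = '6G'   # was: jvm_maxmem = None
```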

I always get RuntimeError: java.lang.OutOfMemoryError: Java heap space when processing S2 data, so I had already changed it as you described in Topic 1102. I changed snappy.ini in C:\Users\Andreas Baumann\.snap\snap-python\snappy:

But it did not really work.

Hi, did you guys already get the mosaic code running?
I’m failing at writing a script that makes mosaics of .dim files…
Is there anybody here who could show me an example script of how to use mosaicking with snappy?
Thanks a lot!

Good afternoon,

I was busy with other projects, so I did not continue with the mosaic code. Attached are two files regarding mosaicking. If you don’t get the “out-of-memory” error, it should theoretically work.
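For readers without the attachments, a snappy mosaicking script along these lines can be sketched as follows. This is a sketch only, assuming SNAP's Mosaic operator via GPF; the geographic bounds, pixel sizes and the band name 'B4' are placeholder assumptions, not values from the attached files:

```python
# Sketch of mosaicking .dim files with snappy's GPF Mosaic operator.
def mosaic_dim_files(input_paths, output_path):
    # Requires an installed SNAP with the snappy bindings.
    from snappy import ProductIO, GPF, jpy

    HashMap = jpy.get_type('java.util.HashMap')
    Variable = jpy.get_type('org.esa.snap.core.gpf.common.MosaicOp$Variable')

    products = [ProductIO.readProduct(p) for p in input_paths]

    # One output variable: target band name and a band-maths expression.
    variables = jpy.array('org.esa.snap.core.gpf.common.MosaicOp$Variable', 1)
    variables[0] = Variable('B4', 'B4')

    params = HashMap()
    params.put('variables', variables)
    params.put('combine', 'OR')
    params.put('crs', 'EPSG:4326')
    params.put('westBound', 76.0)    # placeholder bounds in degrees
    params.put('eastBound', 78.0)
    params.put('southBound', 35.0)
    params.put('northBound', 37.0)
    params.put('pixelSizeX', 0.001)  # degrees, cf. the ~90 m value above
    params.put('pixelSizeY', 0.001)

    mosaic = GPF.createProduct('Mosaic', params, products)
    ProductIO.writeProduct(mosaic, output_path, 'BEAM-DIMAP')
```

Something like mosaic_dim_files(['tile_a.dim', 'tile_b.dim'], 'mosaic_out') would then write the mosaic in BEAM-DIMAP format.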

Thank you for any feedback.

Andreas

Thanks a lot Andreas,
I will try it within the next few days.
When I made subsets with snappy I got the “Java heap space” error as well, if that is what you mean.
Allocating more memory didn’t work for me at all.
Do you have any new solutions for that?


Hello Bennet,

I meant exactly this error. Allocating more memory also did not work for me. But I only have 8 GB RAM, and if you consider that two S-2 datasets have a size of approx. 10 GB, then I guess there is no way to run it on my computer. :slight_smile:


Hi Andreas,
I even have 12 GB, but as you mentioned it’s still not enough.
For subsetting numerous files I just wrote a script which starts a Python script again and again and checks which files are already subsetted and which are not, since the memory error disappears after restarting Python.
Thank you again for the code you sent!

Me again,
I tried your script, Andreas, and it’s working perfectly!
Since I did the subsetting beforehand, I don’t get any memory errors here anymore because of the smaller file sizes.
I included a part using MosaicOp$Condition as well, and this also works fine.

Edit: If I include too many files I of course get the heap-space error again… so I split up the inputs, create several mosaics and then mosaic the smaller mosaics again. Hope it’s understandable :grin:


Hi, is there any way to access the “How to use the SNAP API from Python” page without having to use an Atlassian account?


I am trying to force snappy to open the 10 m resolution of a Sentinel-2 product, but I don’t succeed. As abgbaumann said,
ProductIO.readProduct(file, "SENTINEL-2-MSI-10M-UTM35N") produces the error RuntimeError: no matching Java method overloads found. I didn’t make it work with the solution for S3 at
Can you help me out in the case of S2? How do I do it?

(I think that would be a solution to my problem of NDVI calculation. I always get results in 60 m resolution. Instead of 10980×10980 px, my result is always 1830×1830 px.)

Hello pandza,

Did you add the following code too?

HashMap = jpy.get_type('java.util.HashMap')


Yes, yes, I already had that piece of code in my code.


How do you create the file variable?
I think it is a string, right?
That’s the reason why no Java method can be found. When specifying the format, the file needs to be a Java File object.
Do it like:
ProductIO.readProduct(File(file), "SENTINEL-2-MSI-10M");
I think there is no specific UTM format for the 10 m resolution.
But doing it like this will give you only the 10 m bands.
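Put together as a snappy snippet, this looks roughly as follows (a sketch assuming snappy is installed; the product path is a placeholder):

```python
# Read only the 10 m bands of an S2 product by passing a Java File
# object plus the format name to ProductIO.readProduct.
def read_10m(path):
    from snappy import ProductIO, jpy  # needs an installed SNAP + snappy
    File = jpy.get_type('java.io.File')
    return ProductIO.readProduct(File(path), 'SENTINEL-2-MSI-10M')
```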

Probably the best way to prepare the data is to read the data in the multi-size resolution format and then resample it to 10 m.

Here I show how to do it.

The call to GPF.getDefaultInstance().getOperatorSpiRegistry().loadOperatorSpis(), which Andreas mentioned, is not necessary any more since SNAP version 5.0.
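The multi-size read followed by resampling can be sketched like this (assuming snappy is installed; the path is a placeholder, and 'targetResolution' is the parameter of the Resample operator):

```python
# Read the multi-size S2 product and resample all bands to 10 m.
def resample_to_10m(path):
    from snappy import ProductIO, GPF, jpy  # needs an installed SNAP + snappy
    HashMap = jpy.get_type('java.util.HashMap')
    product = ProductIO.readProduct(path)   # multi-size S2 product
    params = HashMap()
    params.put('targetResolution', 10)      # metres
    return GPF.createProduct('Resample', params, product)
```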

Yes, I create file as a string.
For ProductIO.readProduct(File(file), "SENTINEL-2-MSI-10M"), what do I need for File (error message: ‘File’ is not defined)? Is it from snappy import File or something else?
I will update SNAP, try everything and give feedback, but for now I see I’m using SNAP 3.0. I realised that if I put
HashMap = jpy.get_type('java.util.HashMap')
after ProductIO.readProduct(File(file), "SENTINEL-2-MSI-10M") it works with 60 m resolution, and if I put it before, it works with 10 m resolution. It doesn’t matter whether I state SENTINEL-2-MSI-10M or SENTINEL-2-MSI-60M, it just doesn’t care. For 20 m resolution I would have no clue. :grin: Is it possible that it remembers something somewhere in memory? Is there any way to clear it before product.closeIO()? It’s just my guess.

One more question: in the end I get NDVI at e.g. 10 m resolution, but no projection. Can I somehow read the EPSG code from my input file and write it to my output file, and if yes, how?

Yes, that’s correct.
Alternatively you can do:

File = jpy.get_type('java.io.File')

Why don’t you have any projection? How do you create the NDVI?
You can ask a product/band for its GeoCoding and copy it to your target.
You can use ProductUtils.copyGeoCoding(source, target);
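In snappy this is a one-liner; a sketch, with the variable names being my own assumptions:

```python
# Copy the geo-coding from the source product onto the NDVI product
# so the output carries the same projection information.
def copy_projection(source_product, ndvi_product):
    from snappy import ProductUtils  # needs an installed SNAP + snappy
    ProductUtils.copyGeoCoding(source_product, ndvi_product)
```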


I just didn’t have a line for projection at all. Now I have a projection, thank you for the answer.
In addition, I create the NDVI as a new empty product and then fill it with data; I use a band-maths expression.

Last in this round of questions: when I calculate NDVI using the SNAP GUI, I use the tool Optical / Thematic Land Processing / NDVI Processor. The resulting product has a bit depth of 128, whereas the result from my code has a bit depth of 32. In my code I stated ProductData.TYPE_FLOAT32. What should be stated in order to have a bit depth of 128? Is there a list of supported types given somewhere?