Sentinel-3 SLSTR L2 processing issues (missing dimensions + subsetting NPEs)

Hi everyone,
I’m reprocessing a Sentinel-3 SLSTR L2 WST product in order to reproject it to WGS84.
I have taken a sample from the ESA FTP:
ftp://ftp.acri-cwa.fr/TDS_FULL_S3_20150918/SLSTR_Based-on-MODIS-or-AATSR/SLSTR_Based-on-AATSR/Level2/

While still “playing” with GPT to check the output, I have noticed a couple of issues:

  1. The S3NetcdfReader only supports the following row/column name pairs for the underlying 2D dimensions:
    {"rows", "columns"}, {"tie_rows", "tie_columns"}
    whereas the L2P.nc file uses nj, ni dimensions.
    Therefore, I had to add the {"nj", "ni"} pair to that class. It’s a very trivial fix (a sketch of the idea follows this list). Could you double-check/confirm it? I can provide a pull request if needed.

  2. That sample file is very big (2D extent = 43520 × 1500 rows × columns).
    Therefore I planned to split the processing into several pieces by running multiple subset operations over different regions, say 500 rows per chunk.
    At the end, I will re-mosaic all of them with gdalbuildvrt and so on.
    However, I have noticed that each subset with a non-zero offset (e.g. Y = 500) results in a NullPointerException.
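To make the first point concrete, here is a minimal standalone sketch of the idea (this is not the actual S3NetcdfReader code; the class and method names are just illustrative): keep a table of known row/column dimension-name pairs and pick the first pair whose dimensions are present in the file.

```java
import ucar.nc2.Dimension;
import ucar.nc2.NetcdfFile;

import java.io.IOException;

// Hypothetical helper, NOT the real S3NetcdfReader code: it only illustrates
// keeping a table of known (rows, columns) dimension-name pairs and extending
// it with {"nj", "ni"} for the SLSTR L2 WST (L2P.nc) sample.
public class RowColumnPairs {

    private static final String[][] KNOWN_PAIRS = {
            {"rows", "columns"},
            {"tie_rows", "tie_columns"},
            {"nj", "ni"}          // added for the SLSTR L2 WST sample
    };

    /** Returns the first pair whose dimensions both exist in the file, or null. */
    static String[] findRowColumnPair(NetcdfFile netcdfFile) {
        for (String[] pair : KNOWN_PAIRS) {
            Dimension rowDim = netcdfFile.findDimension(pair[0]);
            Dimension colDim = netcdfFile.findDimension(pair[1]);
            if (rowDim != null && colDim != null) {
                return pair;
            }
        }
        return null;
    }

    public static void main(String[] args) throws IOException {
        try (NetcdfFile ncFile = NetcdfFile.open(args[0])) {
            String[] pair = findRowColumnPair(ncFile);
            System.out.println(pair == null ? "no known row/column pair found"
                                            : pair[0] + " x " + pair[1]);
        }
    }
}
```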

As far as I can quickly see from the code (I didn’t debug in much detail), it looks like the GeoCodingFactory applies a JAI Scale operation with a negative Y translation offset (it does transY = -region.y at line 159).
Afterwards, when the GeoApproximation class tries to create the approximations (by calling the “create” method), it triggers pixel computations inside extractWarpPoints.
There it looks for pixels from a tile with Y index = -1 (due to the previous negative translation), which doesn’t exist in the source image, so the nullTile.getSample call results in an NPE.
Could you have a look at it?
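For reference, here is a rough sketch of the chunking loop using the GPF Java API (the class name, file names and chunk size are just placeholders, and I’m assuming the Subset operator’s region/copyMetadata parameter names); every chunk with a non-zero Y offset runs into the NPE described above:

```java
import org.esa.snap.core.dataio.ProductIO;
import org.esa.snap.core.datamodel.Product;
import org.esa.snap.core.gpf.GPF;

import java.awt.Rectangle;
import java.io.File;
import java.io.IOException;
import java.util.HashMap;

// Rough sketch of the chunked processing, not my actual script.
public class ChunkedSubset {

    public static void main(String[] args) throws IOException {
        Product source = ProductIO.readProduct(new File("L2P.nc"));
        int width = source.getSceneRasterWidth();
        int height = source.getSceneRasterHeight();
        int chunkRows = 500;

        for (int y = 0; y < height; y += chunkRows) {
            int rows = Math.min(chunkRows, height - y);

            HashMap<String, Object> params = new HashMap<>();
            // The first chunk (y = 0) works; any chunk with y > 0 ends in the NPE.
            params.put("region", new Rectangle(0, y, width, rows));
            params.put("copyMetadata", true);

            Product chunk = GPF.createProduct("Subset", params, source);
            ProductIO.writeProduct(chunk, "chunk_" + y + ".tif", "GeoTIFF");
        }
    }
}
```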

A sample graph containing a processing chain that reproduces the issue is attached.
SLSTRL2graph.xml (1.8 KB)

Best Regards,
Daniele

I wonder what those dimensions are good for. Also, it is just a test data set (TDS).
We have already seen some format changes for other product types during the commissioning phase, but we haven’t seen WST data from the commissioning phase yet. So I hesitate to change the reader now on the basis of the TDS, only to maybe change it again when we see real data.

But you know that processing is done in tiles (chunks) anyway in SNAP, don’t you?

It seems that you need to extend the band names list in the first Subset operator to include lat and lon.

<bandNames>lat,lon,sea_surface_temperature,wind_speed</bandNames>

Actually, this should be done automatically, so that is probably where the bug is.
I’ve noted this in our tracking system.

Thanks for the report

Sounds good.

Yes.
When doing everything in a single step last year with BEAM, I noticed longer processing times compared to splitting into chunks.
My explanation was that with several ops in the chain and multiple threads, the tile cache is frequently updated, with tiles going in and out because different areas are involved… a kind of thrashing.
Moreover, depending on the “path” of the satellite, the reprojected “final rectangle” can be very big, especially near the poles, resulting in a lot of tiles filled with nodata/NaN. Separate chunks give smaller “rectangles”, and I can leverage the empty-tiles support of the GDAL GeoTIFF driver to create a final dataset with empty tiles over the void areas.

I’ll try to summarize the idea with this image: the green rectangle is the one-step final GeoTIFF bbox, and the red borders are the bboxes of the separate chunks. gdalwarp in conjunction with the SKIP_NOSOURCE warp option allows creating a final GeoTIFF covering the green bbox with no bytes written for tiles outside the red bboxes.

Ah! Thanks for the suggestion. I’ll try it.
For other products (OLCI L1/L2 and SLSTR L1) it worked without the coordinate bands.
However, SLSTR L2 has dedicated longitude and latitude variables, so I think that is what makes the difference.

Thanks for the feedback.
Daniele