S2 Subset with GPT Differs from SNAP


I’ve got an issue subsetting a masked image to a vector using GPT. In SNAP it works correctly - the RGB image from the subset operation is correct:

However, when I replicate these steps (Raster->Subset) using gpt, the RGB image is not the same (close, but the colours are well off):

Both of the above images were generated in SNAP using Open RGB Window with default Natural Colour bands.

Here is the content of the graph file - which was generated with the Graph tool:

<graph id="Graph">
  <node id="Read">
    <parameters class="com.bc.ceres.binding.dom.XppDomElement">
    </parameters>
  </node>
  <node id="Subset">
    <sourceProduct refid="Read"/>
    <parameters class="com.bc.ceres.binding.dom.XppDomElement">
      <geoRegion>POLYGON ((25.50843620300293 -25.871530532836914, 25.553791046142578 -25.871530532836914, 25.553791046142578 -25.85169219970703, 25.50843620300293 -25.85169219970703, 25.50843620300293 -25.871530532836914, 25.50843620300293 -25.871530532836914))</geoRegion>
    </parameters>
  </node>
  <node id="Write">
    <sourceProduct refid="Subset"/>
    <parameters class="com.bc.ceres.binding.dom.XppDomElement">
    </parameters>
  </node>
</graph>
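(Side note for anyone automating this: a graph like the one above can also be driven from a script. The sketch below just builds and runs the gpt call from Python, assuming the Read node's file parameter is templated as ${input} and the Write node's as ${output} so they can be substituted with -P options; the file names are placeholders, not the actual products from this thread.)

```python
import shutil
import subprocess

# Placeholder file names -- substitute your own graph and products.
graph_file = "subset_graph.xml"
input_product = "Masked10m.dim"
output_product = "Subset10m.dim"

# gpt substitutes ${input}/${output} in the graph XML with -P values.
cmd = [
    "gpt", graph_file,
    "-Pinput=" + input_product,
    "-Poutput=" + output_product,
]
print(" ".join(cmd))

# Only invoke gpt if it is actually installed on this machine.
if shutil.which("gpt"):
    subprocess.run(cmd, check=True)
```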

The original imagery from which this is derived is at https://scihub.copernicus.eu/dhus/odata/v1/Products('c51ad708-d8bb-439b-b2af-42da9a3c1108')/$value

The Masked10m file is created with GPT (and in SNAP) using a Land/Sea mask for bands B2, B3 and B4 using an imported ESRI Shapefile confined to the pictured dam.
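(For context, the masking step in a gpt graph is typically done with the Land-Sea-Mask operator after importing the vector. A sketch of such a node is below; the parameter names are from memory and the geometry name is hypothetical, so check `gpt Land-Sea-Mask -h` before relying on them.)

<node id="LandSeaMask">
  <operator>Land-Sea-Mask</operator>
  <sources>
    <sourceProduct refid="Read"/>
  </sources>
  <parameters class="com.bc.ceres.binding.dom.XppDomElement">
    <sourceBands>B2,B3,B4</sourceBands>
    <useSRTM>false</useSRTM>
    <geometry>dam_shapefile</geometry>
    <invertGeometry>false</invertGeometry>
  </parameters>
</node>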

What could be causing this?

I haven't checked it, but it could be that in SNAP Desktop the original colour information is kept from the source product, while gpt does not carry it over when subsetting.
Please check the histograms of both images, either in the histogram dialog or in the colour manipulation window.
If you click on 'recompute image' in the colour manipulation, I guess it will change to the same colours as the one from gpt shows.
There is also an FAQ entry about giving RGB images a consistent look:
Why are RGB images differently colorised and not comparable?
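(To illustrate the point: the default RGB stretch is derived from image statistics, so the same pixels can be rendered differently depending on which region the statistics were computed over. The numpy sketch below demonstrates the idea with a percentile-based stretch; the 2.5/97.5 percentiles and the synthetic data are assumptions, not SNAP's exact algorithm.)

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "reflectance" band: mostly dark water plus some bright land pixels.
water = rng.normal(0.03, 0.005, size=9000)
land = rng.normal(0.25, 0.05, size=1000)
full_scene = np.concatenate([water, land])

def stretch_limits(band, lo=2.5, hi=97.5):
    """Percentile-based display limits, similar in spirit to an automatic stretch."""
    return np.percentile(band, lo), np.percentile(band, hi)

# Statistics over the full scene vs. over a water-only subset:
full_lo, full_hi = stretch_limits(full_scene)
subset_lo, subset_hi = stretch_limits(water)
print(f"full scene limits: {full_lo:.4f} .. {full_hi:.4f}")
print(f"water-only limits: {subset_lo:.4f} .. {subset_hi:.4f}")

# The same water pixel maps to very different display intensities:
pixel = 0.035
full_val = (pixel - full_lo) / (full_hi - full_lo)
subset_val = (pixel - subset_lo) / (subset_hi - subset_lo)
print(f"display value with full-scene stretch: {full_val:.2f}")
print(f"display value with subset stretch:     {subset_val:.2f}")
```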

Thanks Marco. I was originally subsetting first, then masking. Doing that sequence in SNAP also produced the wrong colours, so I thought the order was the issue; after reversing the sequence SNAP produced the correct colours, but gpt still did not.

I’ll check the histograms.

I’ve checked the histogram of the SNAP generated image, and it does exactly as you said when recomputing - the image becomes the same as that which is produced with GPT. I’ve attempted to correct with pconvert using the RGB Profile described in the FAQ, but there is no difference in the resulting png file.

Can the RGB profile be applied in another stage, or is there some other way to retain the original colour information? I’m concerned about this as we also create a C2RCC-MSI analysis against a 20m resampled version of the imagery, using the same masking and subsetting after resampling. Can this affect the input values to the C2RCC-MSI analysis?

The actual values are not affected; only the display of the data is.
Because you have a water body, it is likely that the thresholds in the RGB expressions are not right for your image. The thresholds are better suited for land.
You could adjust the colours by dragging the sliders in the colour manipulation window until you get the result you want, and then use these values as thresholds.
Mmmh, not sure. This might not work either.
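(Using slider values as thresholds would amount to clamping each channel with fixed min/max values in the band-maths expression itself. The sketch below shows what such a clamped expression evaluates to; the min/max values are hypothetical placeholders for a water scene.)

```python
import numpy as np

# Hypothetical display range for one channel, e.g. taken from the sliders.
vmin, vmax = 0.02, 0.06

def clamped_channel(band, vmin, vmax):
    # Equivalent of a band-maths expression like
    #   min(max((B4 - vmin) / (vmax - vmin), 0), 1)
    # which pins the stretch to fixed thresholds instead of image statistics.
    return np.clip((band - vmin) / (vmax - vmin), 0.0, 1.0)

band = np.array([0.01, 0.02, 0.04, 0.06, 0.10])
print(clamped_channel(band, vmin, vmax))
```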


A short addition: actually, we already have the requirement you would need in our issue tracker:
[SNAP-520] RGB profile shall support min and max value for each channel - JIRA (atlassian.net)
But this is not yet implemented.

Ok, good to know that the values input to C2RCC are not affected.

I’ve played with the sliders from our gpt created image to use the values that appear when opening the RGB view after subsetting in SNAP, and do get them to match. The question now is how I can automate that with gpt?
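(Until something like SNAP-520 exists, one way to automate this outside gpt is to apply fixed per-channel ranges yourself when generating the quicklook, e.g. after reading the subset bands into arrays with snappy's `readPixels` or any other reader. Below is a minimal sketch with synthetic arrays standing in for B4/B3/B2; the ranges and the 8-bit conversion are assumptions, not SNAP's method.)

```python
import numpy as np

# Fixed display ranges per channel, e.g. the slider values observed in SNAP.
# These numbers are placeholders -- substitute your own.
ranges = {"R": (0.02, 0.06), "G": (0.03, 0.08), "B": (0.04, 0.10)}

def to_uint8(band, vmin, vmax):
    """Linear stretch to 0..255 with fixed thresholds (not image statistics)."""
    scaled = np.clip((band - vmin) / (vmax - vmin), 0.0, 1.0)
    return (scaled * 255).astype(np.uint8)

# Stand-ins for B4, B3, B2 reflectance arrays read from the subset product.
rng = np.random.default_rng(0)
b4 = rng.uniform(0.0, 0.12, size=(64, 64))
b3 = rng.uniform(0.0, 0.12, size=(64, 64))
b2 = rng.uniform(0.0, 0.12, size=(64, 64))

rgb = np.dstack([
    to_uint8(b4, *ranges["R"]),
    to_uint8(b3, *ranges["G"]),
    to_uint8(b2, *ranges["B"]),
])

# `rgb` can now be written as a PNG with any imaging library;
# the colours no longer depend on the subset's statistics.
print(rgb.shape, rgb.dtype)
```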

Is there any way to prioritize the issue in SNAP-520? I see it’s been on the radar for a few years now.

This was not often requested in the past; that's why it didn't get a higher priority.
Your post might change the priority.
If ESA sees it and decides this should be done, then it might come in one of the next updates.

I think it is difficult to find a good expression for your use case. Your value range is very narrow, and small changes can have a big impact. I said you might be able to use the values of the sliders as thresholds, but this doesn't work well because of the small value range.

I tried to find updated thresholds and update the RGB expressions, but had no luck. The resulting image didn’t look good.

This might be a use case where snappy gives better control. With the ESA BEAM-based SeaDAS 7, I used beampy for batch processing to match images created using the GUI: beampy_write_image.py (3.8 KB).


Thanks for pointing this out.
There is a similar example for snappy:
snap-engine/snappy_write_image.py at 81577c9811882440a5f1b026aaacbefa5b857cfe · senbox-org/snap-engine (github.com)

But yours has already evolved further. Combining the two, it should be possible to create the images.

Thanks for the tip. I’ll give it a try!