Data after baseline 4.00 differs completely, even when processed with forced baseline 3.00

OK, I have a huge problem with data acquired after baseline 4.00 came out. Did the original data change as well? Even when I process it with SEN2COR targeting baseline 3.00, it contains completely different values and random no-data pixels…

Here’s an example of NDVI and NDWI from roughly the same day of the year in 2021 and 2022 (both processed with baseline 3.00):

http://149.156.182.22:82/s2/

These values are not comparable at all, not to mention that the 2022 data has an additional problem: it contains a lot of “nodata == 0” pixels.

How do I make them comparable?

Why do you think there is a significant difference?

The no-data area at the right border is caused by the different swath of the observation. If you want to compensate for this, you need to create a mosaic or perform L3 binning.
Also, I noticed that the NDVI/NDWI images just contain gray-scale values and not actual NDVI/NDWI values. I guess some conversion happened. Apart from this, the NDVI looks plausible to me.
Where the image is greener in 2021, the NDVI (gray-scale) values are higher.

The differences in NDWI are something like a factor of 10. I need to be able to use the same classification process as on the earlier images; it doesn’t work now, and changing it would make the classifications incomparable.

How have you computed the NDWI? With the processor available in SNAP? If so, then consider the flags which are generated; all high values are flagged.
Use for example this as valid pixel expression
!flags.ARITHMETIC and !flags.NEGATIVE and !flags.SATURATION
In the image the flagged values are marked red.

After applying the flags in the valid pixel expression only similar values remain.
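In case you need the same filtering outside SNAP, a rough numeric stand-in for that valid-pixel expression could look like the sketch below (Python/NumPy; the thresholds are illustrative approximations of the ARITHMETIC/NEGATIVE/SATURATION flags, not SNAP’s exact flag logic):

import numpy as np

def masked_ndwi(green, nir):
    # NDWI with a rough stand-in for SNAP's valid-pixel expression:
    # drop arithmetically invalid results and values outside a plausible
    # range (illustrative thresholds, not SNAP's exact flag definitions).
    green = green.astype(np.float64)
    nir = nir.astype(np.float64)
    with np.errstate(divide="ignore", invalid="ignore"):
        ndwi = (green - nir) / (green + nir)
    valid = np.isfinite(ndwi) & (ndwi >= -1.0) & (ndwi <= 1.0)
    return np.where(valid, ndwi, np.nan)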


I am doing it in MATLAB, not using SNAP; I only apply SEN2COR to the L1 image. I tried baselines 3.00 and 4.00 and the problem still exists. Please note that in my case the values range from 0 to 255 (1 byte == NDVI == 1).

Sorry, I can’t help with MATLAB. Maybe there is a MATLAB forum somewhere.

But is there any REAL way to harmonize these values manually, with GDAL for example?

Did you consider the product format change with baseline 4.00? An offset was introduced in the data values.
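
Concretely: since processing baseline 04.00 the conversion from digital numbers to BOA reflectance includes that additive offset (BOA_ADD_OFFSET in the L2A metadata), i.e. reflectance = (DN + offset) / 10000 instead of just DN / 10000. A minimal sketch of putting both baselines on the same scale (Python with GDAL; the hard-coded values and file names are illustrative, the real numbers should be read from MTD_MSIL2A.xml):

import numpy as np
from osgeo import gdal

QUANT = 10000.0    # BOA_QUANTIFICATION_VALUE from the metadata
OFFSET = -1000.0   # BOA_ADD_OFFSET, present only from baseline 04.00 onwards

def boa_reflectance(jp2_path, add_offset=0.0, quant=QUANT):
    # Convert L2A digital numbers to BOA reflectance.
    # Pre-4.00 products: add_offset = 0; 4.00+ products: add_offset = -1000.
    dn = gdal.Open(jp2_path).ReadAsArray().astype(np.float64)
    return (dn + add_offset) / quant

# Hypothetical usage (placeholder file names):
# refl_2021 = boa_reflectance("B08_2021.jp2", add_offset=0.0)
# refl_2022 = boa_reflectance("B08_2022.jp2", add_offset=OFFSET)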

Kind regards
BdPg

I’m not sure what you mean, and to be honest I don’t understand that information about multiplying values by 10000. These values differ by a factor of roughly 7-11 (in L1C more like 1.5-2), and how much they differ depends on the band. Please help.

Thanks for your answer @bdpg.
I think the source of the issue has already been mentioned here.

That’s also discussed here.
[BUG] SEN2COR 2.10 creates some weird 0-value pixels - s2tbx / sen2cor - STEP Forum (esa.int)

I’m not sure if the user can do much about it. The quality_scene_classification is still no-data and the raw value is zero, which indicates no-data too.
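
If it helps, those artefact pixels can at least be kept out of the statistics by treating them as no-data before any index is computed. A small sketch (Python/NumPy; it assumes the band and the scene classification are already loaded as arrays on the same grid, and that SCL class 0 means NO_DATA):

import numpy as np

def valid_data_mask(dn, scl):
    # Keep pixels where the raw digital number is non-zero and the
    # L2A scene classification (SCL) is not 0 (NO_DATA).
    return (dn != 0) & (scl != 0)

# Hypothetical usage:
# ndvi[~valid_data_mask(red_dn, scl)] = np.nan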

The high values in your MATLAB code probably result from “invalid” inputs.
That’s why SNAP flags such values.
In the NDWI, values above roughly 1 are flagged as SATURATION and values below zero as NEGATIVE.
If you exclude those values from your MATLAB result as well, the value range will be fine.
At least this is how it works in SNAP.

How the raw values should be converted to geophysical values is described here:
Sentinel-2 Products Specification Document (esa.int)
[screenshot of the conversion note from the Products Specification Document]

But the note is misleading.
A reflectance value between 1 and 65535 is not meaningful; I think the values refer to the raw (DN) values.

But the raw (L1C) values are pretty much the same, with something like a factor-of-2 difference between them; there is no way this would make, for example, 6000 (2021) and 12000 (2022) equal: 12000 - 1000 = 11000, and 11000 / 10000 = 1.1. Not to mention that the differences themselves vary depending on where you are in the histogram, there is no linear dependency, and the average difference also differs by channel. And in the metadata we have:

<QUANTIFICATION_VALUES_LIST>
  <BOA_QUANTIFICATION_VALUE unit="none">10000</BOA_QUANTIFICATION_VALUE>
  <AOT_QUANTIFICATION_VALUE unit="none">1000.0</AOT_QUANTIFICATION_VALUE>
  <WVP_QUANTIFICATION_VALUE unit="cm">1000.0</WVP_QUANTIFICATION_VALUE>
</QUANTIFICATION_VALUES_LIST>
<BOA_ADD_OFFSET_VALUES_LIST>
  <BOA_ADD_OFFSET band_id="0">-1000</BOA_ADD_OFFSET>
  <BOA_ADD_OFFSET band_id="1">-1000</BOA_ADD_OFFSET>
  <BOA_ADD_OFFSET band_id="2">-1000</BOA_ADD_OFFSET>
  <BOA_ADD_OFFSET band_id="3">-1000</BOA_ADD_OFFSET>
  <BOA_ADD_OFFSET band_id="4">-1000</BOA_ADD_OFFSET>
  <BOA_ADD_OFFSET band_id="5">-1000</BOA_ADD_OFFSET>
  <BOA_ADD_OFFSET band_id="6">-1000</BOA_ADD_OFFSET>
  <BOA_ADD_OFFSET band_id="7">-1000</BOA_ADD_OFFSET>
  <BOA_ADD_OFFSET band_id="8">-1000</BOA_ADD_OFFSET>
  <BOA_ADD_OFFSET band_id="9">-1000</BOA_ADD_OFFSET>
  <BOA_ADD_OFFSET band_id="10">-1000</BOA_ADD_OFFSET>
  <BOA_ADD_OFFSET band_id="11">-1000</BOA_ADD_OFFSET>
  <BOA_ADD_OFFSET band_id="12">-1000</BOA_ADD_OFFSET>
</BOA_ADD_OFFSET_VALUES_LIST>

Am I missing something?