Cloud mask SLSTR

Hi,
I have noticed that (at least over sea) the statement “The pixel is flagged as cloudy if any one of the tests indicates the presence of cloud.” is not always true.
https://sentinels.copernicus.eu/web/sentinel/technical-guides/sentinel-3-slstr/level-1/cloud-identification

In particular, it seems that if one of the following tests (1.6 large histogram, 2.25 large histogram, or thermal histogram) indicates cloud, the summary_cloud index may still not reveal the presence of a cloud. Is that correct?
Which index do you recommend using to avoid cloudy measurements?

Can anyone help me, please? Thanks!
Enzo.

Indeed, there are some known issues with the cloud flagging in the SLSTR products, and the experts are working on them. You can expect an update on this (I don’t know when it will be released).
I’ve forwarded your question to the Sentinel-3 Validation Team (S3VT).

Thanks for your report

Hi Enzo,

I am responsible for the basic cloud flags. You are correct that the 1.6/2.25 large- and small-scale histogram tests and the IR histogram tests are not included in the summary flag. These tests were not considered to be working well, so it was decided to exclude them from the summary flag until they are fixed; all the individual cloud test results are still there, however. There is an auxiliary file (PCP) which indicates which of the basic cloud tests are combined to produce the summary cloud.

The large-scale 1.6/2.25 tests have since been improved and are now included in later releases of the data (products processed with processor baseline 2.10, IPF version 6.09). The small-scale histogram tests are still being worked on. These operate in regions flagged as ‘sun glint’, so the cloud flagging is not yet optimal there.

Therefore, the summary cloud flag should be used to avoid cloudy data, but it can still be useful to check each separate cloud test and combine them for your own purposes.
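For anyone wanting to combine the individual cloud tests themselves, a minimal sketch of the bit-masking involved is below. The bit positions and test names here are placeholders, not the real SLSTR assignments; check the `flag_masks`/`flag_meanings` attributes of the cloud flag variable in your own `flags_in.nc` for the actual values.

```python
import numpy as np

# Illustrative bit positions only -- NOT the real SLSTR assignments.
# Read flag_masks / flag_meanings from the cloud variable in flags_in.nc
# to get the true values for your product.
VISIBLE = 1 << 0
GROSS_CLOUD = 1 << 2
THIN_CIRRUS = 1 << 3

def custom_cloud_mask(cloud_flags, tests):
    """Return True where any of the selected test bits is set."""
    combined = 0
    for t in tests:
        combined |= t
    return (cloud_flags & combined) != 0

# Synthetic flag words standing in for the per-pixel cloud flag array
flags = np.array([0, GROSS_CLOUD, THIN_CIRRUS, VISIBLE | GROSS_CLOUD])
mask = custom_cloud_mask(flags, [GROSS_CLOUD, THIN_CIRRUS])
print(mask.tolist())  # [False, True, True, True]
```

In a real product you would load the cloud flag variable with netCDF4 or xarray and apply the same bitwise test to the whole 2-D array.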

Caroline Cox


Hi Caroline (& Marco),

is there any information/documentation about the status of the flags: their quality in certain processing cycles or under certain conditions, known issues, what “the experts” are working on, or the like?

I think this would be of interest and help to a significant number of users, as it would allow them to (better) judge how to use and interpret the flags in the context of their specific application.

Best wishes,
jm

Hi

The best place to go for this information is the cyclic reports, found online at:
https://sentinel.esa.int/web/sentinel/technical-guides/sentinel-3-slstr/data-quality-reports
The information on the cloud masking will be more detailed in future issues, but it is still worth consulting these reports for other data quality issues.

There are also ‘product notices’, the most recent of which is [Sentinel-3A Product Notice - SLSTR Level-1B NRT and NTC]:
https://sentinel.esa.int/web/sentinel/user-guides/sentinel-3-slstr/document-library

I hope this helps. Let me know if you need any more information.

Caroline

Hi Caroline,

thanks for the documents. The Product Notice (PN) is quite useful. :)

As I understand it, the PN gets updated with progressing processing baseline versions. The current baseline version is v2.37 (IPF v6.16), applied from 02 August (i.e. fairly fresh). Are there PNs for earlier baseline versions available? I couldn’t find any in the Document Libraries.
Also, how can I see which baseline version my data has? If I understand correctly, the product name only contains the major baseline version number. Is the minor one stored anywhere in the data, too? If so, where?

And, I keep struggling to understand the different cloud flags/masks. The PN speaks of the “Bayesian cloud mask” (providing a probability 0-1) and of “the probabilistic cloud mask” (which currently does not provide probabilities over land). From the PN it seems these are two different items. However, looking at the data, I come to the conclusion that they are one and the same thing: the values/array stored in the “probability_cloud” items in the flags*.nc files. Is that so?
Then, there are the “bayes” items in the flags*.nc, which, in contrast to “probability_cloud”, are bit masks. What is their meaning, and how are they derived (considering that they cover land too, they seem more closely related to the basic cloud mask…)? I cannot really work that out from the short name/description in the data itself or the SLSTR PDFs (or the web-based User and Technical Guides).
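For reference, the way I currently try to interpret such bit-mask variables is via their CF-style `flag_masks`/`flag_meanings` attributes, which the SLSTR flag variables carry. A generic sketch; the masks and meaning names below are mocked placeholders, since I don’t have the real attribute values at hand (in practice you would read them from the variable, e.g. via netCDF4 or xarray):

```python
# Mocked CF-style flag attributes (placeholders, not real SLSTR values).
# In a real product, read these from the flag variable's attributes.
flag_masks = [1, 2, 4, 8]
flag_meanings = "meaning_a meaning_b meaning_c meaning_d"

def decode_flags(value, masks, meanings):
    """Return the list of meaning names whose bit is set in `value`."""
    names = meanings.split()
    return [name for mask, name in zip(masks, names) if value & mask]

print(decode_flags(5, flag_masks, flag_meanings))  # ['meaning_a', 'meaning_c']
```

This tells me which tests are set per pixel, but not what each “bayes” bit actually means physically, which is the part I am asking about.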
I kind of asked these latter questions already there, but asking you directly here probably yields better results… ;)

Thanks & best wishes!

See also related question here.
Any help is appreciated!