Version 3.2.2 of the MAJA cloud detection and atmospheric correction software has just been released! It brings a lot of improvements. It can be used to process Sentinel-2 data in four different ways, each described via the links below:
That looks very promising, I didn't know MAJA until now, to be honest.
Are there tests which compare the outputs of Sen2Cor with those of MAJA?
Great! Are there any plans to make executables for Windows and Mac? It would be possible to distribute MAJA as a SNAP plugin too, but we'd like to be able to support all SNAP operating systems…
There was an inter-comparison workshop last year, named ACIX. Several atmospheric correction (AC) processors were compared:
Here is the resulting paper: https://www.mdpi.com/2072-4292/10/2/352
It is now going into a second round, ACIX-II.
This time, cloud screening will get a separate workshop.
Hi, thanks for the comments!
I presented comparison results with Sen2Cor at the RAQRS conference in September 2017, including the ACIX results:
Our paper comparing the cloud mask performances of FMask, MAJA and Sen2Cor was just accepted (this morning) by Remote Sensing. I will add the reference here as soon as it is published (within a week or two).
@mengdahl Until your question, we had no plans to publish MAJA on Windows or Mac, but we should discuss it. I will forward your question to the people in charge.
ACIX-II and CMIX will be very hard for us: the organisers do not take much account of the constraints of multi-temporal methods (which also have a lot of advantages). MAJA's performance is optimal when it processes time series, not individual scenes. As a result, when CMIX asks us to process one scene, MAJA has to process the nine dates before it. I heard they plan to ask us to process 800 scenes, which means 8000 scenes for us to download and process. We will probably not manage to do it.
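As a back-of-envelope sketch of that overhead (assuming, as described above, that each requested scene needs roughly the nine preceding acquisitions for initialisation; the function name and structure are illustrative, not part of MAJA):

```python
# Rough estimate of the extra data a multi-temporal processor needs
# when asked to process isolated scenes. The 9-date initialisation
# figure comes from the post above; this is an illustration only.

def total_scenes(requested_scenes, init_dates=9):
    """Scenes to download and process when every requested scene
    needs `init_dates` earlier acquisitions for initialisation."""
    return requested_scenes * (1 + init_dates)

print(total_scenes(1))    # 10 scenes for a single requested scene
print(total_scenes(800))  # 8000 scenes for the CMIX request
```

This is the worst case, where requested scenes share no initialisation dates; contiguous time series would amortise much of the cost.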
And there you mention the weakness of MAJA.
While it is great if it has the time series at hand, it also needs much more data.
So, all approaches have their pros and cons.
Yes, a multi-temporal processor needs multi-temporal data as input. When inputs are mono-temporal, MAJA performs like the other mono-temporal codes.
It is faster to go by bike than on foot, but you need to carry a bike; that is a weakness of bikes.
Ahh, I didn't know that MAJA can also work on single scenes. I thought the multi-temporal requirement was inherent to the algorithm.
The future is hypertemporal
Yes, MAJA also works on single scenes, but its quality improves with time series. It is therefore not worth using MAJA to process single scenes. To use the same metaphor as above: although Christopher Froome is faster than Usain Bolt in absolute terms, Usain will win if bikes are not allowed.
MAJA uses both mono-temporal and multi-temporal approaches at every stage (cloud detection, AOT estimation). When only one image is available, we use only the mono-temporal approaches.
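The fallback behaviour described above could be sketched as follows (a toy illustration with invented thresholds, not MAJA's actual algorithm or API):

```python
# Illustrative sketch of the mono/multi-temporal fallback described
# above: mono-temporal tests always run, and multi-temporal tests
# refine the result when a time series is available. Thresholds and
# function names are made up for the example.

def mono_temporal_test(reflectance, threshold=0.4):
    # Toy spectral test: very bright pixels are flagged as cloud.
    return reflectance > threshold

def multi_temporal_test(reflectance, previous, jump=0.2):
    # Toy temporal test: a sudden brightening relative to the most
    # recent cloud-free value suggests a cloud has appeared.
    return reflectance - previous > jump

def is_cloud(reflectance, time_series=None):
    cloudy = mono_temporal_test(reflectance)
    if time_series:  # degrade gracefully to mono-temporal otherwise
        cloudy = cloudy or multi_temporal_test(reflectance, time_series[-1])
    return cloudy

print(is_cloud(0.35))          # False: mono-temporal test alone misses it
print(is_cloud(0.35, [0.10]))  # True: the temporal jump reveals the cloud
```

The point of the sketch is only the structure: the multi-temporal branch adds information when it exists, and its absence never blocks processing.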
As promised, here is the reference for our new paper comparing the cloud detection performances of MAJA 3.3 (available in a few weeks), Sen2Cor 2.5.5, and FMask 4.0, using reference cloud masks generated with an active learning method named ALCD.
The paper is open access, the ALCD code is open source, and the reference cloud masks are an open dataset.
Baetens, L.; Desjardins, C.; Hagolle, O. Validation of Copernicus Sentinel-2 Cloud Masks Obtained from MAJA, Sen2Cor, and FMask Processors Using Reference Cloud Masks Generated with a Supervised Active Learning Procedure. Remote Sens. 2019, 11, 433.