I read this paper about C2RCC but I still need help understanding how it works.
Currently my understanding is that NOMAD data was used to create a bio-optical model. This model was used to parameterize HydroLight, which produces water-leaving reflectances. The water-leaving reflectances are fed into SOS look-up tables, which then give TOA radiances. Five million (water-leaving reflectance, TOA radiance) pairs generated this way were then used to train neural networks to invert the mapping from TOA to BOA, and then from BOA to IOPs.
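To check my understanding, here is a toy Python sketch of that generation pipeline. The `toy_hydrolight` and `toy_sos` functions are crude analytic stand-ins I made up so the code runs end to end; they are not the real radiative transfer codes, and the IOP and geometry ranges are placeholders, not the NOMAD-derived ones.

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.array([443., 490., 560., 665., 865.])  # example band set

def sample_case(rng):
    # Toy log-uniform IOP ranges; the real ranges come from the
    # NOMAD-based bio-optical model, and the real simulations also
    # vary viewing geometry, ozone, pressure, etc.
    iops = 10.0 ** rng.uniform(-3, 1, size=5)   # absorption/scattering terms
    sun_zen = rng.uniform(0, 70)                # sun zenith angle (deg)
    aot = rng.uniform(0.01, 0.7)                # aerosol optical thickness
    return iops, sun_zen, aot

def toy_hydrolight(iops, wl):
    # Stand-in for a HydroLight run (IOPs -> water-leaving reflectance).
    a = iops[:3].sum()            # total absorption (toy)
    bb = 0.02 * iops[3:].sum()    # particulate backscattering (toy)
    return 0.5 * bb / (a + bb) * np.ones_like(wl)

def toy_sos(rho_w, sun_zen, aot, wl):
    # Stand-in for the SOS look-up tables (BOA + atmosphere -> TOA radiance).
    path = aot * (wl / 550.0) ** -1.3           # toy aerosol path term
    return np.cos(np.radians(sun_zen)) * (path + 0.9 * rho_w)

pairs = []
for _ in range(1000):             # the paper generates ~5 million such cases
    iops, sun_zen, aot = sample_case(rng)
    rho_w = toy_hydrolight(iops, wavelengths)
    l_toa = toy_sos(rho_w, sun_zen, aot, wavelengths)
    pairs.append((l_toa, rho_w, iops))
```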
My questions:
In the generation of the 5 million cases, what was changed between cases?
Why are neural networks needed to invert the radiative transfer equations? If they can give ground-truth BOA and TOA data from IOPs, why can't they be inverted to give us BOA and IOP data from TOA data?
In the generation of the 5 million cases, what was changed between cases?
The idea is to include the maximum number of optical water types, including those waters that represent extreme cases (highly absorbing or highly scattering, black waters, etc.). NOMAD is a good database, but it mainly covers open-ocean Case 1 waters, while C2RCC is designed for Case 2 waters (more typical of coastal areas).
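As a rough illustration of what "covering many water types" can look like in the case generation, here is a sketch that samples IOPs from a few hand-picked regimes. The regime names and ranges are invented for illustration; they are not the actual C2RCC parameterization.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative IOP regimes (values in m^-1 are placeholders): the point is
# to span many optical water types, including extremes, not just the
# Case 1 waters that dominate NOMAD.
WATER_TYPES = {
    "clear_case1":   {"a_cdom": (1e-3, 1e-2), "b_part": (1e-3, 1e-2)},
    "sediment_rich": {"a_cdom": (1e-2, 1e-1), "b_part": (1e-1, 1e+1)},  # highly scattering
    "black_water":   {"a_cdom": (1e+0, 2e+1), "b_part": (1e-3, 1e-1)},  # highly absorbing
}

def sample_water_type(rng):
    # Draw one case from a randomly chosen water type, log-uniform
    # within each regime's (toy) range.
    name = rng.choice(list(WATER_TYPES))
    regime = WATER_TYPES[str(name)]
    iops = {k: 10 ** rng.uniform(np.log10(lo), np.log10(hi))
            for k, (lo, hi) in regime.items()}
    return name, iops
```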
I do not fully understand your second question.
The input of the C2RCC processor is TOA radiances, and the output of the AC (atmospheric correction) part is the BOA reflectances. To put it simply, the AC NN model is trained with these TOA-BOA pairs so that it can learn all probable cases: the in situ BOA data plus the simulations are taken as references, TOA radiances are estimated from them, and the net learns the range of possible situations. That knowledge is later used by the processor to generate BOA from new TOA inputs (from your image).
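A minimal sketch of that training step, using scikit-learn's MLPRegressor on fake data in place of the real simulated pairs (the layer sizes, band count, and the fake mapping are all arbitrary placeholders, not the C2RCC architecture):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Stand-in for the simulated training set; in C2RCC these would be the
# millions of (TOA radiance, BOA reflectance) pairs from HydroLight + SOS.
X_toa = rng.uniform(0.0, 0.2, size=(20_000, 12))   # 12 "bands" of TOA input
W = rng.normal(size=(12, 5))
Y_boa = np.tanh(X_toa @ W)                         # fake nonlinear TOA -> BOA mapping

# The AC net learns the inverse mapping TOA -> BOA from those pairs.
ac_net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
ac_net.fit(X_toa[:18_000], Y_boa[:18_000])
print("held-out R^2:", ac_net.score(X_toa[18_000:], Y_boa[18_000:]))
```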
The in-water NN takes those BOA results and extracts the IOPs, from which the concentrations are later calculated. The use of inverse and forward models is related to the out-of-scope and out-of-range tests, which help with the quality control of the results.
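To make the two tests concrete, here is a sketch of what such checks can look like; the function names and the threshold are my own assumptions, not the exact C2RCC implementation.

```python
import numpy as np

def out_of_range(toa, train_min, train_max):
    # Out-of-range test: flag spectra that fall outside the per-band
    # range spanned by the training data.
    return np.any((toa < train_min) | (toa > train_max), axis=-1)

def out_of_scope(boa, iops, forward_model, tol=0.05):
    # Out-of-scope test: push the retrieved IOPs back through the
    # *forward* model and compare with the BOA spectrum the inverse
    # net was given; a large mismatch flags the pixel.
    boa_reconstructed = forward_model(iops)
    return np.max(np.abs(boa_reconstructed - boa), axis=-1) > tol
```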
The support on this forum is exceptional, thank you both.
Marpet,
I will look at those documents as well.
Abruescas,
My second question was me trying to figure out why neural networks are needed. Please correct me if I'm wrong, but I believe the TOA and BOA data used to train the AC NN are both simulated using radiative transfer equations. So if algorithms like HydroLight and SOS can convert IOPs to BOA reflectances, and BOA reflectances to TOA radiances, and the results are good enough to establish the ground truth for training a neural network, is there a reason these algorithms can't be inverted, so that the input could be TOA radiances and the output would be BOA reflectances, and then, in the next step, IOPs?
Why are neural networks needed to go backwards – TOA to BOA to IOPs?
You answered it yourself: the NNs need to be trained so they can take the TOA of your L1 image and convert it to BOA and IOPs. The radiative transfer codes only run in the forward direction (IOPs to BOA, BOA to TOA); there is no analytical inverse, so the inverse mapping has to be learned from the simulated pairs. HydroLight produces the bio-optical model for the in-water part (taking the in situ BOA and IOPs as reference), but these need to be matched with TOA (which we do not have in situ). In a second step the algorithm has to be able to reproduce the BOA and IOP outputs with high accuracy from the TOA input alone. That is how the nets learn, so that when you use your TOA image, BOA and IOPs can be derived.
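So at processing time the chain is just two learned mappings applied pixel by pixel. A sketch, assuming nets with a scikit-learn style predict() and ignoring the auxiliary inputs (sun/view geometry, ozone, pressure) that the real processor also uses:

```python
import numpy as np

def apply_nets(toa_cube, ac_net, water_net):
    # toa_cube: (height, width, bands) array of TOA radiances from an L1 image.
    # ac_net / water_net: any objects with a scikit-learn style .predict().
    h, w, bands = toa_cube.shape
    toa_flat = toa_cube.reshape(-1, bands)
    boa_flat = ac_net.predict(toa_flat)        # step 1: atmospheric correction
    iop_flat = water_net.predict(boa_flat)     # step 2: in-water retrieval
    return boa_flat.reshape(h, w, -1), iop_flat.reshape(h, w, -1)
```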