Poor co-registration limits Sentinel-2 time series analyses (at pixel level)

Hello,
I would like to conduct a supervised classification of land cover types in a region that features fairly small “objects” relative to the Sentinel-2 pixel size (e.g. riverine vegetation). Importantly, I am interested in using multi-temporal S-2 data to achieve this. However, I noticed positional misalignment between images taken in e.g. April and July 2018 in the range of at least 0.5 pixels. So I must keep a safe buffer distance when delineating areas to train the classifier. This in turn significantly reduces the potentially available training areas and also limits the validity of the classified map in “border regions”. Not good.
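For context, this is roughly what that buffering looks like in my workflow (a minimal sketch with geopandas; the file names and the 5 m distance, i.e. 0.5 pixels at 10 m, are placeholder assumptions):

```python
import geopandas as gpd

# Hypothetical input: training polygons in the same projected
# CRS (metres) as the S-2 tiles.
train = gpd.read_file("training_areas.gpkg")

# Shrink every polygon inward by the expected misregistration:
# 0.5 px at 10 m resolution = 5 m safety buffer.
train["geometry"] = train.geometry.buffer(-5.0)

# Negative buffering collapses small polygons to empty geometries,
# which is exactly the training data that gets lost.
train = train[~train.geometry.is_empty]
train.to_file("training_areas_buffered.gpkg", driver="GPKG")
```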
In October 2018, Charis Lanaras et al. reported that the current (empirical) geolocation accuracy of S-2 data (in flat terrain) was about 11 m, but that it was expected to be reduced to <0.3 pixels between passes at 95% confidence. Misregistration of this magnitude has also been documented by Lin Yan et al. (2018), and I can see the same magnitude of misalignment in the current S2 data; these authors developed a method to reduce misregistration to 0.15 pixels. But has their method (or an equivalent) found its way into ESA’s default processing chain yet?
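For reference, this is roughly how I estimated the shift between two dates (a sketch using scikit-image’s phase cross-correlation; the band files are placeholder assumptions, and this is just one common way to measure sub-pixel offsets, not ESA’s approach):

```python
import rasterio
from skimage.registration import phase_cross_correlation

# Hypothetical inputs: the same 10 m band (e.g. B08) from two dates,
# clipped to an identical, cloud-free extent.
with rasterio.open("B08_april.tif") as src:
    ref = src.read(1).astype("float32")
with rasterio.open("B08_july.tif") as src:
    moving = src.read(1).astype("float32")

# Sub-pixel shift estimate via phase correlation;
# upsample_factor=100 gives 1/100-pixel precision.
shift, error, _ = phase_cross_correlation(ref, moving, upsample_factor=100)
print(f"row/col shift in pixels: {shift}")
```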

According to Olivier Hagolle from CESBIO (Let’s ask ESA to improve Sentinel-2 multi-temporal registration – Séries Temporelles), ESA started working on this in 2019 and plans to reprocess the entire archive accordingly. So far, however, this does not appear to have happened (at least for my test data across Germany for the year 2018).

Do you guys have any new information on these developments? Is there a good workaround? I am curious to hear from you. Thank you!

Hi @kmbh, welcome to this Forum.

Thanks for quoting one of my posts, but that’s not exactly what I meant and wrote. ESA, CNES and their contractors started working on improving the registration of Sentinel-2 images long before the Sentinel-2A launch, using image-matching techniques and a Geometric Reference Image (GRI) to be built from S2 images. Doing this was a huge task, and applying image matching to all Sentinel-2 images (2 PBytes per day) requires huge computing power.

But this is taking a lot of time, too much time, and meanwhile the effective resolution of S2 time series is degraded to 20 or 30 meters.

The last announcement I heard about for the release of images with enhanced registration was “end of 2019”, and it only mentioned real-time processing, not the reprocessing of the first 4.5 years of Sentinel-2.

As of October 4th, the start of geometric refining is still not announced in the “outlook” section of the S2 mission status report. Let’s hope it comes soon.

Hi Olivier,

many thanks for taking the time to respond, and for the clarification.

Yes, I can imagine the magnitude of the task. Have I understood correctly that work on the GRI has essentially been completed, and that the challenge now is to apply image-matching algorithms to all (first current, then archived) S-2 data?

I really look forward to the results of these efforts. If the effective resolution is as low as you say (in my experience it seems a little better than that), I suppose robust land cover mapping based on multi-temporal S-2 data is currently not possible at 10 m spatial resolution.

Do you think resampling the time series to 20 or 30 m (perhaps applying bilinear interpolation to retain gradient information as much as possible) prior to performing a supervised classification would produce conceptually better results?
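Concretely, I was imagining something like this (a sketch with rasterio; the paths and the factor-of-2 degradation from 10 m to 20 m are placeholder assumptions):

```python
import rasterio
from rasterio.enums import Resampling

# Hypothetical input: a 10 m band to be degraded to 20 m with
# bilinear interpolation, to absorb ~0.5 px of misregistration.
with rasterio.open("B04_10m.tif") as src:
    data = src.read(
        out_shape=(src.count, src.height // 2, src.width // 2),
        resampling=Resampling.bilinear,
    )
    # Scale the transform so the output stays georeferenced.
    transform = src.transform * src.transform.scale(2, 2)
    profile = src.profile
    profile.update(height=src.height // 2, width=src.width // 2,
                   transform=transform)

with rasterio.open("B04_20m.tif", "w", **profile) as dst:
    dst.write(data)
```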

Thanks for your thoughts!

Hi,
Yes, you have understood perfectly!

Regarding your second question: sorry, I am not an expert in land cover classification, so I do not know which is better, resampling before classification or after. If you plan to use texture or contextual information as an input feature, it is probably still better to start working at 10 m.
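As one example of such texture features, grey-level co-occurrence statistics are typically computed at the native 10 m resolution (a minimal sketch with scikit-image; the band file and the 32-level quantization are placeholder assumptions):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Hypothetical input: a single 10 m band as a 2-D array.
band = np.load("B08.npy").astype("float32")

# GLCM needs a small integer image: rescale to 32 grey levels.
levels = 32
q = ((band - band.min()) / (np.ptp(band) + 1e-9) * (levels - 1)).astype("uint8")

# Co-occurrence matrix for 1-pixel horizontal offsets.
glcm = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                    symmetric=True, normed=True)
print(graycoprops(glcm, "contrast"))
```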

In this case I think the best solution is to apply Principal Component Analysis (PCA); depending on the results, you could resample the granule to the resolution of the highest-resolution band in the component. But keep in mind that resampling, for instance from 20 m to 10 m, does not retrieve objects that do not already exist at 20 m.
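For illustration, a minimal PCA sketch with scikit-learn (the stacked input array and the choice of five components are placeholder assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical input: a multi-temporal band stack of shape
# (n_features, height, width), e.g. several dates x bands.
stack = np.load("s2_stack.npy")
n_feat, h, w = stack.shape

# PCA expects (samples, features): one sample per pixel.
X = stack.reshape(n_feat, -1).T

pca = PCA(n_components=5)          # keep the first few components
components = pca.fit_transform(X)  # shape: (pixels, 5)

# Back to image layout for use as classifier input.
pc_images = components.T.reshape(5, h, w)
print(pca.explained_variance_ratio_)
```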

For more details, please have a look at the following post: Re-sample Meaning


Hi OHagolle and alahfakhri,

thank you both for your ideas (and my apologies for the delayed reply; I was off this topic for a few weeks). I might explore both approaches; it depends on what I eventually want to measure. My initial wish was to compute a range of vegetation indices that have a mechanistic meaning for the land cover classes that I hope to distinguish. Textural indices could be a good addition. PCA would rely on a more empirical relationship, but it does come with the advantage of condensing the available information into a few layers for subsequent analyses… I am going to think about this again.
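For the index part, this is the kind of thing I mean, e.g. NDVI (a minimal sketch; the band arrays are placeholder assumptions, already cloud-masked and co-registered):

```python
import numpy as np

# Hypothetical inputs: 10 m reflectance arrays for B08 (NIR)
# and B04 (red), converted to float.
nir = np.load("B08.npy").astype("float32")
red = np.load("B04.npy").astype("float32")

# NDVI = (NIR - red) / (NIR + red); guard against division by zero.
ndvi = np.where(nir + red > 0, (nir - red) / (nir + red), np.nan)
```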

have a good day!