Image Fusion Using Sentinel-1 and Sentinel-2

Any reason why you only need the RGB bands? I think you could make full use of the entire spectrum.

Anyways, you have to resample the full product to 10 meters before you can make a band subset. Not the best solution, but currently the only choice for proper metadata handling.

I recommend the following papers:

Thank you sir…

I did preprocessing steps (calibration, speckle filtering, terrain correction) for S1 data.
For S2 data I resampled bands 4, 3, 2 (if I select all bands, writing the product takes more than an hour and nothing happens), then I mosaicked and masked my study area. Next I did the collocation, selecting the S1 product as master and S2 as slave. After that I applied PCA, but it keeps loading and nothing happens. What went wrong and what should I do, sir?
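The S1 preprocessing chain described above (calibration, speckle filtering, terrain correction) can also be run headless with SNAP's gpt tool. A minimal graph sketch, assuming a GRD input; the file names are placeholders and the filter/DEM choices are just examples:

```xml
<graph id="S1_preprocessing">
  <version>1.0</version>
  <node id="Read">
    <operator>Read</operator>
    <parameters><file>S1_GRD_product.zip</file></parameters>
  </node>
  <node id="Calibration">
    <operator>Calibration</operator>
    <sources><sourceProduct refid="Read"/></sources>
    <parameters><outputSigmaBand>true</outputSigmaBand></parameters>
  </node>
  <node id="Speckle-Filter">
    <operator>Speckle-Filter</operator>
    <sources><sourceProduct refid="Calibration"/></sources>
    <parameters><filter>Lee Sigma</filter></parameters>
  </node>
  <node id="Terrain-Correction">
    <operator>Terrain-Correction</operator>
    <sources><sourceProduct refid="Speckle-Filter"/></sources>
    <parameters><demName>SRTM 3Sec</demName></parameters>
  </node>
  <node id="Write">
    <operator>Write</operator>
    <sources><sourceProduct refid="Terrain-Correction"/></sources>
    <parameters>
      <file>S1_preprocessed.dim</file>
      <formatName>BEAM-DIMAP</formatName>
    </parameters>
  </node>
</graph>
```

Run it with `gpt graph.xml`. This reproduces the GUI steps and avoids keeping intermediate products open.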

PCA is a computationally intensive task. Give it some time :slight_smile:

However, you shouldn’t include the “count” and “collocation_flag” bands in the PCA, they don’t contain image information.

Ok sir… So, I should select all other bands except “count” and “collocation_flag” as source bands. Am I right, sir?

Yes. You can look at them, but they are not related to what is happening at the Earth’s surface.

Yes sir. I performed PCA. Now I want to extract the flooded areas from this fused data. What is the procedure in SNAP?

I’m not sure if a PCA is the best way to start flood mapping - do you have a source for this idea?

No sir. I have heard about IHS, PCA, Brovey transformation, and wavelet-based fusion. But I don’t know what the best method is.

PCA is just a way to reduce the redundancy of a feature space (here a raster stack). It tells you where the images are alike (have the highest share of variation) and where they are different.
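As an illustration of that idea, a PCA of a raster stack can be sketched with plain NumPy. The stack here is synthetic and the band layout (two S1 bands plus four S2 bands) is only an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical collocated stack: 6 bands (e.g. S1 VV/VH + four S2 bands), 100x100 pixels
stack = rng.normal(size=(6, 100, 100))

# Reshape to (pixels, bands) and center each band
X = stack.reshape(6, -1).T
Xc = X - X.mean(axis=0)

# Eigen-decomposition of the band covariance matrix
eigval, eigvec = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigval)[::-1]      # PC1 = direction of largest variance

# Project the pixels onto the sorted eigenvectors and restore the image shape
pcs = (Xc @ eigvec[:, order]).T.reshape(6, 100, 100)
```

The first components carry the shared variation of the stack; the last ones carry what is left over, which is why dropping them reduces redundancy.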

It is not necessarily linked to water body mapping, but there are studies which tested it:

Thank you for these papers. I referred to them. I have finished the PCA and HPF. After that, they standardize the PCA components and apply inverse gray processing to the high-pass-filtered microwave data. I have no idea about these processes.
In this flow chart, can you please explain how to do all of these processes?
(RGB-to-IHS and inverse IHS transforms, standardization of PCA components, inverse gray processing, density separation, histogram matching, and multiplication of the results?)
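Three of the steps listed above are simple pixel-wise operations and can be sketched in NumPy; this is an illustrative interpretation of those terms, not the exact processing of the cited papers, and all array names are hypothetical:

```python
import numpy as np

def standardize(band):
    """Standardization of a PCA component: zero mean, unit variance."""
    return (band - band.mean()) / band.std()

def invert_gray(band):
    """Inverse gray processing: flip the gray scale so dark becomes bright."""
    return band.max() + band.min() - band

def match_histogram(source, reference):
    """Remap source values so their CDF matches the reference image's CDF."""
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    matched = np.interp(s_cdf, r_cdf, r_vals)   # map source CDF onto reference values
    return matched[s_idx].reshape(source.shape)
```

Histogram matching is typically used here to bring the filtered SAR band into the radiometric range of the optical component before the two are combined.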

Sorry, you have to ask more specifically. I’m afraid I cannot tell you how to “do all that” :slight_smile:

SNAP is image processing software, but you as a user must know what you want to do and why.

Ok sir… :blush:

We published a new tutorial on the fusion of Sentinel-1 and Sentinel-2 today:

Synergetic use of radar and optical data


Sir, I have a new problem here. If I intend to fuse Sentinel-1 and Sentinel-2 and to evaluate the accuracy of the results, I see two ways to do it: creating a stack and collocating the products. What is the difference between the two?

Please use the search function for such things:


Hi, would you mind sharing more information about how you implemented the coregistration and managed to show both Sentinel-1 and Sentinel-2 at the same time? Many thanks.

Please check this tutorial: Synergetic use of radar and optical data


Appreciate your advice! I tried the tutorial and the collocation was successful.
=D A big thumbs up for these very helpful tutorials!


Is it coherent and correct to apply a classifier like RF to the PCA bands? You’ve said that for RF classification, it’s sufficient to stack the SAR and optical bands.

Thank you.
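Mechanically, running RF on PCA components is the same as running it on the stacked bands: each pixel becomes one feature vector. A toy sketch with scikit-learn, where the stack, the labels, and the band layout are all synthetic placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Hypothetical input: 4 feature layers (e.g. PC1-PC3 plus a SAR band), 50x50 pixels
stack = rng.normal(size=(4, 50, 50))
labels = (stack[0] > 0).astype(int)       # toy "flooded / not flooded" reference

# One row per pixel, one column per feature layer
X = stack.reshape(4, -1).T
y = labels.ravel()

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
flood_map = clf.predict(X).reshape(50, 50)
```

In practice the labels would come from training polygons, and accuracy should be assessed on pixels held out from training, not on the training pixels as in this sketch.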