Image Fusion Using Sentinel-1 and Sentinel-2

Hello dini_ramanda, how did you go about your classification of the Sentinel-2 dataset? I am getting something strange, but Sentinel-1 works perfectly.

Radiometric correction only slightly changes the pixel values; it is not mandatory for classification (unless you want to compare scenes from different dates).
But there have been cases where classification only worked after reprojection.

Hello, I am doing S1 and S2 fusion. As S1 preprocessing I did calibration, speckle filtering, and terrain correction of the S1 data in SNAP; I did nothing to the S2 data.
For the fusion I need to co-register both datasets to the same reference coordinate system. I need detailed steps on how to do the co-registration with the help of a coordinate reference, and what the next steps for the fusion will be. I need help.

If you’ve terrain-corrected the S1 products, just use Collocate or CreateStack on the S1 and S2 products.
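
For reference, a minimal sketch of the Collocate step in snappy (SNAP's Python interface), assuming the S1 product is already terrain-corrected and the S2 product defines the reference grid. The file names are placeholders, and the source keys and parameter names ('master'/'slave') may differ between SNAP versions, so check the Collocation operator help in your installation:

```python
# Sketch: collocate a terrain-corrected S1 product with an S2 product via snappy.
# File names are placeholders; parameter and source names may differ by SNAP version.
# Note: multi-resolution S2 products may need the Resample operator first.
from snappy import ProductIO, GPF, HashMap

s2 = ProductIO.readProduct('S2A_MSIL2A_scene.dim')       # reference geometry
s1 = ProductIO.readProduct('S1A_IW_GRDH_TC_scene.dim')   # terrain-corrected S1

params = HashMap()
params.put('targetProductName', 'S1_S2_collocated')
params.put('renameMasterComponents', True)
params.put('renameSlaveComponents', True)
params.put('resamplingType', 'NEAREST_NEIGHBOUR')

sources = HashMap()
sources.put('master', s2)   # S2 defines the target grid
sources.put('slave', s1)    # S1 is resampled onto it

collocated = GPF.createProduct('Collocate', params, sources)
ProductIO.writeProduct(collocated, 'S1_S2_collocated', 'BEAM-DIMAP')
```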

Hello, thanks for your suggestion.

Now I have done the Collocate step from SNAP's Raster > Geometric menu.

Are Collocate, Create Stack, and co-registration the same thing?

Now, what do I do next for the fusion of SAR and optical data (PCA, IHS, Brovey, etc.)? Is this possible in SNAP, or do I have to export to ERDAS?

Kindly guide me, I am new to remote sensing.

If you are new to the field, take your time to study and compare some opinions and approaches. There is no standard way of fusing the data, so it also depends on what you want to do with the data after the fusion. I listed some references to fusion approaches here: Fusion of S-1A and S-2A data

It might be suitable to convert the SAR data to dB (also via right-click on the SAR bands), because this creates a more suitable distribution of backscatter values. Explanations are given here: dB or DN for image processing? and here: Classification Sentinel-1 problems with MaxVer
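
For clarity, the dB conversion is just a log transform of the calibrated backscatter. A minimal NumPy sketch, assuming the calibrated sigma0 band is already available as an array:

```python
# Sketch: linear sigma0 to dB, the same conversion SNAP applies with
# "Convert bands to dB". Assumes sigma0 is a NumPy array of calibrated
# backscatter values; the small epsilon avoids taking the log of zero.
import numpy as np

def to_db(sigma0, eps=1e-10):
    return 10.0 * np.log10(np.maximum(sigma0, eps))

# Example: sigma0_vv_db = to_db(sigma0_vv)
```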

Once they are in your stack, you can create an RGB image by right-clicking on the product and selecting “Open RGB image window”. This lets you place colors on different bands and shows you their different information content.

Some fusion methods can also be done in the Band Maths tool (right-click > Band Maths).
PCA is available under Raster > Image Analysis > Principal Component Analysis.
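
If you prefer to run the PCA outside SNAP, a hedged sketch with scikit-learn on a stack exported as GeoTIFF might look like this (file name and band order are placeholders):

```python
# Sketch: PCA on a co-registered S1/S2 stack exported from SNAP as GeoTIFF.
# Requires rasterio and scikit-learn; file name is a placeholder.
import numpy as np
import rasterio
from sklearn.decomposition import PCA

with rasterio.open('S1_S2_collocated.tif') as src:
    stack = src.read()                      # shape: (bands, rows, cols)
    profile = src.profile

bands, rows, cols = stack.shape
X = stack.reshape(bands, -1).T              # one row per pixel
valid = np.all(np.isfinite(X), axis=1)      # skip NaN/Inf pixels

pca = PCA(n_components=3)
pcs = np.full((X.shape[0], 3), np.nan, dtype=np.float32)
pcs[valid] = pca.fit_transform(X[valid])

pc_image = pcs.T.reshape(3, rows, cols)     # first three components as an image
profile.update(count=3, dtype='float32')
with rasterio.open('pca_components.tif', 'w', **profile) as dst:
    dst.write(pc_image)
```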

Maybe unsupervised clustering is also an option for you (Raster > Classification > Unsupervised Classification).

Hi everyone, can someone please explain: is there a difference between stacking images and fusing them? I see some recommend that image fusion be done through transformation techniques like PCA. I also want to use S1 and S2 as input variables in an RF classification.

I applied all the S1 and S2 preprocessing steps you have mentioned above in SNAP. I converted my SAR files to decibels and saved them as bands. But I don't know how to mosaic, or how to create a subset of an image, since the SNAP subsetting tool does not allow the use of an ROI or shapefile. My study area has an irregular shape, so I performed the image mosaicking and raster clipping in QGIS. I then used the “Align rasters” function to resample them to 20 m resolution and to register them to a common projection, WGS84 Zone 35.
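
(For reference, the clipping to an irregular study area can also be scripted. A minimal rasterio sketch, assuming the mosaicked stack was exported as GeoTIFF and the shapefile shares its CRS; file names are placeholders:)

```python
# Sketch: clip an exported GeoTIFF to an irregular study area defined by a
# shapefile, as an alternative to clipping in QGIS. File names are placeholders;
# the shapefile must be in the raster's CRS (or be reprojected first).
# Requires rasterio and fiona.
import fiona
import rasterio
from rasterio.mask import mask

with fiona.open('study_area.shp') as shp:
    geometries = [feature['geometry'] for feature in shp]

with rasterio.open('stack_mosaic.tif') as src:
    clipped, transform = mask(src, geometries, crop=True)
    profile = src.profile

profile.update(height=clipped.shape[1], width=clipped.shape[2], transform=transform)
with rasterio.open('stack_clipped.tif', 'w', **profile) as dst:
    dst.write(clipped)
```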

I then created an 18-band stack consisting of 10 selected multi-temporal S2 bands and 8 SAR bands spanning a four-month period (VH & VV × 4 months = 8). Now I am trying to extract the reflectance values into a data frame in RStudio, but the process runs forever, and sometimes the system just crashes and restarts the computer. I want to perform a classification after extracting the reflectance/backscatter values (remember, my file is an 18-band stack).

Please help, what am I doing wrong? And please do not forget to answer my first question. Thank you in advance.
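
One likely reason the extraction runs forever is that the whole 18-band stack is loaded into memory. A hedged Python sketch that reads only the pixels at the training-point locations instead (file, layer, and attribute names are placeholders):

```python
# Sketch: extract band values only at training-point locations rather than
# loading the whole 18-band stack into memory, which can exhaust RAM.
# File names and the 'class' attribute are placeholders; points must be in
# the raster's CRS. Requires rasterio, fiona, and pandas.
import fiona
import pandas as pd
import rasterio

with fiona.open('training_points.shp') as shp:
    coords = [feature['geometry']['coordinates'] for feature in shp]
    labels = [feature['properties'].get('class') for feature in shp]

with rasterio.open('stack_18band.tif') as src:
    values = list(src.sample(coords))       # one array of 18 values per point
    band_names = [f'band_{i + 1}' for i in range(src.count)]

df = pd.DataFrame(values, columns=band_names)
df['class'] = labels
df.to_csv('training_samples.csv', index=False)
```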

Hello, can we do object detection of roads, buildings, bridges, etc. from the fused data?
If yes, how? Please also share some related material on how to do this.

The only object detection currently supported by SNAP is the ship detection module based on the CFAR algorithm (examples).

For more complex objects, such as buildings or roads, you will have to pursue a semantic, object-oriented approach, for example in eCognition.

When I use the stack tool to create the stack with the S1A and S2A products, the following error is shown. Please help me.


Try to write it to a folder with no . in the name (e.g. D:\zip\class\20180515).

It didn’t work; when it gets to 2%, the following error is shown again.

Maybe your D:\ drive is full?

I had the same problem. Could you tell me how to solve it? Thanks in advance.

Please have a look at the raster properties and check whether there is a logical expression in the “valid pixel expression” which uses a variable, e.g. B2.raw, that no longer exists.
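
If it is easier to check this programmatically, a minimal snappy sketch for inspecting and clearing such expressions (the file name is a placeholder; the same can be done in the band properties dialog):

```python
# Sketch: list and clear "valid pixel expressions" that reference bands which
# no longer exist (e.g. B2.raw). File name is a placeholder.
from snappy import ProductIO

product = ProductIO.readProduct('collocated_stack.dim')
for name in product.getBandNames():
    band = product.getBand(name)
    expr = band.getValidPixelExpression()
    if expr:
        print(name, '->', expr)
        band.setValidPixelExpression(None)   # or replace with a corrected expression

ProductIO.writeProduct(product, 'collocated_stack_fixed', 'BEAM-DIMAP')
```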

I also have another question about fusion and classification: I need to use Sentinel-1 and Sentinel-2 to classify glaciers and to fuse them to get a better result. What steps should I take?

What about the steps suggested in this topic?

Should I first stack S1 and S2 and then classify them together, or classify S1 and S2 separately and then stack the results? Which steps?

I meant that this topic is full of ideas about the fusion of S1 and S2. There are multiple ways to do it, depending on your aim and the type of analysis which is performed. But if you want to perform a supervised classification, a stack containing all input sources is the best choice. Please have a look at these hints: Supervised classification for sentinel 1.

Another option is a principal component analysis or an unsupervised classification, but both require scaling the input data to the same value range first.
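
As an illustration, a minimal NumPy sketch of per-band min-max scaling, assuming the stack is loaded as an array of shape (bands, rows, cols):

```python
# Sketch: rescale each band of a stack to the 0-1 range before PCA or
# unsupervised clustering, so dB backscatter and optical reflectance get
# comparable weight. Assumes 'stack' is a NumPy array (bands, rows, cols).
import numpy as np

def minmax_scale(stack):
    scaled = np.empty_like(stack, dtype=np.float32)
    for i, band in enumerate(stack):
        lo, hi = np.nanpercentile(band, [2, 98])   # clip tails for robustness to outliers
        scaled[i] = np.clip((band - lo) / (hi - lo), 0.0, 1.0)
    return scaled
```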

Yeah, I need RF classification for this, so I should stack S1 and S2 first?
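
As suggested above, a stack containing all input sources works well for a supervised classification. For illustration, a hedged scikit-learn sketch of a Random Forest classification on such a stack, assuming the training samples were already extracted to a table (see the extraction sketch above) and the class labels are small integer codes (file and column names are placeholders):

```python
# Sketch: Random Forest classification of an S1+S2 stack with scikit-learn.
# Assumes 'training_samples.csv' has band_1..band_n columns and an integer
# 'class' column; file and column names are placeholders.
import numpy as np
import pandas as pd
import rasterio
from sklearn.ensemble import RandomForestClassifier

samples = pd.read_csv('training_samples.csv')
feature_cols = [c for c in samples.columns if c.startswith('band_')]
rf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
rf.fit(samples[feature_cols].values, samples['class'].values)

with rasterio.open('stack_18band.tif') as src:
    stack = src.read()
    profile = src.profile

bands, rows, cols = stack.shape
X = stack.reshape(bands, -1).T
valid = np.all(np.isfinite(X), axis=1)

pred = np.zeros(X.shape[0], dtype=np.uint8)   # 0 = nodata
pred[valid] = rf.predict(X[valid])

profile.update(count=1, dtype='uint8', nodata=0)
with rasterio.open('rf_classification.tif', 'w', **profile) as dst:
    dst.write(pred.reshape(rows, cols)[np.newaxis, :, :])
```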