I would like to analyse/run some statistics on my processed Sentinel-1 flood detection images. For pre-processing I have used:
- Subsets of each image
- Calibration of each image
- Co-registered all 1A and 1B images
- Used band maths to average my multi-temporal images (more than 5 images combined)
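For reference, the band-maths averaging step amounts to a per-pixel temporal mean over the co-registered stack. A minimal numpy sketch (synthetic data, array names are placeholders, not SNAP output):

```python
import numpy as np

# Hypothetical stack of 5 co-registered, calibrated sigma0 images in
# linear scale (synthetic 100x100 arrays standing in for real scenes).
rng = np.random.default_rng(0)
stack = rng.gamma(shape=4.0, scale=0.02, size=(5, 100, 100))

# Per-pixel temporal average, equivalent to a Band Maths expression
# such as (B1 + B2 + B3 + B4 + B5) / 5 in SNAP.
mean_image = stack.mean(axis=0)

# Note: averaging in dB instead of linear scale gives a different
# (biased) result, so the scale should be chosen deliberately.
mean_db = 10 * np.log10(mean_image)
```

One design point worth checking in your own chain: whether the averaging was done on linear backscatter or on dB values, since the two are not equivalent.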
I want to test whether using multi-temporal images gives me more accurate results than using single images. Unfortunately, my study sites are in Europe, meaning I only have multi-temporal images for the non-flooded situation, not for the flooded one.
At this point I have only visualised my results using the RGB image window.
Does anyone have tips on how I can, e.g., test the variance or compute other statistics in SNAP? It would be nice to see some trends in the dB values for my flooded areas.
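If the scenes are exported from SNAP (e.g. as GeoTIFF and read with rasterio or snappy), the per-class statistics can be computed outside SNAP as well. A minimal sketch, assuming a dB image and a boolean flood mask (both synthetic here; the threshold value is a placeholder, not a recommendation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dB image exported from SNAP; synthetic stand-in data.
db_image = rng.normal(loc=-12.0, scale=2.0, size=(100, 100))

# Hypothetical flood mask, e.g. from a Band Maths threshold expression.
flood_mask = db_image < -14.0

flooded_vals = db_image[flood_mask]
dry_vals = db_image[~flood_mask]

# Basic per-class statistics in dB.
print("flooded: mean %.2f dB, var %.2f" % (flooded_vals.mean(), flooded_vals.var()))
print("dry:     mean %.2f dB, var %.2f" % (dry_vals.mean(), dry_vals.var()))
```

SNAP's own Statistics tool (with a mask as ROI) gives the same kind of per-class mean/variance without leaving the GUI; the sketch above is just the scriptable equivalent.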
Thanks in advance
The region affected by the flood will change depending on the dates of the images you select. Adding more images from different dates does not necessarily increase the accuracy of your result. Normally, two images acquired before the flood event are used. The following link explains, step by step, the pre-processing and post-processing of images for flood detection …
Thanks a lot!
Yes, my results imply no improvement in accuracy when using stacks of multiple non-flooded images, since the flooded image introduces the same amount of noise as when using a single non-flooded image. However, this is what I am trying to prove with statistics.
The flood masking in the video helped with visualisation.
I have been looking for tips on how to test the variance in my two scenarios:
- Single (one flooded image minus one non-flooded image)
- Multi-temporal (one flooded image minus a stack of multiple non-flooded images)
I want to prove or disprove the benefit of using an image stack.
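The two scenarios above can be compared directly on the difference images: in unchanged (dry) areas, averaging N independent reference scenes reduces the reference noise variance by roughly 1/N, so the stacked difference image should be less noisy. A minimal sketch with synthetic stand-in data (all values and array names are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
shape = (100, 100)
n_ref = 5

# Synthetic stand-ins for exported dB images: one flooded scene and a
# stack of non-flooded reference scenes over the same area.
flooded = rng.normal(-16.0, 1.5, size=shape)
refs = rng.normal(-10.0, 1.5, size=(n_ref, *shape))

# Scenario 1: flooded minus a single non-flooded image.
diff_single = flooded - refs[0]

# Scenario 2: flooded minus the temporal mean of the reference stack.
diff_stack = flooded - refs.mean(axis=0)

# With independent speckle, var(diff_single) ~ 2*sigma^2 while
# var(diff_stack) ~ sigma^2 * (1 + 1/N), so the stacked version
# should show lower variance where nothing changed.
print("var(single diff): %.2f" % diff_single.var())
print("var(stacked diff): %.2f" % diff_stack.var())
```

A formal variance-equality test (e.g. an F-test or Levene's test on the two difference images, restricted to known unchanged areas) would turn this visual comparison into the statistical evidence you are after.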