I am fairly new to SAR data, and I am trying to create a flood event/inundation map of the Houston area using S1 GRD VV polarization data. I have processed SLCs before, but I cannot find any automated steps to guide me through how to properly process GRD data.
So far, I have tried these steps:
Remove GRD Border Noise > Calibrate > Speckle Filter (Lee) > Linear to dB > Band Maths for binarization > Range-Doppler Terrain Correction
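The Linear to dB and Band Maths steps in that chain boil down to simple per-pixel arithmetic. A minimal NumPy sketch of what those two steps compute (the -15 dB threshold and the toy backscatter values are my own placeholders, not values from this thread — pick the threshold from your own scene's histogram):

```python
import numpy as np

def to_db(sigma0):
    """Convert linear backscatter (sigma0) to dB, as SNAP's Linear to dB step does."""
    return 10.0 * np.log10(sigma0)

def binarize_water(sigma0_db, threshold_db=-15.0):
    """Band-maths-style binarization: 1 = water (low backscatter), 0 = land.
    threshold_db is a hypothetical placeholder; derive it from your scene."""
    return (sigma0_db < threshold_db).astype(np.uint8)

# toy values: smooth water ~0.01 linear sigma0, rougher land ~0.1-0.2
sigma0 = np.array([0.01, 0.02, 0.1, 0.2])
db = to_db(sigma0)        # [-20.0, -17.0, -10.0, -7.0] approximately
mask = binarize_water(db)  # water pixels flagged as 1
```

In SNAP itself you would express the last line as a Band Maths expression such as `Sigma0_VV_db < -15 ? 1 : 0` on the dB band.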
I also would like to export JUST the flooded zones so I can analyze it in Google Earth. I’m not entirely familiar with the software so if anyone has a suggestion, it would be greatly appreciated.
This tutorial is very useful. The processing chain is a little different from yours.
I mapped floods with Multilook > Calibrate > Linear to dB > Terrain Correction. Then I used Band Maths to extract a water mask.
I guess you can map just the flooded zones by subtracting water masks (water during the flood minus water before the flood).
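That subtraction is just a per-pixel logical operation on the two binary masks. A toy sketch with hypothetical 3x3 masks (1 = water), of the kind Band Maths would export:

```python
import numpy as np

# hypothetical binary water masks from two acquisitions (1 = water, 0 = land)
water_before = np.array([[1, 0, 0],
                         [0, 0, 0],
                         [1, 0, 0]], dtype=np.uint8)
water_during = np.array([[1, 1, 0],
                         [0, 1, 1],
                         [1, 0, 0]], dtype=np.uint8)

# flooded = water during the event that was NOT water before
# (a logical AND-NOT is safer than a plain subtraction, which could
# go negative where water disappeared between the two dates)
flooded = ((water_during == 1) & (water_before == 0)).astype(np.uint8)
```

Permanent water bodies (present in both masks) drop out, leaving only the newly inundated pixels.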
How did you define the threshold?
It was also discussed somewhere in this forum. I can’t find the topic now…
It's quite manual. Select an AOI with water and land pixels in roughly equal amounts, compute a histogram for the AOI, and find the lowest point between the two peaks. For me it is usually around -1 dB, but sometimes completely different (as in the example).
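The valley-finding step can be approximated in code. A rough sketch, assuming the AOI gives a reasonably bimodal histogram (the peak-neighbourhood width, bin count, and synthetic dB values are my own assumptions, not from this thread):

```python
import numpy as np

def bimodal_threshold(values_db, bins=100):
    """Locate the two highest histogram peaks, then return the bin centre
    of the lowest count between them. A crude automation of the manual
    procedure: AOI histogram -> valley between the water and land modes."""
    counts, edges = np.histogram(values_db, bins=bins)
    centres = (edges[:-1] + edges[1:]) / 2.0
    p1 = int(np.argmax(counts))           # highest peak
    masked = counts.copy()                # suppress its neighbourhood...
    lo, hi = max(0, p1 - bins // 10), min(bins, p1 + bins // 10)
    masked[lo:hi] = 0
    p2 = int(np.argmax(masked))           # ...to find the second peak
    a, b = sorted((p1, p2))
    valley = a + int(np.argmin(counts[a:b + 1]))
    return centres[valley]

# synthetic bimodal dB values: water mode near -20 dB, land mode near -8 dB
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(-20, 1.5, 5000),
                          rng.normal(-8, 1.5, 5000)])
t = bimodal_threshold(samples)  # falls in the valley between the modes
```

As noted below, this breaks down over urban areas or wind-roughened water, where the histogram is barely bimodal and the two-peak assumption fails.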
Wow, that is exactly what I was looking for. Unfortunately, mapping over urban areas or areas with high wind can make it tough when you barely have a bi-modal histogram.
Also, if I choose to mosaic, which I would prefer in this case, would SAR Mosaic be the best option? I am assuming I would do this before Terrain Correction, but I’m not sure.
I created a stack following the directions from ESA, however, the images are not correctly aligned and I am unable to coregister GRD products. Is there any solution to this? I apologize for asking so many questions. I included a screenshot of the RGB view using master as red and slave as blue/green.
Did you apply orbit files before coregistration? You can increase the number of GCPs and change the window sizes. In general, the images you want to co-register should not be too small, so that a sufficient number of GCPs can be found throughout the image.
Otherwise, you can download the SLC products and use the S1 TOPSAR coregistration. It allows for the definition of sub-swaths and bursts and produces correct results most of the time.
Here is the solution for the SLC coregistration.
For GRD, it would help to have the names of the S1 products you are using, as well as any processing you have done, as I don't know which ESA tutorial you mean. As always, Calibration to backscatter, conversion to dB, and Range-Doppler Terrain Correction should be obligatory. I mention this because your example doesn't look like dB values. I suggest using regular Collocation from the Raster menu.
I am not sure about the RGB composition you made. I would recommend putting the master in green and blue and the slave in red; then you might see the changed/flooded areas in red instead of the result you are getting.
Can't argue with that, @ABraun, regarding coregistration of amplitude values. Still, if there is a good reference acquisition with regular values, I would use that and look at the change relative to the reference image.
But your arguments are sound for distorted/irregular scenes.
I have also extracted the flooded water using a thresholding technique for a complete scene. After processing and thresholding, the binary image is around 2.5 GB. To calculate flood statistics, I need to convert the raster to polygons, and the conversion from the .img file obtained after thresholding is taking a very long time. Is there any simple workaround for compressing the data without losing information? I have seen several inundation layers that are under 100 MB and of 8-bit unsigned data type.
It would be really helpful if anyone could suggest a way to handle the data size.
Have you tried the Convert Data Type operator? When the images are binary after thresholding, you cannot lose any information by converting to 8-bit integer, for example.
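That losslessness is easy to verify outside SNAP. A minimal NumPy sketch, assuming the thresholded band was written as float32 zeros and ones (a common default for computed bands; this is an assumption, not something stated in the thread):

```python
import numpy as np

# hypothetical binary mask as written by thresholding: float32 0.0/1.0
mask_f32 = np.array([0.0, 1.0, 1.0, 0.0], dtype=np.float32)

# for a binary mask, converting to 8-bit unsigned is exactly reversible
mask_u8 = mask_f32.astype(np.uint8)
roundtrip_ok = np.array_equal(mask_u8.astype(np.float32), mask_f32)

# 4 bytes/pixel -> 1 byte/pixel: a 2.5 GB float raster shrinks about 4x
# even before any file-level compression is applied
```

On top of the type conversion, writing the result to a compressed format (e.g. GeoTIFF with DEFLATE/LZW) shrinks a mostly-uniform binary mask much further, which matches the sub-100 MB inundation layers mentioned above.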
Also, the mask manager could be an option because it allows thresholding without creating new products.