ERS data (like that of most other SAR sensors) is primarily acquired in StripMap mode, where the image is generated line by line.
Sentinel-1 is acquired in TOPS mode (illustrated by the ScanSAR principle below). Neighboring pixels are acquired at different phases of the antenna sweep, which is why Sentinel-1 data needs additional calibration and correction steps.
However, most classification approaches need a multi-dimensional feature space. Optical satellites acquire information at multiple wavelengths (visible, infrared, thermal), while SAR data is restricted to a single wavelength, so its potential for classification purposes is limited.
Have a look at these possibilities to increase the feature space of SAR data:
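One common way to widen the feature space is to derive texture measures from the single backscatter band (SNAP offers GLCM textures for this). As a minimal sketch of the idea, with a synthetic image and simple local mean/variance standing in for proper GLCM features:

```python
import numpy as np

def local_stats(img, win=5):
    """Local mean and variance over a sliding window
    (a simple stand-in for SNAP's GLCM texture features)."""
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    # Stack all shifted views that make up the window
    windows = np.stack([
        padded[i:i + img.shape[0], j:j + img.shape[1]]
        for i in range(win) for j in range(win)
    ])
    return windows.mean(axis=0), windows.var(axis=0)

# Synthetic single-band backscatter image (linear sigma0)
rng = np.random.default_rng(0)
sigma0 = rng.gamma(shape=4.0, scale=0.05, size=(100, 100))

mean, var = local_stats(sigma0)
sigma0_db = 10 * np.log10(sigma0)  # dB scaling as one more derived feature

# One SAR band expanded into a 4-band feature stack for classification
features = np.stack([sigma0, sigma0_db, mean, var], axis=-1)
print(features.shape)  # (100, 100, 4)
```

Multi-temporal stacks and dual-polarization bands (VV/VH for Sentinel-1) can be appended to the stack the same way.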
I have a GRD S1 product and I followed the whole workflow.
If I enter “30” for pixel spacing (m) in Range Doppler Terrain Correction, will my product be resampled to a 30 × 30 m pixel size?
Afterwards I need to co-register S1 with L8, and ERS with L5, so I need all final products at the same resolution (30 m).
Instead of resampling at the Terrain Correction step, you can also apply Multi-Looking to your data beforehand. The results are slightly different - I suggest comparing which option produces better image quality at 30 m pixel size.
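To illustrate what Multi-Looking does (the actual operator is in SNAP), here is a toy boxcar version: averaging non-overlapping 3 × 3 blocks turns ~10 m pixels into ~30 m pixels and reduces speckle at the same time, which is the "slightly different" part compared with pure resampling at Terrain Correction:

```python
import numpy as np

def multilook(img, looks=3):
    """Average non-overlapping looks x looks blocks (boxcar multi-looking).
    Coarsens the pixel size and reduces speckle variance."""
    rows = (img.shape[0] // looks) * looks
    cols = (img.shape[1] // looks) * looks
    blocks = img[:rows, :cols].reshape(rows // looks, looks,
                                       cols // looks, looks)
    return blocks.mean(axis=(1, 3))

rng = np.random.default_rng(1)
img = rng.gamma(4.0, 0.05, size=(300, 300))  # synthetic speckled image, ~10 m pixels
ml = multilook(img, looks=3)                 # ~30 m pixels
print(img.shape, "->", ml.shape)             # (300, 300) -> (100, 100)
```

Because the looks are averaged, `ml` is visibly smoother than a simple decimation of `img` would be.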
The results should not differ much. Bilinear uses the minimum number of samples while still giving good performance. Bicubic uses more of the neighbouring samples without a much larger performance hit. The sinc interpolators have much more to calculate and will therefore be slowest.
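The cost difference comes from the kernel footprint: bilinear weights the 4 surrounding samples, bicubic uses 16, and truncated sinc kernels use even more per output pixel. A minimal bilinear interpolator makes the 4-sample case concrete:

```python
import numpy as np

def bilinear(img, y, x):
    """Bilinear interpolation: weighted sum of the 4 surrounding samples.
    (Bicubic would use 16 samples, sinc kernels more still, hence the
    increasing cost per output pixel.)"""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - y0, x - x0
    p = img[y0:y0 + 2, x0:x0 + 2]          # the 2x2 neighbourhood
    w = np.array([[(1 - dy) * (1 - dx), (1 - dy) * dx],
                  [dy * (1 - dx),       dy * dx]])
    return float((p * w).sum())

img = np.arange(16, dtype=float).reshape(4, 4)
print(bilinear(img, 1.5, 1.5))  # 7.5: mean of samples 5, 6, 9, 10
```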
Hello! I need to extract the backscatter coefficient of ERS-1 and ERS-2 to work with Landsat TM (30 meters). What would be your workflow for radiometric and geometric correction?
Thank you very much for the help
That is possible and should be no problem. Just make sure you select nearest neighbor resampling for your binary image in the terrain correction step so that the values will not be recalculated or averaged.
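To see why nearest neighbour matters for a binary image, compare it with bilinear at a 0/1 boundary (toy example, not SNAP code): nearest neighbour picks one original pixel and the value stays binary, while bilinear mixes neighbours and invents intermediate values like 0.5.

```python
import numpy as np

mask = np.array([[0., 0., 1., 1.],
                 [0., 0., 1., 1.]])

# Sample halfway between columns 1 and 2 (right on a 0/1 boundary)
y, x = 0, 1.5

# Nearest neighbour: take the closest pixel -> value remains binary
nn = mask[y, int(round(x))]

# Bilinear: weighted average of the two neighbours -> non-binary value
x0 = int(np.floor(x))
bl = (1 - (x - x0)) * mask[y, x0] + (x - x0) * mask[y, x0 + 1]

print(nn, bl)  # 1.0 0.5
```

With nearest neighbour, every output pixel carries a value that existed in the input, so class labels and masks survive terrain correction unchanged.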
You are saying they differ in topographic features and volume scattering, but these factors strongly affect the final backscattered values. Ultimately, the classified images obtained from these two different inputs will give different results. So how can you say there won't be any difference, if I am not wrong?
Dear @ABraun, I didn’t mean to confuse you - I just want to clarify some issues with Sentinel-1 product processing.
I don’t insist that TNR is irrelevant for GRD processing.
The trouble is that I still can’t find out how to check for the presence of thermal noise in my products.
And would you mind explaining whether users should apply TNR to SLC products as well?
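As far as I understand the Sentinel-1 calibration scheme, the noise annotation of each product carries a thermal-noise LUT, and noise removal subtracts that noise power from DN² before the calibration LUT is applied. A quick way to see whether thermal noise matters is to compare calibrated values over a low-backscatter area (e.g. calm water) with and without the subtraction. A sketch with purely synthetic numbers (the LUT values below are illustrative, not from a real product):

```python
import numpy as np

# Synthetic digital numbers over calm water and illustrative LUT values
dn = np.array([140.0, 150.0, 160.0])  # detected amplitudes
noise_lut = 1.5e4                     # thermal-noise power (from noise annotation)
cal_lut = 5.0e2                       # sigma0 calibration constant A

sigma0 = dn**2 / cal_lut**2                    # calibration without noise removal
sigma0_tnr = (dn**2 - noise_lut) / cal_lut**2  # noise power subtracted first
sigma0_tnr = np.clip(sigma0_tnr, 1e-10, None)  # guard against negative power

to_db = lambda x: 10 * np.log10(x)
print(to_db(sigma0))
print(to_db(sigma0_tnr))
```

Over dark targets the difference can reach several dB, while over bright targets DN² dominates the noise term and the correction is barely visible - which is presumably why its relevance depends so much on the scene.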