I’m a beginner with the SNAP program, so I apologize in advance for my simple questions.
After downloading a Sentinel product from https://scihub.copernicus.eu/dhus/#/home, I open it in SNAP. How can I obtain geocoded and orthorectified images at full resolution?
The Deburst operator requires quite a lot of RAM, but it is not needed for GRD data.
If you want to overlay Sentinel-1 GRD in Google Earth, the following steps are enough:
Calibration to Sigma0
Range-Doppler Terrain Correction (leave WGS84 as coordinate system)
Optionally, convert to dB at this point to increase the contrast in your image.
File > Export > Other > View in Google Earth
The last step creates a KMZ file of your current view.
You could additionally perform Antenna Pattern Removal as a first step, or apply Multilooking or a Speckle Filter, but for viewing in Google Earth this is not required. If you prefer to script the chain instead of clicking through the menus, see the sketch below.
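The same operators can be called from Python via snappy. This is only a rough sketch: the input path and output name are placeholders, the DEM name is my usual choice, and the operator and parameter names should be checked against `gpt <Operator> -h` for your SNAP version.

```python
# Rough snappy sketch of the chain above (Calibration -> Terrain Correction -> dB).
# Paths and parameter values are placeholders/assumptions; verify operator names with gpt -h.
import snappy
from snappy import ProductIO, GPF

HashMap = snappy.jpy.get_type('java.util.HashMap')

# Read the downloaded GRD product (example file name)
product = ProductIO.readProduct('S1A_IW_GRDH_example.zip')

# 1. Calibration to Sigma0
cal_params = HashMap()
cal_params.put('outputSigmaBand', True)
calibrated = GPF.createProduct('Calibration', cal_params, product)

# 2. Range-Doppler Terrain Correction (WGS84 is the default map projection)
tc_params = HashMap()
tc_params.put('demName', 'SRTM 3Sec')
terrain_corrected = GPF.createProduct('Terrain-Correction', tc_params, calibrated)

# 3. Optional: convert to dB for better contrast
db = GPF.createProduct('LinearToFromdB', HashMap(), terrain_corrected)

# Write the result; the KMZ export for Google Earth is still done from the SNAP GUI
ProductIO.writeProduct(db, 'S1_Sigma0_TC_dB', 'BEAM-DIMAP')
```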
Regarding your second question: Google Earth Engine is something different and not easy to handle. SNAP is entirely sufficient for processing single images.
My answers may sometimes circle around the idea I want to convey (that is the teaching style I developed over many years at different universities).
RGB and HSV are just colour representations; there is no physical meaning in combining Sigma0 bands. You simply use band ratios or decompositions to make some parts of the image stand out from others.
The only thing you have to consider is that band arithmetic is different for logarithmic data (Sigma0 in dB):
If only two polarizations are available, many people use the ratio VV/VH as the third colour. But if you have already converted to dB, you need to subtract the bands instead, VV-VH (source).
When your colour composite consists of data from different acquisition dates, you can ignore this.
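To illustrate the dB arithmetic with made-up numbers: the ratio of linear Sigma0 bands and the difference of the dB bands give the same result, because 10*log10(VV/VH) = 10*log10(VV) - 10*log10(VH). The values below are invented purely for this check.

```python
# Quick numerical check (numpy) of why the ratio becomes a difference in dB.
import numpy as np

vv_lin = np.array([0.12, 0.30, 0.05])   # Sigma0 VV, linear scale (example values)
vh_lin = np.array([0.02, 0.06, 0.01])   # Sigma0 VH, linear scale (example values)

ratio_then_db = 10 * np.log10(vv_lin / vh_lin)                 # ratio on linear data, then to dB
db_then_diff  = 10 * np.log10(vv_lin) - 10 * np.log10(vh_lin)  # difference of the dB bands

print(np.allclose(ratio_then_db, db_then_diff))  # True
```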