Hi, I am working with Sentinel-1 GRD data in Google Earth Engine. I tried mosaicking the data to cover my study area, but the resulting raster showed substantial pixel-value variations across scene boundaries. Is there a way to normalize the mosaic so that pixel values from different scenes appear more similar in the resulting image?
As shown below, the two adjacent scenes appear distinct: the right scene is slightly darker (lower backscatter values).
You should mention more details of the data products. Different passes are at different times and often different days, so observation geometry and atmosphere will differ. These differences are reduced by atmospheric correction, but not perfectly. Data quality is lower near pass edges due to longer atmospheric path lengths. You may want to investigate masking low-quality pixels using level-2 flags and sun/sensor angles. In general, some differences will remain and can be very visible in imagery, but the resulting differences in geophysical quantities like chlor_a are within the expected error ranges, because the algorithms (e.g., band ratios) limit the impact of residual differences caused by different observation times.
Hi @gnwiii, thanks for your response. The image was created using the median composite of 106 IW scenes of Sentinel-1 Synthetic Aperture Radar (SAR) GRD data available in the Google Earth Engine data catalogue. I applied a temporal filter to retain only observations between June 2015 and March 2016, hence the 106 scenes. I think the variation among adjacent scenes affects my classification rather significantly.
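To clarify what I mean by the median composite: it is just a per-pixel median over the stack of co-registered scenes, ignoring pixels a scene does not cover. A toy numpy sketch of the operation (the array shape and dB values below are invented, not my actual data):

```python
import numpy as np

# Toy stack: 5 "scenes" of 4x4 backscatter values in dB (invented numbers).
# In my real case this would be 106 co-registered Sentinel-1 GRD scenes.
rng = np.random.default_rng(0)
stack = rng.normal(loc=-12.0, scale=2.0, size=(5, 4, 4))

# Simulate incomplete coverage: mark some pixels of one scene as missing
# (outside that scene's footprint).
stack[0, :2, :] = np.nan

# Per-pixel median, ignoring missing observations -- the same rule that
# a median reducer applies per band in GEE.
composite = np.nanmedian(stack, axis=0)

print(composite.shape)  # (4, 4)
```

The seam artefacts arise because, near a scene boundary, neighbouring output pixels draw their medians from different subsets of scenes.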
My experience is with optical sensors. Maybe someone with SAR experience can offer advice.
Although this forum is not about GEE, I can generally recommend combining SAR images from one orbit direction (either ascending or descending). In GEE, this would be:
var Sentinel1 = ee.ImageCollection('COPERNICUS/S1_GRD')
  .filter(ee.Filter.eq('orbitProperties_pass', 'ASCENDING'));  // or 'DESCENDING'
To combine data from all available passes, I would suggest using terrain-flattened gamma0 rather than GEE’s ellipsoid-based sigma0, and structuring your backscatter composite as proposed in our recently published paper (early open access):
Small, D., Rohner, C., Miranda, N., Ruetschi, M., & Schaepman, M. E. (2021). Wide-Area Analysis-Ready Radar Backscatter Composites. IEEE Transactions on Geoscience and Remote Sensing. https://doi.org/10.1109/TGRS.2021.3055562
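In very rough terms, the composite lets each scene contribute per pixel in proportion to a quality weight rather than treating all scenes equally. A heavily simplified numpy sketch of such per-pixel weighted compositing (the weight values are invented placeholders standing in for the local-resolution measure; see the paper for the actual method):

```python
import numpy as np

def weighted_composite(gamma0, weights):
    """Per-pixel weighted mean over scenes.

    gamma0, weights: arrays of shape (scenes, rows, cols),
    gamma0 in linear power units.
    """
    return (gamma0 * weights).sum(axis=0) / weights.sum(axis=0)

# Two scenes, one pixel (toy linear backscatter values).
gamma0 = np.array([[[0.10]], [[0.02]]])
# Scene 1 gets a higher (invented) local-resolution weight.
weights = np.array([[[3.0]], [[1.0]]])

print(weighted_composite(gamma0, weights))  # [[0.08]]
```

The weighting is what suppresses low-quality observations (e.g., poorly resolved slopes) instead of letting them dominate a simple mean or median.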
Thanks for the suggestion. Unfortunately, I had already applied the orbitProperties_pass filter, and it didn’t solve the issue. Also, sorry if this question seems a bit off-topic; I thought it was appropriate here because the issue seems to be associated with the preprocessing of the GRD dataset, which was done in the Sentinel-1 Toolbox.
Thanks for sharing @eyeinsky! I am pretty sure this novel approach of yours will solve the issue, and I am very keen to try the local resolution weighting (LRW) approach in my study. If I may ask a follow-up question: what is the best way to implement the framework over a batch of Sentinel-1 scenes? Are you planning to make this approach available in SNAP, as a toolbox, for example?
It would be a departure from SNAP’s history of focusing on a relatively small number of input images. ESA has not approached me about working together under a contract to integrate the algorithm into a future version.