Processing steps for Sentinel-1 GRD product

Hello, I’m completely new to the field of remote sensing and I’m trying to understand the different processing steps that get applied when using Sentinel-1 imagery. I have read the documentation describing the Level-1 products, and it says that in the GRD product, the data has already been multi-looked to reduce speckle, projected onto an ellipsoid model of the earth, and corrected for terrain height.

However, I have read several threads in this forum describing the steps one should perform when starting with the GRD product (e.g. How Decompose Sentinel-1 GRDH data, Sentinel-1A data preprocessing, etc.), which seem to say you should (or at least can) do some of these steps again, perhaps in different versions (e.g. different types of speckle filtering)? I don’t really understand this: if you’re going to do your own speckle filtering and terrain correction anyway, would it not make more sense to start with the SLC product, which hasn’t had any of that done yet? If you’re starting from the GRD product that has already been multi-looked, would that not interfere with subsequent processing that you apply?

Is it ever the case that you would want to use the values in the GRD product directly, without any further processing (or maybe with only the transformation from DN to sigma-naught values), or do you generally always want to do some kind of processing, where the type depends on your application?

It looks like there’s a whole API for getting just the TIFF file for the GRD product with only minimal/generic processing beyond what was done to create the Level-1 product, but most of the discussion I’ve found on the internet seems to suggest that people do other processing steps requiring other parts of the SAFE file, so I’m not sure in what case you would ever want to get just the TIFF file.


If you use SLC data, you have full control over the multi-looking (the conversion of rectangular pixels to squares). Additionally, SLC data contains the phase information, which is required for interferometry.

GRD data has been multi-looked to 10x10 m pixels, mainly to reduce the file size and make the data more usable for basic users. As a side effect, some of the speckle was reduced, but it is always possible to apply additional multi-looking to decrease the resolution to 20x20 m or lower, e.g. if you are aiming at a regional analysis or if you want to merge the data with 30 m Landsat imagery, for example. The actual reduction of speckle is done with dedicated speckle filtering, which has been applied to neither the SLC nor the GRD product.
Neither of these two steps is obligatory, but both can further improve the image quality. What is needed in both cases is the Range Doppler Terrain Correction (or Ellipsoid Correction over flat areas) to project the data into a coordinate reference system.
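To make the multi-looking step a bit more concrete: conceptually it amounts to averaging blocks of neighboring intensity pixels, which coarsens the resolution while reducing speckle variance. A minimal NumPy sketch (my own illustration, not SNAP's implementation, which additionally handles the differing range/azimuth spacings):

```python
import numpy as np

def multilook(img, n_az=2, n_rg=2):
    """Average non-overlapping n_az x n_rg blocks of an intensity image.

    Illustrative only: real multi-looking chooses the window so that the
    rectangular slant-range pixels come out roughly square on the ground.
    """
    rows = (img.shape[0] // n_az) * n_az
    cols = (img.shape[1] // n_rg) * n_rg
    trimmed = img[:rows, :cols]
    return trimmed.reshape(rows // n_az, n_az, cols // n_rg, n_rg).mean(axis=(1, 3))

# A 4x4 intensity image becomes 2x2 after 2x2 multi-looking
img = np.arange(16, dtype=np.float32).reshape(4, 4)
print(multilook(img))  # [[ 2.5  4.5] [10.5 12.5]]
```

Each output pixel is the mean of a 2x2 block, so the noise standard deviation drops while the pixel spacing doubles.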

About the calibration to Sigma0: this is required if you want to compare images from different dates or extract the exact amount of backscatter. If you simply want to classify a single image, you do not necessarily need calibration.
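For reference, the Sentinel-1 radiometric calibration itself is a simple per-pixel formula, sigma0 = DN² / A², where A is the sigmaNought value from the calibration annotation of the SAFE product. A small sketch (the numeric values here are made up; in practice A is interpolated from the annotation's lookup table):

```python
import numpy as np

def dn_to_sigma0(dn, a_sigma):
    """Sentinel-1 radiometric calibration: sigma0 = DN^2 / A^2.

    a_sigma is the sigmaNought calibration value, assumed here to be
    already interpolated to one value per pixel from the annotation LUT.
    """
    dn = np.asarray(dn, dtype=np.float64)
    a_sigma = np.asarray(a_sigma, dtype=np.float64)
    return (dn ** 2) / (a_sigma ** 2)

# Hypothetical digital numbers and calibration constants
print(dn_to_sigma0([400.0, 800.0], [500.0, 500.0]))  # [0.64 2.56]
```

The same formula with the betaNought or gammaNought LUT yields beta0 or gamma0 instead.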

Please let us know if this is helpful to you and which points remain open. I think you did a good job in getting into the topic and reading some basics, but I also admit that some of them are confusing and even contradictory. Also, the sequence in which these steps should be applied is still a matter of discussion.


Thanks, that’s really helpful.
A few clarification/followup questions:

What do you mean about classifying just a single image? If images aren’t comparable to other images unless you calibrate them, then what information would a classification be based on if you only have a single uncalibrated image? Do you mean classifying different parts of the same image relative to each other?

The documentation for the GRD product says the data has been “projected to ground range using the Earth ellipsoid model WGS84. The ellipsoid projection of the GRD products is corrected using the terrain height specified in the product general annotation.” What is the difference between this and the Range Doppler Terrain Correction / projecting into a coordinate reference system, which you mentioned as having not yet been done?

If spatial resolution is important in my particular use case, does that mean I should use SLC rather than GRD, should not perform multi-looking but should probably do some type of speckle filtering? (I haven’t looked into the different types, but my understanding is that some are better than others at preserving fine-grained detail.)

Is it generally the case that the GRD product is mostly used by “basic users”? I don’t really have a sense of what kinds of applications need how much custom processing in order to get useful results. For context, what I’m trying to do is agricultural crop cycle monitoring, of areas that are on the order of a few thousand square meters.

Yes, for example classifying water bodies within the image.

GRD products have square pixels and are more or less geometrically correct regarding their geolocation. However, this only works for flat areas. If you have topography in your image, the incidence angle of the imaging system causes geometric distortions. These can be corrected with a digital elevation model. Some more notes on this: The reason of range doppler terrain correction?

If you need the image in its original form, SLC offers you the most options, yes. GRD products are ready to use but they were already processed in a way which could have diminished the information you need. But SLC data do not necessarily contain more details.

[quote=“akenney, post:3, topic:16957”]
For context, what I’m trying to do is agricultural crop cycle monitoring, of areas that are on the order of a few thousand square meters.
[/quote]
I personally would stick to the GRD products then. They are fine regarding their spatial resolution (10x10 m) and you can quickly generate time-series of data by calibrating the products to Sigma0. The temporal aspect should give you more information than the pure pixel resolution.

Okay, thank you!

I have read that another reason the SLC products can be useful is that they contain the phase information, which could potentially be used for interferometry for measuring heights of things such as plants. However I read another discussion in this forum suggesting that the Sentinel-1 data isn’t very useful for that because of the comparatively long time delay between successive image acquisitions and maybe because of heterogeneity of ground cover within each individual pixel. So do you think it’s not worth trying to do that and would be better just to look at the backscatter values over time?

In my opinion, height estimates from Sentinel-1 are impossible. The only useful parameter that can be retrieved from Sentinel-1 SLC data in terms of land-cover classification is the coherence, as an indicator of change between two images. But it takes large amounts of data and also time to exploit it, so I don’t think that justifies using 8 GB SLC data over 1 GB GRD in your case.

Okay I have another question. Through Sentinel Hub I can download TIFF files and I can specify them to cover just the region I’m interested in, and I can tell them to be calibrated to sigma0 and there’s even an option for orthorectification (although I’m not sure if that means Range Doppler or something else). If I want to look at all the historical data, which is every week or two for the past couple of years, it would certainly be easier to deal with files that are a few megabytes than the 1GB SAFE-format data (I downloaded one of them and opened it in SNAP, but as soon as I tried to do any processing it crashed; I have read about people having various memory issues with SNAP which I might be able to fix by changing settings but I don’t know). So, do you think the Sentinel Hub processed images could be adequate and useful for crop monitoring, or do you think it’s pretty important to have the SAFE files and be able to do custom adjustments?

The data from Sentinel Hub is calibrated to Sigma0 and ortho-rectified (using SRTM data), so basically you can use it. Just make sure you download it in 32-bit float to preserve the decimal values. The data should mainly range between 0 and 1, with a few outliers above 1.
For crop mapping, the data should be fine, because you are mainly dealing with flat areas (no geometric or radiometric distortions).
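Since the linear Sigma0 values mostly sit between 0 and 1, it is common to convert them to decibels for visualization and time-series analysis, which spreads the low values out. A small sketch of that conversion (the clipping floor of 1e-6 is my own choice to avoid -inf on zero pixels):

```python
import numpy as np

def sigma0_to_db(sigma0, floor=1e-6):
    """Convert linear sigma0 to decibels; clip near-zero values to avoid -inf."""
    return 10.0 * np.log10(np.maximum(sigma0, floor))

print(sigma0_to_db(np.array([1.0, 0.1, 0.01])))  # [  0. -10. -20.]
```

In dB, typical land backscatter falls roughly in the -25 to 0 dB range, which is much easier to stretch and compare across dates than the raw linear values.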

If you want to have more control over the DEM used in the terrain correction, speckle filtering, calibration, etc., you will probably need to download the GRD data in SAFE format and process it yourself. The graph tool (gpt) allows you to automate that.
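For anyone finding this later, a SNAP graph is just an XML file describing a chain of operators that gpt runs from the command line. A minimal sketch of a Calibration → Terrain-Correction chain (the operator names follow SNAP's naming, but the specific parameter values here are illustrative assumptions — build and export your own graph from the Graph Builder to be sure):

```xml
<graph id="grd_preprocess">
  <version>1.0</version>
  <node id="Read">
    <operator>Read</operator>
    <parameters><file>${input}</file></parameters>
  </node>
  <node id="Calibration">
    <operator>Calibration</operator>
    <sources><sourceProduct refid="Read"/></sources>
    <parameters><outputSigmaBand>true</outputSigmaBand></parameters>
  </node>
  <node id="Terrain-Correction">
    <operator>Terrain-Correction</operator>
    <sources><sourceProduct refid="Calibration"/></sources>
    <parameters><demName>SRTM 3Sec</demName></parameters>
  </node>
  <node id="Write">
    <operator>Write</operator>
    <sources><sourceProduct refid="Terrain-Correction"/></sources>
    <parameters><file>${output}</file><formatName>GeoTIFF</formatName></parameters>
  </node>
</graph>
```

You would then run it for each scene with something like `gpt graph.xml -Pinput=S1A_...zip -Poutput=result.tif`, which makes batch processing of a whole time series straightforward.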