I have been studying SAR acquisition geometry and getting to grips with many important concepts. From everything I have studied, I have seen that resolution and pixel spacing are not the same thing.
In a very interesting document by Marcus Engdahl @mengdahl, "Multitemporal InSAR in land-cover and vegetation mapping", it is well explained that:
So this is understandable, OK. But studying SAR geometry, it is said that the slant-range resolution is a constant value while the ground-range resolution is not, because of the projection onto the ground and the variation of the incidence angle. That is all fine, but if you look at the Sentinel-1 specifications, why does the slant-range resolution change along the slant range?? See this image:
The pixel spacing is smaller than the resolution, and that is fine, but why does the range resolution vary from one value to another? Is this "range" actually the ground range? Because in that case it would all make sense!
I also thought the azimuth resolution was constant, but here it seems to change along the track... I don't understand. Maybe there is something I have not understood, sorry about that.
Maybe this post is interesting for someone who is trying to understand SAR geometry.
In slant range, the range and azimuth resolutions are indeed constant per beam/sub-swath (not per acquisition mode). SM has six possible beams, IW consists of three, EW of six and WV of two, see:
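To illustrate why a resolution that is constant in slant range still projects to a varying one on the ground, here is a minimal sketch. The chirp bandwidth and incidence angles below are illustrative assumptions on the order of the Sentinel-1 IW values, not the exact specifications:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def slant_range_resolution(bandwidth_hz):
    """Slant-range resolution from the chirp bandwidth: c / (2B)."""
    return C / (2.0 * bandwidth_hz)

def ground_range_resolution(slant_res_m, incidence_deg):
    """Project the slant-range resolution onto the ground plane."""
    return slant_res_m / math.sin(math.radians(incidence_deg))

# Illustrative chirp bandwidth (~56.5 MHz)
sr = slant_range_resolution(56.5e6)
print(f"slant-range resolution: {sr:.2f} m (constant along the swath)")
for inc in (29.0, 37.0, 46.0):  # near-, mid- and far-range incidence angles
    gr = ground_range_resolution(sr, inc)
    print(f"incidence {inc:4.1f} deg -> ground-range resolution {gr:.2f} m")
```

The slant-range value stays fixed, while the ground-range value shrinks towards far range where the incidence angle is larger.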
Hello everyone, sorry for resurrecting this question but I have some questions about the SAR geometry as well.
I don't understand the concept of pixel spacing very well. Can someone explain it to me? I would also like to understand the concept of multilooking to obtain square pixels. Why is this necessary? Is it just for analytical purposes? If, for example, I do a coherence analysis, will multilooking change my coherence values in the image compared to no multilooking?
If I do multilooking for the purpose of studying intensity correlation, and this already reduces noise, should I nevertheless filter the images again?
Should I multilook the image and then do the coherence analysis, or the other way around?
Pixel spacing represents the distance on the ground covered by a pixel in the range and azimuth directions. They differ because of the slant acquisition geometry. Multilooking averages the appropriate number of pixels in range and azimuth so that the resulting pixels represent the same distance in both directions.
CCMEO has a good tutorial on the fundamentals of remote sensing. Look at the chapter on radar properties for an explanation of multilooking.
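As a sketch of that averaging step, one way to pick look factors that give roughly square ground pixels is shown below. The spacings are the ones quoted in this thread and the incidence angle is an illustrative assumption; `multilook_factors` is a hypothetical helper, not a SNAP operator:

```python
import math

def multilook_factors(rg_spacing_m, az_spacing_m, incidence_deg):
    """Integer range/azimuth looks giving roughly square ground pixels.

    rg_spacing_m is the slant-range pixel spacing; dividing it by the sine
    of the incidence angle projects it to a ground-range spacing.
    """
    ground_rg = rg_spacing_m / math.sin(math.radians(incidence_deg))
    if ground_rg < az_spacing_m:
        return max(1, round(az_spacing_m / ground_rg)), 1
    return 1, max(1, round(ground_rg / az_spacing_m))

# SLC-like spacings quoted in this thread: 2.33 m (slant range), 13.9 m (azimuth)
looks_rg, looks_az = multilook_factors(2.33, 13.9, 39.0)
print(f"{looks_rg} range look(s) x {looks_az} azimuth look(s)")
# -> 4 range look(s) x 1 azimuth look(s)
```

With these numbers the familiar 4x1 multilook for Sentinel-1 IW falls out naturally.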
Dear lcevi, I posted this question in another thread, but maybe it makes more sense to ask it here.
Dear People:
The metadata file of my images says 2.33 m and 13.917 m, so by not doing anything except applying
terrain correction I then get other values, 13.92 m and 3.69 m, for the source GR pixel. I am confused about what is what.
If I apply a coherence window of, let's say, 7x2, do I have to multiply this 7 by 2.33 m or by 3.69 m to know exactly what distance it covers on the ground??
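Assuming the 3.69 m value is the terrain-corrected ground-range pixel spacing and 13.92 m the azimuth spacing (both taken from the metadata quoted above), the ground footprint of a coherence window can be worked out like this; the 7x2 window size itself is a hypothetical choice:

```python
# Hypothetical window size (7 pixels in range x 2 in azimuth) and the
# pixel spacings quoted above; only the spacings come from the metadata.
rg_window, az_window = 7, 2
gr_spacing_m, az_spacing_m = 3.69, 13.92  # terrain-corrected ground-range / azimuth

print(f"window footprint: {rg_window * gr_spacing_m:.2f} m in range "
      f"x {az_window * az_spacing_m:.2f} m in azimuth")
# -> window footprint: 25.83 m in range x 27.84 m in azimuth
```

So, on terrain-corrected pixels, the range dimension of the window is multiplied by the ground-range spacing, not the slant-range one.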
As far as I have studied optical remote sensing, spatial resolution means the area on the ground represented by a square pixel. But this causes me some confusion where SAR is concerned, because the pixel dimensions are not always square. I understand that pixel spacing is the distance in metres between the centres of two adjacent pixels in both the slant-range and azimuth directions. How does SAR image resolution differ from the spatial resolution of an optical image?
Which distance exactly? between the centers of adjacent pixels, or between the edges of those pixels?
Yes, it makes no difference in ground-range images, since the pixel grid is formed of uniformly sized pixels, but what about slant-range images, where the pixels stretch in far range?
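The stretch can be quantified: with a fixed slant-range pixel spacing, the ground distance each pixel covers depends on the local incidence angle, so it varies across the swath. A quick sketch, using the 2.33 m spacing from this thread and illustrative incidence angles:

```python
import math

slant_spacing_m = 2.33  # fixed slant-range pixel spacing across the image

for label, inc_deg in (("near range", 30.0), ("far range", 46.0)):
    ground_m = slant_spacing_m / math.sin(math.radians(inc_deg))
    print(f"{label} (incidence {inc_deg:.0f} deg): "
          f"{ground_m:.2f} m of ground per slant-range pixel")
```

Note that each slant-range pixel covers *more* ground at near range (smaller incidence angle), which is why the near-range part of a slant-range image looks compressed and stretches when projected to ground range.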
Also, can the tutorial you have kindly linked be cited in a research thesis?