How does multilooking work?

Does the multilooking operation in SNAP work by averaging each line (resp. column) with the lines (resp. columns) that come after it in the image?
That is, is the averaging performed over lines or over columns depending on the number of looks specified in azimuth or in range?

I was confused when I read the definitions of multilooking I found on the Internet, which say it consists of averaging multiple acquired looks, whereas the products I am dealing with (from TerraSAR-X) have only a single complex band.

There are two different definitions of multi-looking.

  • Frequency domain ML: The bandwidth of the image is divided into several parts (looks) and each of them is used to form an image. Combining these images results in a smoothed representation at range resolution.
  • Spatial domain ML: Multiple lines of pixels in range direction are combined with the respective number of lines of pixels in azimuth direction by averaging within a small moving window, resulting in a roughly square pixel at reduced range resolution.
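The frequency-domain variant can be illustrated on a 1-D complex signal. This is a minimal NumPy sketch, not SNAP's implementation: the spectrum is split into sub-bands, one look is formed from each sub-band, and the detected looks are averaged. The function name and the choice of four looks are illustrative.

```python
import numpy as np

def freq_domain_multilook(slc_line, n_looks):
    """Illustrative frequency-domain multilooking of a 1-D complex signal.

    Splits the spectrum into n_looks sub-bands, forms one image (look)
    per sub-band, detects each look (|.|^2), and averages the looks.
    """
    n = slc_line.size
    spectrum = np.fft.fft(slc_line)
    band = n // n_looks
    looks = []
    for k in range(n_looks):
        # Keep only the k-th sub-band of the spectrum, zero the rest.
        sub = np.zeros(n, dtype=complex)
        sub[k * band:(k + 1) * band] = spectrum[k * band:(k + 1) * band]
        # Back to the spatial domain, then detect to intensity.
        looks.append(np.abs(np.fft.ifft(sub)) ** 2)
    # Incoherent average of the looks reduces speckle at the cost of
    # resolution (each look uses only part of the bandwidth).
    return np.mean(looks, axis=0)

# Simulated single-look complex line (white circular Gaussian "speckle").
rng = np.random.default_rng(42)
line = rng.standard_normal(1024) + 1j * rng.standard_normal(1024)
ml_int = freq_domain_multilook(line, n_looks=4)
print(ml_int.shape)  # (1024,)
```

Because the four looks come from disjoint parts of the spectrum, they are statistically independent for white speckle, so the averaged intensity has a lower relative standard deviation than the single-look intensity.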

The latter, spatial-domain multilooking, is what the ML operator in SNAP implements.
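The spatial-domain operation can be sketched as block averaging of the detected intensity. This is a minimal NumPy sketch under common assumptions, not SNAP's code: the `multilook` function name is illustrative, the window is non-overlapping (each output pixel averages one az_looks × rg_looks block), and trailing rows/columns that do not fill a full window are discarded.

```python
import numpy as np

def multilook(intensity, az_looks, rg_looks):
    """Average non-overlapping az_looks x rg_looks blocks of pixels.

    intensity: 2-D array (azimuth lines x range samples) of detected
    values (|SLC|^2). Partial blocks at the edges are discarded.
    """
    rows = (intensity.shape[0] // az_looks) * az_looks
    cols = (intensity.shape[1] // rg_looks) * rg_looks
    trimmed = intensity[:rows, :cols]
    # Reshape so each (az_looks, rg_looks) block gets its own axes,
    # then average over those axes.
    blocks = trimmed.reshape(rows // az_looks, az_looks,
                             cols // rg_looks, rg_looks)
    return blocks.mean(axis=(1, 3))

# Example: a single complex band, as in a TerraSAR-X SLC product.
# Averaging 2 azimuth lines with 3 range samples trades resolution
# for reduced speckle and a more nearly square ground pixel.
rng = np.random.default_rng(0)
slc = rng.standard_normal((100, 300)) + 1j * rng.standard_normal((100, 300))
ml = multilook(np.abs(slc) ** 2, az_looks=2, rg_looks=3)
print(ml.shape)  # (50, 100)
```

Note that the averaging is done on intensity, not on the complex values: averaging the complex samples directly would partially cancel the signal because of the random phase, which is why the single complex band is detected first.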


That makes sense, thanks a lot.