Is there some way to combine the radiometric calibration and multilooking, e.g. before geocoding or terrain-flattening?
That would be very useful: running them as separate steps generates extremely large intermediate files, e.g. a full-resolution beta nought product when only a 10x10-multilooked beta nought is actually needed.
Those large intermediates increase both the memory footprint and the CPU time of the processing, even when full-resolution backscatter products are not required.
This is a task well suited to SNAP's graph processing framework, which lets you chain multiple processing steps together efficiently without writing the intermediate products to disk.
Here is a sample graph you might use to generate calibrated, multilooked and terrain-corrected images from S1 GRDs. Remove or add steps and adjust parameters as you see fit.
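A minimal sketch of such a graph follows. The operator names (`Read`, `Calibration`, `Multilook`, `Terrain-Correction`, `Write`) are standard SNAP operators, but the parameter values shown (10x10 looks, SRTM 3Sec DEM, beta nought output) are illustrative assumptions; check the exact parameter names for your SNAP version with `gpt <operator> -h` before use.

```xml
<graph id="Graph">
  <version>1.0</version>
  <node id="Read">
    <operator>Read</operator>
    <sources/>
    <parameters>
      <!-- ${input} is filled in from the gpt command line -->
      <file>${input}</file>
    </parameters>
  </node>
  <node id="Calibration">
    <operator>Calibration</operator>
    <sources>
      <sourceProduct refid="Read"/>
    </sources>
    <parameters>
      <!-- output beta nought only; enable sigma/gamma bands if needed -->
      <outputBetaBand>true</outputBetaBand>
      <outputSigmaBand>false</outputSigmaBand>
    </parameters>
  </node>
  <node id="Multilook">
    <operator>Multilook</operator>
    <sources>
      <sourceProduct refid="Calibration"/>
    </sources>
    <parameters>
      <!-- 10x10 looks, matching the example in the question -->
      <nRgLooks>10</nRgLooks>
      <nAzLooks>10</nAzLooks>
    </parameters>
  </node>
  <node id="Terrain-Correction">
    <operator>Terrain-Correction</operator>
    <sources>
      <sourceProduct refid="Multilook"/>
    </sources>
    <parameters>
      <demName>SRTM 3Sec</demName>
    </parameters>
  </node>
  <node id="Write">
    <operator>Write</operator>
    <sources>
      <sourceProduct refid="Terrain-Correction"/>
    </sources>
    <parameters>
      <file>${output}.dim</file>
      <formatName>BEAM-DIMAP</formatName>
    </parameters>
  </node>
</graph>
```

Because gpt processes the chained operators tile by tile, the full-resolution calibrated product is never materialized on disk, which addresses the file-size concern in the question.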
To apply it to input images, you can use either SNAP's batch processing tool or the gpt command-line utility. I personally prefer gpt.
For example, you could run the above graph from the command line with the following parameters, which will produce "preprocessed.dim" as the output.
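A hypothetical invocation, assuming the graph is saved as `graph.xml` and declares `${input}`/`${output}` placeholders as in the sketch above (the input file name is a made-up example):

```
gpt graph.xml \
  -Pinput=S1A_IW_GRDH_1SDV_20200101T050000_example.zip \
  -Poutput=preprocessed \
  -q 8
```

The `-P` options substitute values into the graph's `${...}` placeholders, and `-q` sets the number of parallel threads.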
The simplest thing to do would be to allow the calibration operator to multilook, as you suggest. We'll also look into updating the calibration vectors after multilooking, but it wouldn't be quite the same thing.