Hello all!
I am an engineering student. For my sophomore term project, I had to develop a tool for SAR calibration. I used SNAP for this, then discovered snappy and decided to write a script to automate the process. I would like to verify that what I am doing for calibration is correct, so it would be great to get some input from you.
I’m a complete beginner at this, so I am unsure about my approach. I read several research papers on SAR calibration methodology, and I use the ‘Calibration Constant’ value for calibration.
Could you kindly check whether the method I am following is correct? These are my steps:
1. First I locate the corner reflector by giving its coordinates to the program.
2. Then I make a 128×128 subset around the point target for further calculations.
3. For background intensity correction, I estimate the mean background intensity from 4 sub-patches of the 128×128 chip, and subtract that mean from the peak to get Ip (the background-corrected mean intensity).
4. Then I calculate the Radar Cross Section from the corner reflector’s leg length and the radar frequency.
5. From the Radar Cross Section (sigma) and the other values, such as Ip, I calculate the calibration constant.
6. I have also plotted the intensity around the corner reflectors in Python, and I get a sharp peak at the location of each corner reflector.
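For reference, steps 3–5 above could be sketched roughly as below. This is not taken from the repository; it is a minimal NumPy sketch that assumes a triangular trihedral reflector (peak RCS = 4πa⁴/3λ²) and one common convention for the constant (K = Ip/σ) — the function names, the 16-pixel background patch size, and the K convention are my assumptions, so please check them against the paper you followed.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s


def trihedral_rcs(leg_length, frequency):
    """Peak RCS of a triangular trihedral corner reflector:
    sigma_max = 4 * pi * a**4 / (3 * lambda**2), a = inner leg length (m)."""
    wavelength = C / frequency
    return 4.0 * np.pi * leg_length**4 / (3.0 * wavelength**2)


def background_corrected_peak(chip, bg_size=16):
    """Estimate mean clutter from the four corner patches of the chip
    (assumed bg_size x bg_size) and subtract it from the peak intensity."""
    n = bg_size
    corners = np.concatenate([
        chip[:n, :n].ravel(), chip[:n, -n:].ravel(),
        chip[-n:, :n].ravel(), chip[-n:, -n:].ravel(),
    ])
    background = corners.mean()
    return chip.max() - background  # Ip: background-corrected peak intensity


def calibration_constant(Ip, sigma):
    """One common convention: K = Ip / sigma (verify against your paper)."""
    return Ip / sigma
```

For example, a 0.5 m trihedral at C-band (5.405 GHz) gives a peak RCS of roughly 85 m² with this formula.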
Is this good enough at a basic level? I plan to add more features, such as interpolation, for better results. With the data we were provided, I’m getting a reasonable value for the Calibration Constant. Can you suggest anything I might be missing?
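The interpolation step I have in mind would refine the discrete peak location to sub-pixel accuracy. A standard way to do this (a hypothetical helper, not yet in the repository) is a parabolic fit through the three samples centred on the maximum along each axis:

```python
def parabolic_peak_offset(p_minus, p0, p_plus):
    """Sub-pixel offset of the peak from the discrete maximum, obtained by
    fitting a parabola through three neighbouring samples. Returns a value
    in (-0.5, 0.5); 0.0 if the three samples are collinear."""
    denom = p_minus - 2.0 * p0 + p_plus
    if denom == 0.0:
        return 0.0
    return 0.5 * (p_minus - p_plus) / denom
```

Applying this along range and azimuth separately gives a fractional pixel location for the reflector peak before reading off the intensity.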
Here’s a link to the GitHub repository for the project: https://github.com/WVik/sar-calibration-using-snappy
One paper I read on point-target calibration: http://aulavirtual.ig.conae.gov.ar/moodle/pluginfile.php/513/mod_page/content/28/MirkoPanozzoZenere.pdf