I am using the following product:
S1A_S3_RAW__0SDH_20220710T213600_20220710T213625_044043_0541DB_56CE
This is the same dataset used as an example in [Rich-Hall's Sentinel-1 Level 0 Decoding Demo](https://github.com/Rich-Hall/sentinel1Level0DecodingDemo).
I have implemented a method to select the ISPs belonging to the same burst, similar to how it is done in Rich-Hall's library, and I am extracting ISPs 408 to 19,658 to reproduce the example image. However, when comparing the IQ data from the two decoders, I noticed differences in the standard deviation, mean, min, and max values. Even after rescaling my output to match, I obtain a similar image but with lower contrast.
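For reference, this is roughly how I am comparing the two outputs (`iq_s1isp` and `iq_richhall` are just my placeholder names for the arrays returned by each decoder, not part of either API):

```python
import numpy as np

def summarize(name: str, channel: np.ndarray) -> None:
    """Print the statistics I am comparing for one channel (I or Q) of the decoded burst."""
    channel = np.asarray(channel, dtype=float).ravel()
    print(f"{name}: mean={channel.mean():.4f}  std={channel.std():.4f}  "
          f"min={channel.min():.4f}  max={channel.max():.4f}")

# iq_s1isp / iq_richhall are placeholders for the complex arrays each decoder returns
# for ISPs 408-19658; the names are mine, not part of either library's API.
# summarize("s1isp (I)", iq_s1isp.real)
# summarize("sentinel1decoder (I)", iq_richhall.real)
# print("std ratio:", np.std(iq_s1isp.real) / np.std(iq_richhall.real))
```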
I investigated the sample value reconstruction process in both decoders. As expected, both decode this ISP interval in FDBAQ mode, but the reconstructed values obtained from the LUTs differ between the two implementations.
In Rich-Hall's approach, the decoding follows this call chain (as seen in the demo notebook):
`l0file.get_burst_data()` -> `decode_packets()` -> `data_decoder.decode()` -> `FDBAQDecoder()` -> `reconstruct_channel_vals()`
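For context, here is a rough sketch of the entry point I am running on the demo side (I am assuming the `Level0File` wrapper from the `sentinel1decoder` package that the demo uses; the path and burst index are placeholders, so adjust if the actual API differs):

```python
import sentinel1decoder

# Placeholder path to the measurement .dat file inside the SAFE product listed above.
filepath = "path/to/measurement.dat"

l0file = sentinel1decoder.Level0File(filepath)
burst = 0  # placeholder: the burst whose packets cover ISPs 408-19658 in my selection
radar_data = l0file.get_burst_data(burst)  # -> decode_packets -> FDBAQDecoder -> reconstruct_channel_vals
```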
My main question is whether `s1isp` correctly implements the sample reconstruction law for each bit rate of the FDBAQ mode, as detailed in Section 4.4 of the SAR Space Packet Protocol Data Unit. If so, is this implemented in `get_fdbaq_lut()`?
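To make the question concrete, this is the branching I understand from Section 4.4; the thresholds and LUT contents below are placeholders to be checked against the spec tables, and only the simple-vs-normal control flow is the point:

```python
import numpy as np

# Per-BRC THIDX thresholds at or below which simple reconstruction applies, and the number
# of magnitude codes per BRC -- to be verified against the tables in the spec.
SIMPLE_RECON_THIDX_MAX = {0: 3, 1: 3, 2: 5, 3: 6, 4: 8}
MCODE_MAX = {0: 3, 1: 4, 2: 6, 3: 9, 4: 15}

# PLACEHOLDER LUTs: the real simple-reconstruction values B, normalised reconstruction
# levels NRL and sigma factors SF must come from the spec (or from the decoder under test).
B = {brc: np.zeros(thr + 1) for brc, thr in SIMPLE_RECON_THIDX_MAX.items()}
NRL = {brc: np.zeros(mmax + 1) for brc, mmax in MCODE_MAX.items()}
SF = np.zeros(256)

def reconstruct_sample(sign: int, mcode: int, brc: int, thidx: int) -> float:
    """Reconstruct one FDBAQ sample from its sign bit, magnitude code, BRC and THIDX."""
    if thidx <= SIMPLE_RECON_THIDX_MAX[brc]:
        # Simple reconstruction: the magnitude code is used directly, except for the
        # largest code, which maps through the simple-reconstruction table B.
        if mcode < MCODE_MAX[brc]:
            value = float(mcode)
        else:
            value = float(B[brc][thidx])
    else:
        # Normal reconstruction: normalised reconstruction level scaled by the sigma factor.
        value = float(NRL[brc][mcode]) * float(SF[thidx])
    return -value if sign else value
```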
Another suspicious observation is the value of THIDX. For the same ISP (e.g., ISP 408), `s1isp` always assigns a value of 2 to all BAQ blocks, whereas in the other library it varies. Since this parameter determines whether simple or normal reconstruction is applied, it seems that `s1isp` is always decoding using simple reconstruction.
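As a quick sanity check, this is how I am tabulating THIDX per BAQ block (the `thidx_*` variables are placeholders for wherever each library exposes the per-block threshold indices):

```python
import numpy as np

def thidx_histogram(name: str, thidx_values) -> None:
    """Count how often each THIDX value occurs across the BAQ blocks of one ISP."""
    values, counts = np.unique(np.asarray(thidx_values), return_counts=True)
    print(name, dict(zip(values.tolist(), counts.tolist())))

# thidx_s1isp / thidx_richhall are placeholders for the per-block THIDX arrays of ISP 408.
# thidx_histogram("s1isp", thidx_s1isp)                # what I see: every block reports 2
# thidx_histogram("sentinel1decoder", thidx_richhall)  # varies from block to block
```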
I am new to SAR (junior engineer, BTW) and I want to use this to test RFI mitigation techniques, so I need to understand how the sample values of the IQ data are reconstructed. I have also started a discussion on GitHub; answer wherever you prefer.
Thanks in advance!