How to decode Sentinel-1 L0 raw data, and how to accomplish range compression correctly?

This is my IDL code.
It implements the exact formula as written in the document section

; :Description:
;    Decode the Tx Pulse Starting Frequency (TXPSF) in [MHz]
; :Params:
;    txpsfCode : code of the TXPSF. The code shall be signed already
;    txprr     : value of the Tx Pulse Ramp Rate in [MHz / us]
; :Author: nunomiranda
function decode_txpsf, txpsfCode, txprr, HERTZ=HERTZ
  fref = 37.5347222D ; reference frequency [MHz]
  txpsf = (txprr / 4D / fref) + txpsfCode * fref / 2D^14
  if keyword_set(HERTZ) then txpsf *= 1d6
  return, txpsf
end
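For anyone working in Python rather than IDL, the same arithmetic can be sketched as follows (a direct translation of the function above; the code value in the example call is made up, not taken from a real packet):

```python
F_REF = 37.5347222  # reference frequency [MHz], as in the IDL code above

def decode_txpsf(txpsf_code, txprr, hertz=False):
    """Decode the Tx Pulse Starting Frequency from its (already signed)
    telemetry code, given the Tx Pulse Ramp Rate txprr in MHz/us."""
    txpsf = txprr / (4.0 * F_REF) + txpsf_code * F_REF / 2**14
    return txpsf * 1e6 if hertz else txpsf  # [Hz] or [MHz]

# Hypothetical example values (not from a real packet):
print(decode_txpsf(-12000, 1.08))
```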

Also, the S-1 Level-1 products (SLC or GRD) report the values decoded by the processor. For example:

            <noiseFormat>BAQ 5 Bit</noiseFormat>
          <swlList count="1">

I couldn’t figure out whether TXPSF is the start frequency itself, or whether I should add it to the centre frequency.

I don’t believe you actually need to add the radar frequency to it.
The txPulseLength, txPulseStartFrequency and txPulseRampRate are all you need to construct the chirp and range-compress the data.
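To make that concrete, here is a minimal Python sketch (with made-up telemetry values; the real ones come from the decoded annotations) of building the nominal chirp replica from those three parameters and range-compressing a line by frequency-domain correlation:

```python
import numpy as np

# Assumed, illustrative telemetry values (not from a real packet):
tx_pulse_length = 52.4e-6   # txPulseLength [s]
tx_start_freq = -28.0e6     # txPulseStartFrequency [Hz]
tx_ramp_rate = 1.08e12      # txPulseRampRate [Hz/s]
fs = 64.34e6                # range sampling frequency [Hz], assumed

# Replica: linear FM chirp, phase phi(t) = 2*pi*(f0*t + k/2 * t^2)
t = np.arange(0, tx_pulse_length, 1 / fs)
replica = np.exp(2j * np.pi * (tx_start_freq * t + 0.5 * tx_ramp_rate * t**2))

def range_compress(line, replica):
    """Range-compress one echo line by correlation with the replica."""
    n = len(line)
    H = np.conj(np.fft.fft(replica, n))  # matched filter in frequency domain
    return np.fft.ifft(np.fft.fft(line) * H)

# Toy check: compressing the replica itself yields a sharp peak at zero delay
line = np.pad(replica, (0, 1000))
rc = range_compress(line, replica)
print(np.argmax(np.abs(rc)))  # peak at sample 0
```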

Dear Nuno,

I downloaded the sample decoded data you referred to. I found several files in TIFF format. I tried to read them in Matlab using imread('xxx.tiff'), but an error occurred: ‘Error using rtifc. Unsupported sample format 5’. How can I read this type of TIFF file and get the pixel values?

Best regards

Yes, I think so. Most current software focuses on processing based on the image products, not the raw echo data. Access to the decoded raw data is important for many researchers who work on image formation algorithms. I very much look forward to using such decoding software. :slight_smile:

At ESA, RAW data have always been considered a user product. We have always made them available, starting from ERS, and we continue to do so for S-1. This is not the case for other missions. However, we have never made RAW decoding software available.
We have supported, and will continue to support, anyone who would like to develop a “decoder”, and we fully support making it available in the public domain.

My personal opinion is that people willing to process S-1 RAW data (or that of any other mission) need to learn about the sensor specificities.
If you want to process S-1 TOPS data, you have to learn about S-1. Being able to decode the RAW data is part of the learning curve. I can witness it.

The issue you are encountering is related to Matlab’s TIFF support, which doesn’t understand complex pixel formats. The standard TIFF library and also GDAL seem able to read this data properly (including IDL+ENVI).

For example:

tiffinfo S1A_IW_RAW__0SDV_20140825T224532_20140825T224605_002102_00217A_7AF4.TIFF_0001.tiff
TIFF Directory at offset 0x8 (8)
Image Width: 19768 Image Length: 15510
Bits/Sample: 64
Sample Format: complex IEEE floating point
Compression Scheme: None
Photometric Interpretation: min-is-black
Samples/Pixel: 1
Rows/Strip: 1
Planar Configuration: single image plane
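In Python, one workaround (an assumption on my side, not mentioned above: the tifffile package also understands this sample format) can be sketched as a round trip on a tiny synthetic image:

```python
import numpy as np
import tifffile

# Write a tiny complex64 image, i.e. the "sample format 5"
# (complex IEEE floating point) that Matlab's imread rejects:
data = (np.arange(6, dtype=np.float32)
        + 1j * np.ones(6, dtype=np.float32)).astype(np.complex64).reshape(2, 3)
tifffile.imwrite('complex_demo.tiff', data)

# Read it back; tifffile understands the complex SampleFormat tag.
img = tifffile.imread('complex_demo.tiff')
print(img.dtype)
```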


Dear Nuno Miranda,
sorry to use this post for a different product, but maybe you can help me with my request.
For my PhD project I need to work with Level 0/RAW ERS data, collecting all possible information recorded during the acquisition step and used in the focusing stage. I have permission from ESA to download them through Eolisa. Unfortunately the files have the extension .E1 or .E2 and not .dat (as required by commercial software such as ENVI, which reads ERS-1 Level-0 with extension dat*.001).
Do you know of any available software to open this type of data? Moreover, do you know if any document describing the decoding is available?

The native format of ERS was the CEOS format, a folder-based product containing several files, such as the leader file and the data file (dat.*001).

When ASAR came into play, ESA started offering users ERS in either the native or the Envisat format. The two formats were maintained in parallel for many, many years. In recent years, ESA has put in place a huge activity aimed at consolidating the ERS archive: repatriating the old magnetic tapes from every station in the world, dumping their content (when possible), and performing an analysis to segregate good from bad data, in order to store the data with today’s standards and ensure that users can still access this data in the future. The idea is to allow users to work on long time series: ERS-1 + ERS-2 + ASAR + S-1.

As part of this activity, it could very well be that the decision was taken not to support the native CEOS format. I recommend dropping an email for a clear confirmation.

RAW data is unprocessed; I am not sure that software like ENVI will do anything with it (unless you have the SARscape extension, which may do the SAR focusing!?). ERS in Envisat format is nothing else than the ERS ISPs packed with an MPH/SPH acting as the leader file. All the information can be found here:

If this format is not supported by ENVI/SARscape, you need to go back to the ENVI people to update the software accordingly. Another solution is to get the SLC or PRI directly from ESA.

However, I recommend asking eohelp for confirmation in the first place.
Kind regards,
Nuno Miranda

Dear Nuno,
thank you for your reply, and I’m sorry to respond so late. Unfortunately ENVI is not able to deal with radar raw data, and SARscape is an expensive tool that is not useful for my project. I need to decode the RAW data to collect all the information associated with it. I read your response about S1 RAW data and would like to learn how to decode it. Do you know of any open-source code to do that?

Dear Nuno,
the document Sentinel-1 SAR Instrument Calibration and Characterisation Plan (S1-PL-ASD-PL-0001) is not available for download.
Will I also need it to decode Level 0 data?

Hi Nuno,
the FTP account in this document doesn’t work anymore. I want to debug the Huffman decoder in my FDBAQ decompressor. Is there any chance the example file could be posted again?


Hello @nuno.miranda, could you check the FTP account for us, please?

FYI, I am creating a public repository at with my current investigations into Sentinel-1 Level 0 decoding software. In addition to using some raw datasets, I check against the reference dataset mentioned at the end of (section 5.3) that the decoding is working properly, although I have not yet figured out how to compare with the TIF outputs. No idea yet how far this is going to get …


I am struggling with FDBAQ decoding and am wondering whether I am reading the binary file correctly.
1/ I am confused by the BAQ mode on page 33 of the SAR Space Packet Protocol Data Unit. The documentation says that the BAQ mode is bits 3 to 7 of byte 37 of the secondary header, and reading this byte I always get the value 0x0C. The confusing part is that 0x0C has not been shifted >>3 to remove the error flag and n/a bits. If I shift, I get 1, which is “not applicable”, while 0x0C prior to shifting would match the nominal “FDBAQ”; so why the value without shifting? I believe I am reading the header correctly, as the Space Packet Count and PRI Count just before this field increment correctly from one block to the next.
2/ Assuming nominal FDBAQ, I dump the User Data Field to a file for decoding prior to implementing the Huffman decoder. The S1_L0_Decoding_Package only provides the final output after Huffman decoding and reconstruction by applying THIDX. Would it be possible to have the intermediate step of just the Huffman decoding, for example the first two 128-sample blocks of S1A_IW_RAW__0SDV_20200608T101309_20200608T101341_032924_03D05A_A50C.SAFE/s1a-iw-raw-s-vv-20200608t101309-20200608t101341-032924-03d05a.dat?
3/ Just to make sure I dump the User Data Field correctly: I observe that NQ is always 11919 in S1A_IW_RAW__0SDV_20200608T101309_20200608T101341_032924_03D05A_A50C.SAFE/s1a-iw-raw-s-vv-20200608t101309-20200608t101341-032924-03d05a.dat: is this correct?
Thank you

Hi @jmfriedt,

This is a nice initiative you have taken. I’m not sure if the people who might be able to help you are reading here in the forum.
Maybe @mengdahl can tell you a better point of contact.

Thank you for your support. Actually, thanks to the code provided by the author of

who kindly contacted me by email, I managed to compare my code with his working example, and have achieved a realistic map from my first data processing, as shown at . This is still far from operational, as the software meets an unexpected condition (an impossible BRC value while processing) and stops at some point, and I am aware that only a subset of the possible cases is implemented (FDBAQ type D), but at least some basic insight into the data structure seems to have been gained. The idea of indexing the bit number from 0 for the most significant bit (page 10 of the Packet Protocol Data Unit) strikes me as awkward at best, and had me struggling for a week to understand the Huffman-encoded data order, but so be it; the information is indeed documented, although I had started reading from p. 13, after the introductory tables of the documentation!
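For what it is worth, here is a minimal Python sketch of that MSB-0 bit-numbering convention; under this reading, “bits 3 to 7” of a byte are simply its five least-significant bits, which would explain why 0x0C decodes to FDBAQ without any >>3 shift (an interpretation on my side, to be checked against the document):

```python
def bits_msb_first(byte_val, start, count):
    """Extract `count` bits from a byte where bit 0 is the MSB,
    the bit-numbering convention used in the S-1 packet documents."""
    shift = 8 - start - count
    return (byte_val >> shift) & ((1 << count) - 1)

# Byte 37 of the secondary header observed as 0x0C:
# bits 3..7 in MSB-0 numbering are the 5 least-significant bits,
# so 0x0C = 12 is read directly, matching the FDBAQ mode code.
print(bits_msb_first(0x0C, 3, 5))  # 12
```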

A bit more progress: an error in BRC3 was corrected, and now a whole raw dataset (S1A_IW_RAW__0SDV_20210112T173201_20210112T173234_036108_043B95_7EA4.SAFE, echoes only, discarding calibration for the moment) leads to consistent decoding. BRC4 was added to the software but not validated, since the dataset I am testing on does not require this encoding; it will most certainly exhibit some flaw in the state machine when tested. Plotting the raw dataset magnitude seems consistent (tentatively interpreted as 5 bursts of one swath). As I am not yet familiar with range compression, and even less with azimuth compression, I have not validated the comparison with the processed data, but at least there are some files to play with if needed.

Back to basics: a complete StripMap dataset was processed, demonstrating, I believe, proper decoding of the raw data, conversion to {I,Q} coefficients and chirp-shape estimation from telemetry. Range compression is achieved by simply cross-correlating with the predicted chirp shape, leading to a single-pixel-wide correlation peak for point-like targets, which is fine. However, I am confused about azimuth compression. Looking at the bottom of https://github. com/jmfriedt/sentinel1_level0 (Sao Paolo StripMap processing), we see that the phase along azimuth is parabola-shaped, meaning a linear frequency shift, which I did not expect, since in Ground-Based Synthetic Aperture Radar (GB-SAR) the phase is expected to be linear, with a slope equal to the spacing interval between measurements. Further analysis ( allows recording the pulse shape along azimuth following range compression, and indeed cross-correlating this pulse shape along the azimuth direction with the whole image, as shown on , seems to achieve azimuth compression on the whole image with a single-pixel-wide cross-correlation peak. Could anyone hint at the cause of the leftover linear frequency modulation after range compression, and how to identify the pulse shape along azimuth? This is for SM, not IW/EW, where I understand some leftover frequency shift is expected from the beam sweeping along the swath. I have the feeling this is related to Doppler centroid identification, but at the moment this topic is beyond my understanding. Any pointer towards relevant information would be welcome. Thank you.
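As a tentative sketch of where the parabolic phase comes from in standard spaceborne SAR geometry (all numbers below are rough assumptions, not values from this dataset): the range history R(t) ≈ R0 + v²t²/(2·R0) produces a quadratic two-way phase, i.e. a linear azimuth frequency ramp with FM rate Ka ≈ 2v²/(λ·R0), which can serve to build an azimuth reference function:

```python
import numpy as np

# Rough orders of magnitude for a C-band LEO SAR (assumptions):
wavelength = 0.0555   # wavelength [m]
v = 7100.0            # effective platform velocity [m/s]
R0 = 850e3            # slant range of closest approach [m]
prf = 1700.0          # pulse repetition frequency [Hz]

# Quadratic two-way phase from R(t) ~ R0 + v^2 t^2 / (2 R0)
# gives the azimuth FM rate:
ka = 2 * v**2 / (wavelength * R0)   # [Hz/s]

# Azimuth matched filter over an assumed illumination time T:
T = 0.8                                  # [s]
t = np.arange(-T / 2, T / 2, 1 / prf)
ref = np.exp(-1j * np.pi * ka * t**2)    # replica to cross-correlate along azimuth
print(f"azimuth FM rate ~ {ka:.0f} Hz/s, filter length {t.size} samples")
```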

Apologies for the spaces in the GitHub URL … the STEP forum won’t allow me to post this comment/question otherwise.


I tried to fix the figure URLs, but somehow could not find a target; can you please try posting them again? Otherwise, you can send them to me in a private message and I will insert them.

Concluding (at the moment at least) this whole investigation, I modified the software to output binary files, one per swath, named with the NQ number of samples per range line and the GPS time of the acquisition. To demonstrate the integrity of the generated datasets, I compare my output at with the output of SNAP analyzing the Level 1 data of this StripMap dataset at , and I believe the results are close enough to demonstrate proper decoding of the Level 0 data. At the moment I am not providing the satellite attitude data (quite easy to add with the current software architecture), nor is the processing for range and azimuth compression as fancy as described in the ESA documentation. Range compression uses a correlation with the nominal chirp shape; once achieved, I search for the strongest echo along azimuth, assume it is due to a point-like source, and hence that this echo is the azimuth impulse response. I correlate all azimuth lines with this reference, which I have not (yet) understood how to construct analytically without applying Doppler centroid analysis (best explained, imho, in Madsen, S. N. (1989). Estimating the Doppler centroid of SAR data. IEEE Transactions on Aerospace and Electronic Systems, 25(2), 134-140), which includes a core piece of information I have not found in the ESA documentation, namely that the autocorrelation quickly vanishes to 0 and that only samples at delays +1 or -1 are relevant, as observed experimentally. Since my initial objective was EMI analysis, I think I am going to stop there unless more features are requested. Thank you, ESA, for providing these amazing datasets and the documentation needed for processing; that was fun.
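For reference, the correlation-based estimator of that paper boils down to the phase of the lag-1 autocorrelation along azimuth, scaled by PRF/(2π); a toy Python sketch on a synthetic signal (all values made up for illustration):

```python
import numpy as np

prf = 1700.0            # pulse repetition frequency [Hz], assumed
f_dc_true = 230.0       # synthetic Doppler centroid [Hz]
n = 4096
t = np.arange(n) / prf

# Toy azimuth signal: a complex exponential at the centroid frequency + noise
rng = np.random.default_rng(0)
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * 0.1
s = np.exp(2j * np.pi * f_dc_true * t) + noise

# Average cross-correlation coefficient at lag 1 (only the +/-1 lags
# matter, as noted above), then convert its phase to a frequency:
accc = np.sum(s[1:] * np.conj(s[:-1]))
f_dc_est = prf * np.angle(accc) / (2 * np.pi)
print(f"estimated Doppler centroid: {f_dc_est:.1f} Hz")
```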
