Download and process a subset of an S1 scene

Hi everyone,

I’m trying to build a time series of S1 observations for a small area (one agricultural parcel). I know how to use gpt to subset an S1 scene and further process only that small part (apply precise orbits, calibration, terrain flattening, terrain correction). But I wanted to make it even more efficient.
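
For context, the per-scene chain looks roughly like this (a sketch with placeholder file names and polygon, not my exact commands):

```
gpt Subset -PgeoRegion="POLYGON((16.30 45.10, 16.35 45.10, 16.35 45.15, 16.30 45.15, 16.30 45.10))" \
    -t subset.dim S1A_IW_GRDH_1SDV_<date>.SAFE.zip
gpt Apply-Orbit-File -t orb.dim subset.dim
gpt Calibration -PoutputBetaBand=true -t cal.dim orb.dim   # beta0 is needed by Terrain-Flattening
gpt Terrain-Flattening -t tf.dim cal.dim
gpt Terrain-Correction -t tc.dim tf.dim
```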

Since the area I need is very small, it makes no sense to download whole S1 scenes just to extract data for that area and throw 99.9% away. And since I want a time series for the whole year (even multi-year), this seemed like a waste of bandwidth and time. So I used GDAL’s /vsicurl/ driver to extract the needed subset directly from the online TIFFs in the product’s /measurement directory.
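
In Python, the extraction looks roughly like this (a sketch; the URL and the pixel window are placeholders, and the window offsets have to be derived from the product’s GCPs beforehand):

```python
from osgeo import gdal

# Placeholder URL to one of the GeoTIFFs in the product's /measurement dir.
url = ("/vsicurl/https://some-repo.example/S1A_IW_GRDH_1SDV_<date>.SAFE"
       "/measurement/s1a-iw-grd-vv-<date>-001.tiff")

# Only the requested pixel window travels over HTTP range requests,
# so the bulk of the ~1 GB file is never downloaded.
gdal.Translate("parcel_vv.tif", url, srcWin=[12000, 8000, 400, 300])
```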

But now I’m facing the problem of applying further processing to these TIFFs, since they are not formatted the same as the ones obtained via gpt Subset. I’ve managed to convert them into BEAM-DIMAP format (.dim), but I’m still missing the relevant metadata. I can of course download the .xml files with the product metadata, but how do I “embed” them into the subset that I created manually?

I even tried to “cheat” gpt by downloading the whole product directory except the files in /measurement, and placing my extracted files there (with correct naming and added GCPs), but with no luck (I get an “Empty region” error).
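
The GCP part of that attempt was essentially the following (a sketch; URL, offsets, and window size are placeholders matching the extraction above):

```python
from osgeo import gdal

# Re-reference the full scene's GCPs to the extracted window, using the
# same offsets/size as in the subset extraction.
xoff, yoff, xsize, ysize = 12000, 8000, 400, 300

full = gdal.Open("/vsicurl/https://some-repo.example/S1A_IW_GRDH_1SDV_<date>.SAFE"
                 "/measurement/s1a-iw-grd-vv-<date>-001.tiff")

gcps = []
for g in full.GetGCPs():
    # Shift each GCP's pixel/line into the subset's frame and keep only
    # the ones that fall inside the window.
    px, ln = g.GCPPixel - xoff, g.GCPLine - yoff
    if 0 <= px <= xsize and 0 <= ln <= ysize:
        gcps.append(gdal.GCP(g.GCPX, g.GCPY, g.GCPZ, px, ln))

sub = gdal.Open("parcel_vv.tif", gdal.GA_Update)
sub.SetGCPs(gcps, full.GetGCPProjection())
sub = None  # flush to disk
```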

Is there a way I could make this work, or am I just trying in vain?

Best,
Ognjen

Have you tried looking at SNAP’s VFS (Virtual File System) support? See here. You can also find it in the SNAP help.

Thanks, that’s a nice addition, but I don’t think it will solve my problem. I tried it without success, since I access the remote files using a special header with a token, and besides that, I don’t think the repository supports file listing. Even if it did, I’m not sure how I would use this programmatically in a script.
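
On the GDAL side the token is easy to pass along: newer GDAL versions accept a GDAL_HTTP_HEADERS config option (older ones have GDAL_HTTP_HEADER_FILE); the token and URL below are placeholders:

```python
from osgeo import gdal

# Attach the auth header to every /vsicurl/ request.
gdal.SetConfigOption("GDAL_HTTP_HEADERS", "Authorization: Bearer <my-token>")
ds = gdal.Open("/vsicurl/https://some-repo.example/S1A_IW_GRDH_1SDV_<date>.SAFE"
               "/measurement/s1a-iw-grd-vv-<date>-001.tiff")
```

But I don’t see how to do the equivalent through SNAP’s VFS.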

I’ve also investigated how gpt Subset works by looking at the SNAP git repo, hoping to reproduce it with a Python script: constructing the subset.data folder and subset.dim file manually, including the /tie_point_grids and /vector_data folders, for a specified polygon bbox based on the product metadata. But it seems too complex…
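
From what I could tell, the target layout would be roughly this (band and grid names depend on the product, so take it as a sketch):

```
subset.dim                  <- XML header with the product metadata
subset.data/
    tie_point_grids/        <- latitude, longitude, incidence angle, ... (.img/.hdr pairs)
    vector_data/            <- ground_control_points.csv, pins.csv
    Amplitude_VV.img/.hdr   <- band rasters as flat-binary ENVI files
```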

So you probably have a custom solution supplying the data… To the best of my understanding, the S1 reader needs the original format (or one of the imported ones), and the remaining modules work from the BEAM-DIMAP format that the reader generates.

Perhaps you can explain your use case more precisely so someone else can offer some ideas…

@ABraun or @lveci - any suggestions from your side?!

I’m using Mundi to access the data since it’s one of the rare (free) repositories that allows access to, and download of, single files inside a product. This way I can subset only the area that I need from the TIFFs in the /measurement folder instead of downloading everything. But I doubt I can add this source as a VFS.

I don’t know how to construct the BEAM-DIMAP product from these TIFFs (I can use GDAL, but I’m missing the auxiliary data that is present when using gpt Subset). Is there some documentation describing the process of subsetting the original product?

Thanks for your help!
Best,
Ognjen