Dynamic imaging

Is it possible to create dynamic images using Sentinel-1 imagery in either SNAP or the S1TBX?

What do you mean? Filters applied on the fly in memory without modifying the original? Some filtering and colour manipulation can be applied this way but generally EO products are huge and we tend to process and write modified files in tiles.

Thanks for the quick response.

I mean splitting the aggregated data from a single scene into multiple lower resolution images which can then be played as a video or interrogated individually.
It has tremendous utility with moving objects and static rotating objects.

You could export the images and use a gif-maker outside SNAP to do your animation.

It’s splitting the exposure into multiple parts (temporally rather than spatially) that is the key step.

How do you split a single SAR image into temporal parts? Are you referring to sub-looks?

Not sure what you mean by sub-looks.

Instead of processing all the pulses to create one aggregated image, you break the data into discrete packages, for example five blocks, based on when the pulses returned to the sensor. This creates a temporal aspect, although at lower resolution.

That refers to sub-look processing. For S-1 IW the dwell time within a burst is 0.82 seconds, so with sub-look processing you could split that time into shorter intervals, i.e. lower-resolution sub-looks.
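As a rough illustration of that trade-off, here is a toy calculation. The 0.82 s dwell time is the figure quoted above; the nominal ~20 m IW azimuth resolution and the linear scaling with the number of sub-looks are the usual first-order approximation (azimuth resolution is inversely proportional to aperture time), not anything read from SNAP.

```python
# Toy calculation: splitting a Sentinel-1 IW dwell time into N sub-looks.
# Assumes resolution degrades linearly with the shortened aperture time.

DWELL_TIME_S = 0.82    # dwell time within a burst (figure from the thread)
FULL_AZ_RES_M = 20.0   # nominal IW azimuth resolution, roughly 20 m

def sublook_params(n_sublooks: int):
    """Return (interval in seconds, approx azimuth resolution in metres)."""
    interval = DWELL_TIME_S / n_sublooks
    az_res = FULL_AZ_RES_M * n_sublooks   # N times coarser for N sub-looks
    return interval, az_res

for n in (2, 3, 5):
    dt, res = sublook_params(n)
    print(f"{n} sub-looks: {dt:.3f} s each, ~{res:.0f} m azimuth resolution")
```

So five sub-looks would each cover about 0.16 s of the aperture, at roughly five times coarser azimuth resolution.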

For sub-look processing you would need to start with Level-0 (unfocused) data. The S1TBX does not support processing of Level-0 data and there are currently no plans to include this functionality in the toolbox.

Well thanks for sticking with the conversation even though we have two slightly different sets of terminology.

In your opinion, what would be the best method to highlight the effects of Doppler shift from static rotating objects?
I had a brief look at a GRD image and some artifacts are visible in the vicinity of the object, but it’s not leaping out at me.

Can I ask which objects you are interested in?

Maybe you can try IW SLC products. Around 10% of any given burst overlaps with the preceding burst, and another 10% with the succeeding one. If you are lucky enough that your object is in one of the overlapping zones, that will give you two ready-made observations of the object with a time difference of around 2s.
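That geometry can be sketched numerically. A toy model follows, with the burst duration and the 10% overlap fraction used purely as illustrative assumptions rather than values read from a real product annotation:

```python
# Toy model of IW burst overlap: consecutive bursts share roughly 10% of
# their azimuth extent, so a target in the overlap zone is imaged twice.
# Numbers are illustrative, not taken from a real annotation file.

BURST_DURATION_S = 2.7   # assumed azimuth duration of one IW burst
OVERLAP_FRACTION = 0.10  # ~10% overlap with the neighbouring burst

def burst_window(i):
    """Azimuth time window (start, end) of burst i in a toy timeline."""
    step = BURST_DURATION_S * (1 - OVERLAP_FRACTION)
    start = i * step
    return start, start + BURST_DURATION_S

def overlap(i):
    """Overlap window between bursts i and i+1, and the observation gap."""
    _, end_i = burst_window(i)
    start_next, _ = burst_window(i + 1)
    # A target in the overlap is seen once per burst; the time separation
    # between the two observations is roughly one burst cycle.
    gap = start_next - burst_window(i)[0]
    return (start_next, end_i), gap

win, dt = overlap(0)
print(f"overlap window: {win[0]:.2f}-{win[1]:.2f} s, observation gap ~{dt:.2f} s")
```

With these assumed numbers the two observations of an overlap-zone target are a couple of seconds apart, consistent with the "around 2 s" mentioned above.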

The last comment from mengdhal is not fully correct.
You can do sub-look processing on the SLC image, as mentioned by css.

In order to do that you need to: (1) deramp the focused SLC data to recover the full azimuth spectrum (TOPS bursts are acquired with an azimuth-varying Doppler centroid), (2) transform the data to the azimuth frequency domain, and (3) split the spectrum into sub-bands and inverse-transform each one back to the image domain.

The sub-looks will be of reduced geometric resolution but will each represent a different moment of the SAR aperture.
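Once the data are deramped, the decomposition itself is straightforward. Here is a minimal numpy sketch, assuming `slc` is a deramped complex SLC block with azimuth along axis 0; this is toy code, not a SNAP/S1TBX operator:

```python
import numpy as np

def sublook_decompose(slc: np.ndarray, n_looks: int) -> list:
    """Split a deramped complex SLC block into azimuth sub-looks.

    slc: 2-D complex array, azimuth along axis 0, range along axis 1.
    Each sub-look keeps one contiguous slice of the azimuth spectrum,
    i.e. a different portion of the synthetic aperture (a different
    moment in time), at correspondingly reduced azimuth resolution.
    """
    spectrum = np.fft.fft(slc, axis=0)          # to azimuth frequency domain
    n_az = slc.shape[0]
    edges = np.linspace(0, n_az, n_looks + 1, dtype=int)
    sublooks = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = np.zeros_like(spectrum)
        band[lo:hi] = spectrum[lo:hi]           # keep one azimuth sub-band
        sublooks.append(np.fft.ifft(band, axis=0))  # back to image domain
    return sublooks

# Toy usage on a random "SLC" block: 5 sub-looks that could be animated
rng = np.random.default_rng(0)
slc = rng.standard_normal((64, 32)) + 1j * rng.standard_normal((64, 32))
looks = sublook_decompose(slc, 5)
print(len(looks), looks[0].shape)
```

Because the sub-bands partition the spectrum, the sub-looks sum back to the original image; animating `abs(look)` for each sub-look gives the "video" effect discussed earlier. A real implementation would also need to handle the deramping and the centring of the Doppler spectrum, which this sketch glosses over.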

I am not sure, though, how much of this can be done by the toolbox.

Nuno Miranda
S-1 Data Quality Manager


Thanks again for the help.
I have had to put this on the back-burner as the original inquiry I received has changed.
Hopefully when I get a little more time I will look into it and inform you of the results.