Single or Multi-temporal speckle filter?

Thanks for the tips - I’ll keep trying. I had 6 inputs in the last run and will try more.

I don’t think that VV and VH should be mixed inside the multitemporal filter. It may give OKish-looking results but I’m quite sure that there are cases where it produces clearly incorrect results.

I was thinking about this a lot. I agree that it is obvious that an area which is homogeneous in VV can be heterogeneous or even look completely different in VH. But to my understanding this only affects the strength of the filtering.
If an area is homogeneous throughout the stack, the filter considers all the variation to be speckle and therefore smooths more rigorously.
If it varies throughout the stack (perhaps due to temporal changes or different polarizations), the filtered result is less smoothed, but not necessarily wrong.
I made a few tests and didn’t find that a pattern from one image is transposed or ‘copied’ into the others during the multi-temporal filtering.

How do you think larger errors could occur?

There could be cases where, due to regular structures, the targets look very different in VV/VH, which could render the multi-temporal filter unusable.

This could be tested on a suitably large stack of dual-pol acquisitions. Use the Quegan filter separately on the VV and VH stacks and then on the combined stack, and see what the differences are. If you calculate the Quegan filter “by hand”, you could even compute the multi-temporal part of the filter from the VH stack, apply it to the VV stack, and see if weird things happen.
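For anyone who wants to try the “by hand” route: the Quegan & Yu multi-temporal filter has the form J_k(x) = (<I_k>(x) / M) * sum_i I_i(x) / <I_i>(x), where <.> is a local spatial mean and M the number of images, so the ratio sum is shared by every output image. A minimal numpy sketch (the uniform 3x3 local mean and the array names are my own choices, not SNAP’s implementation):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def quegan_multitemporal(stack, win=3):
    """Sketch of the Quegan & Yu multi-temporal filter.

    stack : 3-D array (n_images, rows, cols) of coregistered intensity images.
    win   : side length of the local-mean window (3 -> 3x3).
    """
    stack = np.asarray(stack, dtype=np.float64)
    eps = 1e-10                                            # avoid division by zero
    # local spatial mean <I_i> of every image in the stack
    local_mean = np.stack([uniform_filter(img, size=win) for img in stack])
    # ratio term shared by all output images: sum_i I_i / <I_i>
    ratio_sum = np.sum(stack / (local_mean + eps), axis=0)
    # J_k = <I_k> / M * ratio_sum
    return local_mean * ratio_sum / stack.shape[0]

# The cross test suggested above (variable names are illustrative):
# compute the ratio sum from the VH stack only and apply it to the VV local means.
# ratio_vh = np.sum(vh_stack / (vh_local_mean + 1e-10), axis=0)
# vv_cross = vv_local_mean * ratio_vh / vh_stack.shape[0]
```

Comparing such a vv_cross result with the normally filtered VV stack would show directly how much VH texture leaks into the VV result.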

Thanks for the explanation - I will try this and let you know.

1 Like

In order to test if different polarizations interfere in multi-temporal speckle filtering I made the following tests:

I downloaded 11 Sentinel-1 images in dual polarization (VH VV), applied orbit files and coregistered them.

As you can see, the temporal range is about 1.5 years. Unfortunately, the data is not evenly distributed over this time span. I therefore used 03.11.14, 15.11.14 and 27.11.14 for the RGB image, as they had the smallest temporal baselines.

The following results show:

  1. Unfiltered data
  2. Multi-temporal filter on VH data only
  3. Multi-temporal filter on VV data only
  4. Multi-temporal filter on both VH and VV data (VV-RGB is shown)

Filter size was 3x3, I used the Quegan filter from the NEST toolbox.

First comparison doesn’t show much difference:

I then made a crosstable between the data sets from 10.10.14 to see how similar they are:

It shows that the results of single-polarization filtering (mtf_vv_VV) are nearly identical (bold, r² = 0.99) to the results of dual-polarization filtering (mtf_vh+vv_VV). In turn, the correlation of the unfiltered data (VV) with the filtered data (mtf_vh+vv_VV) is clearly smaller (r² = 0.93). These numbers give a rough idea of the differences between unfiltered, single-polarization filtered and dual-polarization filtered data, but they say nothing about the spatial dimension: 1% of a 5000x6000 pixel subset is about 300,000 pixels (out of 30 million in total).
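For reference, the r² values in the cross-table are squared Pearson correlations between coregistered bands; a minimal numpy sketch (assuming the bands have already been exported to arrays, e.g. via snappy) looks like this:

```python
import numpy as np

def r_squared(band_a, band_b):
    """Squared Pearson correlation (r^2) between two coregistered bands."""
    a = np.asarray(band_a, dtype=np.float64).ravel()
    b = np.asarray(band_b, dtype=np.float64).ravel()
    r = np.corrcoef(a, b)[0, 1]
    return r * r

# e.g. r_squared(vv_unfiltered, mtf_vhvv_vv)   # array names are illustrative
```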

unfiltered VH vs. single-polarization filtered VH


unfiltered VH vs. dual-polarization filtered VH

They are similar to a large degree.

single-polarization filtered VH vs dual-polarization filtered VH

So there are differences in the image, but we can’t tell if they are drastic.

So, let’s have a closer look at some areas. I tried to select areas with various land use types.

1. unfiltered
2. single-polarization filtered VH
4. dual-polarization filtered VH



Visually, some changes can be observed, but their impact is small. Due to the doubled number of input data sets, the product of the dual-polarization filter is a bit smoother.
It surely depends on the application. When working with polarimetric signatures of the VH/VV Sentinel-1 IW mode, I would personally advise against filtering both polarizations at once. But for simple image analyses, thresholds or classifications, I can’t see any harm in doing it.

Any suggestions for further tests?

4 Likes

Good work. You can try studying the difference-images between the single vs. dual-pol filtered stacks to see where the differences are.
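For example, once the filtered bands are exported to numpy arrays, the difference images could be computed like this (a minimal sketch; the dB ratio is just one convenient way to visualise relative differences):

```python
import numpy as np

def difference_images(filtered_single, filtered_dual, eps=1e-10):
    """Return the linear difference and the dB ratio of two filtered bands."""
    a = np.asarray(filtered_single, dtype=np.float64)
    b = np.asarray(filtered_dual, dtype=np.float64)
    diff_linear = a - b
    # 0 dB means identical; positive/negative values show where the filters diverge
    diff_db = 10.0 * np.log10((a + eps) / (b + eps))
    return diff_linear, diff_db
```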

Thanks for the suggestion. I had a first look back then and the differences seemed quite random but I’ll give it a closer look.

@ABraun, could you please say how much time the processing took in both cases? (I mean for Single Lee Sigma (1 source band, Lee Sigma, 1 Look, Window 7x7, Sigma 0.9, Target 3x3) and
Multi-temporal Lee Sigma (15 source bands, Lee Sigma, 1 Look, Window 7x7, Sigma 0.9, Target 3x3).)

Sorry, I can’t remember. I used a machine with 32 GB RAM, so it only took a couple of minutes in all cases.

Oh really? I used a 16 GB Core i7 MacBook, and one cycle in batch mode takes about 15 minutes for 8 products (VV VH). Without multi-temporal speckle filtering it took <1 min per cycle.

(Read - Apply-Orbit-File - ThermalNoiseRemoval - Calibration - Multi-Temporal Speckle Filter - Terrain-Correction - LinearToFromdB - Write)
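For what it’s worth, a headless sketch of that chain with the SNAP Python bindings (snappy) could look like the following; all parameters are left at their defaults and the operator aliases should be double-checked against gpt -h on your installation:

```python
# Sketch of: Read -> Apply-Orbit-File -> ThermalNoiseRemoval -> Calibration
# -> Multi-Temporal-Speckle-Filter -> Terrain-Correction -> LinearToFromdB -> Write.
# Assumes snappy is installed; every operator runs with default parameters here,
# which you will almost certainly want to adjust.
from snappy import ProductIO, GPF, HashMap

def run_chain(path_in, path_out):
    product = ProductIO.readProduct(path_in)
    for op in ('Apply-Orbit-File', 'ThermalNoiseRemoval', 'Calibration',
               'Multi-Temporal-Speckle-Filter', 'Terrain-Correction',
               'LinearToFromdB'):
        product = GPF.createProduct(op, HashMap(), product)   # default parameters
    ProductIO.writeProduct(product, path_out, 'BEAM-DIMAP')
```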

I didn’t use the batch mode; I was only referring to the multi-temporal speckle filter.

ABraun - Could you explain how you got those coregistered S1 files into one product, as shown in your Sep 23 screen shot of the Bands in your product? I’m trying to play with multitemporal filtering and that part is eluding me.

I can coregister two S1 files at a time, but how can I combine several such products into one?

The Radar > Coregistration > Coregistration dialog throws an error when it senses multiple inputs as S1 TOPS files, so that doesn’t work.

The S1 TOPS Coregistration dialog only works with two files at a time.

I can create a 2-file coregistration workflow and open it in the Batch Processor along with all of the input files in my time series, but I’m not sure which master it is selecting and the output products have strange names. *** AND *** the burst that I’m trying to coregister has different numbers for different collects in my time series!

I apologize, I know the answer is probably easy, but I haven’t found it on my own.

Thanks,

Tom

To be honest, I used GRD data, so I just added all of them to the coregistration dialogue and it worked.
I’m not sure how to do it with multiple TOPS files, sorry, but I’ll give it a try later.
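In case a script equivalent is useful: for GRD products that already have precise orbits applied, stacking several of them into one multi-band product can be sketched with the CreateStack operator via snappy (the full Coregistration graph additionally runs Cross-Correlation and Warp; the operator alias, its defaults and the handling of a product list are assumptions to verify against your SNAP version):

```python
# Sketch: stack several orbit-corrected GRD products into one multi-band product.
# Assumes snappy is installed and that a Python list of products is accepted for
# operators that take multiple sources. Parameters are left at their defaults.
from snappy import ProductIO, GPF, HashMap

paths = ['scene1.dim', 'scene2.dim', 'scene3.dim']             # illustrative file names
products = [ProductIO.readProduct(p) for p in paths]
stack = GPF.createProduct('CreateStack', HashMap(), products)  # first product as reference
ProductIO.writeProduct(stack, 'grd_stack.dim', 'BEAM-DIMAP')
```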

Think I got it. With three separate S1 SLC files, make a Workflow that coregisters them all (everything to the left of and including the Back-Geocoding box below), then pass the results to MTSF (with an intermediate debursting step required). The Terrain Correction is just there to help me visualize things.
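For reference, the same workflow can be sketched headlessly with snappy. The operator names (TOPSAR-Split, Apply-Orbit-File, Back-Geocoding, TOPSAR-Deburst, Multi-Temporal-Speckle-Filter, Terrain-Correction) are the ones shown in the Graph Builder, but everything below runs with default parameters, so subswath/burst selection, DEM and master choice would need to be set explicitly in practice:

```python
# Sketch of: per-scene TOPSAR-Split + Apply-Orbit-File, then Back-Geocoding over all
# scenes, TOPSAR-Deburst, Multi-Temporal-Speckle-Filter and Terrain-Correction.
# Assumes snappy is installed; all parameters are defaults and will need tuning.
from snappy import ProductIO, GPF, HashMap

def coregister_and_filter(slc_paths, path_out):
    prepared = []
    for path in slc_paths:   # order matters: the first product is expected to act as master (verify in your version)
        p = ProductIO.readProduct(path)
        p = GPF.createProduct('TOPSAR-Split', HashMap(), p)
        p = GPF.createProduct('Apply-Orbit-File', HashMap(), p)
        prepared.append(p)
    stack = GPF.createProduct('Back-Geocoding', HashMap(), prepared)
    stack = GPF.createProduct('TOPSAR-Deburst', HashMap(), stack)
    stack = GPF.createProduct('Multi-Temporal-Speckle-Filter', HashMap(), stack)
    stack = GPF.createProduct('Terrain-Correction', HashMap(), stack)
    ProductIO.writeProduct(stack, path_out, 'BEAM-DIMAP')
```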

The output is a product like that below, which is what I expected (with the 20Apr collect being the master and the two other collects being slaves):

Assuming this is correct (and if I’m wrong, somebody please speak up), I’d like to respectfully opine that discovering this process was a bit harder than it needed to be. If the MTSF dialog would accept Bands that weren’t in the same product, this would have been easier to do. If the Stack Creation dialog would allow S1 TOPS SLC products, ditto. If the S1 TOPS Coregistration dialog would allow more than two products to be coregistered, ditto. If SNAP had some easy method to simply copy a band from one Product into another, ditto.

Again, I hate to complain, but yikes - I tried a lot of things before luckily hitting on one that worked. I’m not the sharpest wavelength in the spectrum, but for such a simple, seemingly common, task, that took a while to discover.

Fingers crossed that I actually did it right :slight_smile:

Thanks,

Tom

1 Like

Thanks for showing this. Good luck - I would be interested in the results.

Maybe one hint that lveci gave yesterday in a similar topic helps:

So, if it takes too long, you could stop after the back geocoding and start the debursting separately.

Good morning, I am trying to create a graph like @tqrtuomo’s, but I get the error “Please select two source products” in the Back-Geocoding step. Maybe I have to change one of the parameters, but I don’t know which one to choose. Could someone help me?
Thanks

Prior to version 5, the backgeocoding only allowed 2 input products. Update to version 5.

1 Like

Thanks a lot for your answer. I updated my version and the Back-Geocoding works.
At the multi-temporal speckle filter step, when I choose the different bands I want to analyse from the different products, this error appears:
Operator ‘MultiTemporalSpeckleFilterOp’: Value for ‘Source bands’ is invalid.
What kind of values should I use?

Hi ABraun,

first thanks for this great comparison … very interesting and helpful.

I was just curious about your SNAP/gpt setup. You said that with your machine (32 GB RAM) the processing of 15 S1 images only took a couple of minutes. I have the same amount of RAM and the same number of S1 images I’d like to process, and I’m really struggling to find a setup that works for me.

Also, I suppose that if you go on to use the coregistered and filtered stack, you need to apply the Stack-Split operator. Do you have any experience with that? Even with 32 GB of RAM, I constantly get a data buffer error.

Thanks!

Val