NaN output of the Sharpen LST step in Sen-ET processing

Dear SNAP developers,

I have been following the Sen-ET tutorial to obtain evapotranspiration at 20 m resolution (http://esa-sen4et.org/). All the steps complete successfully; however, in the Sharpen LST step the output of the Data Mining Sharpener is NaN.
Any idea what might be going wrong?

I really appreciate your help

Thanks in advance,
Caleb

Hi Caleb,

Thanks for your question. I’m not sure if the developers of the plugin are around here and can answer your question. This is the first time I have heard of this plugin, but it is great to see third-party plugins.
Unfortunately, the esa-sen4et.org page provides no option to contact the team.
@mengdahl Do you know who should be contacted?

Dear Marco,

Thank you for your prompt response; I look forward to hearing back if anyone else knows who to contact.

Caleb

In the Warp to Template step, what are the source image and template image?

Hi Calebdb,
Are you sure that there is spatial overlap between the high and low resolution images? Also, please provide more details, e.g. what settings you are using, what messages are printed out, etc.

I’m also having the same issue of NaN values in the LST_SHARPENING output.
There is no error message.
Below are my inputs and parameters.
Why is the sharpened image NaN?

Could you also double-check that all the input layers overlap and display properly in SNAP? Also, what machine are you running this on? The sharpening module can be quite resource-intensive when run over a large area (e.g. an S2 tile). See section 3.1 of the Sen-ET user manual for recommended system requirements.
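If useful, here is a minimal snappy sketch for printing the geographic corner coordinates of each input to verify the overlap; the product paths are placeholders:

```python
# Print the geographic corners of the high- and low-resolution inputs to
# verify they cover the same area (paths are placeholders; any
# SNAP-readable format works).
from snappy import ProductIO, PixelPos

for path in ['S2_reflectance.dim', 'S3_LST.dim']:
    p = ProductIO.readProduct(path)
    w, h = p.getSceneRasterWidth(), p.getSceneRasterHeight()
    gc = p.getSceneGeoCoding()
    ul = gc.getGeoPos(PixelPos(0, 0), None)
    lr = gc.getGeoPos(PixelPos(w - 1, h - 1), None)
    print(f'{path}: UL ({ul.lat:.3f}, {ul.lon:.3f})  '
          f'LR ({lr.lat:.3f}, {lr.lon:.3f})')
```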

This happened to me recently, and the cause was that the LAI from the Sentinel-3 preprocessing was empty. Check the results from previous steps to be sure the input is all right.
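In case it helps, a small snappy sketch for counting valid pixels in an intermediate band to catch empty outputs; the path and band name are placeholders:

```python
# Count non-NaN pixels in a band from a previous step to detect empty
# intermediate outputs (path and band name are placeholders).
import numpy as np
from snappy import ProductIO

p = ProductIO.readProduct('S3_preprocessed.dim')
band = p.getBand('lai')
w, h = p.getSceneRasterWidth(), p.getSceneRasterHeight()
data = np.zeros(w * h, np.float32)
band.readPixels(0, 0, w, h, data)
print(f'valid pixels: {np.count_nonzero(~np.isnan(data))} of {w * h}')
```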


Dear @radosuav

Thanks for your advice. Unfortunately, I have not been able to fix the problem. There is overlap between both areas. I have uploaded the sharpening inputs I used to Drive; maybe it could help. The output messages are also there in a txt file.

I really appreciate your help in this field.

Sincerely,

Caleb

https://drive.google.com/drive/folders/1kVnWGMqZSeOVf-Wy7c9TtH49vsc6TdCU?usp=sharing

Hi Calebdb, it looks like you are mixing up the inputs to the sharpening algorithm. For example, you are using the same file for both the LST and the LST quality mask. Have a look at figure 2.2 and section 3.3.1.11 of the user manual.

Thanks again @radosuav for your response.
It is probably my mistake; however, I am following each step of the manual and I cannot see where I am going wrong.

- I use S2 bands B2, B3, B4, B5, B6, B7, B8A, B11 and B12 as the “Sentinel-2 reflectance product”.
- I use a product named S2_elevation that contains the elevation band as the “High resolution DEM”.
- A product that contains just the LST band as the “Sentinel-3 LST product”.
- The output of the Warp to Template step as the “High resolution Sentinel-3 geometry product”.
- The cloud_in band of Sentinel-3 as the “LST quality mask product”.

I am using a small subset to evaluate it; is that OK, or do I have to apply it over the whole S2 tile?

Best regards,
Caleb

Hi Caleb, all your inputs sound good, but the small subset could be a problem. How small is it? To get sufficient training data for the sharpening model, the subset should cover at least a third of a Sentinel-2 scene (i.e. 30 by 30 LST pixels), but the larger the subset the better.
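To put rough numbers on that, here is a back-of-the-envelope sketch, assuming ~1 km SLSTR LST pixels and the 20 m sharpened Sentinel-2 grid:

```python
# Back-of-the-envelope subset size check (assumed resolutions:
# ~1000 m for the SLSTR LST grid, 20 m for the sharpened S2 grid).
LST_PIXEL_M = 1000
S2_PIXEL_M = 20
MIN_LST_PIXELS = 30   # minimum subset side recommended above

min_side_m = MIN_LST_PIXELS * LST_PIXEL_M    # 30000 m = 30 km
min_side_s2 = min_side_m // S2_PIXEL_M       # 1500 S2 pixels

print(f"Subset side should be at least {min_side_m / 1000:.0f} km, "
      f"i.e. roughly {min_side_s2} x {min_side_s2} pixels at 20 m")
```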

That could be the problem, because I am using a very small area. I’m going to test it on a larger area and I will tell you the result.
Thank you!

Hi @radosuav,

Unfortunately, it also didn’t work when using a larger area (around a third of a Sentinel-2 scene).

I have uploaded the data that I am using in the sharpening step; maybe you can see what I am doing wrong.
https://drive.google.com/drive/folders/14a9GeT8jiLHdMlmnAKpGpQbhI-MfaX_x?usp=sharing

I really appreciate your help.

Kind regards,

Dear @radosuav

I hope you are well. Have you been able to see anything that could help me?

Thanks in advance,

Caleb

Hi,
It looks like the cloud mask which you are using was not processed by the sentinel_s3_preprocessing_graph. You are using the mask_in values directly, while they should be reclassified into 0s and 1s.

Otherwise, you could change the value which represents good-quality pixels in your mask.
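For reference, a minimal snappy sketch of that reclassification with the BandMaths operator; the input path, the band name mask_in and the convention that 0 means clear sky are assumptions taken from this thread, so adapt them to your product:

```python
# Sketch: reclassify an S3 cloud flag band into a 0/1 quality mask using
# SNAP's BandMaths operator. The path, the band name 'mask_in' and the
# assumption that 0 = clear sky are placeholders; adjust to your product.
from snappy import ProductIO, GPF, HashMap, jpy

product = ProductIO.readProduct('S3_preprocessed.dim')

BandDescriptor = jpy.get_type(
    'org.esa.snap.core.gpf.common.BandMathsOp$BandDescriptor')
target = BandDescriptor()
target.name = 'quality_mask'
target.type = 'uint8'
target.expression = 'mask_in == 0 ? 1 : 0'  # 1 = good-quality pixel

targets = jpy.array(
    'org.esa.snap.core.gpf.common.BandMathsOp$BandDescriptor', 1)
targets[0] = target

params = HashMap()
params.put('targetBands', targets)
mask = GPF.createProduct('BandMaths', params, product)
ProductIO.writeProduct(mask, 'lst_quality_mask', 'BEAM-DIMAP')
```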


Hi @radosuav.

Thanks so much! That works :)! I am going to move on to the next step!

Have a good day!

Hi @radosuav,

Thanks for your help so far. I have an operational question. When selecting the S3 image that I am going to use, I am not very sure which one it should be. In the paper titled “Modelling High-Resolution Actual Evapotranspiration through Sentinel-2 and Sentinel-3 Data Fusion” it says that “Each S3 scene is matched with an S2 scene acquired at most ten days before or after the S3 acquisition and the regression model used for sharpening is derived specifically for each scene pair”.

I understand that I have to apply the entire methodology to each pair of images. So, in case I want to obtain the evapotranspiration for a specific day, I download the S2 image, and for S3 I can choose 5 different acquisitions (at different times of that day). To obtain the evapotranspiration of that day, should I apply the methodology to each of these acquisitions and take the average of the results? Or would it be enough to use the one closest to the acquisition time of the S2 image?

Thanks in advance,
Caleb

Hi @Calebdb
Normally there should be only two S3 acquisitions per area per day - one in the morning and one in the evening. You should choose the morning one (descending orbit). In some cases you might get two (e.g. S3A and S3B) or more morning overpasses, in which case I would choose the one with the lowest cloudiness and lowest view zenith angle. Daily ET is extrapolated from instantaneous ET obtained using only one morning S3 acquisition. This extrapolation uses only solar irradiance information.

What we meant in the paper is that if you have an S2 image from 15.10 then you could pair it with an S3 image from any date between 05.10 and 25.10 to obtain daily ET on that date.
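If it helps, here is a rough sketch of picking the morning overpass from a list of SLSTR L2 LST product names; the fixed character positions follow the standard S3 naming convention, while the morning window and the UTC-to-local-solar-time conversion are simplifying assumptions:

```python
# Sketch: select morning S3 overpasses from SLSTR product names. In the
# standard S3 naming convention the sensing start time occupies characters
# 16-30, e.g. S3A_SL_2_LST____20201015T095025_... Local solar time is
# approximated as UTC + longitude / 15.
from datetime import datetime

def sensing_start(product_name):
    return datetime.strptime(product_name[16:31], '%Y%m%dT%H%M%S')

def local_solar_hour(utc, lon_deg):
    return (utc.hour + utc.minute / 60.0 + lon_deg / 15.0) % 24

def morning_overpasses(product_names, lon_deg):
    # Keep acquisitions whose approximate local solar time is in the morning.
    return [name for name in product_names
            if 8 <= local_solar_hour(sensing_start(name), lon_deg) <= 12]

# If several morning overpasses remain (e.g. S3A and S3B), prefer the one
# with the lowest cloudiness and lowest view zenith angle, as noted above.
```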

Hope that clarifies things
Rado

Hi,
Would you please share any literature with regard to this point?

Would you mind sharing the paper?

Many thanks.