Hello,
As a follow-up to this topic: Generate view angles from metadata Sentinel-2 - #10 by harmel
I tried both the core (generic) and the s2tbx resampling operators. The s2tbx operator makes it possible to get accurate values for the viewing angles of S2 MSI pixels.
However, the s2tbx operator ('org.esa.s2tbx.s2msi.resampler.S2ResamplingOp') is very slow when called from snappy and rapidly saturates memory (memory use keeps increasing when the same data are loaded a second time, hereafter "2nd call").
Attached is a small script to test the efficiency of both operators when loading a given row of an S2 image; a rough sketch of the calls involved is given below.
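(Not the attached script itself, just a minimal sketch of how the two operators can be called from snappy; the 'S2Resampling' alias, its 'resolution' parameter and the 'view_zenith_mean' band name are assumptions and may need adjusting to your SNAP/S2TBX version.)

```python
import numpy as np
import snappy
from snappy import ProductIO, GPF

HashMap = snappy.jpy.get_type('java.util.HashMap')

product = ProductIO.readProduct(
    'S2A_MSIL1C_20170210T082051_N0204_R121_T33HYD_20170210T083752.SAFE')

# Core (generic) resampler
params = HashMap()
params.put('targetResolution', '60')
resampled_core = GPF.createProduct('Resample', params, product)

# S2TBX resampler (org.esa.s2tbx.s2msi.resampler.S2ResamplingOp), which also
# produces per-pixel viewing angles.
# Alias and parameter name are assumed here; the exact names can be checked with 'gpt -h'.
params = HashMap()
params.put('resolution', '60')
resampled_s2 = GPF.createProduct('S2Resampling', params, product)

# Read one row of an angle band from the resampled product (band name assumed)
band = resampled_s2.getBand('view_zenith_mean')
width = band.getRasterWidth()
row = np.zeros(width, dtype=np.float32)
band.readPixels(0, 0, width, 1, row)
```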
The timings show that the s2tbx operator performs similarly to the core operator when the angles are not calculated:
core operator timing:
1st call with angles from core sampler: mean time to load a row: 1.5311s
2nd call with angles from core sampler: mean time to load a row: 0.0117s
s2tbx operator timing:
1st call with angles from s2tbx sampler: mean time to load a row: 1.6024s
2nd call with angles from s2tbx sampler: mean time to load a row: 2.0249s
1st call without angles from s2tbx sampler: mean time to load a row: 1.0254s
2nd call without angles from s2tbx sampler: mean time to load a row: 0.0068s
Do you see a way to fix the time and memory issues with the view angle computations?
I will have a look. I am going to add this to our issue tracker. Thank you for reporting.
In fact, when the angles are not calculated, the generic resampling operator is used, so it is normal that the performance is similar.
Thanks a lot obarrilero
Update:
The above performance figures were obtained with a jpy library built against the Oracle Java 10 JDK.
I rebuilt the whole snappy installation against a Java 8 JDK, and the timings are now similar between the S2 sampler and the generic sampler.
Nevertheless, the S2 sampler is very memory-hungry, which is an issue when using snappy (the memory cannot be freed).
Tristan
New update:
I retested the performance of the S2 sampler vs the generic sampler, simply loading the first 512 rows; see test2_snappy_resampling.py (2.8 KB):
For the generic sampler:
1st call with angles from core sampler: mean time to load a row: 0.6293s
2nd call with angles from core sampler: mean time to load a row: 0.1114s
RSS=4262288 elapsed=0:40.06 cpu.sys=4.27 .user=79.35
For the s2tbx sampler:
1st call without angles from s2tbx sampler: mean time to load a row: 0.9681s
2nd call without angles from s2tbx sampler: mean time to load a row: 0.1022s
RSS=7916856 elapsed=0:53.61 cpu.sys=5.66 .user=114.67
where RSS stands for maximum Resident Set Size of the process during its lifetime, in KB.
(results obtained with
/usr/bin/time -f "RSS=%M elapsed=%E cpu.sys=%S .user=%U" python3.6 test_snappy_resampling.py 'core' S2A_MSIL1C_20170210T082051_N0204_R121_T33HYD_20170210T083752.SAFE)
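For reference, the per-row timing in these tests amounts to a loop of this kind (a sketch only, with hypothetical names such as mean_row_time; the whole script is then wrapped in /usr/bin/time as above to obtain the peak RSS):

```python
import time
import numpy as np

def mean_row_time(band, n_rows=512):
    """Read the first n_rows rows of a band and return the mean time per row."""
    width = band.getRasterWidth()
    buf = np.zeros(width, dtype=np.float32)
    start = time.time()
    for y in range(n_rows):
        # Each readPixels call forces the resampling operator to compute that row
        band.readPixels(0, y, width, 1, buf)
    return (time.time() - start) / n_rows
```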
So, the S2 sampler is a nice tool to get the proper viewing/azimuth angles for each pixel, but it is twice as slow and uses twice as much memory as the generic sampler, and it is hard to process a whole S2 image (memory use goes up to 65 GB).
Any plan for improvement or some workaround?
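For reference, the explicit clean-up that can be attempted from snappy between products is sketched below; these are standard SNAP/JAI calls (and the JVM heap limit used by snappy is set via java_max_mem in snappy.ini), but it is not clear that they release the memory held by the S2 resampler:

```python
import snappy

def release(product):
    """Best-effort clean-up after a product has been processed."""
    product.dispose()  # let SNAP release the product's images and readers
    # Flush the JAI tile cache that backs the operator's computed tiles
    JAI = snappy.jpy.get_type('javax.media.jai.JAI')
    JAI.getDefaultInstance().getTileCache().flush()
    # Request a garbage collection inside the JVM
    System = snappy.jpy.get_type('java.lang.System')
    System.gc()
```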
Thank you,
Tristan
I don't know when the S2TBX team has planned to work on it. Maybe @obarrilero can tell.
However, we also use the S2 resampling in other projects, and we have problems with the memory consumption there too. It depends on the needs of those projects, but we may look in the next few weeks at whether and how the S2 resampling could be improved.
Hi. Improving it is on our roadmap, but I am not sure when we will be able to do that. Currently we are working on other issues with higher priority.