Speed up processing through GPUs

Is it possible to speed up SNAP toolbox processing through GPUs? In particular, is it possible to do this using snappy?

Thank you

Luca

No, this is not possible.
We ran some tests a few years ago and found that it did not give any speed improvement.
The data transfer between main memory and the graphics card ate up all the gains from the faster calculations.
A GPU only helps when you have a small amount of data and the processing takes a long time, but this is not the common case in the EO domain.

Thank you :disappointed:
Do you have any suggestions for improving snappy processing? For example, through Python multithreading?

I wrote a post about this some time ago:

Also increasing the available memory might help:

Thank you very much

Luca

So you mean that GPUs only help to cut down processing time when the amount of data is small, which is generally not the case in the EO domain?

Maybe things have changed since we did our last tests. GPUs have improved a lot over the last ten years. But I think the statement is still true in general: transferring the data to the GPU takes time, and this cost must be more than compensated for by the faster processing on the GPU. Also, a GPU is specialised for certain kinds of problems and is not fast at everything.

Python threading is great for creating a responsive GUI, or for handling multiple short web requests where I/O, rather than the Python code itself, is the bottleneck. It is not suitable for parallelizing computationally intensive Python code: because of the global interpreter lock, Python threads only interleave execution and are effectively run serially, so they mainly help when overlapping I/O operations. For actual parallelization, use the multiprocessing module to fork multiple processes that execute in parallel, or delegate the heavy work to a dedicated external library. Threading remains an appropriate model if you want to run several I/O-bound tasks simultaneously.
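To make the multiprocessing suggestion concrete, here is a minimal sketch (not from the original posts) of farming per-product work out to separate processes with multiprocessing.Pool. The worker function process_one_product and the input file names are hypothetical placeholders; the worker body is where your actual snappy calls would go.

```python
# Minimal sketch: parallelize per-product work with processes instead of threads,
# so the global interpreter lock does not serialize the computation.
from multiprocessing import Pool

def process_one_product(path):
    # Hypothetical placeholder: put your actual snappy read/process/write calls here.
    # Importing snappy inside the worker (rather than at module level) keeps the JVM
    # start-up inside each child process.
    result_path = path + "_processed.dim"
    # ... read the product at `path`, run the operators, write to `result_path` ...
    return result_path

if __name__ == "__main__":
    inputs = ["product_1.zip", "product_2.zip", "product_3.zip"]  # example inputs
    with Pool(processes=3) as pool:          # up to three products processed at once
        results = pool.map(process_one_product, inputs)
    print(results)
```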

@marpet Sorry to bump an old thread. I just started using SNAP in the past few weeks and did a quick search for GPU acceleration; this is the only thread I found on the topic. I would disagree that transferring data to the GPU negates the advantages of GPU-accelerated processing. The data transfer speeds possible now mean that GPU processing is no longer handicapped by the transfer time.

As an example, I wrote a quick MATLAB script that does some basic math on the RGB bands of an S2 dataset. While this is not a perfect analogue of the operations run directly in SNAP or through the snappy module, it does demonstrate the reduced compute time that can be achieved by leveraging GPU computing.

The MATLAB script that I used can be found in this gist: https://gist.github.com/Marsfan/5526d75efaa618961e2ad5ee7473904c

Running the script in MATLAB gives me this output:

CPU Compute Time: 13.8965
GPU Compute Time: 7.32495

With the improvements in GPGPU computing and the increases in memory bandwidth over the past few years, the data transfer bottleneck has become virtually non-existent, and using a GPU to accelerate data processing in SNAP is viable.
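For readers working from Python rather than MATLAB, an analogous comparison can be sketched with NumPy (CPU) and CuPy (GPU). This is purely illustrative: the array size and the arithmetic are invented and are not taken from the gist above, and the GPU timing deliberately includes the host-to-device and device-to-host transfers. It requires an NVIDIA GPU with CUDA and the cupy package.

```python
import time
import numpy as np
import cupy as cp

# Invented stand-in for "some basic math on the RGB bands"; size chosen arbitrarily.
rgb = np.random.rand(4000, 4000, 3).astype(np.float32)

def adjust(xp, bands):
    # Arbitrary per-pixel arithmetic; xp is either numpy or cupy.
    return xp.sqrt(bands) * 0.8 + bands ** 2 * 0.2

t0 = time.perf_counter()
cpu_result = adjust(np, rgb)
t_cpu = time.perf_counter() - t0

t0 = time.perf_counter()
gpu_bands = cp.asarray(rgb)                      # host -> device transfer
gpu_result = cp.asnumpy(adjust(cp, gpu_bands))   # compute, then device -> host transfer
t_gpu = time.perf_counter() - t0

print(f"CPU Compute Time: {t_cpu:.4f}")
print(f"GPU Compute Time: {t_gpu:.4f}")
```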

2 Likes

Hi @BarrowWight
As I already indicated in my earlier post, things have changed over the last years, and you are probably right that the data transfer is not an issue anymore. I haven’t followed the GPGPU topic closely in recent years. But there are still other obstacles. The algorithms might need to be adapted to perform well on the GPU, which would mean maintaining two implementations, at least in part.
Control flow was also an issue in the past: conditions didn’t work. I have read that this has changed in the meantime, but only at a performance cost, and most of our algorithms use such conditions.
Still, I think GPGPU is great.
And it can be implemented by an individual operator in SNAP if that operator can benefit from GPU computation; such an operator could leverage some of the libraries which already exist.
What does not work is adding GPGPU support to the framework and suddenly having all operations run faster.

What I’m curious about is how your example would perform if you did the image adjustment differently for the dark and bright parts of the image.
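For what it is worth, one common way to express such a dark/bright-dependent adjustment for a GPU is a per-pixel select instead of an if/else, so there is no divergent control flow. The sketch below uses invented thresholds and formulas; written against NumPy here, the same function also works on CuPy arrays.

```python
import numpy as np

def adjust_dark_bright(xp, band, threshold=0.3):
    dark = band * 1.5     # hypothetical brightening of dark pixels
    bright = band ** 0.8  # hypothetical compression of bright pixels
    # where() evaluates both expressions and selects per pixel, so no per-pixel branching.
    return xp.where(band < threshold, dark, bright)

band = np.random.rand(1000, 1000).astype(np.float32)
print(adjust_dark_bright(np, band).mean())
```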

Maybe we should consider this topic again for SNAP and see how we can benefit from it. But this is nothing for the short-term development.

1 Like

@marpet I agree that it would be a large-scale task to port the operations to GPGPU; I just wanted to draw attention to the improvements that can be gained now.
Porting is definitely not a short-term goal, but I could see SNAP slowly adding GPU support for operations over the course of a number of releases.

1 Like

@luca I know it has been more than 3 years since you started this thread, but have a look at the cupy package for Python. It is designed to be a drop-in replacement for some NumPy functions, but it runs them on an NVIDIA GPU that supports CUDA.
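A minimal sketch of the drop-in idea (assuming an NVIDIA GPU with CUDA and cupy installed); the array and the arithmetic are just examples:

```python
import numpy as np
import cupy as cp

a_cpu = np.random.rand(2048, 2048).astype(np.float32)

a_gpu = cp.asarray(a_cpu)           # copy the array to the GPU
b_gpu = cp.sqrt(a_gpu) + a_gpu * 2  # the same expression you would write with NumPy
b_cpu = cp.asnumpy(b_gpu)           # copy the result back to the host when needed

print(b_cpu.dtype, b_cpu.shape)
```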

1 Like