Has there been any attempt at creating a parallel processing version of VIP?
How about an API to CUDA?
With deep learning really coming to the forefront, has anything been done in VIP to address the need for faster processing?
A couple of years ago, I did some very limited work on improving neural network speed using VIP and CUDA. However, I didn't have the time to take it as far as it needed to go, and with my teaching workload holding me back, I am only now (after more than a year) starting to get back to it. I just downloaded VIP 10ce so I can jump back in.
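For what it's worth, here is a minimal CUDA sketch of the kind of thing that work boils down to: a dense-layer forward pass (y = W*x + b) with one GPU thread per output neuron. Everything here is my own illustration rather than part of any existing VIP/CUDA binding; in practice the kernel would presumably be compiled into a DLL that VIP calls from the host side.

```cuda
// Illustrative sketch only: the per-neuron work a CUDA offload would do
// for one dense layer (y = W*x + b). All names are hypothetical.
#include <cuda_runtime.h>

__global__ void denseForward(const float* W, const float* x, const float* b,
                             float* y, int inDim, int outDim)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per output neuron
    if (row < outDim)
    {
        float sum = b[row];
        for (int k = 0; k < inDim; ++k)
            sum += W[row * inDim + k] * x[k];
        y[row] = sum;  // activation would be applied elsewhere
    }
}

// Host-side launch: 256 threads per block, enough blocks to cover outDim.
// The pointers are assumed to already point to device memory.
void launchDenseForward(const float* dW, const float* dx, const float* db,
                        float* dy, int inDim, int outDim)
{
    int threads = 256;
    int blocks  = (outDim + threads - 1) / threads;
    denseForward<<<blocks, threads>>>(dW, dx, db, dy, inDim, outDim);
    cudaDeviceSynchronize();
}
```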
If you simply executed multiple threads in VP, wouldn't you then just have to find a way for these threads to communicate with each other, and then you would have implemented so-called concurrency?
Or is that too simplistic a thought?
And what is the whole benefit of concurrency? Is it that, because multiple processes operate simultaneously, you get more speed-up when processing data? And do these separate processes have to be aware of each other for concurrency to take effect?
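To make that last question concrete, here is a minimal sketch (plain host-side C++, nothing VIP-specific, shown only as an illustration of the idea): two threads each process half of the data at the same time, and the only "communication" is a single mutex-protected hand-off of their partial results at the end.

```cuda
// Plain host code (no device code used): two worker threads sum halves
// of a data set in parallel and publish their partial results through
// one mutex-protected shared total.
#include <thread>
#include <mutex>
#include <vector>
#include <numeric>
#include <cstdio>

int main()
{
    std::vector<double> data(1000000, 1.0);
    double total = 0.0;
    std::mutex totalMutex;

    auto worker = [&](size_t begin, size_t end)
    {
        // The heavy work happens privately, in parallel...
        double partial = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0.0);
        // ...and only the final hand-off is synchronized.
        std::lock_guard<std::mutex> lock(totalMutex);
        total += partial;
    };

    std::thread t1(worker, 0, data.size() / 2);
    std::thread t2(worker, data.size() / 2, data.size());
    t1.join();
    t2.join();

    std::printf("total = %f\n", total);
    return 0;
}
```

So, at least in this picture, the threads do not need to be continuously aware of each other; they only have to agree on the shared state and the synchronization points, and the speed-up comes from the two halves genuinely running at the same time on separate cores.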