05-07-2009 06:48 AM
Odd how LabVIEW is listed in this article as supporting CUDA, but I see nothing about it on NI's website.
http://www.engadget.com/2009/05/06/nvidia-tesla-gpus-now-shipping-with-dell-personal-supercomputer/
Time will tell... unless somebody has some new info?
05-28-2009 07:58 AM
. . . and even more odd that they publish an article, LabVIEW Community Extends Support of New Protocols, stating that NVIDIA CUDA support is one of the "Five LabVIEW Add-Ons in the Spotlight", yet when you go to the LabVIEW add-ons page to "Learn more about these and other LabVIEW add-ons" there is no information, and searching for it brings up nothing but forum threads like this one....
Maybe we will soon see it in the NI search too? Or maybe someone more skilled at mining NI's site can find the correct URLs?
Either way, I hope it happens soon, as this is very exciting news!
08-01-2010 10:26 AM
The support for GPU processing is building up nicely for the LabVIEW community.
See the recent addition of the ADV toolkit. This clearly shows that it is possible!
/Lajos
08-02-2010 10:34 PM - edited 08-02-2010 10:35 PM
I'll add to Layosh's post. Your post predates the official release of the LabVIEW GPU Computing module (LVCUDA), uploaded to NI Labs in Aug 2009, which supersedes the prior LV solution presented as part of an add-on to CVI.
Using the toolkit for CVI didn't address some of the threading issues inherent in multi-threaded LV. LVCUDA includes both a general interface to CUDA functions and an execution framework that integrates their execution into the LabVIEW application's own, as if each call were any other process running on the CPU.
You can find more information here: LabVIEW GPU Computing.
***NOTE*** This module is designed to integrate an existing CUDA function written outside LabVIEW, not to generate GPU code from LabVIEW. The latter is far more challenging, as LabVIEW's compiler is designed to target x86 instructions, not NVIDIA's proprietary instruction set.
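To make the workflow concrete, here is a minimal sketch of the kind of CUDA function written outside LabVIEW that the module is meant to integrate. This is my own toy example, not code from the module, and the file and function names are made up. Built into a DLL with nvcc (something like nvcc --shared -o lv_saxpy.dll lv_saxpy.cu on Windows), the plain C entry point is what LabVIEW, or the LVCUDA interface, ends up calling:

// lv_saxpy.cu -- hypothetical example: a CUDA function written outside
// LabVIEW, exported with C linkage so it can be called from a DLL.

#include <cuda_runtime.h>

// Simple kernel: y = a*x + y
__global__ void saxpy_kernel(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

// Plain C entry point; LabVIEW passes ordinary host arrays, and all the
// CUDA-specific work (device memory, kernel launch) stays on this side.
extern "C" __declspec(dllexport)
int lv_saxpy(int n, float a, const float *x, float *y)
{
    float *d_x = 0, *d_y = 0;
    size_t bytes = (size_t)n * sizeof(float);

    if (cudaMalloc((void **)&d_x, bytes) != cudaSuccess) return -1;
    if (cudaMalloc((void **)&d_y, bytes) != cudaSuccess) { cudaFree(d_x); return -1; }

    cudaMemcpy(d_x, x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, y, bytes, cudaMemcpyHostToDevice);

    saxpy_kernel<<<(n + 255) / 256, 256>>>(n, a, d_x, d_y);

    cudaMemcpy(y, d_y, bytes, cudaMemcpyDeviceToHost);  // result back to the host array

    cudaFree(d_x);
    cudaFree(d_y);
    return (int)cudaGetLastError();  // 0 (cudaSuccess) if everything ran
}

The plain C signature is the important part: from LabVIEW's side this is just an ordinary DLL call, and everything GPU-specific stays on the C side.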
08-03-2010 07:33 AM
I'm a little out of my depth here, so take this with a grain of salt, but I think that with DirectX 11 and DirectCompute (and its siblings) there is now at least the beginning of a more standardized (on Windows, anyway) framework for doing calculations on GPUs. With both NVIDIA and ATI supporting DirectX 11, this should make it easier for the LabVIEW compiler to start taking advantage of GPUs by making DirectX calls. DirectX also supports multithreading and such, so this might be, as I said, another step closer to letting LabVIEW tie into GPUs without forcing users onto one card or another... of course, it would force users onto the Windows 7 platform.
01-25-2011 03:58 PM
NVIDIA released Parallel Nsight support for Visual Studio. This lets developers use the same IDE for CPU and GPU programming and debugging. I am tempted.
Details here: http://www.nvidia.com/object/parallel-nsight.html
The new Mathematica 8 has similar support for GPU computing. Awesome! But the learning curve to use Mathematica efficiently is very steep.
LabVIEW has great support too, but I am not ready to switch yet. I like the simple elegance of LabWindows/CVI that lets me, a non-professional programmer, access scientific software development, and I need more power.
LabWindows/CVI, what is your plan?
/Layosh
09-12-2011 03:22 PM
We have been using LabWindows/CVI for years now. One of our applications uses genetic algorithms to analyze spectroscopic data. We thought that using a GPU would speed up the process considerably. Has anyone used the NVIDIA OpenCL or CUDA API with LabWindows? I saw that the last post in this discussion was about eight months ago and was hoping there have been some further developments. I'm new to the whole GPU field. Thanks.
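From what I have read so far (I may well be wrong, so corrections are welcome), the usual pattern seems to be: compile the kernels with nvcc into a separate DLL and call that DLL from CVI, while the host-side CUDA runtime calls can be made directly from LabWindows/CVI C code if the toolkit's include directory is added to the project and cudart.lib is linked in. This is the kind of host-only test I was planning to try first; just my own sketch, not an NI or NVIDIA sample:

/* Host-only check that the CUDA runtime can be driven from CVI code.
   Assumes the CUDA toolkit include path is in the project and cudart.lib
   is linked; actual kernels would live in a DLL built with nvcc.        */

#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    int devCount = 0;
    struct cudaDeviceProp prop;
    float host[256] = {0};
    float check[256] = {0};
    float *dev = 0;

    /* Check that a CUDA-capable GPU is present before committing to it. */
    if (cudaGetDeviceCount(&devCount) != cudaSuccess || devCount == 0) {
        printf("No CUDA device found -- keep the CPU code path.\n");
        return 0;
    }

    cudaGetDeviceProperties(&prop, 0);
    printf("Using %s, %d multiprocessors\n", prop.name, prop.multiProcessorCount);

    /* Round-trip a buffer to confirm the runtime is wired up correctly. */
    host[0] = 42.0f;
    cudaMalloc((void **)&dev, sizeof host);
    cudaMemcpy(dev, host, sizeof host, cudaMemcpyHostToDevice);
    cudaMemcpy(check, dev, sizeof host, cudaMemcpyDeviceToHost);
    cudaFree(dev);

    printf("Round trip %s\n", (check[0] == 42.0f) ? "OK" : "failed");
    return 0;
}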
10-30-2011 10:26 PM
Another programming task that needs the power of a GPU: the NanoSight particle sizer and counter takes images at 30 or more fps. One needs to track each and every particle in the path of the laser beam across frames; the distance travelled in any direction is measured, from which (using the Stokes-Einstein equation) the diameter of the particle is estimated. In addition, the rotating particle flickers, and from this the ellipticity of the particle can be estimated. Now, to make the images readable, they need quite a bit of preprocessing, stabilizing, etc. Currently it takes 30-40 minutes on my PC to run a single sample with a 90-second recording. That means I have no clue whether my sample was OK until probably the next day, when everything is gone, since recording and processing are separated. With GPU support I could get real-time data, avoid redundant sample recordings, and use the NanoSight for process control in vaccine production.
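For what it's worth, the last step of that pipeline (turning each tracked particle's mean squared displacement into a diameter) is exactly the kind of one-thread-per-particle arithmetic a GPU handles well. A rough sketch of that step only -- my own example with made-up names, and the expensive tracking and preprocessing are deliberately not shown:

/* One thread per tracked particle. For 2-D tracking (msd in m^2 over one
   frame interval dt):
       D = MSD / (4 * dt)                     diffusion coefficient
       d = kB * T / (3 * pi * eta * D)        Stokes-Einstein diameter   */

#include <cuda_runtime.h>

__global__ void diameter_from_msd(int n, float dt, float temperature,
                                  float viscosity, const float *msd,
                                  float *diameter)
{
    const float kB = 1.380649e-23f;   /* Boltzmann constant, J/K */
    const float pi = 3.14159265f;
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    if (i < n && msd[i] > 0.0f) {
        float D = msd[i] / (4.0f * dt);                        /* m^2/s */
        diameter[i] = kB * temperature / (3.0f * pi * viscosity * D);
    }
}

/* Example launch: water at 25 C (viscosity ~0.00089 Pa*s), 30 fps.
   diameter_from_msd<<<(nParticles + 255) / 256, 256>>>(
       nParticles, 1.0f / 30.0f, 298.15f, 0.00089f, d_msd, d_diameter); */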