LabWindows/CVI


GPU programming support

Are there any plans to include some basic support for GPU programming for us mere mortals?
NI is SO good at simplifying and making accessible the more rarefied elements of computing that I have high hopes that sooner or later we will be able to harness the extraordinary power of CHEAP GPU hardware. I read about CUDA and spent about a month of my spare time on it, but found it too difficult to master on my own. I would never have thought there would come a time when I would really NEED the power of NVIDIA's Tesla-class hardware or something similar, but here I am, wanting to implement some serious genetic algorithms and needing that power. Plus, I am sure the real-time folks and those interested in image processing, video processing, pattern recognition, etc. would appreciate the thousands of threads lingering out there.

Multithreading is good. Mega-multithreading could be better, and it is certainly in our future, if it remains simple enough for those of us who use computation but are not specialists in computing. The question is how far out the solution is. Could somebody from NI tell us something encouraging?
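
To give a feel for what I mean by "thousands of threads": in a genetic algorithm, every individual's fitness can be evaluated independently, so the population maps naturally onto one CUDA thread per individual. Here is a minimal, untested sketch of just that step (the sum-of-squares objective is only a placeholder for a real fitness function):

// Minimal sketch: evaluate the fitness of an entire population in parallel.
// The objective here (sum of squares) is a placeholder; a real GA would
// substitute its own fitness function.
#include <cuda_runtime.h>
#include <stdio.h>

#define POP_SIZE 4096   /* individuals in the population */
#define GENES      32   /* genes per individual */

__global__ void evaluateFitness(const float *genomes, float *fitness, int popSize)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   /* one thread per individual */
    if (i >= popSize) return;

    float score = 0.0f;
    for (int g = 0; g < GENES; g++) {
        float x = genomes[i * GENES + g];
        score += x * x;                              /* placeholder objective */
    }
    fitness[i] = score;
}

int main(void)
{
    float *dGenomes, *dFitness;
    cudaMalloc((void **)&dGenomes, POP_SIZE * GENES * sizeof(float));
    cudaMalloc((void **)&dFitness, POP_SIZE * sizeof(float));
    /* ... cudaMemcpy the current population into dGenomes ... */

    int threads = 256;
    int blocks  = (POP_SIZE + threads - 1) / threads;
    evaluateFitness<<<blocks, threads>>>(dGenomes, dFitness, POP_SIZE);
    cudaDeviceSynchronize();

    /* ... copy dFitness back and run selection/crossover on the CPU ... */
    cudaFree(dGenomes);
    cudaFree(dFitness);
    return 0;
}

The CPU would still run selection and crossover; only the embarrassingly parallel fitness step moves to the GPU.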
 
layosh
0 Kudos
Message 1 of 16
(8,071 Views)
Hi Layosh,

To my knowledge, there is currently nothing in the works for GPU support. However, I was able to find some other threads that touched on the subject:

http://forums.ni.com/ni/board/message?board.id=250&message.id=28702&requireLogin=False

http://forums.ni.com/ni/board/message?board.id=231&message.id=2608&requireLogin=False

http://forums.ni.com/ni/board/message?board.id=170&message.id=258905&requireLogin=False


You could also submit a suggestion for GPU support through our Product Suggestion Center:

Product Suggestion Center


I will be submitting a product suggestion on this topic on your behalf. R&D takes each product suggestion very seriously, so I encourage you to submit one yourself as well.
Manooch H.
National Instruments
0 Kudos
Message 2 of 16
(8,039 Views)

Odd how LabVIEW is listed in this article as supporting CUDA, but I see nothing listed on NI's website.

 

http://www.engadget.com/2009/05/06/nvidia-tesla-gpus-now-shipping-with-dell-personal-supercomputer/

 

Time will tell... unless somebody has some new info?

0 Kudos
Message 3 of 16
(7,644 Views)

. . . and even more odd that they publish an article, LabVIEW Community Extends Support of New Protocols, stating that NVIDIA CUDA support is one of the "Five LabVIEW Add-Ons in the Spotlight", yet when you go to the LabVIEW add-ons page to "Learn more about these and other LabVIEW add-ons", there is no information, and searching for it brings up nothing but forum threads like this one....

 

Maybe we will soon see it in the NI search too? Or maybe someone more skilled at mining NI can find the correct URLs?

 

Either way, I hope it happens soon, as this is very exciting news!

---------------------------------------------------

Project Engineer
LabVIEW 2009
Run LabVIEW on WinXP and Vista systems.
Used LabVIEW since May 2005

Certifications: CLD and CPI certified
Currently employed.
0 Kudos
Message 4 of 16
(7,532 Views)

The support for GPU processing is building up nicely for the LabVIEW community.

See the recent addition of the ADV toolkit. It clearly shows that this is possible!

/Lajos

0 Kudos
Message 5 of 16
(6,844 Views)

I'll add to Layosh's post. While your post predates the official release of the LabVIEW GPU Computing module (LVCUDA), uploaded to NI Labs in August 2009, that module supersedes the prior LV solution presented as part of an add-on to CVI.

 

Using the toolkit for CVI didn't address some of the threading issues inherent in multithreaded LV. LVCUDA includes both a general interface to CUDA functions and an execution framework that ensures execution is integrated into that of a LabVIEW application as if it were any other process on the CPU.

 

You can find more information here: LabVIEW GPU Computing.

 

***NOTE*** This module is designed to integrate an existing CUDA function previously written outside LabVIEW, not to facilitate GPU code generation from LabVIEW. The latter is far more challenging, as LabVIEW's compiler is designed to target x86 instructions, not those of NVIDIA's proprietary chips.
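
In practice, "a CUDA function written outside LabVIEW" usually means a kernel plus a plain C entry point, compiled by nvcc into a DLL. A rough sketch of that pattern (the export name ScaleArray and its signature are only illustrative, not anything LVCUDA prescribes):

/* scale.cu -- compiled with nvcc into a DLL; the exported plain C entry
   point hides all CUDA details from the calling environment. */
#include <cuda_runtime.h>

__global__ void scaleKernel(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

extern "C" __declspec(dllexport)
int ScaleArray(float *hostData, float factor, int n)
{
    float *dData;
    if (cudaMalloc((void **)&dData, n * sizeof(float)) != cudaSuccess)
        return -1;
    cudaMemcpy(dData, hostData, n * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 256;
    scaleKernel<<<(n + threads - 1) / threads, threads>>>(dData, factor, n);

    /* this copy also synchronizes with the kernel launch above */
    cudaMemcpy(hostData, dData, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dData);
    return (cudaGetLastError() == cudaSuccess) ? 0 : -1;
}

Because the host environment only ever sees the exported C function, the same DLL works from any caller that can load a DLL.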

0 Kudos
Message 6 of 16
(6,819 Views)

I'm a little out of my waters here, so take this with a grain of salt, but I /think/ that with DirectX 11 and "DirectCompute" (and its siblings) there is now at least the beginning of a more standardized framework (on Windows, anyway) for doing calculations on GPUs. With both NVIDIA and ATI supporting DirectX 11, this should make it easier for the LabVIEW compiler to start taking advantage of GPUs by making DirectX calls. DirectX also supports multithreading and such, so this might be, as I said, another step closer to enabling LabVIEW to tie into GPUs without forcing users to one card or another... though of course it would force users to the Windows 7 platform.

---------------------------------------------------

Project Engineer
LabVIEW 2009
Run LabVIEW on WinXP and Vista systems.
Used LabVIEW since May 2005

Certifications: CLD and CPI certified
Currently employed.
0 Kudos
Message 7 of 16
(6,801 Views)

NVIDIA released Parallel Nsight support for Visual Studio. This lets developers use the same IDE for CPU and GPU programming and debugging. I am tempted.

Details here: http://www.nvidia.com/object/parallel-nsight.html

The new Mathematica 8 has similar support for GPU computing. Awesome! But the learning curve to use Mathematica efficiently is very steep.

LabVIEW has great support too, but I am not ready to switch yet. I like the simple elegance of LabWindows/CVI that lets me, a non-professional programmer, access the world of scientific software development, and I need more power.

 

LabWindows/CVI, what is your plan?

 

/Layosh

0 Kudos
Message 8 of 16
(6,003 Views)

We have been using LabWindows/CVI for years now. One of our applications uses genetic algorithms to analyze spectroscopic data. We thought that using a GPU would speed up the process considerably. Has anyone used the NVIDIA OpenCL or CUDA API with LabWindows? I saw that the last post in this discussion was about eight months ago and was hoping there have been some further developments. I'm new to the whole GPU field. Thanks.
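
From what I've pieced together in this thread, the usual route seems to be compiling the CUDA part into a DLL (like the ScaleArray sketch in Message 6) and loading it from plain ANSI C, which CVI can do. Something like this, though I haven't tested it myself and the DLL name is hypothetical:

/* CVI-side caller (plain ANSI C): load the wrapper DLL at run time and
   call its exported entry point; no CUDA headers needed on this side. */
#include <windows.h>
#include <stdio.h>

typedef int (*ScaleArrayFn)(float *data, float factor, int n);

int main(void)
{
    float data[1024];
    int i;
    HMODULE dll;
    ScaleArrayFn scaleArray;

    for (i = 0; i < 1024; i++)
        data[i] = (float)i;

    dll = LoadLibraryA("gpu_wrapper.dll");      /* hypothetical DLL name */
    if (dll == NULL) { printf("DLL not found\n"); return 1; }

    scaleArray = (ScaleArrayFn)GetProcAddress(dll, "ScaleArray");
    if (scaleArray == NULL) { printf("export not found\n"); FreeLibrary(dll); return 1; }

    if (scaleArray(data, 2.0f, 1024) == 0)
        printf("data[10] = %f\n", data[10]);    /* expect 20.000000 */

    FreeLibrary(dll);
    return 0;
}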

0 Kudos
Message 9 of 16
(5,700 Views)

Another programming task that needs the power of the GPU: the NanoSight particle sizer and counter takes images at 30 or more fps. One needs to track each and every particle in the path of the laser beam through the frames; the distance travelled in any direction is measured, from which (using the Stokes-Einstein equation) the diameter of the particle is estimated. In addition, a rotating particle flickers, and from this the ellipticity of the particle can be estimated. Now, to make the images "readable", they need quite a bit of preprocessing, stabilizing, etc. Currently it takes 30-40 minutes on my PC to process a single sample from a 90-second recording. That means I have no clue whether my sample was OK until probably the next day, since recording and processing are separate steps. With GPU support I could get real-time data, with no need for redundant sample recordings, and could use the NanoSight for process control in vaccine production.
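
The per-particle work is embarrassingly parallel, which is why this is such a good fit for the GPU. A rough, untested sketch of just the sizing step, one thread per tracked particle, using the 2-D mean-squared-displacement form of Stokes-Einstein (MSD = 4*D*dt and D = kB*T/(3*pi*eta*d), hence d = 4*kB*T*dt / (3*pi*eta*MSD)):

/* sizing.cu -- rough sketch: one thread per tracked particle.
   Positions are in metres, dt in seconds, viscosity in Pa.s, T in kelvin. */
#include <cuda_runtime.h>

#define KB 1.380649e-23f   /* Boltzmann constant, J/K */
#define PI 3.14159265f

__global__ void sizeParticles(const float2 *tracks, const int *trackLen,
                              int maxLen, int nParticles, float dt,
                              float temperature, float viscosity,
                              float *diameter)
{
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= nParticles) return;

    const float2 *trk = &tracks[p * maxLen];   /* this particle's positions */
    int len = trackLen[p];
    if (len < 2) { diameter[p] = 0.0f; return; }   /* track too short */

    /* mean squared displacement of single-frame steps */
    float msd = 0.0f;
    for (int f = 1; f < len; f++) {
        float dx = trk[f].x - trk[f - 1].x;
        float dy = trk[f].y - trk[f - 1].y;
        msd += dx * dx + dy * dy;
    }
    msd /= (float)(len - 1);

    /* Stokes-Einstein, rearranged for the hydrodynamic diameter */
    diameter[p] = 4.0f * KB * temperature * dt / (3.0f * PI * viscosity * msd);
}

The tracking and image stabilization would still have to be ported too, but even offloading the sizing of thousands of tracks at once would help.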

0 Kudos
Message 10 of 16
(5,600 Views)