
openGL to labview

I can post the OpenGL stuff and the Create Window code, since it is basically a rehash of freely available code with some minor modifications, and there isn't any reason other people should have to go through the grief I did trying to get the whole window thing working.

The data is going to be at least 500 by 1000, and probably no bigger than 1000 by 1200.  This is 32-bit floating point data.  The intensity graph (or OpenGL graph), ideally, would be 1:1.  The data is coming in as a matrix, and the entire matrix is rendered.  That is to say, no two matrices are identical and the entire quad must be retextured.  Rotating is a requirement right now, but may not be in the future.  I did briefly look into the 3D controls about 2 months ago for something different.  I was texturing .jpegs onto an object that was created, and there were some problems with the texture at the edges.  I have not tried using the matrix as a texture.  Is this possible?  Is it hardware accelerated?

Thanks,
Austin
Message 11 of 38

I will write up a little example VI here in the next couple of days.  I did a little testing this evening and saw that the intensity plot does indeed get bogged down with a plot of 1200 x 1200.  The 3D picture control also seems to suffer from this, but the scene window, which is hardware accelerated, seems very responsive.

I encourage you to look into the 3D picture control and specifically the Scene Window.  If you can use these, it means you won't have to rely on creating your own window, GL context, etc.  It may also simplify things a bit, as you will be using more native LabVIEW, which is almost always easier to maintain than a hybrid system of DLLs, the Windows API, and LabVIEW code.

 

 

Message 12 of 38
With the 3D Picture control, the performance will be terrible.


Let me explain...


The bottleneck in this application is the size of the data. Ideally, this data is passed directly to the GPU. This is the case with the direct opengl approach.
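
For reference, a minimal sketch (in C, with placeholder sizes and a hypothetical dataMatrix pointer) of what "directly to the GPU" can look like for this kind of data: the float matrix is re-uploaded every frame into a luminance texture mapped onto the quad, with values assumed to be scaled to 0..1 already.

#include <GL/gl.h>

/* One-time setup: allocate a texture object big enough for the data
   (power-of-two sizes are safest on older hardware). */
GLuint CreateDataTexture(int texW, int texH)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, texW, texH, 0,
                 GL_LUMINANCE, GL_FLOAT, NULL);
    return tex;
}

/* Every frame: hand the new matrix straight to the driver, then redraw the textured quad.
   dataMatrix is assumed to hold texW*texH floats already scaled to 0..1. */
void UpdateDataTexture(GLuint tex, int texW, int texH, const float *dataMatrix)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, texW, texH,
                    GL_LUMINANCE, GL_FLOAT, dataMatrix);
}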


With the 3D picture control, the data is copied to a 3D picture control VI. Possibly it has to be converted to match its data format. This VI converts it (probably a simple flatten) to the picture control's data string. That string is passed to the 3D picture control. (Possibly it is passed around a few other VIs before that, but this can be avoided.) The piece of code that parses the (3D) picture control data is basically a virtual machine. When it finds the opcode for the data, the data is sent (probably copied) to opengl.


All in all, the data gets copied and converted several times before being sent to the GPU. Since the data set is large, the performance will drop significantly.


Performance-wise, the 3D picture control is no match for the direct opengl approach. Not to mention the features that opengl has that are not implemented in the picture control. Also, the rendering size is limited (or it was in LV 7.1) to 256 x 256 pixels.


It is nice to have the 3D picture control though. It's just not mature enough to compete with opengl.


Regards,


Wiebe.
Message 13 of 38
It's hard to find a leak without seeing the code...


Do you open the device context every frame? This is wasteful (you can reuse the context), and if you don't close the context, it will cause a leak. It's weird that the exe doesn't have the problem though...
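
For what it's worth, the usual pattern is to get the DC and create the RC once, reuse them for every frame, and only release them at shutdown. A rough sketch, assuming hwnd is the window you already create (pixel format setup and error handling omitted):

#include <windows.h>
#include <GL/gl.h>

static HDC   g_hdc;   /* device context, obtained once */
static HGLRC g_hrc;   /* OpenGL rendering context, created once */

void InitGL(HWND hwnd)
{
    g_hdc = GetDC(hwnd);               /* keep this DC for the life of the window */
    /* ...ChoosePixelFormat / SetPixelFormat on g_hdc goes here... */
    g_hrc = wglCreateContext(g_hdc);
    wglMakeCurrent(g_hdc, g_hrc);
}

void RenderFrame(void)
{
    /* no GetDC or wglCreateContext here, just draw and swap */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* ...draw calls... */
    SwapBuffers(g_hdc);
}

void ShutdownGL(HWND hwnd)
{
    wglMakeCurrent(NULL, NULL);
    wglDeleteContext(g_hrc);
    ReleaseDC(hwnd, g_hdc);            /* skipping these is the classic leak */
}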


Regards,


Wiebe.
Message 14 of 38
"With the 3D picture control, the data is copied to a 3D picture control VI. Possibly it has to be converted to match its data format. This VI converts it (probably a simple flatten) to the picture control's data string."
 
This was true in the first version of the 3D picture control, but not the current one.  We may need to convert some data, but nothing is flattened into a picture string anymore. 

"Also, the rendering size is limited (or it was in LV 7.1) to 256 x 256 pixels."
 
Again, this is not the case with the current version.

"It is nice to have the 3D picture control though. It's just not mature enough to compete with opengl."
 
This still might be true.  We don't expose every single low-level feature of OpenGL in the current version of the 3D picture control; instead we give you an object-oriented API that under the hood is implemented using OpenGL.  I haven't seen anything posted that needs a feature we don't expose.
 
You might still be right, though, and you still might not get enough performance out of this experiment.  I think it is worth exploring, though, as it relieves you of creating a window, a GL context, etc.
 
My current idea is to create a SceneMesh object and give each vertex a color based on the value of the data.  I don't know if this will be performant enough, but initial experiments are promising.
Message 15 of 38
Let's get this straight. I'm glad it's there. I just wish I could control the 3D picture control with normal opengl commands.


I tried to do this with the old 3D picture control. I made my own parser that parsed opengl commands and converted them to the 3D picture control. But things like push and pop matrix are not exposed, and they are kinda needed in opengl.


Things that are not exposed:


Reading (writing) the depth buffer.
Reading (writing) subtextures.
Popping and pushing matrix states (although NI's VIs don't use them, I control the picture control with its native opengl opcodes, and it's very inconvenient not having them).
Pixel shader functionality
Vertex shader functionality
Stencil buffer functionality
Object labeling


That's just from memory; the list goes on.
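
For anyone wondering why the matrix push/pop matters: in plain OpenGL you bracket per-object transforms like this, so one object's transform does not leak into the next draw (x, y and angle are just placeholders):

glMatrixMode(GL_MODELVIEW);
glPushMatrix();                      /* save the current modelview matrix */
glTranslatef(x, y, 0.0f);            /* transforms that apply only to this object */
glRotatef(angle, 0.0f, 0.0f, 1.0f);
/* ...draw the object... */
glPopMatrix();                       /* restore, so the next object starts from a clean state */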


Regards,


Wiebe.
Message 16 of 38
Wiebe:

Thanks, your advice on the Device Context was correct.  I had an extraneous call that was creating a DC and an RC that weren't needed.  Memory leak solved.  This leak-free DLL was used in the tests below.

As for the performance of Labview rendering versus OpenGL rendering:

The test is a 500 by 1000 matrix of floating point numbers that has random noise added each loop.  That is fed into the data processing DLL and labview code that measures the time to process the data and display it to the screen.  This test was performed on a Core 2 Duo T6600 with a Geforce 7900 video card.  The main things to look at are loop times and CPU usage.

The OpenGL code (not reusing device contexts) processed and displayed the data in 29ms, utilizing 85% and 15% of the CPU cores.

The Labview Intensity plot completes its task in 21ms.  CPU usage on one core is 90 - 95% and the CPU usage on the other core is 30 - 40%.

Just processing the data and not displaying anything has a loop time of 19ms, and CPU usage of 85% and 10 - 15%.

There are still things that could be done to optimize the OpenGL code.  Wiebe pointed out that not reusing device contexts can cost time, and the OpenGL code is currently not reusing them.  I will work today and tomorrow on figuring out how to use one device context instead of creating a new one each time and report back with those results.  I guess what I am shooting for is low CPU usage (85%, 10 - 15%) with the fastest possible loop time (19ms).  I have also not tried the Labview rendering tool, which I may try later in the week.

Thanks,
Austin


Message 17 of 38

Great thread. I ran into the same issues of getting a DC from a Labview window to do OpenGL rendering a few years back and gave up on it.  I do like the 3D toolkit, but it is unfortunate that pushes and pops (and a few other features) are not accessible.  I hope NI will continue to push in this direction.  It does look like Vista has much more native support for 3D and vector graphics, making it much easier to develop some non-OpenGL applications in the future, but I do like OpenGL much more than DirectX.  Thanks for all the useful information.

 

Paul

Paul Falkenstein
Coleman Technologies Inc.
CLA, CPI, AIA-Vision
Labview 4.0- 2013, RT, Vision, FPGA
Message 18 of 38
Hey guys, just another update. 

Wiebe was right, once again.  Reusing the same DC and RC makes a significant improvement.  Using the same test as in my last post, I can run the data processing and display the data in OpenGL at 21ms.  The cores are running at 90% and 20%, or thereabouts.

I also found a disconcerting issue in Labview.  The way I was timing the speed of these processes was with a sequence structure that had the process in frame 1 and timers in frames 0 and 2.  The timers are subtracted in frame 2, giving the time the process takes to run.  The issue is that the update to the intensity graph does not appear to be included in that timing, presumably because the front panel redraw happens after the graph terminal is written.  With the intensity graph expanded to 500 by 1000, Labview reports that processing and displaying the data takes 21ms, which is the number I reported in my first post on the timing.  However, looking at the graph on the monitor and how it updates, it is clearly not redrawing anywhere near 50 frames a second, more like 5 - 10.  Taking this oddity into account, an OpenGL intensity graph totally dominates a Labview one, no question about it.

Thanks,
Austin




Message 19 of 38
The opengl rendering method you use makes a significant difference.


Do you render using vertex arrays? Like this:


In the init:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer (3, GL_FLOAT, 0, vertices);


Then in the loop:
glDrawArrays(GL_POINTS,0, glpoints);


This will be a big improvement if you are currently using a for loop to send each point to opengl with glVertex.
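
For comparison, the per-point immediate-mode pattern that the vertex array replaces would look something like this (assuming vertices holds 3 * glpoints floats):

int i;
glBegin(GL_POINTS);
for (i = 0; i < glpoints; i++)
    glVertex3fv(&vertices[3 * i]);   /* one call per point: slow for half a million points */
glEnd();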


Also, disable smooth points, and render without shading:


glDisable (GL_POINT_SMOOTH);
glShadeModel(GL_FLAT);


If all points have the same color, you can disable the depth test:


glDisable(GL_DEPTH_TEST);


That's the benefit of opengl. You can choose the optimal solution. That's also the down side... You have to choose yourself.


Regards,


Wiebe.
Message 20 of 38