openGL to labview

I haven't had a chance to work on that example VI yet, but I do want to clear up some things.

First, it seems that some people are a little confused about what I am talking about.

There was a toolkit released (I think in the 8.0 timeframe) for rendering OpenGL into a picture control.  It was unsupported and limited in features, but it was a good way for us to put out some feelers and see if anyone was actually interested in doing this.  Basically, the implementation of this toolkit was that you would append opcodes to a picture string, and those opcodes would then be parsed and converted into OpenGL when rendered in the picture control.

This is not what I am talking about currently.

There is a new feature in LabVIEW 8.2, also called the 3D picture control.  It has an object-oriented, VI Server-based API that you use to build a scene to display in a 3D picture control or, optionally, in a standalone window.  The name "picture control" is a bit of a misnomer, because the control itself doesn't share any code with the classic picture control; it was written from scratch.

Basically, when the "scene" is rendered, the graph that you built is traversed and OpenGL code is generated.  There is no parsing involved.  The result is a high-performance, easy-to-use API for rendering OpenGL into a LabVIEW panel.
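
Conceptually (and this is just an illustrative C++ sketch, not our actual code -- the class names are made up), a scenegraph traversal boils down to something like this:

// Illustrative sketch only: a node hierarchy is walked at render time and
// immediate-mode OpenGL calls are emitted along the way.
#include <GL/gl.h>
#include <vector>

class SceneNode {
public:
    virtual ~SceneNode() {}
    virtual void render() {}                    // concrete nodes override this

    void addChild(SceneNode* child) { children.push_back(child); }

    void traverse() {
        glPushMatrix();                         // isolate this subtree's transform
        render();                               // emit this node's OpenGL
        for (size_t i = 0; i < children.size(); ++i)
            children[i]->traverse();            // children inherit the current state
        glPopMatrix();                          // restore the parent's transform
    }

private:
    std::vector<SceneNode*> children;
};

// A transform node only changes the modelview matrix; everything below it
// in the graph is drawn relative to that transform.
class TransformNode : public SceneNode {
public:
    explicit TransformNode(const GLfloat m[16]) { for (int i = 0; i < 16; ++i) matrix[i] = m[i]; }
    virtual void render() { glMultMatrixf(matrix); }
private:
    GLfloat matrix[16];
};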

We decided to go this route instead of exposing LabVIEW users to OpenGL because we thought it was easier to use.  It is common practice in 3D programming to use scenegraphs instead of OpenGL directly, because they greatly improve the developer experience.  I liken it to developing in assembly (OpenGL) vs. developing in C++ (scenegraphs).

With a scenegraph you can define relationships between objects without having to know exactly when to push and pop matrices, etc.  Scenegraphs have many other benefits as well, such as geometry sharing and optimization of the graph.  If you are unfamiliar with them, I urge you to check out a few:

www.openscenegraph.org, www.opensg.org, http://oss.sgi.com/projects/inventor/

It is interesting that someone pointed out that you should implement your intensity plot like this:

glEnableClientState(GL_VERTEX_ARRAY);        // enable the vertex array
glVertexPointer(3, GL_FLOAT, 0, vertices);   // point GL at the (x, y, z) float data
glDrawArrays(GL_POINTS, 0, glpoints);        // draw all glpoints points in one call

Because this is exactly what our scene mesh class does.  This is yet another reason why a scenegraph was important to us: now we (or the library we choose to use) are responsible for generating optimal GL code; users are not.
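
To make that concrete, a fleshed-out version of the snippet might look roughly like this (again a sketch, not our actual mesh code; the grid layout and the function name are just for illustration):

// Sketch: draw a grid of sample points with a vertex array in one call.
#include <GL/gl.h>
#include <vector>

void drawPointGrid(int cols, int rows)
{
    // One (x, y, z) triple per sample point.
    std::vector<GLfloat> vertices;
    vertices.reserve(cols * rows * 3);
    for (int y = 0; y < rows; ++y) {
        for (int x = 0; x < cols; ++x) {
            vertices.push_back((GLfloat)x);
            vertices.push_back((GLfloat)y);
            vertices.push_back(0.0f);
        }
    }

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, &vertices[0]);
    glDrawArrays(GL_POINTS, 0, cols * rows);    // all points in a single call
    glDisableClientState(GL_VERTEX_ARRAY);
}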

Anyway, scenegraphs do have the disadvantage that you cannot issue arbitrary opcodes, and you are somewhat limited by what LabVIEW exposes via the scenegraph API that we provide.  I am very interested to hear about your use case for wanting raw OpenGL instead of the scenegraph API, or about any features you think our scenegraph is missing.  If you want to talk about it more, please start a new thread or send me an email at jeffery.peters AT ni.com -- I think this thread is pretty much owned by Austin's intensity plot problems now.

Austin, I will be posting my example VI soon; I hope that it helps.


 

 

Message Edited by jpeters on 04-25-2007 08:18 AM

Message 21 of 38
Wiebe, this is an interesting idea, though wouldn't it be slower, since I have to send each point to the card instead of a pointer to a set of points that can be transferred all at once?  (I know nothing about vertex stuff yet.)  I will investigate this idea, since my next step is to do a coordinate transform on the data, and sending it point by point to the video card would be a very easy way to accomplish this.

As for the method I am using now, it draws a quad and applies a texture to it, then rotates the quad to orient the data in a particular manner.  This is probably not the fastest way, but it is fast enough to match the speed of the data processing algorithm.

jpeters, thanks for doing that VI.  I look forward to trying it out.  Have you looked into that issue I reported a couple of posts ago about the intensity graph re-draw time not being taken into account by the timing sequence loop? 

Thanks,
Austin
Message 22 of 38
Have you looked into that issue I reported a couple of posts ago about the intensity graph re-draw time not being taken into account by the timing sequence loop? 
 
This is how indicators work.  We don't display on an indicator synchronously when you write its value.  Basically what happens when you write the value is that we schedule an update sometime in the future.
 
I think you can get a better feel for the timing if you use the Value property, or if you turn on synchronous display for the control (in the Advanced popup menu).
 
Jeff P
 

 
Message 23 of 38
I did complete my mesh test, but I have found that the results are unsatisfactory.  This might be because my video card isn't powerful enough to transform 500,000 points in a reasonable amount of time.
 
I talked this over with a couple of guys here in Austin, and someone suggested that you try the picture control.  Basically, what you could do is build a flattened pixmap of color values derived from your data values.
 
I have an example VI that has reasonable performance, but it seems like the bottleneck here is converting the double values into colors for the bitmap.
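
Conceptually the conversion is just a loop like the one below (a C++ sketch with made-up names -- the actual work in the VI is done in G), and a precomputed lookup table is about the only easy optimization:

// Sketch: map doubles to packed 24-bit RGB values for a flattened pixmap.
#include <vector>
#include <stdint.h>

// Build a simple grayscale ramp once, up front.
std::vector<uint32_t> makeColorTable()
{
    std::vector<uint32_t> table(256);
    for (int i = 0; i < 256; ++i)
        table[i] = (uint32_t)((i << 16) | (i << 8) | i);   // 0x00RRGGBB
    return table;
}

// Quantize each sample into the table; this per-sample loop is the bottleneck.
std::vector<uint32_t> mapToColors(const std::vector<double>& data,
                                  double lo, double hi,
                                  const std::vector<uint32_t>& table)
{
    std::vector<uint32_t> pixels(data.size());
    const double scale = 255.0 / (hi - lo);
    for (size_t i = 0; i < data.size(); ++i) {
        int idx = (int)((data[i] - lo) * scale);
        if (idx < 0) idx = 0;
        if (idx > 255) idx = 255;
        pixels[i] = table[idx];
    }
    return pixels;
}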
 
The VI that you are looking for is called picture intensity.vi
 
Jeff Peters
 
Message 24 of 38
Jeff,


I'll have to look at the new 3D picture control implementation. It appeared to be just a simple reference to the same old control, but if you say it's not, I'll have a further look at it.


The problem with scene graphs is that they implement an abstraction. If your requirements differ from that abstraction, it is hard to use the implementation. For instance, if you have a scene graph that is intended for a 3D shooter, it will be hard to use it for drawing 2D charts.


Limitations in C++ scene graph implementations are avoided by allowing custom scene nodes. Because you are using C++, you can inherit from a base scene node class and add your own scene node that implements as much pure OpenGL, DirectX, or intermediate low-level stuff as you want.
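
Something like this, for example (just a sketch; "SceneNode" is a made-up stand-in for whatever base class the library provides):

// Sketch: a custom scene node that issues raw OpenGL from inside the graph.
#include <GL/gl.h>

class SceneNode {
public:
    virtual ~SceneNode() {}
    virtual void render() = 0;      // called by the graph during traversal
};

class RawGLNode : public SceneNode {
public:
    virtual void render() {
        // Inside render() you can issue any OpenGL you like, including
        // extensions the scene graph itself knows nothing about.
        glBegin(GL_LINES);
        glVertex3f(0.0f, 0.0f, 0.0f);
        glVertex3f(1.0f, 1.0f, 0.0f);
        glEnd();
    }
};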


I'm sure the LabVIEW scene graph has limitations. That can't be avoided. What is a pity is that you can't add to the implementation (from what I gather from your description). The scene graph is nice for a lot of things, but for the rest a direct interface would be nice.


I know NI is always careful when it comes to users. Allowing a pure OpenGL interface would mean LabVIEW might crash, or even worse, customers might complain that they can't get anything to draw! This would add to the support needed, etc.


I'll have to look at the LabVIEW implementation a lot more before I can tell you why I need OpenGL and can't use the scene graph. Your comparison between assembler and C++ is one thing: I'd like to use OpenGL when I need absolute performance, and I'd use the scene graph when I want a faster implementation time and reasonable performance. The newest OpenGL tricks can only be done using the newest OpenGL extensions. To toy around with them, you need access to OpenGL, and for now LabVIEW doesn't allow that. Also, for OpenGL users, it means they have to learn something else, even though they already know all the details.


Btw, the implementation with the vertex arrays is a fast way to draw vertices, but nowadays you can use their successor: buffer objects. They allow a few vertices in a set to be changed without sending the unchanged vertices again.
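
For example (a sketch that assumes the OpenGL 1.5 buffer object entry points are available, e.g. via an extension loader like GLEW; the surrounding function names are made up):

// Sketch: keep the vertices in a buffer object and re-send only what changed.
#include <GL/glew.h>   // call glewInit() once after creating the GL context

GLuint createVertexBuffer(const GLfloat* vertices, GLsizeiptr bytes)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, bytes, vertices, GL_DYNAMIC_DRAW);   // upload once
    return vbo;
}

// Replace only the range of vertices that changed.
void updateVertices(GLuint vbo, GLintptr byteOffset, const GLfloat* changed, GLsizeiptr bytes)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferSubData(GL_ARRAY_BUFFER, byteOffset, bytes, changed);
}

void drawPoints(GLuint vbo, GLsizei count)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (const GLvoid*)0);   // data comes from the bound buffer
    glDrawArrays(GL_POINTS, 0, count);
    glDisableClientState(GL_VERTEX_ARRAY);
}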


Regards,


Wiebe.
Message 25 of 38
If you already have 2D or 3D points, why don't you draw them directly in OpenGL (or indeed the LabVIEW scenegraph)? That is what OpenGL is good at. With the vertex buffer, you pass a pointer to the points. Each time the points are updated, the data is transferred to the GPU. When you use a texture, the changed texture has to be passed to the GPU each time it changes. The difference is that the texture data is probably larger. Not necessarily, though: if you have, e.g., 2 points, you'll have 4 bytes of data, but if you draw them on a 400x400 texture, you'll have 480,000 bytes.


Transforming the points is easy (if the transform is linear): just modify your transformation matrix (or use rotate/translate/scale). It's also very easy to add a third dimension to the points method, for instance the data value.
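
Roughly like this (a sketch; the function and its parameters are made up):

// Sketch: draw the same point set under a linear transform by changing the
// modelview matrix instead of touching the point data itself.
#include <GL/gl.h>

void drawTransformedPoints(const GLfloat* xyz, GLsizei count,
                           GLfloat angleDeg, GLfloat sx, GLfloat sy, GLfloat sz)
{
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glRotatef(angleDeg, 0.0f, 0.0f, 1.0f);   // rotate about the z axis
    glScalef(sx, sy, sz);                    // the z scale can map "value" to height

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, xyz);    // z of each point can carry the data value
    glDrawArrays(GL_POINTS, 0, count);
    glDisableClientState(GL_VERTEX_ARRAY);

    glPopMatrix();
}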


Regards,


Wiebe.
Message 26 of 38
The problem with scene graphs is that they implement an abstraction.
 
You are exactly right, but I would phrase it a different way: the benefit of scenegraphs is that they implement an abstraction.  That way you don't need to know all the nitty-gritty details to get something to work.  That said, I see your point.  Specifically:
 
The newest opengl tricks can only be done using the newest opengl extensions.
 
You are right that if you need to use some feature of OpenGL that we don't expose somehow through the scenegraph, you are probably not going to have an easy time working around the problem.
 
Limitations in C++ scene graph implementations are avoided by allowing custom scene nodes.
 
I think this is a nice area of research for us and something we thought about when we adopted the scenegraph.  I will be sure to bring it up when we think about improvements to the scenegraph. The problem I can see off the bat is: how would we implement a custom node?  Because the scenegraph is a retained-mode renderer, you cannot just issue the commands inline.  Instead, we would probably allow you to collect a stream of opcodes in a string or other construct and then parse it at render time.  That isn't exactly fast, and it is precisely what the old 3D toolkit did.  It is OK for simple things like glVertex, etc., but for things like glVertexPointer or other opcodes that carry a fair amount of data, I think data copies would ultimately kill performance.  Which brings me to a different alternative:
 
I think it would be nice to just allow you to use OpenGL directly, if not for the "tricks" you can get, then to make it easier to integrate code that is already written.  I am sure that many people who want to use 3D in LabVIEW already have routines that are highly optimized for their application... it would be nice if they didn't have to "port" them to our scenegraph.
 
 
 
 
 
Message 27 of 38

If you already have 2D or 3D points, why don't you draw them directly in OpenGL (or indeed the LabVIEW scenegraph)? That is what OpenGL is good at. With the vertex buffer, you pass a pointer to the points. Each time the points are updated, the data is transferred to the GPU. When you use a texture, the changed texture has to be passed to the GPU each time it changes. The difference is that the texture data is probably larger. Not necessarily, though: if you have, e.g., 2 points, you'll have 4 bytes of data, but if you draw them on a 400x400 texture, you'll have 480,000 bytes.


Transforming the points is easy (if the transform is linear): just modify your transformation matrix (or use rotate/translate/scale). It's also very easy to add a third dimension to the points method, for instance the data value.

I don't know if this response was directed towards me or not, but I will address it.

This was exactly what I was doing via our scene mesh class.  I think that the real problem was that, under the hood, our mesh uses glDrawElements, which requires an element in every enabled array for every index that you pass.

Here is what I was doing.

1.  Create a static set of vertices representing the placement of the samples.  Point 1 was (0,0,0), point 2 was (1,0,0), and so on.  I created a mesh of 1000 x 500 of these.

2.  Then, when I got (simulated) data, I updated the current color array by mapping each double value to an RGBA color value.  The problem is that because we are passing 500,000 distinct indices, I also need to send 500,000 colors.

So as I see it, for each frame you need to send

2 million bytes for the new color array

and another 2 million bytes for the array of indices you want to draw via glDrawElements.

I don't know if my card can handle that.
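
In GL terms, each frame amounts to roughly this (a sketch, not our actual mesh code):

// Sketch of the per-frame cost of the mesh approach.  For 1000 x 500 samples:
// 500,000 RGBA colors (~2 MB) plus 500,000 32-bit indices (~2 MB) per frame,
// on top of the static vertex array.
#include <GL/gl.h>

void drawMeshFrame(const GLfloat* staticVertices,   // 500,000 * 3 floats, set up once
                   const GLubyte* frameColors,      // 500,000 * 4 bytes, new every frame
                   const GLuint*  indices,          // 500,000 indices
                   GLsizei count)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);

    glVertexPointer(3, GL_FLOAT, 0, staticVertices);
    glColorPointer(4, GL_UNSIGNED_BYTE, 0, frameColors);

    // Every index pulls an element from *every* enabled array.
    glDrawElements(GL_POINTS, count, GL_UNSIGNED_INT, indices);

    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}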

An alternative, which I didn't try, is just regenerating a texture.  By my calculations you would still need to send quite a bit of data over to the card each frame.

You would have to send a quad over, and you would have to send a massive texture over, because I assume that every value is important.

Something like 4 (RGBA) * 1000 * 500 bytes, which is another 2 million bytes per frame. This certainly seems better than my mesh version... but I don't know if this is sustainable either.
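
For reference, the texture variant would look roughly like this (again just a sketch, not code from the VI; the function names are made up):

// Sketch: keep one quad and re-upload a 1000 x 500 RGBA texture (~2 MB)
// each time the data changes.
#include <GL/gl.h>

void updateIntensityTexture(GLuint tex, const GLubyte* rgba, GLsizei w, GLsizei h)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    // Replace the image in place; the texture was created once with glTexImage2D.
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, rgba);
}

void drawTexturedQuad(GLuint tex, GLfloat w, GLfloat h)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f(w,    0.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f(w,    h);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, h);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}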

 

Message 28 of 38
Let me know if I can help. I'm already in the beta testing program...


Regards,


Wiebe.
Message 29 of 38
It was a response to AustinMcElroy. Sorry about that.
Message 30 of 38