04-25-2007 08:17 AM - edited 04-25-2007 08:17 AM
I haven't had a chance to work on that example VI yet, but I do want to clear up some things.
First, it seems that some people are a little confused about what I am talking about.
There was a toolkit released (I think in the 8.0 timeframe) for rendering OpenGL into a picture control. It was unsupported and limited in features, but it was a good way for us to put out some feelers to see if anyone was actually interested in doing this. Basically, the implementation of this toolkit was that you would append opcodes to a picture string, and those opcodes would then be parsed and converted into OpenGL calls when rendered in the picture control.
This is not what I am talking about currently.
There is a new feature in LabVIEW 8.2 also called the 3D picture control. It has an object-oriented, VI Server-based API that you use to build a scene to display in a 3D picture control or, optionally, in a standalone window. The name "picture control" is a bit of a misnomer, because the control itself doesn't share any code with the classic picture control; it was written from scratch.
Basically when the "scene" is rendered, the graph that you built is traversed and OpenGL code is generated. There is no parsing involved. The result is a high performance, easy to use API to render OpenGL into a LabVIEW panel.
We decided to go this route instead of exposing LabVIEW users to raw OpenGL because we thought it was easier to use. It is common practice in 3D programming to use scenegraphs instead of OpenGL directly, because it greatly improves the developer experience. I liken this to developing in assembly (OpenGL) vs. developing in C++ (scenegraphs).
With a scenegraph you can define relationships between objects without having to know specifically when to push and pop matrices, etc. Scenegraphs have many other benefits as well, like geometry sharing and optimization of the graph. If you are unfamiliar with them, I urge you to check out a few:
www.openscenegraph.org, www.opensg.org, http://oss.sgi.com/projects/inventor/
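To make the matrix-stack point concrete, here is a minimal sketch of the bookkeeping that raw OpenGL requires for a single parent/child relationship, which a scenegraph handles for you (drawParent and drawChild are hypothetical placeholders, not part of any API):

glPushMatrix();                              // save the current modelview matrix
glTranslatef(parentX, parentY, parentZ);     // position the parent object
drawParent();
glPushMatrix();                              // child transform is relative to the parent
glRotatef(childAngle, 0.0f, 1.0f, 0.0f);
drawChild();
glPopMatrix();                               // back to the parent's frame
glPopMatrix();                               // back to the original frame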
It is interesting that someone pointed out that you should implement your intensity plot like this:
glEnableClientState(GL_VERTEX_ARRAY);        // enable the vertex array
glVertexPointer(3, GL_FLOAT, 0, vertices);   // 3 floats per vertex, tightly packed
glDrawArrays(GL_POINTS, 0, glpoints);        // glpoints = number of points to draw
Because this is exactly what our scene mesh class does. This is yet another reason why a scenegraph was important to us: now we (or the library that we choose to use) have the responsibility for generating optimal GL code; users do not.
Anyway, scenegraphs do have the disadvantage that you cannot issue arbitrary opcodes, and you are somewhat limited by what LabVIEW exposes via the scenegraph API that we provide. I am very interested to hear about your use case for wanting to use raw OpenGL instead of the scenegraph API, or about any features that you think our scenegraph is missing. If you want to talk about it more, please start a new thread or send me a mail at jeffery.peters AT ni.com -- I think that this thread is pretty much owned by Austin's intensity plot problems now.
Austin, I will be posting my example VI soon, I hope that it helps.
If you already have 2D or 3D points, why don't you draw them directly in OpenGL (or indeed the LabVIEW scenegraph)? That is what OpenGL is good at. With a vertex buffer, you pass a pointer to the points, and each time the points are updated, the data is transferred to the GPU. When you use a texture, the changed texture has to be passed to the GPU each time it changes. The difference is that the texture data is probably larger, though not necessarily. If you have e.g. 2 points, you'll have 4 bytes of data; if you draw them on a 400 × 400 texture, you'll have 480,000 bytes.
Transforming the points is easy (if the transform is linear): you just modify your transformation matrix (or use rotate/translate/scale). It's also very easy to add a third dimension to the points method, for instance the value.
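For example, a sketch only (it assumes a vertex array like the earlier snippet is already set up; offsetX, offsetY, zoom, and numPoints are illustrative names):

glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glTranslatef(offsetX, offsetY, 0.0f);    // pan the whole point set
glScalef(zoom, zoom, 1.0f);              // zoom without touching the vertex data
glDrawArrays(GL_POINTS, 0, numPoints);   // the static points are reused every frame
glPopMatrix();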
I don't know if this response was directed towards me or not, but I will address it.
This was exactly what I was doing via our scene mesh class. I think that the real problem was that under the hood our mesh uses glDrawElements, which requires an element in every enabled array for each index that you pass.
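In other words, something like this sketch (vertices, colors, indices, and numIndices are illustrative names):

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);       // 500,000 static positions
glColorPointer(4, GL_UNSIGNED_BYTE, 0, colors);  // one RGBA value per index, updated every frame
glDrawElements(GL_POINTS, numIndices, GL_UNSIGNED_INT, indices);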
Here is what I was doing.
1. Create a static set of vertices which represented the placement of the samples: point 1 was (0,0,0), point 2 was (1,0,0), and so on. I created a mesh of 1000 × 500 of these.
2. Then, when I got (simulated) data, I updated the current color array by mapping each double value to an RGBA color value (a sketch of such a mapping follows below). The problem is that because we are passing 500,000 distinct indices, I also need to send 500,000 colors.
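The mapping would look roughly like this sketch (the blue-to-red ramp and the minVal/maxVal parameters are my illustrative assumptions, not the actual implementation):

void valueToRGBA(double value, double minVal, double maxVal, unsigned char rgba[4])
{
    double t = (value - minVal) / (maxVal - minVal);   // normalize to [0,1]
    if (t < 0.0) t = 0.0;
    if (t > 1.0) t = 1.0;
    rgba[0] = (unsigned char)(255.0 * t);              // more red as the value rises
    rgba[1] = 0;
    rgba[2] = (unsigned char)(255.0 * (1.0 - t));      // less blue as the value rises
    rgba[3] = 255;                                     // fully opaque
}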
So as I see it, for each frame you need to send 2 million bytes for the new color array (500,000 points × 4 RGBA bytes), and another 2 million bytes for the array of indices you want to draw via glDrawElements (500,000 × 4-byte indices).
I don't know if my card can handle that.
An alternative, which I didn't try, was just regenerating a texture. By my calculations you would still need to send quite a bit of data over to the card each frame: you would have to send a quad over, plus a massive texture, because I assume that every value is important. That is 4 bytes (RGBA) × 1000 × 500, which again is another 2 million bytes. This certainly seems better than my mesh version... but I don't know if it is sustainable either.
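For reference, the texture route would look roughly like this sketch (texID and pixels are illustrative names; it assumes the 1000 × 500 RGBA texture was already created once with glTexImage2D, and that the card supports non-power-of-two textures):

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texID);
// re-upload the new frame's data into the existing texture
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1000, 500, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// then draw a single textured quad
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f(1000.0f, 0.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f(1000.0f, 500.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, 500.0f);
glEnd();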