LabVIEW


Displaying realtime video

Hi,
 
I am making a VI where I acquire frames from a camera using an NI frame grabber. Using the IMAQ Grab function, I can loop to acquire images at high speed and display them in almost real time. Another part of the VI is intended to analyse the images as quickly as possible. However, the image processing takes more time than the acquisition. My problem is that I don't know how to display the acquired images as quickly as they are acquired while still processing frames (not every frame; when the program finishes processing a frame, it should begin processing whatever frame was last acquired). How can I set up my VI so that it isn't waiting for the processing to finish before updating the display? I was thinking I need two separate while loops: one to acquire frames and display them in "real time", and another that processes frames. Could I use a local variable in the processing loop to access the image display in order to obtain the current frame? I have heard a lot of negatives about local variables, so I was wondering if there is a better way to do this.
 
Any suggestions would be very helpful,
 
Thanks a lot 
Jeff


Using LabVIEW 7 Express
Message 1 of 11
I think two parallel loops and the use of the local variable to get the most recent image in the display is a perfectly acceptable way to do this.
 
Local variables aren't always bad.  The concern is race conditions: with multiple writers, whichever writer goes second wins, since its write overwrites the first one's data; there is also the risk that critical data is overwritten before it is read.
 
Since you said you are only interested in processing the most recently acquired frame, and it sounds like you have only one writer (whatever is doing the frame grabbing), I think your proposed setup will work.  I think you should go ahead and try it and see how it performs.  Good luck!
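Since LabVIEW diagrams can't be shown in a post, here is a rough text-language sketch of the single-writer "local variable" pattern in Python (the class name and structure are made up for illustration): the acquisition loop overwrites a single shared slot, and the processing loop reads whatever is there whenever it is ready, silently skipping intermediate frames.

```python
import threading

class LatestFrame:
    """Single shared slot, like a LabVIEW local variable: the reader
    always sees whatever the writer stored last, and any frames written
    in between are silently dropped."""
    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None

    def write(self, frame):
        with self._lock:
            self._frame = frame

    def read(self):
        with self._lock:
            return self._frame

slot = LatestFrame()
for i in range(5):          # acquisition loop writes every frame
    slot.write(f"frame-{i}")
print(slot.read())          # processing loop only sees the most recent one
```

With a single writer (the grab loop) there is no write/write race, which is exactly the safe case described above.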

Message Edited by Ravens Fan on 09-22-2007 01:24 PM

Message 2 of 11

I would use a queue that is set to hold one image. Load the queue in one while loop. Read the queue in the second loop and perform the analysis.

Maybe you could have several loops to analyze the images??
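A text sketch of the single-slot queue idea (Python's `queue` module standing in for LabVIEW's queue VIs; `lossy_put` is a made-up helper name): when the queue is full, the stale frame is dropped so the analysis loop always finds the newest one.

```python
import queue

def lossy_put(q, item):
    """Enqueue into a single-slot queue; if it is full, discard the
    stale frame first. Safe here because there is only one producer."""
    try:
        q.put_nowait(item)
    except queue.Full:
        try:
            q.get_nowait()   # drop the unprocessed frame
        except queue.Empty:
            pass
        q.put_nowait(item)

frames = queue.Queue(maxsize=1)
for i in range(4):
    lossy_put(frames, i)     # acquisition outpaces analysis
print(frames.get_nowait())   # analysis sees only the latest frame
```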

Message Edited by unclebump on 09-22-2007 05:33 PM

Message 3 of 11
Thanks to both of you for replying!
 
I have never used queues in LabVIEW, so I wasn't exactly sure how they work. I'm going to try to find some explanations of them. Thanks for the suggestion! Currently I am using the two parallel loops and a local variable, but I haven't been able to test it yet, so I don't know how it will perform. I will repost once I have tested it.
 
Thanks
 
Jeff


Using LabVIEW 7 Express
Message 4 of 11

Hello everyone,

Thanks for posting up to this point; I like all of the recommendations for this kind of situation.  Processing images while acquiring them at the same time is a very common task that IMAQ customers face.  There is more to image acquisition and processing in LabVIEW than meets the eye, and I would like to mention a few things.

Probably the biggest difference between IMAQ and other LabVIEW-related activities is that the image datatype (the purple wire) passed from one vision process to the next only contains a reference to the image in memory, not the actual pixel data.  This was done purposefully to increase performance when passing an image from one point to the next, since images are typically very large compared to other LabVIEW datatypes.  For this reason, the image reference does not need to be passed through a queue to access the image data between multiple loops; there are more efficient means of doing so.

Instead, consider using an image acquisition process called a Ring.  A Ring configures the IMAQ driver to acquire a sequence of images into memory automatically; you can then call IMAQ Extract Buffer in your processing loop to pull an image from this sequence into a processing buffer (the buffer you created with the IMAQ Create VI).  When the IMAQ card reaches the end of the sequence (called a buffer list) in the driver-managed environment, it automatically starts over at the beginning of the list.  The images are therefore held in memory until you need them for processing.  For more information on the use of Rings, check out the following suggested links:

NI Developer Zone: Rings
NI Developer Zone: Ring Acquisitions
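A toy model of the Ring behavior described above may help (a Python sketch; the class and method names are invented and do not mirror the IMAQ API): the driver cycles through a fixed buffer list, overwriting the oldest slot once it wraps, and extraction copies a frame out into a separate processing buffer.

```python
import copy

class Ring:
    """Toy model of a ring acquisition: a fixed buffer list that the
    'driver' fills cyclically, overwriting old slots as it wraps."""
    def __init__(self, n_buffers):
        self.buffers = [None] * n_buffers
        self.index = 0

    def acquire(self, frame):
        self.buffers[self.index % len(self.buffers)] = frame
        self.index += 1

    def extract(self, offset=1):
        # Copy the most recent frame into a separate processing buffer,
        # much as IMAQ Extract Buffer copies into an IMAQ Create buffer.
        return copy.deepcopy(self.buffers[(self.index - offset) % len(self.buffers)])

ring = Ring(3)
for i in range(5):          # 5 frames into 3 buffers: frames 0 and 1 are overwritten
    ring.acquire([i])
print(ring.extract())       # the newest frame survives
```

The overwrite-on-wrap behavior is exactly the data-loss risk mentioned below: if processing lags too far behind, the slot holding an unprocessed frame gets reused.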

Now, hopefully with a more efficient implementation of image sharing between loops in place, your processing loop will be able to keep up with the acquisition loop.  Otherwise, the index in the buffer list of the Ring will overwrite an image that you have not yet processed, effectively losing data in your program.  If your processing loop is simply taking too long, you will have to consider options to improve processing performance.  Some ways to do this include using ROIs to specify what you want to process, fine-tuning pattern matching algorithms, substituting slower algorithms with other methods, reducing the resolution of the images you are acquiring, or perhaps acquiring all of the images first (using a sequence acquisition) and then running the processing loop afterwards.

I hope this gives you some direction.  Working with images can be quite a different experience, and it is important to understand the acquisition tools that you have available in order to get the most out of the driver.  Please post back if you have any followup questions.

Regards,

Mike Torba
Vision Applications Engineer
National Instruments 

Message 5 of 11

Thanks for that Mike!

I will look more closely at the ring tutorials you posted links to. Maybe it would help if I gave a better description of my application. I am writing a program so that a technician can study the interference fringes created by a sloped optic. I am obtaining images of the fringes from a camera (currently using the Grab function). I want the images to be displayed, as fast as they are acquired, on the front panel so that the technician can see them in real time during processing. At the same time, the program analyses the images using an edited Vision script to determine how quickly and in what direction the fringes are moving over time, and displays the results on a waveform chart under the main image.

I don't think there is any way the processing can function as quickly as the acquisition (at least not with my level of programming), so I intend to have the display show images as fast as possible while the processing function just processes whatever image was acquired last. To do this I implemented a stack (or a LIFO queue, as LabVIEW calls it) and limited the number of images that could be stored in it to avoid overflowing memory. This means that some frames are not processed, but that is not a problem since the velocity of the fringes is not great.

I was quite concerned with the problem of working with image files (since I was unaware that IMAQ images are in fact pointers), so I had been avoiding wire branching as much as possible. When you say that IMAQ datatypes are references to the image in memory, do you mean they are pointers in the sense of pointers in C?

If this is the case, it could create potential errors for me, as I have two separate scripts running on the same image in parallel. I had assumed that a copy of the image was made and then processed in the second script, but if there are in fact two pointers to the same image, the image is getting processed in two scripts simultaneously. I suppose I can fix this by explicitly creating a copy of the image. On the other hand, my second script only exists so I can find the centroid of a small masked area of the image. Is it possible to set a temporary mask in a script to find a centroid and then remove the mask so I am once again working with the whole image? That would eliminate the need for parallel scripts.
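The bounded LIFO described above can be sketched in text form (Python here, as a stand-in for the LabVIEW diagram; `deque(maxlen=...)` plays the role of the size-limited stack): new frames are always kept, the oldest frame falls off the far end when the bound is hit, and processing pops the most recent frame.

```python
from collections import deque

# Bounded LIFO stack: when the bound is hit, the *oldest* frame falls
# off the far end, so memory cannot overflow and some frames are skipped.
stack = deque(maxlen=3)
for i in range(6):
    stack.append(i)          # acquisition pushes every frame
latest = stack.pop()         # processing always pops the most recent frame
print(latest, list(stack))
```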

On a side note, I seem to have lost the ability to rate messages. There is no longer a set of radio buttons with Rate This Message at the bottom corner of posts. Any idea why this might be?

 

Jeff


Using LabVIEW 7 Express
Message 6 of 11
Hello again Jeff,
 
I understand your situation.  In this case a Ring is still handy: you can copy an image sitting in the Ring into a processing buffer in LabVIEW for display, and every once in a while have the analysis loop pull its own copy of the image out for processing.  Of course, it can get a little tricky to determine when the processed image will be displayed.
 
You are correct in thinking that the image reference is like a pointer.  That is precisely what it is: it contains the memory address of the image stored in memory.  Therefore it is quite possible for Vision functions to accidentally process the image in the wrong sequence if there is no other means of coercing the order of execution.  This is why you will see heavy use of the sequence structure in many of the IMAQ and Vision shipping examples (aside from conserving block diagram real estate), as well as careful use of the error cluster throughout the programs.  The image datatype does NOT follow the dataflow paradigm in LabVIEW, and race conditions will occur between VIs if the only element passed between them is the image datatype.  Be careful with this.
 
Your idea of processing an image and then reverting back to the source is quite possible.  This is where processing buffers come into play.  You will notice that many of the Vision VIs have two buffer inputs: a source image input and a destination buffer input.  If you only supply the VI with a source buffer, the algorithm will process the image in place, changing the image sitting in that buffer.  However, if you also supply a destination buffer, the algorithm will read from the source and store the results in the destination buffer, effectively making a copy of the image and preserving the integrity of the source image.
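The source/destination behavior can be modeled in a few lines (a Python sketch; `threshold` is a hypothetical operation, not a real Vision VI, and plain lists stand in for image buffers): with no destination the source is modified in place; with a destination the source stays intact.

```python
def threshold(src, dst=None, level=128):
    """Hypothetical Vision-style operation illustrating the
    source/destination buffer pattern: no destination means the source
    buffer is overwritten in place; supplying a destination preserves
    the source and writes the result into the destination buffer."""
    out = src if dst is None else dst
    out[:] = [255 if p >= level else 0 for p in src]
    return out

image = [10, 200, 130]
copy_buf = []
threshold(image, dst=copy_buf)   # source preserved, result in copy_buf
print(image, copy_buf)
threshold(image)                 # in place: source overwritten
print(image)
```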
 
The rating system has been pulled down by our web support operations group.  I apologize for this setback.  At this time I do not know if there are any plans to add it back, or some form of it.  There is a forum post somewhere that details what happened.
 
Let me know how it goes!
 
Mike Torba
Vision Applications Engineer
National Instruments
Message 7 of 11
Here is the link about the rating system change.
 
Basically you need to have posted 50 times to be able to rate a message.  I see by your profile you are at 41 messages, so you lost the ability to rate messages, but it won't be for long.
Message 8 of 11
Hi,
 
Thanks again for the help. I looked into Rings, and they seem fairly similar to what I had implemented: a form of circular array. I'm also glad you told me about the IMAQ image datatype functioning like a pointer; it helped explain some very funny errors that were occurring with race conditions and improper data flow! I managed to eliminate the parallel scripts by storing the original image in a buffer, running the first script, and then restoring the image from the buffer before running the second script. I think I have figured out the image acquisition part of this project; however, I have a few questions relating to the actual processing stage.
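The save-and-restore pattern described above can be sketched as follows (Python; a list stands in for an IMAQ buffer, and the explicit copy is what avoids the shared-pointer problem):

```python
# Keep a pristine copy of the acquired image so each analysis script
# starts from the original rather than the other script's output.
original = [10, 200, 130]
backup = list(original)          # explicit copy, since the "wire" is a reference

original[0] = 255                # first script modifies the image in place
original[:] = backup             # restore before the second script runs
print(original)
```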
 
Since they are not directly related to the topic of this thread, I will start a new one. Thank you for your help. Image processing is definitely quite different and takes some getting used to.
 
Jeff


Using LabVIEW 7 Express
Message 9 of 11

Hello Jeff,

Glad to hear that you are making progress on the project.  Once you get past the image acquisition part, the processing is usually pretty straightforward in terms of behavior on the block diagram; the worst of it is bundling all sorts of clusters to feed into the VIs.  It can be difficult to see how the Vision VIs are processing the image, since for performance reasons the majority of them do not, by default, overlay the results of the algorithms onto the image the way the Vision Assistant does.  Whenever customers are having a difficult time navigating the different vision tools to accomplish some task, I usually point them to the Vision Assistant.

Anyway, your post will be entering my territory of support, so my team will check things out once you post.

Thanks!

Mike Torba

Message 10 of 11