09-22-2007 05:29 PM - edited 09-22-2007 05:29 PM
I would use a queue that is set to hold one image. Load the queue in one while loop, then read the queue in a second loop and perform the analysis.
Maybe you could even have several loops to analyze the images?
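Not LabVIEW, of course, but the shape of that producer/consumer pattern looks roughly like the Python sketch below; acquire_image and analyze are placeholder stubs for the camera grab and the analysis step, not real driver calls.

    import queue
    import threading
    import time

    def acquire_image():
        # Placeholder stub for the camera grab.
        time.sleep(0.01)
        return b"frame"

    def analyze(image):
        # Placeholder stub for the analysis (slower than acquisition).
        time.sleep(0.05)

    frame_queue = queue.Queue(maxsize=1)       # the queue set to hold one image

    def acquisition_loop():
        while True:
            frame_queue.put(acquire_image())   # blocks while the slot is full

    def analysis_loop():
        while True:
            analyze(frame_queue.get())         # waits for the next frame

    threading.Thread(target=acquisition_loop, daemon=True).start()
    threading.Thread(target=analysis_loop, daemon=True).start()
    time.sleep(1)                              # let the loops run briefly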
09-24-2007 12:02 PM
Hello everyone,
Thanks for posting up to this point, and I like all of the recommendations for this kind of situation. Processing images while acquiring them is actually a very common task that IMAQ customers face. There is more to image acquisition and processing in LabVIEW than meets the eye, and I would like to mention a few things.
Probably the biggest difference between IMAQ and other LabVIEW-related activities is that the image buffer datatype (the purple wire) being passed from one vision process to the next contains only a reference to the image in memory, not the actual pixel data. This was done purposefully to increase performance when passing the image from one point to the next, since images are typically very large compared to other datatypes used in LabVIEW. For this reason, the image reference does not need to be passed through a queue to share the image data between multiple loops; there are more efficient means of doing so.
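To make the reference idea concrete, here is a rough analogy in Python (this is not the IMAQ API, just an illustration of reference semantics):

    class ImageRef:
        # Toy stand-in for the IMAQ image handle (the purple wire).
        def __init__(self, pixels):
            self.pixels = pixels               # the (large) pixel data

    img_a = ImageRef(bytearray(1024 * 1024))   # one buffer in memory
    img_b = img_a                              # "branching the wire": no copy
    img_b.pixels[0] = 255                      # write through the second name
    print(img_a.pixels[0])                     # prints 255: same buffer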
Instead, consider using an image acquisition process called a Ring. A ring configures the IMAQ driver to acquire a sequence of images into memory automatically; you can then call the IMAQ Extract Buffer VI in your processing loop to pull an image from this buffer list into a processing buffer (the one you created with the IMAQ Create VI). When the IMAQ card reaches the end of the sequence (called a buffer list) in the driver-managed environment, it automatically starts over at the beginning of the list. The images are therefore held in memory until you need them for processing. For more information on the use of Rings, check out the following suggested links (a conceptual sketch follows them):
NI Developer Zone: Rings
NI Developer Zone: Ring Acquisitions
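For intuition only, a ring behaves roughly like the Python sketch below; driver_write and extract_buffer are illustrative placeholders, not the actual IMAQ calls.

    NUM_BUFFERS = 8
    buffer_list = [None] * NUM_BUFFERS         # the driver-managed buffer list

    def driver_write(frame_number, image):
        # The driver wraps to the start of the list when it reaches the end.
        buffer_list[frame_number % NUM_BUFFERS] = image

    def extract_buffer(frame_number):
        # Conceptually what IMAQ Extract Buffer does: hand one ring slot
        # over to your processing loop.
        return buffer_list[frame_number % NUM_BUFFERS]

    for n in range(20):                        # acquisition outruns processing
        driver_write(n, "image %d" % n)
    print(extract_buffer(19))                  # the newest frame is still there
    print(extract_buffer(5))                   # frame 5 was overwritten by 13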
Now, hopefully, with a more efficient implementation of image sharing between loops in place, your processing loop will be able to keep up with the acquisition loop. Otherwise, the ring will wrap around and overwrite an image in the buffer list that you have not yet processed, effectively losing data in your program. If your processing loop is simply taking too long, you will have to consider other options to improve processing performance: use ROIs to limit what you process, fine-tune the pattern matching algorithms, substitute slower algorithms with other methods, reduce the resolution of the images you acquire, or perhaps acquire all of the images first (using a sequence acquisition) and then run the processing loop afterwards.
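One simple safeguard, assuming (hypothetically) that your code can track a cumulative frame count for each buffer it extracts, as the sketch above does, is to watch for gaps:

    last_processed = -1

    def process(frame_number, image):
        # Warn when the ring wrapped past frames we never got to.
        global last_processed
        if frame_number > last_processed + 1:
            lost = frame_number - last_processed - 1
            print("warning: %d frame(s) overwritten before processing" % lost)
        last_processed = frame_number
        # ... run the actual analysis on image here ...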
I hope this gives you some direction. Working with images can be quite a different experience, and it is important to understand the acquisition tools you have available in order to get the most out of the driver. Please post back if you have any follow-up questions.
Regards,
Mike Torba
Vision Applications Engineer
National Instruments
09-24-2007 12:57 PM
Thanks for that Mike!
I will look more closely at the ring tutorials you posted links to. Maybe it would help if I gave a better description of my application. I am writing a program so that a technician can study the interference fringes created by a sloped optic. I am obtaining images of the fringes from a camera (currently using the grab function). I want the images to be displayed on the front panel as fast as they are acquired, so that the technician can watch them in real time during processing. At the same time, the program analyzes the images using an edited Vision script to determine how quickly, and in what direction, the fringes are moving over time, and displays the results on a waveform chart under the main image.
I don't think the processing can possibly keep up with the acquisition (at least not with my level of programming), so I intend to have the display show images as fast as possible while the processing function just processes whatever image was acquired last. To do this I implemented a stack (or a LIFO queue, as LabVIEW calls it) and limited the number of images it can hold to avoid overflowing memory. This means some frames are never processed, but that is not a problem since the velocity of the fringes is not great.
I was quite concerned about the cost of working with image files (since I was unaware that IMAQ images are in fact pointers), so I had been avoiding wire branching as much as possible. When you say that IMAQ datatypes are references to the image in memory, do you mean they are pointers in the sense of pointers in C?
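If it helps, my stack scheme behaves like the Python sketch below (the class and names are just mine for illustration, not anything from the Vision library):

    import collections
    import threading

    class LatestFrames:
        # Size-limited LIFO: old frames silently fall off the bottom,
        # and the analysis loop always pops the newest one.
        def __init__(self, maxlen=4):
            self._frames = collections.deque(maxlen=maxlen)
            self._lock = threading.Lock()

        def push(self, image):                 # called by the acquisition loop
            with self._lock:
                self._frames.append(image)

        def pop_newest(self):                  # called by the analysis loop
            with self._lock:
                return self._frames.pop() if self._frames else None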
If image wires really are C-style pointers, that could create errors for me, as I have two separate scripts running on the same image in parallel. I had assumed that a copy of the image was made and then processed in the second script, but if there are in fact two pointers to the same image, then the image is being processed by two scripts simultaneously. I suppose I can fix this by explicitly creating a copy of the image. On the other hand, my second script only exists so I can find the centroid of a small masked area of the image. Is it possible to set a temporary mask in a script to find a centroid, and then remove the mask so I am once again working with the whole image? That would eliminate the need for parallel scripts.
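In C-like terms, the hazard I am worried about, and the explicit-copy fix (something like copying into a second buffer from IMAQ Create, if I understand correctly), would look like this Python sketch with a bytearray standing in for the pixel buffer:

    original = bytearray(b"\x10" * 16)    # one image buffer in memory

    aliased = original                    # two names, one buffer:
    aliased[0] = 0                        # work here also changes original

    independent = bytearray(original)     # an explicit, independent copy
    independent[1] = 0                    # masking the copy...
    print(original[1])                    # prints 16: original untouched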
On a side note, I seem to have lost the ability to rate messages. There is no longer a set of radio buttons with Rate This Message at the bottom corner of posts. Any idea why this might be?
09-28-2007 07:47 AM
Hello Jeff,
Glad to hear that you are making progress on the project. Once you get past the image acquisition part, the processing is usually pretty straightforward in terms of behavior on the block diagram; the worst of it is bundling all sorts of clusters to feed into the VIs. It can be difficult to see how the Vision VIs are processing the image, since for performance reasons most of them do not, by default, overlay the results of their algorithms onto the image the way the Vision Assistant does. Whenever customers are having a difficult time navigating the different vision tools to accomplish a task, I usually point them to the Vision Assistant.
Anyway, your post will fall within my team's territory of support, so we will check things out once you post.
Thanks!
Mike Torba