Synchronized High Speed Image Acquisition System with PCIe-8233 NI Frame Grabber using IMAQdx & LabVIEW

I am working on a system architecture for synchronized high-speed image acquisition from multiple cameras connected to multiple PCs. Each Windows PC runs a LabVIEW vision application with four GigE Mako-419B cameras (2048×2048, 12-bit) connected to a PCIe-8233 (4-port) NI frame grabber.
This post collects everything I have learned so far in my preliminary research, along with the questions I am still seeking answers to.
For my application, three things are essential: lossless logging of all frames (every trigger pulse at ~20 Hz while in "HW Trigger" mode), always displaying the latest image, and synchronization to some sort of hardware trigger source during the critical time window. Grabbing at 20 FPS with the Packed 12-bit option at full ROI works out to roughly 1 Gbit/s per camera, right at the GigE maximum, but in theory it should be possible. CPU-intensive work such as logging, displaying, and image acquisition can all run in asynchronous parallel processes to minimize the effect of software latency. "HW Trigger" mode will last around two minutes, after which I can go back to "free run" mode at a lower FPS log rate while my parallel logging process transfers any images left over in the queue from RAM to disk.

It is my understanding that a ring acquisition with `IMAQdx Extract Image.vi` is the option that gives the highest throughput in the main vision acquisition loop: it does not perform a memory allocation and memory copy in LabVIEW the way `IMAQ Copy` and the high-level grab VIs do, but instead uses the buffer memory that the low-level NI-Vision DLL allocates. Right? The idea is to set up enough buffers and use the returned pointers to read directly from them, so that no image data overflow occurs within this two-minute window while saving to disk at full speed and displaying the latest frame. The `Low-Level Ring Parallel Workers` example seems like a good starting point. A quick bandwidth sanity check follows below.
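To sanity-check that bandwidth figure, here is the arithmetic as a small Python snippet (the numbers are just the camera parameters from this post; nothing here touches the actual driver):

```python
# Back-of-the-envelope bandwidth check for one camera in Packed 12-bit mode.
width, height = 2048, 2048
bytes_per_pixel = 12 / 8       # Packed 12-bit = 1.5 bytes per pixel
fps = 20

frame_bytes = width * height * bytes_per_pixel    # ~6.29 MB per frame
link_bits_per_s = frame_bytes * fps * 8           # payload only, no GigE Vision packet overhead

print(f"frame size : {frame_bytes / 1e6:.2f} MB")
print(f"throughput : {link_bits_per_s / 1e9:.3f} Gbit/s of a 1.000 Gbit/s GigE link")
```

That comes out to about 1.007 Gbit/s of pure payload, marginally above the raw link rate before any protocol overhead, which tells me the headroom at full ROI and exactly 20 FPS is essentially zero.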
It looks like I can use `IMAQdx Extract Image.vi` in `every` mode in the main vision loop to make sure no buffer is skipped and all sequential images are received from memory on a FIFO basis. It is my understanding that the pointer this VI returns may or may not then be the latest frame. To overcome this, can I also use a second instance of `IMAQdx Extract Image.vi` in `lastnew` mode to ensure the display loop gets the latest frame? The documentation says it locks the buffer until the requested image is received, and I am not sure whether using multiple instances of `IMAQdx Extract Image.vi` would create an issue there. The sketch below shows the loop structure I have in mind.
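Since LabVIEW block diagrams do not paste well into a forum post, here is the three-loop architecture as a Python-style sketch. The `imaqdx_extract_image` / `imaqdx_release_image` functions (and the disk/display helpers) are hypothetical stand-ins for the corresponding NI-IMAQdx VIs, not a real binding, so this is not runnable as-is; the buffer-locking question above is exactly the part this sketch glosses over.

```python
import queue
import threading

log_queue = queue.Queue()    # frames waiting for the disk; must drain after the 2-minute window
stop = threading.Event()

def acquisition_loop():
    """Producer: extract EVERY buffer in order so no frame is ever skipped."""
    next_buf = 0
    while not stop.is_set():
        ptr, actual = imaqdx_extract_image(next_buf, mode="every")  # hypothetical stand-in
        log_queue.put((ptr, actual))     # hand the pointer to the logger, no copy
        next_buf = actual + 1

def logging_loop():
    """Consumer 1: stream frames to disk at full speed, strictly FIFO."""
    while not stop.is_set() or not log_queue.empty():
        ptr, n = log_queue.get()
        write_frame_to_disk(ptr, n)      # hypothetical disk writer
        imaqdx_release_image(ptr)        # unlock the ring buffer for reuse

def display_loop():
    """Consumer 2: only ever wants the newest frame; dropping is fine here."""
    while not stop.is_set():
        ptr, _ = imaqdx_extract_image(-1, mode="last_new")  # the second Extract instance --
        update_display(ptr)                                 # does it fight the "every"
        imaqdx_release_image(ptr)                           # instance over buffer locks?

for loop in (acquisition_loop, logging_loop, display_loop):
    threading.Thread(target=loop, daemon=True).start()
```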
Is lossless acquisition that keeps up with the image retrieval rate achievable with such a LabVIEW architecture, provided I stay within the bandwidth specs? If it cannot be guaranteed, whether because we are running on a Windows platform or due to the nature of the NI Vision API: is it still possible for the frame grabber to drop images, overwrite a buffer, or skip an image write without the NI Vision LostBufferCount property detecting it and raising an error?
What is the advantage, if any, of the "IMAQdx Configure Acquisition.vi" and "IMAQdx Get Image2.vi" combination over a ring acquisition, in both free-run and HW-trigger mode? It seems to give more control over a LabVIEW-owned user image buffer, which I do not really need, and since it works on a copy basis it should be slower, right? My mental model of the two access patterns is sketched below.
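For comparison, here is how I understand the difference, again as hedged pseudocode (the function names are hypothetical stand-ins, and I may well be wrong about the exact copy semantics):

```python
# Ring + Extract: borrow the driver's buffer, zero copy (as I understand it).
ptr, n = imaqdx_extract_image(next_buf, mode="every")
process_in_place(ptr)             # valid only until the buffer is released/reused
imaqdx_release_image(ptr)

# Configure Acquisition + Get Image2: copy into a LabVIEW-owned image.
user_image = create_image()                     # allocated once, owned by my code
imaqdx_get_image2(user_image, buffer_number=n)  # driver copies ring buffer -> user_image
# user_image stays valid after the ring slot is overwritten,
# at the cost of one copy per frame.
```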
Any feedback or comments on my architecture plan are appreciated from those who have been there before 🙂

Thanks & Regards,

Message 1 of 2

We have multiple GigE cameras running at 30 fps, 640×480, and when an "Event" (measured by another device) triggers, we save an AVI of about 5 seconds, from one second before the Event to four seconds after.

When the Event occurs, we get the current Buffer Number from IMAQdx and calculate the Buffer Number we want as the "Start Buffer" (we subtract 30); the Make Video routine then starts getting buffers by Buffer Number, from the Start Buffer up to a computed End Buffer, as sketched below. When configuring the camera, we allow sufficient buffers for the pre-Event frames plus a few more seconds' worth (experiment to see how many buffers you need). We originally coded this in LabVIEW 2010 at 10 fps (I think), but the current version is in LabVIEW 2016.
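In sketch form, with the numbers from above (30 fps, one second before the Event to four seconds after); `get_current_buffer_number`, `get_buffer_by_number`, and `append_frame_to_avi` are hypothetical stand-ins for the IMAQdx property read, the fetch-by-Buffer-Number call, and the AVI writer:

```python
FPS = 30
PRE_EVENT_S, POST_EVENT_S = 1, 4     # save from 1 s before the Event to 4 s after

def make_video():
    current = get_current_buffer_number()        # read the moment the Event fires
    start_buffer = current - PRE_EVENT_S * FPS   # "we subtract 30"
    end_buffer = current + POST_EVENT_S * FPS    # computed End Buffer
    for n in range(start_buffer, end_buffer + 1):
        frame = get_buffer_by_number(n)   # the pre-Event buffers must still be in
        append_frame_to_avi(frame)        # the ring, hence sizing it generously
```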

Bob Schor

Message 2 of 2