07-29-2020 02:31 PM
For some time I have worked on developing a dual-camera code that allows me to change camera parameters in real time (e.g., exposure and gain) and store images on command. Ideally I'd like to store images directly in TIFF format, but I decided to store the images in binary to speed up the process and reduce data bottlenecks. The camera array (as configured) is 1952x800 pixels and the cameras are GigE format (Imperx B1921).
I posted my first attempt at this some time ago and have made many refinements based on comments from several people on the forum and additional research. The basic design approach uses a producer/consumer loop with channel streaming. Images are saved as a binary 1-D array in the consumer loop. The code, along with the subVI, is attached. The subVI contains the property node needed for controlling the camera, which keeps the code a little cleaner.
Based on my testing I am only able to download images from two cameras at 15 FPS, but I would like to get to 30 FPS. I recently swapped out a gigabit Ethernet switch for a USB 3.0 to dual Ethernet adapter to increase bandwidth by giving each Ethernet port a true one-gigabit link. Each camera produces data at about 750 Mbps when running at 30 FPS.
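As a sanity check on those numbers, here is a back-of-the-envelope bandwidth calculation (a sketch, not anything from the attached VI; the 16-bit-per-pixel assumption is mine, chosen because it reproduces the quoted ~750 Mbps figure):

```python
# Rough GigE bandwidth check. Assumption: 16-bit monochrome pixels,
# which is consistent with the ~750 Mbps per camera quoted above;
# an 8-bit camera would need half of this.
width, height = 1952, 800          # sensor array as configured
bytes_per_pixel = 2                # assumption: 16-bit mono
fps = 30                           # target frame rate

bytes_per_frame = width * height * bytes_per_pixel
mbps = bytes_per_frame * fps * 8 / 1e6   # megabits/second per camera

print(f"{bytes_per_frame / 1e6:.1f} MB/frame, {mbps:.0f} Mbps per camera")
```

This works out to about 3.1 MB/frame and ~750 Mbps per camera, before GigE Vision packet overhead, so each camera really does need its own dedicated gigabit link to sustain 30 FPS.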
From what I can see, my issue is how I am saving the data. I decided to save the images using the 1-D TDMS advanced streaming method. Even after modifying several settings I am unable to significantly increase the frame rate of the acquired images. All processing is done in separate code, so this code only handles real-time camera adjustments and image collection.
So one obvious question is: are there better ways to save data than the approach I am using? Can the producer/consumer loop be modified for more efficient processing? I even have a version of the code where the files are saved outside of the consumer loop, but I see no observable difference in frame rate. In that version I used a conditional indexing node to buffer the images, which passes the saved images to the TDMS write outside of the consumer loop once I hit the stop button.
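One way to answer "is the save step the bottleneck?" independently of LabVIEW is to time raw binary appends of frame-sized buffers to the same disk. This is just an illustrative sketch in Python (the frame size assumes 16-bit pixels, my assumption, not something from the attached code):

```python
import os
import tempfile
import time

# Disk-throughput probe: append frame-sized binary buffers and see how
# many "frames per second" the disk alone can sustain. If this is well
# above 2 cameras x 30 fps, the binary save is probably not the limit.
FRAME_BYTES = 1952 * 800 * 2      # assumption: 16-bit mono frames
N_FRAMES = 24                     # under a second of "video"
frame = bytes(FRAME_BYTES)        # dummy all-zero frame

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "frames.bin")
start = time.perf_counter()
with open(path, "ab", buffering=0) as f:
    for _ in range(N_FRAMES):
        f.write(frame)
    os.fsync(f.fileno())          # include the cost of hitting the disk
elapsed = time.perf_counter() - start

fps_capacity = N_FRAMES / elapsed
print(f"sustained {fps_capacity:.0f} frames/s "
      f"({fps_capacity * FRAME_BYTES / 1e6:.0f} MB/s)")
os.remove(path)
os.rmdir(tmpdir)
```

If a plain binary append comfortably exceeds 60 frames/s at full frame size, that points the finger at acquisition or transport rather than the file write.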
Any thoughts or recommendations are definitely appreciated,
Stay safe!
07-29-2020 03:32 PM
Seems to me that we were saving 640x480 RGB images from 9 Axis cameras at 30 fps. Older code saved 640x400 images from up to 24 cameras at 10 fps (turns out the limiting factor, which I foolishly forgot to calculate, was the speed of our Ethernet connection to the Cameras). We saved data from each camera in an AVI file. We were saving "episodic" events lasting 5-15 seconds, so it is unlikely that more than 4 cameras were saving (all were running!) at the same time ...
Bob Schor
07-29-2020 03:46 PM
Bob,
Thanks for responding.... Many of the design ideas I tried to implement came from several of your prior posts, especially using channel streaming to acquire images. I am trying to stay away from AVI since the codecs it is typically used with compress the images, and I want uncompressed images, thus the need for TIFF format. I guess I could try saving these images as AVI to see if I can get the speeds I want.... it would make the code a whole lot simpler.....
Do you see any issues in how my loops are constructed? I was always under the impression that the consumer loop can take data from the producer loop and run at a different rate. My recent concern is: does using a channel stream force both the producer and consumer loops to run at the same speed? And what happens if you have two channel streams running at the same time in a single loop?
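The rate question above has a generic producer/consumer answer that can be sketched outside LabVIEW (this is plain Python with a bounded queue, an analogy only, not a claim about how channel wires are implemented): the loops run at independent rates only while the buffer has room; once it fills, the producer is throttled to the consumer's pace.

```python
import queue
import threading
import time

# Producer/consumer with a bounded buffer. The producer has no delay of
# its own; the consumer deliberately takes ~10 ms per item. Once the
# queue fills, q.put() blocks, so later puts arrive at the consumer's
# rate -- the loops are only decoupled while there is buffer room.
q = queue.Queue(maxsize=4)
N = 20
put_times = []

def producer():
    for i in range(N):
        q.put(i)                      # blocks when the buffer is full
        put_times.append(time.perf_counter())
    q.put(None)                       # sentinel: no more frames

def consumer():
    while True:
        item = q.get()
        if item is None:
            break
        time.sleep(0.01)              # simulate a slow save step

t_p = threading.Thread(target=producer)
t_c = threading.Thread(target=consumer)
t_p.start(); t_c.start()
t_p.join(); t_c.join()

# Later puts are spaced ~10 ms apart even though the producer never
# sleeps: the full queue imposed the consumer's pace on it.
gaps = [b - a for a, b in zip(put_times, put_times[1:])]
print(f"max gap between puts: {max(gaps) * 1000:.1f} ms")
```

The practical upshot for any finite-buffer stream: a consumer that can't keep up will eventually stall the producer (or force lossy/overwrite behavior, depending on the buffer policy).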
Another idea I am considering is using a frame grabber.... maybe that would reduce the processing load on my laptop?
Jay
07-29-2020 06:07 PM
Bob,
Just to be sure, I replaced the binary save with a plain AVI save. It turns out I get the same number of frames saving with AVI as with the advanced TDMS binary approach I used in my original post.... interesting.... I am maxing out at 15-16 fps even when I try to run at 30 fps.
I am starting to think there is a camera setting or timing control that needs to be reset in NI MAX.... I tried messing with jumbo frames and bandwidth settings to no avail...
Jay
07-30-2020 08:02 AM
Jay,
If we were working together looking at the same VI (and I was "driving", as was the case about 6-7 years ago when a colleague who was monitoring mice for a behavioral study called me for some LabVIEW consultation), I'd say what I told him -- "Start all over, make sure your Block Diagram takes no more than one laptop screen (say, at most 1024 x 768 pixels), and use a minimum of 5 sub-VI calls to hide the messy details". At that point, I'd never worked with IMAQ or IMAQdx, so that was a bit of a learning experience (especially since NI Vision is "rather different" from DAQmx, and much less well documented -- we did a lot of "experimentation", but that's because we're Scientists, not Engineers) (not that Engineers shouldn't experiment, especially if they can use LabVIEW to do "simulations" and write Test Code to "see how this Weird Feature works").
I don't have the time to rewrite your Block Diagram, straightening the wires, getting rid of excess white space, and compacting things so I can "see the forest for the trees". You might consider writing a little Test routine that generates some changing Images (like an alternating Checkerboard, changing Black to White (or Red, or Blue) at each frame) and creating an AVI from these "images". This will let you experiment with Images of a known size without having to worry about a Camera, and study (and make measurements on) various image-saving routines. You might find that a simple Binary Write of the Image is fastest, or that one of the standard Video encoding schemes gives you the best blend of speed and accuracy in preserving the features of the image. Do take into account the expected Bytes/second, remembering that standard 8-bit Gray Scale takes one byte/pixel, 16-bit Gray takes 2, and RGB Color (as IMAQ stores it) takes 4.
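The checkerboard test-pattern idea above could be sketched outside LabVIEW like this (Python, standard library only; the dimensions and tile size are illustrative, not from the thread):

```python
# Generate 8-bit grayscale checkerboard frames that invert on every
# frame, so an image-saving routine can be timed against known,
# changing data without a camera attached.
TILE = 8                              # checker tile size in pixels

def checkerboard(width, height, frame_index):
    """One 8-bit gray frame as raw bytes; pattern inverts each frame."""
    rows = []
    for y in range(height):
        rows.append(bytes(
            255 if ((x // TILE + y // TILE + frame_index) % 2 == 0) else 0
            for x in range(width)
        ))
    return b"".join(rows)

# Small demo size; at a full 1952 x 800 this pure-Python loop is slow,
# so in practice you would pre-compute the two frames and alternate.
f0 = checkerboard(64, 48, 0)
f1 = checkerboard(64, 48, 1)
print(len(f0), "bytes/frame; frames are inverses:",
      all(a + b == 255 for a, b in zip(f0, f1)))
```

Feeding frames like these to each candidate save routine (raw binary, TDMS, AVI) and timing the writes isolates the storage cost from the acquisition cost, which is exactly the comparison the thread is after.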
Bob Schor