How to Optimize Camera Speed (fps)


Thank you for your advice, Bob, and others!

I have fixed the false loop-stop issue, and it did improve the code's save time a bit.

 

I have changed some of the code to use the low-level acquisition and am getting some interesting results that I am not sure how to read. I set the buffer number to 20 using the IMAQdx Configure function and still have the frame-rate indicator running. The frame-rate indicator reads zero, but after 2.3 seconds elapsed I stopped the producer loop and there were 22 images saved in the folder. I let the consumer loop continue to run to 11.47 seconds, and once I stopped it the folder was populated with 243 images. The saved images are 175 KB each, and the image details say they are 2448x2048.

 

I ran a second trial, stopping the producer at 2.08 s, and 18 images were saved at that point. The consumer ran for another 4.5 s, and by the time I stopped it, 84 images in total had been saved.

The camera should only be operating at 35 fps with its current settings.

I am unsure how to interpret this data, because 84/2.08 = 40.38, while dividing by the consumer time gives 84/4.58 = 18.34. The way I'd like to read it is that the consumer is saving the incoming images (or buffer data) at about 18 frames per second, and once the producer stopped, the remaining buffered images were flushed to disk, which led to the 84 total images. But if the camera only ran for 2.08 seconds, it was capturing at 40.38 fps. Does this mean the camera is operating much faster than expected, or are there duplicate images? The consumer may run a bit behind the producer, but it seems to catch up pretty quickly once the code stops altogether.
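To make the arithmetic explicit, here is a quick back-of-the-envelope check (plain Python, using only the trial numbers quoted above):

```python
def effective_fps(frames_saved, seconds):
    """Frames saved divided by elapsed time."""
    return frames_saved / seconds

# Rate if all 84 frames were grabbed while the producer ran (2.08 s):
producer_rate = effective_fps(84, 2.08)
# Rate if the saving was spread over the consumer's run time (4.58 s):
consumer_rate = effective_fps(84, 4.58)

print(round(producer_rate, 2))  # 40.38 -- above the camera's 35 fps, hinting at duplicates
print(round(consumer_rate, 2))  # 18.34 -- the disk-write rate the consumer actually sustained
```

If the camera really tops out at 35 fps, 40.38 fps worth of saved frames in the producer window can only come from frames being read more than once.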

The images are currently being saved as JPGs, so that's where some of the compression comes from.

 

If the camera is actually operating faster, then cool, but I am mostly worried about duplicate images coming through now, if the buffer is wrapping back to the zeroth image and being read through again. I've attached the current code to this reply as well; maybe you will spot something I am missing more easily.
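One way to confirm or rule out duplication offline is to hash the saved files and count repeats. A Python sketch (the folder path and `.jpg` extension are assumptions about how the frames were saved; this check runs outside LabVIEW):

```python
import hashlib
from collections import Counter
from pathlib import Path

def count_duplicates(folder):
    """Hash every .jpg in `folder`; return {hash: copies} for any hash seen more than once."""
    hashes = Counter(
        hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(folder).glob("*.jpg")
    )
    return {h: n for h, n in hashes.items() if n > 1}

# e.g. count_duplicates("C:/captures/run2") -> non-empty dict means repeated frames
```

Byte-identical JPGs almost certainly mean the same buffer was fetched twice; genuinely re-captured frames of a static scene would still differ slightly in sensor noise.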

 

Also, it makes sense not to do on-line image analysis if I'm wanting to process the images later. A laser sensor may be a good thing to look into: once it detects a particle, it would start the LabVIEW code for x amount of seconds, and then LabVIEW would turn off (go to sleep) until the laser is triggered again. Do you know if LabVIEW is capable of having an external trigger like this to start the code?
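The trigger-and-sleep flow described above can be sketched in ordinary code (Python pseudocode here, since LabVIEW is graphical; `sensor_fired` is a hypothetical placeholder for however the laser sensor would actually be read, e.g. a DAQ line):

```python
import time

def triggered_capture(sensor_fired, capture_frame, duration_s):
    """Idle until sensor_fired() returns True, then grab frames for duration_s seconds."""
    while not sensor_fired():           # "asleep" until the laser detects a particle
        time.sleep(0.001)
    frames = 0
    t0 = time.monotonic()
    while time.monotonic() - t0 < duration_s:
        capture_frame()                 # stand-in for the camera grab-and-save step
        frames += 1
    return frames
```

The outer `while` is the "sleep" state; wrapping the whole thing in another loop would re-arm the trigger after each capture window.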

 

Message 21 of 24

Confirmed: there is definitely some image duplication. I ran a needle in front of the camera and should be able to see it move upward, but the first image is duplicated several times, and after a few seconds the sequence goes back to a blank background of light refraction from the light source.

 

I've captured 10-micron particles with this camera before, so I know a needle should be like spotting the Hulk...

 

I did change the initial settings of the code over to the low-level acquisition tool; could that have an impact on it?

 

Thank you for taking the time to help.

Message 22 of 24

@JohnnyDoe771 wrote:

Do you know if Labview is capable of having an external trigger like this to start the code?


If it is not possible, I wish you'd told me six years earlier when I started doing this!  [In other words, of course it is possible, otherwise I wouldn't have described "how I do it"].

 

I'm not exactly using a "hardware trigger signal" to start Video acquisition.  Here's the situation:

  • I have three CMH loops running in parallel. One reads data from an instrument sending readings over a VISA channel at 10/sec. This is my "detector" -- I'm looking for a positive deviation in the incoming signal over a certain amount, which gets me the "start of the Event" within 0.1 s of the Event's onset.
  • I have another CMH that runs a Camera taking frames at 30 fps. The responsibility of this CMH is to configure the camera and to display the Image on its Front Panel if the User is interested in seeing it (note that this takes time away from doing other things). Remember the discussion about how many buffers to configure? If I want to start my video 1 second before the Event occurs, I use about 90 buffers (for a frame rate of F and an Event time of T seconds, I allocate F*T Buffers and add an extra 50% "for safety").
  • The third CMH handles creation of the AVI Video. It is started by the Camera CMH, which passes it a reference to the Camera (so the Video routine can query the Camera when necessary to get its current Buffer number). When the Sensor (first CMH) routine detects an Event, it calls this CMH with Open AVI (which gets the current Buffer and opens the AVI). The Video CMH then calls Save Frame as often as needed to save all the frames constituting the AVI (it obviously needs a "stop" criterion, which could be a number of frames to save, or "until a different Event Signal is received").
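The buffer-sizing rule in the second bullet can be written as a one-line formula. A sketch using this thread's 35 fps camera as the example (the 50% margin is the suggested safety factor):

```python
import math

def buffers_needed(fps, pre_event_seconds, safety_margin=0.5):
    """F*T buffers plus the extra 50% safety margin, rounded up."""
    return math.ceil(fps * pre_event_seconds * (1 + safety_margin))

print(buffers_needed(35, 1))  # 53 -- one second of pre-Event video at 35 fps, with margin
```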

At the present time, I cannot provide the code, itself (among other things, there are considerable "other complications" that would make it extremely challenging to take the code apart and make it adaptable for a new situation).  However, if you understand the QMH Design Pattern that ships with LabVIEW, or know how to construct a CMH (Channel Message Handler, essentially a QMH with Messenger Channels replacing Queues), I hope the above description will get you started.
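For readers who haven't met the pattern, the message-handler idea above boils down to parallel loops exchanging commands over a channel. A minimal Python stand-in (a `queue.Queue` substitutes for the Messenger Channel; the handler bodies are placeholders, not the actual VI logic):

```python
import queue
import threading

def video_handler(msgs):
    """Message loop standing in for the Video CMH: wait for a command, act on it."""
    saved = 0
    while True:
        msg = msgs.get()
        if msg == "Open AVI":
            saved = 0            # would note the current buffer and open the AVI file
        elif msg == "Save Frame":
            saved += 1           # would pull one buffered frame and append it to the AVI
        elif msg == "Stop":
            return saved         # would close the AVI

msgs = queue.Queue()
result = []
worker = threading.Thread(target=lambda: result.append(video_handler(msgs)))
worker.start()
for m in ["Open AVI", "Save Frame", "Save Frame", "Stop"]:
    msgs.put(m)                  # another loop (the "detector") would normally send these
worker.join()
print(result[0])  # 2 frames "saved"
```

In LabVIEW the same structure is drawn graphically, with a While Loop around a Case Structure and a Messenger Channel (or Queue) feeding it.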

 

Bob Schor

Message 23 of 24

This is an interesting way to reduce the complexity of the programming needed for running and acquiring data from a camera. Can you use this concept for two cameras running at the same time? I saw your diagram for the dual webcams and tried to implement it for two Ethernet cameras (Imperx, 1952x1112) running at 20 Hz. I seem to be running into speed issues where the software cannot keep up with the camera frame rate. I want to see if I can get better performance using the channel-wire concept that you described. A few things I am trying to do in my code: adjust camera settings and change frame rates in real time prior to saving frames, and save frames on command to reduce overhead. Additionally, I am storing images as an array in binary to reduce overhead, since the TIFF VI is slow and bogs down the software.
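The binary-storage idea in the paragraph above can be sketched as a simple length-prefixed log (Python stand-in for the LabVIEW binary-file VIs; this assumes 8-bit mono pixels, so each frame body is width*height bytes):

```python
import struct

def append_frame(f, frame_bytes, width, height):
    """Prefix each raw frame with its dimensions so the log can be reread later."""
    f.write(struct.pack("<II", width, height))
    f.write(frame_bytes)

def read_frames(path):
    """Yield (width, height, pixel_bytes) tuples back out of the binary log."""
    with open(path, "rb") as f:
        while header := f.read(8):
            w, h = struct.unpack("<II", header)
            yield w, h, f.read(w * h)
```

Deferring any image-format encoding (TIFF, JPEG) to an offline pass like this keeps the per-frame cost in the acquisition loop down to two sequential writes.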

 

Thanks for posting this.....

 

Jay

Message 24 of 24