08-21-2019 01:39 PM
Thank you for your advice, Bob, and everyone else!
I have fixed the false loop-stop issue, and it improved the save time a bit.
I have changed some of the code to use the low-level acquisition and am getting some interesting results that I am not sure how to interpret. I set the buffer count to 20 using the IMAQdx Configure function and still have the frame rate indicator running. The frame rate indicator reads zero, but when I stopped the producer loop after 2.3 seconds there were 22 images saved in the folder. I let the consumer loop continue running until 11.47 seconds, and once I stopped it the folder was populated with 243 images. The saved images are 175 KB each, and the image details say they are 2448x2048.
I ran a second trial, stopping the producer at 2.08 s, and 18 images were saved. The consumer ran for 4.5 s, and the folder held 84 images once I stopped it.
The camera should only run at 35 fps with its current settings.
I am unsure how to read this data, because 84/2.08 = 40.38 and, using the consumer time, 84/4.58 = 18.34. The way I would like to read it is that the camera is sending through images (or buffer data) that save at about 18 frames per second, which is why the folder held 18 images within the first second; then, after the producer loop ended, the remaining buffered images saved, leading to the 84 total. But if the camera only ran for 2 seconds, it was taking images at 40.38 fps. Does this mean the camera is operating much faster than expected, or that there are duplicate images? The consumer may run a bit behind the producer, but it seems to catch up pretty quickly once the code stops altogether.
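A quick sanity check on that arithmetic (a plain Python sketch, not the LabVIEW code; the numbers are the ones quoted above, and 35 fps is the camera's configured rate):

```python
# Frame-rate arithmetic from the second trial, as quoted in the post.
producer_time_s = 2.08   # time the producer (acquisition) loop ran
consumer_time_s = 4.58   # time the consumer (save) loop ran
images_saved = 84        # total images in the folder afterward
camera_fps = 35.0        # camera's configured frame rate

acquisition_fps = images_saved / producer_time_s   # apparent acquisition rate
save_fps = images_saved / consumer_time_s          # apparent save rate

print(f"apparent acquisition rate: {acquisition_fps:.2f} fps")
print(f"apparent save rate:        {save_fps:.2f} fps")

# The camera cannot produce frames faster than its configured rate,
# so an apparent acquisition rate above 35 fps means some buffers
# were almost certainly read (and saved) more than once.
if acquisition_fps > camera_fps:
    print("more images than the camera could have produced -> likely duplicates")
```

On these numbers the apparent acquisition rate (about 40.4 fps) exceeds the 35 fps the camera can deliver, which points toward duplicates rather than a faster camera.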
The images are currently saving as JPEGs, so that's where some of the compression comes from.
If the camera is performing better than expected, then cool, but I am mostly worried about duplicate images coming through now, if the buffer is wrapping back to the zeroth image and running through again. I've attached the current code to this reply as well; perhaps you will spot something I am missing.
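The duplication risk usually comes from re-reading the same ring buffer. The consumer has to track the last cumulative buffer number it extracted and always ask for the next one; here is a hypothetical Python sketch of that bookkeeping (the names are illustrative, not the IMAQdx API):

```python
class RingBufferReader:
    """Illustrative bookkeeping for reading from a ring of N buffers.

    Always requests the next cumulative buffer number, so a slow
    consumer never re-reads a frame, and a badly lagging consumer
    jumps ahead of overwritten buffers instead of reading stale data.
    """

    def __init__(self, num_buffers):
        self.num_buffers = num_buffers
        self.next_to_read = 0   # cumulative buffer number to request next

    def extract(self, frames_acquired_so_far):
        """Return the cumulative buffer number to extract,
        or None if no new frame is available yet."""
        if frames_acquired_so_far <= self.next_to_read:
            return None  # consumer is caught up; nothing new to read
        oldest_valid = frames_acquired_so_far - self.num_buffers
        # If we fell more than one ring behind, the oldest frames were
        # overwritten; skip ahead rather than read duplicated/garbage data.
        requested = max(self.next_to_read, oldest_valid)
        self.next_to_read = requested + 1
        return requested
```

The key point is that extracting "buffer 0" repeatedly (or any fixed buffer number) yields the same frame over and over; extracting a monotonically increasing buffer number does not.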
Also, it makes sense not to do online image analysis if I want to process the images later. A laser sensor may be a good thing to look into: once it detects a particle, it would start the LabVIEW code for x seconds, then LabVIEW would go back to sleep until the laser triggers again. Do you know if LabVIEW is capable of using an external trigger like this to start the code?
08-21-2019 04:10 PM
Confirmed: there is definitely some image duplication. I ran a needle in front of the camera and should be able to see it move upward, but the first image is duplicated several times, and after a few seconds it goes back to a blank background of light refraction from the light source.
I've captured 10-micron particles with this camera before, so a needle should be like spotting the Hulk...
I did change the initial settings of the code to the low-level acquisition tool; could that have an impact on it?
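One way to confirm duplication offline is to hash the saved files: repeated saves of the same acquisition buffer produce byte-identical JPEGs. A minimal Python sketch (the folder layout and `.jpg` naming are assumptions about the saved output, not part of the LabVIEW code):

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicate_images(folder):
    """Group saved image files by content hash; any group with more
    than one filename is a set of byte-identical duplicate frames."""
    groups = defaultdict(list)
    for path in sorted(Path(folder).glob("*.jpg")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        groups[digest].append(path.name)
    return {h: names for h, names in groups.items() if len(names) > 1}
```

Running this over a trial's output folder gives a count of exactly how many frames were saved more than once, which makes the fps arithmetic above unambiguous.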
Thank you for taking the time to help.
08-21-2019 04:38 PM
@JohnnyDoe771 wrote:
Do you know if Labview is capable of having an external trigger like this to start the code?
If it is not possible, I wish you'd told me six years earlier when I started doing this! [In other words, of course it is possible, otherwise I wouldn't have described "how I do it"].
I'm not exactly using a "hardware trigger signal" to start Video acquisition. Here's the situation:
At the present time, I cannot provide the code itself (among other things, there are considerable complications that would make it extremely challenging to take the code apart and adapt it to a new situation). However, if you understand the QMH design pattern that ships with LabVIEW, or know how to construct a CMH (Channel Message Handler, essentially a QMH with Messenger Channels replacing Queues), I hope the above description will get you started.
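For readers unfamiliar with the pattern: a QMH boils down to a loop that dequeues (name, data) messages and dispatches on the name. A minimal text-language analogue in Python (the message names here are invented for illustration; LabVIEW's QMH is graphical, and a CMH swaps the queue for a Messenger Channel):

```python
import queue

def message_handler(msgs):
    """Minimal queued-message-handler loop: dequeue (name, data)
    pairs and dispatch on the name until an 'Exit' message arrives.
    Returns a log of handled messages, purely for illustration."""
    log = []
    while True:
        name, data = msgs.get()
        if name == "Exit":
            log.append("exiting")
            break
        elif name == "Start Acquisition":
            log.append(f"start acquisition on {data}")
        elif name == "Save Frame":
            log.append(f"save frame {data}")
        else:
            log.append(f"unhandled: {name}")
    return log

# Another loop (the "producer" of messages) enqueues commands:
q = queue.Queue()
q.put(("Start Acquisition", "camera 0"))
q.put(("Exit", None))
message_handler(q)
```

The point of the pattern is that the sender and handler run as independent loops coupled only by the queue (or channel), which is exactly what decouples acquisition from saving in a producer/consumer camera design.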
Bob Schor
05-19-2020 03:09 PM
This is an interesting way to reduce the complexity of the programming needed to run and acquire data from a camera. Can you use this concept for two cameras running at the same time? I saw your diagram for the dual webcams and tried to implement it for two Ethernet cameras (Imperx, 1952x1112) running at 20 Hz. I seem to be running into speed issues where the software cannot keep up with the camera frame rate, so I want to see if I can get better performance using the channel-wire concept you described. A few things I am trying to do in my code: adjust camera settings and change frame rates in real time prior to saving frames, save frames on command to reduce overhead, and store images as a binary array, since the TIFF VI is slow and bogs down the software.
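The raw-binary approach can be sketched in text form: append each frame to one file with a small per-frame header, and defer TIFF encoding to post-processing. A stdlib-Python sketch of the file layout (the header fields and function names are assumptions for illustration, not Jay's actual VIs):

```python
import struct

def append_frame(f, frame_bytes, width, height):
    """Append one raw frame to an open binary file, preceded by a
    12-byte header (width, height, byte count, little-endian) so the
    frames can be recovered later without fixed-size assumptions."""
    f.write(struct.pack("<III", width, height, len(frame_bytes)))
    f.write(frame_bytes)

def read_frames(path):
    """Read back all (width, height, bytes) frames written by append_frame."""
    frames = []
    with open(path, "rb") as f:
        while True:
            header = f.read(12)
            if len(header) < 12:
                break
            w, h, n = struct.unpack("<III", header)
            frames.append((w, h, f.read(n)))
    return frames
```

Sequential appends like this keep the disk write per frame cheap and predictable, which is why it outruns per-frame TIFF encoding during acquisition.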
Thanks for posting this!
Jay