LabVIEW


Image Out from Vision Assistant delay

Hello community of NI forums,

 

I have a question about how I can optimize my code for acquiring a live image from a webcam, processing it with Vision Assistant, and then plotting the values of interest on an XY Graph. My application is meant for real-time processing. So far, the minimum delay I was able to obtain was 2 seconds (found as dt in the code), and the Image Out from the Vision Assistant on the front panel shows only a white screen, I suspect because the image is too big, or because of something I am not aware of. I need the Image Out to be as close to real time as possible; my aim is a delay of at most 1 second.

I attach the code I am using,

and some extra information about what data I get from the Webcam:

 

Picture properties obtained from the webcam:

-2304x1536 0.15X 32-Bit RGB image 79,73,57 (0,0)

Picture properties obtained from the Vision Assistant after processing:

-1851x975 0.21X 8-Bit image 1 (0,0)

 

Hopefully I have provided enough information about what I am doing, what I need to do, and what my problem is in getting there.

Thanks in advance.

 

 

 

Message 1 of 6

First, get rid of the stacked sequence structure. Second, do not use the Vision Assistant VI.

Benoit

Message 2 of 6

Could you explain a bit the motivation behind your answers? I am quite new (a novice) to LabVIEW.

I barely have one month left on this project, and the script was written with the help of an advisor who is also a novice. Is it really recommended to try other image processing methods (Real Image > Grayscale > Blob Detection > Threshold Controller > Particle Analysis (pixel area)) rather than the Vision Assistant?
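For reference, the processing chain listed above (grayscale > threshold > blob/particle analysis) can be sketched in plain Python, since LabVIEW is graphical and cannot be quoted as text. This is only an illustrative toy on a hand-made 6x8 "frame", not the poster's actual VI; the function names and threshold level are invented for the example.

```python
from collections import deque

def threshold(gray, level):
    """Binary image: 1 where the pixel is at or above the threshold level."""
    return [[1 if p >= level else 0 for p in row] for row in gray]

def particle_areas(binary):
    """Label 4-connected blobs and return the pixel area of each one (sorted)."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    areas = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                area, queue = 0, deque([(y, x)])
                seen[y][x] = True
                while queue:  # breadth-first flood fill over one blob
                    cy, cx = queue.popleft()
                    area += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                areas.append(area)
    return sorted(areas)

# Toy grayscale "frame" with two bright blobs on a dark background.
gray = [
    [0,   0,   0, 0, 0,   0,   0,   0],
    [0, 200, 200, 0, 0,   0,   0,   0],
    [0, 200, 200, 0, 0, 180, 180, 180],
    [0,   0,   0, 0, 0, 180, 180, 180],
    [0,   0,   0, 0, 0,   0,   0,   0],
]
print(particle_areas(threshold(gray, 128)))  # [4, 6]
```

Vision Assistant performs these same steps internally; building them yourself just makes each stage (and its cost) visible.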

 

About the stacked sequence: now that I have removed it, I get no image out from the camera, and my graph no longer plots one point after another, so I cannot build a report of the process.

Message 3 of 6

First of all, a novice should not be given one month to deliver a working project on face recognition. In any language, just learning the basics will take a month at minimum.

But here we are, so let's try and not give up.

 

First, use the error wire to synchronize your steps. (LabVIEW uses dataflow, which means a function cannot execute until all the data on its input wires has arrived.)

 

Second, what tools do you have for your image analysis? Is it the NI library (which needs a special license)? Are you using an external .dll?

 

Benoit

 

 

Message 4 of 6

Hi,

I know that it takes time to learn how LabVIEW dataflow works.

 

For the first point, I was planning to snap an initial image (accounting for noise/error in the image) and then subtract the values measured there (blob detection) from the actual image processing, so the error is removed (maybe not the smartest or best way to do this). I found it hard to learn image processing without the Vision Assistant (which is apparently simpler), even if that means I cannot get the real-time image I hoped for.
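The reference-snap idea above amounts to a pixel-wise subtraction of a noise-only frame from each live frame before measuring. A minimal Python sketch, with invented toy values (not the poster's data):

```python
def subtract_reference(frame, reference):
    """Pixel-wise subtraction of a reference (noise) snap, clipped at zero."""
    return [[max(0, f - r) for f, r in zip(frow, rrow)]
            for frow, rrow in zip(frame, reference)]

reference = [[10, 10], [10, 10]]   # snap of the empty scene (sensor noise)
frame     = [[10, 210], [10, 10]]  # live frame: one bright pixel plus noise
print(subtract_reference(frame, reference))  # [[0, 200], [0, 0]]
```

Subtracting before thresholding, rather than subtracting measured blob values afterwards, keeps fixed artifacts from ever being counted as particles.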

 

Second, I am working for a company, and these are the licenses included in the package:

- LabVIEW Application Builder 2018
- LabVIEW Control Design and Simulation Module 2018
- LabVIEW Database Connectivity Toolkit 2018
- LabVIEW Digital Filter Design Toolkit 2018
- LabVIEW Report Generation Toolkit for Microsoft Office 2018
- LabVIEW 2018

 

I am not using an external .dll. I am just using ordinary files to momentarily save the latest image retrieved, then calling the most recent image so it can be processed in the Vision Assistant.

 

 

Message 5 of 6

Any extra comments on this? I found a way to obtain real-time image processing using continuous acquisition in the Vision Acquisition step, more precisely by always taking the most recent picture. It actually runs in real time, until I play around with the threshold values for the binary image in the Vision Assistant. My thought is that this requires more CPU time to process the current data, and the display then lags a few seconds behind.
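The "always take the most recent picture" behavior described above can be modeled as a one-slot frame buffer that silently overwrites unprocessed frames, so a slow processing step never accumulates a backlog (which is exactly what causes the growing lag). A hedged Python sketch, with frames stood in by integer IDs rather than real camera images:

```python
from collections import deque

latest = deque(maxlen=1)  # one-slot frame buffer: newest frame only

def camera_produces(frame_id):
    """Acquisition side: overwrites any frame not yet processed."""
    latest.append(frame_id)

def slow_processing_step():
    """Processing side: takes only the newest frame, or None if empty."""
    return latest.pop() if latest else None

# The camera delivers frames 1..5 while processing keeps up only once:
for frame_id in range(1, 6):
    camera_produces(frame_id)
processed = slow_processing_step()  # frames 1-4 were dropped, not queued
print(processed)  # 5
```

A queue without `maxlen=1` would instead buffer every frame and fall progressively further behind real time, matching the lag the post describes when processing slows down.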

 

I am going to program offline to see if I can build good image processing, still using the Vision Assistant, from a library of pictures recorded from the real-time process. Then I will bring the offline program back to real-time vision acquisition and see if it gets any better.

 

For now, I did manage to get a real-time picture (essentially zero delay: your hand moves along with the processed Image Out). But as soon as I adjust the threshold values for the binary image, the delay starts. Maybe my program is not running efficiently, or the CPU takes ages to process the data; maybe there are other reasons I am not aware of.

Message 6 of 6