10-15-2020 11:13 AM
I am trying to create an object detection program that detects objects in a camera video as it runs autonomously, without having to draw a bounding box beforehand (as in the tracking example) or upload an image of the object to be tracked first.
I have tried thresholding each incoming frame and then placing a bounding box around each particle based on the particle report as the video runs, but this runs extremely slowly or crashes LabVIEW, especially if the video has mixed gray intensities.
I am now trying Python, running an OpenCV background subtraction method on the frames captured in LabVIEW and sent to the Python script. I have also looked at TensorFlow object detection, but it does not seem to work well with video for me yet.
Is there an easier way of doing this, where LabVIEW takes a video and detects any movement without the object being predetermined?
10-30-2020 12:47 AM
It is not very easy to propose some kind of universal method.
The most obvious thing to try is to subtract each image from the previous one or from a reference image. I guess this is what you were trying to do with OpenCV background subtraction, but I'm pretty sure you could do the same in LabVIEW. You could reduce the noise (and hence the number of spurious particles) by applying a low-pass filter before the subtraction, then applying proper thresholding and binary morphology.
Another possibility is to use the IMAQ Block Statistics VI to look for blocks in the image whose histogram statistics change significantly.
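The block-statistics idea can be illustrated in plain NumPy (this is only a rough equivalent of what IMAQ Block Statistics computes, with a made-up block size and tolerance; the LabVIEW VI is the authoritative reference):

```python
import numpy as np

def block_stats(gray, block=32):
    """Mean and std of each block x block tile; the image is cropped to a
    multiple of the block size."""
    h = gray.shape[0] // block * block
    w = gray.shape[1] // block * block
    tiles = gray[:h, :w].reshape(h // block, block, w // block, block)
    tiles = tiles.swapaxes(1, 2)  # shape: (rows, cols, block, block)
    return tiles.mean(axis=(2, 3)), tiles.std(axis=(2, 3))

def changed_blocks(prev_gray, gray, block=32, mean_tol=10.0):
    """Row/column indices of blocks whose mean intensity shifted by more
    than mean_tol between the two frames."""
    prev_mean, _ = block_stats(prev_gray, block)
    mean, _ = block_stats(gray, block)
    return np.argwhere(np.abs(mean - prev_mean) > mean_tol)
```

Because only per-block statistics are compared rather than every pixel, this kind of check can be much cheaper per frame than full-image particle analysis.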
Then there are also motion-estimation tools based on optical flow or feature correspondence, but they are usually difficult to set up.
Sami