LabVIEW


Color detection

Hello!

I have a fairly simple question: how do I make the program work? 🤔

 

Task: detect the position of a rat in a video.
We tag the animals with colored tags and look for those tags in the video.
I've attached a very simplified program. It contains only the search algorithm, without the rest of the program (database and reports).

In general, the idea is this: threshold the frame, then find the biggest spot.
The problem is that the algorithm sometimes responds to the cage or to the white walls, especially when someone walks near the cage and the color balance changes.
Maybe we are doing something wrong? Do we need to pre-process the frame, or do something with it after the threshold?
We apply a mask to speed up processing and to reduce false detections, but it only helps partially.
In addition to the program, I am attaching a sample video for testing. The video also contains color samples. One option is to use these samples as a reference, but that did not help either, because the color balance sometimes changes differently in different parts of the frame.
Of course, some of the problems are created by a "smart" camera that automatically corrects the white balance.
But we have already shot many hours of video, this footage needs to be processed, and manual tracking takes a very long time.

I would be grateful for any ideas on how to improve our algorithm.

 

The video is here; it's too large for the forum:

https://drive.google.com/open?id=18G62jD75Qk1ZxJ6Ezl6qgBhyzZ7wwGBH

 

main.png

Message 1 of 10

The very first thing I'd do for any color-related vision application is convert to HSL or HSV; anything with an H (hue) plane.

 

A single color plane (e.g. red) by itself tells you nothing about the color!

 

Light green (0xA0 FF A0 for R G B) has more red in it than dark red (0x80 00 00).

 

Always use hue for color detection. Now, I can't run your code, and maybe hue is part of the magic inside those detection VIs. But I'd manually convert, extract the hue plane as an image, and do a plain (not local) threshold on that image.
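Since a LabVIEW diagram can't be pasted as text, here is the idea as a small Python sketch (pixel values assumed 8-bit RGB; `hue_plane` and `threshold` are made-up illustration names, not IMAQ calls):

```python
import colorsys

def hue_plane(rgb_pixels):
    """Convert a list of (r, g, b) 0-255 pixels to a hue plane scaled 0-255."""
    plane = []
    for r, g, b in rgb_pixels:
        h, _l, _s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
        plane.append(int(round(h * 255)))
    return plane

def threshold(plane, lo, hi):
    """Plain (global) threshold: 1 where lo <= value <= hi, else 0."""
    return [1 if lo <= v <= hi else 0 for v in plane]

# Light green vs. dark red: the red plane alone would rank light green
# as "redder", but the hue plane separates the two colors cleanly.
pixels = [(0xA0, 0xFF, 0xA0), (0x80, 0x00, 0x00)]
hues = hue_plane(pixels)          # green lands near 85 on a 0-255 hue scale, red near 0
mask = threshold(hues, 70, 100)   # keeps only the green pixel
```

Thresholding the hue plane instead of an RGB plane is the whole point: the light/dark variants of one tag color stay in one hue band.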

Message 2 of 10

Yes, we use the "H magic", and yes, I know about the color components.

Message 3 of 10

So the AVI is already converted to HSL? Because all I see is a conversion to greyscale.

 

Ah, so that's in the color mode of the detector... It's a bit tricky without the ability to run it.

 

I'd (also) convert to Hue, so I can see what's happening.

 

IIRC, white (and black) translate to pretty much arbitrary hues, so a limit on L would filter out whites (and blacks).
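As a sketch of that combined test (Python for illustration; the numeric limits are invented placeholders, not measured values):

```python
import colorsys

def hsl_mask(pixel, h_lo, h_hi, l_lo, l_hi):
    """Accept a pixel only if its hue is in range AND its lightness is moderate,
    so near-white and near-black pixels (whose hue is arbitrary) are rejected."""
    r, g, b = pixel
    h, l, _s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    h8, l8 = round(h * 255), round(l * 255)
    return h_lo <= h8 <= h_hi and l_lo <= l8 <= l_hi

# A saturated green tag pixel passes; a bright white wall pixel is rejected
# by the L limit even if its (meaningless) hue falls in the green range.
print(hsl_mask((40, 200, 40), 70, 100, 40, 200))    # True
print(hsl_mask((250, 255, 250), 70, 100, 40, 200))  # False: L too high
```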

 

Note that there might not be a way to get all correct objects without incorrect ones... If these objects have specific shapes (or other relation) you can use that to narrow the selection.

Message 4 of 10

@Artem.SPb wrote:

In general, the idea is this: process the frame with a threshold and then find the biggest spot.

The biggest? The size should be 'not too small', but also 'not too big'. A maximum value might filter out the large white areas.
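An area band instead of "the biggest" could be sketched like this (Python illustration on a toy binary mask; `blobs_in_area_band` is a made-up name, roughly what IMAQ particle filtering does):

```python
from collections import deque

def blobs_in_area_band(mask, min_area, max_area):
    """Label 4-connected blobs in a binary 2D mask and keep only those whose
    pixel count lies in [min_area, max_area]: 'not too small, not too big'."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    kept = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                queue, blob = deque([(y, x)]), []   # flood-fill one blob
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    blob.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if min_area <= len(blob) <= max_area:
                    kept.append(blob)
    return kept

# One huge "wall" blob (area 8), one tag-sized blob (area 2), one noise pixel:
grid = [
    [1, 1, 1, 1, 0, 0],
    [1, 1, 1, 1, 0, 1],
    [0, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 0],
]
tags = blobs_in_area_band(grid, 2, 4)   # only the area-2 blob survives
```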

 


@Artem.SPb wrote:

But the problem is that sometimes the algorithm responds either to the cell or to the white walls. Especially if someone goes near the cage and the color balance changes.


Putting it in a box (controlling the conditions) might help (in the future)? Controlled lighting is #1 in vision.

 

Perhaps you can undo the color balance changes. If you have some reference color (white corners?) you might be able to normalize (scale) the image by scaling the white to known values.
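A sketch of that normalization, assuming you can sample a patch that should be white (Python for illustration; simple per-channel linear scaling only):

```python
def normalize_to_reference(pixel, ref_white):
    """Scale each channel so that the measured reference patch maps to pure
    white (255, 255, 255); this undoes a per-channel color-balance shift."""
    return tuple(min(255, round(c * 255 / max(1, w)))
                 for c, w in zip(pixel, ref_white))

# Suppose the camera rendered a known-white corner as a bluish (200, 210, 240).
# Rescaling maps the patch back to white, and corrects a tag color that was
# shifted by the same cast.
ref = (200, 210, 240)
print(normalize_to_reference(ref, ref))            # (255, 255, 255)
print(normalize_to_reference((100, 180, 96), ref))
```

If the cast varies across the frame, you would need a reference patch per region rather than one global scale.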

 

I might have a go with the OCV lib I'm using (and made). But normally that's what I'm paid for 😁.

 

Do you have a video (or part of the uploaded video) where it goes wrong? The difficult part?

 

Message 5 of 10

It seems your solution has a single H, S, L threshold, so each color needs to fall within the same H, S and L limits.

 

Have you tried to treat the colors individually? So get the reds with a HSL threshold, then the blue, then the green?
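A per-color pass could look like this in outline (Python illustration; the HSL ranges below are invented placeholders you would tune from real frames):

```python
import colorsys

# Hypothetical per-tag HSL ranges on a 0-255 scale -- the actual numbers
# would come from sampling the tags in real frames.
TAG_RANGES = {
    "green": {"h": (70, 110),  "s": (60, 255), "l": (40, 200)},
    "blue":  {"h": (140, 185), "s": (60, 255), "l": (40, 200)},
}

def classify(pixel):
    """Return the first tag whose H, S and L ranges all contain the pixel."""
    r, g, b = pixel
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    h8, s8, l8 = round(h * 255), round(s * 255), round(l * 255)
    for name, rng in TAG_RANGES.items():
        if (rng["h"][0] <= h8 <= rng["h"][1]
                and rng["s"][0] <= s8 <= rng["s"][1]
                and rng["l"][0] <= l8 <= rng["l"][1]):
            return name
    return None

print(classify((40, 200, 40)))    # "green"
print(classify((40, 40, 200)))    # "blue"
print(classify((250, 250, 250)))  # None: white matches no band
```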

 

That seems to work OK-ish for me (although I used HSV).

 

The blue is a problem. The green color in the shade turns almost blue. So that's hard to distinguish.

 

For the red I had to merge two ranges. The hue of red centers around 0 and wraps around at the maximum, so one range doesn't suffice. You might need 190-255 and 0-15, or something like that.
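The wrap-around can be handled with a helper that treats `lo > hi` as a wrapped range (Python illustration, hue on a 0-255 scale):

```python
def hue_in_range(h, lo, hi):
    """Test hue membership on a 0-255 scale, allowing wrapped ranges:
    for red, lo=190, hi=15 means 190..255 OR 0..15."""
    if lo <= hi:
        return lo <= h <= hi
    return h >= lo or h <= hi  # range wraps through 0

print(hue_in_range(5, 190, 15))    # True  (red just above 0)
print(hue_in_range(200, 190, 15))  # True  (red just below the wrap)
print(hue_in_range(85, 190, 15))   # False (green)
```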

Message 6 of 10

"If these objects have specific shapes (or other relation) you can use that to narrow the selection."


There is no definite shape.
The animals move, and the collars can partially overlap. Sometimes they are not visible at all, if one animal has climbed onto another.

Message 7 of 10

"The biggest? The size should be 'not too small', but also 'not too big'. A max value might filter out the large white arrays."


You're right, it's better to say "the optimal spot".


"Putting it in a box (controlling the conditions) might help (in the future)? Controlled lighting is #1 in vision."


You're right again, but we have what we have.


"Do you have a video (or part of the uploaded video) where it goes wrong? The difficult part?"


https://yadi.sk/d/eNannlOWPJyWEw

In these recordings it is very hard to distinguish blue from green.

Message 8 of 10

Have you tried to treat the colors individually? So get the reds with a HSL threshold, then the blue, then the green?

 

Of course, each component (animal) is searched for separately. I showed a simplified version of the code to make it easier to understand. In fact, this code is executed three times with different threshold parameters.

 

The blue is a problem. The green color in the shade turns almost blue. So that's hard to distinguish.

 

Yes, this is the main problem: we cannot configure the algorithm to distinguish blue from green correctly.

Message 9 of 10

@Artem.SPb wrote:

Have you tried to treat the colors individually? So get the reds with a HSL threshold, then the blue, then the green?

 

Of course, each component (animal) is searched for separately. I showed a simplified version of the code to make it easier to understand. In fact, this code is executed three times with different threshold parameters.

 

The blue is a problem. The green color in the shade turns almost blue. So that's hard to distinguish.

 

Yes, this is the main problem: we cannot configure the algorithm to distinguish blue from green correctly.


In my setup, the green also had lots of overlap with all the grays.

 

If I had time, I would look into background removal. That might make it possible to remove the gates, ground and walls. The animals will remain. Then, first locate the animals, and then for each animal get the colors. I'm not sure how feasible background removal is with IMAQ\NIVision. I haven't even experimented with it that much in OpenCV.
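For reference, the simplest form of background removal is an exponential running average plus a difference threshold. A minimal sketch, in Python on toy 2-D lists rather than real images (`alpha` and `thresh` are arbitrary values, not tuned):

```python
def update_background(background, frame, alpha=0.05):
    """Exponential running average: the slow-changing background adapts,
    while fast-moving animals stay different from the model."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]

def foreground_mask(background, frame, thresh=30):
    """Flag pixels that differ from the background model by more than thresh."""
    return [[1 if abs(f - b) > thresh else 0 for b, f in zip(brow, frow)]
            for brow, frow in zip(background, frame)]

# Static scene at brightness 100; an "animal" at 220 appears in one corner.
bg = [[100.0] * 4 for _ in range(3)]
frame = [row[:] for row in bg]
frame[0][0] = 220.0
mask = foreground_mask(bg, frame)   # only the moving pixel is flagged
bg = update_background(bg, frame)   # background slowly absorbs the scene
```

After masking out the background, you would locate the animals first and only then look for tag colors inside each animal region.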

 

Another track I'd look into is feature tracking (AFAST, IIRC). That would probably require human interaction. Again, I think IMAQ\NIVision falls short on those techniques.

 

Message 10 of 10