VDM - Filter/highlight the "Void" of an Image


Hi there, 

 

I am writing a script to process some grayscale images to find the "voids" and calculate the percentage of void area relative to the whole area.

Attached is part of the image. In the image, you can see a lot of bubbles underneath; we call them "voids".

 

I initially used IMAQ Threshold, selecting a range of pixel values defined as "void", and IMAQ Histogram to calculate the void percentage.

However, choosing a fixed range of pixels does not work well, because some void pixels are very close in intensity to non-void areas.

The threshold either overlaps too much of the non-void area or covers too little of the void area.

 

I am quite new to VDM. Is there any feature that could help with my image processing?

 

Thanks, and looking forward to your response.

 

Regards, 

VZ

 

 

Message 1 of 11

Hi,

 

By quickly testing your capture with some filters in Vision Assistant (which comes with the Vision Development Module), I couldn't get conclusive results either.

 

Before doing any processing, it is always better to get the best possible quality for the source image.

 

For example:

 

1. Your image is a bit blurry, noisy, and not well contrasted. Try adjusting the lens focus and aperture. There are also software parameters such as gain and exposure time that can be configured from NI-MAX and/or with the IMAQdx VIs / property nodes.

 

2. There are some black stripes covering your bubbles. Try to remove them physically, if possible of course.

 

3. It seems your illumination is not homogeneous. You have vertical bands on the left and on the right that are slightly darker. Having a constant source of light pointed at your inspection area (provided that it does not add reflections) can also make the background more homogeneous.

 

For item 3, another idea could be to take a reference image of a test subject with no bubbles, so that you just have the background, and then subtract this reference from the real test images. This should globally homogenize the image brightness and make contour analysis easier.
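Conceptually, that background-subtraction step looks like the following pure-Python sketch (illustrative only; in LabVIEW you would use the IMAQ arithmetic VIs on image buffers, and the function name and offset here are assumptions, not part of the original post):

```python
def subtract_background(image, reference, offset=128):
    """Subtract a no-bubble reference image from a test image.

    An offset is added so that pixels darker than the reference
    (potential voids) stay within the 0-255 range instead of clipping at 0.
    """
    return [
        [max(0, min(255, p - r + offset)) for p, r in zip(img_row, ref_row)]
        for img_row, ref_row in zip(image, reference)
    ]

# Uneven background (brighter in the middle) with one dark void pixel.
reference = [[90, 120, 90],
             [90, 120, 90]]
test      = [[90,  60, 90],   # 60 is a void on the bright background
             [90, 120, 90]]

flat = subtract_background(test, reference)
# Background pixels all map to the offset value; the void stands out.
print(flat)  # → [[128, 68, 128], [128, 128, 128]]
```

After subtraction, the uneven background becomes a uniform level, so a single threshold around the offset can separate the voids.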

 

Regards,

Raphaël.

Message 2 of 11

Thanks for the response!

 

Unfortunately, this image was captured from an X-ray machine.

It shows part of the inside of a small semiconductor component, which is enclosed and sealed.

The image quality will always be roughly at this level, and the stripes cannot be removed.

 

Is there a way to process the image layer by layer?

For example, the lighter parts are visually the top layer, the darker parts are lower layers, and so on.

 

 

Message 3 of 11
Solution
Accepted by topic author chngveez

@chngveez wrote:

Thanks for the response!

 

Unfortunately, this image was captured from an X-ray machine.

 


Well, if you have an X-ray image, the usual Signal to Noise Ratio is typically low. To improve the SNR, I would recommend acquiring multiple images and averaging them together using IMAQ Compute Average Image. This is a common approach.
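The averaging idea can be sketched in pure Python like this (illustrative only; in LabVIEW you would feed successive frames into IMAQ Compute Average Image):

```python
def average_frames(frames):
    """Pixel-wise average of N grayscale frames (lists of rows).

    Uncorrelated noise shrinks roughly by a factor of sqrt(N),
    so averaging 16 frames cuts the noise to about a quarter.
    """
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]

# Three noisy captures of the same 1x3 scene (true values 100, 50, 100).
frames = [[[102, 48, 101]],
          [[ 98, 52,  99]],
          [[100, 50, 100]]]
print(average_frames(frames))  # → [[100.0, 50.0, 100.0]]
```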

 

From an image processing point of view, modern machine learning methods are very applicable nowadays, but they require a large number of pre-segmented images to train a model. Below, I'll describe a more or less "classical" approach.

 

You shared a PNG file with an RGB image, but in fact, this is a grayscale image. Typically, X-ray images are 16-bit, and I would recommend keeping them as 16-bit to utilize the full dynamic range.

For now, I will extract the intensity plane to convert the RGB image to an 8-bit grayscale image:

image-20250221111151504.png

Now it's time to remove the annoying black lines, otherwise you will get holes and stripes in the thresholded image:

Screenshot 2025-02-21 12.09.01.jpg

Here is a pretty simple trick: because your stripes are strictly horizontal and vertical, you can apply a linear (single-line) median filter twice, once horizontally and once vertically, something like this:

image-20250221111426172.png

I will use 19 for both dimensions. Now, your image looks like this (this action will remove some very small particles, and I’m not sure if you need them. In any case, they will be processed separately later):

image-20250221111522866.png
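The separable median trick can be sketched in pure Python (illustrative only; in Vision you would run the median filter with a 1xN kernel and then an Nx1 kernel; the helper names here are assumptions):

```python
from statistics import median

def median_1d(line, size):
    """Median filter a single row with an odd window `size` (edges replicated)."""
    half = size // 2
    padded = [line[0]] * half + list(line) + [line[-1]] * half
    return [median(padded[i:i + size]) for i in range(len(line))]

def median_h_then_v(image, size):
    """Apply the 1-D median horizontally, then vertically.

    A thin dark stripe (1-2 px wide) never forms the majority inside
    the window, so the median replaces it with the local background.
    """
    rows = [median_1d(row, size) for row in image]
    cols = list(zip(*rows))                      # transpose
    cols = [median_1d(col, size) for col in cols]
    return [list(row) for row in zip(*cols)]     # transpose back

# A bright field crossed by one horizontal black stripe.
img = [[200, 200, 200, 200, 200],
       [  0,   0,   0,   0,   0],   # the stripe
       [200, 200, 200, 200, 200],
       [200, 200, 200, 200, 200]]
print(median_h_then_v(img, 3))      # the stripe row is filled with 200s
```

Note that the same mechanism also erases genuine features thinner than half the window, which is why the small voids are handled separately below.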

However, you still have vertical shadows. Since they are strictly vertical, you can compute the average of each column along the X axis and then apply Flat Field Correction using the result as the bright image. Something like this:

image-20250221111808510.png

This will give you this image, which is suitable for Flat Field Correction:

image-20250221111952590.png

Then you can use this image in IMAQ Flat Field Correction (keeping the dark image disconnected). The corrected image will look like this:

image-20250221112055178.png
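The column-average flat field step can be sketched like this (pure-Python illustration; IMAQ Flat Field Correction with only the bright image wired computes essentially corrected = raw * mean(bright) / bright, and the function names here are assumptions):

```python
def column_bright_image(image):
    """Average each column to build a synthetic 'bright' image that
    captures the vertical shading bands."""
    h = len(image)
    col_means = [sum(row[x] for row in image) / h for x in range(len(image[0]))]
    return [col_means[:] for _ in range(h)]

def flat_field(image, bright):
    """Divide out the shading: corrected = raw * mean(bright) / bright."""
    vals = [p for row in bright for p in row]
    gain = sum(vals) / len(vals)
    return [[min(255, round(p * gain / b)) for p, b in zip(irow, brow)]
            for irow, brow in zip(image, bright)]

# Left column is in shadow (half brightness), right column is fine.
img = [[100, 200],
       [100, 200]]
bright = column_bright_image(img)   # [[100.0, 200.0], [100.0, 200.0]]
print(flat_field(img, bright))      # → [[150, 150], [150, 150]]
```

After this division, both columns land on the same gray level, so one threshold rule can serve the whole width of the image.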

With such an image, you can proceed with thresholding. A single fixed-value threshold will not work well, so you will need to use local thresholding instead. Assuming you need to segment both large and small voids, thresholding can be performed twice, once for each. For the small voids, you can use the original (non-median-filtered) image, because after median filtering some small voids may disappear along with the black lines. The two results are then combined with IMAQ Or.

image-20250221112518215.png
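Combining the two binary masks is just a pixel-wise OR (IMAQ Or in Vision), sketched here in pure Python for illustration:

```python
def or_masks(mask_a, mask_b):
    """Pixel-wise OR of two binary masks (0 = background, 255 = void)."""
    return [[255 if (a or b) else 0 for a, b in zip(ra, rb)]
            for ra, rb in zip(mask_a, mask_b)]

large_voids = [[255, 255, 0, 0]]    # from the median-filtered image
small_voids = [[0,   0,   0, 255]]  # from the original image
print(or_masks(large_voids, small_voids))  # → [[255, 255, 0, 255]]
```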

To get better contours, you might need some morphological post-processing:

image-20250221112609076.png

With a simple trick, you can turn these into IMAQ overlay:

ov.png

Now, put everything together:

snippet.png

And the result:

image-20250221112810211.png

For experiments, I would recommend using NI Vision Assistant, which is delivered together with the Vision Development Module. Additionally, to get a deeper understanding of image processing, I suggest reading the book "Digital Image Processing" by Rafael Gonzalez and Richard Woods.

 

The code is in the attachment (downgraded to LV2018). Feel free to use it as a starting point. It is just a quick and dirty "breakfast exercise": some voids are not detected or only partially detected, and there is still a lot of work and "fine tuning" ahead to make it stable for every image you have.

 

Message 4 of 11

Hi Andrey, 

 

Wonderful, thanks for your hard work! This is exactly something I am looking for. 

I will try it out and let you know the outcome. 

 

The image is actually from a snipping tool (which always produces RGB images). I can't share the actual image for certain reasons, especially in this public forum. The actual image is an 8-bit BMP.

 

 

Message 5 of 11

@chngveez wrote:

Hi Andrey, 

 

Wonderful, thanks for your hard work! This is exactly something I am looking for. 

I will try it out and let you know the outcome. 


Glad to see it was helpful for you. I'm not sure about your particular setup, but classically in X-ray image processing, flat field correction is the very first step. You should acquire and average a set of dark images (typically 30-50) taken in the absence of X-rays, then a set of bright "air" images (close to the saturation level, but avoiding overexposure) taken with X-rays on but without an object. These images are sometimes called "offset" and "gain"; save them. Then, after flat field correction of each shot (the number of images to average depends on your performance requirements), you will get a perfectly "flat" image without gradients. If the shadows and marks are always in exactly the same place, they will be removed by the flat field correction as well. Your final goal is an image that is as flat and gradient-free as possible, so that even a trivial threshold will catch all voids. Good luck!
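With averaged dark ("offset") and bright ("gain") references, the classic two-point correction is corrected = (raw - dark) / (bright - dark) * scale. A minimal pure-Python sketch (illustrative names; the real IMAQ Flat Field Correction VI works on image buffers):

```python
def flat_field_full(raw, dark, bright, scale=255):
    """Two-point flat field: corrected = (raw - dark) / (bright - dark) * scale.

    `dark` is the averaged no-X-ray image, `bright` the averaged
    'air' image (X-rays on, no object in the beam).
    """
    out = []
    for rr, dr, br in zip(raw, dark, bright):
        row = []
        for r, d, b in zip(rr, dr, br):
            denom = max(1, b - d)                 # guard against dead pixels
            row.append(max(0, min(scale, round((r - d) * scale / denom))))
        out.append(row)
    return out

dark   = [[10, 12]]
bright = [[210, 180]]          # detector gain varies pixel to pixel
raw    = [[110, 96]]           # same physical intensity hits both pixels
print(flat_field_full(raw, dark, bright))  # → [[128, 128]]
```

Despite the different raw values, both pixels map to the same corrected level, which is exactly the "flat" behavior described above.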

Message 6 of 11

Hi Andrey, 

 

I went through your code and re-created it (this is part of my learning style, to get more familiar with it and also adapt it to my needs). It is very close, but some of the big voids are clearly detected differently. I tried to fine-tune it but had no luck. I attached the smoothed image. After smoothing, I can share the whole image instead of a snippet.

 

In the image below, the red highlights show incomplete coverage of the voids, and the blue highlights show excessive coverage. It seems the flat field correction needs to be fine-tuned. Your earlier example shows X-axis averaging only; in the real case it needs to be both X and Y, which I am not familiar with. I attached my code here.

Would you be able to help with this? I am using LabVIEW 2018.

 

 

chngveez_0-1740509983889.png

 

Message 7 of 11

From that point, it will not be so easy. You probably know the Pareto 80/20 rule. In image processing, especially in the X-ray area, this can easily turn into a 95/5 ratio. Let's say, to get 95% of the acceptable results, you will quickly spend only 5 hours, but to get the remaining 5% of detections at acceptable false positives level, you will need an additional 95 hours of work.

 

In this particular case, I would recommend analyzing the entire imaging chain, starting from the X-ray source. You could try increasing the X-ray current to get a brighter image, or try adding a copper filter (half a millimeter or so, directly after the tube) to the X-ray beam and then increasing the current to improve contrast.

If you look at the line profile in the upper area, for example:

image-20250226085529587.png

You will see a falling gradient, which is the obvious reason for partial detection:

image-20250226085615764.png

So, your corners are "under-compensated." You could create an additional flat-field mask to compensate for this, but it is hard to create: trivial averages will not work. Maybe you could use a "golden part" (one without voids), but this is very sensitive to the accuracy of its placement in the X-ray beam. Direct subtraction and comparison rarely work well, because you need sub-pixel accuracy or a very robust image registration algorithm, but I have no idea about your setup.

You could also try ROI-based thresholding, where the rectangular inner part and outer border are thresholded with different rules.

Theoretically, you can increase robustness with some edge enhancement. For example, you can use a Sobel filter:

image-20250226090141677.png

Then you can implement an "intelligent" convex polygon algorithm, guided by information from the Sobel filter to fill missing areas.

You could also add edge detection and use this information to correct the particles (it will be hard to get this stable across the whole image, because you have low contrast, only around 20 gray levels):

image-20250226090400572.png

So, the devil is in the details. I would recommend creating a convenient working environment that can read lists of images with easy navigation, then splitting the dataset into "simple" and "complicated" cases and re-running the algorithms after each change. Be prepared for a huge amount of work.

 

Message 8 of 11

Hi Andrey, 

 

Thanks again for your detailed explanation. It is really, really helpful!

Please allow me just one more question, and I hope I won't bother you any further.

 

Before I raised my issue in this community forum, I was actually trying something like you mentioned: ROI-based thresholding. However, I am not sure how to apply different rules as you described. Could you please explain how it works?

 

What I did before was:

1) Apply an image mask on the outer part, to work on the inner part.

2) Apply the expected threshold and save the result to Mask1.

3) Apply an image mask on the inner part, to work on the outer part.

4) Apply the expected threshold and save the result to Mask2.

5) Combine Mask1 and Mask2. I am stuck here, because I am not sure how to combine images.

By the way, I used Vision Assistant with the filter function "Convolution - Highlight Details" with kernel size = 25 to increase the border visibility. I put the image in the attachment. I hope it allows you to apply the ROIs more easily.

 

Thanks and hoping to get your response!

Message 9 of 11
Solution
Accepted by topic author chngveez

@chngveez wrote:

Hi Andrey, 

 

Thanks again for your detailed explanation. It is really, really helpful!

Please allow me just one more question, and I hope I won't bother you any further.

 

Before I raised my issue in this community forum, I was actually trying something like you mentioned: ROI-based thresholding. However, I am not sure how to apply different rules as you described. Could you please explain how it works?


It is quite simple, something like this:

Snippet.png

This can easily be extended to three or more nested ROIs. If you have many, think about the DRY principle (Don't Repeat Yourself) and do it with for loops instead of copy-pasting the same pieces of code.
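In textual form, the nested-ROI idea from the snippet looks roughly like this (pure-Python sketch; in LabVIEW you would threshold each masked region separately and merge with IMAQ Or, and the function name and tuple layout here are assumptions):

```python
def threshold_by_roi(image, rois):
    """Threshold each rectangular ROI with its own limits and merge the results.

    `rois` is a list of (x0, y0, x1, y1, low, high) tuples; later ROIs
    override earlier ones where they overlap, so the inner ROI wins.
    """
    h, w = len(image), len(image[0])
    mask = [[0] * w for _ in range(h)]
    for x0, y0, x1, y1, low, high in rois:
        for y in range(y0, y1):
            for x in range(x0, x1):
                mask[y][x] = 255 if low <= image[y][x] <= high else 0
    return mask

img = [[30, 30, 90, 90],
       [30, 30, 90, 90]]
rois = [(0, 0, 4, 2, 0, 50),    # outer ROI: darker rule for the border
        (2, 0, 4, 2, 80, 120)]  # inner ROI: brighter rule, overrides overlap
print(threshold_by_roi(img, rois))  # → [[255, 255, 255, 255], [255, 255, 255, 255]]
```

The loop over the ROI list is exactly the DRY version of the copy-pasted per-ROI code: adding a third nested region is just one more tuple.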

 

Message 10 of 11