Machine Vision

Pixel Scale with NTSC output and PCI-1405

I'm using a PCI-1405 framegrabber to capture images with a Watec 902H3-Supreme EIA CCD, using LabVIEW's IMAQ VIs. The data sheet for the Watec camera lists the number of effective pixels as "768(H) x 494(V)", but the image that is displayed from the framegrabber has a resolution of 640x480.

 

Since these two resolutions are different, I'm assuming there is either a rescaling step or a cropping step happening at some level. I am interested in the actual size of things in pictures (e.g., finding the waist of a laser beam), so I need to know the actual physical dimensions for the width and height of each pixel. The data sheet for the Watec camera does cite a "unit cell size", so I may need a conversion factor to get the physical size per pixel of the image that LabVIEW can display.

 

What I want to know:

 

- Where is the conversion to 640x480 happening? Is it on the framegrabber card, or does it happen before the camera outputs a signal? Initially I assumed it was happening at the framegrabber level, but from reading some other posts it seems like the PCI-1405 will just accept an NTSC signal (which has a standard resolution of 640x480, right?).

- Are the details of this rescaling or cropping standard, or do I have to contact the camera manufacturer? That is, I can think of many ways of getting rid of the extra pixels: decimation, a single crop at one edge, rescaling pixel size and rounding, etc. How can I find out which one is being used, if any?

 

Thanks for your help!

Message 1 of 4

Your camera output is an analog signal.  That means each row of pixels is converted to a continuous voltage waveform.  When the 1405 samples the analog waveform, it times the sampling so you (usually) get square pixels.  The 768 pixels are more or less resampled to get 640 pixels.  I assume some edge pixels are trimmed or ignored.

 

Ideally, the camera outputs 494 lines of video.  Some lines at the top and bottom are trimmed to get 480 lines.
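
If you want a rough starting number from that model, here is a back-of-the-envelope sketch (the unit cell dimensions below are placeholders, not values I pulled from the Watec data sheet - substitute whatever your data sheet actually lists):

```python
# Rough pixel-scale estimate under the resample/trim model described above.
# The unit cell values are placeholders, not taken from the Watec data sheet.
CELL_W_UM = 8.4   # assumed sensor unit cell width, microns
CELL_H_UM = 9.8   # assumed sensor unit cell height, microns

SENSOR_COLS, IMAGE_COLS = 768, 640   # horizontal: waveform resampled
SENSOR_ROWS, IMAGE_ROWS = 494, 480   # vertical: extra lines trimmed

# Horizontal: each image pixel spans about 768/640 = 1.2 camera pixels.
pixel_w_um = CELL_W_UM * SENSOR_COLS / IMAGE_COLS
# Vertical: lines are kept one-for-one, so the height is unchanged.
pixel_h_um = CELL_H_UM

print(f"image pixel is roughly {pixel_w_um:.2f} x {pixel_h_um:.2f} um (W x H)")
```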

 

You don't really need to worry about the pixels on the camera sensor.  Just calibrate the final image you get from the 1405.  You can take a picture of a calibration grid to figure out the actual resolution of the acquired image.
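
For example, if the calibration target has a known line spacing, the scale for your whole setup (lens plus camera plus framegrabber) falls out directly.  A minimal sketch, with placeholder numbers:

```python
# Convert a measured pixel distance into microns per image pixel using a
# calibration target of known spacing. The numbers below are placeholders.
GRID_SPACING_MM = 1.0   # known spacing between grid lines on the target
measured_px = 96.5      # distance between those lines, measured in the image

um_per_pixel = GRID_SPACING_MM * 1000.0 / measured_px
print(f"{um_per_pixel:.2f} um per image pixel (for this optical setup)")
```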

 

If you want an exact one-to-one relationship between the pixels on the camera sensor and the acquired image, you should use a digital camera.  This eliminates the conversion to an analog signal.  You would know the exact size of the sensor pixels in microns.

 

Bruce

Bruce Ammons
Ammons Engineering
Message 2 of 4

Bruce, thanks for your response. I still have some questions, though:


@Bruce Ammons wrote:

Your camera output is an analog signal.  That means each row of pixels is converted to a continuous voltage waveform.  When the 1405 samples the analog waveform, it times the sampling so you (usually) get square pixels.  The 768 pixels are more or less resampled to get 640 pixels.  I assume some edge pixels are trimmed or ignored.


Does "more or less resampled to get 640 pixels" mean that each of the new 640 pixels corresponds to 768 / 640 = 1.2 pixels from the original waveform? Or is there some decimation taking place, in which every sixth pixel is thrown away? Or a combination of both? If this resampling step is happening in the framegrabber, it should be happening in a well-defined way. However, I did not see an explanation of the algorithm in the PCI-1405 manual. I really would like to know what exactly this instrument is doing, if it is possible for me to know.

 


Ideally, the camera outputs 494 lines of video.  Some lines at the top and bottom are trimmed to get 480 lines.


This sounds like the number of rows is being changed in a well-defined way (basically, a crop operation). Do you know this from experience, or is it part of some video standard? Is there a reference that explains how a framegrabber decides where to crop? And why are the rows and columns being treated differently?


You don't really need to worry about the pixels on the camera sensor.  Just calibrate the final image you get from the 1405.  You can take a picture of a calibration grid to figure out the actual resolution of the acquired image.


One of the problems with this is that I'd like to use the camera in different configurations, both with imaging optics and without. If I calibrated the pixel size with a calibration grid, my determination of pixel size would still be limited by my knowledge of the properties of the imaging optics (magnification, etc.). I suppose this could be done carefully with some nice imaging optics, but it would be really convenient to know exactly how the image is being converted from the higher resolution to the slightly lower one, since I already have measurements of the pixel sizes from the camera manufacturer.


If you want an exact one-to-one relationship between the pixels on the camera sensor and the acquired image, you should use a digital camera.  This eliminates the conversion to an analog signal.  You would know the exact size of the sensor pixels in microns.

 

Bruce


Shouldn't the pixels in the digital image that results from processing the analog signal still have some physical correspondence with pixels on the sensor, just with analog-to-digital conversion noise (in intensity, or perhaps a pixel offset if there is noise in the sync) that wouldn't happen with a digital camera? Or is it really the case that the physical region to which an image pixel corresponds could change from one frame to the next as the framegrabber hands images to LabVIEW? If that were the case, then doing a calibration wouldn't help.

Message 3 of 4

The analog waveform is resampled at a rate that gives you roughly 1.2 camera pixels per image pixel.  It is time based, so the exact ratio is not precise.  The framegrabber only sees the analog signal - it has no idea how many pixels the camera originally started with.

 

There are usually settings in the frame grabber that determine how many lines are skipped at the top of the image.  The next 480 lines are used for the image, and the remaining lines are ignored.

 

If you use the vertical size of the camera pixels, that should be a pretty good measurement of your final pixel size.  I am guessing the camera pixel's horizontal width is about 1/1.2 of its vertical height.  Most images end up with square pixels, so the line spacing is the fixed dimension.
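
As a rule of thumb, if the image pixels really do come out square, the vertical unit cell height from the data sheet is the only number you need (the value below is a placeholder):

```python
# If image pixels are square, the vertical unit cell height sets the scale
# in both directions. Placeholder value - use the data sheet's number.
CELL_H_UM = 9.8                # assumed vertical unit cell height, microns
scale_um_per_px = CELL_H_UM    # one camera line per image line, square pixels
print(f"~{scale_um_per_px} um per pixel in both directions")
```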

 

The original camera pixels are converted to analog outputs (the waveform), followed by a filter that smooths out the pixel-to-pixel variations.  The result is a smooth, continuous analog waveform.  This is what the framegrabber sees, and it reverses the process - analog values are captured at regular intervals and turned into digital pixels.
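
Conceptually the whole chain looks something like the sketch below.  This is just a toy model in Python/NumPy to illustrate the idea, not what the 1405 actually implements internally:

```python
import numpy as np

# Toy model: 768 camera pixels -> smoothed "analog" waveform -> 640 samples.
rng = np.random.default_rng(0)
camera_row = rng.random(768)              # one row of camera pixel values

# Crude low-pass filter standing in for the camera's output smoothing.
kernel = np.ones(3) / 3.0
analog_like = np.convolve(camera_row, kernel, mode="same")

# The grabber samples the waveform at 640 evenly spaced instants per row.
t_camera = np.linspace(0.0, 1.0, 768)
t_grabber = np.linspace(0.0, 1.0, 640)
image_row = np.interp(t_grabber, t_camera, analog_like)

print(image_row.shape)   # (640,) - no single pixel is dropped; all are blended
```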

 

FYI, some older analog cameras had less than 640 horizontal pixels.  Some cheap ones might still.  For those cameras, the image pixels were narrower than the camera pixels, and the image quality was lower in the x direction due to blurring between pixels.

 

Bruce

Bruce Ammons
Ammons Engineering
Message 4 of 4