I'm having two unrelated problems with LabVIEW and the IMAQ camera file.
1) I am using LabVIEW to interface with an NI 1424 frame grabber that is in turn connected to our custom camera. Our camera is interlaced, but I can’t get (or create) a camera file that looks at Enable D (Field Sync). This enable is hooked to our camera, which sends out a high when the field is even and a low when it’s odd. The app engineers at NI have been working on this for a while, and all I’ve received is a random assortment of camera files with a “Try this one” response. I have found a workaround: I send the entire frame to an array, manually interlace the array, and display it on an intensity graph, since I can’t get it back into an “IMAQ image” format. This slows me down to 0.25 frames per second. Does anyone know how to edit the camera file so that an external trigger can serve as a field sync? As a side note, Enable A and Enable B are Frame Sync and Line Sync respectively, and that part works.
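For what it's worth, the manual interlacing step in my workaround just amounts to weaving the two fields together line by line. A minimal sketch of the idea in Python/NumPy (the function name is mine, not anything from IMAQ; in LabVIEW this is the array shuffling I do before the intensity graph):

```python
import numpy as np

def interlace_fields(even_field, odd_field):
    """Weave two fields into one full frame.

    even_field holds the even-numbered lines (0, 2, 4, ...),
    odd_field holds the odd-numbered lines (1, 3, 5, ...).
    """
    even = np.asarray(even_field)
    odd = np.asarray(odd_field)
    frame = np.empty((even.shape[0] + odd.shape[0], even.shape[1]),
                     dtype=even.dtype)
    frame[0::2] = even  # even lines go to rows 0, 2, 4, ...
    frame[1::2] = odd   # odd lines go to rows 1, 3, 5, ...
    return frame

# Two 2-line fields weave into a 4-line frame
full = interlace_fields([[0, 0], [2, 2]], [[1, 1], [3, 3]])
```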
2) This camera that we built is color, but we output single frames of red, green, blue, and luminance (Y’) one at a time. I wanted to take the three images (red, green, and blue), convert them to Y’CbCr, then convert back to RGB, but substitute in the new Y’. I’m doing this now by sending the image arrays through multiple formula nodes and then drawing a pixmap. This slows me down only a small amount compared to problem #1 and works wonderfully, but the major drawback is that my camera has 14 bits of information for each color, and I have to compress everything down to 8 bits to get this to work. I could purchase the LabVIEW image add-on for $3000, but as far as I can tell (I can’t find any good documentation), it can’t do any better. Is this a limitation of LabVIEW, or is there any way I can get my full resolution? The only alternative I see is scrapping everything and restarting the program in C++, where I know I can get up to 64-bit color with OpenGL and most likely also improve the speed on problem #1.
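For reference, the luma-substitution math itself doesn't require 8-bit data: done in floating point, it preserves the full 14-bit precision end to end. A sketch in Python/NumPy (the coefficients below are the BT.601 luma weights, which is an assumption on my part; swap in whatever matrix your formula nodes use):

```python
# BT.601 luma coefficients -- an assumption; use your camera's matrix
KR, KG, KB = 0.299, 0.587, 0.114

def rgb_to_ycbcr(r, g, b):
    """RGB -> Y'CbCr in floating point (any bit depth, no compression)."""
    y = KR * r + KG * g + KB * b
    cb = (b - y) / (2 * (1 - KB))
    cr = (r - y) / (2 * (1 - KR))
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Inverse transform; exact round trip up to float rounding."""
    r = y + 2 * (1 - KR) * cr
    b = y + 2 * (1 - KB) * cb
    g = (y - KR * r - KB * b) / KG
    return r, g, b

# 14-bit sample values (0..16383) round-trip without being squeezed to 8 bits
y0, cb0, cr0 = rgb_to_ycbcr(16000.0, 8000.0, 123.0)
r1, g1, b1 = ycbcr_to_rgb(y0, cb0, cr0)
```

To substitute the new luminance, you would just pass your measured Y’ plane into `ycbcr_to_rgb` in place of the computed `y`.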
Thank you,
Shimonek