Machine Vision


Using FPGA-Cameralink with Andor Zyla sCMOS cameras

Does anyone have any experience or knowledge of using an FPGA/Camera Link frame grabber (NI 1483R or 1473R) to acquire images from the Andor Zyla 4.2 sCMOS camera?  Or from other high-end sCMOS cameras such as the Hamamatsu Orca Flash 4.0 V2, the Andor Neo, or the PCO edge?  These cameras appear to have their own interpretations of the 8-bit, 10-tap Camera Link protocol, and I'd like to know how easy it is to configure them properly.

 

I'm particularly interested in the Andor Zyla 4.2 camera, with frame settings of 2048x2048@~100fps, 128x128@~1600fps and 2048x8@~26000fps.  The bandwidth of the 1483R and 1473R seems sufficient for each of these configurations, but it depends on being able to configure the data appropriately.
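A quick back-of-envelope check of those three frame settings (a Python sketch, assuming 8 bits per pixel and the nominal ~850 MB/s payload of a full-configuration Camera Link interface; the exact limit depends on the pixel clock and overhead):

```python
# Back-of-envelope data rates for the Zyla 4.2 frame settings mentioned
# above, assuming 1 byte per pixel.  Full-configuration Camera Link
# (10 taps x 8 bits at up to 85 MHz) carries roughly 850 MB/s.

CAMERALINK_FULL_MBPS = 850  # approximate full-config payload, MB/s

configs = [
    (2048, 2048, 100),   # ~100 fps full frame
    (128, 128, 1600),    # ~1600 fps small ROI
    (2048, 8, 26000),    # ~26000 fps narrow ROI
]

for w, h, fps in configs:
    mb_per_s = w * h * fps / 1e6  # 1 byte per pixel
    ok = "fits" if mb_per_s < CAMERALINK_FULL_MBPS else "exceeds"
    print(f"{w}x{h} @ {fps} fps -> {mb_per_s:.0f} MB/s ({ok} full-config link)")
```

All three come in under the link's capacity (the full frame and the narrow ROI both land around 420 MB/s), so the question really is the tap configuration, not raw bandwidth.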

 

The only related information I've come across was a talk from last year's NI Week by Dan Milkie of Coleman Technologies using the Hamamatsu camera, but there are no specific details.

Message 1 of 5

Hello GregS,

 

Configuring cameras to work with an FPGA frame grabber consists of two steps. The first step is to configure the camera itself to use the desired resolution, frame rate, bit representation, and tap configuration. This is typically done by communicating with the camera over the serial connection within the Camera Link bus. This Knowledge Base article is a great resource for setting this up:

 

Communicating via Serial with Vision-RIO

http://digital.ni.com/public.nsf/allkb/A56C0DAD5FD5B23286257A61005DF16F

 

The second step is programming the FPGA to properly de-interleave the camera taps. Different manufacturers implement different acquisition patterns, and the camera manufacturer should have documentation describing how to interpret the taps for a specific camera. Then it is a matter of editing the LabVIEW FPGA vision examples to interpret the taps according to that documentation.
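To illustrate what that de-interleaving step involves, here is a sketch in Python (not LabVIEW FPGA code); the two geometries shown and the function name are hypothetical stand-ins for whatever mapping your camera's documentation actually specifies:

```python
# Hypothetical tap de-interleaving for an 8-bit, 10-tap camera.
# Each clock cycle the frame grabber receives one byte per tap; the
# camera's tap geometry defines where each byte lands in the line.
# Two common patterns are shown: "interleaved" (each clock delivers
# 10 adjacent pixels) and "segmented" (each tap covers a contiguous
# tenth of the line).  Check your camera's manual for the real mapping.

NUM_TAPS = 10

def deinterleave_line(tap_words, geometry="interleaved"):
    """tap_words: list of per-clock tuples, one byte per tap."""
    width = len(tap_words) * NUM_TAPS
    line = [0] * width
    seg_len = width // NUM_TAPS
    for clk, word in enumerate(tap_words):
        for tap, pixel in enumerate(word):
            if geometry == "interleaved":
                line[clk * NUM_TAPS + tap] = pixel
            else:  # segmented
                line[tap * seg_len + clk] = pixel
    return line
```

On the FPGA the same index arithmetic would be done with counters and a write port into line memory, but the pixel-placement logic is the part the camera documentation has to supply.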

 

I know this probably doesn’t answer your specific questions, but hopefully it addresses the higher-level configuration questions you have. For details about the specific cameras, I strongly recommend contacting the camera manufacturer directly.

Message 2 of 5

Thanks, that's very helpful.  I'd previously talked with the Hamamatsu rep, but he had no knowledge of using their camera with an FPGA.  I'll do the same with Andor, but thought there might be some users here who had already done this.

Message 3 of 5

Hello GregS,

 

I just wanted to clarify my last post for good measure. I wouldn’t expect the camera manufacturers to know the implementation details required to program an FPGA to properly decode images from their cameras, but they should be able to provide documentation that shows the tap configurations and geometries for their camera. For example, take a look at the "Camera Link Tap Geometry" section of Basler's manual in the downloads section of the link below, where Basler shows how image information is distributed across a specific number of taps.

 

ace Camera Link User Manual V5

http://www.baslerweb.com/en/products/area-scan-cameras/ace

 

You can reverse this process in software to decode the taps back into an image. The Vision Acquisition Software driver installs some FPGA vision examples to get you started; the polymorphic VI in those examples supports several common tap configurations and geometries. If your camera's tap configuration and geometry is not already implemented, you would have to modify the example for your camera.
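One way to validate such a modification before touching the FPGA code is a software round trip: simulate how the camera splits a line across taps, decode it back, and check that the original line is recovered. A minimal sketch, assuming a hypothetical segmented 10-tap geometry (both function names are illustrative):

```python
# Round-trip check for a tap decoder: encode a line the way a
# segmented-geometry camera would split it across taps, then decode
# it back and confirm the original pixel order is recovered.

NUM_TAPS = 10

def encode_segmented(line):
    """Split a line into per-clock tap words (camera side)."""
    seg = len(line) // NUM_TAPS
    return [tuple(line[t * seg + c] for t in range(NUM_TAPS))
            for c in range(seg)]

def decode_segmented(tap_words):
    """Reassemble the line from tap words (frame-grabber side)."""
    seg = len(tap_words)
    line = [0] * (seg * NUM_TAPS)
    for c, word in enumerate(tap_words):
        for t, px in enumerate(word):
            line[t * seg + c] = px
    return line

row = list(range(40))
assert decode_segmented(encode_segmented(row)) == row
```

If the round trip holds for the geometry in the camera's manual, the same index mapping can be carried over into the modified FPGA example with some confidence.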

 

I hope that clarifies things a little further and helps you get off on the right foot.

Message 4 of 5

Thanks, that gives me some confidence that we can get this to work if we go down this track.  I'll try to remember to update the forum with any details - if we get that far!

Message 5 of 5