Machine Vision


cRIO IMAQ Express help

I'm new to IMAQ so I need help with what I'm assuming are basic problems.

 

  1. I want to continuously stream the camera's image, but only capture a certain number of images (N) when the control is pressed.

    Attached is the current VI I am trying to use, which uses the Express VI twice: once to stream the data and again to capture the images. I don't think I'm doing this right, but I haven't been able to figure out another way.

  2. When the capture image button is pressed, I want to convert each image and its data to binary, then write that binary data out as a JSON file (or in JSON format) similar to this:
    {
        First image: image data,
        Second image: image data,
        ....
        Nth image: image data,
    }
  3. Then I want to take that data and send it to a TCP server. This part I know how to do, but would it be better to send the data as a file or just as a string? Since I'm using Python, I can use the FPGA Interface Python API to read values from the FPGA bitfile. Would it be better to store the data in an FPGA indicator and read it with that library, or should I just use my TCP server instead? A rough sketch of what I mean for the JSON/TCP side is below.
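
To make points 2 and 3 concrete, here is a rough Python sketch of what I have in mind for the JSON and TCP side, assuming the images have already been saved to disk by the LabVIEW side. The file names, dictionary keys, host, and port are just placeholders.

    import base64
    import json
    import socket

    HOST, PORT = "192.168.1.10", 6340              # placeholder TCP server address

    def read_image_bytes(path):
        """Read a saved image file (e.g. a PNG written by LabVIEW) as raw bytes."""
        with open(path, "rb") as f:
            return f.read()

    # JSON cannot carry raw bytes, so the binary image data is base64-encoded into text.
    image_paths = ["capture_0.png", "capture_1.png"]   # placeholder file names
    payload = {
        f"Image {i}": base64.b64encode(read_image_bytes(p)).decode("ascii")
        for i, p in enumerate(image_paths)
    }

    message = json.dumps(payload).encode("utf-8")

    # Send the JSON as one string over TCP; a 4-byte length prefix tells the server
    # how many bytes to read before it tries to decode the JSON.
    with socket.create_connection((HOST, PORT)) as conn:
        conn.sendall(len(message).to_bytes(4, "big"))
        conn.sendall(message)
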
Message 1 of 6

I want to add another condition for my first issue. Say I want to make it a timed capture, where the interval between captures changes with the capture count, as in this table:

captures, interval (ms)
0, 0.0625
1, 0.125
2, 0.25
3, 0.5
4, 1.0
5, 2.0
6, 4.0
7, 8.0
8, 16.0
9, 32.0
10, 64.0
11, 128.0
12, 256.0
13, 512.0
14, 1024.0
15, 2048.0

 Is there an easy way to do this?
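
In Python terms, the pattern I am picturing is just the base interval of 0.0625 ms doubled after each capture. capture_frame below is a placeholder for whatever actually grabs an image:

    import time

    BASE_INTERVAL_MS = 0.0625      # wait after the first capture; doubles each time

    def timed_capture(num_captures, capture_frame):
        """Grab num_captures frames, doubling the wait between successive captures.

        capture_frame is a placeholder callable that returns one frame.
        Note that sub-millisecond sleeps are not precise on a desktop OS.
        """
        frames = []
        for n in range(num_captures):
            frames.append(capture_frame())
            interval_ms = BASE_INTERVAL_MS * (2 ** n)   # 0.0625, 0.125, 0.25, ... ms
            time.sleep(interval_ms / 1000.0)            # time.sleep takes seconds
        return frames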

Message 2 of 6

Hi,

 

The FPGA Interface Python API lets you communicate with the cRIO FPGA from Python running on a host computer. As I understand it, you won't be doing any image processing on the FPGA, and I don't think you need the FPGA to achieve the frame rates you mentioned, which means the cRIO would act only as a communication channel between the camera and the PC. In that case, I would connect the camera directly to the PC and use LabVIEW with the corresponding image acquisition driver on the PC. Does this make sense?
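
For reference, reading a value back from the FPGA with that API looks roughly like this; the bitfile name, RIO resource, and indicator name below are placeholders:

    from nifpga import Session

    # Placeholder bitfile path, RIO resource name, and indicator name.
    with Session(bitfile="MyCrioFpga.lvbitx", resource="RIO0") as session:
        image_ready = session.registers["ImageReady"]   # indicator on the FPGA VI's front panel
        print(image_ready.read())

Keep in mind that a front-panel indicator only holds a single value (or fixed-size array) at a time, so for streaming image data a DMA FIFO or your own TCP connection is usually a better fit than polling indicators.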

 

Regards.

Message 3 of 6

@DonhJoe wrote:

In that case, I would connect the camera directly to the PC and use LabVIEW with the corresponding image acquisition driver on the PC. Does this make sense?


That does make sense, but let's say we want it all bundled within LabVIEW/Python without dealing with each camera's individual acquisition driver, because we might use different cameras and automating each individual driver seems like more trouble than it is worth.

 

Any thoughts on my other couple of questions?

Message 4 of 6

Here is what I am currently doing for the timed capture. Is this correct?

Message 5 of 6

And now when I try to write that to a file, I am getting a permission denied error.

Message 6 of 6