07-13-2018 10:14 AM
Think we're back on the same page now...
@RavensFan wrote:
His original message was about why the previous images were lost and he was only getting the last image. I answered that in message 2: what he was queuing up was a reference to a specific memory location that is always overwritten when a new image comes in. The value on the purple wire getting queued up isn't a reference to a specific image, just a memory location holding image data that will change whenever new images are acquired. The OP had actually marked message #2 as the solution, but later rescinded that. I'm not sure why, because it is the solution to the question he had posted.
Agreed, that's the solution to the question.
But in message 1, he explained that his reason for trying to save all the images after collecting wasn't that this is necessarily what he wanted to do; he did it thinking it would fix his problem of a slowdown in frame rate. So the real problem he is trying to solve is a slow frame rate. Saving each image after acquiring it in a sequential manner is too slow. (How slow that is, or how fast he needs it to be, wasn't mentioned.)
I read that as "saving (even parallel) makes the frame rate drop". Probably my bad.
That might still be the case, and if memory allows it, sequential execution is still an option.
Technically, a producer/consumer architecture is a sequential operation in that the consumer can't operate on a piece of data in the queue until after the producer has put it there. The parallel part is that it can do it at its own pace while allowing the producer to move on and produce additional data. Thus the two loops are operating in parallel.
Sure... I was going for 'all acquisition' followed sequentially by 'all storage'. If memory allows it, that would make a much simpler solution.
Parallel execution IS an option in his case.
Unless the storage, even in parallel, slows down the acquisition. 500 FPS is pretty steep (depending on the image size of course).
It allows the consumer to do some work, albeit saving image data to a file at a much slower rate than the producer is acquiring it. The architecture should allow the code to run a little bit longer before running out of memory than if it stored up all the data before writing to the file.
Agreed.
The key thing the OP needs to do is get the actual image data provided by the purple IMAQ reference stored in the queue, and not just the IMAQ reference. This needs to happen whether he does it fully sequentially, like his original VI shows, or improves upon it and turns the 2nd loop in the sequence into a loop that runs in parallel with the 1st loop.
Getting the actual image data from the reference in the acquisition loop will slow it down.
I'd simply open new IMAQ references in the acquisition loop (generate names based on the loop counter), and close them in the storage loop. If the number of frames is known up front, he might even get those references (sequentially) in a loop before the acquisition loop. Done that before and it works great. The array/queue of IMAQ references will do all the work.
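Roughly, the idea is this (a Python sketch only, since I can't draw a LabVIEW diagram here; the dictionary of byte arrays and the generated names are just stand-ins for IMAQ references, not IMAQ calls):

import queue, threading

NUM_FRAMES = 100
FRAME_SIZE = 640 * 480                                     # stand-in size of one frame, in bytes

# "Open" one buffer per frame up front, named after the loop counter,
# just as you would create named IMAQ references before the acquisition loop.
buffers = {"img_%d" % i: bytearray(FRAME_SIZE) for i in range(NUM_FRAMES)}
q = queue.Queue()

def acquisition_loop():
    for i in range(NUM_FRAMES):
        name = "img_%d" % i
        buffers[name][:] = bytes([i % 256]) * FRAME_SIZE   # pretend camera grab into buffer i
        q.put(name)                                        # queue the name of THIS frame's buffer
    q.put(None)                                            # sentinel: acquisition done

def storage_loop():
    while True:
        name = q.get()
        if name is None:
            break
        # ... write buffers[name] to disk here ...
        del buffers[name]                                  # "close" the buffer after saving

t1 = threading.Thread(target=acquisition_loop)
t2 = threading.Thread(target=storage_loop)
t1.start(); t2.start(); t1.join(); t2.join()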
07-13-2018 10:40 AM
If you are saving RGB (4 bytes/pixel) 640 x 480 images at 500 fps, that comes to generating >600 MB/sec of data that needs to be (a) possibly converted by a Video Codec and (b) written to disk. A Web Search found this earlier Forum Post that suggested a two-step process: during Acquisition, save the Images as binaries, then in a post-processing step, convert the Binaries to AVI (or some other video format).
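(Quick sanity check on that number, plain arithmetic using the figures above:)

width, height   = 640, 480
bytes_per_pixel = 4                                    # RGB stored as 4 bytes per pixel
fps             = 500
print(width * height * bytes_per_pixel * fps / 1e6)   # about 614 MB per second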
As others have noted, you need to use a Producer/Consumer design pattern. The Producer gets images from the Camera's Buffer(s) and enqueues them. The Consumer runs "as fast as data are available" and spools the images (in whatever format) to disk. Since you are using LabVIEW 2016, you might want to consider using the new Stream Channel to connect your Producer and Consumer loops (conceptually simpler than Queues, and, in my opinion, more "visually appealing" and intuitive, as well).
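Stripped of the LabVIEW graphics, the pattern is just this (a Python sketch for illustration only; grab_frame and the raw-binary file are placeholders, not IMAQdx calls):

import queue, threading

frame_queue = queue.Queue()
STOP = object()                             # sentinel telling the Consumer to finish

def grab_frame():
    return bytes(640 * 480)                 # dummy grayscale frame standing in for a camera grab

def producer(n_frames):
    for _ in range(n_frames):
        frame_queue.put(grab_frame())       # enqueue a copy of the image data, not a reused reference
    frame_queue.put(STOP)

def consumer(path):
    with open(path, "wb") as f:
        while True:
            frame = frame_queue.get()       # runs "as fast as data are available"
            if frame is STOP:
                break
            f.write(frame)                  # spool raw frames; convert to a video format off-line

threading.Thread(target=producer, args=(300,)).start()
consumer("frames.bin")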
Bob Schor
07-13-2018 10:54 AM
@Bob_Schor wrote:
As others have noted, you need to use a Producer/Consumer design pattern.
Still not convinced. At that DAQ speed, P/C will only help lengthen the recording time a bit (maybe 30%?), at the price of more complex code (although channel wires do indeed make it easier).
It's up to the OP, I think... Let us know the details, so we don't have to guess, if you need more help.
07-13-2018 01:11 PM
wiebe@CARYA wrote:
Think we're back on the same page now...
I'm glad to hear that.
Unless the storage, even in parallel, slows down the acquisition. 500 FPS is pretty steep (depending on the image size of course).
You know I overlooked that the OP said 500 FPS in the original message. I agree that is very fast.
It allows the consumer to do some work, albeit saving image data to a file at a much slower rate than the producer is acquiring it. The architecture should allow the code to run a little bit longer before running out of memory than if it stored up all the data before writing to the file.
Agreed.
The key thing the OP needs to do is get the actual image data provided by the purple IMAQ reference stored in the queue, and not just the IMAQ reference. This needs to happen whether he does it fully sequentially, like his original VI shows, or improves upon it and turns the 2nd loop in the sequence into a loop that runs in parallel with the 1st loop.
Getting the actual image data from the reference in the acquisition loop will slow it down.
That may be. We won't know how much it slows it down unless he tries it.
I'd simply open new IMAQ references in the acquisition loop (generate names based on the loop counter), and close them in the storage loop. If the number of frames is known up front, he might even get those references (sequentially) in a loop before the acquisition loop. Done that before and it works great. The array/queue of IMAQ references will do all the work.
I think that is perfectly acceptable.
wiebe@CARYA wrote:
@Bob_Schor wrote:
As others have noted, you need to use a Producer/Consumer design pattern.
Still not convinced. At that DAQ speed, P/C will only help lengthen the recording time a bit (maybe 30%?), at the price of more complex code (although channel wires do indeed make it easier).
It's up to the OP, I think... Let us know the details, so we don't have to guess, if you need more help.
I don't think a producer/consumer loop adds much more complexity than the original two-sequential-loops scenario. If a producer/consumer is too complex for someone to program, then the whole concept of trying to make this code run at 500 FPS would be way, way, way over that same person's head.
07-13-2018 07:20 PM
Thanks for the help everyone. I'm helping with research at a university and am only able to work one day a week, so I can't work on or share changes till next week. The single IMAQ reference being re-written was definitely the issue. This is my first time programming with LabVIEW and I was stupidly under the impression that the Queue was used as buffers. I will definitely try the producer/consumer loop when I get the chance, as it logically makes sense to save some of the images while the camera acquires them, to allow the run time to be longer, which I may need. I'm working with a camera that captures grayscale 514 x 640 images at max 540 fps, for 10 seconds up to, I believe, as long as 5-10 minutes, so hopefully producer/consumer changes can get it to work at close to the desired frame rate for that long.
Much Appreciated Help
07-14-2018 10:14 AM
@danderson1 wrote:
The single IMAQ reference being re-written was definitely the issue. This is my first time programming with LabVIEW and I was stupidly under the impression that the Queue was used as buffers. I will definitely try the producer/consumer loop when I get the chance, as it logically makes sense to save some of the images while the camera acquires them, to allow the run time to be longer, which I may need. I'm working with a camera that captures grayscale 514 x 640 images at max 540 fps, for 10 seconds up to, I believe, as long as 5-10 minutes, so hopefully producer/consumer changes can get it to work at close to the desired frame rate for that long.
My introduction to LabVIEW Video came after about 3 years of LabVIEW development work. Our task was to capture short clips (up to 10 seconds) of RGB 640x480 images at 10 fps on receipt of a trigger signal. That's when I learned that "Video is different".
It starts with IMAQdx Configure Acquisition VI. There are two parameters -- Continuous? (a two-valued Enum, essentially a Boolean) and Number of Buffers, which for Continuous acquisition is the number of internal buffers inside the Driver.
When you start a Grab (Continuous Acquisition), the Buffers are used in a "ring-buffer" configuration. Each image is associated with an incremental Buffer Number (0, 1, 2, ...), but if you have, for example, configured 6 buffers, Image #7 will go into Buffer (7 mod 6), or Buffer #1. By default, IMAQdx Grab2 waits until the "Next" buffer has acquired an Image (so this loop wants to run at the frame rate you specified), and uses the Ring Buffer you defined as the first set of "buffers" for your images. It also provides you a reference (Image Out) to the most recent Image (for viewing, for example, but probably not at 500 fps!).
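In other words (a tiny Python illustration of the indexing only, not of IMAQdx itself):

NUM_BUFFERS = 6
ring = [None] * NUM_BUFFERS

for frame_number in range(10):                 # acquire frames 0..9
    slot = frame_number % NUM_BUFFERS          # e.g. frame 7 lands in buffer 1
    ring[slot] = "pixels of frame %d" % frame_number

print(ring)                                    # frames 0-3 are gone, overwritten by frames 6-9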
To save the Images, you put them on a Producer/Consumer Queue. Note that this Queue does not contain a lot of data -- the "Image" is really a reference to the storage location within the Driver where the pixels are stored. The Consumer is responsible for getting the pixels out from the Driver and saving them to disk in some suitable format, either a Video format (which takes some processing time) or in a "raw image" (binary) format, possibly for off-line conversion to Video.
Let us know how you make out with this.
Bob Schor
07-16-2018 03:05 AM
@danderson1 wrote:
Thanks for the help everyone. I'm helping with research at a university and am only able to work one day a week, so I can't work on or share changes till next week.
That's OK, we're not in a hurry. Feel free to post at any pace you like.
@danderson1 wrote:
The single IMAQ reference being re-written was definitely the issue. This is my first time programming with LabVIEW and I was stupidly under the impression that the Queue was used as buffers.
That's not stupid, it's actually true. It's just that the data you are buffering is a pointer to the same memory. The queue is used as a buffer, so it's IMAQ that throws you off.
IMAQ is in general weird in many ways. It's designed to be efficient, and apparently when it was created >15 years ago, weird semi-pointer / by-wire hybrid data types seemed like a good idea.
Your LabVIEW experience without IMAQ will be different.
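To make that concrete, here is the same trap in plain Python (a bytearray standing in for the single, re-used IMAQ image buffer):

from queue import Queue

shared_buffer = bytearray(4)                  # the one re-used "image"
q = Queue()

for frame in range(3):
    shared_buffer[:] = bytes([frame]) * 4     # a new image arrives, old pixels overwritten
    q.put(shared_buffer)                      # queue the reference...
    # q.put(bytes(shared_buffer))             # ...queueing a copy would preserve each frame

while not q.empty():
    print(q.get())                            # every element shows the LAST frame's data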
@danderson1 wrote:
I will definitely try the producer/consumer loop when I get the chance, as it logically makes sense to save some of the images while the camera acquires them, to allow the run time to be longer, which I may need. I'm working with a camera that captures grayscale 514 x 640 images at max 540 fps, for 10 seconds up to, I believe, as long as 5-10 minutes, so hopefully producer/consumer changes can get it to work at close to the desired frame rate for that long. Much Appreciated Help
You'll probably find out you have conflicting interests here: reaching that frame rate and that length of recording conflict. You can probably do some calculations up front, and if it turns out that memory is a problem for a 10 min. recording, you should use P/C. If CPU is a problem, you probably need to do as little as possible during recording, and a sequence makes more sense. Then again, a P/C is easily converted to act as a sequence; the reverse is not...
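Some rough numbers for your camera (assuming 8-bit pixels; double everything if it delivers 16-bit):

width, height, fps = 640, 514, 540
bytes_per_frame = width * height              # about 0.33 MB per frame
rate = bytes_per_frame * fps                  # about 178 MB per second

print(rate * 10  / 1e9)                       # 10 seconds -> roughly 1.8 GB, can live in RAM
print(rate * 600 / 1e9)                       # 10 minutes -> roughly 107 GB, has to stream to disk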
How are you going to store those images? I'd expect the IMAQ AVI VIs to be efficient.
07-19-2018 02:14 PM
Separate IMAQ buffers for each image are working. Here's the working Producer/Consumer code I wrote, if anyone's interested. Feel free to reply if you see any improvements. Thanks for all the help.
07-19-2018 04:48 PM
You know, even though I've worked with IMAQdx and images and all, I haven't coded things recently so I've forgotten some of the details. Nothing like writing a little demo to remind yourself how things work!
So here is a simple example, using the Laptop Camera on my PC which takes 30 frames per second, so I'm taking 300 frames or 10 seconds of images. The Producer "takes" the Images, the Consumer "shows" them. But where's the Queue?
Notice the Frame in the path going to the Consumer, with a 1.5 second "Wait" in there. [Incidentally, this is one of the few times I use a Frame, since Wait functions don't "anchor" to the Error Line]. Before you code this up and try it out, predict what you'll see.
Also notice the 33 msec Wait in the Consumer. What if that were set to 0? Again, make a guess before you run the code.
This is coded in LabVIEW 2016, but you should be able to reproduce this in an earlier Version as everything is "out in the open". Everything you see here is there for a reason, and is "there" (meaning "where I put it") for a reason (which, I'll admit, might be wrong, but I doubt it).
Bob Schor