09-27-2012 12:01 PM
Hi,
I'm fairly new to LabVIEW, so I'm hoping the solution to the problem I'm having will be fairly obvious to someone. I want to write images taken from a microscope-mounted FireWire camera, operating at 50 Hz, in a continuous stream to the HDD when the user presses a button. I've attached the VI that I have written to do this; it uses the producer/consumer parallel-loop architecture. The producer loop displays a live image; when the 'record movie' button is pressed, this image and a time stamp are bundled into a cluster and passed into a queue. In the consumer loop the clusters are removed from the queue, the time data is written to a text file, and the image is written to a TIFF file.
The problem I am having is that when I analyse the time data, I find that the time taken to get an image from the camera frequently deviates from the 20 ms average, sometimes taking >0.5 s to get the next image. I'm missing frames! If I disable the TIFF-writing function so that only the time data is recorded, the problem disappears. I've also noticed that both loop counters will simultaneously freeze for a short period. My understanding is that the two parallel loops should function independently, with the consumer loop waiting for items to be placed in the queue, and that the producer loop should be able to keep putting items in the queue at 50 Hz even if the consumer loop has slowed down waiting for the HDD.
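For anyone not familiar with the pattern, here is a minimal sketch of what the VI is meant to do, written in Python purely as a textual stand-in for the block diagram (`grab_frame` and the write step are placeholders for the IMAQdx grab and the TIFF/text writes, not real NI calls):

```python
import queue
import threading
import time

frame_queue = queue.Queue()

def grab_frame():
    # Stand-in for the IMAQdx grab: returns a timestamp and fake pixel data.
    return time.monotonic(), b"image-bytes"

def producer(stop):
    # Camera-rate loop: bundle (timestamp, frame) into a "cluster" and enqueue it.
    while not stop.is_set():
        frame_queue.put(grab_frame())
        time.sleep(0.02)  # 50 Hz camera period

def consumer(stop):
    # Disk-rate loop: dequeue clusters at its own pace, draining the queue on stop.
    while not stop.is_set() or not frame_queue.empty():
        try:
            timestamp, frame = frame_queue.get(timeout=0.1)
        except queue.Empty:
            continue
        # Here the real VI writes `frame` to TIFF and `timestamp` to the text file.

stop = threading.Event()
threads = [threading.Thread(target=producer, args=(stop,)),
           threading.Thread(target=consumer, args=(stop,))]
for t in threads:
    t.start()
time.sleep(0.2)  # "record" for a fraction of a second
stop.set()
for t in threads:
    t.join()
```

In this sketch the producer is never blocked by the disk: the queue absorbs any backlog, which is the behaviour the VI is supposed to have.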
Any help would be greatly appreciated.
Andy
09-28-2012 07:59 AM
Hi Andy,
I have a question for you: when you press the record button, are you planning to store just a single image, or a sequence of images until record is turned off?
Kind Regards
09-28-2012 08:04 AM
Hi Kevin,
The plan is to store a sequence of numbered images until the record button is turned off, with the corresponding time stamp saved in the text file.
Andy
10-02-2012 09:29 AM
Hey Andy,
I have taken a look at your code and have some feedback to give you:
We do not tend to use the producer/consumer architecture with vision applications, because we are not buffering data into the queue. With a standard DAQ task we have data values we can queue up; with vision we have a pointer to a memory location, so with producer/consumer we would be queueing up pointers to the same memory location. If we have references in the queue, the time stamp will not match the image (defeating the purpose of the queue), because the image in memory may have changed by the time that element is dequeued.
However, I have stuck with your architecture. Just a word of caution: if the consumer loop is slower than the producer, all your images and time stamps will be incorrect.
The data you have enqueued is not an image (frame) but a reference to that image. The image itself is stored in a temporary memory location created with IMAQ Create.
This reference is what gets enqueued and passed down the queue. If the memory location has been updated with a new frame by the time the reference is dequeued, the consumer will read whatever is currently in that location, i.e. the latest frame. This can be avoided by slowing down the producer loop so the consumer loop has enough time to read the temporary memory location before it is updated.
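To make the aliasing concrete, here is a small Python analogy (not LabVIEW; the `bytearray` stands in for the single image buffer made by IMAQ Create):

```python
import queue

# One shared, mutable buffer -- analogous to the single image created
# by IMAQ Create and reused by every grab.
image_buffer = bytearray(b"frame-1")
q = queue.Queue()

# The producer enqueues the *reference*, not a copy of the pixels.
q.put(image_buffer)

# Before the consumer dequeues, the next grab overwrites the buffer in place.
image_buffer[:] = b"frame-2"

# The dequeued element shows the latest frame, not the frame that was enqueued.
dequeued = q.get()
print(dequeued)  # bytearray(b'frame-2')
```

The queue did its job, but because both ends point at the same memory, the first frame is gone by the time the consumer looks at it.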
You can check this by adding a Get Queue Status to the consumer loop to see how many elements are left in the buffer; ideally this should be 0.
You can use a time stamp to pull the time and date instead of using the tick count; it is a more elegant way to get the time data.
For more accurate timing you will need a real-time or FPGA system: Windows is not a real-time OS, and other tasks running in the background mean it cannot guarantee accurate timing.
10-08-2012 05:48 AM
Hi Kevin,
Thanks for the very useful reply. I think it's helped me understand a little better what is going on; however, I have a few questions. Could the reason that both the producer and consumer loops slow down simultaneously be that LabVIEW is waiting to write the image data to the HDD in the consumer loop before releasing the image pointer? If the pointer is not released, then presumably it would hold up the producer loop and be the cause of the missing frames I am experiencing. If that is the case, it would at least mean that the time stamps I am recording are correct, as a new image and time stamp would not be grabbed until the last image pointer is released. It would also explain why, when I disable the image-writing-to-HDD section of the code, I no longer see any dropped frames.
I'm not sure that putting a delay into the producer loop will ultimately work for my application, as I may want to increase the frequency at which I grab frames from the camera. Would converting the image data from a pointer to an array, bundling the image array and time stamp into a cluster, and adding that cluster to the queue work? I assume that by converting the image data to an array I free the image pointer to grab another frame in the producer loop, while the consumer loop can take as long as required to write the image data to the HDD without holding up the producer loop.
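To illustrate what I mean, here is a rough Python analogy of copy-before-enqueue (the `bytes()` copy standing in for something like an image-to-array conversion; names are illustrative, not real NI calls):

```python
import queue

# The single shared grab buffer, as before.
image_buffer = bytearray(b"frame-1")
q = queue.Queue()

# Enqueue an independent copy of the pixel data, not the reference.
q.put(bytes(image_buffer))

# The next grab can now overwrite the shared buffer freely.
image_buffer[:] = b"frame-2"

# The dequeued copy is unaffected by the overwrite.
dequeued = q.get()
print(dequeued)  # b'frame-1'
```

If this analogy holds for LabVIEW image references, copying would decouple the two loops at the cost of one memory copy per frame.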
You say that a producer-consumer architecture is not normally used with imaging applications. Would using a circular buffer instead of a queue be a better option?
The reason I was using the tick counter is that, as you say, the Windows clock is not real time. I also don't need the date, just the relative time between one image and the next, so that I can check that the time gap between images is constant, which it is (within a degree of error) when image writing to the HDD is disabled.
Thanks for your help.
Andy
10-08-2012 06:55 AM
Hi Andy,
I do not feel that is the reason, as when a new image is grabbed it will be placed in that memory location and the previous image will be destroyed. It is strange behaviour that both loops slow down simultaneously, as the whole point of using producer/consumer is to have two parallel loops running independently of each other.
The data coming out of IMAQdx Grab is NOT an image; it is a reference to a memory location, so by putting it into an array you would just have an array of references to the same memory location. As I already mentioned, the memory location stays the same but what is contained in that location changes. Have a look at my example: make the consumer loop faster than the producer loop and you will see why.
A circular buffer would not be a better architecture, as you will have the same issue of holding a buffer of references to the memory location. Usually with vision applications, sequence structures are used so you are guaranteed not to lose frames. Take a look at some of the vision examples. You could also create more memory locations (multiple IMAQ Creates) if you wanted to.
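The "more memory locations" idea amounts to a ring of buffers: the grab cycles through N separate images, so a location is not reused until the consumer has finished with it. Here is a rough Python sketch of the idea (the buffer count and names are illustrative, not NI API):

```python
from collections import deque

NUM_BUFFERS = 4  # e.g. four separate IMAQ Create calls

# Pre-allocate a fixed pool of image buffers; grabs cycle through them.
buffers = [bytearray(b"empty---") for _ in range(NUM_BUFFERS)]
ready = deque()  # indices of buffers holding frames not yet consumed

dropped = 0
for frame_number in range(10):
    idx = frame_number % NUM_BUFFERS
    if idx in ready:
        # The consumer has not finished with this buffer yet, so the
        # producer must drop (or wait) rather than overwrite it.
        dropped += 1
        continue
    buffers[idx][:] = b"frame-%02d" % frame_number  # the "grab"
    ready.append(idx)
    # In this sketch the consumer keeps up, draining one frame per grab,
    # so no buffer is ever overwritten while still queued.
    consumed_idx = ready.popleft()
    last_consumed = bytes(buffers[consumed_idx])

print(dropped)  # 0
```

As long as the consumer never falls more than NUM_BUFFERS frames behind, each dequeued buffer still holds the frame that was grabbed into it, which is the guarantee the single-buffer design cannot give.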
The time stamp I have used is formatted to show only the time, with seconds to two decimal places. I guess it is up to you which method you use for timing.