09-06-2013 08:53 AM
Help Please!!!
I've been searching for an execution time issue in our application for a while now. Here is some background on the application:
Problem: During operation, the VI that writes the data to the text file will periodically take many hundreds of milliseconds to execute. Normal execution times are on the order of 1 ms or less. The issue happens randomly during operation, usually many seconds apart, with no apparent pattern to when it occurs.
Attached is a screenshot of the VI in question. The timing check labeled "A" is the one that shows the troubling execution time. All the other timing checks show 0 ms every time this issue occurs. I simply can't see what else is holding this thing up. The only unchecked subVI is the "append error call chain" call. I've gone through the hierarchy of that VI and ensured that everything is set for reentrant execution. I will check that too soon, but I really don't expect to find anything.
Where else can I look for where the time went? It doesn't seem to make sense.
Thanks for reading!
Tim
09-06-2013 09:02 AM
Tim,
wow, that thing is huge... Nevertheless, posting code is usually better than posting images.
There are a couple of points to look into first: which version of LabVIEW are you running, and what is the OS?
Norbert
09-06-2013 09:04 AM
Tim,
The OS is probably the culprit. As the text file grows, the OS may need to re-allocate file space or fragment the file. And the OS does not know or care that you will be ready for the next write before it is.
The fix is to use parallel loops. Do the acquisition in one loop and the file writes from a buffer in another loop. Look at the Producer/Consumer Design Pattern.
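Roughly, in text form (a minimal Python sketch standing in for the LabVIEW pattern, since LabVIEW is graphical; the file name and data format here are made up):

import queue
import threading
import time

data_q = queue.Queue()

def producer():
    # Stand-in for the time-critical acquisition loop: enqueueing is fast
    # and never waits on the disk.
    for i in range(1000):
        data_q.put(f"{time.time():.6f},{i}\n")
    data_q.put(None)  # sentinel: tell the consumer we're done

def consumer(path):
    # File-write loop: an occasional slow write here just lets the queue
    # grow briefly; it never delays the producer.
    with open(path, "a") as f:
        while True:
            line = data_q.get()
            if line is None:
                break
            f.write(line)

threading.Thread(target=producer).start()
threading.Thread(target=consumer, args=("log.txt",)).start()

The LabVIEW equivalent is two parallel while loops sharing a queue reference, which is what the Producer/Consumer design pattern template gives you.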
Lynn
09-06-2013 09:06 AM
That's the nature of dealing with file reads and writes in any normal operating system: sometimes they take longer than others. Windows is doing other things and may need to read from or write to the hard drive, preempting the file read or write your program is doing.
If these time pauses are a problem, then you should move the file operations out of the time critical loop and put them in another while loop passing data by way of a queue with a producer/consumer architecture.
09-06-2013 09:20 AM - last edited on 05-12-2025 09:12 AM by Content Cleaner
Agree with everybody else. You should be using a Producer/Consumer. That way your data can be collected at whatever rate it needs, and your file I/O can process the data as quickly as it can.
09-06-2013 09:50 AM
This VI is only a very small part of a very large application. It is already in its own queue: data is collected and immediately passed into a queue for processing, and part of that processing is writing to disk. I can move the disk writes to another queue, but that seems like I'd just be pushing the problem somewhere else. What happens right now is that the data can pile up at the processing queue and eventually spiral out of control, causing LabVIEW to crash due to insufficient memory.
However, if this were indeed a disk write time issue, wouldn't that show up in loop times? (C) Those have all been 0's across the board when this occurs. If I had seen the time show up in "C", I wouldn't still be looking for the source. At one point, I also put timing directly around the "write text to disk" call and again found no time lost there. I've always suspected a disk write timing issue, but I haven't been able to produce the timing data that shows it. I'm afraid if I do all the work to move the disk writes out into their own queue, it won't do any good.
I did a source file distribution (see attached). I'm sure it wouldn't be too useful to run without the rest of the application, but you can at least poke around in it.
09-06-2013 11:40 AM - edited 09-06-2013 11:42 AM
You should probably increase how much data you write with a single Write to Text File. Move the Write to Text File out of the FOR loop and just have the data to be written autoindex to create an array of strings. The Write to Text File will accept the array of strings directly, writing a single line for each element in the array.
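In text-language terms (a Python analogy with placeholder data, since LabVIEW is graphical), the difference is one write call per line versus one call per batch:

lines = ["sample 1", "sample 2", "sample 3"]  # placeholder data

# One call per line: every call is another chance for the OS to stall you.
with open("log.txt", "a") as f:
    for line in lines:
        f.write(line + "\n")

# One call per batch: the per-call overhead and stall risk are paid once
# per chunk instead of once per line.
with open("log.txt", "a") as f:
    f.write("\n".join(lines) + "\n")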
Another idea I am having is to use another loop (yes another queue as well) for the writing of the file. But you put the Dequeue Element inside of another WHILE loop. On the first iteration of this inside loop, set the timeout to something normal or -1 for wait forever. Any further iteration should have a timeout of 0. You do this with a shift register. Autoindex the read strings out of the loop. This array goes straight into the Write to Text File. This way you can quickly catch up when your file write takes a long time.

NOTE: This is just a very quick example I put together. It is far from a complete idea, but it shows the general idea I was having with reading the queue.
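For readers without the attachment, here is a rough text-language analogy of that dequeue-and-drain idea (a Python sketch; the queue and file are placeholders for whatever you actually use):

import queue

data_q = queue.Queue()  # placeholder for the file-writer's queue

def drain_and_write(f):
    # The first dequeue blocks, like the wait-forever timeout on the first iteration...
    batch = [data_q.get()]
    # ...then keep dequeuing with a zero timeout until the queue is empty,
    # so any backlog left by one slow write is flushed in a single larger write.
    while True:
        try:
            batch.append(data_q.get_nowait())
        except queue.Empty:
            break
    f.write("".join(batch))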
09-06-2013 03:47 PM - edited 09-06-2013 03:47 PM
Again, sounds like a great suggestion. However, can someone please explain why this is a disk write issue if it's not reflected in the execution time of the for loop? I really don't want to go fixing something that isn't actually the problem; I feel like I'm going to do all that work and it will still have execution time issues. Can someone please explain how the "write text to disk" VI can execute in less than 1 ms AND the problem still be writing to disk?
09-06-2013 07:13 PM
The point is that with a separate file write loop, you do NOT need (or even want) to write every millisecond. You accumulate the data in a buffer - a shift register, a queue, or a subVI - and then write a larger chunk of data to the file at somewhat irregular intervals. The timing and exact quantity of data written on each iteration is not important as long as you write fast enough and often enough to avoid a buffer overflow.
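As a rough illustration of "write fast enough and often enough" (a Python sketch; the thresholds are made up and would need tuning for your data rate):

import time

FLUSH_EVERY_N = 500        # made-up thresholds: tune them so the buffer
FLUSH_EVERY_SECONDS = 1.0  # never grows without bound

buffer = []
last_flush = time.monotonic()

def buffer_and_maybe_flush(f, line):
    # Accumulate lines, then write a larger chunk whenever enough data or
    # enough time has built up; the exact moment of each write no longer
    # matters to the time-critical side.
    global last_flush
    buffer.append(line)
    if len(buffer) >= FLUSH_EVERY_N or time.monotonic() - last_flush >= FLUSH_EVERY_SECONDS:
        f.write("".join(buffer))
        buffer.clear()
        last_flush = time.monotonic()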
By taking the occasionally slow file operation out of the time-critical loop, you avoid any possibility of these interruptions.
This is one of the main reasons the Producer/Consumer architecture is so widely recommended.
Lynn
09-06-2013 10:21 PM
@thutch79 wrote:
However, if this were indeed a disk write time issue, wouldn't that show up in loop times? (C) Those have all been 0's across the board when this occurs. If I had seen the time show up in "C", I wouldn't still be looking for the source. At one point, I also put timing directly around the "write text to disk" call and again found no time lost there. I've always suspected a disk write timing issue, but I haven't been able to produce the timing data that shows it. I'm afraid if I do all the work to move the disk writes out into their own queue, it won't do any good.
It's not possible/realistic to have 0 ms as a write time: either an error meant nothing was actually written, or the write was cached by the OS, in which case the several hundred ms is realistic when the cache finally flushes to disk.
But yes, you should see times != 0 in the list.
/Y