binary write crashes timed loop?

Title says it, and I don't see any reason for it to happen with a VI as simple as the attached. Is there an issue related to it, and can someone point me at it?

 

Config is LV 8.6.1 on XP64, on an 8-CPU machine. The attached VI crashes LabVIEW unless "y" is very small, and I can see neither array allocation issues nor disk behavior that could ever explain it. There is no crash if the timed loop is replaced by a while loop.

 

TIA, Enrico

Message 1 of 15

How long does the VI run for before it crashes?

How big do you need to make y before it crashes? (I tried 800 and it was fine)

 

It seems to run OK on my PC (XP 32-bit, dual core), but I did not let it go through all 20k images!

Message 2 of 15

A fraction of a second at most. Running it (actually a slightly more complex ancestor of it) with the light bulb on, the crash happens before Binary Write executes even once. The crash is surely due to Binary Write, as diagram-disabling it removes the crash. Only with y=1 is there no crash for at least several seconds. I'd also exclude disk issues, as I tried writing to three different (and fast) disks on the XP64 machine.

 

I too tried the VI on an XP32 platform, and it doesn't crash there either. I know XP64 is not officially supported, but this issue is too macroscopic, IMHO.

 

Enrico

Message 3 of 15
What happens if you replace the timed loop with a normal while loop? Does it still crash your 64bit machine?
Message 4 of 15

Nope, it doesn't. I said so in my first post. Enrico

Message 5 of 15

I think you have two options:

 

1. Move to a supported OS. XP64 will never be supported.

2. Use a normal while loop. At a 200msec period, you'll probably never see the difference.

Message 6 of 15

Enrico Segre wrote:

nope, it doesn't. I said so in my first post. Enrico


Sorry, I missed that.

 

I agree with Dennis about using a standard while loop and a wait primitive (or Wait Until Next ms Multiple); you are not really gaining anything by using a timed loop.

Message 7 of 15

Unfortunately that is not the idea. I put 200 ms in the example to rule out subtler timing issues, but I want to go down to 2 ms. There, the jitter of a timed wait is not acceptable. A timed loop, instead, has a configuration option for recovering the original phase. And, in the larger picture, "images" are dequeued from a ring buffer, where noncumulative jitter is tolerable but incremental jitter is not. Perhaps I could try a while loop with a waiting time somewhat shorter than needed and have a queue time the while loop, but that may lead to something else.
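To illustrate the distinction (a minimal sketch in Python, since a LabVIEW diagram can't be shown inline; names are mine, not LabVIEW's): scheduling each iteration against an absolute deadline recovers the original phase, the way a timed loop can be configured to, whereas a plain wait-from-now lets jitter accumulate across iterations.

```python
import time

def run_periodic(task, period_s, iterations):
    """Schedule against absolute deadlines so jitter does not accumulate:
    each deadline is computed from the start time, not from the previous
    wake-up, mimicking the timed loop's phase-recovery behaviour."""
    start = time.monotonic()
    for i in range(1, iterations + 1):
        task(i)
        deadline = start + i * period_s
        remaining = deadline - time.monotonic()
        if remaining > 0:
            # A relative time.sleep(period_s) here would instead drift by
            # the task's execution time on every iteration.
            time.sleep(remaining)

if __name__ == "__main__":
    ticks = []
    run_periodic(lambda i: ticks.append(time.monotonic()), 0.002, 50)
```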

Is all the blame for such a macroscopic crash really down to the unsupported OS?

Enrico

Message 8 of 15

Your 8-CPU system is very impressive but, unfortunately, that computer also has some mechanical parts. Reading this thread, I thought I had somehow become obsolete, but I have not. The fastest hard disks (15,000 RPM) have about 128 MB/s throughput and 2 ms access time. Given current technology, I would not choose to write to a file with an exact periodicity of 2 ms or so. More than this, writing 128,000 bytes every other millisecond means about 64 MB/s.
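The arithmetic behind that 64 MB/s figure, as a quick sketch:

```python
# Sanity-check the sustained throughput: 128,000 bytes every other
# millisecond, i.e. one write per 2 ms period.
frame_bytes = 128_000
period_s = 0.002
throughput = frame_bytes / period_s     # bytes per second
print(throughput / 1e6)                 # 64.0 (MB/s)
```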

 

In your place I would enqueue images in the timed loop, then dequeue and write them to disk somewhere else in the block diagram (e.g. a while loop), letting the OS write the data at its own pace while keeping an eye on the throughput (for example, by watching the maximum queue size).

In other words: I would not recommend writing such large files in timed loops.

(Tip: writing text files might be faster than binary)

 

Wow, all of this just for 40 seconds of run-time, to fill up an entire TB hard disk.

Message 9 of 15

Thanks for all these suggestions, which of course I'm aware of. But please keep in mind that what I submitted is only a test snippet which reproduces the bug 100%, which is the point of my post; in the real thing I'm arranging matters quite differently.

 

As a matter of fact (that could be the subject of another thread, and actually partially was; see http://forums.ni.com/ni/board/message?board.id=170&message.id=421387#M421387), my system now has a RAID0 array of 8 carefully matched SATA disks and a fast controller (not the one mentioned in that thread, which gave odd problems) capable of writing 820 MB/s at the beginning of the stripe. Total disk volume is ~6 TB (the actual usable length is lower due to the performance derating along the disk), accounting for some 2h+ of 1280x1024 writes at 500 fps. Disk size was also a consideration in choosing a 64-bit OS; Windows was the forced choice because of IMAQ.
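Those figures roughly check out, assuming 8-bit (1-byte) pixels, which is my assumption here rather than anything stated above:

```python
# Rough check of the sustained data rate and recording time for
# 1280x1024 frames at 500 fps, assuming 8-bit pixels.
bytes_per_frame = 1280 * 1024          # ~1.31 MB per frame
rate = bytes_per_frame * 500           # bytes/s at 500 fps
seconds = 6e12 / rate                  # filling ~6 TB of disk
print(rate / 1e6, seconds / 3600)      # ~655 MB/s, ~2.5 hours
```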

 

Rather, can you please elaborate on "(Tip: writing text files might be faster than binary)"? That may be an interesting point. In the real thing I'm using unbuffered writes anyway.

 

Thanks, Enrico

Message 10 of 15