LabVIEW

AI read hesitates (shows in chart)

Using 7.1
When I use analog input data acquisition at a rate of 1000 samples/s, I see odd behaviour in my AT-MIO-64E buffer backlog. The loop I use is plenty fast and reads enough samples out of the buffer for the backlog to stay at zero. Instead, I see the backlog increase quickly over a certain number of cycles and then quickly empty again. This 1-2 second cycle repeats itself over and over. After each cycle the chart stops for an instant, even when the backlog is not empty. Does anybody know what is happening? I attached the VI.

Aart-Jan
Message 1 of 4
Welcome to the world of Windows and other non-deterministic operating systems!

There are a number of issues.

1) "Wait Until Next ms Multiple" will wait the time you specify only if the work performed in parallel with it finishes in less time than the specified interval. Otherwise it misses the first multiple and has to wait for the one after, cutting your effective loop rate.
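To see how an overrun halves the loop rate, here is a Python sketch of the timing arithmetic (a simplified analogy, not LabVIEW code; the 10 ms interval and work times are made up for illustration):

```python
import math

def next_multiple(now_ms, interval_ms):
    # "Wait until next ms multiple": sleep until the next whole
    # multiple of interval_ms strictly after the current time.
    return math.ceil((now_ms + 1) / interval_ms) * interval_ms

# Work finishes within the interval: iterations land on
# consecutive multiples.
t = 0
ticks = []
for _ in range(3):
    t += 4                      # work takes 4 ms, under the 10 ms interval
    t = next_multiple(t, 10)
    ticks.append(t)
# ticks -> [10, 20, 30], one iteration every 10 ms

# Work overruns the interval: the first multiple is missed and the
# loop waits for the next one, so iterations land 20 ms apart.
t = 0
slow = []
for _ in range(3):
    t += 14                     # work takes 14 ms, over the 10 ms interval
    t = next_multiple(t, 10)
    slow.append(t)
# slow -> [20, 40, 60], half the intended rate
```

A single slow iteration (a memory allocation, a Windows hiccup) is enough to push one loop cycle past the multiple, which is exactly when the backlog starts to pile up.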

2) When you update your chart, space has to be allocated to store all of your data. LV is very smart about allocating memory but cannot read your mind. When the space currently allocated for storage needs to be extended, LV has to allocate more memory, and allocating memory takes a lot of time. If the allocation time exceeds the loop interval spec'd by the "Wait Until...", you will miss a multiple.
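The cost pattern can be sketched in Python (an analogy only; LabVIEW's allocator behaves differently in detail, but growth-by-append has the same character):

```python
import sys

# Growing a chart history point by point forces periodic
# reallocations of the backing buffer.
history = []
reallocs = 0
last_size = 0
for i in range(10000):
    history.append(i)
    size = sys.getsizeof(history)
    if size != last_size:
        reallocs += 1          # the backing buffer was regrown
        last_size = size
# Growth happens in bursts, and each burst copies the whole
# buffer -- the bigger the history, the longer the stall.

# A preallocated fixed-size circular buffer avoids reallocation:
# the chart keeps only the most recent N points and overwrites
# in place.
N = 1000
ring = [0.0] * N
for i in range(10000):
    ring[i % N] = float(i)     # no allocation inside the loop
```

This is why capping the chart history length (or preallocating it) keeps the update cost constant instead of growing with the run time.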

3) Windows was developed to make it look like it could follow your mouse movements. Just about everything else has to wait. Windows will also go off into "la-la-land" just to make sure you can find all of your files fast, or to check for other nodes on the network. The list of distractions Windows is subject to can go on for quite a while. That is the nature of Windows. Only going to a real-time OS will fix that issue.

Enough on what is happening and why. Now how to fix.

1) Use a plain "Wait", not a "Wait Until Next ms Multiple".

2) Use a queue (or another mechanism) to pass your data out of your data acquisition loop to a parallel loop that updates your display. This moves the "memory allocation" delays out of the DAQ loop. The display loop will still experience interruptions, but they will only affect the display, not your DAQ backlog.
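The producer/consumer pattern behind suggestion 2 looks like this in Python (an analogy of the LabVIEW queue pattern; the block sizes and fake AI read are illustrative):

```python
import queue
import threading

data_q = queue.Queue()

def daq_loop():
    # Stands in for the DAQ loop: read a block of samples,
    # enqueue it, and never touch the display.
    for block in range(5):
        samples = [block * 10 + i for i in range(10)]  # fake AI read
        data_q.put(samples)
    data_q.put(None)            # sentinel: acquisition finished

def display_loop(out):
    # Stands in for the display loop: dequeue and "draw".
    # A UI stall here only delays drawing; the DAQ loop keeps
    # emptying the acquisition buffer regardless.
    while True:
        samples = data_q.get()
        if samples is None:
            break
        out.extend(samples)

drawn = []
producer = threading.Thread(target=daq_loop)
consumer = threading.Thread(target=display_loop, args=(drawn,))
producer.start(); consumer.start()
producer.join(); consumer.join()
# drawn now holds all 50 samples in acquisition order
```

In LabVIEW the two functions become two independent while loops on the same diagram, connected only by the queue refnum.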

3) Update your display less often. Let your display updates pile up and apply them in larger chunks. That way the work associated with auto-scaling and displaying is done less often, reducing the demands on your processor.
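Suggestion 3 amounts to decimating the redraws. A Python sketch of the bookkeeping (the chunk size and rates are illustrative):

```python
def batched_updates(samples, chunk):
    """Count redraws when points are applied in chunks of `chunk`
    instead of one at a time."""
    pending = []
    redraws = 0
    for s in samples:
        pending.append(s)
        if len(pending) >= chunk:
            redraws += 1       # one redraw covers the whole chunk
            pending.clear()
    if pending:
        redraws += 1           # flush any leftover points
    return redraws

# At 1000 S/s, per-sample redraws mean 1000 chart updates per
# second; chunks of 50 samples bring that down to 20 updates per
# second, which still looks smooth to the eye.
```

For example, `batched_updates(range(1000), 50)` performs 20 redraws where per-sample updating would perform 1000.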

Now some questions.

How fast is your processor?

How much memory?

Ben
Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 2 of 4
I will try to implement your suggestions.
I am using an IBM laptop with a 1.8 GHz mobile P4 and a whopping 768 MB of RAM.
I plan to have the graph update at least 20 times a second to make it look smooth to the eye.
I will get back to you as soon as I have implemented your suggestions. Thanks Ben!

Aart-Jan
Message 3 of 4
Please keep us posted on your findings.

Ben
Message 4 of 4