LabVIEW

AI Buffer problem

I have 2 AO channels with a buffer size of 16,384 and an update rate of 16,384 S/s (1 cycle per buffer). I also have 10 AI channels with a buffer size of 256 and a scan rate of 256 S/s (1 period per buffer, and we want 256 points per cycle to save to a file). When I run the VI I get a problem with the AI buffer: the program says it did not have time to retrieve the data from the buffer, so the VI crashes. This happens with or without the graphs. We would like to know why this problem occurs and how to solve it, because we are using LabVIEW RT 7.0 with a PXI-8175 RT controller, and its capabilities are greater than that!
Message 1 of 13
Could you please post a copy of the VI you are having trouble running?

I am a little confused as to how you are attempting to acquire the data.

A quick look at your code would probably help us help you.

Ben
Message 2 of 13
Yes, I forgot to attach it to the first message, but I put it in the second one a few minutes later. I can easily send it again. Please answer quickly about what you think is wrong. Maybe it is a loop problem and the program is calculating too many things.
One thing to know: the program worked perfectly with only one AO, but since I implemented the choice of a second AO, I have had this problem.
Message 3 of 13
I think you are correct.

Try a small modification.

Place another AI read just prior to your current AI read.

Wire up a constant indicating you want to read "0" samples. Then wire the "backlog" coming out of the new AI read into your current AI read.

This should tell LV to collect all available samples every time you iterate.
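
LabVIEW is graphical, so I can only sketch the idea in text. The Python snippet below is just pseudocode for illustration; read_ai() is a made-up stand-in for the AI Read VI (data out, scan backlog out), not a real function:

```python
def read_ai(number_of_scans):
    """Dummy stand-in for AI Read: returns (data, scan_backlog)."""
    if number_of_scans == 0:
        return [], 13                     # pretend 13 scans are waiting in the buffer
    return [0.0] * number_of_scans, 0     # hand back that many scans

# First read asks for 0 scans; its only job is to report the backlog.
_, backlog = read_ai(0)

# Second read asks for exactly that many scans, so each loop iteration
# empties whatever has accumulated and the buffer cannot overflow.
data, _ = read_ai(backlog)
print(len(data), "scans read")            # -> 13 scans read
```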

Let me know what happens (please),

Ben
Message 4 of 13
OK, the program returns 13 or 14 for the first scan backlog, and that is the value I had found that lets my VI run correctly without a scan backlog (I still have some trouble when I want to change the amplitude of the AO). The graphs are modified, and I want to know whether the scan backlog influences the file writing. I mean: for us the chart and its quality don't matter, but it is really important to have all 256 points per cycle. So if there is a gap on the chart when I change the amplitude, will I find this gap in the files, or will the program write the data correctly?
Thanks for answering so quickly.
Message 5 of 13
Is the error gone?

Ben
Message 6 of 13
Not really, but I can live with it. We don't need to change the amplitude quickly, so the buffer is not overwritten, but to avoid this I could increase the size of the buffer. The LabVIEW help is not very clear, though, and I don't really know how the buffer works. I want 256 points per cycle, so I made a buffer of 256 and a scan rate of 256 S/s. But if I increase the buffer size (to 512 or 1024, say), how will the program behave? Does it pick up one cycle of 512 points, or does it hold several cycles of 256 points in the buffer? And the scan rate: does it pick up the first 256 points of the buffer, or the right points (I mean, if the buffer size is 512 and the scan rate is 256, does the program pick up 1 point out of 2)? Could you also explain the job of the "number of scans to read at a time"? I know that with this number I will write n points per iteration to a file, but what does it do on the display (in a chart, for example)? We have not understood the relation between this number and the chart (does it display only one point every n points per iteration?). So thank you for your answer, Ben!
Romain
Message 7 of 13
Hi Romain,

I am pressed for time, so I will not be able to do a good job of answering your question today.

I will try to reply tomorrow, when I will be able to do a better job.

Thank you for your patience!

Ben
Message 8 of 13
Hi Romain,

I took a closer look at your code this morning.

I should have said something earlier!

You are trying to write formatted text inside your loop!

This is a major problem in more than one way. First, formatting the data as text requires allocating memory for the strings from time to time. This will impact the determinism of the loop, because the work involved in allocating the memory is rather demanding and of unknown duration. I avoid strings altogether in my deterministic loops.

The next issue is the writing to disk. This is another thing that will destroy your determinism. Try shutting down all of the file I/O related code by commenting it out.

Please try that experiment and let me know if your loop keeps up and no errors are generated.
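
If you do keep some file I/O, writing the samples as raw binary instead of formatted text avoids most of the string allocation (you can re-format the file as text later, off-line). A rough Python analogue of the difference, with made-up file names and placeholder data, just to show the idea:

```python
import struct

samples = [0.0, 0.5, 1.0, 0.5]  # one block of AI data (placeholder values)

# Formatted text: every value becomes a string, which allocates memory
# of unpredictable size on each write.
with open("data.txt", "a") as f:
    f.write("\t".join("%.6f" % s for s in samples) + "\n")

# Raw binary: a fixed number of bytes per sample, no per-value string
# allocation; re-format as text in a post-processing step if needed.
with open("data.bin", "ab") as f:
    f.write(struct.pack("%dd" % len(samples), *samples))
```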

Re: buffers
I generally over-allocate buffers. If I have memory available, I will use it. I will set my buffer size to be twice as large as I expect to need. This handles interruptions. (This comment mostly applies to non-RT environments where determinism is not expected.)
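
As a rough worked example using the numbers in this thread (256 S/s scan rate, a loop running about 20 times a second), the arithmetic below is only illustrative:

```python
scan_rate = 256          # AI scan rate, scans per second
loop_period_s = 0.050    # ~50 ms between AI reads (about 20 reads per second)

scans_per_read = scan_rate * loop_period_s   # ~12.8, i.e. the 13 or 14 backlog seen earlier
buffer_scans = 2 * scan_rate                 # two seconds of headroom instead of one

print(scans_per_read, buffer_scans)          # 12.8 512
```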

Re: Freq of updates
It looks like you are attempting 20 updates a second. This should be attainable if we can get your loop timing correct. I suggest you use a millisecond timer to measure the actual repetition rate of your loop. Once you have that under control, I suspect the rest of the code will fall together nicely.
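
In LabVIEW the millisecond timer is the Tick Count (ms) function; since I can't draw a diagram here, this is only a rough Python analogue of measuring the loop period, with do_one_iteration() standing in for whatever your loop actually does:

```python
import time

def do_one_iteration():
    time.sleep(0.05)     # placeholder for the real AI/AO work

last_ms = time.monotonic() * 1000.0
for _ in range(20):
    do_one_iteration()
    now_ms = time.monotonic() * 1000.0
    print("loop period: %.1f ms" % (now_ms - last_ms))   # should hover near 50 ms
    last_ms = now_ms
```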

Re: Architecture
An RT application is generally broken down into two major divisions, "deterministic" and "non-deterministic". In your application, the formatting and file writing are non-deterministic operations. I would suggest you restructure your VI. The general structure would be as follows.

1) Initialize Operations
This could open files, configure DAQ, and create a queue. After this phase is complete, two VIs would then execute in parallel threads, one set to "Time Critical" priority and the other left at the normal default.

2) Time critical VI
This VI will perform all operations associated with hardware output and input, keeping up with the associated hardware buffer demands. Any data read from the AI operations can then be inserted into the queue created during the initialization phase. Updates of the values written to the AO can be handled using an LV2-style global (functional global). This TC VI should sleep most of the time, waiting for the next opportunity to read from the buffer, etc.

3) Non-TC Loop
The non-TC loop would read from the queue that was created by the initialization phase and is being filled by the TC loop. The queue elements can then be formatted as text and written to file. (Note: formatting data as text is demanding of CPU resources. If the application demands are expected to increase, try writing the data as binary and re-formatting it in a post-processing phase.) This non-TC loop would also read the output settings from the user interface, which can be written to the LV2 global used by the TC loop to update the AO.

4) I recommended an LV2 global because its execution priority can be set to "subroutine". This gives you the option of using "skip if busy" when you make calls to the LV2 global. Search on "skip if busy" to see how to use this feature. A rough sketch of this whole structure follows below.
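
Since I can't paste a block diagram, the Python analogue below uses a queue for the AI data and a plain shared value in place of the LV2 global; read_ai() and write_ao() are made-up placeholders, not LabVIEW or DAQ functions:

```python
import queue
import struct
import threading
import time

ai_data_q = queue.Queue()          # stands in for the queue created at initialization
ao_setting = {"amplitude": 1.0}    # stands in for the LV2-style (functional) global
running = True

def read_ai(number_of_scans):
    """Placeholder AI Read: returns (data, scan_backlog)."""
    if number_of_scans == 0:
        return [], 13
    return [0.0] * number_of_scans, 0

def write_ao(amplitude):
    """Placeholder for updating the AO amplitude."""
    pass

def time_critical_loop():
    # Hardware I/O only: empty the AI backlog, hand the data to the queue,
    # and refresh the AO from the shared setting. No strings, no file I/O.
    while running:
        _, backlog = read_ai(0)
        data, _ = read_ai(backlog)
        ai_data_q.put(data)
        write_ao(ao_setting["amplitude"])
        time.sleep(0.05)               # ~20 iterations per second

def non_time_critical_loop():
    # Formatting and disk writes live here, where jitter does not matter.
    with open("data.bin", "ab") as f:
        while running or not ai_data_q.empty():
            try:
                block = ai_data_q.get(timeout=0.1)
            except queue.Empty:
                continue
            f.write(struct.pack("%dd" % len(block), *block))

# The two loops run in parallel, like the TC and non-TC VIs.
tc = threading.Thread(target=time_critical_loop)
ntc = threading.Thread(target=non_time_critical_loop)
tc.start(); ntc.start()
time.sleep(1.0)                        # let the example run briefly
running = False
tc.join(); ntc.join()
```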

I realize that this may be quite a bit of work to restructure as I have outlined, but the results should be worth it if you want to keep your AO and AI happy.

General comments
If you have the opportunity to upgrade to LV 7.1, you will find some of the enhancements very useful in diagnosing what is happening in your application. There is a tool that allows you to capture a snapshot of the execution history of your application. If you upgraded to LV 7.1 and ran the execution trace tool on your application, I believe it would show that your application is hanging on the memory manager or the file I/O.

Please post the most recent version of your code so I can stay on the same page.

Keep me posted if you continue to have trouble.

Trying to help,

Ben
Message 9 of 13
OK, thank you for your help. Your answer is really helpful, but hard to understand all at once. Could we fix the problems one by one, because I'm not a LabVIEW master?
First question: what is an LV2 global?
If I understand correctly, I have to create a big VI that performs the initialization and then runs 2 subVIs (one TC and one non-TC). I'm OK with that, but how can I do it? I had serious problems getting my 10 AI and my 2 AO declared and working together. Now it works perfectly, and I don't want to spend all my time changing the structure, because it is sometimes really hard to understand what LabVIEW's engineers have done in the examples. So can I keep the program I have now (initialization and TC VI) and, with a queue system, run the non-TC subVI in parallel?

Last question: what is the millisecond timer? Where can I find it? How does it work, and what does it return?


(PS: I am sending you my new version of the program. It works better, but I need to put the data into an XY graph, so I think it will take more CPU resources. If the VI doesn't run, erase the XY graph and the things attached to it; that is not the problem now.)

Another remark: I've put a delay in my loops (I can change it while the loops run), and I've seen that it influences my "number of scans to read at a time" (replaced by scan backlog 2). If I increase the delay, I increase scan backlog 2, but if I decrease it too much, the VI crashes. So for now the best result is 50 ms.

A last point: you haven't explained to me the interaction between the "number of scans to read at a time", the scan rate, and the buffer size. If I take a large buffer, say 3 times my scan rate (256 S/s), does the VI save 1 point out of every 3, or will it pick up only the first 256 points of the buffer? (I hope not, because that would be a big problem for me.)

So thank you for your help, and have a nice day.
Message 10 of 13