03-20-2018 01:31 AM
Hello,
My attached VI is for a 20 second serial reaction time task. I'm currently sampling at 100 Hz but would like to increase to 1000 Hz. When I do this by changing the ms wait function in my for loop to 1 ms and increasing the number of iterations from 2000 to 20000, my 20 second trial turns into a 30 second trial. The loop responsible for the visual stimulus (cue) is unaffected; it's my keyboard response loop that is delayed. I'm not sure why the program works fine at 100 Hz but is seriously delayed at 1000 Hz.
Regards,
Daniel
03-20-2018 02:17 AM - edited 03-20-2018 02:18 AM
Hi Daniel,
I'm not sure why the program works fine at 100 Hz but is seriously delayed at 1000 Hz.
So let's take a look at this loop:
- There are 4 functions in this loop: Wait (ms), AcquireInputData, GreaterZero, and BooleanArrayToNumber.
- I guess we can agree that GreaterZero and BooleanArrayToNumber don't influence the loop execution time significantly.
- Wait (ms) waits for the given amount of time using OS calls, so it doesn't even use CPU power for its execution. It is still only as accurate as the underlying OS, which you haven't mentioned so far.
- Now there is one function left: AcquireInputData! This function has to interface with the OS to read a lot of input data and will only execute as fast as the OS can provide that data. Why do you think this function (and the OS) will support reading user input at 1 kHz? (See the sketch after this list for one way to measure where the time actually goes.)
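Since your VI is LabVIEW code there is nothing textual to paste, but here is a minimal Python sketch of the same measurement: run 20000 iterations with a 1 ms wait plus whatever the input read costs, and compare the total elapsed time against the 20 s you expect. poll_input() is just a placeholder for your actual keyboard read (AcquireInputData in the VI); depending on the OS timer resolution, even with the placeholder doing nothing the total can come out well past 20 s.

# Minimal sketch (Python, not LabVIEW): time a 1 kHz polling loop and
# report how long it really takes. poll_input() is a placeholder for the
# real input read; substitute your own call there.
import time

def poll_input():
    # Placeholder: stand-in for the actual keyboard/input read.
    pass

ITERATIONS = 20_000      # 1000 Hz * 20 s
PERIOD_S   = 0.001       # requested 1 ms loop period

start = time.perf_counter()
worst = 0.0
for _ in range(ITERATIONS):
    t0 = time.perf_counter()
    poll_input()             # cost of the input read
    time.sleep(PERIOD_S)     # cost (and resolution) of the 1 ms wait
    worst = max(worst, time.perf_counter() - t0)

total = time.perf_counter() - start
print(f"expected ~20 s, measured {total:.1f} s, worst iteration {worst*1000:.2f} ms")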
03-20-2018 03:51 AM - edited 03-20-2018 04:03 AM
In addition to that, the standard timers in LabVIEW use the Windows timer tick, which used to have a resolution of 64 ticks per second (about 15.6 ms) and is typically around 10 ms now. And to add insult to injury, Windows is not a real-time system. This means there is no guarantee whatsoever that a particular piece of code will ever be executed within a certain amount of time. Hoping for repeatability beyond 10 ms at the user level in Windows is asking for disappointment. In a kernel driver you have a better chance of getting down to 1 ms, but that is not an option here.
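If you want to see the effective timer resolution on your own machine, here is a rough Python sketch (it assumes nothing about your setup): request a 1 ms sleep many times and look at what you actually get. The median tells you the typical granularity of a user-level wait, the worst case shows the scheduling jitter on top of it.

# Rough sketch: measure how long a requested 1 ms sleep really takes.
import time

samples = []
for _ in range(200):
    t0 = time.perf_counter()
    time.sleep(0.001)        # request 1 ms
    samples.append((time.perf_counter() - t0) * 1000.0)

samples.sort()
print(f"median sleep: {samples[len(samples)//2]:.2f} ms, "
      f"worst: {samples[-1]:.2f} ms")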
If you need that kind of timing accuracy, you either have to go to a real-time system or let dedicated hardware do the accurately timed acquisition and just read the data after a while in bigger chunks from an intermediate buffer.
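As an illustration of that buffered pattern, here is a hedged Python sketch: the background thread is only a stand-in for a DAQ device or driver that timestamps samples into its own buffer at its own pace, and the PC-side loop just drains that buffer every ~100 ms, so the PC loop's timing no longer matters. With real hardware you would call the driver's buffered-read function instead of this fake producer.

# Sketch of "let the hardware do the timing": a producer fills a buffer
# with timestamped samples; the PC-side loop reads it out in chunks.
import queue
import threading
import time

buffer = queue.Queue()

def fake_hardware(rate_hz=1000, duration_s=2.0):
    # Stand-in for the device: generate timestamped samples at rate_hz.
    period = 1.0 / rate_hz
    t_end = time.perf_counter() + duration_s
    while time.perf_counter() < t_end:
        buffer.put(time.perf_counter())
        time.sleep(period)

threading.Thread(target=fake_hardware, daemon=True).start()

# PC-side loop: timing here is uncritical because the samples are already
# timestamped; we only drain whatever has accumulated.
for _ in range(20):
    time.sleep(0.1)
    chunk = []
    while not buffer.empty():
        chunk.append(buffer.get())
    print(f"read {len(chunk)} samples in this chunk")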
And yes, this applies to any non-realtime operating system, including Linux, macOS, and Windows. Some might be able to stay below 10 ms 99% of the time, but none can really guarantee that a loop will always execute in 10 ms, even if the code inside the loop takes less than 10 ms to run. There are other processes on the system that want attention from the OS too, and a myriad of kernel drivers that interrupt the system very frequently to report that a new network packet has arrived and needs to be read, the screen needs updating, the hard disk has finished writing the previous block of data, and so on. All of this can and will prevent Windows from scheduling LabVIEW often enough to let it run your loop as fast as you would like.