06-07-2023 02:33 AM
Hi!
This is a bit of a mishmash post about me trying to better understand how LabVIEW and Windows interact on a timing level.
For several applications, I usually need to communicate with a DUT via UDP or a serial adapter (CAN/RS485/422 to USB, etc.).
As telemetry is involved, it is better to receive it at fairly regular intervals for logging and analysis; for one of my applications, it would also be great to receive one telemetry message every 2 ms.
What I've noticed using a signal analyzer is that my telemetry requests are not very accurate in time, even though the average interval is OK.
If the first telemetry request is sent at 0 ms, the next one follows 2.55 ms later, the one after that 1.45 ms later, and so on.
The problem is that, first of all, this delay can occasionally be huge depending on other activity on the computer (500 ms...).
But also, since LabVIEW/Windows compensates for the delays when it can, I sometimes get an interval of less than 100 µs, which is too short for my DUT.
I should probably mention here that the "requests for telemetry" come from a certain part of the software and have to be passed to my "Communication Module". I believe the delays happen in this transfer, via the tools (queues, notifiers, user events...) that I am using.
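For reference, here is roughly how I measure that spread, sketched in Python since I can't paste the LabVIEW diagram here; the software-timed 2 ms loop is just a stand-in for my request loop:

```python
# Minimal jitter check: run a software-timed "2 ms" loop and record the actual
# interval between iterations. On Windows the average comes out close to 2 ms,
# but individual intervals scatter widely (sleep resolution is ~1-15 ms).
import time

PERIOD_S = 0.002                 # requested interval between telemetry requests
N = 1000                         # number of intervals to sample

timestamps = []
next_deadline = time.perf_counter()
for _ in range(N + 1):
    timestamps.append(time.perf_counter())
    next_deadline += PERIOD_S
    time.sleep(max(0.0, next_deadline - time.perf_counter()))

intervals_ms = [(b - a) * 1000 for a, b in zip(timestamps, timestamps[1:])]
print(f"avg {sum(intervals_ms) / len(intervals_ms):.3f} ms, "
      f"min {min(intervals_ms):.3f} ms, max {max(intervals_ms):.3f} ms")
```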
So I guess my two questions would be:
1) What level of timing accuracy can I realistically expect from LabVIEW on Windows?
2) Are LabVIEW events (user events in particular) based on Windows events?
Thank you for any clarification and any literature you could point me to 🙂
Vinny.
06-07-2023 02:53 AM - edited 06-07-2023 03:05 AM
Windows is NOT a real-time OS. There is no way to guarantee ms accuracy in code execution. The only way to achieve this is by having your own hypervisor or similar environment that executes the time-critical code, and that requires deep kernel integration, as only the kernel has enough priority to guarantee this type of timing accuracy.
As a rule of thumb, you have this list:
1) Normal desktop OS: 10 to 100 ms soft timing accuracy. There are, however, no guarantees. Windows can decide to go into a guru meditation for several 100 ms to multiple seconds at any time, preventing your application from getting even a single CPU tick.
2) Special RT extension using one or more CPU cores exclusively, such as what TwinCAT can provide: code executing there can perform to ms accuracy, but if data has to pass to your normal Windows application (to be processed there or stored to disk, for instance) you are back at square one.
3) Real-time OS: sub-ms to ms accuracy.
4) FPGA hardware: sub-µs to µs accuracy.
5) Custom hardware design: ns accuracy.
And no, LabVIEW events are not using Windows events. But user events are usually processed in an event structure that in most cases also contains event cases for front panel elements, and will therefore also depend on your user interface thread.
Generally, user events should only be used for user interface interaction, or if speed is not your biggest concern. You should instead use queues, notifiers, and semaphores for normal subsystem interaction. But again, this is Windows! ms accuracy is impossible at application level (Ring 3)! You can do a lot of things in a ms, but Windows decides when it gives your application time to do that, and that can easily be nothing at all for 10 to 100 ms at any time!
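To make that concrete, here is a minimal sketch (plain Python threads and a queue standing in for two LabVIEW loops connected by a queue; the absolute numbers you get depend entirely on what Windows happens to be doing at that moment):

```python
# Sketch: measure the hand-off latency between a "request" loop and a
# "communication" loop connected by a queue, analogous to two LabVIEW loops
# sharing a LabVIEW queue. The numbers depend entirely on OS scheduling.
import queue
import threading
import time

q = queue.Queue()

def producer(n=500):
    for _ in range(n):
        q.put(time.perf_counter())        # enqueue the send timestamp
        time.sleep(0.002)                 # nominal 2 ms request rate
    q.put(None)                           # sentinel: tell the consumer to stop

def consumer():
    latencies_us = []
    while (t_sent := q.get()) is not None:
        latencies_us.append((time.perf_counter() - t_sent) * 1e6)
    latencies_us.sort()
    print(f"hand-off latency: median ~{latencies_us[len(latencies_us) // 2]:.0f} µs, "
          f"max {latencies_us[-1]:.0f} µs")

threading.Thread(target=producer).start()
consumer()
```

On an idle machine the median hand-off is typically well under a ms, but the maximum can spike to tens or hundreds of ms whenever the OS decides to do something else.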
06-07-2023 03:05 AM
Thanks for your quick answer Rolf,
So, I know perfectly well that Windows isn't an RT OS; I was just wondering what level of timing accuracy it can get to, so thank you for this clarification.
Unfortunately, working with an RT machine isn't really possible (that would mean updating all our production computers just for this DUT), so we will most likely increase the interval between TMs or live with random delays.
That being said, is there a preference as to which programming tools I should use?
Usually I use a queue to send a request and wait for the answer via a notifier if it is something I need right away. If the answer will take several seconds, for instance (like starting the DUT and waiting for a "Ready" flag), then it is sent via queue + user event.
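In rough Python terms (with threading primitives standing in for LabVIEW queues and notifiers, and all names made up), my current pattern for the "need it right away" case looks something like this:

```python
# Sketch: each request carries its own one-shot "notifier" (an Event plus a
# slot for the result). The communication loop fills it in; the caller blocks
# only when it needs the answer right away. bus.transact() is a placeholder.
import queue
import threading

request_q = queue.Queue()

class Request:
    def __init__(self, command):
        self.command = command
        self.done = threading.Event()     # plays the role of the notifier
        self.reply = None

def send_and_wait(command, timeout_s=1.0):
    req = Request(command)
    request_q.put(req)                    # the queue carries the request itself
    if req.done.wait(timeout_s):          # block until the comm loop answers
        return req.reply
    raise TimeoutError(command)

def comm_loop(bus):
    while True:
        req = request_q.get()
        req.reply = bus.transact(req.command)
        req.done.set()
```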
06-07-2023 03:11 AM - edited 06-07-2023 03:12 AM
One of the better approaches is to avoid the command-response delay chase as much as possible, or to find ways to at least start multiple commands in parallel that do not depend on the previous command's completion, and then collect the responses later.
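Something along these lines, as a rough Python sketch; the bus object and its methods are placeholders for whatever driver you use, the point is simply to get all requests on the wire before you start waiting:

```python
# Sketch of pipelining: fire all independent requests first, then collect the
# answers, instead of paying one full round-trip per command. The bus object,
# send_frame() and read_frame() are placeholders for the actual driver.
import time

def query_all(bus, request_ids, timeout_s=0.1):
    for rid in request_ids:                  # phase 1: send everything back-to-back
        bus.send_frame(rid)
    responses = {}
    deadline = time.monotonic() + timeout_s
    while len(responses) < len(request_ids) and time.monotonic() < deadline:
        frame = bus.read_frame(timeout_s=deadline - time.monotonic())
        if frame is not None:
            responses[frame.request_id] = frame.payload   # phase 2: collect
    return responses
```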
Maybe testability wasn’t a high enough priority in the design phase of this DUT?
06-07-2023 03:26 AM
@rolfk wrote:
One of the better approaches is to avoid the command-response delay chase as much as possible, or to find ways to at least start multiple commands in parallel that do not depend on the previous command's completion, and then collect the responses later.
Yes, unfortunately that is rarely possible, depending on the communication protocol used. That's actually what I tried with a USB-to-CAN adapter: having separate write and read loops, and even processing the received data elsewhere, to leave the write loop free to send at the most regular rate possible. But the DUT itself can sometimes take more time to answer, and I then have a conflict on the bus...
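For illustration, here is a rough Python-style sketch of the separate write/read loops, with a simple "reply received or timeout" guard added (my addition, not what I have running today; all names are invented) to avoid trampling a late answer:

```python
# Sketch: the write loop only puts a new request on the bus once the read loop
# has signalled that the previous answer arrived (or a timeout expired), so a
# late DUT reply does not collide with the next request on the bus.
# bus.send_request()/bus.read_frame() and process_elsewhere() are placeholders.
import threading

reply_received = threading.Event()
reply_received.set()                          # nothing outstanding at start

def write_loop(bus, requests, reply_timeout_s=0.005):
    for req in requests:
        reply_received.wait(reply_timeout_s)  # wait for the last answer (or time out)
        reply_received.clear()
        bus.send_request(req)

def read_loop(bus):
    while True:
        frame = bus.read_frame()
        process_elsewhere(frame)              # hand off to the processing loop
        reply_received.set()
```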
@rolfk wrote:
Maybe testability wasn’t a high enough priority in the design phase of this DUT?
Exactly... Many things are in development at the same time... It can be difficult to have strict requirements, and I need to stay flexible, which really doesn't help with making choices and building a fool-proof architecture.
06-07-2023 08:22 AM
Where do you require "timing accuracy"? I would think the place it is most important is at the site of measurement, where you are acquiring data at (if I recall correctly) 500 Hz, probably with a DAQ device of some sort which, as Rolf points out, usually has very high accuracy and precision. The trip from DAQ to PC has various added (and variable) delays, but if you are sending the data as, say, a Waveform of 1000 points (or one bunch every 2 seconds), then it will carry its t0 TimeStamp and its dt Time Interval, "giving back" the timing precision lost along the way.
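In text form, the idea is just this (a tiny Python sketch whose fields mimic what a LabVIEW Waveform carries; the numbers are made up):

```python
# Sketch: a block of samples plus t0 and dt is enough to reconstruct every
# sample's timestamp exactly, no matter how late the block reached the PC.
# The field names just mimic what a LabVIEW Waveform carries.
from dataclasses import dataclass

@dataclass
class Waveform:
    t0: float        # timestamp of the first sample (s), set at the DAQ side
    dt: float        # sample interval (s), e.g. 1/500 Hz = 0.002 s
    samples: list    # the measured values

def sample_times(wf: Waveform):
    return [wf.t0 + i * wf.dt for i in range(len(wf.samples))]

wf = Waveform(t0=0.0, dt=0.002, samples=[1.0, 1.1, 1.2])
print(sample_times(wf))   # [0.0, 0.002, 0.004] - transport delay is irrelevant
```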
Or am I missing something (I don't know much about CAN, so this is a real possibility ...).
Bob Schor
06-07-2023 08:58 AM
At this level of timing accuracy you may need to consider using a dedicated device to handle the communication without Windows interference. Try exploring a µC (like an Arduino or Teensy), or if you want to stick with NI, check out CompactDAQ or cRIO (are they still selling those?).
06-07-2023 09:20 AM
As mentioned, reading 10 or 100 packets/messages at a time will make things a lot easier (Windows timing at 20 or 200 ms is usually pretty good), but if you really want data every 2 ms, I'd set up some logging mode that simply outputs it every 2 ms without any need for a request. Say you send a command like "Log(100,2)" for 100 measurements at 2 ms, and your target simply does a fire-and-forget.
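On the host side that could look roughly like this (Python-ish sketch; the "Log" command and the driver methods are invented, adapt them to your protocol):

```python
# Sketch: ask the target once to stream N samples at a fixed rate, then read
# them back in batches. The device keeps the 2 ms timing; Windows only has to
# keep up on average. "dev" and its methods are invented placeholder names.
def stream_log(dev, n_samples=100, period_ms=2, batch=10):
    dev.write(f"Log({n_samples},{period_ms})")    # fire-and-forget on the target
    received = []
    while len(received) < n_samples:
        received.extend(dev.read_messages(max_count=batch, timeout_ms=10 * period_ms))
    return received
```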
06-07-2023 09:27 AM
@Bob_Schor wrote:
Or am I missing something (I don't know much about CAN, so this is a real possibility ...).
Bob Schor
No, you're not missing anything; it's just that I am not doing measurements on the device(s). The device is fully encapsulated and has on-board software with which I am communicating, sending telecommands (in this specific case things such as rate, torque, etc. - you guessed it, there is a motor involved) and requesting telemetry (such as the current rate and torque of course, but many more like power consumption, temperatures, SW status, etc.).
The point of our tests, for both development AND production (in the longer run), is to be able to correlate certain telemetries with others to find out if something is wrong, where, and why.
For that, it is important that all telemetries are acquired (or, better said here, requested and received) within a timeframe that makes sense.
The problem we're facing with CAN right now is that we're limited in the payload data (64 bits per frame). So if I want to receive 100 different telemetries, I need to request (a bit fewer than, really) 100 CAN frames, meaning sending a request and waiting for the answer before sending the next one, 100 times, before requesting the next "wave".
So if the delay between two telemetries is too big, the rate telemetry won't match the torque telemetry, for instance.
That's why I'm trying to get this intra-wave delay as small as possible, with wider delays between two waves.
Here is a quick illustration of what I'm trying to explain; of course it is just a quick drawing, very idealistic, but I think it illustrates my point: if there is too much time between TM1 and TM2, the correlation between the two is more or less lost. The time between two waves (here 1 s as an example) would be the time resolution (how many points for the rate I get in 1 minute).
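In pseudo-Python, what I'm aiming for is something like this (all names invented, and Windows will of course add its own jitter on top):

```python
# Sketch: one "wave" = request all TM IDs back-to-back, as fast as the bus and
# the DUT allow, then wait until the next wave is due. The intra-wave spacing
# stays minimal so the TMs of one wave belong to (almost) the same instant.
# bus.send_request()/bus.wait_reply() and log_wave() are placeholder names.
import time

def run_waves(bus, tm_ids, wave_period_s=1.0, reply_timeout_s=0.005):
    next_wave = time.monotonic()
    while True:
        wave = {}
        for tm_id in tm_ids:                      # tight intra-wave loop
            bus.send_request(tm_id)
            wave[tm_id] = bus.wait_reply(tm_id, timeout_s=reply_timeout_s)
        log_wave(time.monotonic(), wave)          # hand the wave off for logging
        next_wave += wave_period_s
        time.sleep(max(0.0, next_wave - time.monotonic()))
```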
06-08-2023 02:22 AM
@Yamaeda wrote:
but if you really want data every 2 ms, I'd set up some logging mode that simply outputs it every 2 ms without any need for a request. Say you send a command like "Log(100,2)" for 100 measurements at 2 ms, and your target simply does a fire-and-forget.
There will be some compromises to make, for sure.
What do you mean by "your target"? Unfortunately, I can't ask the device I'm testing/controlling to send data automatically on its own.
What I could do is send an array of the TMs that I want to receive periodically, and have my communication module handle the polling itself. That would cut down the number of internal data transfers a bit.
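Something like this is what I have in mind for the communication module (a rough Python sketch of the idea only; every name here is hypothetical, the real thing would be a LabVIEW loop):

```python
# Sketch: the communication module is configured once with the list of TMs to
# poll; it then cycles through them on its own timing, so the rest of the
# application does not have to push one queue message per request.
# bus.send_request()/bus.read_reply() are placeholder driver calls.
import time

class CommModule:
    def __init__(self, bus):
        self.bus = bus
        self.periodic_tms = []            # TM IDs, configured once from outside

    def configure(self, tm_ids):
        self.periodic_tms = list(tm_ids)  # one internal transfer per (re)configuration

    def run(self, period_s=0.002):
        next_due = time.monotonic()
        while self.periodic_tms:
            for tm_id in self.periodic_tms:
                self.bus.send_request(tm_id)
                self.bus.read_reply(tm_id)        # collect the answer
                next_due += period_s
                time.sleep(max(0.0, next_due - time.monotonic()))
```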