Network Shared Variable Engine taking 20-30 seconds to update variables

Good day,

I'm having an issue where the Network Shared Variable Engine is not reliably updating values in a timely manner; in the worst cases there is a 20-30 second delay between writing a value and reading the correct value back. This is not streamed data, just single-value updates (e.g. Booleans).

 

Some information:

- This is a cRIO-9030

- The "write" happens on the PC side on the UI with the "read" happening on the RT side

- The FPGA and RT both initialise with no errors

 

I can provide code if necessary, but since it would take some stripping back to only the relevant components, I thought I'd first ask: has anyone experienced this issue, and were you able to resolve it?

 

 

 

0 Kudos
Message 1 of 6
(3,394 Views)

If you haven't already, try disabling the PC's firewall.

 

I've seen all sorts of strange variable behaviour before, so this list of questions will hopefully narrow things down.

 

  1. Are the variables hosted on the PC side or cRIO side? Or hosted in both places and bound to one another?
  2. Is buffering enabled on the variable? Single-writer?
  3. If you read the variable back on the PC after writing it, how long does it take to reflect the new value?
  4. If you use the Distributed System Manager to perform the write, is the cRIO still slow to see the update?
  5. What does the cRIO's CPU and memory usage look like? High CPU usage can affect updates (I normally install the System State Publisher on the cRIO and monitor it through DSM)
  6. Is the variable connection being opened and closed on every write?
  7. How many variables are being hosted or updated? How many libraries?
  8. What are the network settings for the PC and cRIO? Same subnet? If possible, try a direct network link between the two. I have seen IP address conflicts with other network hardware cause slow updates.
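For question 3, the check is just write-then-poll-then-time. Here's a minimal Python sketch of that measurement, purely to illustrate the pattern — `write_variable` and `read_variable` are hypothetical stand-ins, not a real NI API; in LabVIEW you'd use the variable nodes and a Tick Count:

```python
import time

# Hypothetical stand-ins for whatever actually talks to the
# Shared Variable Engine. These do NOT correspond to any real NI API;
# a local dict is used here so the sketch is runnable on its own.
_store = {}

def write_variable(name, value):
    _store[name] = value  # pretend this publishes over the network

def read_variable(name):
    return _store.get(name)  # pretend this polls the hosted variable

def measure_update_latency(name, value, timeout_s=60.0, poll_s=0.01):
    """Write a value, then poll the read side until it appears.

    Returns elapsed seconds, or None on timeout. Against a healthy
    Shared Variable Engine this should be well under a second; the
    20-30 s figure reported above means something is stalling.
    """
    start = time.monotonic()
    write_variable(name, value)
    while time.monotonic() - start < timeout_s:
        if read_variable(name) == value:
            return time.monotonic() - start
        time.sleep(poll_s)
    return None
```

Run the same pattern once with reader and writer on the PC (question 3) and once across the network (question 4) and compare the two numbers.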




Certified LabVIEW Architect
Unless otherwise stated, all code snippets and examples provided
by me are "as is", and are free to use and modify without attribution.
Message 2 of 6
(3,384 Views)

I'll try to answer as best I can!

 

Disabling firewall had no noticeable effect.

 

1) I'm not sure. They are in the "Shared Project Variable" library within the project, and deployed at runtime. (edit: From DSM it looks like they are hosted on the cRIO)

2) Buffering is not enabled and single-writer is not set. The RT FIFO is enabled, however, as a single element.

3) It also takes 20-30 seconds

4) Yes it is

EDIT: I just re-tested this without the UI vi running and the updates happened instantly on the RT. However, I still see the CPU loads below.

5) CPU load is quite high - 100% on one core and 80-100% on the other. Memory is below 50% usage.

6) I'm not sure. I don't think so, I'm using the shared variable nodes on both sides

7) There are 24 variables being hosted but only 1 or 2 being updated in this case

8) The PC and cRIO are connected via USB, using the "ethernet" IP address of 192.22.11.2

 

The obvious answer here would seem to be the CPU usage, but even after stripping out nearly all the code in my RT application and leaving just a single while loop, I still see the same high CPU usage and the same effect.

Message 3 of 6
(3,364 Views)

Thanks for answering all those questions. Reducing the high CPU use is probably a good starting point. Some more questions:

 

  1. If a project is created with a single library containing a single shared variable deployed to the controller, and no other software running (it's disabled on startup), what's the CPU usage like? If you then run a simple VI that reads the shared variable in a loop and run it on the cRIO, is it fast to update?
  2. Are you running the cRIO code from the LabVIEW project directly? Have you tried compiling and running it deployed?
  3. What's on the front panel when the UI is visible? Any graphs or charts or other 'heavy' controls?
  4. How quickly are controls on the front panel being updated?

There's this KB article on monitoring the CPU usage per process. I imagine lvrt will be using the most CPU, but maybe there's some other process using high CPU.
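If DSM isn't convenient, the same per-process view is available from an SSH shell on the cRIO, since NI Linux RT is a standard Linux userland. A quick sketch (assuming the usual procps-style tools are present on the target):

```shell
# Show the ten biggest CPU consumers; on a deployed cRIO,
# lvrt (the LabVIEW run-time engine) is normally at the top.
ps -eo pid,pcpu,comm --sort=-pcpu | head -n 10

# One-shot batch sample from top, if you prefer its layout:
top -b -n 1 | head -n 15
```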

 

 




Message 4 of 6
(3,287 Views)

Inadvertently, I seem to have fixed the issue. Between my last post and now I have:

- Removed a program from the PC that was hogging CPU on this side (looking at you MS Teams)

- Ran the VI analyzer on most of the VIs and added some additional time-control elements to various while loops on the UI and RT side

- Unbound a few front panel indicators from older variables that were now unused.

 

This seems to have done the trick although I couldn't necessarily tell you why.
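For anyone finding this later: the timing elements are probably the biggest factor. A while loop with no wait spins a core at 100% and starves everything else, including the Shared Variable Engine. A rough Python analogue of the before/after (in LabVIEW the actual fix is a Wait or Wait Until Next ms Multiple node inside the loop):

```python
import time

def free_running_loop(iterations):
    # Analogue of a LabVIEW while loop with no Wait node:
    # the loop spins flat-out and pins a core, starving other
    # processes (such as the Shared Variable Engine) of CPU time.
    count = 0
    for _ in range(iterations):
        count += 1  # do_work() placeholder
    return count

def timed_loop(iterations, period_s=0.05):
    # Analogue of adding a Wait node: each iteration yields the
    # core for the remainder of its period, so background services
    # get scheduled promptly.
    count = 0
    for _ in range(iterations):
        count += 1  # do_work() placeholder
        time.sleep(period_s)  # yield the CPU between iterations
    return count
```

Both loops do the same work; only the second leaves CPU headroom for the variable engine.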

 

To answer your questions anyway, for completeness' sake:

1. Yes that is fast

2. Yes I am running it directly, the end goal is to run it compiled and deployed

3. There is a lot on the front panel, across three or four different tabs. This is a data-heavy application where feedback needs to be available instantly and changes made on the fly.

4. The controls on the front panel are manually updated and at most that is two or three times a minute.

 

Thank you for your help!

 

Message 5 of 6
(3,260 Views)

Glad to hear you got it working. Were those indicators bound to non-existent variables? I have seen strange cases where attempts to open missing variables in one part of a system cause huge delays and freezing in other parts, even though they were running in parallel with no dependence between them.




Message 6 of 6
(3,239 Views)