Network buffers versus RT FIFO versus Implementation Details

Hello,

 

Can anybody provide information on how the Shared Variable Engine (SVE) is implemented (threads, priorities, etc.)?

 

Specifically, I'd like to understand the formal properties of combining an RT FIFO with network buffers.

 

The official NI white paper (http://www.ni.com/white-paper/4679/en/) doesn't explain the rationale for chaining buffers (an RT FIFO is itself a buffer). See Figure 14.

 

In the absence of such an explanation, I would need to know what the SVE really does internally.

 

Can anybody name an application where such a combination is needed, and where a single RT FIFO or a single network buffer wouldn't suffice?

 

Thanks!

 

Peter

Message 1 of 16

The network buffer helps avoid losing data. The RT FIFO provides determinism. If you enable the RT-FIFO without the network buffer, you'll have deterministic performance but could lose a data point if the network communication lags. If you enable the network buffer without the RT-FIFO, you are much less likely to lose a data point, but you could introduce jitter.
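Since shared variables are configured graphically in LabVIEW, the lossy-versus-lossless behavior can only be sketched in a textual language. Here is a toy Python model (the function name, tick model, and reader schedule are invented for illustration, not part of the SVE): a single-element, latest-value variable drops samples when the reader lags, while a buffered one does not.

```python
from collections import deque

def transmit(samples, reader_schedule, buffer_size):
    """Toy model of a network-published shared variable.

    The writer produces one sample per tick; the reader only gets to read
    on ticks where reader_schedule[tick] is True (a lagging network delays
    the other reads).  buffer_size=1 models the unbuffered, latest-value
    variable; buffer_size > 1 models an enabled network buffer."""
    buf = deque(maxlen=buffer_size)   # oldest sample is overwritten when full
    received = []
    for tick, sample in enumerate(samples):
        buf.append(sample)            # the writer always writes
        if reader_schedule[tick]:     # the reader only sometimes keeps up
            while buf:
                received.append(buf.popleft())
    return received

# The reader misses ticks 1 and 4: unbuffered loses samples 1 and 4,
# while a two-deep buffer loses nothing.
lagging = [True, False, True, True, False, True]
print(transmit(range(6), lagging, 1))  # [0, 2, 3, 5]
print(transmit(range(6), lagging, 2))  # [0, 1, 2, 3, 4, 5]
```

In this sketch, "lagging communication" is just a tick on which the read doesn't happen; the buffer trades memory for the guarantee that such ticks don't lose data.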

Message 2 of 16

What exactly do you mean by "lagging communication"? How do I know that I need a network buffer, which also means trading off performance?

 

How can I simulate lagging communication? I'd like to see the effect of network buffers for myself, not just assume it!

 

Is it possible to know more about the implementation?

 

Thanks,

 

Peter

Message 3 of 16

It seems to me the white paper to which you linked in your first post is pretty clear about buffering. It says, "Buffering helps only in situations in which the read/write rates have temporary fluctuations." The white paper also describes the conditions under which both the server-side and client-side network buffers have an effect. Why do you say that the network buffer trades performance? I don't see anything that suggests that, although buffering does increase memory use somewhat.

 

If you need to capture every single value (you need the shared variable to act like a lossless queue) and you cannot guarantee that the read and write rates will stay synchronized, you should use buffering. If the read and write rates are relatively slow, or you only need the most recent value, then you do not need buffering.
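The white paper's condition that "buffering helps only in situations in which the read/write rates have temporary fluctuations" can be made concrete with a small simulation (a sketch with an invented tick model, not a description of the SVE): a bounded FIFO absorbs a short burst with no loss, but under a sustained rate mismatch it overflows no matter how deep it is.

```python
from collections import deque

def run(write_counts, reads_per_tick, depth):
    """Push write_counts[t] samples into a FIFO of the given depth at
    tick t, and pop up to reads_per_tick samples per tick.
    Returns (delivered, dropped)."""
    fifo, delivered, dropped = deque(), 0, 0
    for n in write_counts:
        for _ in range(n):
            if len(fifo) < depth:
                fifo.append(None)
            else:
                dropped += 1          # buffer overflow: sample lost
        for _ in range(reads_per_tick):
            if fifo:
                fifo.popleft()
                delivered += 1
    return delivered, dropped

# Temporary burst at tick 2, followed by quiet ticks: nothing lost.
print(run([1, 1, 3, 1, 1, 1, 0, 0], 1, 3))  # (8, 0)
# Sustained 2:1 write/read mismatch: the FIFO overflows steadily.
print(run([2, 2, 2, 2, 2, 2], 1, 3))        # (6, 4)
```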

Message 4 of 16

Hi nathand,

 

I don't think that the white paper is clear about buffering.

 

An RT FIFO can be seen as a buffer as well (being an array of values) and as such can also be used to compensate for read/write rate fluctuations.

 

We have built a simple application where a slow reader (PC side) reads from a fast writer (target side), and an RT FIFO buffers the writer's values so that the reader misses no value. The same can be achieved using a network buffer.

 

But what is the difference between the two solutions (supposing static types)?

 

Thanks,

 

Peter

 

 

Message 5 of 16

The network buffering also includes the server-side buffer, as described in the white paper. However, if your network is such that the server-side buffer isn't necessary, then the RT-FIFO might be enough buffering for your application. It seems to me that the possibility of this double-buffer situation is a side effect of NI providing both network buffering without RT and RT-FIFOs for deterministic operation. My guess is that those two features were implemented independently, and what you're asking about is simply the result of stacking them on top of each other rather than a deliberate design decision to include two buffers.
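As a thought experiment on the stacked-buffer chain of Figure 14, here is a Python sketch (the schedules, depths, and function name are invented; this is not how the SVE is actually implemented): each buffer in the chain covers a different stall, with the RT FIFO bridging ticks where the SVE isn't scheduled and the network buffer bridging ticks where the network can't transmit, so under-sizing either stage can still lose data.

```python
from collections import deque

def pipeline(samples, sve_runs, net_runs, rt_depth, net_depth):
    """RT loop -> RT FIFO -> SVE -> network buffer -> network.

    sve_runs[t]: the SVE gets CPU time at tick t to drain the RT FIFO.
    net_runs[t]: the network can transmit at tick t.
    Returns (samples_sent, samples_lost_to_overflow)."""
    rt_fifo, net_buf, sent, lost = deque(), deque(), [], 0
    for t, s in enumerate(samples):
        if len(rt_fifo) < rt_depth:   # the RT loop writes every tick
            rt_fifo.append(s)
        else:
            lost += 1                 # RT FIFO overflow
        if sve_runs[t]:               # the SVE moves data between buffers
            while rt_fifo and len(net_buf) < net_depth:
                net_buf.append(rt_fifo.popleft())
        if net_runs[t]:               # the network drains its buffer
            while net_buf:
                sent.append(net_buf.popleft())
    return sent, lost

always, lagging = [True] * 6, [True, False, True, True, False, True]
print(pipeline(range(6), always, lagging, 1, 2))  # ([0, 1, 2, 3, 4, 5], 0)
print(pipeline(range(6), always, lagging, 1, 1))  # ([0, 1, 2, 4], 1)
```

In the second call, a too-small network buffer backs up into the one-deep RT FIFO, which then overflows and loses sample 3 even though the RT FIFO itself never stalled.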

Message 6 of 16

Hi nathand,

 

This is what the white paper says about server-side buffers: "LabVIEW creates network and real-time FIFO buffers on an initial write or read, depending on the location of the buffers. Server-side buffers are created when a writer first writes to a shared variable."

 

It sounds to me as if server-side buffers are also created in the pure RT FIFO case (with network-published variables and without network buffering). What do you think?

 

By the way, what exactly do you mean by "RT-FIFOs for deterministic operations"? Do you mean time determinism so that the reader/writer can read/write within a pre-given amount of time? Or do you mean deterministic scheduling of threads? Or something different?

 

Regarding your note on independent development of these features: since there is an official white paper comparing them, I would expect NI to be clear about their usage. A statement like "With [network] buffering, you can account for temporary fluctuations between read/write rates of a variable." is misleading because the same property can be implemented via RT FIFOs.

 

So, how can one build a commercial RT application without knowing the formal properties of the programming constructs one is using?!

 

Anyway, many thanks for your feedback! It helps me better understand the topic and come closer to answering my questions.

 

Peter

Message 7 of 16

Bokor wrote: 

It sounds to me that server-side buffers are also created in the pure RT FIFO case (with network-published variablesa and without network buffering). What do you think?


I don't think so, since it says "When you configure a network buffer in the dialog box above, you are actually configuring the size of two different buffers. The server side buffer..." which suggests that the server-side buffer is configured only when network buffering is enabled.


Bokor wrote:

By the way, what exactly do you mean by "RT-FIFOs for deterministic operations"? Do you mean time determinism so that the reader/writer can read/write within a pre-given amount of time? Or do you mean deterministic scheduling of threads? Or something different?


Deterministic behavior means that the operation will always take exactly the same amount of time to complete. When you enable an RT-FIFO on a shared variable, you are insulating your code (such as a time-critical loop on an RT system) from resource contention with the shared variable engine (SVE), so that your loop rate is not affected by anything the shared variable engine is doing in the background. Without the RT-FIFO, if your code tries to access a shared variable and the SVE is already accessing the same variable, your code will wait until the SVE finishes, which introduces jitter - small variations in timing. In many cases that jitter is so small that it won't make a difference, but in some high-performance real-time applications it is critical to eliminate any sources of timing variation.
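The jitter argument above can be sketched numerically (a toy model with invented time units and function names, not a measurement of the actual SVE): without the RT FIFO, the loop's per-iteration cost depends on whether the SVE happens to hold the variable at that moment; with the RT FIFO, the cost is constant.

```python
def loop_times(sve_busy, base=1, wait=5, rt_fifo=False):
    """Per-iteration cost of a time-critical loop accessing a shared
    variable.  sve_busy[i] says whether the SVE happens to hold the
    variable on iteration i.  With an RT FIFO the loop reads its own
    pre-allocated buffer and never waits; without it, a busy SVE adds
    `wait` units of blocking time."""
    return [base if rt_fifo or not busy else base + wait
            for busy in sve_busy]

contention = [False, True, False, False, True]
print(loop_times(contention))                 # [1, 6, 1, 1, 6]
print(loop_times(contention, rt_fifo=True))   # [1, 1, 1, 1, 1]
```

In this sketch the jitter (max minus min iteration time) is 5 units without the FIFO and 0 with it, which is exactly the insulation the RT-FIFO option is meant to provide.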


Bokor wrote:

Regarding your note on independent development of these features: since there is an official white paper comparing these features, I would expect NI to be clear about their usage. Statements like "With [network] buffering, you can account for temporary fluctuations between read/write rates of a variable." is misleading because the same property can be implemented via RT FIFOs.

 

So, how can one build commercial RT application without knowing the formal properties of the programming constructs one is using?!


I don't really understand your question. Again, I think the RT-FIFO and network buffering are two separate features, and you're talking about the effects of enabling both of them simultaneously, in which case yes, there's some overlap. If you need complete control over networking behavior, you can always fall back to TCP. Network shared variables simplify exchanging data over a network, with the tradeoff that you don't have complete knowledge of, and control over, the implementation (in fact, the implementation could change between LabVIEW versions, as explained in that white paper about the rewritten protocol). I wouldn't let that stop me from using them, though, if they're the right solution for my application.

Message 8 of 16

I'm still puzzled about the client/server side buffers.

 

How do you think RT FIFO network-published shared variables could be implemented using a single buffer?

 

I imagine there being a server-side FIFO, say, for a writer, and another client-side FIFO for a reader. The client-side buffer is created and managed by the SVE.

 

In other words, also for RT FIFO network-published shared variables: configuring the length of the RT FIFO actually configures two different buffers.

 

What do you think?

 

Peter

Message 9 of 16

A network-published shared variable is "hosted" on a computer somewhere. That host is the server. The client and server are not related to who does the reading and who does the writing. An RT-FIFO is always on the client side, regardless of whether the client is reading or writing, and of course can only be implemented on an RT target. Again, the goal of the RT-FIFO is to provide determinism in the client code, not to provide network buffering. So, to answer your question - no, I don't think a network-published variable with an RT FIFO enabled is the combination of two buffers, because there's only one RT-FIFO, on the client. See, for example, Figure 10 in the white paper, where the server isn't running RT so there is no RT-FIFO.

Message 10 of 16