
Comparison RT & LabVIEW

Hi Cobayatron,

 

So I just wanted to add to (and check) what I think you've done. Currently, you have:

  • DAQmx code running on the cRIO, in the "RT CompactRIO Target", probably.
    • As GerdW said, your cRIO supports DAQmx, so you should use it so long as it suits your needs. DAQmx is relatively easy to program and behaves nicely 🙂
    • Older cRIO systems had to choose between FPGA and Scan Mode for acquisition, which meant FPGA was necessary in more cases.
    • With DAQmx available, a lot of cases can be handled using a more friendly (compared to FPGA) but performant (compared to Scan Mode) API
  • On the desktop system (laptop, based on the filenames?), you're doing some logging to a TDMS file, and displaying data in a graph
    • As Bob Schor explained, the Windows (non-RT) system often has more memory available and less time-pressure, so that's a good location for these things. It also usually has a monitor plugged in, so you can see your graph - cRIOs generally don't have attached monitors.
    • You're sending data from the cRIO RT system into the desktop system via Network Shared Variables. One of the two systems is hosting these variables, but it doesn't matter (for this post) which
    • Alternative communication methods exist, and might be important if you wanted faster data acquisition rates. This is because the NSVs are "tag-like" - each time you read it, it gives you the latest value. (By default, NSVs are tag-like. You can also enable buffering, which makes them more stream-like. I can't see which you have enabled, but I *think* it's a single waveform, perhaps with multiple points, but probably not in a buffered sense).
    • For "streaming" data, like a typical analog input device, you might very much want to keep your data evenly spaced and continuous. Other transfer mechanisms (like Network Streams) can be more useful here, because they prioritise streaming data rather than tags - all of the data points are important, not just the latest. The tradeoff is that (potentially) you could fall behind emptying the "queue" (Network Streams aren't exactly like the standard LabVIEW queue, but they're similar in behaviour) and then you might need to handle overflow on the cRIO. With tags, this can't happen (because if you read too slowly, you just lose data immediately rather than backing up a queue).
  • You're not using the FPGA target at all. That isn't necessarily a bad thing - although LabVIEW FPGA is not that different to "standard" LabVIEW, it does have some additional complexities, and lots of useful tools/behaviours are unavailable.
    • FPGA is usually used (amongst probably a long list of other tasks) for low level communication, or implementing custom protocols
    • You can use it to manually control digital levels in multiwire communication setups in a way that can be "simpler" (to some extent) than trying to do the same with e.g. DAQmx, where getting delays between different wires can be difficult or impossible, or require fast oversampled clocks (although this is essentially exactly what you get with a Single-Cycle Timed Loop (SCTL) on FPGA).
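To make the "tag vs stream" distinction above concrete, here's a minimal Python sketch (not NI's API - the classes and names are mine) of the two behaviours:

```python
import collections

class Tag:
    """Tag-like transfer (an unbuffered NSV): readers only ever see the latest value."""
    def __init__(self):
        self.value = None
    def write(self, v):
        self.value = v          # overwrites; a slow reader silently misses samples
    def read(self):
        return self.value

class Stream:
    """Stream-like transfer (a Network Stream): every value is kept, in order."""
    def __init__(self):
        self.buf = collections.deque()
    def write(self, v):
        self.buf.append(v)      # a slow reader backs up the buffer instead of losing data
    def read(self):
        return self.buf.popleft()

tag, stream = Tag(), Stream()
for sample in range(5):         # writer runs faster than the reader
    tag.write(sample)
    stream.write(sample)

print(tag.read())                           # -> 4 (only the latest survives)
print([stream.read() for _ in range(5)])    # -> [0, 1, 2, 3, 4] (all samples)
```

The tradeoff mentioned in the last bullet follows directly: the stream's buffer can grow without bound if the reader falls behind, which is exactly the overflow you'd have to handle on the cRIO.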

 


Message 11 of 16

Hi cbutcher,

Thanks for your detailed explanation, that was very useful.

I am using NSVs with buffering for the data coming from my sensors, but you mentioned that Network Streams perform better for this specific case, so I'll try changing to them.

Thanks again

 

Message 12 of 16

@cobayatron wrote:

Hi cbutcher,

Thanks for your detailed explanation, that was very useful.

I am using NSVs with buffering for the data coming from my sensors, but you mentioned that Network Streams perform better for this specific case, so I'll try changing to them.

Thanks again

 


Hmm, given the choice I'd pick Network Streams. I have heard that some people have weird issues with NSVs, but I don't have experience with them myself (maybe that was obvious from my initial lack of clarity about their purpose - oops), and if you aren't having any problems, I don't know enough to say whether it's worth switching preemptively...

 

That being said, definitely worth learning about Network Streams if you want to use them in the future.

One key point that initially frustrated me was their unfriendliness with disconnecting and reconnecting clients, or with multiple clients.

A Network Stream is established between two endpoints. If one of them closes (e.g. the desktop 'client' - although NS has no concept of "server" or "client", just "reader" and "writer"), then the other end must also be closed, and if you want to connect again, the stream must be reestablished (recreated).

 

There are (as usual...) many solutions to this, the three that I've tried are

  1. Put the Network Stream in a while loop, and if you get the error that corresponds to the remote end disconnecting, close the cRIO end and recreate it. Simple and effective, but I can never remember the error code that needs to be matched, there are a few different conditions to handle, and it's annoying to reuse this if you have multiple endpoints
  2. Have an Actor (Actor Framework) create an endpoint on the cRIO, and when it dies, relaunch it (automatically, in my case, using Last Ack and a "relauncher" actor) and then create a new endpoint for a future connection. Downside: you need to write a couple of Actors and figure out how to use them; upside: it's much easier to reuse in multiple locations (I have a package with a "Create Server" that takes a boolean input to choose reader or writer, and then automatically recreates the underlying NS as required until I use the "Destroy Server" method)
  3. Create a TCP Listener, and when it receives a connection, write to the connection information about a dynamically spawned endpoint (you need the name, basically - the client already has the IP address, since it connected to the Listener). So you could create an endpoint using the client's IP/port combination and some prefix, and then send that back via TCP to the client. The client then reads this information and creates the matching Network Streams endpoint to connect to the (new) endpoint on the "server" (cRIO). I like this a lot, but it's a bit more work than the second and a lot more than the first. The upside is that with this arrangement, you can have N clients connected to the "same" source endpoint on the cRIO (actually, you just have a bunch of them and distribute the data to each of the endpoints, but the client doesn't worry about this, and it can be very easy to use for applications connecting to the cRIO).
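For the third option, the handshake boils down to deriving a unique endpoint name per client and sending it back over the TCP connection. Here's a hedged Python sketch of just that naming step (the function, prefix, and the flow in the comments are illustrative assumptions, not LabVIEW's actual Network Streams primitives):

```python
PREFIX = "data_stream"   # assumed prefix; one per logical data source on the cRIO

def endpoint_name(client_ip: str, client_port: int) -> str:
    """Unique Network Stream endpoint name for one client connection."""
    return f"{PREFIX}_{client_ip}_{client_port}"

# On the cRIO, after the TCP Listener accepts a connection from (client_ip, client_port):
#   1. name = endpoint_name(client_ip, client_port)
#   2. create a writer endpoint on the cRIO with that name
#   3. send `name` back to the client over the TCP connection
# The client reads the name and connects a matching reader endpoint to it.

print(endpoint_name("192.168.1.20", 50123))   # -> data_stream_192.168.1.20_50123
```

Because the IP/port pair is unique per TCP connection, no two clients can ever collide on an endpoint name, which is what makes the N-client arrangement work.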

If you want more information, let me know (here, or in another question, I guess). I sometimes also consider releasing the source code for some of these things, but I have to talk to people at work for that first... 😕


Message 13 of 16

I'm working on my second "big" LabVIEW-RT Project (I started the first a bunch of years ago, and the second a bunch of months ago).  Both involve using multiple Network Streams.

 

The basic Design for both Host and Target systems is a "State-Machine-like" design, built around something like a Queued (or Channel) Message Handler.  I typically have at least 4 Network Streams (one-way communication, of course) arranged as follows:

  • A Host-to-Target Stream that can act as a "Message" to tell the Target "what to do" (and can pass on "arguments" or "inputs" to the State Machine).
  • A Target-to-Host Stream that the Target can use (often to tell the Host "I just finished doing this, be ready for what I'm about to send you").
  • A "Data" Stream, especially useful when the Target is gathering Sampled Data at modest rates (e.g. 24 sampled Channels at 1 kHz each to be "viewed" by the Host at, say, 20-50 Hz by displaying every 50th or 20th point and streamed to disk).
  • An "Event" Stream for the Target to send "Point-in-Time" (or "Event") data to the Host, for example, occasional Digital signals recorded by the Target, providing a record of "what happened when".
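As a concrete illustration of the "Data" Stream bullet, the decimation arithmetic works out like this (a sketch - the numbers come from the example above, the function itself is mine):

```python
def decimate(samples, factor):
    """Keep every `factor`-th sample for display; the full list still streams to disk."""
    return samples[::factor]

one_second = list(range(1000))          # one channel's worth of 1 kHz data
display = decimate(one_second, 50)      # every 50th point -> 20 points per second
print(len(display))                     # -> 20
```

So displaying every 50th point of a 1 kHz channel gives the Host a 20 Hz view, while the Stream still carries everything needed for lossless logging.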

NI has a number of White Papers on Network Streams that can be found on the Web.  They really aren't that hard, and work amazingly well.  I definitely didn't fare as well with Network Shared Variables ...

 

Bob Schor

Message 14 of 16

Hi cbutcher & Bob Schor,

I replaced the buffered NSV with a Network Stream to pass the sensor data to the host, and it seems to work fine.

I will now add another Network Stream to pass postprocessed data (FFT, RMS levels...). According to Bob Schor they seem to coexist well, so I'll see what I can do.

 

Thanks for sharing your experience in this.

Message 15 of 16

The Key to Network Stream Coexistence is coming up with "good" Reader/Writer names.  I use a two-part name -- the first part describes what is carried in the Stream, the second is W (for Writer) or R (for Reader).

 

Examples:

   UI->RT Msg, W      Writer on the Host; carries UI (Host) Messages to the RT (Target)

   UI->RT Msg, R      Reader on the Target; accepts the above UI->RT Messages

   RT->UI Msg, R      Reader on the Host; accepts Messages from the Target

   ADC Samples, W     Writer on the Target; carries A/D Samples from RT to UI

   RT Event, R        Reader on the Host; accepts Events from the RT (could also be called "Event, R")

 

I trust the logic is clear.
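A tiny helper sketching the two-part scheme (the function is mine, for illustration only - it is not part of any NI API):

```python
def stream_name(payload: str, role: str) -> str:
    """payload: what the Stream carries; role: 'W' (Writer) or 'R' (Reader)."""
    if role not in ("W", "R"):
        raise ValueError("role must be 'W' or 'R'")
    return f"{payload}, {role}"

print(stream_name("UI->RT Msg", "W"))    # -> UI->RT Msg, W   (Writer on the Host)
print(stream_name("ADC Samples", "R"))   # -> ADC Samples, R  (Reader on the Target's Host counterpart)
```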

 

Bob Schor

Message 16 of 16