"Buffer Utilization" in DataSocket Connections ?

I am setting up a test situation using DataSockets.

It will eventually be used between a host and an RT board, but for right now, it's all in one VI.

I use DS Open, DS Write, DS Read, and DS Close.

I want the connection to buffer my data, as the two ends will run asynchronously.

The URL I use is:

dstp://localhost/XXX

Here's the sequence of events:

DS OPEN (BufferedReadWrite) (the sender end)
PROPERTY Write(BufferMaxBytes=1024, BufferMaxPackets=10)
DS OPEN(Buffered Read) { the receiving end }
PROPERTY Write(BufferMaxBytes=1024, BufferMaxPackets=10)
DS WRITE(Sender, 1)
DS WRITE(Sender, 2)
DS WRITE(Sender, 3)
DS WRITE(Sender, 4)
DS WRITE(Sender, 5)
DS READ (Rcvr) ==> Display 1
DS READ (Rcvr) ==> Display 2
DS READ (Rcvr) ==> Display 3
DS READ (Rcvr) ==> Display 4
DS READ (Rcvr) ==> Display 5
DS CLOSE (Sender)
DS CLOSE (RCVR)

So that's the basic sequence. If I set the MAX PACKETS property down to 4 or 3, I get only the latest 4 or 3 values, as expected. Fine.
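
(For anyone following along outside LabVIEW, here is a plain-Python sketch of the drop-oldest behavior I'm describing. It is illustration only, not DataSocket code; MAX_PACKETS is just my stand-in for the BufferMaxPackets property.)

# Plain-Python sketch: a bounded FIFO that silently drops the oldest
# entries once the packet limit is reached, like the behavior above.
from collections import deque

MAX_PACKETS = 3                     # stand-in for BufferMaxPackets
buffer = deque(maxlen=MAX_PACKETS)  # deque discards the oldest item on overflow

for value in (1, 2, 3, 4, 5):       # five writes, as in the sequence above
    buffer.append(value)

print(list(buffer))                 # [3, 4, 5] -- only the latest 3 values survive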

But I would like to use the BUFFER UTILIZATION property to tell from the sending end whether the buffer is nearly full or not.

PROBLEM 1:
The BUFFER UTIL(Bytes) property does NOT report a number of bytes, but apparently a fraction of the MAX BYTES I specified.

PROBLEM 2:
The BUFFER UTIL(Packets) property does NOT report a number of packets, but apparently a fraction of the MAX PACKETS I specified.
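
In other words (this is just my reading of the behavior, not anything from the documentation), the property seems to come back as a ratio rather than a count:

# Sketch of what the properties appear to report in my tests:
# count divided by the configured maximum, not an absolute count.
def utilization(count, configured_max):
    """Fraction of the configured limit currently in use (0.0 to 1.0)."""
    return count / configured_max

print(utilization(5, 10))      # packets: reports 0.5, not 5
print(utilization(512, 1024))  # bytes:   reports 0.5, not 512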

PROBLEM 3:
No matter where in the sending chain I put a property node to read the buffer utilization, it always reports 1/N for the packet utilization, where N is the MAX PACKETS I gave it. If I set MAX PACKETS to 10, it reports 0.1 after opening, 0.1 after the first write, 0.1 after the 2nd write, and 0.1 after the last write. It doesn't matter if I overflow the thing or not.

PROBLEM 4:
If I put a property node in the receiving chain, I get weird answers as well. With MAX PACKETS = 10, I get 0.4 after the 1st read, 0.3 after the 2nd, 0.2 after the 3rd, and 0.1 after the 4th. Fine.
But I get 0.1 after the FIFTH read (the buffer should be empty).
And I get 0.1 before the FIRST read (the buffer should be half full).

The Utilization in bytes is similarly useless.

Anybody know what I'm missing?

Why isn't this property more meaningful?
Steve Bird
Culverson Software - Elegant software that is a pleasure to use.
Culverson.com


LinkedIn

Blog for (mostly LabVIEW) programmers: Tips And Tricks

Message 1 of 3
You are correct on problems 1 and 2; the context help in LabVIEW 7.1 confirms it. Problems 3 and 4 are trickier, because those property nodes only work with buffered reading.

The idea with buffered DataSocket is that you write to the DataSocket server just as you always do. It is on the reading side that buffering comes into play. Once you configure the read to be buffered, the properties work.
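
Here is a rough mental model in plain Python (my own sketch, not the attached LabVIEW example and not the actual DataSocket implementation): the writer just publishes to the server, while each buffered reader keeps its own local queue, so the utilization properties only mean something on the read connection.

# Mental-model sketch: buffering lives with the reader, not the writer.
from collections import deque

class BufferedReader:
    def __init__(self, max_packets):
        self.max_packets = max_packets
        self.queue = deque(maxlen=max_packets)

    def on_update(self, value):
        # The server pushes each new value into this reader's private queue.
        self.queue.append(value)

    def read(self):
        return self.queue.popleft() if self.queue else None

    @property
    def packet_utilization(self):
        # Roughly what BufferUtilization(Packets) seems to report.
        return len(self.queue) / self.max_packets

reader = BufferedReader(max_packets=10)
for v in (1, 2, 3, 4, 5):
    reader.on_update(v)              # the writer never sees this queue at all
print(reader.packet_utilization)     # 0.5 -- meaningful only on the read side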

I attached an example to demonstrate the properties in action.
Message 2 of 3
I did figure out some things about this:

#1 - #2: You can set two limits on the buffer size - if you set a limit of 10 packets, then even a megabyte buffer is full at 10 packets. If you set a limit of 100 bytes, then even a million-packet buffer is full at 100 bytes. So you can look at the "fullness" in either terms, depending on what you need to do. That's contrary to what I was expecting from the terminology, but I understand it now.
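
A little sketch of how I now picture the two limits (my interpretation only, in plain Python): the buffer is effectively full as soon as either the byte limit or the packet limit is reached, and you can read the fullness in whichever terms you care about.

# Sketch: fullness under two simultaneous limits (bytes and packets).
def fullness(byte_count, packet_count, max_bytes, max_packets):
    bytes_frac = byte_count / max_bytes
    packets_frac = packet_count / max_packets
    # Whichever fraction reaches 1.0 first is the one that stops buffering.
    return {"bytes": bytes_frac,
            "packets": packets_frac,
            "full": bytes_frac >= 1.0 or packets_frac >= 1.0}

# Ten small packets against a megabyte byte limit: full by packet count alone.
print(fullness(byte_count=80, packet_count=10,
               max_bytes=1_048_576, max_packets=10))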

#3: Deep down in the help text, I found out that the property only works from the READ end. After thinking about it for a while, this too makes sense: the READ end might be very distant from the WRITE end, so for the WRITE end to know the status of the READ end would require lots of feedback, which means lots of reverse net traffic in some cases (not in mine).

#4: The statement "And I get 0.1 before the FIRST read (the buffer should be half full)" was in error. When I verified the timing of things, the first status report did indeed indicate 0.5, not 0.1. My bad.

#4: WILD GUESS: the never-going-below-one-packet behavior probably reflects the fact that the DS READ operation returns a value every time (after the first one). If there is no new value, it returns the last value it read on some previous call. That messes up the actual counting of values in the buffer, but there it is.
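
To make the guess concrete, here is a toy Python model (assumption only, not how DataSocket is actually implemented) of a read that repeats the previous value when nothing new has arrived, which would make the buffer appear to never drop below one packet:

# Toy model of my guess: a "sticky" read that repeats the last value
# when the queue is empty, so the buffer never looks completely drained.
from collections import deque

class StickyReader:
    def __init__(self):
        self.queue = deque()
        self.last = None

    def write(self, value):
        self.queue.append(value)

    def read(self):
        if self.queue:
            self.last = self.queue.popleft()
        return self.last            # repeat the previous value if nothing new arrived

    def apparent_packets(self):
        # If the stale value counts as "one packet still available",
        # utilization never reports an empty buffer after the first read.
        return max(len(self.queue), 1 if self.last is not None else 0)

r = StickyReader()
r.write(42)
print(r.read(), r.apparent_packets())   # 42 1
print(r.read(), r.apparent_packets())   # 42 1 -- stale value, but still "one packet"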

I ended up reporting the buffer status from the READ end back to the caller (along with other results going back), and letting the WRITE end use that number to judge whether it should write more data or not.
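
For what it's worth, here is the shape of that workaround as a plain-Python sketch (illustrative only; HIGH_WATER is a hypothetical threshold, and the deque stands in for the reader's DataSocket buffer):

# Sketch of the workaround: the read end piggybacks its buffer utilization
# on the results it returns, and the write end throttles on that number.
from collections import deque

MAX_PACKETS = 10
HIGH_WATER = 0.8                      # hypothetical threshold; tune to taste
buffer = deque(maxlen=MAX_PACKETS)    # stands in for the reader's buffer

def reader_cycle():
    value = buffer.popleft() if buffer else None
    utilization = len(buffer) / MAX_PACKETS
    return value, utilization         # results and buffer status travel back together

def writer_cycle(value, reported_utilization):
    if reported_utilization < HIGH_WATER:
        buffer.append(value)          # safe to send more
        return True
    return False                      # hold off; the far end is nearly full

# Writer runs every step; reader runs less often, so its report lags a little.
utilization = 0.0
for step in range(1, 16):
    writer_cycle(step, utilization)
    if step % 3 == 0:
        _, utilization = reader_cycle()
print(len(buffer))                    # stays at or below the 10-packet limit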
Steve Bird
Culverson Software - Elegant software that is a pleasure to use.
Culverson.com


LinkedIn

Blog for (mostly LabVIEW) programmers: Tips And Tricks

Message 3 of 3