LabVIEW


read buffered network published shared variable

Solved!

As I said, I've done some experiments with multiple DataSocket connections.  Everything below has been verified across the network, although in the interest of testing this on one PC, all DataSocket connections in the following VI snippets have been configured to run on "localhost".

 

So, opening multiple DataSocket connections to a Shared Variable is possible and can produce the "traditional FIFO buffer" results that we've come to expect from DataSocket:

 

1st connection reads work, 2nd connection reads work:

 

DataSocket to access SV - Multiple Connections - Works FP.png

 

 

1st connection reads work, 2nd connection reads work (block diagram):

 

DataSocket to access SV - Multiple Connections - Works Snippet.png

 

 

However, I've also seen some odd behavior when the connection 1 and connection 2 reads and writes are interleaved differently:

 

 

 1st connection reads duplicated, second connection reads okay:

 

DataSocket to access SV - Multiple Connections - 1st Duplicated FP.png

 

 

  1st connection reads duplicated, second connection reads okay (block diagram):

 

DataSocket to access SV - Multiple Connections - 1st Duplicated Snippet.png

 

 

 1st connection reads inverted, second connection reads okay:

 

DataSocket to access SV - Multiple Connections - 1st Inverted FP.png

 

 

 

 1st connection reads inverted, second connection reads okay (block diagram): 

 

DataSocket to access SV - Multiple Connections - 1st Inverted Snippet.png

 

 

1st connection reads okay, second connection reads inverted: 

 

DataSocket to access SV - Multiple Connections - 2nd Inverted FP.png

 

 

1st connection reads okay, second connection reads inverted (block diagram):

 

 DataSocket to access SV - Multiple Connections - 2nd Inverted Snippet.png

 

 

 

Again, all the results listed above have been verified with the attached "DataSocket to access SV - Clear 1 Element First" VI (with breakpoints set in various places) by running one instance of the VI locally (localhost) and running an instance of the VI remotely (from my laptop connecting to the SV on my PC -- that is, substituting my PC's IP address for "localhost" when opening the connection).

 

I sort of expected to either see the "traditional FIFO buffer" behavior that we had seen before with one DataSocket connection OR to see the "cached FIFO buffer" behavior that we had seen when having multiple SV Nodes on the diagram.

 

Am I missing something here, or are there some things to be aware of when hosting multiple DataSocket connections to a buffered shared variable?

 

Message Edited by LabBEAN on 02-18-2010 06:55 PM

Certified LabVIEW Architect
TestScript: Free Python/LabVIEW Connector

One global to rule them all,
One double-click to find them,
One interface to bring them all
and in the panel bind them.
Message 31 of 42

 

Back in Message 18:

 

 


BLAQmx wrote:

In the VI you created all peers remain in memory during the run of the VI because they are all in the same VI and share the same buffer.  Therefore, after reading from the SV 5 times we do not clear the buffer because there are still peers on the block diagram that will access the buffer.


 

Because of the results we were seeing, I interpreted this comment to mean that there was one shared buffer between all accessors (SV read nodes) on the diagram.  This is, in fact, not the case.  There are no "cached" situations where, as Wikipedia explains, a "cache operates on the premise that the same data will be read from it multiple times..."

 

Each write node writes buffered data to the deployed shared variable.  Each read node gets its own copy of the written data.  If, for example, in a loop the entire buffer is overwritten between reads from the same read node, then the next time that read node executes, it will read only the newest elements, just like you would expect out of a FIFO.

 

Each Read node (or DataSocket connection) receives its own copy of the buffered data as soon as the connection is made.  In LV 8.6, connections were made when the node first executed (like BLAQmx said earlier).  Today, the connections are made when the VI first runs.  So, all subsequent read nodes "have the data" from preceding writes.

 

What further complicates this is that we are trying to simulate "a real system" (where you don't have multiple nodes executing on the same diagram) by placing multiple nodes on the same diagram.  This opens multiple buffer instances and confuses the results if you're expecting "LabVIEW 8.6" behavior.

 

So... now BLAQmx can clarify what we're seeing above with the DataSocket (perceived) anomalies...

 

 

Message Edited by LabBEAN on 03-01-2010 05:04 PM

Certified LabVIEW Architect
TestScript: Free Python/LabVIEW Connector

One global to rule them all,
One double-click to find them,
One interface to bring them all
and in the panel bind them.
Message 32 of 42

For the sake of time and readability I am going to address each of your DataSocket Examples in an individual post.  It will probably take me a couple of days to address each example.  This post will be focusing on DataSocket to access SV - Multiple Connection - 1st Duplication.vi.  

 

For the sake of clarity, I have modified the VI you provided slightly.  In my VI (attached) all of the read loops read for 10 iterations.  This ensures we read everything in the buffer (this SV has a buffer size of 10).  I have also changed the data I am writing with the DataSocket writes.  Reading the block diagram left to right (in dataflow execution order), the first write writes 0-4, the second write writes 100-104, the third write writes 200-204, and the fourth write writes 300-304.  I have also removed all of the DataSocket reads that are not in For Loops to make my explanations below simpler.

 

Some more definitions/semantics:

 

DS1 will refer to the first DataSocket connection/reference/client we are opening.

DS2 will refer to the second DataSocket connection/reference/client we are opening. 

NTBuf Buffer refers to the Shared Variable client-side buffer that all clients (DS1 and DS2) have access to.

A View refers to items accessible to DS1 and DS2.  For all intents and purposes, DS1's view is DS1's buffer and DS2's view is DS2's buffer.

I will refer to the DataSocket read nodes by the indicator they are connected to; e.g. DS Data, DS Data 2, etc.

 

 

I am going to step through the code and show what is going on in NTBuf's buffer and in DS1's and DS2's views.

 

 Open DS1

NTBuf Buffer = empty

DS1 View = empty

DS2 View = Null (DS2 does not exist yet)

 

DataSocket Writing 0-4 on DS1

NTBuf Buffer = 0,1,2,3,4

DS1 View = 0,1,2,3,4

DS2 View =  Null (DS2 does not exist yet)

 

Open DS2

NTBuf Buffer = 0,1,2,3,4

DS1 View = 0,1,2,3,4

DS2 View = empty

 

DataSocket Writing 100-104 on DS2

NTBuf Buffer = 0,1,2,3,4,100,101,102,103,104 

DS1 View =  0,1,2,3,4,100,101,102,103,104 

DS2 View =  100,101,102,103,104 

 

DataSocket Reading (DS Data) on DS1

DS Data =  0,1,2,3,4,100,101,102,103,104 

NTBuf Buffer  =  0,1,2,3,4,100,101,102,103,104 

DS1 View = empty

DS2 View =  100,101,102,103,104

 

DataSocket Writing 200-204 on DS1

NTBuf Buffer = 100,101,102,103,104, 200, 201, 202, 203, 204 (0-4 were thrown out because NTBuf's buffer has a max size of 10 elements)

DS1 View =  200, 201, 202, 203, 204

DS2 View =  100,101,102,103,104, 200, 201, 202, 203, 204

 

DataSocket Reading (DS Data 2) on DS1

DS Data 2 =  200, 201, 202, 203, 204, 204, 204, 204, 204, 204

NTBuf Buffer = 100,101,102,103,104, 200, 201, 202, 203, 204

DS1 View = empty

DS2 View =  100,101,102,103,104, 200, 201, 202, 203, 204

 

 DataSocket Reading (DS Data 3) on DS2

DS Data 3 =  100,101,102,103,104, 200, 201, 202, 203, 204

NTBuf Buffer = 100,101,102,103,104, 200, 201, 202, 203, 204

DS1 View = empty

DS2 View =  empty

 

DataSocket Writing 300-304 on DS2

NTBuf Buffer = 200, 201, 202, 203, 204, 300, 301, 302, 303, 304 

DS1 View =  300, 301, 302, 303, 304

DS2 View =  300, 301, 302, 303, 304

 

DataSocket Reading (DS Data 4) on DS2

DS Data 4 =  300, 301, 302, 303, 304, 304, 304, 304, 304, 304

NTBuf Buffer = 200, 201, 202, 203, 204, 300, 301, 302, 303, 304

DS1 View = 300, 301, 302, 303, 304

DS2 View =  empty

 

Close DS1 and DS2 

 

Some takeaways.  While all Shared Variables share a buffer under the hood, the buffer we have access to is determined by a connection and each connection's "view."  In the case of DataSocket we create one of these views each time we call a DataSocket Open.  In the case of Shared Variable Static Nodes we create a "view" for each Read node on the diagram.  At the end of this VI NTBuf Buffer still has elements in it, but no one can read from it because this data is in the buffer before the next client connection is opened and its view created.

 

One writer can add elements to everyone's view, but each connection/reference/client can only remove items from its own view.  Any writer can add elements to NTBuf's buffer, but no one can remove items from NTBuf's buffer unless a write overflows the buffer, thereby removing the oldest elements.

 

This is a lot to take in, but once you can separate the concept of a shared buffer (NTBuf's buffer) from the concept of a view (DS1's and DS2's), it begins to make sense.
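If it helps, the whole walkthrough above condenses into the following toy Python script.  As before, the names and the model are made up for illustration (the host-side NTBuf buffer itself is not modeled, and the "repeat the last value when the view is empty" rule is my reading of how a DS Read behaves when no new data has arrived):

from collections import deque

class ToySharedVariable:                     # stands in for the deployed SV
    def __init__(self, size=10):
        self.size = size
        self.clients = []

    def open(self):                          # DataSocket Open
        client = ToyConnection(self)
        self.clients.append(client)
        return client

class ToyConnection:                         # DS1 / DS2
    def __init__(self, sv):
        self.sv = sv
        self.view = deque(maxlen=sv.size)    # this connection's view
        self.last = None

    def write(self, values):                 # DataSocket Write
        for v in values:
            for c in self.sv.clients:        # every open view gets the value
                c.view.append(v)

    def read10(self):                        # the 10-iteration read loop in the VI
        out = []
        for _ in range(10):
            if self.view:
                self.last = self.view.popleft()
            out.append(self.last)            # no new data -> repeat the last value
        return out

sv  = ToySharedVariable(size=10)
ds1 = sv.open()                              # Open DS1
ds1.write(range(0, 5))                       # 0-4
ds2 = sv.open()                              # Open DS2 (its view starts empty)
ds2.write(range(100, 105))                   # 100-104
print("DS Data   =", ds1.read10())           # 0..4, 100..104
ds1.write(range(200, 205))                   # 200-204
print("DS Data 2 =", ds1.read10())           # 200..204, then 204 repeated
print("DS Data 3 =", ds2.read10())           # 100..104, 200..204
ds2.write(range(300, 305))                   # 300-304
print("DS Data 4 =", ds2.read10())           # 300..304, then 304 repeated

Running this should print the same DS Data / DS Data 2 / DS Data 3 / DS Data 4 sequences listed above, duplicated last elements included.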

 

Cheers,  

Mark
NI App Software R&D
Message 33 of 42

 


BLAQmx wrote:

For the sake of time and readability I am going to address each of your DataSocket Examples in an individual post.

 



Don't worry about it, Mark.  Your explanation about writers writing to each read connection's "view" explains them from what I can tell.  (But, speaking of "readability", we have to add our own carriage returns when posting, since the images above make this page really wide.)

 

 


BLAQmx wrote:

At the end of this VI NTBuf Buffer still has elements in it, but no one can read from it because this data is in the buffer before the next client connection is opened and its view created.

 



Hmmm...  a new connection opens with 1 element in the read buffer regardless of the current buffer state or whether a previous run left 10 elements or 0 elements in the buffer.

 

**Any way to fix this so that it behaves the way you mentioned?  That is, a client connection opens and ignores all previous values (i.e. the buffer is "empty")?**

 

 

Notes, from this thread and the telecon, for connecting to a buffered Shared Variable through the DataSocket API (**Do you agree with these?**):

1) For string SVs, there is not a direct correlation between the SV buffer setup and the DS buffer setup

 

2) A DS read buffer is initialized when a connection is opened, so previous writes are ignored

 

3) #2 is usually true, but there are cases when after opening a connection and reading, you can get previous values

 

4) The read timestamp can reflect the time the value was written, but under certain conditions, can appear to reflect the time the data is read

 

5) Each time a connection is opened, there is one element in the read buffer.  This is true even if we undeploy and redeploy the process.  So, we must loop until a timeout is obtained to clear old values and then enter the main acquisition loop (where timeout can be set to -1); see the sketch just below
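For reference, that "drain until timeout, then block" pattern looks roughly like this in text form.  This is a Python-style sketch only; ds_read is a hypothetical stand-in for a DataSocket Read call that returns a value plus a timed-out flag, not a real NI API:

def drain_then_acquire(ds_read, process):
    # ds_read(timeout_ms) is assumed to return (value, timed_out),
    # mirroring DataSocket Read's value and "timed out" outputs.

    # 1) Flush whatever stale elements the connection came up with.
    while True:
        value, timed_out = ds_read(100)      # short timeout, just to empty the read buffer
        if timed_out:
            break                            # nothing old left

    # 2) Main acquisition loop: now it is safe to block on new data.
    while True:
        value, timed_out = ds_read(-1)       # -1 = wait indefinitely for a new write
        process(value)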

 

 

#5 contradicts what I thought you mentioned in the telecon.  Were you under the impression that a read with timeout = -1 would wait indefinitely *IF* we undeployed / redeployed first?  **If you were thinking of something else, let me know**:

 

To double check my logic here, I've included a simple VI to try this out. 

 

Undeploy, redeploy, and then run the VI.  The first read timeout is "-1".  So, if undeploying "cleared" the buffer, then I would expect this read to wait indefinitely:

 

 

DataSocket to access SV - Timeout Check after Undeploy FP.png

 

 

DataSocket to access SV - Timeout Check after Undeploy Snippet.png

 

 

 

 

 

Message Edited by LabBEAN on 03-02-2010 03:09 PM

Certified LabVIEW Architect
TestScript: Free Python/LabVIEW Connector

One global to rule them all,
One double-click to find them,
One interface to bring them all
and in the panel bind them.
Message 34 of 42

LabBEAN wrote:

 


Hmmm...  a new connection opens with 1 element in the read buffer regardless of the current buffer state or whether a previous run left 10 elements or 0 elements in the buffer.

 

**Any way to fix this so that it behaves the way you mentioned?  That is, a client connection opens and ignores all previous values (i.e. the buffer is "empty")?**

 


You are correct.  When a new client comes online, it will have the last value written to the SV in its buffer/view unless that SV has just been deployed.  That being said, there is no way to completely clear the SV's buffer without undeploying and redeploying the variable (thereby avoiding the aforementioned behavior).  Your request is good feedback and I'll submit this to our feature team as a product request.
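In terms of the toy models posted earlier in the thread, that behavior amounts to seeding a new connection's view with the last value written.  Again, a rough Python illustration with made-up names, not the real implementation:

from collections import deque

class ToySharedVariable:
    def __init__(self, buffer_size=10):
        self.buffer_size = buffer_size
        self.last_written = None              # None == variable was just (re)deployed
        self.views = []

    def write(self, value):
        self.last_written = value
        for view in self.views:
            view.append(value)

    def open_connection(self):
        view = deque(maxlen=self.buffer_size)
        if self.last_written is not None:     # a late client is seeded with the last write,
            view.append(self.last_written)    # unless the SV was just (re)deployed
        self.views.append(view)
        return view

sv = ToySharedVariable()
sv.write(41)
sv.write(42)
late_client = sv.open_connection()
print(list(late_client))                      # [42] -- one element: the last value written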

 

 

 

1) For string SVs, there is not a direct correlation between the SV buffer setup and the DS buffer setup

 There is not a direct correlation

 

2) A DS read buffer is initialized when a connection is opened, so previous writes are ignored

All previous writes except the last write are ignored. 

 

3) #2 is usually true, but there are cases when after opening a connection and reading, you can get previous values

Our developers tell me this is possible in some corner cases, but I do not have an example that demonstrates this.   

 

4) The read timestamp can reflect the time the value was written, but under certain conditions, can appear to reflect the time the data is read

The timestamp associated with the Shared Variable is the time when the value change is processed on a machine hosting the SV.  For example, if I have an SV hosted on a Windows machine and I write a new value to it from a cRIO, the timestamp associated with that write is the time when the SV Engine processes the value change on the Windows machine.  If this variable were hosted on the cRIO, the timestamp would be the time the SV Engine on the cRIO registers the value change.

 

5) Each time a connection is opened, there is one element in the read buffer.  This is true even if we undeploy and redeploy the process.  So, we must loop until a timeout is obtained to clear old values and then enter the main acquisition loop (where timeout can be set to -1)

 When I tested this using Variable Nodes LabVIEW would wait (-1 timeout) on the first read node if I had just undeployed and redeployed the variable, and I had not yet written to the SV.  The same is not true when using DataSocket.  I need to take a closer look at our documentation, but this looks like a bug to me.  

 

Mark
NI App Software R&D
Message 35 of 42

Thanks for the followup, Mark.

 


BLAQmx wrote:

The timestamp associated with the Shared Variable is the time when the value change is processed on a machine hosting the SV.


Ok, I'll be on the lookout for the case, which I thought I heard on the telecon, where "the timestamp 'appears' to reflect the time a client 'reads' the variable".

 


BLAQmx wrote:

When I tested this using Variable Nodes LabVIEW would wait (-1 timeout) on the first read node if I had just undeployed and redeployed the variable, and I had not yet written to the SV.  The same is not true when using DataSocket.  I need to take a closer look at our documentation, but this looks like a bug to me.


Could you post a CAR # for this?


 

While R&D is working on the new Variable API (e.g. with buffering), could you throw in the idea to add a timeout input?  When Shared Variables were first released, they didn't have the timeout either --guessing R&D ran out of time for the 8 and 8.2 releases--, and I (and probably lots of others) made a product suggestion and... today we have timeouts.

Message Edited by LabBEAN on 03-03-2010 09:47 AM

Certified LabVIEW Architect
TestScript: Free Python/LabVIEW Connector

One global to rule them all,
One double-click to find them,
One interface to bring them all
and in the panel bind them.
Message 36 of 42

LabBEAN wrote:


Ok, I'll be on the lookout for the case, which I thought I heard on the telecon, where "the timestamp 'appears' to reflect the time a client 'reads' the variable".

 


So it turns out there are a few different places the timestamp can come from, including the case I described.  If we are using an OPC Server to update a bound variable, then the timestamp will be the timestamp of the change on the OPC Server.  The behavior may change again in complicated cases using bound variables, etc.  The lack of consistency has been an issue brought up in R&D several times.

 


LabBEAN wrote:  

BLAQmx wrote:

When I tested this using Variable Nodes LabVIEW would wait (-1 timeout) on the first read node if I had just undeployed and redeployed the variable, and I had not yet written to the SV.  The same is not true when using DataSocket.  I need to take a closer look at our documentation, but this looks like a bug to me.


Could you post a CAR # for this?


 

Because the behavior of the DataSocket API is not consistent with the Static node, but is consistent when accessing other URLs (such as a datasocket URL), I am not entirely sure if this will be considered a bug.  We are currently discussing this question.  If we end up filing a CAR, I will post the ID number in this thread.

 

 


LabBEAN wrote:

 

While R&D is working on the new Variable API (e.g. with buffering), could you throw in the idea to add a timeout input?  When Shared Variables were first released, they didn't have the timeout either --guessing R&D ran out of time for the 8 and 8.2 releases--, and I (and probably lots of others) made a product suggestion and... today we have timeouts.


 

The Shared Variable API will have the timeout/blocking behavior in 2010.  Have you signed up for the 2010 Beta program?  If you want to participate in the beta, you can sign up at this URL: http://ni.com/beta.  If you have any issues getting signed up, let me know and I'll make sure you get added.

 

Mark
NI App Software R&D
Message 37 of 42

After discussing the behavior of DataSocket reads of Shared Variables that have just come online, we have concluded this is not a bug, for two reasons.

 

 

  1. Changing this would require major changes in the underlying code governing DataSocket behavior.
  2. Changing this could adversely affect applications that have already been designed with this behavior in mind.

 

Luckily, even though the first read of a variable will result in the DS Read executing, the node will block as expected on the second call of the DataSocket Read.
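In other words, one way to get the blocking behavior you want is simply to throw away the result of the very first read after opening the connection.  A Python-style sketch of the intent, with ds_read again standing in for a hypothetical DataSocket Read wrapper rather than a real NI call:

def blocking_reads(ds_read, process):
    # ds_read(timeout_ms) is assumed to return (value, timed_out).
    _seeded, _ = ds_read(-1)                 # first call returns immediately with the seeded value
    while True:
        value, timed_out = ds_read(-1)       # from the second call on, -1 blocks until a new write
        process(value)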

 

 

Mark
NI App Software R&D
Message 38 of 42

In the LV2010 documentation I did not find any mention of changes to the NSVs in the way of blocking or timeout behavior.

 

Message 39 of 42

BTW - I want to offer a big thank you to LabBEAN and BLAQmx for spending the time trying to sort through these thorny issues.

I am hoping that all this discussion will result in a new white paper from NI on the subject.


 

Message 40 of 42