
TCP packet while loop

I have a fairly detailed client/server application written in LabVIEW (it works great!). However, I plan to cut out the LabVIEW server portion and rewrite it in Java. What I need to know is: how exactly does TCP work in LabVIEW?

For example, if you look at the Simple Data Server in the LabVIEW example finder, is a TCP packet created on each iteration of the server's while loop and then received in exactly the same order on each iteration of the client's while loop? It looks to me like that is the case. (I hope this makes sense!)

If this is how LabVIEW works, that would be great. I plan on writing a server in Java that will send waveform data to a LabVIEW client, where it will be reconstructed via Build Waveform.vi and then, of course, analyzed by the Spectral Analysis and Tone VIs.

I am assuming that I would have to program a while loop in the Java server to send out (for example) 4 bytes of time stamp information (t0), 4 bytes of actual data (Y), and 8 bytes of spacing between points (dt) for each of the four signals I wish to send. I am also going to send 4 bytes for a heart rate, 4 bytes for a respiratory rate, and 4 bytes for skin response. That makes a total of 76 bytes per TCP packet on each iteration of the server's while loop, and the LabVIEW client must receive this data in the EXACT same order and with the EXACT same number of bytes per field as the Java server sent it.
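In Java terms, I picture something like this on each loop iteration (just a rough sketch; the variable names are made up, and I am assuming java.io.DataOutputStream, which writes big-endian):

import java.io.DataOutputStream;

// "socket", t0, y, dt, heartRate, etc. are all made-up names
DataOutputStream out = new DataOutputStream(socket.getOutputStream());
for (int ch = 0; ch < 4; ch++) {   // the four signals
    out.writeFloat(t0[ch]);        // 4 bytes of time stamp
    out.writeFloat(y[ch]);         // 4 bytes of data
    out.writeDouble(dt[ch]);       // 8 bytes of spacing
}                                  // 4 x 16 = 64 bytes so far
out.writeFloat(heartRate);         // 4 bytes
out.writeFloat(respRate);          // 4 bytes
out.writeFloat(skinResponse);      // 4 bytes -> 76 bytes total
out.flush();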

I hope you all can follow this and any help will be greatly appreciated.
Message 1 of 14
I'm not sure exactly what you're asking. If you need to know "how exactly does TCP work in LabVIEW?", then I don't know the innards.
But if you need to know how to USE it, I can help.
I have used TCP to exchange data between a Delphi (Pascal) program on one machine and a LabVIEW program on another. The same LabVIEW program communicates with other copies of itself on other machines (they're all working on the mathematical solution to a giant problem).

The thing is that data is data. LabVIEW doesn't care whether the other end is Pascal, Java, LabVIEW, Mac, Windows, whatever.

One thing that helped me is a simple rule: Once the connection is established, there is NO DIFFERENCE between client and server. Either one can talk. Either one can listen. What makes them different is totally up to you. What they say is totally up to you.

The client is the one that initiates the connection (dials the phone). The server is the one that waits on the connection (answers the phone). But after the connection is made, there is no distinction.

You need to devise a protocol that exchanges the data you need. You need to figure out if you want to open / close a connection for each packet, or open a connection and transmit packets blindly, or have the server request them.
For a case as simple as you describe, you might just have the LabVIEW server wait on a connection, read 76 bytes, unflatten them from a string into a cluster, and store the cluster of data. Then read (with a timeout) another 76 bytes and repeat.

This protocol is simple, but if you ever miss a byte, you're hosed. Perhaps you need a request-response protocol to keep things in sync.

You'll need to pay attention to endian order. LabVIEW is always big-endian, regardless of platform. Macs are big-endian. Intel machines are little-endian. I don't know about Java. If you don't know what that means, ask.
Steve Bird
Culverson Software - Elegant software that is a pleasure to use.
Culverson.com


Blog for (mostly LabVIEW) programmers: Tips And Tricks

Message 2 of 14
Here are my files; maybe they will make what I am trying to figure out a little clearer. Another question I have pertains to the TCP Write on the server and the TCP Read on the client. For example, when I run TINIServerDemo2.vi and probe the string lengths from Get Waveform Components.vi, I get 16 bytes from the time stamp data (t0), 8 bytes from the distance between points (dt), and 2052 bytes from the actual waveform data (Y). Each of these pieces of data is flattened to a string and then sent to a TCP Write.

I am confused for two reasons. The first is on the server side. Why do I need to run the time stamp data (t0) and waveform data (Y) [but not the (dt) values?] through a String Length.vi to a TCP Write and then run them again through another TCP Write? It looks to me as if you are telling the server to write a certain length in bytes and then write the actual data. That is the best sense I can make of it. Now, on to the client side.

On the client, I am telling TCP Reads to read 4 bytes of time stamp data, 8 bytes of distance between points, and then 4 bytes of actual waveform data (repeated again for each simulated signal). Then I take those outputs, typecast them (except for the dt values), and feed them into another TCP Read that apparently reads the actual data.

I am having a hard time reconciling the fact that I am determining the length to be 2052 bytes of waveform data on the server and only reading 4 bytes of it on the client side. I am VERY confused about the logic of how many bytes are being written and read at a given time. I feel I need to understand this before I can structure the server in Java.

I have included my files for you to look at. I am trying to explain the best way I can; I am sure it would be easier in person!

Thank you very much.
Message 3 of 14
Why do I need to run the time stamp data (t0) and waveform data (Y) [but not the (dt) values?] through a String Length.vi to a TCP Write and then run them again through another TCP Write?


This is commonly used on variable-length data - the four-byte value tells the receiver how many bytes are in the following chunk.
It's necessary for the Y case - you could have 101 values in one packet and 102 in another (maybe not in your particular case, but in general, when you transmit an array, you include the element count). The same thing applies to strings.

I see no reason for it anywhere else. As long as both ends know that a double is 8 bytes, then just transmit the 8 bytes.

I take it you did not design this protocol? (Otherwise you wouldn't be asking ME about it...)

If you have liberty to change the protocol, then I would consider using a cluster:

Bundle the Skin Response, Heart Rate, Resp. Rate, and the four waveforms into a cluster.
(Use a typedef for easier changes).

Flatten the cluster into a data string.
Get the string length (unless you can guarantee the length will be the same every time).
Flatten the length into a string.
Transmit the length string and the data string.

On the receiving end, receive 4 bytes, unflatten into an I32 (data length).
Then receive that many bytes, unflatten them into a cluster, and there's your whole data block. No muss, no fuss.
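In Java, the sending half of that protocol might look like this (a sketch - I'm assuming "socket" is your connected java.net.Socket, and a java.io.DataOutputStream, which writes big-endian, the same byte order LabVIEW uses):

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;

// Build the whole data block in memory first...
ByteArrayOutputStream buf = new ByteArrayOutputStream();
DataOutputStream body = new DataOutputStream(buf);
// ...writeFloat()/writeDouble() calls for each field of your cluster go here...
byte[] packet = buf.toByteArray();

// ...then send the four-byte length, followed by the data itself.
DataOutputStream out = new DataOutputStream(socket.getOutputStream());
out.writeInt(packet.length);  // the receiver reads these 4 bytes first
out.write(packet);            // then reads exactly this many more
out.flush();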

I am having a hard time reconciling the fact that I am determining the length to be 2052 bytes of waveform data on the server and only reading 4 bytes of it on the client side.

You are misreading the code. It reads four bytes to determine HOW MANY MORE BYTES to read. It then reads that many more.

Steve Bird
Culverson Software - Elegant software that is a pleasure to use.
Culverson.com


Blog for (mostly LabVIEW) programmers: Tips And Tricks

Message 4 of 14
I really appreciate your help with my VIs. I changed them according to your suggestions and they work well (and they are a bit more visually pleasing). I have a few more small questions about the client/server model in LabVIEW.



CoastalMaineBird wrote: the four-byte value tells the receiver how many bytes are in the following chunk.


Is 4 bytes an arbitrary number in LabVIEW? I tried it with 2 bytes and 3 bytes, but I encountered errors. It is not much of a big deal, but knowledge is a wonderful thing!


You are misreading the code. It reads four bytes to determine HOW MANY MORE BYTES to read. It then reads that many more.


If this is the case, then I assume the way I have it set up now means I can create one big packet on the Java server, send the length of the packet and then the actual data packet, and the LabVIEW client will read 4 bytes to determine how many more to read. I assume the actual lengths of the 4 brain wave signals, heart rate, skin response, and respiratory rate will not matter, as long as they are sent in a specific order and received in the same order according to the way I have it set up on my LabVIEW client.

I have attached the new VIs for you to look at.

Again, thank you for your help!
Message 5 of 14
Is 4 bytes an arbitrary number in LabVIEW? I tried it with 2 bytes and 3 bytes, but I encountered errors.


--- Four bytes was used because that's the length of an I32 - the native integer type.
As long as your data size doesn't exceed 64K, you can use two bytes if you want - just typecast the string-length number to a U16, flatten THAT to a string, and send it. Read TWO bytes on the other end, unflatten to a U16, then read THAT MANY more bytes, and you're there.
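In Java terms that's just a writeShort instead of a writeInt (a sketch - "out" is an assumed java.io.DataOutputStream on the socket, and "packet" the flattened data bytes):

out.writeShort(packet.length);  // two-byte (U16) length, big-endian - keep the data under 64K
out.write(packet);              // then the data itself, as before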

You could figure out a way to use three bytes, I suppose, but you'd be the first person to do it, because no one else has ever needed to. ;->

In your case, I submit that the advantage of sending a two-byte count instead of a four-byte count is dwarfed by the increased complexity of the code. Saving two bytes out of the thousands you're sending is just not worth it. Send the four bytes.


It is not much of a big deal, but knowledge is a wonderful thing!


--- Indeed.

If this is the case, then I assume the way I have it set up now means I can create one big packet on the Java server, send the length of the packet and then the actual data packet, and the LabVIEW client will read 4 bytes to determine how many more to read.


--- Yes. BUT....

--- Sounds simple, doesn't it? But, as I mentioned before, you have to pay attention to the endian-ness of your data. I strongly suggest that your first Java-to-LabVIEW test NOT be your whole cluster; instead, send a simple four-byte integer, namely the value 1. That's right, the number 1. If you receive it and get a 1, you're golden. If you receive it and get the value 16,777,216, you have an endian problem. If you get something else, you have some other problem. If you don't know about endian problems, ask.
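The Java end of that test could be as small as this (a sketch; the host and port are just examples):

import java.io.DataOutputStream;
import java.net.Socket;

public class EndianTest {
    public static void main(String[] args) throws Exception {
        Socket sock = new Socket("localhost", 6340);  // example host and port
        DataOutputStream out = new DataOutputStream(sock.getOutputStream());
        out.writeInt(1);  // DataOutputStream writes big-endian, same as LabVIEW,
        out.flush();      // so the client should unflatten this as an I32 equal to 1
        sock.close();
    }
}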

P.S. Give your CPU a break now and then. Your loop has no WAIT function in it, so you're composing a signal and transmitting it over and over, as fast as possible, doing nothing else. Insert a WAIT UNTIL NEXT MS MULTIPLE with a constant of 100 inside your transmitting loop. That throttles you back to a 10 Hz transmission rate. Your other programs will love you for it.

Steve Bird
Culverson Software - Elegant software that is a pleasure to use.
Culverson.com


Blog for (mostly LabVIEW) programmers: Tips And Tricks

Message 6 of 14
I have a simple Java program that sends the number 12 through a TCP/IP stream on localhost. I am assuming the form in which the LabVIEW client receives it will be a string (the default for all TCP/IP connections?). I modified the LabVIEW Simple Client example to read 4 bytes from the stream and display them in a string indicator, and it works fine. However, I also tried to use a cluster that contains a single string, connected to an Unflatten From String and then unbundled (because this is the format I am using in my other VIs that you have already seen), to be displayed in a second string indicator. That does not work. As a side note, I can add a Decimal String To Number.vi to convert the first string indicator to a numerical value and display it in a numeric indicator. But again, this seems too simple for what I am doing. Or it appears that way to me.

Is there a way that Java can send "clusters" like LabVIEW, or will I have to find another way to send multiple signals? Or would sending different pieces of data in one large packet be considered a cluster? Or am I not doing something correctly?

I have attached my Java code (NetBeans 3.6 IDE) and the Simple Client.vi.

Thanks again for your patience and your guidance.
Message 7 of 14
I have a simple Java program that sends the number 12 through a TCP/IP stream on localhost.


The code I see sets "x" to 15. If you're receiving a "12", something's wrong...


I am assuming the form in which the LabVIEW client receives it will be a string (the default for all TCP/IP connections?)


Not only is it the default, there is no other option. Strings are the only thing the TCP functions send and receive. That's what all the flattening and unflattening is about.

However, I also tried to use a cluster that contains a single string, connected to an Unflatten From String and then unbundled (because this is the format I am using in my other VIs that you have already seen), to be displayed in a second string indicator. That does not work.



Judging by your LabVIEW code, you're still not understanding something. You receive 4 bytes. That is the length of a native integer. If you got the number 12 (or 15) to work, then it's because your Java transmitter sent the value as a four-byte value (not unreasonable).

But if you want to send a string, you can't just cram any old string into a four-byte value. That's where the two-part transmission comes in - the first four bytes tell the receiver how many MORE bytes are following. The receiver knows to receive four bytes, convert it to an integer (displaying it as a string is not useful), and call it N. The transmitter then sends N more bytes. The receiver receives N more bytes, then converts the data from string to whatever. Every transmission is in two parts, every reception is in two parts.
If you want to transmit a string from Java, you'll have to send the length first, followed by the string itself.
Something like this (forgive me if my Java's a bit rusty - note the DataOutputStream, which sends the length as four big-endian bytes rather than as text):

String myString = "This is a test string";
// "dout" being a java.io.DataOutputStream wrapped around your connection's output stream
byte[] bytes = myString.getBytes("US-ASCII");
dout.writeInt(bytes.length);  // send out the length as four big-endian bytes
dout.write(bytes);            // send out the string itself

On the LabVIEW receiving side, receive four bytes, and unflatten them into an I32 (display it as "N" if you like). Then receive N more bytes, and display it directly as a string.

Bundling a string (or anything else) does nothing as far as the byte structure goes - it doesn't make the result any longer (or shorter).

Whenever you get lost, go back to the working LabVIEW server, where you flatten the huge cluster and send it. Display that string using hex display, and you can see exactly what gets sent. Duplicate that in your Java server.

Steve Bird
Culverson Software - Elegant software that is a pleasure to use.
Culverson.com


Blog for (mostly LabVIEW) programmers: Tips And Tricks

Message 8 of 14

@CoastalMaineBird wrote:
The code I see sets "x" to 15. If you're receiving a "12", something's wrong...

That is my fault; I originally had the value "12" being sent.


Judging by your LabVIEW code, you're still not understanding something. You receive 4 bytes. That is the length of a native integer. If you got the number 12 (or 15) to work, then it's because your Java transmitter sent the value as a four-byte value (not unreasonable).

OK, it is making a lot more sense now. I had a hard time understanding why I needed two TCP Reads and Writes. I remember now that you need to send a length first and then the actual data in Java/C++, etc.



If you want to transmit a string from Java, you'll have to send the length first, followed by the string itself.
Something like this (forgive me if my Java's a bit rusty - note the DataOutputStream, which sends the length as four big-endian bytes rather than as text):
String myString = "This is a test string";
// "dout" being a java.io.DataOutputStream wrapped around your connection's output stream
byte[] bytes = myString.getBytes("US-ASCII");
dout.writeInt(bytes.length);  // send out the length as four big-endian bytes
dout.write(bytes);            // send out the string itself

On the LabVIEW receiving side, receive four bytes, and unflatten them into an I32 (display it as "N" if you like). Then receive N more bytes, and display it directly as a string.


I will try the logic of this code in Java and see if I can get it to work... it sounds like it will! Not only am I a LabVIEW newbie, but Java is new to me as well. I guess I get a double whammy.


Bundling a string (or anything else) does nothing as far as the byte structure goes - it doesn't make the result any longer (or shorter). Whenever you get lost, go back to the working LabVIEW server, where you flatten the huge cluster and send it. Display that string using hex display, and you can see exactly what gets sent. Duplicate that in your Java server.

OK, it may be a stumbling block for me to get this to work, but at least I know now that the bundling "logic" in LabVIEW causes no "incompatibility" errors, if you know what I am getting at.

Thank you very much for your help in this matter. I will post again if I get stuck. Thanks again for all the help.
Message 9 of 14
OK, it may be a stumbling block for me to get this to work, but at least I know now that the bundling "logic" in LabVIEW causes no "incompatibility" errors, if you know what I am getting at.


I strongly suggest you do not jump whole hog into using your app's real cluster. First, get an I32 across the fence (you've already done that).
Then send a string.
Then send a cluster of an I32 and a string (change both ends; see the sketch below).
Then add a DOUBLE to the cluster, and make that work.
Then add an array of doubles, and make that work.
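For the cluster step, keep LabVIEW's flattened form in mind: the elements appear in cluster order, and each string (or array) is preceded by its own four-byte length. A Java sketch of the I32-plus-string case ("out" is again an assumed DataOutputStream on the socket, and the outer four-byte length prefix is the same scheme as before):

import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;

ByteArrayOutputStream buf = new ByteArrayOutputStream();
DataOutputStream body = new DataOutputStream(buf);
body.writeInt(42);                        // the I32 element
byte[] s = "hello".getBytes("US-ASCII");
body.writeInt(s.length);                  // LabVIEW string: 4-byte length first...
body.write(s);                            // ...then the characters
byte[] packet = buf.toByteArray();

out.writeInt(packet.length);              // overall 4-byte length prefix
out.write(packet);                        // the flattened cluster itself
out.flush();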

Steve Bird
Culverson Software - Elegant software that is a pleasure to use.
Culverson.com


Blog for (mostly LabVIEW) programmers: Tips And Tricks

Message 10 of 14