01-09-2009 04:39 AM
I'm having a similar problem with a data stream read using the TCP/IP Read function. I cast the data, but the cast function was producing different results on different PCs (two Dells with the same build, plus one Sony laptop, all WinXP SP2), so before reading this thread I was fiddling with the data representation and found that I64 worked for one Dell, double precision for the other, and so on.
I'll have a look at byte order now, but I would like my code to be processor independent. Is there a way to check which processor is being used from within LabVIEW?
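In C I would check the host byte order at run time with something like the sketch below (just an illustration of what I mean, not LabVIEW code, and hopefully there is a native way to do the same thing):

#include <stdio.h>
#include <stdint.h>

/* Rough check of host byte order: store a known 32-bit value and look at
   the first byte in memory. On x86 (little endian) the low byte 0x04 comes
   first; a big-endian host would show 0x01 first. */
int main(void)
{
    uint32_t probe = 0x01020304;
    unsigned char *p = (unsigned char *)&probe;

    if (p[0] == 0x04)
        printf("host is little endian\n");
    else if (p[0] == 0x01)
        printf("host is big endian\n");
    else
        printf("unexpected byte order\n");

    return 0;
}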
01-09-2009 05:52 AM
The three PCs I am using:
x86 Family 15 Model 2 Stepping 9
x86 Family 6 Model 15 Stepping 11
x86 Family 6 Model 23 Stepping 6
.. what is my cast function doing?!
01-09-2009 07:11 AM
I guess you have to repack your string before sending it to the non-LabVIEW program (see posted picture). I think Visual C uses little endian. As an example: in LabVIEW the SGL number 123.123 is equal to the hex byte array 42 F6 3E FA. Doing the same in Visual C for a float will give FA 3E F6 42. The same bytes, but in a different order. The Flatten To String function will handle this job for you; use the detailed help for pointers to examples. On this page http://www.61131.com/download.htm you will find a tool if you want to play with numbers (floating point to/from hex/binary conversion).
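To make the byte order point concrete, here is a small C sketch (an illustration only, assuming a little-endian x86 sender) that prints the in-memory bytes of the float 123.123 and then reverses them into the big-endian order LabVIEW expects:

#include <stdio.h>
#include <string.h>

/* Show the native bytes of 123.123 on a little-endian PC (FA 3E F6 42)
   and reverse them into big-endian order (42 F6 3E FA). */
int main(void)
{
    float value = 123.123f;
    unsigned char raw[4];
    unsigned char swapped[4];

    memcpy(raw, &value, sizeof raw);      /* bytes as the CPU stores them */
    for (int i = 0; i < 4; i++)
        swapped[i] = raw[3 - i];          /* reverse the byte order */

    printf("native : %02X %02X %02X %02X\n", raw[0], raw[1], raw[2], raw[3]);
    printf("swapped: %02X %02X %02X %02X\n", swapped[0], swapped[1], swapped[2], swapped[3]);
    return 0;
}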
@grahamwebb
I think you should look at your code. Perhaps you lose or gain some extra unwanted bytes in your programming/transmission.
01-09-2009 10:25 AM
grahamwebb wrote: I'm having a similar problem with a data stream read using the TCP/IP Read function. I cast the data, but the cast function was producing different results on different PCs (two Dells with the same build, plus one Sony laptop, all WinXP SP2), so before reading this thread I was fiddling with the data representation and found that I64 worked for one Dell, double precision for the other, and so on.
I'll have a look at byte order now, but I would like my code to be processor independent. Is there a way to check which processor is being used from within LabVIEW?
If it is I64 vs. DBL, byte order is not your problem!!!
I seriously doubt that your analysis is correct. If typecast handling depended on processor stepping, we would all have fallen on our noses many times before. 🙂
Create a string indicator after the TCP Read, run the VI so the string indicator contains data, then change the indicator to a diagram constant and disconnect the broken wire to it. This way the constant contains typical raw data. Now attach the VI to your reply and tell us what kind of numeric data is expected to be in the string.
I suspect one of the following:
01-09-2009 11:42 AM
Will do, thank you, but it will take me a while to cut the code down to something I can post here, and I'm going home now, so perhaps over the weekend...
There is no handshaking with the source; it simply chucks out data to a TCP/IP port at about 100 Hz. I'm using producer-consumer loops with a queue to prevent losing anything when the network load changes.
One data packet contains one frame of motion capture data (xyz coordinates for n markers). Packets are padded so they are all the same size, so I use the 'packet size' byte to read the right amount of data and stay in sync with the source (and hope I can find the packet size byte!). But once it's reading and queuing data, that all works nicely and each coordinate appears when and where I expect it. On my PC up here it displays double precision nicely, but on other PCs the data gets cast as silly numbers, e.g. 1.3245x10^312. At one stage I scaled and shifted the data and found it is just the expected data, scaled and shifted by something like 10^312.
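To give a rough idea of what I mean by the packet layout, parsing one frame in C would look something like the sketch below. The field layout and names here are placeholders, not the real protocol spec; the point is just that each DBL is assembled from its bytes in an explicit order, so the result does not depend on which PC receives it:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Hypothetical frame layout for illustration only: one 'packet size' byte
   followed by markers, each marker being three big-endian 8-byte doubles
   (x, y, z). The real packet format may differ. */
static double dbl_from_big_endian(const unsigned char *p)
{
    uint64_t bits = 0;
    double   out;

    for (int i = 0; i < 8; i++)           /* most-significant byte first */
        bits = (bits << 8) | p[i];
    memcpy(&out, &bits, sizeof out);       /* reinterpret the 64 bits as a double */
    return out;
}

int main(void)
{
    /* Fake packet: size byte + one marker at (1.0, 2.0, 3.0), big endian. */
    unsigned char packet[] = {
        25,
        0x3F, 0xF0, 0, 0, 0, 0, 0, 0,      /* 1.0 */
        0x40, 0x00, 0, 0, 0, 0, 0, 0,      /* 2.0 */
        0x40, 0x08, 0, 0, 0, 0, 0, 0       /* 3.0 */
    };

    const unsigned char *p = packet + 1;   /* skip the packet size byte */
    printf("x=%g y=%g z=%g\n",
           dbl_from_big_endian(p),
           dbl_from_big_endian(p + 8),
           dbl_from_big_endian(p + 16));
    return 0;
}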
I'll send more details later.