06-16-2022 03:35 PM
Greetings:
I am attempting to communicate with a laser via TCP and am having difficulty coming up with a reliable way to deal with the variable-length data being returned. I have a given list of commands I can send to the laser; each one either sets a parameter or queries the value of that parameter. For example, the query "$status ?" will return "$STATUS\s4006000000000000\r\00". (I have enabled the '\' Codes display option here.)
The difficulty I'm having revolves around the "\r\00" at the end of the text string. I cannot come up with a way for LV to read the appropriate number of bytes without throwing an error. I started with the TCP Read function in "standard" mode and tried to figure out how many bytes would be returned from each query. The problem is that most queries return varying amounts of data. Another example: "$dpw ?" could return "$DPW\sXXX\r\00" or "$DPW\sXX\r\00", where XXX and XX are three- and two-digit numbers depending on how the laser is configured. There is no way for me to know in advance whether I'll be reading a three- or two-digit number.
I thought I had found a workaround that worked perfectly: change the TCP Read mode to "immediate". With nothing else changed except the read mode, I could set the number of bytes to read to a large value and the read would always return everything without errors. I could then parse out the pertinent data with ease. This always worked when I ran the code with Highlight Execution enabled. With Highlight disabled and the code running at full speed, the read data eventually gets corrupted and falls out of sync with the TCP Write.
My main code structure waits for user input and if no input happens after a few seconds, it runs a sequence of queries to update the display with things like shot count and current laser status and temperature.
The above shows three of the six queries I am trying to troubleshoot. Sequence frame 4/5 is a subVI I wrote to decode the STATUS word from the example in the first paragraph. It becomes immediately obvious when the communication has been corrupted because the status decoding turns nonsensical.
I have not yet tried putting wait timers in each sequence frame to see if that makes a difference. Stupid me would have thought that each frame would complete before moving on to the next, but something is definitely going wrong when I run at full speed.
Any help would be greatly appreciated! Thanks.
06-16-2022 04:36 PM
First, for the love of God, get rid of the stacked sequence structure. Learn about state machines. The stacked sequence structure should be permanently removed from the language.
Now, for your specific question. What I do when I have variable-length data is create my own TCP read: a read of 1 byte with the desired message timeout, usually a minimum of several seconds. Once a single character is read, I enter a loop where I read large blocks of data but use a short timeout, something on the order of 50 ms or so. I keep executing this loop, concatenating the data I read, until I get a timeout error. I throw away the error and then pass the received data along for processing. This works well in a command/response situation. If you are receiving asynchronous data, I would generally have a separate parallel loop which continually reads and passes any data to another process that is responsible for parsing it. That task would know how to interpret the data and would process it according to the protocol/messaging scheme being used. In your case it appears all of the data is terminated with \r\00. My parser for this would look for that pattern in the data and then process the single item.
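Since LabVIEW diagrams can't be pasted here as text, here is a rough sketch of that read scheme in Python's standard socket module, just to show the logic. The timeout values and block size are placeholders to tune for your device, not anything taken from this thread:

```python
import socket

def read_message(sock, first_timeout=5.0, inter_block_timeout=0.05):
    """Read one complete, variable-length reply from a command/response device."""
    sock.settimeout(first_timeout)        # long timeout: wait several seconds for the reply to start
    data = sock.recv(1)                   # the initial 1-byte read
    sock.settimeout(inter_block_timeout)  # then a short timeout (~50 ms) between blocks
    while True:
        try:
            block = sock.recv(4096)       # read a large block of whatever has arrived
            if not block:                 # peer closed the connection
                break
            data += block                 # concatenate everything received so far
        except socket.timeout:
            break                         # short timeout expired: the message is complete
    return data                           # discard the timeout "error" and hand the data off
```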
06-17-2022 02:52 AM - edited 06-17-2022 02:53 AM
Mark's solution is a valid one, and really the only one if you want to avoid additional software installations for your end application. I don't usually recommend NI-VISA for TCP/IP communication unless it is part of an instrument driver that should also work for an instrument supporting RS-232, USB TMC, and/or GPIB. But if you already need NI-VISA for other parts of your program, or don't mind the additional install on your end users' computers, you might also look into doing your TCP/IP communication over VISA.
NI-VISA only supports single-character message termination detection, but it lets you configure which character that should be. It is much more flexible in that respect than the native TCP Read node, which only offers a CRLF termination mode. That is a bit of a limitation but an understandable one, since just about any official RFC-based protocol standardized long ago on CRLF line termination wherever line endings are part of the protocol. That said, your device really is an exotic thing. That it uses only a single termination character is a bit unusual, but that it also sends the terminating 00 byte with the message hints at a strange firmware implementation.
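For what it's worth, here is roughly what that VISA route could look like, sketched with the PyVISA package rather than the LabVIEW VISA nodes. The IP address and port are made up, and you would still have to deal with the stray 00 byte that follows each \r:

```python
import pyvisa

rm = pyvisa.ResourceManager()
# Raw TCP socket resource; the IP address and port here are placeholders.
laser = rm.open_resource("TCPIP0::192.168.0.10::10001::SOCKET")
laser.write_termination = "\r"
laser.read_termination = "\r"   # VISA ends each read at this single character
laser.timeout = 5000            # ms

reply = laser.query("$status ?")
# Note: the trailing \00 the laser sends after \r is NOT consumed by the
# terminated read; it will show up at the start of the next reply and
# needs to be stripped there.
print(reply)
```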
06-17-2022 08:13 AM
Thanks for the reply. First off, I make no claims to elegance in my software development. I do what I need to do to get the job done. I have a whopping 4 hours of actual LV training, which took place in 2000 in the form of a class entitled "Intro to LabVIEW", and the only real "programming" class I had was as a senior in high school back in 1983. So things like "state machines" are foreign concepts to me and feel like something bitheads are taught in their college classes.
I was thinking about doing what you suggest regarding a loop to grab data byte by byte but was hoping an easier solution was out there.
Thank you.
06-17-2022 08:28 AM
Thanks for the reply. I think at one point I tried using VISA, but creating a VISA resource name was problematic. If I recall, I had to use NI MAX all the time to manually specify an IP address so that a resource could be created. Since we run DHCP, having to do this daily if not hourly would become problematic. When I recently found the TCP functions, I thought my problems were solved. Apparently not.
What I am attempting to do is create a program to control a new laser system we are developing. The firmware developers think nothing of having to use programs like PuTTY to run the lasers. Being a lab rat, fat-fingering every single command is beyond frustrating. I will ask the FW people, though, why they chose this particular termination sequence, since CRLF seems to be so standard.
-Scott
06-17-2022 09:15 AM - edited 06-17-2022 09:47 AM
If you have that direct contact with your developers, definitely talk with them. Make them change the "\r\00" into a "\r\n" sequence and you are done, as you can then use the LabVIEW CRLF TCP Read mode. That \r\00 will mess with other potential software too, making your company's device a pain in the ass to use for many people.
Although that might be intentional with some manufacturers: we don't want any stinking hackers trying to talk to our device on their own; instead, every honest user should have to buy our own software solution (which we haven't yet decided when we will abandon, once it fails to bring in the hoped-for revenue because of its overpriced license fee).
Many hardware manufacturers seem to think that a software driver is another chance to make a few extra bucks, but they very quickly find out that:
a) they are not in the software development business for a very good reason;
b) supporting software costs a lot of time and effort and is difficult to maintain, as it is just a side business with people leaving regularly; then there is no one left to keep supporting the existing solution, and training someone new is too costly;
c) because their software is expensive for the relatively crappy experience it offers, nobody buys it, and because the protocol is too complicated for average users (as opposed to totally determined hackers) to handle, the few hardware components that were sold initially keep collecting dust on shelves and nobody buys more of their hardware, as it is simply impossible to get the "crap" to work. It takes years to build a good reputation for a good product and just a few public online reports in various fora to destroy it.
Instead, providing some simple examples in one or two programming languages would make a huge difference, and if provided for free, they would not create huge expectations of fully fledged implementations.
If I were developing hardware, my current choice of examples would be a Python and a C source code library, and then a LabVIEW one. C# might be in there too, after the C sample. Anything else can be gleaned from these for any other programming language if there is even a little determination involved. LabVIEW would be my first choice for internal testing and development, but Python and C/C# examples have a much wider reach in today's world.
NI got successful by selling relatively expensive hardware and providing fully fledged software API drivers for it for free. Many other manufacturers struggled to get their cheaper hardware into the hands of users: they had smaller margins because they lacked the volumes for the custom ASIC development that brought the cost of an NI board far below that of its competition, with better functionality and performance to boot. All the while, NI kept giving away for free the driver software that the others tried to sell for extra money.
06-17-2022 12:59 PM
My solution isn't reading a byte at a time. It only reads a single byte for the initial read. After that it is reading blocks of data. That is much more efficient than a brute-force loop of single-byte reads.
As for the programming side of things, everyone has to start some place. My advice about getting rid of the stacked sequence structure was not meant to insult your coding abilities; it is about helping to teach you. There are TONS of issues you will encounter when using stacked sequence structures. Simply put, they should NEVER be used. I am trying to help you avoid lots of headaches in the future. There are tons of examples for state machines. They are pretty simple to pick up, and it is a much better approach than the stacked sequence structure. LabVIEW ships with state machine examples that you could use as the starting point for your application. Trust me, you will appreciate my advice in the long run.
06-17-2022 05:32 PM - last edited on 11-06-2024 06:50 PM by Content Cleaner
The others here definitely have good advice, and I'd recommend listening to them.
I'd like to address your question of "it works in highlight execution mode but not in regular mode". Here's the timing diagram of what's happening from the POV of your instrument:
1: Command received
2: Process command, create reply
3: Transmit reply byte 1
4: Transmit reply byte 2
...
5: Transmit reply byte n
Your LV code does indeed wait on each sequence to finish. The issue is that when you're in highlight execution mode, you send the command (step 1), then while the little dot moves on the wire, your instrument finishes steps 2 through 5. By the time your code calls "TCP Read", it's all finished.
In "Immediate" mode, TCP Read returns when it gets any number of bytes. (https://www.ni.com/docs/en-US/bundle/labview-api-ref/page/functions/tcp-read.html)
This means that if you call Write (step 1 happens) and then immediately call Read, the instrument may only be at step 4, where not all of the bytes have been transmitted yet. You'll read too few bytes and get out of sync.
So, you have the issue of "how do I know what to return from TCP Read". Well, Termchars are the best way, but that's been discussed. The only other two options are return when you have a certain number of bytes or return after a certain time has elapsed. You don't know ahead of time how many bytes you have to get, so that leaves time alone.
In your application, your laser isn't sending unprompted data, so you can be sure that one and only one message will come back when you send a command. You also know that it probably replies very quickly, and you know an upper bound on the number of bytes to receive. Right now this is basically what you're doing, except you're reading too fast.
The simplest answer here is to just add a "Stall Data Flow.vim" between TCP Write and TCP Read. Give it a value of, say, 100 ms (you can play with this; just make sure it's long enough to fully process and return a reply. You mention firmware, so it could be crazy fast and you could use 2 ms or something). Turn off Immediate mode, set your timeout low, and ask for 100 bytes (something more than you'll ever receive). See if that works.
I'd recommend making a subVI that does this for you too: a "Send message and get reply" function that bundles up the TCP Write, the delay, and the Read. Now you can start to build some reusable code that calls one function you've already debugged instead of having to repeat all of this code for every single command.
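In text form, that helper might look something like this Python socket sketch of the same write, delay, read idea. The delay, byte count, and terminator handling are guesses you would tune for your laser:

```python
import socket
import time

def send_query(sock, command, delay_s=0.1, max_reply=100):
    """Send one command, wait for the laser to build its reply, then read it."""
    sock.sendall(command.encode("ascii"))  # TCP Write
    time.sleep(delay_s)                    # the "Stall Data Flow" delay
    sock.settimeout(1.0)                   # short timeout so a missing reply fails fast
    try:
        reply = sock.recv(max_reply)       # ask for more bytes than any reply will contain
    except socket.timeout:
        return ""                          # no reply arrived within the timeout
    return reply.rstrip(b"\r\x00").decode("ascii")  # strip the \r\00 terminator before parsing

# Usage sketch:
# status_text = send_query(laser_sock, "$status ?")
```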
(BTW, the other more complicated suggestions are good ones. The benefit there is that, if you made those functions, they're reusable anytime you have a similar application. Just adding a delay function should make this work for THIS application, but you'll have to repeat it every time. Same thing with the state machine suggestions- it WILL make your code more expandable, but I get it. Sometimes you just need to bang out a little helper program to automate a couple settings and you don't have the time budget to make something better. Maybe you have a deadline in 5 hours and just need some data. Still, I'd recommend following their advice if you see yourself using this code for a long time.)
06-21-2022 03:37 AM
I would have written the same answer.
Just a comment about sufficient timeout:
If the device sends back the requested information in five data packets (i.e. internally, the device calls an equivalent of TCP Write five times), it is entirely possible that it takes 200 ms for the entire reply to arrive at the PC.
The reason is that for each packet sent, the sender waits for an acknowledgement message [ACK] from the receiver, so that it can re-send the packet if something goes wrong. But many devices do not have the capability to store many packets in memory, so they simply send a packet, wait for the [ACK], and then send the next.
Now, to avoid flooding the network with [ACK] messages, the receiver can include the acknowledgement in the next normal data packet it sends back. If no data is to be sent back, it sends a standalone [ACK] only after a timeout, which is 50 ms on Windows 10.
That means the device sends the first packet, Windows replies with an [ACK] 50 ms later, the device sends the next packet, Windows replies 50 ms later, and so on. The fifth packet is sent after 200 ms.
Unfortunately, this is something you only see when analyzing the network traffic with e.g. Wireshark. I was able to speed this up by sending a character the device ignores after the reception of every single frame, because that avoids the 50 ms delay.
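A sketch of that workaround in Python socket terms, assuming hypothetically that the device ignores a bare line feed (only the firmware developers can tell you which character, if any, is actually safe to send):

```python
import socket

def read_reply_with_ack_nudge(sock, nudge=b"\n", block_timeout=0.05):
    """Read a reply block by block, sending one ignored byte after each block
    so the TCP ACK rides along with outgoing data instead of waiting for the
    delayed-ACK timer."""
    sock.settimeout(block_timeout)
    data = b""
    while True:
        try:
            block = sock.recv(4096)
            if not block:          # peer closed the connection
                break
            data += block
            sock.sendall(nudge)    # outgoing byte carries the ACK immediately
        except socket.timeout:
            break                  # no more data within the short timeout
    return data
```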
06-21-2022 08:38 AM
Thanks, everyone, for your time in replying to my question. I really do appreciate it! As is typical around here, I'm going to have to shelve this micro-project because I was doing it under the radar while my "boss" was "working" from home last week. I'll probably spend more time messing with this in the coming months, but not any more this month.
Regards,
-Scott