Dynamic type lookup from variant dictionary

Not sure if it's possible to do what I want to do. It's been a while since I've worked with LabVIEW. 

 

I have a bunch of commands I want to create so I can interrogate an external box. I've created a variant dictionary so I can look up a command by its name and get back a cluster containing the bytes I need to send to the box for that command. Basically I always send the same 'header', just with different data, so the variant lookup produces the header that I serialize and send to the box. This is the easy part: it's always the same cluster, just with different data, so it's easy to look up.

 

On the other side of things, the box responds, but the response is a different number of bytes for each command (and a different cluster to de-serialize into for each command). For example, CMD 1 makes the box send three uint32s and one uint8, CMD 2 makes it send two uint64s, CMD 3 makes it send 17 uint8s, and so on. So in this case I have three clusters that I manually de-serialize into, one per command. I can't seem to add this to my variant dictionary and just look up the cluster by name, because it's not data, it's a type.

 

Is there a way to dynamically look up this type based on the command, the same way I look up the header I send? What is an elegant way of handling this?
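
Since a LabVIEW diagram can't be pasted as text, here is a rough Python sketch of the idea in the question: one table keyed by command name that holds both the outgoing header data and a description of the response layout, so a single name lookup covers serialization and de-serialization. Everything here (the COMMANDS table, the command bytes, the struct format strings) is invented to mirror the examples above, not taken from real code.

```python
import struct

# Hypothetical command table: each entry pairs the header payload to send
# with a struct format describing that command's response layout.
# "<" = little-endian, "3IB" = three uint32s + one uint8, and so on.
COMMANDS = {
    "CMD1": {"header_payload": b"\x01", "response_fmt": "<3IB"},  # 3x uint32 + 1x uint8
    "CMD2": {"header_payload": b"\x02", "response_fmt": "<2Q"},   # 2x uint64
    "CMD3": {"header_payload": b"\x03", "response_fmt": "<17B"},  # 17x uint8
}

def response_size(name: str) -> int:
    """How many bytes the box should send back for this command."""
    return struct.calcsize(COMMANDS[name]["response_fmt"])

def parse_response(name: str, raw: bytes) -> tuple:
    """De-serialize the raw response using the layout looked up by command name."""
    return struct.unpack(COMMANDS[name]["response_fmt"], raw)
```

The point of the sketch is that the response description lives in the same lookup table as the outgoing header, so the receive side can be driven by the same name lookup as the send side.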

Message 1 of 2

What is the communication method here? Serial, Ethernet, something else? From your description I'm picturing serial communication, but that isn't clear. Assuming it is serial, is there anything about the response that helps you determine you've gotten a full response? For example, ASCII-type responses can end with a termination character such as a line feed or carriage return, which lets VISA automatically determine when the message has ended. Binary protocols might send a header with a message length that you read first, so you know how many additional bytes to read. Good message protocols are designed with one of these things in mind. Bad ones aren't, which makes knowing when a message is done ambiguous and forces you to work around it with hacks.
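
As an illustration of the length-prefixed case, a minimal Python sketch (assuming a hypothetical 2-byte little-endian length field and something like a pyserial port object; the real protocol may look nothing like this):

```python
import struct

def read_exact(port, n: int) -> bytes:
    """Keep reading until exactly n bytes have arrived (or the port times out)."""
    buf = b""
    while len(buf) < n:
        chunk = port.read(n - len(buf))  # e.g. a pyserial Serial with a timeout set
        if not chunk:
            raise TimeoutError(f"expected {n} bytes, got only {len(buf)}")
        buf += chunk
    return buf

def read_length_prefixed(port) -> bytes:
    """Read a fixed-size header containing the payload length, then the payload."""
    header = read_exact(port, 2)           # hypothetical uint16 length field
    (length,) = struct.unpack("<H", header)
    return read_exact(port, length)
```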

 

Assuming you have a bad scheme, it at least sounds like it's good enough that you know how many bytes to expect based on what you send. Put that number of bytes in your lookup table. So if you send message A and expect 10 bytes back, you'd pull the 10 out of the table and know to read that many. For message B, 80 bytes: when you select the command to send for message B, you get the command and also the number 80. And so on for each command.
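
A hedged sketch of that suggestion (again Python standing in for LabVIEW, with made-up command bytes and byte counts):

```python
# Hypothetical table: command name -> (bytes to send, response bytes to expect)
COMMAND_TABLE = {
    "A": (b"\xAA\x01", 10),
    "B": (b"\xAA\x02", 80),
}

def query(port, name: str) -> bytes:
    """Send the command looked up by name and read exactly the expected reply."""
    payload, expected = COMMAND_TABLE[name]
    port.write(payload)
    data = port.read(expected)  # e.g. pyserial: blocks until 'expected' bytes or timeout
    if len(data) != expected:
        raise TimeoutError(f"{name}: expected {expected} bytes, got {len(data)}")
    return data
```

The expected byte count travels with the command in the table, so adding a new command is just another row.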

Message 2 of 2