12-21-2010 12:28 PM
I am converting a string to a byte array and then converting that byte array to an array of hexadecimal strings using the built-in primitive, like this:
Is this the best way in terms of performance? When I run the DETT I see many memory allocations, I think one for each element in the array. I suspect it has to do with the coercion dot. It also only happens if I wire the string indicator; if I run this without the indicator I don't see all of the memory allocations. (Maybe LabVIEW skips the code since I am not doing anything with the result.)
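(Since the actual code is a LabVIEW diagram, here is a rough sketch of the same logic in Python, purely to illustrate the data flow being discussed; it is not the real code:)

    # Illustrative only: string -> byte array -> one 2-character hex string per byte,
    # mirroring String To Byte Array followed by Number To Hexadecimal String in a loop.
    def string_to_hex_array(data: str) -> list[str]:
        byte_array = data.encode("latin-1")      # String To Byte Array
        return ["%02X" % b for b in byte_array]  # Number To Hexadecimal String, width 2

    print(string_to_hex_array("AB"))  # ['41', '42']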
I just got the DETT, and I am not entirely sure how LabVIEW allocates memory, but I did do a test by running this code:
I do not see all of the memory allocations in the DETT like I do with the Number To Hexadecimal String version.
12-21-2010 12:41 PM
what does "DETT" stand for?
Ben
12-21-2010 12:41 PM
Well, you're really comparing apples to oranges, so you can't really conclude anything. In the first code you have to have allocations - the array of strings has to be created. This has nothing to do with the coercion dot. In the second implementation there is only one array allocation - the single numeric array.
As for the coercion dot, this was reported as a bug here: http://forums.ni.com/t5/BreakPoint/Monthly-bugs-August-2008/m-p/758921#M6325
As for "efficiency", please provide a few more details. What do you mean by a binary string, and converting that to an array of hex integers? Please provide an example.
12-21-2010 12:58 PM - last edited on 05-23-2025 03:35 PM by Content Cleaner
@Ben wrote:
what does "DETT" stand for?
Ben
Maybe Desktop Execution Trace Toolkit?
http://sine.ni.com/nips/cds/view/p/lang/en/nid/206790
12-21-2010 01:02 PM
Your first solution creates an array of strings (each only 2 characters long); since they're shown in an indicator, they'll need to allocate memory.
What is it you're really trying to do?
Are you trying to read a binary string and present the hex value? Then this is one solution. If it doesn't have to be a string as input, you can make it really, really simple by just having an integer control connected to an integer indicator and changing their "Format & Precision" settings.
/Y
12-21-2010 01:59 PM
@smercurio_fc wrote:
Well, you're really comparing apples to oranges, so you can't really conclude anything. In the first code you have to have allocations - the array of strings has to be created. This has nothing to do with the coercion dot. In the second implementation there is only one array allocation - the single numeric array.
As for the coercion dot, this was reported as a bug here: http://forums.ni.com/t5/BreakPoint/Monthly-bugs-August-2008/m-p/758921#M6325
As for "efficiency", please provide a few more details. What do you mean by a binary string, and converting that to an array of hex integers? Please provide an example.
Thanks. I was not aware of how LabVIEW does the memory allocations. I thought it would create a large chunk of memory for the array in a single allocation and then fill it in, but apparently it needs to do an allocation for each element. By the way, I tried another test that seems to confirm that: I had a VI with an array-of-strings control on the front panel that contained a few thousand elements. That control was not connected to anything, and I saw the same results as I did with the first snippet.
The code in the first snippet is from an application I am working on. I am converting a string of binary data (a binary string) to an array of hex strings, one element for each byte in the string. Is the code I posted the most efficient way of accomplishing that goal? I ask because I have seen code that looked like it could have been done more simply but was done the way it was for performance reasons.
I guess I could try different ways of accomplishing what this code does, benchmark them against each other, and use the Desktop Execution Trace Toolkit to look for performance issues. I know you are not supposed to use Build Array in a loop. The code snippet replaces a section of code that did just that and is much, much faster.
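(If it helps to see the comparison in text form, here is a rough Python sketch of the two approaches; the real code is LabVIEW, so this is only an illustration of why growing the array inside the loop is slow:)

    import time

    data = bytes(range(256)) * 40  # ~10 kB of test data

    # Like Build Array inside a loop: the output array is re-allocated every iteration.
    def hex_array_grow(buf: bytes) -> list[str]:
        out = []
        for b in buf:
            out = out + ["%02X" % b]
        return out

    # Like an auto-indexing For Loop: the whole output array is produced in one pass.
    def hex_array_one_pass(buf: bytes) -> list[str]:
        return ["%02X" % b for b in buf]

    for f in (hex_array_grow, hex_array_one_pass):
        start = time.perf_counter()
        f(data)
        print(f.__name__, time.perf_counter() - start, "seconds")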
12-21-2010 02:37 PM - edited 12-21-2010 02:41 PM
The end goal is to send a binary string over the network. I have to encode it using something like hex or base64 or whatever. Hex is the simplest.
Here is the encoder and decoder.
The encoder is about twice as fast as the decoder. So maybe I will expand the question and ask: what is the most efficient way, in terms of memory and speed, to encode and decode binary to a hex representation?
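(As a text sketch of the round trip, written in Python only so the logic is explicit; the actual encoder and decoder are LabVIEW VIs:)

    # Illustrative round trip: binary data <-> hex text, one byte becomes two hex characters.
    def hex_encode(data: bytes) -> str:
        return "".join("%02X" % b for b in data)

    def hex_decode(text: str) -> bytes:
        # every pair of hex characters becomes one byte again
        return bytes(int(text[i:i + 2], 16) for i in range(0, len(text), 2))

    payload = bytes([0x00, 0x7F, 0xFF, 0x41])
    encoded = hex_encode(payload)   # '007FFF41'
    assert hex_decode(encoded) == payload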
12-21-2010 02:54 PM
Start by looking over the Bit Twiddler challenge.
Ben
12-21-2010 03:32 PM
@SteveChandler wrote:
The end goal is to send a binary string over the network. I have to encode it using something like hex or base64 or whatever. Hex is the simplest.
Here is the encoder and decoder.
The encoder is about twice as fast as the decoder. So maybe I will expand the question and ask: what is the most efficient way, in terms of memory and speed, to encode and decode binary to a hex representation?
You meddle around too much, I think. Just use the function I posted and add your Number To Hexadecimal String. Done.
On the receiving end, I assume you want the number, or do you want the binary string?
To convert back to a binary string I'd simply loop through all the bits of the number received and build a string of '0's and '1's.
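(Roughly like this, sketched in Python since I don't have a diagram handy; the bit width here is an assumption and should match whatever you actually send:)

    # Illustrative only: build a '0'/'1' text string from the bits of a received number.
    def bits_to_string(value: int, width: int = 32) -> str:
        return "".join("1" if (value >> i) & 1 else "0" for i in range(width - 1, -1, -1))

    print(bits_to_string(0xA5, 8))  # '10100101'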
/Y
12-21-2010 04:19 PM
I looked at the code you posted, but it doesn't do anything. Even if it did, I need a hex-encoded string and not a numeric. As I said, I am sending this over the network.