Converting ASCII Hex into raw hex bit level code


@marc A wrote:
Somehow I think we might see some more caps lock abuse, though :)

Let's hope not. My ears hurt a little. 😉

BTW, it's nice to see that other people find the combined FP-BD functionality of the code capture tool useful as well.


___________________
Try to take over the world!
Message 21 of 32
Rick, from the way you last explained it, what Altenbach and I posted, as well as what tst originally suggested, appears to be what you want. You just need to go through the string you generate and turn every 2 hex characters into a single character. Try adding that code to what you have and see if it works for you.
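Since the thread's code is LabVIEW block diagrams, here is a minimal text-language sketch of the same idea in Python; the function name is just for illustration. It walks the generated string and turns each pair of hex characters into one raw byte:

```python
def hex_ascii_to_raw(s: str) -> bytes:
    """Turn every 2 hex characters (e.g. "2A49") into raw bytes (b'\\x2a\\x49')."""
    return bytes(int(s[i:i + 2], 16) for i in range(0, len(s), 2))

print(hex_ascii_to_raw("2A49"))  # b'*I' (0x2A is '*', 0x49 is 'I')
```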
 
tst, yes I'm a big fan of the code capture tool. That specific feature sure came in handy for this example.
Message 22 of 32
I would only caution as to where in the code you put the conversion routines that have been proposed. In your VI, you were concatenating 3 strings: \01 ('\' code representation), a normal string of "2A", and a normal string of "2A49". So you will only want to run the conversions on the latter 2 strings, and not on the first string, which already represents a single ASCII character. Once the conversions are done, you can concatenate the first string with the results of the conversions of the latter 2 strings.
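To illustrate this caution with a hypothetical Python sketch of the LabVIEW logic: only the hex-formatted strings get converted, while the \01 string is left alone because it is already a single raw character.

```python
def hex_ascii_to_raw(s: str) -> bytes:
    """Convert a hex-formatted ASCII string into raw bytes."""
    return bytes(int(s[i:i + 2], 16) for i in range(0, len(s), 2))

first = b"\x01"  # already a single raw character; do NOT convert this
# Convert only the two hex-formatted strings, then concatenate:
message = first + hex_ascii_to_raw("2A") + hex_ascii_to_raw("2A49")
print(message)
```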

Message Edited by Ravens Fan on 01-29-2007 05:19 PM

Message 23 of 32


@RickH wrote:

Attached is another example. This is my VI that makes my calculations and converts them into a two-byte hex value. I NEED TO KNOW HOW TO CONVERT THIS INTO NON-PRINTABLE ASCII.



OK, your VI does NOT convert anything into two-byte hex; it creates a formatted string containing two characters (00..FF). Two such characters represent exactly one byte, so if you wanted to retain the information contained in two bytes, you would need four hex digits (0000..FFFF). That's not what you want, apparently.

You start out with a DBL and do a lot of scaling. For the sake of argument, let's assume that your final DBL is in the range of a 2-byte unsigned integer (0..65535). All you need to do is convert it to a U16 datatype, then typecast it to a string. (This gives you big endian; if you need little endian, reverse the 2-byte string.)

Look at the attached image. All clear? (To see the non-printable characters, set the string display to hex. This won't change the underlying data). 🙂
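A rough Python equivalent of this suggestion, since the attached image is a LabVIEW diagram: `struct.pack` stands in for LabVIEW's Type Cast, and the input value is made up for the example.

```python
import struct

value = 10825.7                       # hypothetical scaled DBL result
u16 = int(round(value)) & 0xFFFF      # convert to U16 (rounds like a LabVIEW coercion)
big_endian = struct.pack(">H", u16)   # "typecast" U16 to a 2-byte string, big endian
little_endian = big_endian[::-1]      # reverse the 2-byte string if little endian is needed

print(big_endian.hex().upper())       # "2A4A"
```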

Message Edited by altenbach on 01-29-2007 03:40 PM

Message 24 of 32

Altenbach,

 

Yes indeed. The VI I shared is a program that takes 3 polynomials used for a polyfit and truncates them so that the results can be placed into a 16-bit register. The code mimics the function in a spreadsheet. I'm currently playing with your example as it appears a bit more elegant. Thanks!

 

 

Rick H.

  

Message 25 of 32
Looking at the left side of your code, the same applies there too.
 
Currently you're losing 50% of the information, because you format the value into a hexadecimal-formatted string of only 2 characters (= 1 byte), then scan the string back into a DBL. While it is probably OK to display the string for debugging purposes, there is no need to ever scan the string to get the number back. Simply take the U16 as shown above and convert it back to DBL if needed.
 
You have quite a few negative numbers, so it is possible that I16 would be a better choice. Modify as needed.
 
In general, you should be a bit more careful with coercions. For example, if you multiply a diagram constant with a DBL, make the constant also a DBL. There is a special tool for "+1", etc.
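The two points above (the lossy 2-character format versus a lossless typecast round trip, and preferring I16 for negative values) can be sketched in Python as follows; the sample value is hypothetical:

```python
import struct

value = -1234                         # hypothetical negative result, so use I16
raw = struct.pack(">h", value)        # I16 -> 2-byte string (big endian), lossless
back = struct.unpack(">h", raw)[0]    # round trip back to a number, no string scanning
assert back == value

lossy = "%02X" % (value & 0xFF)       # formatting as only 2 hex chars keeps 1 byte
print(raw.hex().upper(), lossy)       # "FB2E" vs. "2E" (half the information)
```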
Message 26 of 32

Altenbach,

 

Thanks for the caveat, but the 16-bit conversion is a requirement, because the floating point value is truncated when placed into the 16-bit register. In this fashion, the truncated value is representative of the actual value used for the calibration and calculation.

 

 

Thanks,

 

Rick H

 

Message 27 of 32


@RickH wrote:
Altenbach

Thanks for the caveat, but the 16-bit conversion is a requirement, because the floating point value is truncated when placed into the 16-bit register. In this fashion, the truncated value is representative of the actual value used for the calibration and calculation.


Sorry, I don't think I ever suggested skipping any 16-bit conversion. Read my post again. Please let me know if anything was not clear.
Message 28 of 32
Oh yeah, sorry about that. I did convert the single hex byte to I16 as you suggested. Thank you for clarifying that.
 
Rick H
Message 29 of 32
 
 
This may help.
 
 
- James

Using LV 2012 on Windows 7 64 bit
Message 30 of 32