Could someone clarify LabVIEW endianness?

I am working in a Wintel environment, and everything I read says memory storage is little-endian. Yet when I'm working in LabVIEW, everything appears to be big-endian.

For example, one of my Windows API calls returns a linked list of variable-length structures. Since I have no equivalent structure in LabVIEW, the DLL call returns an unsigned byte array (or alternatively a string), from which I must extract the data in LabVIEW.

One parameter in each structure of the linked list is a 32-bit integer that provides an offset to the next structure. When you look at the data in LabVIEW, the first thing you notice is that it is reversed, i.e., in the case of a 32-bit integer, the least significant byte comes first, followed by the next, etc. To compose a 32-bit integer in LabVIEW you have to reverse the byte order. The conclusion, of course, is that the data returned by the API call is little-endian, and LabVIEW expects big-endian.
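The byte reversal described above can be sketched outside LabVIEW; here is a minimal Python illustration (the buffer contents are hypothetical, just standing in for the first bytes of the API's returned block):

```python
import struct

# Hypothetical first four bytes of the returned buffer: a 32-bit
# next-structure offset of 16, stored little-endian (LSB first).
buf = bytes([0x10, 0x00, 0x00, 0x00])

# Reading the bytes big-endian (as LabVIEW's Typecast would) gives the
# wrong value; reversing the byte order first recovers the real offset.
wrong = struct.unpack('>I', buf)[0]         # 0x10000000
offset = struct.unpack('>I', buf[::-1])[0]  # 16, same as '<I' on buf
```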

What I find confusing is that endianness is supposed to be processor dependent. Yet in LabVIEW under Windows, everything appears to be big-endian. So what's the scoop? Is LabVIEW masking endianness, or can an Intel box run in either mode, or what?

Even more confusing is that I'm writing C code to interface with LabVIEW, apparently without any endianness consideration, and it appears to work fine. Is endianness a configurable compiler option that is set by LabVIEW's include files? If so, how does swapping endianness affect performance, if at all? Furthermore, any other apps that might call the DLL would have to consider the endianness the DLL expects, yes? If not, then perhaps I'm not doing anything in C that requires endianness consideration.

I am confused...

A little help would be great; thanks.

Kind Regards,
Eric
Message 1 of 11 (3,824 Views)
There is a PDF document called LabVIEW Data Storage Formats that is part of the shipping LabVIEW Bookshelf (Help menu > Search the LabVIEW Bookshelf) that states LabVIEW stores data in big-endian format. This is done to maintain compatibility between all of the different platforms that LabVIEW runs on.
Message 2 of 11
> I am working in a win-tell environment and everything I read says
> memory storage is in little endian. Yet, when I'm working in labview
> everything appears to be in big endian.
>

LabVIEW uses native endianness for data actually being manipulated: numbers being added together, being plotted, etc. When it is time to store the data to disk or transmit it over TCP or UDP, LV standardizes it to network format, which is big-endian. So the only time you should have to deal with swapping bytes is when you are reading something from a file, or when it has been flattened to a string, which is the way things get transmitted over the network.
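That rule (native order in memory, big-endian once flattened) can be illustrated with a short Python sketch, using the struct module as a stand-in for LabVIEW's flattening:

```python
import struct
import sys

value = 0x12345678

# Native order: on a Wintel (little-endian) machine the LSB comes first.
native = struct.pack('=I', value)

# Flattened/network order, as LabVIEW uses for files and TCP/UDP,
# is always big-endian regardless of the host CPU.
flattened = struct.pack('>I', value)  # b'\x12\x34\x56\x78'

if sys.byteorder == 'little':
    # On a little-endian host the two representations are byte-reversed.
    assert native == flattened[::-1]
```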

As for the linked list, I've been doing this for quite a few years, and it is still easy to get confused when looking at a memory address and interpreting it by hand. It is easy to jump to the wrong conclusion about whether something needs to be swapped or not. Use the above guideline, and that should get you through the typical cases. For the cases that aren't typical (a file that could be either endianness, for example), it is often easier to take a subset, look at it both ways, and see which makes sense.

Greg McKaskle
Message 3 of 11
Greg,

I am nonetheless still confused... When I typecast a double into a structure whose first element is a 32-bit integer and whose second element is a 32-bit unsigned integer, I found the signed integer contained the most significant bits of the double and the unsigned integer contained the least significant bits, which is of course what I wanted. But that's confusing, because it is what I would expect on a big-endian machine, i.e., the most significant bits first...

One conclusion I might draw is that the Typecast function makes things consistent from platform to platform, i.e., makes it appear as though I'm working in big-endian, and that it is utterly different from a C union. I don't know if that is correct, but I can test it.
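The observation matches what a big-endian flatten-then-reinterpret would produce. A hedged Python equivalent, with struct standing in for LabVIEW's Typecast:

```python
import struct

x = 1.5  # IEEE 754 double: bit pattern 0x3FF8000000000000

# Typecast behaves like flattening to a big-endian string and reading
# the new type back out, so the signed I32 receives the most
# significant half and the unsigned U32 the least significant half.
hi, lo = struct.unpack('>iI', struct.pack('>d', x))
# hi == 0x3FF80000 (sign/exponent/top mantissa bits), lo == 0
```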

With regard to the API call, I'll look at it again. Maybe my presumption regarding the Typecast node was erroneous. But what I found was that in order to interpret the data correctly, I had to swap the bytes of the embedded Unicode strings, and swap the bytes and words of the 32-bit string lengths and structure offsets. Based on what you've said, the API call might be returning data in big-endian. There is an endianness boolean parameter in the structure, which I ignored since I assumed I was going to have to sort it out anyway. If it is big-endian, perhaps I can use the Unflatten node rather than doing it by hand.

Anyway, thanks. I'll go away and see if I can't make some sense of it.

Kind Regards,
Eric
Message 4 of 11
Dennis,

Thanks... but it is still confusing.

Please read what Greg says below and my reply. I actually believe you're correct in the sense that, for consistency, LabVIEW makes things appear to be big-endian. But if you consider what Greg is saying, and considering LabVIEW interfaces with external code, the actual memory storage is apparently in the endianness of the machine. Greg might put it differently, but I think it is accurate to say that LabVIEW masks the endianness of the platform on which it is being run and makes things appear to be big-endian. I'll await his reply, but that's how it seems to me.

It gets kinda confusing, though, when obtaining data from the OS API, particularly as the Windows API data may or may not be in native endianness.

Anyway, I have to run some tests to make sure I understand what is going on.

Kind Regards,
Eric
Message 5 of 11
> I am nonetheless still confused... When I typecast a double into a
> structure whose first element is a 32-bit integer and second element
> is a 32 bit unsigned integer, I found the signed integer contained the
> most significant bits of the double and the unsigned integer contained
> the least significant bits, which is of course what I wanted. But
> that's confusing because it is what I would expect on a big endian
> machine, i.e., the most significant bits first...
>
> One conclusion I might draw is that the typecast function makes things
> consistent from platform to platform, i.e., makes it appear as though
> I'm working in a big endian, and that it is utterly different from a C
> union. I don't know if that is correct, but I can test it.


Ah. I should have mentioned that typecast is equivalent to casting from
the original type to string and then casting from the string to the
other type. As you supposed, this is to make the diagram run the same
on all sorts of platforms. This means that when going from an array of
int32s to an array of int16s, the int16s will always be in the same order.
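As a rough Python analogue of that int32-to-int16 example (again with struct standing in for Typecast's flatten-and-reinterpret):

```python
import struct

# One I32 flattened big-endian and reinterpreted as two I16s:
pair = struct.unpack('>2h', struct.pack('>i', 0x00010002))
# The high half always comes out first, on any platform, because the
# intermediate string is always big-endian.
```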

> But what I
> found was that in order to interpret the data correctly, I had to swap
> the bytes of the embedded unicode strings, and swap the bytes and
> words of the 32 bit string lengths and structure offsets. Based on
> what you've said, the API call might be returning data in big endian.
> There is an endianess boolean parameter to the structure which I
> ignored since I assumed I was going to have to sort it out anyway. If
> in big endian, perhaps I can use the unflatten node rather than doing
> it by hand.
>

If you use Typecast, you will need to swap things. If you have a pointer being returned, then you usually arrange for the type to be a LV type that is accurate, rather than a generic pointer that needs to be cast. If you can describe the output types in the DLL dialog accurately, then no swapping is needed.

I hope this helps to sort things out, but if not, provide more details
and ask again.

Greg McKaskle
Message 6 of 11
Greg,

Is there any particular reason why there is no LabVIEW byteswap
command which functions on a double?

Or a float for that matter?

The only ones provided byteswap 16-bit or 32-bit integers.

-- Harold



In article <3E1A32A0.9080507@austin.rr.com>,
Greg McKaskle wrote:

> [...]
Message 7 of 11
Greg,

OK... Having reviewed what I had originally done under LV 4.1, I understand what was/is going on...

The objective is to get the list of all currently running processes and their associated IDs, as presented by the Task Manager. I'm not going into the "how to" details 'cause it's documented by Microsoft in their Visual Studio documentation and on their developer web site. Suffice it to say the information is maintained in the registry and simply requires registry query calls.

The registry query yields a fairly large block of little-endian data that I tell LabVIEW is a byte array. It is actually a linked list: a fairly complicated C structure containing, among other things, linkage offsets and C Unicode strings... As far as I know, I cannot make LabVIEW think a C string, let alone Unicode, is a native data type. Hence, I cannot tell LabVIEW the DLL call is returning a LabVIEW structure. (I don't know about ActiveX, so maybe there is a way buried there.)

Anyway, in a nutshell, my situation is that I have a string of data in little-endian, and as you pointed out, LabVIEW expects data that has been cast to a string to be in big-endian. I cannot, therefore, cast it back without doing my own byte/word swapping...

I considered calling the data a U32 array rather than a U8 array, but then when I cast the Unicode string parts, I still have to swap words to get them back into proper order. Having spent some time thinking about it, I don't believe a solution exists that doesn't require some sort of swapping, short of writing a conversion in C.
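The kind of extraction being described can be sketched in Python; the layout below is hypothetical (a little-endian U32 byte length followed by a UTF-16LE string), standing in for the actual registry block format:

```python
import struct

# Hypothetical fragment of the returned block: a little-endian U32
# byte count, then the "C Unicode" (UTF-16LE) characters themselves.
block = struct.pack('<I', 8) + 'Task'.encode('utf-16-le')

# Both reads honor the block's little-endian layout explicitly,
# which is the manual swapping a big-endian Typecast would not do.
nbytes = struct.unpack_from('<I', block, 0)[0]  # 8
name = block[4:4 + nbytes].decode('utf-16-le')  # 'Task'
```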

If there is an ActiveX solution to this, I would like to see it...

The question, though, wasn't about my unique problem. The question was rather about LabVIEW endianness and understanding the why of the solution. I appreciate the clarification.

Thanks,
Kind Regards,
Eric
Message 8 of 11
> Is there any particular reason why there is no LabVIEW byteswap
> command which functions on a double?
>
> Or a float for that matter?
>
> The only ones provided byteswap 16 bit or 32 bit integers.
>

No technical reason. I think there are VIs available that do this, but
they aren't distributed with LV.

Greg McKaskle
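A double byteswap, absent from the built-in primitives, amounts to an 8-byte reversal; a minimal Python sketch:

```python
import struct

def swap_double(x):
    """Reverse the byte order of a 64-bit float, the operation the
    built-in 16/32-bit Swap Bytes primitives don't cover."""
    # Pack big-endian, then reinterpret the same bytes little-endian.
    return struct.unpack('<d', struct.pack('>d', x))[0]

# Applying the swap twice restores the original value:
roundtrip = swap_double(swap_double(1.5))
```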
Message 9 of 11
> The registry query yields a fairly large block of little-endian data
> that I tell LabVIEW is a byte array. It is actually a linked list: a
> fairly complicated C structure containing, among other things, linkage
> offsets and C Unicode strings... As far as I know, I cannot make LabVIEW
> think a C string, let alone Unicode, is a native data type. Hence, I
> cannot tell LabVIEW the DLL call is returning a LabVIEW structure. (I
> don't know about ActiveX, so maybe there is a way buried there.)
>

If there is an ActiveX wrapper, it will greatly simplify all of this,
though it seems that you are on the right track.

Greg McKaskle
Message 10 of 11