LabVIEW Idea Exchange


Specify mode when converting to integer: Wrap or Clip

Status: New

Currently, when converting a longer integer type (e.g. u32) to a shorter integer type (e.g. u8), the output value wraps.

 

By contrast, when converting a floating point type (e.g. dbl) to an integer type (e.g. u8), the output clips.

 

[Image: conversion.png]

 

It would be great to be able to specify the output mode for conversion to integer: Wrap or Clip!
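For anyone more at home in text languages, here is a rough C sketch of the two behaviours described above (the constants are illustrative, and the rounding in the float case is simplified; LabVIEW actually rounds to nearest):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t big = 300;                 /* does not fit in a u8 (max 255) */

    /* Integer narrowing wraps: only the low 8 bits are kept. */
    uint8_t wrapped = (uint8_t)big;     /* 300 mod 256 = 44 */

    /* Float-to-integer conversion clips (saturates) at the type's range. */
    double d = 300.0;
    uint8_t clipped = (d >= 255.0) ? 255
                    : (d <= 0.0)   ? 0
                    : (uint8_t)(d + 0.5);   /* simplified round-to-nearest */

    printf("wrapped = %u, clipped = %u\n",
           (unsigned)wrapped, (unsigned)clipped);   /* 44, 255 */
    return 0;
}
```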

11 Comments
dthor
Active Participant

This is quite interesting, thanks for bringing it up. I've never noticed it before, but it could definitely be the source of some issues while coding. I'll keep an eye out for this in the future.

 

To me, this looks more like a bug: I think both conversions should act the same way. My vote is for Clip.

Dragis
Active Participant

I think this points out an important difference between integer and floating-point types. Integer types are, aside from a few use cases like counters, logical types, which makes them more similar to an array of booleans or flags. Conversions or casts between integer types will, in general, reinterpret the bits and nothing else.

 

Floating-point types, on the other hand, are arithmetic types which attempt to maintain continuity of their values as you operate on them. Since I am partial to fixed-point, I will throw out there that fixed-point gives you a type that is performance-wise as efficient as integers (in most cases) but acts like an arithmetic type.

 

[Image: fxp-to-u8.png]
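To make the "arithmetic type" point concrete in text-language terms, here is a minimal sketch of a saturating fixed-point add, assuming a Q8.8 format and a saturate-on-overflow policy (both illustrative choices on my part, not LabVIEW's exact FXP semantics):

```c
#include <stdint.h>
#include <stdio.h>

typedef int16_t q8_8;   /* hypothetical Q8.8: 8 integer bits, 8 fraction bits */

/* Saturating add: widen, check the range, clamp instead of wrapping. */
static q8_8 q_add_sat(q8_8 a, q8_8 b)
{
    int32_t sum = (int32_t)a + (int32_t)b;
    if (sum > INT16_MAX) return INT16_MAX;   /* clamp high */
    if (sum < INT16_MIN) return INT16_MIN;   /* clamp low  */
    return (q8_8)sum;
}

int main(void)
{
    q8_8 a = 120 << 8;                        /* 120.0 in Q8.8 */
    q8_8 b = 50 << 8;                         /*  50.0 in Q8.8 */
    printf("%f\n", q_add_sat(a, b) / 256.0);  /* saturates near 127.996 */
    return 0;
}
```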

fabric
Active Participant

Interesting bit of trivia regarding fixed-point conversions!

 

Generally, however, I am tied to floating-point types due to the incompatibility of fixed-point types with many basic operations, e.g. "Add Array Elements", "Quotient & Remainder", "Interpolate 1D Array"... It's quite a long list once you start looking! ;)

fabric
Active Participant

@dthor: I generally also prefer Clip, but there are still odd times when Wrap is useful...

 

[...thinks for a minute...]

 

...actually, maybe Wrap is not that useful when down-converting after all!

 

If this were filed as a bug and all down-conversions were clipped, would any common use cases be affected? Anyone still needing Wrap functionality could always resort to this little trick:

 

[Image: wrap.png]
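For readers without the attachment: wrap can always be recovered explicitly with a Quotient & Remainder against 256. Whether that is what the diagram above shows is an assumption on my part, but in C terms the equivalent is just a modulo:

```c
#include <stdint.h>

/* Explicit wrap via Quotient & Remainder: the remainder after dividing
 * by 256 is exactly what a wrapping u32-to-u8 conversion produces. */
uint8_t wrap_u32_to_u8(uint32_t x)
{
    return (uint8_t)(x % 256u);
}
```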

AristosQueue (NI)
NI Employee (retired)

> Anyone still needing Wrap functionality could always resort to this little trick:

 

That's a very expensive trick in terms of both memory and performance!

 

The casting behaviors of LabVIEW -- for better or worse -- almost always match the industry-standard casting behaviors that you will find in C, C++, C#, Java, and many other programming languages. But the Idea author makes a good point: there's no reason LV couldn't have some alternate modes on its casting functions, possibly going so far as to make those alternate casting methods the default in the palettes (obviously we wouldn't remove the existing mechanisms, because that would destroy existing user code).

 

This seems to me to be something that is worthy of evaluation.

AristosQueue (NI)
NI Employee (retired)

Having posted my previous comment to make it clear that I support the idea, I now want to go into some of the price you would pay for having these other modes, and some of the rationale for why things work as they do today.

 

Consider the UInt16-to-UInt8 scenario posted above. The UInt16 takes two bytes. When we convert to a UInt8, we simply drop the high byte, and whatever is in the low byte is the new value. Why? Because the presumption is that a programmer converting from a UInt16 to a UInt8 has *already* checked that the value is in range. If the programmer has already tested whether the UInt16 value fits inside a UInt8 before doing the conversion, then extra code built into the conversion function to repeat that range check is just a performance penalty. As a primitive operation in LabVIEW (or any programming language), it makes sense that the cast is implemented as "just drop the high byte".
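In rough C terms, the tradeoff looks like this (a sketch of the idea, not LabVIEW's actual implementation):

```c
#include <stdint.h>

/* "High speed, assumed in range": the compiler simply keeps the low byte. */
uint8_t convert_wrap(uint16_t x)
{
    return (uint8_t)x;                   /* drops the high byte, no checks */
}

/* A clipping conversion pays for a comparison and a branch (or a
 * conditional move) on every call, even when the value was in range. */
uint8_t convert_clip(uint16_t x)
{
    return (x > 255u) ? 255u : (uint8_t)x;
}
```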

 

Fabric commented that perhaps "Wrap" mode isn't useful after all. Wrap should perhaps be named "High Speed, Assumed In Range" mode.

 

The rules of floating point to integer are different because there isn't a simple masking operation that can do the cast for in-range values. I don't know the exact assembly instructions used to do the conversion -- it wouldn't surprise me if there was a single hardware instruction for doing this. Regardless, because so many systems rely on that same code, I'll wager that it is highly efficient, and the clipping mode is achievable without loss of speed.
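A hedged C sketch of what a clipping dbl-to-u8 conversion has to do (the function name is mine, and real implementations map this to a few instructions such as a compare, a conditional move, and a convert):

```c
#include <stdint.h>
#include <math.h>

/* Clamp first, then round. NaN handling and the exact rounding mode
 * (LabVIEW rounds to nearest) are glossed over here. */
uint8_t dbl_to_u8_clip(double d)
{
    if (d <= 0.0)   return 0;            /* clamp low  */
    if (d >= 255.0) return 255;          /* clamp high */
    return (uint8_t)nearbyint(d);        /* round to nearest */
}
```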

dthor
Active Participant

@AristosQueue wrote:

...almost always match the industry-standard casting behaviors that you will find in C, C++, C#, Java...


Having never used any other language for more than a few months, I did not know this. I'm all for standards, so I'm going to retract my previous comment.

 

The Idea, however, is still valid. The default action should follow the industry standard, but I think it could be useful if there were an option to choose whether to drop the bits or to coerce to the new max. There would need to be a visual indication of which is being done.

AristosQueue (NI)
NI Employee (retired)

dthor:

There are two things that I think the community could comment on to help this idea.

 

1) What sort of glyph would adequately indicate this distinction?

2) How should LV communicate that there's a performance hit for using the coerce version, so that users do not use it when they know the value being coerced will be in range?

fabric
Active Participant

If I know my value is in range, then my natural inclination would be just to use the Split Number primitive. Without any other understanding or inside knowledge, I would assume that would be the most efficient.

[Image: split.png]
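In C terms, Split Number is essentially a shift and a mask, which is why it is hard to beat when the value is known to be in range (my rendering, not the primitive's actual implementation):

```c
#include <stdint.h>

/* Roughly what Split Number does to a u16: hand back both bytes.
 * Keeping just the low byte gives the wrap-style narrowing for free. */
void split_u16(uint16_t x, uint8_t *hi, uint8_t *lo)
{
    *hi = (uint8_t)(x >> 8);
    *lo = (uint8_t)(x & 0xFFu);
}
```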

 

If I suspect that my number is not in range, then I typically do one of the following (the first is quicker/neater, but obviously more confusing to anyone sharing my code):

[Image: clip.png]
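Without the attachment visible, my guess at a text-language equivalent of these clip approaches is a clamp ahead of the narrowing conversion, e.g. something like In Range and Coerce wired before the To U8 node:

```c
#include <stdint.h>

/* Clamp into the u8 range before converting, analogous to wiring
 * In Range and Coerce ahead of the To U8 conversion node. */
uint8_t clip_i32_to_u8(int32_t x)
{
    if (x < 0)   return 0;
    if (x > 255) return 255;
    return (uint8_t)x;
}
```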

 

If the primitive had two modes, then I would not consider clip mode to be adding a "performance hit"... Rather, I would be happy that this new mode offered the best performance LV can provide for clipping. That would be a performance gain! ;)

 

If the new mode were added, then surely it would be sufficient to mention the performance differences in the help, right? I'm assuming the existing behaviour would remain the default implementation.

 

As a side issue, the existing help does not provide any information about how the conversion is done, so I think many people would be flying blind on this one. If nothing else, the documentation needs to be updated.

fabric
Active Participant

(Yes, I know about the memory/performance hit associated with "easy clip method number one"...)