03-22-2020 10:59 AM
@rolfk wrote:
I usually wire the input to the Init terminal of the Feedback node as well. Seems to have the same effect but avoids having to think about the value of the boolean constant depending on the slope you want to detect.
It depends on whether you want to trigger on init or not. (One could also invert the input to the init terminal to get an unconditional init trigger.)
Many newbies have problems with boolean logic, especially if it involves several primitives, as in some of the above examples (invert, invert some inputs, AND, implies, etc.).
With my comparison method, there is only one easy to memorize take-home message: "TRUE is greater than FALSE"!
(... and we already know of course that TRUE is different from FALSE :))
(Side note: LabVIEW still has some holes in that concept; for example, array min & max does not accept boolean array inputs yet :D)
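Back to the take-home message. Not LabVIEW, obviously, but here is a minimal text-language sketch of the idea, written in Python, where True > False holds just as it does for LabVIEW's comparison nodes:

    # Rising-edge detection by plain comparison: since TRUE > FALSE,
    # "current > previous" is TRUE exactly on a FALSE -> TRUE transition.
    previous = False                       # plays the role of the feedback node
    for current in [False, True, True, False, True]:
        rising_edge = current > previous   # use "<" instead for a falling edge
        print(current, rising_edge)
        previous = current                 # remember the value for the next iteration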
Yes, to see if a boolean changed you can use XOR, but that pigeonholes you: elsewhere you might have to figure out if a DBL numeric has changed, and XOR will break the code immediately. Similarly, if you do XOR on integers, you get bitwise operations and another can of worms. Magically, "NOT EQUAL" works for booleans, integers, numerics, and ALL more complicated datatypes (aggregate mode). Universal! Simple! Nothing new to learn!
Imagine you want to create a malleable VI to see if the input has changed. Only one with NOT EQUAL will be reasonable!
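Here is a minimal sketch of that universality, again in Python rather than G, with changed() standing in for the hypothetical malleable VI:

    # "Not Equal" as a one-size-fits-all change detector: the same comparison
    # works for booleans, numerics, and aggregate types, with no XOR special cases.
    def changed(previous, current):
        return current != previous

    print(changed(False, True))             # boolean input: True
    print(changed(3.14, 3.14))              # DBL numeric input: False
    print(changed([1, 2, 3], [1, 2, 4]))    # array/cluster-like aggregate: True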
03-22-2020 11:16 AM - edited 03-22-2020 11:17 AM
@altenbach wrote:
With my comparison method, there is only one easy to memorize take-home message: "TRUE is greater than FALSE"!
I'm at "agree to disagree". During my formative days in programming, booleans were signed ints with the value -1. In those days, False would have been greater than True. Later wth C, the emphasis was that 0 evaluates as False and *any* non-zero value (positive OR negative) evaluates as True.
These characteristics don't extend into LabVIEW, but they helped form my sense of intuition about what's clear and unambiguous, and that still sticks with me a little.
I will admit that for newbies who aren't fluent with boolean logic symbols, the Compound Arithmetic node *sounds* a little wrong (arithmetic? for logic?) and its online help is less, um, helpful. (Extra credit for those who read the end of that sentence and immediately thought of Tim the Enchanter.)
-Kevin P
03-22-2020 11:26 AM
@Kevin_Price wrote:
... Later wth C, the emphasis was that 0 evaluates as False and *any* non-zero value (positive OR negative) evaluates as True.
These characteristics don't extend into LabVIEW, ....
😄
03-22-2020 12:03 PM - edited 03-22-2020 12:17 PM
Oh yeah, *now* I remember that thread, even popped in briefly. In fact, now it raises another question, not really on topic but I'm not sure where else it'd belong.
In the thread altenbach linked, there's a detailed explanation by tst in msg #65, including a description of what the Type Cast function does. I was very recently in a thread where my understanding of the Type Cast function was corrected. So now my question is, was it corrected correctly?
I.E., *does* Type Cast always make a buffer copy? Or does it just re-interpret the bits in place? Or will it depend on whether the new interpretation requires <= the number of bytes in the original type? (IIRC, C didn't do this kind of type-size enforcement, whereas LabVIEW is much more strongly typed and it seems it probably *would*.)
-Kevin P
03-22-2020 05:31 PM - edited 03-22-2020 05:33 PM
I believe it does not always make a copy, but the details of when it does are more complicated than just the Typecast itself. The Typecast is basically a stomper, meaning it somehow modifies the data, so if there is any other consumer of that data (another wire sink) that also needs it as a stomper, then there must be a copy no matter what.
The Typecast certainly does check that the incoming data fits the alignment of the expected output data. If that is not the case, it reallocates the data anyway, and that basically involves a copy. For scalars it probably makes a copy too, but that is pretty uninteresting.
Where it gets interesting is for arrays. Here it may not do a real copy if the input array fits exactly into the output array, BUT it always does endianness swapping to/from big endian, and for all but the VxWorks PowerPC-based RT targets this involves shuffling bytes around, so technically it is doing even more than a simple copy. That may be omitted when the input and output array element sizes are the same (e.g. from INT32 to SGL). But if they do not have the same size, Typecast always does some swapping.
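A rough analogy of that distinction in NumPy (just an illustration of "reinterpret in place" versus "byte-swapped copy", not a claim about LabVIEW's internal implementation):

    import numpy as np

    a = np.arange(4, dtype=np.int32)             # 16 bytes of little-endian data

    # Same element size (e.g. INT32 -> SGL): the bits can simply be
    # reinterpreted; no new buffer is required.
    reinterpreted = a.view(np.float32)
    print(np.shares_memory(a, reinterpreted))    # True -> no copy was made

    # Type Cast treats the flattened data as big endian, so on a little-endian
    # target the bytes must be shuffled, which forces a fresh buffer.
    swapped = a.byteswap()                       # returns a byte-swapped copy
    print(np.shares_memory(a, swapped))          # False -> a copy was made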
03-22-2020 05:46 PM
@Kevin_Price wrote:
Oh yeah, *now* I remember that thread, even popped in briefly. In fact, now it raises another question, not really on topic but I'm not sure where else it'd belong.
In the thread altenbach linked, there's a detailed explanation by tst in msg #65, including a description of what the Type Cast function does. I was very recently in a thread where my understanding of the Type Cast function was corrected. So now my question is, was it corrected correctly?
I.E., *does* Type Cast always make a buffer copy? Or does it just re-interpret the bits in place? Or will it depend on whether the new interpretation requires <= the number of bytes in the original type? (IIRC, C didn't do this kind of type-size enforcement, whereas LabVIEW is much more strongly typed and it seems it probably *would*.)
-Kevin P
Here's a case where a buffer copy is made. (Maybe "always" is a strong word, except for always trusting @rolfk. I am always amazed by his responses, even though most are over my head.)
For Untitled 2: the 1M-element array of doubles = 8 MB, the indicator = 8 MB, plus one more buffer whose name I cannot remember. Total 24 MB.
For Untitled 3: the 1M-element array of doubles = 8 MB, the U8 indicator = 8 MB, one more buffer whose name I cannot remember, plus one additional 8 MB buffer, a data copy.
mcduff
03-22-2020 05:55 PM
@mcduff wrote:
Here's a case where a buffer copy is made.
Make sure debugging is disabled; otherwise there needs to be a way to probe the arrays before and after the typecast. It is impossible to predict what the compiler actually does with all that. You probably want to typecast to U64 inside the loop anyway.
03-23-2020 08:51 AM - edited 03-23-2020 08:54 AM
@altenbach wrote:
Make sure debugging is disabled, else there needs to be a way to probe the arrays before and after the typecast
With debugging disabled, same result.
@altenbach wrote:
You probably want to typecast to U64 inside the loop anyway.
This is just an example that shows a buffer copy with Type Cast; it's not meant to be a real application. You are correct that typecasting inside the loop would save a buffer copy, because on each iteration only a scalar is converted and that memory space is reused.
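A rough sketch of the two patterns (plain Python, with an illustrative DBL -> U64 reinterpretation via the struct module; not the actual VI from the example):

    import struct

    data = [1.0, 2.0, 3.0]                  # stands in for the large DBL array

    # Cast the whole array up front: a second full-size buffer exists
    # alongside the input array.
    all_at_once = [struct.unpack('>Q', struct.pack('>d', x))[0] for x in data]

    # Cast inside the loop: only one scalar is converted per iteration,
    # so the temporary storage can be reused instead of holding a full copy.
    for x in data:
        as_u64 = struct.unpack('>Q', struct.pack('>d', x))[0]
        # ... work with as_u64 here ...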
mcduff