09-04-2024 10:34 AM
I just noticed that the function "Get Date/Time String" rounds to the nearest second when the "time stamp" input is unwired, while it simply truncates when the input is wired.
I would expect the function to truncate in both cases.
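For readers without LabVIEW at hand, the two observed behaviors can be mimicked in a Python sketch (the function names are mine, purely illustrative, and not LabVIEW code):

```python
import math

def truncate_to_second(t):
    # Behavior when "time stamp" is wired: the fractional part is dropped
    return math.floor(t)

def round_to_second(t):
    # Behavior when "time stamp" is unwired: rounds to the nearest second
    return round(t)

t = 1234567890.6          # fractional part > 0.5
truncate_to_second(t)     # -> 1234567890
round_to_second(t)        # -> 1234567891
```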
I compared with "Format Date/Time String" for reference:
Fractional part > 0.5:
Fractional part < 0.5:
I guess it's a bug?
Regards,
Raphaël.
09-04-2024 10:41 AM
Maybe I have not had enough coffee yet, but I cannot see any bug in your images or description. Please elaborate what you think the bug is, because I think it is working as intended.
09-04-2024 10:44 AM - edited 09-04-2024 10:51 AM
Those functions are not running at the exact same time. Maybe try with the format string "%H:%M:%S%3u" to see the fractional seconds as well. Then you can tell whether it is the function's behavior or just a difference in when each function executes.
Edit: after doing some testing, it seems like on a reasonable PC those functions do run at the same time down to the millisecond. I agree with the behavior you're seeing.
09-04-2024 11:03 AM - edited 09-04-2024 11:21 AM
@Frozen wrote:
Maybe I have not had enough coffee yet, but I cannot see any bug in your images or description. Please elaborate what you think the bug is, because I think it is working as intended.
The important part is here:
Both instances of "Get Date/Time String" operate on the "current time":
- the first one has an explicit Time Stamp wired from "Get Date/Time In Seconds", which gives the current time;
- the second one has nothing wired, which defaults to using the current time as well.
In both cases, the result should be the same, but the one with the Time Stamp unwired unexpectedly rounds to the nearest second.
The code executes in less than a millisecond on any decent machine.
Maybe this is clearer with an animation:
09-04-2024 11:34 AM
It is curious that the values differ, but rounding in timestamps has always been a little bit of a hairball (e.g. here and the link at the end).
I agree however that the values should probably match, timestamp wired or not.
09-04-2024 11:59 AM - edited 09-04-2024 12:02 PM
@altenbach wrote:
[...] but rounding in timestamps has always been a little bit of a hairball (e.g. here and the link at the end)
I find the behavior described in the linked thread quite logical (as you commented):
- Floating-point format is used to display a "generic number", so it tries to be as accurate as possible to the real value given the limited number of digits it can display, thus rounding to the nearest value.
- Absolute/relative time formats are used to display an instant in time; the philosophy is that the displayed time should be slightly in the past rather than slightly in the future.
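The two display philosophies can be contrasted with a quick Python sketch (my own illustration of the principle, not LabVIEW code):

```python
import math

t = 3.987  # seconds, with a fractional part greater than 0.5

# Floating-point format: round to the nearest value at the shown precision
assert f"{t:.0f}" == "4"

# Absolute/relative time format: truncate, so the displayed instant
# is slightly in the past rather than slightly in the future
assert str(math.floor(t)) == "3"
```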
09-05-2024 09:57 AM
I have filed this issue as Bug 2847744 to LabVIEW R&D. Thanks for reporting it.
12-02-2025 07:52 AM
This is NOT A BUG! It is a (poorly-documented?) Feature.
A TimeStamp is saved as a 128-bit fixed-point quantity. The high 64 bits represent the number of (whole) seconds since the 1904 Time-Zero base of the clock (LabVIEW's epoch, not Unix's 1970; needless to say, even earlier times are represented by negative numbers of seconds). The low 64 bits represent the fraction of a second on the clock. If you take the output of, say, Get Date/Time in Seconds and convert it to an Extended-Precision Float, you will get a Float with a 10-digit integer part and about a 7-digit fractional part, while if you convert it to a U64, you'll just get the (same) 10-digit integer.
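Under that layout, splitting a raw 128-bit value into whole and fractional seconds would look like this (a Python sketch; `split_timestamp` is a hypothetical helper of mine, not a LabVIEW VI):

```python
def split_timestamp(raw128: int):
    """Split a 128-bit timestamp into (whole_seconds, fraction_of_second).

    Assumed layout: high signed 64 bits = whole seconds since the epoch;
    low unsigned 64 bits = fraction of a second, as fraction_bits / 2**64.
    """
    frac_bits = raw128 & ((1 << 64) - 1)
    whole = raw128 >> 64  # arithmetic shift preserves the sign for pre-epoch times
    return whole, frac_bits / 2**64

# Example: 10 whole seconds plus a quarter of a second
raw = (10 << 64) | (1 << 62)
split_timestamp(raw)  # -> (10, 0.25)
```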
I'm "guessing" that "behind the scenes", LabVIEW only deals with the (numerically simpler and probably faster) high 64 (integer) bits unless the user specifically asks for "fractions of a second" (which requires delving into the low 64 bits + the "sign" bit in case of Date/Times before 1904). So the "rounding rule" (possibly poorly-documented) is "If the user is only counting seconds and not fractions of seconds, use an Integer format, but if fractions of a second are specified (for example, with the %u format string), use the full 128-bit (unique?) Time/Stamp representation and the usual Float Rounding Rules".
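That hypothesized rule, sketched in Python (entirely speculative, following the guess above; `format_seconds` is my own name, and this is not how LabVIEW is actually implemented):

```python
import math

def format_seconds(t: float, want_fractions: bool) -> str:
    # Hypothesized rule: if no fractional seconds are requested, take the
    # integer path; if they are, use the full value with normal float rounding.
    if want_fractions:
        return f"{t:.3f}"
    return str(math.floor(t))

format_seconds(5.75, False)  # -> '5'
format_seconds(5.75, True)   # -> '5.750'
```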
Don't get me started with the Excel Variation on this topic. About a decade ago, I figured it out "by experimentation" for myself, but I haven't thought about it in "about a decade" since then. As I recall, Time Stamp, LabVIEW, and Excel came up before in the Forum -- I'm too lazy busy to try to find it all ...
Bob Schor
12-02-2025 08:43 AM
Hi Bob,
@Bob_Schor wrote:
I'm "guessing" that "behind the scenes", LabVIEW only deals with the (numerically simpler and probably faster) high 64 (integer) bits unless the user specifically asks for "fractions of a second" (which requires delving into the low 64 bits + the "sign" bit in case of Date/Times before 1904). So the "rounding rule" (possibly poorly-documented) is "If the user is only counting seconds and not fractions of seconds, use an Integer format, but if fractions of a second are specified (for example, with the %u format string), use the full 128-bit (unique?) Time/Stamp representation and the usual Float Rounding Rules".
Here is the context help image of this function:
I don't know where you see an input parameter that allows the user to "ask for fractions of a second", or even an input "format string". This function only produces a time string accurate to the minute or to the second (depending on the boolean input "want seconds?"). So, this function ignores the fractional part of the timestamp in all cases...
12-02-2025 09:29 AM
Can you not use this?