How to optimize Timestamp to Unix ns conversion?

The solution I've come up with feels a bit clunky; I'm curious if anyone knows of a quicker way to get at the full resolution of the Timestamp format when converting to a Unix ns timestamp. I'm having trouble finding any functionality in LabVIEW that lets me work with the full capabilities of the Timestamp data type.

[Image: unix ns.png]

~ Helping pave the path to long-term living and thriving in space. ~
Message 1 of 9

I would subtract the epoch offset from the seconds before converting them to ns; that feels much more logical. However, at least on Windows, the maximum resolution of the clock is 100 ns, and Microsoft does not guarantee that it is even that accurate, only that over time it won't deviate by more than the crystal frequency allows, and by less if automatic time synchronization over the network or some other means is available. Factually, LabVIEW only really uses the most significant 32 bits of the fractional U64 and leaves the lower 32 bits zeroed.

The 100 ns resolution really results in only about 22 of the 32 bits being significant.
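
In C terms the suggested ordering looks roughly like this. This is only a sketch, not LabVIEW internals; the helper name and the double-based fraction conversion are mine:

#include <stdint.h>

/* Seconds between the LabVIEW epoch (1-Jan-1904 UTC) and the Unix
 * epoch (1-Jan-1970 UTC): 24107 days * 86400 s/day. */
#define LV_TO_UNIX_EPOCH_OFFSET 2082844800LL

/* Subtract the epoch offset from the whole seconds first, while the
 * magnitude is still small, then scale to ns and add the fraction.
 * Going through double for the fraction is safe because LabVIEW fills
 * at most the top 32 bits, well within double's 53-bit mantissa. */
int64_t lv_timestamp_to_unix_ns(int64_t seconds, uint64_t fraction)
{
    int64_t unix_sec = seconds - LV_TO_UNIX_EPOCH_OFFSET;
    int64_t frac_ns  = (int64_t)((double)fraction * 0x1p-64 * 1e9);
    return unix_sec * 1000000000LL + frac_ns;
}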

Rolf Kalbermatter  My Blog
DEMO, Electronic and Mechanical Support department, room 36.LB00.390
Message 2 of 9

Yeah, I was only seeing about 20 bits set, which corresponds to the 100 ns resolution. That's true for system timestamps, but I'm looking to generate Unix ns timestamps to embed in log files for each data point, so I'll also be doing intermediate dt calculations that could extend beyond those 20 bits. The double definitely loses precision around the tens of microseconds from what I can tell, and I'm trying to match the timestamp style (Unix ns) we have from other systems around the company.
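
As a quick standalone C illustration of why a double can't carry the final ns value either (the example value is arbitrary): a full Unix ns timestamp in 2025 is around 1.7e18, far beyond 2^53 ≈ 9.0e15, the largest range in which a double represents every integer exactly, so the step size of a double up there is 256 ns.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Beyond 2^53 a double can no longer hold every integer; at
     * ~1.7e18 its step size (ulp) is 2^8 = 256 ns. */
    int64_t t_ns = 1755702405217000001LL;  /* arbitrary example */
    double  d    = (double)t_ns;
    printf("i64:    %lld\n", (long long)t_ns);
    printf("double: %.0f (off by %lld ns)\n",
           d, (long long)(t_ns - (int64_t)d));
    return 0;
}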

~ Helping pave the path to long-term living and thriving in space. ~
Message 3 of 9

My question was mostly about having to do the typecast. It seems odd that there's no way to get at the full resolution without going through a typecast. Is the typecast still a serialization operation?

~ Helping pave the path to long-term living and thriving in space. ~
Message 4 of 9

And good call on moving the epoch conversion to a lower magnitude step, thanks.

[Image: IlluminatedG_0-1755702405217.png]

~ Helping pave the path to long-term living and thriving in space. ~
Message 5 of 9

Not sure what you mean by a serialization operation. Obviously it doesn't do big-endian swapping here (Typecast always assumes big endian on the byte-stream side, no matter what platform you are on), but since you have no byte stream (string) on either side, no endianization is applied; otherwise your numbers would look odd.

Technically, a Timestamp is a cluster of an I64 and a U64. Not sure why you decided to use a fixed point there.

#include "platdefines.h"  /* NI header that defines the endianness macro */
#include <stdint.h>

typedef struct
{
    int64_t seconds;           /* whole seconds since 1-Jan-1904 UTC */
    union
    {
        uint64_t fraction;     /* fractional seconds, units of 2^-64 s */
        struct
        {
#if NI_BIG_ENDIAN
            uint32_t hi;       /* most significant half; the only part LabVIEW fills */
            uint32_t lo;       /* least significant half; left zeroed */
#else
            uint32_t lo;       /* least significant half; left zeroed */
            uint32_t hi;       /* most significant half; the only part LabVIEW fills */
#endif
        } fract;
    } u;
} ATime128, *ATime128Ptr;

As to trying to get an even higher resolution than 100 ns: are you sure? That doesn't make a lot of sense, as trying to compute that difference through high-performance counters is most likely not only not going to give you higher resolution, but also takes a considerable amount of calculation time that can easily put your result off by close to 100 ns or more.

Rolf Kalbermatter  My Blog
DEMO, Electronic and Mechanical Support department, room 36.LB00.390
Message 6 of 9

Fixed point to get proper handling of the fractional representation.

Someone from NI R&D years ago said type casting in LabVIEW tends to be a serialize/deserialize operation to accomplish the type conversion.

~ Helping pave the path to long-term living and thriving in space. ~
Message 7 of 9

The U64 is the fractional second, exactly the same as a 64-bit fixed point with 0 integer bits. Using the fixed point and then doing the double conversion provides the correct decimal value. I'm not up to speed on doing this with integer math otherwise, if there's another trick.
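
In C terms, that fixed-point reading is just a scale by 2^-64 (a sketch; the helper name is mine). The double conversion is even exact here, since LabVIEW populates only the top 32 bits of the fraction, well within double's 53-bit mantissa:

#include <stdint.h>

/* The u64 fraction read as a <+,64,0> fixed point is fraction * 2^-64
 * seconds. 0x1p-64 is the C99 hex-float literal for 2^-64. */
static inline double fraction_to_seconds(uint64_t fraction)
{
    return (double)fraction * 0x1p-64;
}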

~ Helping pave the path to long-term living and thriving in space. ~
Message 8 of 9

@IlluminatedG wrote:

Fixed point to get proper handling of the fractional representation.

Someone from NI R&D years ago said type casting in LabVIEW tends to be a serialize/deserialize operation to accomplish the type conversion.


There used to be some weird behavior in very old LabVIEW versions (< 6.0 or so) where it seemed to do endianization even when typecasting between non-stream-based data such as float32 <-> (u)int32. That, of course, is quite nonsensical.

Nowadays these things work fine, and I would assume that a Timestamp is no different; it seems to work for you. 😁 There is of course the possibility that it internally serializes to a byte stream (with native-endian to big-endian conversion) and then back to the numeric type (big-endian to native-endian), but that would seem quite braindead. Still, a LabVIEW Typecast is not the same as a C typecast: it does proper memory buffer checks and will extend the incoming memory with appended zero bytes if the output type is larger. Combined with the fact that Typecast always assumes big endian in the serialized stream, that can cause surprising results when operating on a too-small serialized (string) input.
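
As a hypothetical C sketch of those string-input semantics (not LabVIEW's actual code; the helper name is mine):

#include <stdint.h>
#include <string.h>

/* The byte-stream side is treated as big-endian, and an input shorter
 * than the output type is extended with appended zero bytes. */
uint32_t typecast_string_to_u32(const uint8_t *bytes, size_t len)
{
    uint8_t buf[4] = {0};                    /* appended zero padding */
    memcpy(buf, bytes, len < 4 ? len : 4);
    return ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16) |
           ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];
}

So a two-byte input like "\x01\x02" would come out as 0x01020000 rather than 0x00000102, which is exactly the kind of surprise described above.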

As to that pesky integer math: you basically want to divide the U64 by 2^64 / 1000000000 = 18446744073. If you do it with Quotient & Remainder instead of a straight division, you even avoid floating-point conversions entirely.
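
The same trick in C terms (a sketch; the helper name is mine):

#include <stdint.h>

/* 2^64 / 1e9 = 18446744073.709..., so a single integer division by the
 * truncated constant turns the u64 fraction directly into ns with no
 * floating point. Truncating the divisor overestimates the result by
 * well under a nanosecond even for a full second's worth of fraction,
 * far below the ~100 ns real resolution of the clock. */
static inline uint64_t fraction_to_ns(uint64_t fraction)
{
    return fraction / 18446744073ULL;
}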

[Image: LabVIEW Timestamp to ns.png]

Rolf Kalbermatter  My Blog
DEMO, Electronic and Mechanical Support department, room 36.LB00.390
Message 9 of 9