10-18-2024 03:06 PM - edited 10-18-2024 04:01 PM
@Andrey_Dmitriev wrote:
My statement is: In a 32-bit environment, the pointers are 32 bits. I really don't want to see 64-bit here; any operation with 64-bit values in a 32-bit environment is overhead, usually involving two registers — there are no 64-bit registers available at all in a 32-bit environment (I hardly use 32-bit any longer, but the overall principle is important!). Why are we not always saving 8- or 16-bit gray images as 64 bits? That would also simplify our developers' work significantly, wouldn't it?
Your reasoning has one flaw! A pointer is a pointer; if you use 8 bytes to store a 32-bit integer you waste a "huge" amount of 4 bytes per pointer. Whoooa! Let's call the memory police!
If you have a 16-bit image of 1000 * 1600 pixels and you happen to save it as 64-bit image, you wasted a "mere" 1000 * 1600 * 6 bytes, which is "only" 9'600'000 bytes or 9.6 MBytes for a single image. Quite a difference!
You do NOT want LabVIEW to magically mutate a front panel control just because you happen to run the VI in a different bitness. Try to call that VI through Call by Reference or anything similar that needs a strict type. What, I need to provide different connector panes for the different bitnesses? What a shitty design decision! Or I want to do the Call By Reference through a VI Server connection, which requires flattening the data to a network stream. One side runs in 32-bit, the other in 64-bit, and boom, crash: my VI Server connection falls apart because the bitstream is not size-consistent between both sides! There are other places that could cause trouble with such an auto-morphing datatype. LabVIEW has always been strictly typed, including having explicitly sized datatypes for everything except strings and arrays. It is a restriction of course, but one that has prevented many of the atrocities that you have to deal with when programming in C and any similar pseudo-strictly-typed programming language. It does mean wasting 4 bytes when you want to store a pointer-capable variable in 32-bit LabVIEW. But that is nothing I have ever had sleepless nights over! 😂
Maybe they should have created a new control with USZ in the terminal but use 64-bit anyways? You would not have your OCD about wasting 4 bytes triggered and LabVIEW would still work as it does now! 😁
Why do we have to deal with 32-bit size parameters for arrays and strings? Many of them never get bigger than 255 elements, so I want to propose a short Pascal string for LabVIEW! Those 3 bytes saved when storing short strings with fewer than 256 characters will surely help my application run a few minutes longer on a resource-constrained system. 😋
One more flaw in your explanation: LabVIEW does not use two registers for that 64-bit value. In 32-bit it puts the lower 32 bits on the stack to call the function. It has to do that, as anything else would corrupt the calling stack for that function. But when that parameter is passed between VIs, it is passed neither over the stack nor in registers. LabVIEW doesn't use the stack to execute VIs; VI execution is done through its own scheme where it passes around a parameter array matching the connector pane. That array is located in the heap and can stay put in place across multiple calls of that VI. The stack, respectively CPU registers, only come into play when calling C functions from LabVIEW, and registers only in 64-bit; 32-bit always puts all the parameters on the stack, but again only for C functions.
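For what it's worth, here is a tiny C sketch (my own illustration, nothing from LabVIEW's internals) of the point about a 64-bit argument in a 32-bit build: when everything goes over the stack, the value simply occupies two 32-bit slots, and the callee can look at the halves separately.

#include <stdint.h>
#include <stdio.h>

/* In a typical 32-bit cdecl build this argument lands in two 32-bit stack
   slots; splitting it here just makes the two halves visible. */
static void take_u64(uint64_t v)
{
    uint32_t lo = (uint32_t)(v & 0xFFFFFFFFu);   /* lower half */
    uint32_t hi = (uint32_t)(v >> 32);           /* upper half */
    printf("lo=0x%08X hi=0x%08X\n", (unsigned)lo, (unsigned)hi);
}

int main(void)
{
    take_u64(0x1122334455667788ULL);   /* prints lo=0x55667788 hi=0x11223344 */
    return 0;
}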
In most cases yes, but not always. The problem is that even if the calls are done sequentially, it is still not guaranteed that they will be executed from the same thread, and some DLLs can be 'sensitive' to such thread switches. One example is the old NVIDIA CUDA library. To get this beast running, a special 'NI Compute' wrapper library was introduced to force all calls of all functions into the same thread, but not the UI thread (that is possible too, but some work is required). In my career, I've encountered such a third-party DLL only once, and I ended up using NI Compute for 32-bit and later my own solution for 64-bit. By the way, it was hard to explain my problem to the developer, because he accepted the fact that the library was not thread-safe (parallel calls were not required), but he was unable to understand the crazy development environment where each call could be made from a different thread. (I'm too lazy to prepare a demo that would show how sequential calls could lead to a crash.)
That's exactly because the CUDA library, for some obscure reason, did not want to bother the caller with a context pointer or something similar for a specific connection, but instead used TLS to save its internal state, assuming that any caller will always call CUDA from the same thread. So the NI CUDA wrapper does some tricky pull-up exercises to force all CUDA function calls into a specific thread. The alternative would have been to run all CUDA API calls in the UI thread, but CUDA calls can typically take quite a bit of time before they return control to the caller, and that would make LabVIEW nearly unusable from a user's point of view.
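A minimal C sketch of the failure mode being described (all names are mine, not CUDA's): a library that keeps its state in thread-local storage only works if every call comes from the very same thread, which is exactly what a dataflow scheduler does not guarantee.

#include <stdio.h>

static _Thread_local int tls_handle = 0;   /* per-thread "session" state (C11 TLS) */

void lib_open(void)  { tls_handle = 42; }  /* stores state in the calling thread only */
int  lib_query(void) { return tls_handle; }/* returns 0 if called from another thread! */

int main(void)
{
    lib_open();
    /* Same thread: fine. A runtime that schedules the next call on a
       different worker thread would see tls_handle == 0 instead of 42. */
    printf("%d\n", lib_query());
    return 0;
}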
10-19-2024 12:38 AM - edited 10-19-2024 12:40 AM
@rolfk wrote:
@Andrey_Dmitriev wrote:
My statement is: In a 32-bit environment, the pointers are 32 bits. I really don't want to see 64-bit here; any operation with 64-bit values in a 32-bit environment is overhead, usually involving two registers — there are no 64-bit registers available at all in a 32-bit environment (I hardly use 32-bit any longer, but the overall principle is important!). Why are we not always saving 8- or 16-bit gray images as 64 bits? That would also simplify our developers' work significantly, wouldn't it?
Your reasoning has one flaw! A pointer is a pointer; if you use 8 bytes to store a 32-bit integer you waste a "huge" amount of 4 bytes per pointer. Whoooa! Let's call the memory police!
Well, it is not about performance or memory saving; the LabVIEW compiler produces not particularly efficient machine code with generous memory allocations anyway, so saving a couple of instructions or a few bytes means nothing, of course. When I'm writing something in C like 'for (int i = 0; i < 10; ++i) {}', I don't worry about using a 4-byte variable just for ten iterations. And I know how to deal with 64 bits in a 32-bit environment, heap and stack, as well as calling conventions, so don't worry about that.
In general, yes, I'm voting for dynamic types on wires and terminals, like this:
You're voting against; that's OK. From my point of view, if 'pointer-sized' integers were introduced for the CLFN, then they should have corresponding types on the BD/FP. It is not guaranteed that they will always fit into 64 bits. Right now the dynamic type is forced to coerce to a fixed type, and this is wrong by design from my point of view, resulting in I64/U64 being present in a 32-bit environment (which is absolutely meaningless and unnecessary). Mixed 32/64-bit development would not be a problem, because I can always perform my own conversion if needed.
In C, for example, there is no guarantee that an 'int' will be 32 bits. If you want variables of a specific size, particularly when writing code that involves bit manipulation, you should use the 'Standard Integer Types' mandated by the C99 specification, and size_t for pointer-sized values. But there is at least a specification; C is covered by ISO/ANSI standards. The mysterious 'G' programming language is maintained by NI alone, with no standard behind it, designed based on 'the mood of the developer's dog or whatever.'
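As a quick reminder of the C99/C11 types being referred to here (a plain toy example, nothing LabVIEW-specific):

#include <stdint.h>   /* int32_t, uint64_t, uintptr_t, ... */
#include <stddef.h>   /* size_t */
#include <stdio.h>

int main(void)
{
    int32_t   exact32 = -1;                   /* exactly 32 bits on every platform */
    uint64_t  exact64 = 0;                    /* exactly 64 bits on every platform */
    size_t    nbytes  = sizeof exact64;       /* object-size type, scales with the platform */
    uintptr_t addr    = (uintptr_t)&exact32;  /* integer wide enough to hold a pointer */

    printf("sizeof(int)=%zu  sizeof(size_t)=%zu  sizeof(uintptr_t)=%zu\n",
           sizeof(int), sizeof(size_t), sizeof addr);
    (void)nbytes;
    return 0;
}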
Your point about int32 on arrays is also a good one. Someone nailed int32_t to the for loops and arrays (and why signed — don't ask me). As a result, when I'm building a histogram for a uint16_t image, I always get a coercion from 16 to 32 bits on the index. It works, but it's not elegant. Why can't array indices accept 8 or 16 bits without coercion? Ah, because of the fixed type. On the other hand, my PC has 384 GB of RAM, so I could easily create an array or for loop larger than 2147483647 elements/iterations. But no, I have to use a while loop instead with my own 64-bit counter. And you will get no improvement, because NI will say something like '...it requires a complete rewrite of the code generator, the compiler, and all supporting DLLs, because the size of an array is part of the array's data, and that is widely assumed to be a 4-byte signed integer...'. This also applies to the USZ/SZ types above. I could submit an idea, but it's a waste of time; it will never get implemented and will be declined; that's clear to me.
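The histogram example in C, just to make the index point concrete (my own toy code): the uint16_t pixel value can index the histogram array directly, with no widening coercion anywhere.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Build a 65536-bin histogram of a 16-bit grayscale image. */
void histogram_u16(const uint16_t *pixels, size_t count, uint32_t hist[65536])
{
    memset(hist, 0, 65536 * sizeof hist[0]);
    for (size_t i = 0; i < count; ++i)
        hist[pixels[i]]++;   /* the 16-bit pixel value is used as the index as-is */
}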
LabVIEW is great, ahead of its time, I still love it, and it has a lot of 'pros', but it also has some 'cons'.
10-19-2024 02:35 AM - edited 10-19-2024 03:00 AM
Basically you simply want LabVIEW to be even more like C. For what reason? LabVIEW is LabVIEW, and while there are things that could have been solved better, this is not one of them.
If you really want dynamic data types, you'll have to use Python or C#'s auto-typed variables. I hate them both for their very inexplicit type system. A variable named tinyInt that gets assigned a hashtable containing a few million key-value pairs? Sure, why not, and there is no way to know that except by going back to where it is assigned. And in the next statement the programmer assigns a boolean to it, and the interpreter is like: sure, no problem, you're the boss, and I will never even think about preventing you from shooting yourself in the foot! We live in a free country after all!
You claim mixed 32/64-bit is a problem the way it is solved now. But it's not, at least not in LabVIEW! It works perfectly. I have many LabVIEW libraries that access shared libraries and that work beautifully, not only in both 32-bit and 64-bit environments but also on multiple platforms. They usually don't contain even a single conditional compile structure related to this. The OpenG ZIP library solves any such issues by using native LabVIEW datatypes on the calling interface. That's one solution. Other libraries interface to existing shared libraries, and when you consistently use pointer-sized integers in the Call Library Node and an unsigned 64-bit integer on the diagram, things work just fine. Yes, you waste 4 bytes per pointer when it runs in 32-bit, but you just said that's not what bothers you. Wanting to interface to structures containing pointers to structures that contain more pointers? OK, yes, you get into difficulty there. It can be done with pointer arithmetic, walking the memory yourself, and I have done so in the past, but it's a crime to publish such code. You basically play compiler yourself, and that's simply insane. Writing a wrapper shared library in C is always faster, as you can let the C compiler worry about pointer sizes, element alignment, hidden parameters, and whether complex parameters passed by value get flattened on the stack or passed as a pointer. It can all be looked up, but it is a nasty thing to solve by hand.
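A hedged sketch of that "thin wrapper DLL in C" idea. Every struct, field and function name here is invented for illustration; the point is only that the wrapper chases the nested pointers itself, so the Call Library Node only ever sees flat, fixed-size parameters that are identical in 32-bit and 64-bit.

#include <stdint.h>
#include <string.h>

typedef struct item   { const char *name; double value; } item;
typedef struct report { int32_t count; item *items; } report;   /* pointers inside pointers */

/* Exported wrapper: copy item i of a report into flat, caller-allocated buffers. */
int32_t wrap_get_item(const report *r, int32_t i,
                      char *name_buf, int32_t name_len, double *value)
{
    if (r == NULL || i < 0 || i >= r->count || name_len <= 0)
        return -1;                                   /* simple error code for the diagram */
    strncpy(name_buf, r->items[i].name, (size_t)name_len - 1);
    name_buf[name_len - 1] = '\0';                   /* always NUL-terminate */
    *value = r->items[i].value;
    return 0;
}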
And the current type system in LabVIEW won't change. It would mean completely rewriting large parts of LabVIEW and would in fact create a system that is highly incompatible in many areas with what exists now. A huge effort to save a few bytes on a platform that will have no relevance anymore in a few years and will be axed, just like the 32-bit versions of LabVIEW for Linux and Macintosh were seven years ago!
128-bit CPUs? Yes, they may eventually arrive, but not likely for your normal PC for some 20 or more years. It will be specialty hardware for things like machine-learning systems, but we have other problems to solve first. Without a very different energy policy there simply won't be enough energy for all those mega-cluster hardware systems.
10-20-2024 01:59 AM - edited 10-20-2024 02:02 AM
@rolfk wrote:
Basically you simply want LabVIEW to be even more like C. For what reason? LabVIEW is LabVIEW, and while there are things that could have been solved better, this is not one of them.
You still don't understand me and are writing soo many words for such a small thing.
No, I don't want LabVIEW to be even more like C. There is no reason to. I'm talking about a completely different thing; and yes, this is one thing which could have been solved and designed much better, in my humble opinion. Again, in a 32-bit environment the pointers are 32 bits wide; in 64-bit they are 64; in 16-bit, 16; in a future 128-bit environment, 128; and so on. That is obvious. Now, the point that is wrong by design is that the conversion from the variable-sized type to the fixed type (64-bit in our case) is done inside the CLFN. It is simply a hard-coded cast inside. But this conversion is not the CLFN's business; it should be done outside, if needed at all. This breaks the 'single responsibility' principle of software engineering, regardless of programming language. Do you know any other language which does it in such a manner?
As a result, you get three side effects. The first is that you have an unnecessary 64-bit value on the output wire in a 32-bit environment, and your argument is something like 'two unused words? Call the memory police!'. The second is that it only works as long as we have no more than 64 bits, and your argument is '128-bit CPUs? Yes, they may eventually arrive, but not likely' (yes, '640K ought to be enough for anybody'). And the last one: if I want to cast that 64-bit value back to 32-bit in a 32-bit environment for my own 'compatibility purposes', I get a warning about possible loss of data, because I know that in this particular case the upper 32 bits are zeros, but the compiler doesn't. It is not elegant; it is wrong from a design point of view. I don't know how to explain it better. In Germany, one usually says something like 'Ordnung muß sein'. Having a USZ/SZ type propagated to the LabVIEW BD would not add any 32/64-bit conditional structures; it would pass flawlessly from one DLL to another through SubVIs regardless of bitness, and it would be much better by design from my point of view.
Now, if you would like to have a dedicated, fixed, hard-coded "64-bit everywhere" for your own "structure compatibility" purposes or whatever else, then you should perform this cast from USZ/SZ to 64-bit which you so wanted. Not the CLFN, but you yourself. And this cast will obviously work fine in both 32-bit and 64-bit environments; nothing will be different from your point of view.
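In C terms, the "do the cast yourself, outside the call" idea looks roughly like this (my own analogy, not anything the CLFN does): uintptr_t has the native pointer width, and widening it to a fixed uint64_t is one explicit, lossless step the caller controls.

#include <stdint.h>

uint64_t widen_handle(void *p)
{
    uintptr_t native = (uintptr_t)p;   /* 32 bits on a 32-bit build, 64 bits on 64-bit */
    return (uint64_t)native;           /* zero-extended; the upper half is 0 in a 32-bit build */
}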
Having both signed and unsigned pointer-sized types, as well as some concerns about coercion dots, is also questionable, but let's stop our discussion at this point.
In general, we think slightly differently. Earlier I wrote tons of code in statically and strictly typed Fortran, Pascal, and Modula-2, then C (I love pointers) and Delphi... Nowadays I'm just enjoying some of the "weakness" and "dynamics" offered by modern C#, Python and Java, like a fresh breeze, really. You hate that, but I love it; and conversely, you love the fixed 64-bit type coming out of the CLFN for pointer-sized integers, but I hate it, because it is not a "generic" solution. We just think absolutely differently.
10-20-2024 09:47 AM - edited 10-20-2024 09:53 AM
Well, let's rest it at that. The current implementation is the easy solution, without having to touch umpteen parts inside LabVIEW to deal with yet another new datatype. The alternative would likely have been to have no 64-bit support for many more years. At the time this was actively worked on, the LabVIEW team was already trying to determine how to go further and was discussing LabVIEW NXG.
And I'm simply an egoistic jerk here. The way it works now works perfectly fine as long as there is no 128-bit version of LabVIEW, and I'm totally 200% sure that that won't happen until many many years, likely decades, past my retirement. So not a problem I ever will have to deal with, so f"ck it! 😁
National Instruments and even Emerson may likely not exist at that point anymore either. And the whole world will program in Taipan instead of Python. 😁 Or humanity will all bow to their AI overlords and let them do all the work. 😎
10-21-2024 03:31 AM
@rolfk wrote:
wiebe@CARYA wrote:
Note that on 32 bit LabVIEW, you shouldn't use pointer sized integers.
Unfortunately, that is wrong advice if you work with pointers to DLL functions. If it is a pointer, you should ALWAYS configure the Call Library Node to use a pointer-sized integer for such a parameter. Yes, in LabVIEW it will always be treated as a 64-bit (unsigned) integer, but that is intentional and correct, since LabVIEW is always bit-size strict in its datatypes. For pointer-sized parameters the Call Library Node will correctly convert the 64-bit value to a 32-bit pointer when it executes in 32-bit LabVIEW and leave it at 64 bits when running in 64-bit LabVIEW. It also will correctly sign-extend any returned 32-bit pointer, when running under 32-bit, to the corresponding 64-bit value.
Well, I simply couldn't get this to work.
Windows function pointers come from WoW64, not the 64-bit OS, so I had to use 32-bit pointers, not pointer-sized integers.
I've just been through this; I couldn't get this code to work with pointer-sized integers on both 32-bit and 64-bit LV. I had to resort to conditional disable structures to switch between pointer-sized integers (64-bit LV) and 32-bit integers (32-bit LV).
Without a pointer-sized integer type, it's a big mess anyway. You can't use a 64-bit integer as an input to an LV DLL in 32-bit LV.
10-21-2024 03:38 AM
@rolfk wrote:
Well, let's rest it at that. The current implementation is the easy solution, without having to touch umpteen parts inside LabVIEW to deal with yet another new datatype. The alternative would likely have been to have no 64-bit support for many more years. At the time this was actively worked on, the LabVIEW team was already trying to determine how to go further and was discussing LabVIEW NXG.
And I'm simply an egoistic jerk here. The way it works now works perfectly fine as long as there is no 128-bit version of LabVIEW, and I'm totally 200% sure that that won't happen until many many years, likely decades, past my retirement. So not a problem I ever will have to deal with, so f"ck it! 😁
National Instruments and even Emerson may likely not exist at that point anymore either. And the whole world will program in Taipan instead of Python. 😁 Or humanity will all bow to their AI overlords and let them do all the work. 😎
I'd be very happy when 32 bit is eliminated though.
One less variable to consider. And although it's all manageable for DLLs, AFAIK there isn't really a way to deal with 32/64-bit PPLs.
10-21-2024 04:02 AM
wiebe@CARYA wrote:
@rolfk wrote:
wiebe@CARYA wrote:
Note that on 32 bit LabVIEW, you shouldn't use pointer sized integers.
Unfortunately, that is wrong advice if you work with pointers to DLL functions. If it is a pointer, you should ALWAYS configure the Call Library Node to use a pointer-sized integer for such a parameter. Yes, in LabVIEW it will always be treated as a 64-bit (unsigned) integer, but that is intentional and correct, since LabVIEW is always bit-size strict in its datatypes. For pointer-sized parameters the Call Library Node will correctly convert the 64-bit value to a 32-bit pointer when it executes in 32-bit LabVIEW and leave it at 64 bits when running in 64-bit LabVIEW. It also will correctly sign-extend any returned 32-bit pointer, when running under 32-bit, to the corresponding 64-bit value.
Well, I simply couldn't get this to work.
Windows function pointers come from WoW64, not the 64-bit OS, so I had to use 32-bit pointers, not pointer-sized integers.
I've just been through this; I couldn't get this code to work with pointer-sized integers on both 32-bit and 64-bit LV. I had to resort to conditional disable structures to switch between pointer-sized integers (64-bit LV) and 32-bit integers (32-bit LV).
Without a pointer-sized integer type, it's a big mess anyway. You can't use a 64-bit integer as an input to an LV DLL in 32-bit LV.
I see your words but no code whatsoever, so I can't really know what didn't work. I do it regularly and it always has worked so far.
10-21-2024 04:24 AM
@rolfk wrote:
wiebe@CARYA wrote:
@rolfk wrote:
wiebe@CARYA wrote:
Note that on 32 bit LabVIEW, you shouldn't use pointer sized integers.
Unfortunately, that is wrong advice if you work with pointers to DLL functions. If it is a pointer, you should ALWAYS configure the Call Library Node to use a pointer-sized integer for such a parameter. Yes, in LabVIEW it will always be treated as a 64-bit (unsigned) integer, but that is intentional and correct, since LabVIEW is always bit-size strict in its datatypes. For pointer-sized parameters the Call Library Node will correctly convert the 64-bit value to a 32-bit pointer when it executes in 32-bit LabVIEW and leave it at 64 bits when running in 64-bit LabVIEW. It also will correctly sign-extend any returned 32-bit pointer, when running under 32-bit, to the corresponding 64-bit value.
Well, I simply couldn't get this to work.
Windows function pointers come from WoW64, not the 64-bit OS, so I had to use 32-bit pointers, not pointer-sized integers.
I've just been through this; I couldn't get this code to work with pointer-sized integers on both 32-bit and 64-bit LV. I had to resort to conditional disable structures to switch between pointer-sized integers (64-bit LV) and 32-bit integers (32-bit LV).
Without a pointer-sized integer type, it's a big mess anyway. You can't use a 64-bit integer as an input to an LV DLL in 32-bit LV.
I see your words but no code whatsoever, so I can't really know what didn't work. I do it regularly and it always has worked so far.
I know, open sourcing will happen but isn't that easy 😑.
I'll try to get it working with the pointer sized integers, I know it never was a problem before. I had to fix other problems after changing that, so it might be one (of many) red herrings... The main problem might have been the calling convention. This is more forgiving on 64 bit than it is on 32 bit (at least for <4 parameters).
I do know that creating a callback VI in a DLL would require the missing pointer-sized integer datatype. The workaround I use is to create a 64-bit integer version and a 32-bit integer version, and two build specs: one for 32-bit LV and one for 64-bit LV. I need two build scripts anyway, to create DLL files with 32 and 64 in the name, so it's just a minor annoyance...
10-21-2024 04:52 AM
wiebe@CARYA wrote:
@rolfk wrote:
wiebe@CARYA wrote:
@rolfk wrote:
wiebe@CARYA wrote:
Note that on 32 bit LabVIEW, you shouldn't use pointer sized integers.
Unfortunately, that is wrong advice if you work with pointers to DLL functions. If it is a pointer, you should ALWAYS configure the Call Library Node to use a pointer-sized integer for such a parameter. Yes, in LabVIEW it will always be treated as a 64-bit (unsigned) integer, but that is intentional and correct, since LabVIEW is always bit-size strict in its datatypes. For pointer-sized parameters the Call Library Node will correctly convert the 64-bit value to a 32-bit pointer when it executes in 32-bit LabVIEW and leave it at 64 bits when running in 64-bit LabVIEW. It also will correctly sign-extend any returned 32-bit pointer, when running under 32-bit, to the corresponding 64-bit value.
Well, I simply couldn't get this to work.
Windows function pointers come from WoW64, not the 64-bit OS, so I had to use 32-bit pointers, not pointer-sized integers.
I've just been through this; I couldn't get this code to work with pointer-sized integers on both 32-bit and 64-bit LV. I had to resort to conditional disable structures to switch between pointer-sized integers (64-bit LV) and 32-bit integers (32-bit LV).
Without a pointer-sized integer type, it's a big mess anyway. You can't use a 64-bit integer as an input to an LV DLL in 32-bit LV.
I see your words but no code whatsoever, so I can't really know what didn't work. I do it regularly and it always has worked so far.
I know, open sourcing will happen but isn't that easy 😑.
That wasn't the angle I was trying to get at. 😁 But I would really like to see an example where you got such a problem. It is hard for me to imagine how that could happen.
The main problem might have been the calling convention. This is more forgiving on 64 bit than it is on 32 bit (at least for <4 parameters).
64-bit doesn't really have multiple calling conventions. There is exactly one (fastcall), but of course it is different from the two that LabVIEW supports in 32-bit. Instead of adding that, and causing a 32-bit/64-bit nightmare when moving VIs from one to the other, they chose the pragmatic option: leave the 32-bit options in place and simply ignore them in 64-bit, but maintain them anyway as part of the saved VI definition, so that it will still work when loaded in 32-bit. Yes, 64-bit Windows knows other calling conventions, but they are not for user-space code.
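To illustrate the calling-convention point (a Windows-specific toy example of my own; the function names are made up): a 32-bit build distinguishes __cdecl and __stdcall, while a 64-bit build ignores both keywords and uses the single x64 register convention.

#ifdef _WIN32
#include <stdint.h>

/* In a 32-bit DLL these two exports differ in who cleans up the stack and in
   name decoration (add_std becomes _add_std@8). In a 64-bit DLL both compile
   to the same register-based convention and the keywords are ignored. */
__declspec(dllexport) int32_t __cdecl   add_c(int32_t a, int32_t b)   { return a + b; }
__declspec(dllexport) int32_t __stdcall add_std(int32_t a, int32_t b) { return a + b; }
#endif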
As to callback functions in a DLL implemented as LabVIEW VIs, that is a concept I have pondered a few times but consider simply not practical. I consider the solution more brittle than a spider's web, a total maintenance nightmare, and pretty much nothing I would ever want to support for average LabVIEW users. They don't understand the difficulties and will mess it up over and over again. If you can't write the C code to implement the callback functions yourself (one of the more advanced C programming topics), you can't really comprehend the difficulties, not even for just using an existing solution. But if you do know how to write the according C code, it's simply tenfold easier to do it that way than trying to do callback functions in LabVIEW.
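A hedged sketch of that "write the callback in C" route: PostLVUserEvent() is LabVIEW's documented manager call (extcode.h in cintools) for firing a user event from C code; everything else here, including the library's callback signature and the int32 payload, is invented for the example, and the event's datatype on the diagram must of course match what is posted.

#include <stdint.h>
#include "extcode.h"      /* LabVIEW's C interface header (cintools) */

static LVUserEventRef g_event = 0;   /* user event refnum handed over from the diagram */

/* Called once via a Call Library Node to register the user event refnum. */
MgErr register_event(LVUserEventRef *ev) { g_event = *ev; return mgNoErr; }

/* The actual C callback handed to the third-party library: it just forwards
   the value into LabVIEW, where an Event Structure picks it up. */
void on_library_callback(int32_t value)
{
    if (g_event != 0)
        PostLVUserEvent(g_event, &value);   /* data is copied by LabVIEW */
}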