LabVIEW Idea Exchange

GregSands

Change array index representation to U64

Status: New

Currently, the only representation for array indices is I32.  Given that indices can only be positive, and that modern systems are 64-bit, it would seem sensible to make this U64.
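For reference, the numeric ranges at stake: an I32 index tops out just above 2 billion, while I64/U64 are astronomically larger. A quick sketch (plain Python, used purely to illustrate the arithmetic, since LabVIEW itself is graphical):

```python
# Index ranges of the representations discussed in this Idea
# (illustrative Python; LabVIEW stores array indices as I32).
I32_MAX = 2**31 - 1   # current limit: 2,147,483,647
I64_MAX = 2**63 - 1
U64_MAX = 2**64 - 1

print(f"I32 max index: {I32_MAX:,}")
print(f"I64 max index: {I64_MAX:,}")
print(f"U64 max index: {U64_MAX:,}")
```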

9 Comments
JackDunaway
Trusted Enthusiast

The array index is the tip of the 32-bit iceberg. Are you more interested in specifically converting Index Array, or in changing the LabVIEW "base" integer datatype for all functions from I32 to I64/U64?

 

As a side note, even though you're correct that the current implementation only utilizes the positive half of the datatype, it keeps doors open to use the signed integer, such as Negative Values in Index Array. I'm not specifically for or against that Idea, but I like how it attempts to cleverly leverage the full capability of the signed datatype. Plus, it's irksome to perform integer math only to find that some functions (especially property nodes, it seems) use U32 while most use I32, resulting in coercion dots all over the place, so I'd rather ALL functions adhered to the base signed integer.

 

As a final note, using an I32 allows indexing of a 2^31-element array, or 2GB of memory if you use the smallest 8-bit datatype (U8, BOOL, I8). On the other hand, 2^63 elements would require 9.2EB of memory (for the purists, 8 exbibytes), which to put it into perspective is nine million of the 1TB hard drives I have sitting on my desk.
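The footprint arithmetic in that last note checks out; a small sketch (Python, illustration only):

```python
# Footprint of a fully-indexed array of 1-byte elements (illustrative).
i32_span = 2**31   # bytes reachable with an I32 index
i64_span = 2**63   # bytes reachable with an I64 index

print(i32_span / 2**30, "GiB")            # 2.0 -> the "2GB" figure above
print(i64_span / 2**60, "EiB")            # 8.0 -> "8 exbibytes"
print(round(i64_span / 10**18, 1), "EB")  # 9.2 -> "9.2EB" in decimal units
```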

GregSands
Active Participant

I agree on adhering to a single default.  In fact, having considered what you say, perhaps it makes more sense for this to be I64 everywhere rather than U64.

 

And while 2GB can be addressed with I32, 2.1GB can't, and 4.1GB can't with U32.
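Those thresholds read most naturally in binary units (GiB): 2 GiB is exactly the I32 ceiling and 4 GiB the U32 ceiling for 1-byte elements. A sketch of the boundary check (Python for illustration; `fits` is a hypothetical helper, not a LabVIEW function):

```python
# Which array sizes still fit each index representation (1-byte elements)?
# `fits` is a hypothetical helper, not a LabVIEW function.
def fits(num_elements, bits, signed=True):
    """True if the largest index (num_elements - 1) is representable."""
    max_index = 2**(bits - 1) - 1 if signed else 2**bits - 1
    return num_elements - 1 <= max_index

GiB = 2**30
print(fits(int(2.0 * GiB), 32))                 # True:  2 GiB just fits I32
print(fits(int(2.1 * GiB), 32))                 # False: 2.1 GiB exceeds I32
print(fits(int(4.1 * GiB), 32, signed=False))   # False: 4.1 GiB exceeds U32
```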

JackDunaway
Trusted Enthusiast

 


@GregSands wrote:

And while 2GB can be addressed with I32, 2.1GB can't, and 4.1GB can't with U32.


 

Yep, that's what I was thinking of while crunching those outlandish numbers - it's not the reeeaaaally big numbers that matter, it's the numbers that just exceed the current threshold. Although reasonably attainable with today's hardware, those memory requirements are probably limited to a tiny sliver of the market. I'm not in that sliver, and even though I have a tendency to support forward-thinking ideas, this is just too far in the future to be viable yet. (Take that market-driven opinion for what it's worth.)

 

***EDIT: If I re-read this comment in 10 years, I will 😄 at myself ***

Manzolli
Active Participant

How long can LabVIEW live with the I32 limit? Any predictions? Hard drives with more than a terabyte are common nowadays. The first PCs I worked with had 5 to 10 MB of hard drive space and 512 to 640 KB of RAM. Today I'm working with a 2-year-old PC with 600 GB of HD and 8 GB of RAM. How long will it take to have PCs with a TB of RAM?

André Manzolli

Mechanical Engineer
Certified LabVIEW Developer - CLD
LabVIEW Champion
Curitiba - PR - Brazil
JackDunaway
Trusted Enthusiast

Hardware capacity/capability aside, when's the last time you thought to yourself, "Oh no, this application is hosed 'cause I can't index an array with 2 billion elements..." The hardware will probably get there before the applications get there... GRANTED, I'm sure there's a market for this capability currently, I'm just not in that market.

 

(This time, I don't need to edit this post to include my postscript: "If I re-read this comment in 10 years, I will 😄 at myself" )

GregSands
Active Participant

Interestingly, some of the file routines (e.g. Set File Position) have already moved to I64, but others (e.g. Read from Binary File) have not. It would make sense to standardize everything on I64 - do it all at once, everywhere.  I do now think I64 is more sensible than U64, thanks Jack, and to my mind, it's inevitable, so probably doesn't even need suggesting as an idea!

 

It was probably 10 years ago that 1GB drives were standard, which would suggest about the same time again until 1TB of memory is commonplace.  Some of my work already uses >1GB arrays, though usually as a 3D array rather than 1D, which reduces the indices considerably.  But the 3D images I work with can extend from 1GB to >10GB, so if I had a machine that could handle it, I'd be working with (3D) arrays in that range already - at the moment I just break them into smaller sections.

 

JackDunaway
Trusted Enthusiast

Now, are we talking about total memory footprint, or number of elements to index? The footprint can be "large" if there are a "few" elements that are "large". The only scenario where 64 bits is needed to index is when the number of elements is greater than 2 billion, not simply when the array footprint is large.
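That distinction, element count versus byte footprint, can be made concrete; a small sketch (Python for illustration; `needs_64bit_index` is a hypothetical helper):

```python
# A "large" array doesn't necessarily need 64-bit indices: what matters
# is the element COUNT, not the byte footprint.
# `needs_64bit_index` is a hypothetical helper for illustration.
def needs_64bit_index(num_elements):
    return num_elements > 2**31 - 1

# 1 billion DBL elements: an 8 GB footprint, yet the count fits in I32.
print(needs_64bit_index(1_000_000_000))   # False
# 3 billion U8 elements: only a 3 GB footprint, but the count overflows I32.
print(needs_64bit_index(3_000_000_000))   # True
```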

 

Yes, file position pointers are the only notable mainstream switch to 64-bit I can think of, and that is for good reason... the number of bytes in a file can easily exceed 2 billion in mainstream applications.

 

And by the way, virtual Kudos to this Idea. I agree with the principle, it's just too early to give Kudos. The only way I can "prioritize" my R&D wishlist on this Idea Exchange is either by 1. Giving Kudos, or 2. Not Giving Kudos. For many Ideas, I have Not Given Kudos, not because they're bad Ideas, but just because they're not as high priority on my wishlist. (I would reckon many people have this same sentiment about this Idea and others...) I guess this falls into a third category: "Not Given Kudos Yet".

Manzolli
Active Participant

Same as Jack (may I call you just Jack?): some ideas I don't think are essential, for some I don't use the functionality, and for some only the future will tell whether they are good or not. Again, from the past: in the 80's I worked with a computer that had only a 5¼" floppy drive holding 360 KB (double-sided, double-density). On that floppy we used to have the OS, drivers, text, a couple of applications (text editor, spreadsheet, etc.) and our files! OK, quality was much worse: no GUI, no images, simple graphics... I bring this up because we should aim at the future, like GregS said, while trying not to forget what we really need now and in the near future. One example of big index use is a pointer into a huge file (a long high-resolution video) that is accessed in binary form. Maybe that's why they already changed the index of some file functions to I64. I'm working on a project with cRIO that acquires data from 14 channels @ 51.2 kS/s, which can generate a data file of more than 1 GB in 4.5 minutes. We are not sure when it will be, but we know it will happen for sure.

ASInc
Member

This will be a necessary step eventually, but currently, I really don't see it being an issue.  Any 32bit OS (most of them still) can only address MAX 3GB of RAM.  Even if your array is made of U8's, you'll nearly run this out before you hit 2Billion elements.  Even on a 64Bit OS, with 8GB of available RAM (about the biggest you find even somewhat commonly today), a 1D array of SGL will use all the available ram before hitting the end of an I32 index.