Has anyone else seen issues when trying to allocate more than 1 GB of memory for a buffer?

Would DIAdem help with this? I haven't used it yet. This sounds like a larger system than an automotive crash test simulator.
Message 21 of 33
You may not have to work with C, at least not directly. Try using HDF5 for your memory data instead of LV2 style globals. HDF5 is normally used as a file API, but it can also be used as a RAM disk. I tried this a couple of years ago and it worked well, but was slower than the LV2 global approach, so I never really used it (and I can't find my code...). However, since the HDF5 DLL is handling the memory, you may be able to get around the 1GByte “limit” of LV (HDF5 is natively 64-bit).

You can find all the information you want on HDF5, and a lot more besides, at the HDF5 homepage: http://hdf.ncsa.uiuc.edu/HDF5/

You can find a LabVIEW interface to HDF5 in the knowledgebase article Can I Edit and Create Hierarchical Data Format (HDF5) files in LabVIEW?. Ignore the high-level abstraction layer and just use the raw calls to the API. When you open the "file", use a file access property with the access method set to core. This will make more sense once you have looked at the HDF5 API (or rather, warped your brain to it; it is very powerful and very general, if user unfriendly, but I consider it one of the best tools I know). You may have to write the function that sets the access method to core, but that should be fairly simple with the rest as an example.
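For readers coming from a text language, here is a rough Python sketch of the same idea using the h5py binding (my assumption; the knowledgebase interface above is LabVIEW, not Python). The driver='core' option plays the role of the file-access property with the access method set to core (H5Pset_fapl_core in the C API):

```python
import numpy as np
import h5py  # assumes the h5py binding is installed

# Open an HDF5 "file" that lives entirely in RAM. backing_store=False
# means nothing is ever flushed to disk; the HDF5 library, not the
# host language, manages the memory for the datasets inside.
with h5py.File('scratch.h5', 'w', driver='core', backing_store=False) as f:
    # One dataset shaped like the 448 x 30000 DBL buffer discussed here
    d = f.create_dataset('channels', shape=(448, 30000), dtype='f8')
    d[0, :] = np.arange(30000, dtype='f8')   # write one channel
    print(d[0, :5])                          # read a section back
```

Since the HDF5 DLL owns the allocation, the host language never has to hold the whole buffer as one contiguous block.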

Good luck.
Message 22 of 33
Thank you for the update, DF!


I will include an investigation of the HDF5 DLL in what I need to do. My testing indicates that I have plenty of CPU left over, so this may be a viable option!

Ben


See this link for a related sea story.

http://forums.ni.com/ni/board/message?board.id=BreakPoint&message.id=9

Message Edited by Ben on 05-13-2005 08:09 AM

Retired Senior Automation Systems Architect with Data Science Automation LabVIEW Champion Knight of NI and Prepper LinkedIn Profile YouTube Channel
Message 23 of 33
Don't give up on the tag engine string yet. I used the ZLIB deflate VI to reduce a 448 x 30000 DBL array converted to a string. Before compression the string size was 107,520,008 bytes; after ZLIB at level 9 it was 104,529 bytes.
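To see why the ratio was so dramatic: deflate collapses runs of repeated values almost completely. A small stdlib-only Python sketch of the same experiment (scaled down from the 448 x 30000 DBL buffer, and standing in for the ZLIB VI, which this Python code is not):

```python
import array
import zlib

# Hypothetical stand-in for the flattened DBL buffer: highly
# repetitive doubles, which is why 107,520,008 bytes shrank
# to ~104 KB at level 9 in the post above.
channels, samples = 8, 1000                    # scaled down for illustration
data = array.array('d', [1.0] * (channels * samples))

raw = data.tobytes()                           # "array converted to string"
packed = zlib.compress(raw, 9)                 # deflate at level 9

print(len(raw), len(packed))                   # packed is far smaller
```

Real measurement data compresses less than constant test data, but repeated values (as mentioned later in this thread) still deflate very well.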
Message 24 of 33
Hi Uncle!

May "the Big guy" smile on you for your interest and assistance!

I will include investigation of ZLIB as one of the avenues I will explore.

Since a large amount of the data I am after will be repeating values, and the GUI response does not have to be instantaneous, compression may be a viable option.

I usually try to lay out a complete plan for my projects that includes the final destination and the path to get there.

Since there appear to be few travelers who have approached the edge of the world (i.e., 1 GB), I am going to have to include a number of possible routes.

It will probably be about 1-2 months until I have to confront this journey, so if other ideas hit you, please share.

If I make it to the "new world" I'll send a post card to help others.

Ben
Message 25 of 33
A couple more things to share...
  1. The 1 GB limit that has been mentioned several times in this thread is fuzzy and does not represent the total amount of memory you can get with LabVIEW. The real problem is trying to allocate the memory as one array, which results in a request to the OS for a single contiguous memory space; that is why the exact point at which you get the out-of-memory error is fuzzy. So I would suggest breaking the data into a bunch of 1D arrays in this case. I would probably implement it as 400+ instances of the reference data set mentioned in the large data tutorial. You could then fairly easily create and access the data using the one additional level of indirection.

  2. If you go the HDF5 route, it includes native data compression, so you can get even more data into memory without having to play with the zlib VIs (HDF5 1.4.x native compression is zlib). It is probably a toss-up whether it is more complex to go the HDF5 route or to build compression into a system like the one I mentioned in point 1 above.
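For comparison, a hedged sketch of that native compression through the h5py binding (h5py and the dataset name are my assumptions; the thread itself uses the LabVIEW interface). HDF5's zlib filter is requested per dataset and requires chunked storage, so each chunk is deflated inside the library:

```python
import numpy as np
import h5py  # assumes the h5py binding is installed

with h5py.File('packed.h5', 'w') as f:
    f.create_dataset('channels',
                     data=np.zeros((448, 30000)),  # repetitive test data
                     chunks=(1, 30000),            # one chunk per channel
                     compression='gzip',           # HDF5's zlib filter
                     compression_opts=9)           # deflate level 9
```

With one chunk per channel, reading a single channel back only decompresses that channel's chunk, not the whole buffer.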
Message 26 of 33
Hi All,

I appreciate the additional comments and suggestions!

I am going to have to stop work on this project while we re-group and acquire additional funding to take this trip (the life of an hourly consultant).

When the "light goes green" I will first investigate the approach of using smaller arrays. The updates I received from NI support indicate that the single contiguous memory blocks required to support my 2D arrays are what is getting in my way (this confirms your suggestions, DF).

Although this introduces an artificial division of the data to accommodate the contiguous memory requirements, it does appear to be the ticket I need to travel to +1-Gig-land.

I have no idea when the funding will be approved. I will update this thread as soon as I have the results.

Ben
Message 27 of 33
DFGray,

We have also been wondering about this contiguous block of memory, since allocating one large block does not pose a problem in C. Do you have any idea whether C code has more intelligence and is able to take several fragmented blocks of memory and treat them as one contiguous block?

Also, if you take lots of 1D arrays to fragment the problem, how are you going to treat all these arrays in LabVIEW? It seems like a source of the ultimate 'spaghetti' code. Would you cluster them or something like that?
Message 28 of 33
RE: Spaghetti code.

That is one of the reasons breaking buffers down along artificial lines is not a natural move. My customer is going to request the "plug-ins" they need for their test. They are not going to want to load 12 plug-ins just to support what they think of as a single widget. In order to keep this project modular but still avoid the memory issues, I will have to take the single request and in turn load a dozen plug-ins behind the scenes.

In my case I will probably have to introduce look-up tables to help in the selection of the correct data structures. A reader who is not aware of the memory challenge would take a look at the code and think it is overly complicated. I'll just have to drop disclaimers all over the code to make that clear.

I do think it is time for LV to start gearing up to handle the +1-Gig world in a more elegant fashion. I figure it is only about 2-3 years before more people will be pushing this limit.

The current word is that I may be able to return to this project in about 3 weeks. Until then I am on hold.

Ben
Message 29 of 33
Raistlin, I have no idea whether C is "smarter" than LV about allocating an array. Since LV is written in C/C++, that would seem odd. I think it is more likely that LV specifically allocates contiguous memory for performance, stability, or some other reason. However, since I do not write the internals of LV, I really don't know. Plus, I haven't developed anything in C for over 10 years, so my intimate knowledge of it is rather limited at this point.

As for breaking up the data into individual buffers, I don't think this would lead to spaghetti code if done correctly. Think of the 2D array as an array of 1D arrays, one for each data channel. I would create a LV2 global data repository VI template to hold a single channel (see the large data tutorial for details), then use a loop to create as many instances of it as needed (448? in this case). The end result would be an array of VI references, which can be thought of as an array of pointers to arrays. Extracting a single channel or a portion of a channel is easy: just reference the VI and extract the section. Functions would need to be created to extract 2D sections, but those would be very straightforward, similar to what we had to do before LV6i introduced all the nice array functions. The only downside is the VI overhead on each array; the final size of each array would govern whether or not this is acceptable.
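As a text-language analogue of that scheme (Python here, with each stdlib array standing in for one LV2-global instance; all names are illustrative, not the actual template from the tutorial), one buffer per channel plus one level of indirection:

```python
import array

CHANNELS, SAMPLES = 448, 30000

# One independent 1D buffer per channel: 448 separate ~240 KB
# allocations instead of one contiguous ~107 MB block.
channels = [array.array('d', bytes(8 * SAMPLES)) for _ in range(CHANNELS)]

def get_section(chan_lo, chan_hi, s_lo, s_hi):
    """Extract a 2D section as a list of per-channel slices,
    analogous to the 2D-extraction functions described above."""
    return [channels[c][s_lo:s_hi] for c in range(chan_lo, chan_hi)]

section = get_section(0, 4, 100, 110)   # 4 channels x 10 samples
print(len(section), len(section[0]))
```

The indirection cost is one list lookup per channel; the payoff is that no single allocation has to be larger than one channel.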
Message 30 of 33