01-09-2009 06:04 AM
I agree about posting the code that is giving you trouble.
Since I will not be available today (yet another day behind the barbed wire), let me say a couple of things.
Avoid strings when possible since they gobble up memory. Convert to numerics as soon as possible.
Since you are touching the same data repeatedly and your data sets sound large, you may be headed toward a special AE to do your work. Short war story: a customer wanted to do cyclic voltammetry at high rates. Initial development used queues to pass data from DAQ to display and logging. This was fine. When we added post-processing and analysis, they ran out of memory after long runs at high rates (the customer did not anticipate LV would be able to run that fast, but loved it when he found out). Analysis showed that his data sets required a good portion of the available physical memory. So we turned things inside out and replaced the queues with an action engine that had an action for each processing step (including saving to file). Using that approach we kept all of our data in one buffer and only pulled out the pieces we needed. Last I heard he was patenting the system.
So I think you will end up with an AE in the end.
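For anyone who hasn't built an action engine before, the gist maps roughly to something like this (a Python sketch, not LabVIEW; the buffer shape and action names are just placeholders):

```python
import numpy as np

class DataEngine:
    """Rough text-language analogue of a LabVIEW action engine:
    one buffer lives inside the engine, and every 'action' works on it
    in place instead of passing copies around through queues."""

    def __init__(self, channels, total_samples):
        # Allocate the full buffer once, up front.
        self._buf = np.zeros((channels, total_samples))
        self._written = 0

    def append(self, chunk):
        # Write newly acquired data into the existing buffer.
        n = chunk.shape[1]
        self._buf[:, self._written:self._written + n] = chunk
        self._written += n

    def analyze(self, channel, start, stop):
        # Pull out only the piece we need; slicing gives a view, not a copy.
        return self._buf[channel, start:stop].mean()

    def save(self, path):
        # Log straight from the single buffer.
        np.save(path, self._buf[:, :self._written])
```

The point is the same as in the war story: the data lives in exactly one place, and each processing step reaches into it rather than shipping a fresh copy around.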
Have fun and "may all of your wires be straight and your buffer re-used"
Ben
01-09-2009 07:40 AM
For the queue approach, you actually can chunk your data very efficiently by using an array of queues. Instead of a 2D array of 10 x 10,000 data elements you could use 10 single-element queues, each holding a 1D array of 10,000 elements. Each time you access one of those 10 sets, any buffer allocations/data copies only involve the 10,000 elements of that set, not all 100,000 elements.
This technique doesn't conflict with the AE approach either; it might fit in nicely.
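In text form the idea looks roughly like this (a Python sketch standing in for LabVIEW single-element queues; the sizes just mirror the example above):

```python
import queue
import numpy as np

NUM_SETS = 10      # number of independent data sets
SET_SIZE = 10_000  # elements per set

# One single-element queue per data set, instead of one big 2D array.
sets = [queue.Queue(maxsize=1) for _ in range(NUM_SETS)]
for q in sets:
    q.put(np.zeros(SET_SIZE))

def replace_block(set_index, offset, new_values):
    # Check out just this one set, modify it in place, check it back in.
    # Only this 10,000-element array is touched, never all 100,000 elements.
    data = sets[set_index].get()
    data[offset:offset + len(new_values)] = new_values
    sets[set_index].put(data)
```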
Felix
01-09-2009 11:50 AM
Thanks nathand, I didn't realize that I could have multiple items in a queue; I have never actually used them and am just beginning to learn about them now. That is useful, and I will keep it in mind as I determine whether to go with a FG or a queue.
I would post my code, but I feel it would be very difficult for anyone else to go through; it is spread across many VIs and is not easy to follow.
Thanks for the input, Ben. The data is accessed a few times throughout the code. I have already started developing the FG for this data and think I am going to go this route for now and see where it gets me. I would like to take advantage of this opportunity to learn how to create and use FGs, even if I don't end up sticking with them in the end.
Thanks for that idea, Felix, I hadn't thought of that either. I will consider implementing this method as well.
Thanks to everybody who contributed and helped me out with some ideas. I think you have given me enough knowledge to get started on these modifications to the code and try to improve memory performance.
Wow, this has really been a great experience... I don't know how I have been writing LabVIEW code without this message board over the past year!! There is so much help and so many knowledgeable people here. When I first considered posting some questions here I didn't anticipate this much response at all, and especially not the speed of the responses; I had answers within an hour (thanks to Ben)!!! To me, that is unbelievable. I will definitely be back soon. Thanks again, guys.
01-09-2009 01:47 PM
I told you I'd be back... I am wondering what the most efficient way to build the data in a FG is. If I know the final size of the array, should I initialize it to that size at the beginning and then use the Replace Array Subset function to chunk in my data? Or should I grow the array within the FG as I chunk in the data? I am currently using the Reshape Array function to initialize the array because it seems not to create a buffer, unlike the Initialize Array function. Here is my initialization case:
Thanks for any help.
01-09-2009 04:11 PM
It will be more efficient to allocate the array once at its full size. I would recommend that you use Reshape Array on the first chunk of data, then use Replace Array Subset for the following chunks. Initialize Array may allocate a buffer unexpectedly when the array sizes are constants; is that what you're seeing? See here for more details.
amaglio wrote: I told you I'd be back... I am wondering what the most efficient way to build the data in a FG is. If I know the final size of the array, should I initialize it to that size at the beginning and then use the Replace Array Subset function to chunk in my data? Or should I grow the array within the FG as I chunk in the data? I am currently using the Reshape Array function to initialize the array because it seems not to create a buffer, unlike the Initialize Array function. Here is my initialization case:
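As a text-language analogue of the difference (Python/NumPy standing in for the LabVIEW array functions; the sizes are made up), preallocating and replacing looks like this:

```python
import numpy as np

TOTAL = 1_000_000  # final array size, known up front
CHUNK = 10_000     # size of each acquired block

# Allocate the full-size buffer once, then drop each chunk into place
# (the analogue of initializing once and then using Replace Array Subset).
data = np.empty(TOTAL)
for start in range(0, TOTAL, CHUNK):
    chunk = np.random.rand(CHUNK)      # stand-in for a chunk of acquired data
    data[start:start + CHUNK] = chunk  # in-place write, no reallocation

# Growing the array each pass instead (the analogue of Build Array in a loop)
# forces a reallocation and a full copy on every iteration:
#     data = np.concatenate([data, chunk])
```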
01-12-2009 01:23 PM
nathand wrote: It will be more efficient to allocate the array once at its full size. I would recommend that you use Reshape Array on the first chunk of data, then use Replace Array Subset for the following chunks. Initialize Array may allocate a buffer unexpectedly when the array sizes are constants; is that what you're seeing? See here for more details.
Thanks for your help, nathand. Yes, that is exactly what I was seeing.
01-12-2009 02:26 PM
If you're not seeing a buffer allocation then one isn't taking place. Have you tried turning on the option (in preferences) to show constant folding? If you're using a simple test VI you may find that the build array is folded into a single constant array, so no buffer allocation is needed - but your real life case is probably more complex. I think you'll get the same results using either build array or reshape array, and both of them will cause a buffer allocation, according to this thread on LAVA (there's a question about reshape array towards the bottom of the first page, and I expect that build array is the same).