
Not Enough Memory, Can I Free Buffer/RAM?

Hello, I am trying to capture an FM transmission and save it to a .tdms file at a rate of at least 5 MS/s for 60 seconds, and I am running into an error stating that there is not enough memory in my system. When I run sessions of 10 seconds or longer, the program takes an extra 20 seconds or more to finish, which leads me to believe that the code is accumulating samples in a buffer on my computer and then writing that (ever-growing) buffer to the file every loop.

 

What I want is for each loop iteration to append the current samples to the file and then flush the buffer, so that when the entire process is complete the .tdms file holds all ~300M+ samples, the buffer holds 0 or 1 samples, and the program ends right after the 60 seconds of recording finish.

 

A screenshot of my .gvi and the error is attached.

Thank you

Message 1 of 7

First, you should have tagged your message as LabVIEW NXG, and not LabVIEW.

 

Is there a reason you are using NXG and not the current generation of LabVIEW? One problem you'll find is that the heavy hitters on the forum, such as myself, have never used NXG or have only dabbled in it. So looking at the code is rather hard: many of the new icons are unfamiliar, the pastel color scheme is hard on the eyes, and the visual cues as to how things are coded just aren't clear. For example, the tunnels across structure boundaries look different than in LabVIEW, so it is hard to tell whether they are normal tunnels, auto-indexing tunnels, or concatenating auto-indexing tunnels. The latter two would be an obvious source of memory errors.

 

I don't understand why you are closing the TDMS file inside the second while loop. Also, why do your first while loop and the one nested inside it exist at all? They only execute once.

 

If you are accumulating all of those samples in memory, you will definitely have problems. 300 million samples will eat up 2.4 GB, assuming the data type is double precision, where each value takes 8 bytes. If you can change it to single precision, each value would only take 4 bytes and the memory use would be cut in half. But there is no way to "free buffer/RAM". Once you get that error dialog box (and even the wording of the text in it is strange to me), your code is basically stuck and unrecoverable.
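For reference, the arithmetic (a quick Python sketch; the 8-byte figure assumes plain scalar doubles, so complex doubles would double it again):

    # rough memory footprint of holding the whole capture in RAM
    samples = 5_000_000 * 60          # 5 MS/s for 60 s = 300 million samples
    print(samples * 8 / 1e9, "GB as double precision")   # -> 2.4 GB
    print(samples * 4 / 1e9, "GB as single precision")   # -> 1.2 GB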

 

You need to figure out how to keep that buffer from growing. You should be looking at a producer/consumer architecture, where the acquisition is done in the producer loop and the data is passed via a queue to the consumer loop, where it is written to the TDMS file.
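Since G code can't be pasted as text, here is a minimal sketch of that pattern in Python, with queue.Queue standing in for a LabVIEW queue and a plain binary file standing in for the TDMS write; the block source is just a placeholder, not your USRP fetch:

    import os, queue, threading

    q = queue.Queue()
    STOP = None                                # sentinel that tells the consumer to finish

    def acquire_block():
        return os.urandom(4096)                # placeholder for the USRP fetch

    def producer():
        # acquisition loop: fetch, enqueue, repeat -- never touches the disk
        for _ in range(100):
            q.put(acquire_block())
        q.put(STOP)

    def consumer():
        # logging loop: open the file once, write every block, close once at the end
        with open("capture.bin", "wb") as f:
            while True:
                block = q.get()
                if block is STOP:
                    break
                f.write(block)

    threading.Thread(target=producer).start()
    consumer()

The key point is that the acquisition loop never waits on the disk; the queue absorbs the jitter, and the file is opened and closed exactly once.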

Message 2 of 7

1. I would move the graph to be inside of the loop.

2. Use a Producer/Consumer setup so that your logging happens in parallel with your acquisition.

3. Do not close the file with each iteration.  Only close the file when you are completely done writing to it (sketched in text form below).
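To spell out point 3 (a Python stand-in, not actual TDMS calls): closing inside the loop means reopening and flushing on every iteration, while opening once keeps the loop doing nothing but writes.

    blocks = [bytes(1 << 20)] * 10             # stand-in for acquired data blocks

    # slow: the file is opened and closed on every iteration
    for block in blocks:
        with open("slow.bin", "ab") as f:
            f.write(block)

    # better: open once, write every iteration, close once when done
    with open("fast.bin", "wb") as f:
        for block in blocks:
            f.write(block)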


Message 3 of 7

I think you need to play whack-a-buffer-dot. In LabVIEW (not NXG), I know there is a tool that shows buffer allocations in the form of dots; you want to eliminate as many dots as possible because they may indicate unnecessary data copies. Here is a guess at your dots:

  1. Your incoming data is complex - 1 dot
  2. You split it into Re and Im (I & Q) - 2 dots
  3. You take your I & Q and convert them to a sine waveform? - 2 dots
  4. It looks like you then interleave your sine waveforms - 1 dot

You need to figure out what can be done in place and what can't, as it looks like you have a lot of buffer allocations.
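To put rough numbers on why the dots matter (assuming complex doubles at 16 bytes per sample and the 5 MS/s, 60-second capture from the original post):

    rate, seconds = 5_000_000, 60
    bytes_per_sample = 16                       # complex double = 8-byte real + 8-byte imaginary
    one_copy = rate * seconds * bytes_per_sample
    print(one_copy / 1e9, "GB per full in-memory copy")        # ~4.8 GB
    print(6 * one_copy / 1e9, "GB if all ~6 dots hold a copy") # ~28.8 GB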

 

mcduff

Message 4 of 7

First of all, array indices are I32 and converting an array size to I64 does not magically give you more than that. Use reasonable representations!

It is difficult to tell what's happening in all your code (unnecessary loops, backwards wires, overlapping wires, etc.), and I don't have access to NXG, a USRP, or your hardware to play around with.

 

LabVIEW is very good at reusing buffers, and you definitely don't want to clear anything just to have it allocate that same buffer again a nanosecond later. What else is running on that computer? Maybe the disk cache is filling up because the drive cannot keep up with that massive flood of data. Hard to tell. Is this an SSD or a conventional HD? Do you use BitLocker with software encryption? Does everything work well if you don't stream to disk? How big is a "sample"? Maybe it would be more efficient to stream the raw data to a flat binary file and do all the processing later.
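That last suggestion, sketched in Python (the chunk size and file name are placeholders, and the random bytes stand in for raw device data):

    import os

    CHUNK = 1 << 20                             # write ~1 MiB at a time
    with open("raw_capture.bin", "wb") as f:
        for _ in range(60):                     # e.g. one chunk per fetch
            f.write(os.urandom(CHUNK))          # placeholder for the raw samples
    # demodulate / scale / convert to TDMS offline, after the capture ends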

Message 5 of 7

Sorry for all the issues.  I am using LabVIEW Communications, which is what I was presented with for this project.  It seems handicapped in many ways compared to standard LabVIEW, and I could find barely any information about LabVIEW Communications online, so I assumed it wasn't NXG, just to be safe.  I've updated my code to use a queue, so now for every data element I get, the code should offload the data to the file I create.

 

I have a loop at the beginning because I want everything to be ready for the USRP initiation, including choosing which file the data will be saved to, before the GPS detects that a new minute has passed.  That is when both USRPs should start collecting data.  This helps partially sync the USRP with another USRP connected to a separate PC.  If I do not have that large while loop at the beginning, the USRP initiates before the file can be created and opened, resulting in syncing errors.
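In text form, that gate looks roughly like this (a Python sketch; the PC clock stands in for the GPS time source):

    import time

    def wait_for_next_minute():
        # block until the minute value changes, i.e. a new minute starts
        start = time.localtime().tm_min
        while time.localtime().tm_min == start:
            time.sleep(0.05)                     # poll, don't spin the CPU

    # create and open the log file first, then:
    wait_for_next_minute()
    # ...both USRP sessions start collecting here...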

 

I am checking the CPU, memory, and network consumption with Task Manager as the code is running.  I am using an SSD, and I make sure that all the other programs combined are using no more than 5% of the CPU, memory, or network.  I disabled all encryption about two years ago after BitLocker started forgetting my password for hours at a time.

 

I have been collecting up to 150M samples at 5 MS/s over a period of 30 seconds, and the queue seems to collect the data quickly without taking 5 minutes to save it all afterwards (or producing the memory error).  I am now able to collect up to ~4 GB of data in a single .tdms file, and I can point the queue at another file if I want more than that.
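The file swap is essentially a size-based rollover; a rough Python sketch of the idea (the 4 GB threshold and file names are placeholders):

    MAX_BYTES = 4 * 10**9                        # start a new file near 4 GB

    written, part = 0, 0
    f = open(f"capture_{part:03d}.bin", "wb")
    for block in [bytes(1 << 20)] * 16:          # stand-in for dequeued blocks
        if written + len(block) > MAX_BYTES:
            f.close()
            part += 1
            written = 0
            f = open(f"capture_{part:03d}.bin", "wb")
        f.write(block)
        written += len(block)
    f.close()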

My new .gvi is below.  I'll post on the NXG forums as well.

 

I am not sure how to make the array indices I32.  The waveform coming in is by default a complex double, and the array is simply being created based on the data type of the complex signal.  Maybe there is an array using I64 that should be I32 that I'm just not seeing?

 

Edit:  I just made the queue consist of I32 elements instead of Double (64-bit).  Do you think that would help? (screenshot attached)

Message 6 of 7

Looking at your image and your descriptions, all I can say is that it's total nonsense, but maybe I am missing something.

 

A while loop that only spins once (both outer while loops!) is just a glorified sequence frame. Why so convoluted?

The array data is complex, but the array index is I32. I have no idea why you would think it is reasonable to make the queue I32. (In your original code, you took the file size and converted the I32 size to I64 for no obvious reason.) It makes absolutely no sense to compare for equality the number wired to N and the value of N read on the inside of the FOR loop. They are always guaranteed to be equal, right? Did you mean to wire the iteration terminal instead? That would still be pointless, because then the comparison would be guaranteed to be not equal, since the highest iteration index is one lower than N. Does your FOR loop ever spin more than once? How often does the teraterm.csv file change, and what changes it?
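In text-language terms, that comparison looks like this (a Python analogy, where i plays the role of the iteration terminal and n the count terminal):

    n = 5
    for i in range(n):
        print(n == n)    # count compared to itself: always True, so the test tells you nothing
        print(i == n)    # iteration terminal compared to the count: never True, i tops out at n - 1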

 

(In any case, you would probably get better help in the LABVIEW COMMUNICATIONS SYSTEM DESIGN SUITE forum. There is also a USRP SOFTWARE RADIO forum; I'm not sure which is more appropriate. This is not my field, so please decide. I can move this thread for you if you want.)

 

Message 7 of 7