High-Speed Digitizers


NI 5124, max points measured with 256MB memory

I want to know if there is a simple calculation I can do to find the maximum number of points I can measure with the 5124 scope.

In my application I want to measure a 520 ms burst of signal at once. The signal is downconverted from 406 MHz with the 5600, so it sits at a minimum frequency of 5 MHz. I want to save this burst completely and still be able to retrieve the following features offline:
- Frequency
- Power
- IQ
- Decode the message.

For this I need a high enough sample rate (Nyquist at minimum). So as an absolute minimum I need 520 ms of signal, but it would be nice to have at least 1 second.

Is there an elegant solution for this?

Regards

Joost van Heijenoort

Message 1 of 17

At the data rates you mention, the 5124 interface is fast enough to stream to memory or disk. I have successfully streamed to disk at speeds in excess of 20 MB/s (10 MHz, single channel); streaming to memory is faster. Your speed will depend on your computer hardware. I have attached an example of how to stream to disk. It requires the NI-HWS API, available on the driver CD that came with the 5124. You can modify the lower loop to store data in memory instead of on disk if you wish, but I am not sure I would recommend it, given the size of your data. The two-loop structure is used to get the full benefit of LabVIEW's multithreaded environment.
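The attached LabVIEW example cannot be reproduced inline here, but the same two-loop (producer/consumer) idea looks roughly like this as a Python sketch. fetch_chunk() is just a stand-in for the real NI-SCOPE fetch, and the output is a plain binary file rather than NI-HWS:

# Rough sketch of the two-loop (producer/consumer) streaming pattern described
# above. fetch_chunk() is a placeholder for the actual digitizer fetch; here it
# returns zeroed bytes so the sketch runs on its own.
import queue
import threading

CHUNK_SAMPLES = 500_000          # samples fetched per loop iteration
TOTAL_SAMPLES = 10_000_000       # e.g. 1 s of data at 10 MS/s
data_queue = queue.Queue(maxsize=8)

def fetch_chunk(offset, n):
    # Placeholder: a real implementation would fetch n samples starting at
    # 'offset' from the digitizer and return them as raw bytes.
    return bytes(n)

def acquire_loop():
    offset = 0
    while offset < TOTAL_SAMPLES:
        n = min(CHUNK_SAMPLES, TOTAL_SAMPLES - offset)
        data_queue.put(fetch_chunk(offset, n))   # blocks if the writer falls behind
        offset += n
    data_queue.put(None)                          # sentinel: acquisition done

def write_loop(path):
    with open(path, "wb") as f:
        while True:
            chunk = data_queue.get()
            if chunk is None:
                break
            f.write(chunk)

writer = threading.Thread(target=write_loop, args=("burst.bin",))
writer.start()
acquire_loop()
writer.join()

The queue is what lets the fetch and the disk write overlap, which is the whole point of the two-loop structure.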

One other point: if you haven't done so yet, you will want to read Managing Large Data Sets in LabVIEW. Your data set, for a single channel, is going to be close to half a gigabyte. Just one extra copy will probably result in an out-of-memory problem. This is why I recommend you stream to disk; you can store as much as you want that way, and it will be easily available for later use. Note that NI-HWS supports file sizes greater than 2 GB on all versions of LabVIEW it can be used with.
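For reference, the back-of-envelope arithmetic behind those numbers, assuming 2 bytes per transferred sample (the 5124 is a 12-bit digitizer, so samples normally move as 16-bit values) and the 5124's maximum rate of 200 MS/s:

# Rough sizing arithmetic, assuming 2 bytes per transferred sample.
bytes_per_sample = 2

onboard = 256 * 1024**2                              # 256 MB memory option
print(onboard // bytes_per_sample)                   # ~134 million points fit on board

print(200e6 * 1.0 * bytes_per_sample / 1024**2)      # 1 s at 200 MS/s -> ~381 MB
print(10e6 * 1.0 * bytes_per_sample / 1024**2)       # 1 s at 10 MS/s (Nyquist for 5 MHz) -> ~19 MB

At full rate, a 520 ms burst (~104 M samples) would just fit in the 256 MB option, but a full second would not; streaming to disk removes that limit entirely.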

Good luck. Let us know if you have any further problems.

Message 2 of 17
Thanks for the quick response.

But I think that in some cases 10 MHz sampling is not even enough. Maybe I can downconvert the data even further, to say 100 kHz, and then save that...

But thanks, I will surely try this first.

best regards

Joost
Message 3 of 17
I've got an application where I need to sample a signal at 175 MS/s. Since my PCI-5114 can only do either 125 MS/s or 250 MS/s, I'll be taking data at 250 MS/s for ~23 M samples max (around half that normally). Since the data lasts all of 100 ms max, I've got it set up so that it fetches an entire 8-bit data set and then saves that raw data to a binary file. Not that Vista's resource monitor is any good as a real-time indicator, but it looks like I'm getting anywhere from 15-27 MB/s to the disk. I'm pretty sure it's capable of more than that (it's a 4-disk RAID5 array), though it's not empty, so I can't let Sandra or h2benchw tell me exactly what it's capable of.

So here's my question (and sorry for unearthing a dead thread, but this is pretty closely related to the previous poster's problem): is this the best way of acquiring the data? Currently it takes almost a second for the file to be written, and I'd like to be able to acquire, fetch, and save to disk in 0.3-0.5 s if at all possible. I've looked at your example, DFGray, and am wondering about this stream-to-memory thing. I've got 16 GB of RAM, so I wouldn't have to worry about buffer overflow if I could get good continuous streaming from memory to disk (there's also about 0.5 s of stage motion between acquisitions).


Message 4 of 17
Given what you have said, I would expect you to get somewhere from 20 MB/s to 100 MB/s in disk writing speed. However, you are running a PCI-based computer, so you may be bottlenecked by the system bus. This will depend on where your RAID controller is situated. If your RAID controller is integrated into the motherboard chipset, you should be good. If it is a plug-in PCI card, there may be bandwidth issues. However, since you are not streaming, this should not be much of an issue. Some things that may be hurting you include:
  1. What chunk size are you writing into your binary file? The optimum is about 65,000 bytes. Anything more or less will slow you down, sometimes quite dramatically. If you are attempting to write the entire multi-megabyte chunk at once, this is an issue. Note that this optimum is for Win98/ME/2K/XP; I have never tried it on a Vista system, so play with this one yourself. Note also that NI-HWS double-buffers and does this for you automatically. It is also faster than the LabVIEW binary write, even when optimized. Use NI-HWS. (A rough sketch of the chunked-write idea follows after this list.)
  2. Are you creating a new file each time you write?  If you are, you may want to consider creating the file while acquiring your data.
  3. Are you using the same file, but closing and reopening it?  If so, this overhead, including the seek to the new position, will cause issues.  Leave the old file open and just stream to disk.
If you haven't read the tutorial on managing large data, it will give you some pointers.  Good luck.  Let us know if you need more help.  You should be able to do this.
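To make points 1-3 concrete, here is a minimal sketch in plain Python (not NI-HWS, which handles this internally): open the file once, ideally while the acquisition is still running, and push each record out in roughly 65,000-byte pieces through that same open handle instead of one giant write or a fresh file per shot.

# Minimal sketch of the advice above, in plain Python rather than NI-HWS.
CHUNK_BYTES = 65_000

def write_in_chunks(f, data):
    # Push a large bytes object through an already-open file in ~65,000-byte pieces.
    for start in range(0, len(data), CHUNK_BYTES):
        f.write(data[start:start + CHUNK_BYTES])

# Open/create the file before (or while) the record is acquired, then keep it
# open and stream every record through the same handle.
with open("record.bin", "wb") as f:
    record = bytes(23_000_000)      # stand-in for one fetched ~23 M sample, 8-bit record
    write_in_chunks(f, record)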
Message 5 of 17

Yeah, that write speed seems more like it. It might just be that Vista's resource monitor isn't capturing the peak write spike. But it still takes too long to write the data in my opinion. The data RAID array is through the motherboard and there's a three drive RAID0 array for swap that goes through a PCIe RAID card (for number crunching vast GBs of data files after it's all taken). But the swap shouldn't be involved in writing to the disk so I think I'm good there.

As for the chunk size I'm writing to the binary file, I'm just using LabVIEW's built-in Write to Binary File (left over from when I wrote it 4 years ago); there's no mention of an alterable chunk size. It definitely sounds like I'll need to check out the NI-HWS VIs. The one caveat is that all these files will later be opened in Matlab, and the current algorithm works with the way the data is saved via the Write to Binary File VI. Would the file saved by NI-HWS be any different (i.e., header information regarding the waveform), requiring a change to the Matlab code?

Creating the file while acquiring is an outstanding idea (one I should have thought of before, but I didn't have the system working to a point where I could have noticed the file save time was too long).

As for the large data set, the output from the fetch goes into a hidden control on the front panel (for viewing/zooming, etc.) and then directly into the save routine. Any requests for the data (viewing in a waveform chart, optional user saving, histograms, etc.) read directly from the local variable, so there shouldn't be too many wire branches making extra copies. In auto acquisition it skips the front panel control altogether and saves directly to a file.

I'll try out those suggestions tonight and report back on the speed increase, if any.

Message 6 of 17
Ahh, I was just re-reading "Managing Large Data Sets" again and saw the mention of streaming to disk in discrete chunks instead of all at once. So would this be implemented similarly to your streaming-to-disk application, where you fetch inside a for loop (65k points, if I read your advice correctly) and keep the file pointer at the end of the data using NI-HWS?
Message 7 of 17
So I've implemented the change and it seems to work a bit better. I don't know if there's a drastic speed increase, but the intervals between stage motion (the only indication that an acquisition is done in the auto acquire mode) seem more uniform and on the quicker side of where they were last night.

I opted against using the "Fetch Relative To" property node in your example (which is for continuous data acquisition) in favor of the "Fetch Offset" property node used in the 'Fetch in Chunks' NI-SCOPE example (which is more along the lines of a single triggered acquisition, like my situation). I also found that fetching relative to the read position wouldn't start the data stream at the normal pretrigger location, even though the position is supposed to be set to zero after each acquisition starts. I would presume I'd get the same waveform out using 'fetch relative to: read position' as I would using the scope soft front panel, but I don't.

A similar problem I have, now that I've messed with the "Fetch Relative To" property node, is that it seems like I have to make an extra acquisition before anything like trigger delay or number of samples to read is registered. In manual mode, I used "Fetch Relative To: Pretrigger" to get the section of data properly oriented in time, but it still only updates after an acquisition has been made. Example: I set up a 1 Msample acquisition at 50 MS/s with a trigger delay of zero. I acquire and see my pretty graph. There's an interesting feature at t = 8 ms, so I set my trigger delay to 8 ms, but I have to acquire twice in order to see my graph update with a waveform that starts near the interesting feature. Any idea why this is happening now?

Also, I've attached a screenshot of the acquisition loop. Let me know if this is approximately what you had in mind for fetching 65,000 bytes (or 65,000 data points with my I8 acquisition) at a time and streaming that to disk. Now that I look at it, it might make more sense to use another queue (like in your example) that passes the waveform ID and the chunk of data to a separate loop that is always waiting on data to write to disk. Also, when I save the files, I'd like to be able to load them and see the original voltage/timing information, and I'm somewhat at a loss as to how to include information like offset, gain, t0, and dt. Right now, even though I think it creates a weird header at the beginning of the file, I use the waveform attributes to keep this information. I ended up saving those four items in reasonably related waveform attributes, but if there's a better way you can think of, let me know.
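If the waveform-attribute header ever becomes a problem on the Matlab side, one illustrative alternative (not the NI-HWS format, just a sketch) is a small fixed-size header of doubles holding t0, dt, gain, and offset in front of the raw I8 samples, where a scaled sample is typically gain * code + offset. Matlab can then recover everything with two fread calls:

# Illustrative file layout (not NI-HWS): a 32-byte header of four little-endian
# doubles (t0, dt, gain, offset) followed by the raw int8 samples.
# In Matlab: hdr = fread(fid, 4, 'double'); data = fread(fid, inf, 'int8');
import struct

def save_record(path, raw_i8_bytes, t0, dt, gain, offset):
    # raw_i8_bytes: the record as raw signed 8-bit bytes from a binary fetch.
    with open(path, "wb") as f:
        f.write(struct.pack("<4d", t0, dt, gain, offset))
        f.write(raw_i8_bytes)

def load_record(path):
    with open(path, "rb") as f:
        t0, dt, gain, offset = struct.unpack("<4d", f.read(32))
        raw = f.read()
    return t0, dt, gain, offset, raw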
Message 8 of 17
Ah, I figured out the not-updating thing. That's all me. In my change to the code, I destroyed the data dependency of the subsequent "update graph" command (sent via queue), so it happened simultaneously with the fetch, not strictly after it. A little error cluster wire solved that problem.

I'd still like to know if the guts of the code (i.e., fetching 65,000 points and then using the fetch offset to grab the next set) look right, or if I'm still not quite grasping what it takes to stream captured data to disk as fast as possible.

Thanks,
Matt


Message 9 of 17
It looks OK to me, but you can get faster. 65,000 bytes is the optimum size for writing to disk, but it is not the optimum for fetching from the scope card. Try fetching somewhere between 300k and 1M points from the scope; the HWS VIs will automatically break this into 65,000-byte chunks for writing to disk, so you don't need to worry about that. I would also split the acquisition and disk write into separate loops, as this will give you an additional performance advantage (about 10%?).
Message 10 of 17