11-12-2015 01:28 AM - edited 11-12-2015 01:29 AM
Hello, all. I am currently working on a project that interfaces an NI 2901 USRP with LabVIEW Communications Suite 1.1. This is a Doppler radar project that consists of generating a chirp signal, transmitting it continuously, and reading samples of the resulting reflected signal. My problem is that whenever my IQ sampling rate is higher than 1 MHz, I receive an "Error -1074118647 occurred at niUSRP Fetch Rx Data (CDB WDT).gvi" message. I also receive the same message for Tx fetch data. This is a problem because I need to generate a chirp signal that ramps from 0 to 100 kHz in 1 microsecond, which I do not think would be feasible with just a 1 MHz sampling rate. If I recall correctly, this USRP should support sampling rates in the tens of megahertz, which I need for my application. I should also mention that this USRP is connected to my PC via USB. The PC that I am working on is an i5 3.20 GHz quad core with 8.00 GB of RAM.
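To put numbers on the sampling-rate concern, here is a minimal NumPy sketch (the `linear_chirp` helper is invented for illustration and is not the MathScript code in the attached VI). It shows that a 1 MHz IQ rate yields a single sample for a 1 microsecond chirp, while tens of megasamples per second at least gives the waveform some shape:

```python
import numpy as np

def linear_chirp(f0, f1, duration, fs):
    """Baseband linear chirp from f0 to f1 Hz over `duration` seconds at fs samples/s."""
    n = int(round(duration * fs))            # number of IQ samples in the burst
    t = np.arange(n) / fs
    k = (f1 - f0) / duration                 # chirp rate in Hz/s
    phase = 2.0 * np.pi * (f0 * t + 0.5 * k * t ** 2)
    return np.exp(1j * phase)                # complex IQ samples

# 0 -> 100 kHz in 1 us:
print(len(linear_chirp(0.0, 100e3, 1e-6, 1e6)))   # 1 sample at 1 MS/s
print(len(linear_chirp(0.0, 100e3, 1e-6, 20e6)))  # 20 samples at 20 MS/s
```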
I understand that this is a common issue and have researched possible solutions. Unfortunately, none of the "solutions" alleviated my problem. If any of you can provide a solution, or can point me to a related previously solved post, I would be very grateful.
Attached is the VI that I am using for my project, along with a screenshot of the resulting error message. I appreciate any help.
11-12-2015 02:21 PM - edited 11-12-2015 02:23 PM
Hey GenericAccount,
That sounds like a really interesting project! What solutions have you tried so far?
Basically, the problem is that data is not being pulled off the buffer fast enough. In general, larger computer memory, lower sample rates, and larger fetches will extend the time until you encounter an overflow error. If you increase the number of samples per fetch, you may be able to extend the time you can Tx/Rx without this error, or avoid it altogether.
To quote from a post on the USRP forums about a similar situation (several related threads on that forum for this error code):
"The IQ rate and number of samples per fetch are related, but not dependent on each other. The IQ rate specifies the number of samples per second to acquire (or transmit) and is typically determined by your RF application. The number of samples per fetch is typically something that is more determined by your computer. The USRP receives samples continuously. These samples are written into a small buffer. The driver then pulls the data out of the buffer in chunks, i.e., a certain number of samples every time a fetch is performed. It is important to set the number of samples per fetch to something reasonable because a fetch takes a little bit of time. If you aren't taking enough samples out of the buffer every fetch, then the buffer will overflow and you get an error.
If you are sampling at a higher IQ rate, that means more samples are being put into the buffer every second than there would be with a slower IQ rate. Because of this, the buffer will overflow faster at higher IQ rates if you are not pulling enough samples out of the buffer per fetch.
Typically after the fetch, your application is doing some type of processing. Once complete, the while loop iterates to perform another fetch. If the processing takes too long, your fetches or writes will overflow.
"For example, an FFT after a fetch will likely keep up at low IQ rates, but the FFT becomes the bottleneck at high IQ rates."
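The fill-versus-drain relationship described in that quote reduces to simple arithmetic. Here is a toy Python model of it (the one-million-sample buffer depth is an arbitrary assumption for illustration, not the actual USRP driver buffer size):

```python
def time_to_overflow(iq_rate, samples_per_fetch, loop_period_s, buffer_depth):
    """Seconds until the receive buffer overflows, or None if fetches keep pace."""
    fill_rate = iq_rate                               # samples/s written by hardware
    drain_rate = samples_per_fetch / loop_period_s    # samples/s pulled by fetches
    net = fill_rate - drain_rate
    if net <= 0:
        return None                                   # draining at least as fast as filling
    return buffer_depth / net

# 10 MS/s with 1000-sample fetches in a 1 ms loop: overflow in a fraction of a second.
print(time_to_overflow(10e6, 1_000, 1e-3, 1_000_000))   # ~0.11 s
# Raising the fetch size to 20,000 samples per fetch keeps pace indefinitely.
print(time_to_overflow(10e6, 20_000, 1e-3, 1_000_000))  # None
```

The model also shows why extra processing in the fetch loop hurts: it lengthens `loop_period_s`, which lowers the drain rate even though the fetch size is unchanged.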
My suggestion would be to first try removing your file writes (temporarily for troubleshooting). This will help narrow down where the bottleneck may be. Next, try increasing the "Number of Samples Rx". If it's easier, you could start off with the unmodified TX and RX Continuous Async examples running side-by-side for benchmarking.
As an aside, I also noticed the array you are writing (Tx) is only about 101 elements. I'm not familiar with your algorithm in the MathScript Node, but you may see underflow errors using such small sets at higher IQ rates. You may want to consider increasing the array size if you come across this after resolving the overflow issue.
11-14-2015 12:53 AM
Hello, Decon. Thank you for lending a hand. It really is an interesting project, and I hope to take it further with the USRP. Thus far, the solutions I have tried involve playing around with the sampling rate and the number of samples. I successfully took 1000 samples at a 10 MHz sampling rate with the Tx Continuous Async example from LabVIEW. If I increase this rate, I get the error again. The strange thing is that if I then go back to 10 MHz, the error will appear again. It is as if a fault lingers after something goes wrong, or there is an inconsistency in the USRP operation.
I tried removing the write operations within the Rx loop and increased the sampling rate to 2 MHz. It worked, but when I ran it again it crashed. Another test I did was to set the sampling rate to 10 MHz and gradually increase the number of samples. Unfortunately, I still had underflow errors even at insanely high sample sizes of 300-500 thousand.
When you mention "pulling enough samples out of the buffer per fetch", do you simply mean the number of samples that I am specifying in my LabVIEW project? Or are you referring to some sort of driver or USRP setup? Also, I have heard reference to a FastSendDatagramThreshold registry key value, but nobody seems to give any explanation as to what it does.
11-17-2015 11:07 AM
Yes, on the Fetch RX Data function, the "Number of Samples Rx" terminal indicates how many samples will be pulled from the buffer on each software call to that function. The hardware sampling rate is loading up the buffer at a much faster rate, so, in general, we will want to pull off a much larger set of samples on each fetch to keep pace. How much depends on how fast the buffer is filling and how slow our loop rate (software calls) is. Typically we do not expect to see loop rates run much faster than ~1 ms per iteration (conservatively speaking), and this will gradually degrade as additional processing is added.
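As a back-of-envelope check on those numbers, the minimum fetch size is just the IQ rate times the worst-case loop period. A quick sketch (the 2x headroom factor is a rule of thumb assumed here, not a driver requirement):

```python
import math

def min_samples_per_fetch(iq_rate, worst_loop_period_s, headroom=2.0):
    """Smallest "Number of Samples Rx" value that keeps pace with the hardware."""
    return math.ceil(iq_rate * worst_loop_period_s * headroom)

# At 10 MS/s with a ~1 ms software loop, breaking even takes 10,000
# samples per fetch; 2x headroom suggests fetching around 20,000.
print(min_samples_per_fetch(10e6, 1e-3))  # 20000
```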
As for the FastSendDatagramThreshold registry key, this is basically a WinSock setting that determines, based on packet size, how datagram (e.g., UDP) sends are buffered. For more details, see this Microsoft article and the Data Streaming Performance Tips in the USRP driver Help. However, I'm not sure you need to adjust this with newer versions of the driver, and I believe you are streaming over USB as opposed to Ethernet anyway.
11-24-2015 03:49 PM
Hey Generic
If you get time, I'd appreciate it if you could try this out:
I've seen the lingering error after buffer timeout problem too. See if you can replicate it, then to try and "solve" it go to the top ribbon tab "VI >> Reset Control Values >> Reset All to Default". Then reenter the control values you know *should* work and, for me, this fixes it. If you can replicate that behaviour we might be on to a bug.
In regards to your overflow problem in the first place - yeah, those USRP read and write blocks are hungry for data and you've got to keep them fed on time (in line with your channel settings). An update every 1 ms, on Windows? Probably not, I'm afraid. Either try rearchitecting the way you send data or look to USRP RIO to solve the problem. RIO could take that code and run it on the FPGA no problem at all, and you already have the extra software capabilities in your academic site license (I'm assuming your project is a student one).
You're in good hands with David here, I don't wanna interrupt that. Just my 2 pence.
11-29-2015 10:29 PM
Hello, Bowley. I have applied your advice to the lingering error, but sometimes it works and sometimes it doesn't. I can have settings that give a successful run; if I then change the settings and get a bad run, I reset all to default and re-enter the same settings that had given me a successful run. Sometimes this gives me another successful run, and sometimes the error occurs again. Sometimes I have to exit LabVIEW and restart.
Attached is my latest revised VI project file. I incorporated a case structure so that the Tx/Rx routines will not run until my vector to transmit is fully built. This was done so that data is available to send to the Tx buffer before the while loop begins. I had some success with high sampling rates like 10 MHz or 60 MHz and a chirp frequency range from 0 to 5 MHz. Unfortunately, when the chirp frequency range goes up to 5 MHz, my Rx signal appears to be just noise. There wasn't a problem with noise when my chirp range was dialed back to 0 to 100 kHz with this same VI setup.
I have heard that implementing queues can resolve this issue. I have tried to find implementations or examples of queues used while interfacing a USRP with LabVIEW Communications Suite, but there just do not seem to be any available.
This buffer overflow/underflow problem seems to be a very popular issue among users. Yet there never seems to be any clear resolution offered, from what I have seen on the NI forums. It is as if NI refuses to offer any resolution, leaving users hung out to dry.
11-30-2015 06:22 PM
Hi GenericAccount,
Sorry to hear about your frustrations with the forums. Keep in mind that they are designed to be a collaborative environment centered around user involvement. NI employees are often engaged in these discussions, but you may find it easier to contact NI directly on your particular issue. In many cases, this is what users do (or simply solve the issue on their own) and they may forget to update their posts with resolutions. With a new product like LabVIEW Communications, collaborators such as yourself are helping to grow that community though!
As for implementing a queue (a Producer-Consumer architecture) with the USRP, you may be right about the lack of examples for that specific scenario. However, there is a basic example that demonstrates the concept: navigate to Examples>>Programming Basics>>Data Exchange>>Simple Queue. In that example, data is generated in one loop (the producer) and processed in another loop (the consumer). So ideally, you could pull your data off the buffer as fast as possible in one loop and use another loop to process it, write to disk, etc. (the typical things that slow down your fetch loop). Really, you can implement it for any kind of lossless loop-to-loop communication, and it can also act as an additional buffer of sorts. I'm assuming you may be referring to something along these lines anyway. For your particular code, you may want to remove the slowdowns (like file writes) initially until we can achieve the desired rates, then gradually add them back in, possibly implementing queues or similar architectures. I'll try to take another look at it though.
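For readers more comfortable with text-based code, here is a minimal Python sketch of the same producer-consumer idea (plain lists stand in for fetched IQ blocks; none of this is the niUSRP API):

```python
import queue
import threading

def fetch_loop(q, n_fetches, samples_per_fetch):
    """Producer: stand-in for the USRP fetch loop -- enqueue fast, do no processing."""
    for i in range(n_fetches):
        block = [i] * samples_per_fetch    # placeholder for a fetched IQ block
        q.put(block)
    q.put(None)                            # sentinel tells the consumer to stop

def process_loop(q, results):
    """Consumer: the slow work (filtering, file writes) lives here instead."""
    while True:
        block = q.get()
        if block is None:
            break
        results.append(sum(block))         # placeholder for real processing

q = queue.Queue()
results = []
consumer = threading.Thread(target=process_loop, args=(q, results))
consumer.start()
fetch_loop(q, n_fetches=5, samples_per_fetch=4)
consumer.join()
print(results)   # [0, 4, 8, 12, 16]
```

The point of the split is that a momentarily slow consumer only grows the queue instead of stalling the fetch loop, which is exactly the lossless extra buffering described above.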
12-03-2015 02:53 AM
Thanks for your advice on queues, Deconn. I really appreciate you taking the time to help me out here. Your advice did bring some progress: a queue was added to my program, which is attached to this message.
I implemented a queue for sending data out to the Tx routine. My testing consisted of sending out a 0 to 100 kHz chirp signal of 1000 samples. My results seemed fine when I set the sampling rate at 1 MHz and 2 MHz; I was able to view the sampled Rx signal in MATLAB. The sampling rate was then increased to 5 MHz, and the following error message occurred:
Error 1 occurred at Enqueue Element in Tx Continuous Async.gvi
Possible reason(s): LabVIEW: An input parameter is invalid. For example, if the input is a path, the path might contain a character not allowed by the OS, such as ? or @.
This message was followed by a Tx underflow message, most likely because the Tx routine wasn't receiving data from the queue. Despite this error, I was still able to collect usable Rx samples to work with in MATLAB. The story is the same when I set the sampling rate to 10 MHz. I tried replicating this error by incorporating my chirp signal into the Simple Queue example that comes with LabVIEW Communications Suite, but everything ran fine with no errors.
I next fixed the sampling rate at 10 MHz and began increasing the chirp frequency range. Increasing the stop chirp frequency from 100 kHz to 500 kHz yielded usable Rx data. I even managed to record the Rx data with fs at 20 MHz, a chirp stop frequency of 10 MHz, and a chirp signal of 1000 sample points being transmitted.
So it seems like I may be on the right track to overcoming my buffer overflow problem. However, the issue of the VI sometimes working and sometimes not, without any changes to the parameters, still lingers. I am at a loss trying to figure that out. But introducing the queues is creating new errors, which leads to the following question:
What could be causing the enqueue error?
12-04-2015 11:44 AM
Hello GenericAccount,
The error you are receiving may be caused by an invalid queue reference being passed to one of the "Queue" LabVIEW functions used in your program. At some point in your program, you might be invalidating the queue reference, which would cause subsequent calls to throw this error.
There are of course other reasons why this error could occur but based on the fact that the error is generated by the Enqueue Element.gvi function, an invalid reference is my best guess.
What is interesting is that the error is only generated when the sampling rate is increased to 5 MHz. The error may be a byproduct of another underlying problem that is not immediately evident. In an attempt to narrow down the issue, could you clarify which queue is throwing the error? Based on your problem description, it sounds like the Enqueue Element.gvi for the Tx queue is generating the error. Is this correct?
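To illustrate that failure mode concretely, here is a toy Python model of a queue reference's lifecycle (the class and exception names are invented for illustration; they are not LabVIEW's actual queue primitives):

```python
class ReleasedQueueError(Exception):
    """Stand-in for LabVIEW error 1 raised on an invalid queue reference."""

class QueueRef:
    """Toy model of a queue reference's obtain/enqueue/release lifecycle."""
    def __init__(self):
        self._items = []
        self._valid = True

    def enqueue(self, x):
        if not self._valid:
            raise ReleasedQueueError("queue reference is no longer valid")
        self._items.append(x)

    def release(self):
        self._valid = False

q = QueueRef()
q.enqueue("chirp block 1")   # fine while the reference is valid
q.release()                  # e.g. an errored loop released the queue early
try:
    q.enqueue("chirp block 2")
except ReleasedQueueError as err:
    print("enqueue failed:", err)
```

If something in the 5 MHz case (such as an error wire stopping one loop) releases the queue while the other loop is still enqueueing, every later enqueue would fail in just this way.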
Best Regards,
j_boi