11-02-2007 12:28 PM
Thanks for taking the time to read my post.
I am using LabVIEW 8 on a Windows XP PC (3.4 GHz CPU, 2 GB RAM).
I am using a PIC24 microcontroller (on the Explorer 16 development board), which I have programmed to acquire an AC signal at a rate of about 4 kHz. The PIC24 has a 10-bit ADC, and each acquired value is zero-padded to 16 bits in total (format: 000000xxxxxxxxxx).
The serial settings for the PIC and my LabVIEW program are 115200 baud, 8 data bits, no parity, 1 stop bit.
To send each acquired value to my PC's serial port (remember, a UART frame carries only 8 data bits), I split the 16-bit word into MSB and LSB and send the two 8-bit values one after the other.
The total data rate for this communication is about 64 kbps.
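To make the framing concrete, here is a small sketch (in Python, for illustration only; the real firmware is PIC C and the receiver is LabVIEW) of the MSB/LSB split and reassembly described above, together with the data-rate arithmetic. The helper names are hypothetical:

```python
# Sketch: pack a 10-bit ADC reading into two UART bytes and reassemble
# it on the PC side, plus the data-rate arithmetic from the post.

def split_sample(sample):
    """Split a 16-bit word (10-bit ADC value zero-padded) into (MSB, LSB)."""
    assert 0 <= sample < 1024           # 10-bit ADC range
    return (sample >> 8) & 0xFF, sample & 0xFF

def join_bytes(msb, lsb):
    """Reassemble the 16-bit sample from the two received bytes."""
    return (msb << 8) | lsb

msb, lsb = split_sample(0x02A7)
assert join_bytes(msb, lsb) == 0x02A7

# Payload rate: 4,000 samples/s x 16 bits = 64 kbps, as stated.
# On the wire each byte also carries a start and a stop bit (8-N-1),
# so the link actually runs at 80 kbps -- still under 115200 baud.
payload_bps = 4000 * 16
wire_bps = 4000 * 2 * 10
print(payload_bps, wire_bps)            # 64000 80000
```

The wire-rate check matters: 80 kbps of the 115.2 kbps link is used, so the serial link itself has headroom and any overruns must come from how fast the PC side drains the data.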
The problem is that when I run the code I have written in LabVIEW, the CPU usage shoots up to 70% and I also see buffer overruns (a non-continuous sine wave). If I add a time delay in my while loop, the buffer overruns increase.
Also, if I try to use the 'Bytes at Port' property node, the data I get is meaningless.
I would be grateful if someone could look at my code and give me some suggestions on how I could make the 'VISA Read' VI more efficient.
Regards
Alex
11-02-2007 01:53 PM
11-06-2007 11:56 AM
Thanks for your quick reply Prashant.
I was pretty certain that the problem was not the size of the buffer, but rather the rate at which data is removed from the PIC's buffer.
I tried it anyway and got the same buffer overrun errors (non-continuous sine wave).
To make things a bit clearer: the PIC has a small 4-byte hardware buffer, and I have implemented a 2048-byte software buffer in front of it.
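For anyone unfamiliar with why a too-slow consumer shows up as a broken sine wave: once the fixed-size buffer fills, new samples are simply lost. A minimal sketch (hypothetical API, not the actual firmware) of such a ring buffer and its overrun counter:

```python
# Illustrative fixed-size ring buffer, like the PIC-side buffers described
# above: when the consumer falls behind, writes are dropped and counted.

class RingBuffer:
    def __init__(self, size):
        self.buf = [0] * size
        self.size = size
        self.head = 0                   # next write position
        self.tail = 0                   # next read position
        self.count = 0
        self.overruns = 0

    def put(self, byte):
        if self.count == self.size:
            self.overruns += 1          # consumer too slow: sample lost
            return False
        self.buf[self.head] = byte
        self.head = (self.head + 1) % self.size
        self.count += 1
        return True

    def get(self):
        if self.count == 0:
            return None                 # buffer empty
        byte = self.buf[self.tail]
        self.tail = (self.tail + 1) % self.size
        self.count -= 1
        return byte

rb = RingBuffer(4)                      # like the PIC's 4-byte hardware FIFO
for b in range(6):                      # write 6 bytes without reading any
    rb.put(b)
print(rb.overruns)                      # 2 bytes were dropped
```

Each dropped byte removes part of a waveform period, which is exactly what a "non-continuous sine wave" on the front panel looks like.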
I am still troubleshooting and will come back once and if I have a question.
Regards
Alex
11-12-2007 01:40 PM
Dear all,
I do not know if you have been following my post, but I am still getting buffer overruns (a non-continuous sine wave) when using VISA Serial Read.
The only way to avoid this is by making the VISA Read VI read 2 bytes at a time (with no time delay in the main while loop). However, when I do this, the CPU usage shoots up to around 60% (which is something I would expect anyway, as the main while loop is executing as fast as possible).
I have attached the working code below and would appreciate ANY comments BIG or small.
I am still puzzled as to why, when I connect the 'Bytes at Port' property node, the data I get is not correct.
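One possible explanation (an assumption on my part, not something confirmed in the thread): if you read exactly the number of bytes 'Bytes at Port' reports, that count can be odd, and from then on the MSB/LSB pairing is shifted by one byte, turning every sample into garbage. A sketch of the usual fix, carrying any unpaired byte over to the next read (helper name and structure are hypothetical):

```python
# Keep 16-bit samples aligned when reads return arbitrary byte counts:
# any trailing unpaired byte is saved and prepended to the next chunk.

leftover = b""

def decode_chunk(chunk):
    """Consume raw serial bytes, return the complete 16-bit samples in them."""
    global leftover
    data = leftover + chunk
    if len(data) % 2:                   # odd count: hold back the last byte
        data, leftover = data[:-1], data[-1:]
    else:
        leftover = b""
    return [(data[i] << 8) | data[i + 1] for i in range(0, len(data), 2)]

# Simulate two reads where the first returns an odd number of bytes:
assert decode_chunk(b"\x01\x02\x03") == [0x0102]
assert decode_chunk(b"\x04") == [0x0304]
```

In LabVIEW terms this means the byte count wired into VISA Read should be rounded down to an even number (or the odd byte buffered in a shift register) rather than taken straight from 'Bytes at Port'.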
I have gone through the LabVIEW examples, as well as the LV Basics 1 course examples (which are similar), and I have also looked in the LabVIEW for Everyone and LabVIEW Graphical Programming books.
However, I have found the examples far too simple for what I am trying to achieve.
I am seriously thinking of purchasing the LV Instrument Control Self-Paced Course, but I am not certain it would help me much. I have read the course outline provided by NI, but it did not give me much more useful information.
Can anyone who has 'done' this course advise me on whether the material covers 'high'-speed acquisition using VISA Serial Read/Write?
The course is slightly pricey at around £240 (with academic discount), and as far as I understand, the course's examples (might) use two HP instruments (a multimeter and a function generator) and a Tektronix oscilloscope, none of which I have access to.
Regards
Alex
11-16-2007 05:25 AM
11-16-2007 12:04 PM
Thanks for your reply Tom.
Using queues was on my 'to do' list to try and see if the CPU usage drops.
I am happy to tell you that when I run the code you sent me the CPU usage drops to about 10-20%, which is obviously a significant reduction.
However, when I stop sending data, the CPU usage climbs back up to 50-60%, which is a bit puzzling. I should point out here that I am using a Pentium D 3.4 GHz ('pseudo' dual-core CPU), which could account for the fact that my CPU usage drops to 20%, but why it climbs back to 60% I simply do not know.
I was also under the impression that one should always add a time delay (even a very small one, say 1 ms) in any while loop. Should I not?
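For context, the producer/consumer idea behind the queue-based VI can be sketched like this (in Python for illustration; the serial read is simulated, and the 3.3 V reference used in the scaling is an assumption). The point is that when every loop iteration blocks on something — a serial read in the producer, a queue dequeue in the consumer — neither loop busy-waits, so no fixed delay is needed:

```python
# Producer/consumer with a queue: each loop blocks instead of spinning.

import queue
import threading

q = queue.Queue()

def producer(samples):
    for s in samples:                   # stands in for a blocking VISA Read
        q.put(s)
    q.put(None)                         # sentinel: no more data

def consumer(out):
    while True:
        s = q.get()                     # blocks until data arrives: no busy-wait
        if s is None:
            break
        out.append(s * 3.3 / 1023)      # scale 10-bit count to volts (assumed ref)

out = []
t = threading.Thread(target=producer, args=(range(5),))
t.start()
consumer(out)
t.join()
print(len(out))                         # 5
```

A loop that spins on a non-blocking check is the case where a small delay (or a timeout on the dequeue) is needed — which would also explain CPU climbing back up when no data arrives, if the idle path polls without waiting.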
You mentioned that the LV Instrument Control Self-Paced Course, does not deal directly with “advanced VISA programming, for example high-speed communication over VISA”.
Would you consider my application to be Advanced Visa programming?
I am also looking into adding two more PIC transmitters that would transmit the same amount of data (64 kbps each). This would add even more load on the CPU, making it paramount to reduce the CPU usage for acquiring data to the absolute minimum.
However, I have not really thought about how I would go about reading 3 PICs concurrently without loss of data. I would like the process to run in the background while I view the data in a state-machine architecture. Could you suggest a possible solution?
The PICs would transmit data to COM1, COM2 and COM3. I would then have to read data from each COM port respectively, using some global variable to inform me when I have data at each port.
I should also point out that each PIC has a 2048-byte software buffer and a 4-byte hardware buffer after it. The software buffer should optimally be reduced rather than increased, but eliminating buffer overruns is paramount.
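One common way to handle several ports without polling global variables — sketched here in Python with simulated reads, as an assumption about the design rather than a tested LabVIEW solution — is one reader loop per port, each pushing (port, data) pairs into a single shared queue that the display/state-machine loop consumes:

```python
# One reader thread per COM port, all feeding one shared queue.

import queue
import threading

data_q = queue.Queue()

def port_reader(port_name, fake_stream):
    # In the real program this loop would block on a serial read of port_name.
    for chunk in fake_stream:
        data_q.put((port_name, chunk))  # tag data with its source port

streams = {"COM1": [b"\x01\x02"], "COM2": [b"\x03\x04"], "COM3": [b"\x05\x06"]}
threads = [threading.Thread(target=port_reader, args=(p, s))
           for p, s in streams.items()]
for t in threads:
    t.start()
for t in threads:
    t.join()

received = {}
while not data_q.empty():
    port, chunk = data_q.get()          # consumer sees data from all ports
    received[port] = chunk
print(sorted(received))                 # ['COM1', 'COM2', 'COM3']
```

In LabVIEW the equivalent would be three parallel acquisition loops (one VISA session each) all enqueueing onto one queue, with the port name or an ID bundled alongside the data, rather than signalling through globals.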
Regards
Alex
11-19-2007 07:53 AM
11-20-2007 09:53 AM
Thanks for your reply Tom.
I added a 1 ms time delay in the 'false' case as you suggested and the CPU usage did drop to a reasonable, but slightly high 15-20%. Thanks very much for your suggestion.
However, I am still getting nowhere with the 'Bytes at port' property node.
I will try your suggestions:
* Switch off debugging when running the VI: go to File >> VI Properties >> Execution >> un-check 'Allow debugging'.
* Combine as much of the arithmetic in the bottom loop as possible (e.g. a multiplication and a division can be combined into one multiply/divide).
* Remove some of the indicators in the bottom loop.
in order to see if I can reduce the CPU usage even more.
Your last suggestion will be the easiest to implement.
As for the second one, I could implement the normalization function in the PIC itself. However, I was hoping that the Goliath called the Intel Pentium D would be much 'better' at data acquisition and number crunching, and that I would not have to hand those duties to the slower David (the PIC).
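The "combine the arithmetic" suggestion, sketched numerically (the 3.3 V reference voltage is an assumption, not stated in the thread): instead of one multiply and one divide per sample, fold the constants into a single precomputed scale factor:

```python
# Fold per-sample (count * V_REF) / ADC_MAX into one multiply.

V_REF = 3.3
ADC_MAX = 1023                          # 10-bit ADC full scale

def normalize_slow(count):
    return (count * V_REF) / ADC_MAX    # two operations per sample

SCALE = V_REF / ADC_MAX                 # computed once, outside the loop

def normalize_fast(count):
    return count * SCALE                # one multiply per sample

assert abs(normalize_slow(512) - normalize_fast(512)) < 1e-12
```

At 4,000 samples/s the saving is small but real, and the same folding applies to any chain of constant multiplies, divides and offsets in the processing loop.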
I would also be grateful if you could advise on the question I asked you in the previous post.
That is, how would I go about reading 3 or more serial ports without loss of data?
I am slightly concerned about the number of serial ports I can read, given the high CPU usage for just one port.
I am also interested in doing an FFT on all my acquired signals, which would quite clearly increase the CPU usage even more.
Regards
Alex
11-21-2007 11:15 AM
01-11-2008 09:31 AM
Thanks for your reply Tom.
I have been trying to add an FFT to my program, but I have been having some trouble getting the 'Amplitude and Phase Spectrum' VI to work.
I am reluctant to add the Express FFT and would rather create my own FFT (and THD measurement) VI in order to reduce the CPU load. I have gone through 'The Fundamentals of FFT-Based Signal Analysis ...' tutorial. The examples that come with LabVIEW were also not very helpful.
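For reference, here is a pure-Python sketch of the quantities involved — a single-sided amplitude spectrum and a THD figure — using a plain DFT for clarity. This is my own illustrative version, not the internals of the 'Amplitude and Phase Spectrum' VI; a real implementation would use an FFT for speed:

```python
# Single-sided amplitude spectrum via a direct DFT, plus THD.

import cmath
import math

def amplitude_spectrum(x):
    """Single-sided amplitude spectrum of a real signal of length N."""
    n = len(x)
    amps = []
    for k in range(n // 2 + 1):
        s = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        scale = 1 / n if k in (0, n // 2) else 2 / n   # DC/Nyquist not doubled
        amps.append(abs(s) * scale)
    return amps

def thd(amps, fundamental_bin, n_harmonics=3):
    """THD = RMS sum of harmonic amplitudes / fundamental amplitude."""
    fund = amps[fundamental_bin]
    harm = math.sqrt(sum(amps[fundamental_bin * h] ** 2
                         for h in range(2, n_harmonics + 2)))
    return harm / fund

# 64-point sine at bin 4 with a 10% third harmonic at bin 12:
n = 64
x = [math.sin(2 * math.pi * 4 * t / n) + 0.1 * math.sin(2 * math.pi * 12 * t / n)
     for t in range(n)]
amps = amplitude_spectrum(x)
print(amps[4], thd(amps, 4))            # fundamental ~1.0, THD ~0.1
```

Note the direct DFT is O(N^2), so for continuous 4 kHz data an FFT (O(N log N)) on power-of-two blocks is the practical choice; the amplitude scaling above matches the single-sided convention described in the NI tutorial.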
Hope you can be of help.
Regards
Alex