
Visa Read problem from a PIC24's UART port

Thanks for taking the time to read my post.

 

I am using LabVIEW 8 on a Windows XP PC (3.4 GHz CPU, 2 GB RAM).

 

I am using a PIC24 microcontroller (on the Explorer 16 development board), which I have programmed to acquire an AC signal at a rate of about 4 kHz. The PIC24 has a 10-bit ADC, and each acquired value is padded to 16 bits in total (format: 000000xxxxxxxxxx).

 

The serial settings for the PIC and my LabVIEW program are 115200 baud, 8 data bits, no parity, 1 stop bit.

 

In order to send the acquired value to my PC's serial port (remember, the UART carries only 8 data bits at a time), I split the 16-bit word into an MSB and an LSB and send the two 8-bit values one after the other.

 

The total data rate for this communication is about 64 kbps.
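
To make the byte packing concrete, here is a small sketch of the split and the reassembly (illustrative only; my real code is the PIC firmware plus a LabVIEW VI, and the sample value below is made up):

# Illustrative sketch only: how a 10-bit ADC sample is split into MSB/LSB
# bytes for the UART and reassembled on the PC side. The sample value and
# the rate check below are examples, not taken from the real firmware.

def split_sample(sample_10bit):
    """Pad the 10-bit value into a 16-bit word and split it into two bytes."""
    word = sample_10bit & 0x03FF        # format 000000xxxxxxxxxx
    msb = (word >> 8) & 0xFF            # upper byte (only 2 bits used)
    lsb = word & 0xFF                   # lower byte
    return msb, lsb

def join_bytes(msb, lsb):
    """Reassemble the 16-bit word from the two received bytes."""
    return ((msb & 0xFF) << 8) | (lsb & 0xFF)

if __name__ == "__main__":
    msb, lsb = split_sample(0x2A7)      # hypothetical ADC reading
    assert join_bytes(msb, lsb) == 0x2A7

    # Data-rate check: 4 kHz sample rate x 16 bits per sample = 64 kbps,
    # which fits within the 115200 baud link.
    print(4000 * 16, "bits per second")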

 

The problem is that when I run the code I have written in LabVIEW, the CPU usage shoots up to 70% and I also see buffer overruns (a non-continuous sine wave). If I add a time delay in my while loop, the buffer overruns increase.

 

Also, if I try to use the 'Bytes at Port' property node, the data I get is meaningless.

 

I would be grateful if someone could look at my code and give me some suggestions as to how I could make the 'VISA Read' VI more efficient.

 

Regards

Alex

Message 1 of 11
Alex,

Did you try setting the receive buffer size to be greater than 4096?

Prashant.
Message 2 of 11

Thanks for your quick reply Prashant.

 

I was pretty certain that the problem was not the size of the buffer, but rather the rate at which data is removed from the PIC's buffer.

 

I tried it anyway and got the same buffer overrun errors (non-continuous sine wave).

 

To make things a bit clearer: the PIC has a small 4-byte hardware buffer, and I have implemented a 2048-byte software buffer before it.

 

I am still troubleshooting and will come back if and when I have a question.

 

Regards

Alex

Message 3 of 11

Dear all,

 

I do not know if you have been following my post, but I am still getting buffer overruns (a non-continuous sine wave) when using VISA Serial Read.

 

The only way to avoid this is to make the VISA Read VI read 2 bytes at a time (with no time delay in the main while loop). However, when I do this the CPU usage shoots up to around 60% (which I would expect anyway, as the main while loop is executing as fast as possible).

 

I have attached the working code below and would appreciate ANY comments BIG or small.

 

I am still puzzled as to why, when I connect the 'Bytes at Port' property node, the data I get is not correct.

 

I have gone through the LabVIEW examples, as well as the LV Basics 1 course examples (which are similar), and I have also looked in the LabVIEW for Everyone and LabVIEW Graphical Programming books.

 

However, I have found the examples to be far too simple for what I am trying to achieve.

 

I am seriously thinking of purchasing the LV Instrument Control Self-Paced Course, but I am not quite certain it would help me much. I have read the course outline provided by NI, but it did not give me much more useful information.

 

Can anyone who has done this course advise me as to whether the material covers high-speed acquisition using VISA Serial Read/Write?

 

The course is slightly pricey at around £240 (with academic discount), and as far as I understand the course examples (might) use two HP instruments (a multimeter and a function generator) and a Tektronix oscilloscope, none of which I have access to.

 

Regards

Alex

Message 4 of 11
Hi Alex,

I've looked into your code to try to improve its performance and reliability. I've redesigned the VI slightly to use a queue, so two parallel loops run: one grabbing the data from the serial port, the other processing it as necessary. The data is read from the serial port quickly and reliably and added to the queue, from which the second loop takes it, so there is no loss of data if the processing doesn't keep up with the reading.
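
If it helps to see the idea outside of a block diagram, the producer/consumer pattern looks roughly like this in a text language (a sketch only, not your VI; the port name, the 2-byte chunk size and the print statement are placeholders):

# Rough Python analogy of the queue-based (producer/consumer) design.
# The LabVIEW VI does the same thing with two parallel while loops and a
# queue. Port settings and processing below are placeholders.
import queue
import threading

import serial  # pyserial

def reader(ser, q, stop):
    """Producer: pull bytes off the serial port as fast as they arrive."""
    while not stop.is_set():
        data = ser.read(2)               # one 2-byte sample (MSB then LSB)
        if len(data) == 2:
            q.put(data)

def processor(q, stop):
    """Consumer: reassemble samples; falling behind only grows the queue."""
    while not stop.is_set():
        try:
            data = q.get(timeout=0.1)
        except queue.Empty:
            continue
        sample = (data[0] << 8) | data[1]
        print(sample)                    # replace with scaling / charting

if __name__ == "__main__":
    ser = serial.Serial("COM1", 115200, bytesize=8,
                        parity=serial.PARITY_NONE, stopbits=1, timeout=1)
    q, stop = queue.Queue(), threading.Event()
    threading.Thread(target=reader, args=(ser, q, stop), daemon=True).start()
    processor(q, stop)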

I also made some other minor changes - for example, you no longer need to send or look for empty strings due to the way the queue structure works, which also helps to improve the program's speed. The VISA Read VI has been set to synchronous rather than asynchronous, which should also help to reduce the load on the CPU (have a look at the help for the VI to find out more about this). The CPU load is still quite high (60-70% on the 1.8 GHz single-core Centrino laptop I used to test the code) but it should be an improvement. Also, there will be no loss of data.

In relation to your query regarding the "Bytes at Port" property node: this should be working fine. I tested it here and the values it gives appear to be correct. What you might have seen, and what may have confused you, was the input buffer overflowing. When I sent data to your VI and introduced loop timing (so that the buffer overflowed), the byte count would rise to the maximum buffer size very quickly before the buffer was "drained" as fast as possible; while this was happening, the data stopped being sent. If you were seeing the number of bytes at port go up and down quickly, this was probably why.

With regard to the LabVIEW Instrument Control self-paced course: it is designed as an introduction to instrument control, including programmatic control of instruments via VISA in LabVIEW. It teaches the basics of communication with a variety of instruments through various methods, but it does not cover more advanced VISA programming, for example high-speed communication over VISA. I hope this answers your queries regarding the course.

Regards,
Tom

Applications Engineering, NI UK
Message 5 of 11

Thanks for your reply Tom.

 

Using queues was on my 'to do' list to try and see if the CPU usage drops.

 

I am happy to tell you that when I run the code you sent me the CPU usage drops to about 10-20%, which is obviously a significant reduction.

 

However, when I stop sending data, the CPU usage climbs back to 50-60%, which is a bit puzzling. I should point out that I am using a 3.4 GHz Pentium D ('pseudo' dual-core CPU), which could account for the fact that my CPU usage drops to 20%, but why it climbs back to 60% I simply do not know.

 

I was also under the impression that one should always add a time delay (even a very small one, say 1 ms) in any while loop. Should I not?

 

You mentioned that the LV Instrument Control Self-Paced Course does not deal directly with "advanced VISA programming, for example high-speed communication over VISA".

 

Would you consider my application to be advanced VISA programming?

 

I am also looking into adding two more PIC transmitters that would each transmit the same amount of data (64 kbps each). This would add even more load on the CPU, making it paramount to reduce the CPU usage for acquiring data to the absolute minimum.

 

However, I have not really thought about how I would go about reading from 3 PICs concurrently without loss of data. I would like the process to run in the background while I view the acquired data in a state-machine architecture. Could you suggest a possible solution?

 

The PICs would transmit data to COM1, COM2 and COM3. I would then have to read the data from each COM port, perhaps using some global variable that would tell me when I have data at each port.

 

I should also point out that each PIC has a 2048-byte software buffer and a 4-byte hardware buffer after that. The software buffer should ideally be reduced rather than increased, but eliminating buffer overruns is paramount.

 

Regards

Alex

Message 6 of 11
Hi Alex,

I've looked into the points you raised and should be able to help you.

The reason the CPU usage goes up so much when there is no data on the port is that the loop becomes unregulated. Normally, the producer loop at the top of the program is timed by the VISA Read VI, which waits for data to become available and reads it. However, when there is no data available, the case structure bypasses the VI and the loop runs as fast as possible. To resolve this, you may be able to put a 1 ms (no more) wait in the "false" case, i.e. when the top loop is not reading data - this means the loop will execute no more than 1000 times a second, which isn't going to be an issue for your CPU. This should also reduce the CPU loading under normal operation significantly; sorry I missed this first time round! Make sure that this does not lead to buffer overruns, however, as each time the serial port has no data the program must wait 1 ms before reading again.
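
In rough textual pseudocode the change looks like this (a sketch only; the "bytes waiting" check stands in for the case structure around VISA Read, and the argument types are assumed):

# Sketch of the polling loop with the 1 ms wait moved into the "no data"
# branch. In the actual VI this is a case structure around VISA Read; the
# wait only executes in the "false" (no bytes available) case.
import time

def poll_loop(ser, q, stop):
    """ser: pyserial Serial, q: queue.Queue, stop: threading.Event."""
    while not stop.is_set():
        if ser.in_waiting >= 2:          # "true" case: data is waiting
            q.put(ser.read(2))           # read it straight away, no delay
        else:                            # "false" case: nothing to read
            time.sleep(0.001)            # 1 ms wait caps the idle loop at ~1 kHz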

The bottom loop does not need any timing on it, as it simply executes once for each element in the queue. If there is no data in the queue, the loop waits for data. The queue structure also means that there will never be data loss unless there is a buffer overflow.

If you still have CPU usage issues after introducing timing to the "false" case in the top loop, you could try the following:
* Switch off debugging when you run the VI: go to File >> VI Properties >> Execution and un-check 'Allow debugging'.
* Combine as much of the arithmetic in the bottom loop as possible (e.g. a multiplication and a division can be combined into one multiply/divide; see the small numeric sketch after this list).
* Remove some of the indicators in the bottom loop.
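
As a trivial numeric illustration of the second point (the 5 V reference and 10-bit full scale are example values, not taken from your code):

# Illustration of folding per-sample arithmetic into one operation.
# The 5.0 V reference and 10-bit full scale are assumed example values.
VREF = 5.0
FULL_SCALE = 1024.0

# Before: two operations per sample inside the loop.
def to_volts_two_ops(raw):
    return raw * VREF / FULL_SCALE

# After: pre-compute the combined factor once, one multiply per sample.
SCALE = VREF / FULL_SCALE
def to_volts_one_op(raw):
    return raw * SCALE

assert abs(to_volts_two_ops(512) - to_volts_one_op(512)) < 1e-12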

I hope introducing loop timing to the false case in the top loop will resolve the CPU usage without overflowing the buffer; I have not been able to test that here yet. If it causes buffer issues, post back and I'll set up the COM link again and look at other solutions for you.

Interpretations of "advanced" will always be relative, but from your code and posts you seem to have more than a basic understanding of LabVIEW and VISA communications.

Let me know how you get on with the code, good luck!

Best regards,
Tom

Applications Engineering, NI UK
Message 7 of 11

Thanks for your reply Tom.

 

I added a 1 ms time delay in the 'false' case as you suggested and the CPU usage did drop to a reasonable, if slightly high, 15-20%. Thanks very much for your suggestion.

 

However, I am still getting nowhere with the 'Bytes at port' property node.

 

I will try your suggestions:

* Switch off debugging when you run the VI: go to File >> VI Properties >> Execution and un-check 'Allow debugging'.
* Combine as much of the arithmetic in the bottom loop as possible (e.g. a multiplication and a division can be combined into one multiply/divide)
* Remove some of the indicators in the bottom loop.

in order to see if I can reduce the CPU usage even more.

 

Your last suggestion will be the easiest to implement.

 

As for the second one, I could implement the normalization function in the PIC itself. However, I was hoping that the Goliath called the Intel Pentium D would be much 'better' at data acquisition and number crunching, and that I would not have to pass these duties on to the slower David (the PIC).

 

I would also be grateful if you could advise me on the question I asked in my previous post.

 

That is, how would I go about reading 3 or more serial ports without loss of data?

I am slightly concerned about the number of serial ports I can read, given the high CPU usage for just one port.

I am also interested in doing an FFT on all my acquired signals, which would quite clearly increase the CPU usage even more.

 

Regards

Alex

 

Message 8 of 11
Hi Alex,

I've had another look at your program and set up the serial communications again; unfortunately, I don't think there's much else you can do to optimise the program. The high CPU use at this stage is a result of the very fast loop rates and their associated overhead rather than the raw processing required. The best way to reduce CPU use further is to buffer the data more in hardware or software, so that the serial read loop only needs to run at, say, 1 ms intervals and read/process more data at once. I don't know if this is possible with your setup.

As for reading data on 3 COM ports simultaneously, you have a few options:
* Go to File >> VI Properties >> Execution and allow re-entrant execution. This will allow multiple instances of the program to run in multiple CPU threads so that they are multi-tasked properly.
* Create a main VI containing 3 instances of your current VI with suitable inputs and outputs (e.g. input which COM port to use, output the data being read). These VIs would then run in parallel.
* Create 3 queues within your current VI, each dealing with a different COM port in parallel.
There aren't really any special considerations other than the fact that the 3 ports should be read/processed in parallel; LabVIEW will automatically thread your application for you to give you the benefit of your multi-core CPU. I tried adding an FFT (using an Express VI, which isn't necessarily the most efficient method) to your current program and the CPU usage only increased slightly; try implementing it and see what results you get on your CPU. You may not have too much of a problem with proper parallel processing. Again, please note that to get further CPU usage reductions you should look at buffering the data to a greater extent.
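
As a text-language analogy of the options above (a sketch only; the port names and the 2-byte sample framing are placeholders), one reader loop and one queue per port running in parallel would look roughly like this:

# Sketch: one reader thread and one queue per COM port, all running in
# parallel -- the text-language analogue of three parallel loop pairs (or a
# re-entrant subVI) in LabVIEW. Port names below are placeholders.
import queue
import threading
import time

import serial  # pyserial

PORTS = ["COM1", "COM2", "COM3"]

def reader(port_name, q, stop):
    """Producer loop for one port: read one 2-byte sample at a time."""
    ser = serial.Serial(port_name, 115200, timeout=1)
    while not stop.is_set():
        data = ser.read(2)
        if len(data) == 2:
            q.put(data)

if __name__ == "__main__":
    stop = threading.Event()
    queues = {p: queue.Queue() for p in PORTS}
    for p in PORTS:
        threading.Thread(target=reader, args=(p, queues[p], stop),
                         daemon=True).start()
    try:
        while True:                       # consumer: drain each port's queue
            for p in PORTS:
                try:
                    data = queues[p].get_nowait()
                except queue.Empty:
                    continue
                print(p, (data[0] << 8) | data[1])
            time.sleep(0.001)             # keep the consumer from spinning
    except KeyboardInterrupt:
        stop.set()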

The "bytes at port" property node should be working fine, if you have specific issues with it, please detail them here. In all tests that I've conducted, it's behaved as expected.

Good luck with your program!
Tom

Applications Engineering, NI UK
Message 9 of 11

Thanks for your reply Tom.

I have been trying to add an FFT to my program, but I have been having some trouble getting the 'Amplitude and Phase Spectrum' VI to work.

I am reluctant to add the Express FFT VI and would rather create my own FFT (and THD measurement) VI in order to reduce the CPU load. I have gone through 'The Fundamentals of FFT-Based Signal Analysis ...' tutorial, and the examples that come with LabVIEW were not very helpful either.
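
To make it clearer what I am trying to reproduce in LabVIEW, this is roughly the calculation I have in mind (a numpy sketch only; the 4 kHz sample rate is from my setup, but the block length, test signal and number of harmonics are just placeholders):

# Rough sketch of a single-sided amplitude spectrum and a simple THD
# estimate. The sample rate matches my setup; everything else is an example.
import numpy as np

FS = 4000.0                     # sample rate, Hz
N = 4000                        # samples per block (assumed: 1 s of data)

t = np.arange(N) / FS
x = np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * 150 * t)  # test signal

spectrum = 2.0 * np.abs(np.fft.rfft(x)) / N         # single-sided amplitude
freqs = np.fft.rfftfreq(N, d=1.0 / FS)

fund = np.argmax(spectrum[1:]) + 1                   # skip the DC bin
harmonics = [fund * k for k in range(2, 6) if fund * k < len(spectrum)]

thd = np.sqrt(np.sum(spectrum[harmonics] ** 2)) / spectrum[fund]
print("fundamental %.1f Hz, THD %.2f %%" % (freqs[fund], 100 * thd))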

Hope you can be of help.

Regards

Alex

Message 10 of 11