05-27-2009 09:05 PM
Hi All,
Colleagues and I have had some recent experience with serial buffers in the ARM Embedded Module (1.1) for LabVIEW on the LPC23xx series of controllers (we are using the LPC2368/87). I thought it best to share in case others find some crazy things going on.
We were losing packet information when large packets came through on the default 64 byte allocation for the incoming serial buffer. These large packets arrived fairly regularly, with smaller packets in between. At times we found that 2 bytes would be dropped, failing the CRC check and forcing the packet to be discarded. It took some time to work out why (surely it was our firmware!), but we soon realised we had to increase this serial buffer size from 64 to something much larger - 256 or 512 - even though we have another buffer within our producer loop. The change is made in the ARM_serial.c file, easily found using Keil uVision; line #48:
#define SER_IN_BUF_SIZE 64
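For example, to quadruple the buffer we changed it to:

#define SER_IN_BUF_SIZE 256

(We assume the driver uses this constant to size one static receive buffer per UART, i.e. something along the lines of "static unsigned char ser_in_buf[SER_IN_BUF_SIZE];" for each port - we haven't traced every use of it.)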
Even though we initialised the buffer size to 512 (using "Serial Port Init.vi"), the hardcoded value of 64 was still used.
In terms of RAM allocation, a change in the 'serial in' buffer size generates a 4x memory allocation in your ZI data, which eats into the space available for the heap. This means that if I increase my buffer size by 256 bytes, I also reduce my available heap space (run-time RAM) by 1024 bytes. This can be significant if you are very tight on memory (the heap size is changed using line #100 in your 'target'.s file - LPC2300.s for me).
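For reference, the heap size in the Keil startup file is set with an EQU directive; on our LPC2300.s the line looks like the sketch below (the value here is illustrative only - use whatever your project needs):

Heap_Size       EQU     0x00004000              ; run-time heap size in bytes

Shrink or grow this value to balance against the larger serial buffers.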
I hope that helps somebody.
05-28-2009 03:02 PM
Hi David,
You are right, the serial driver (ARM_serial.c) doesn't use the buffer size input on the Serial Port Init.vi. The easiest way to change the buffer size is to modify line #48, as you mentioned.
The buffer size input is present on the top level VI for consistency in the API (the same VI is used across multiple targets), and while allocating a large buffer is useful for some targets, it might not be advisable for smaller embedded targets. That is probably the motivation behind the design decision. The 64 byte buffer never proved a problem for us internally in our testing (and we do use it to move large chunks of data). Also, the buffer is a circular buffer, so if your top level VI reads it frequently enough, it shouldn't lose any data.
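To illustrate what "frequently enough" means in practice, here is a minimal sketch of how a driver-side circular receive buffer typically behaves (this is not the actual ARM_serial.c code; all names here are made up for the example):

#define BUF_SIZE 64

static unsigned char buf[BUF_SIZE];
static volatile unsigned int wr_idx = 0, rd_idx = 0;

/* Called from the UART receive interrupt for every incoming byte. */
void isr_put_byte(unsigned char c)
{
    unsigned int next = (wr_idx + 1) % BUF_SIZE;
    if (next == rd_idx)
        return;                 /* buffer full: byte silently dropped */
    buf[wr_idx] = c;
    wr_idx = next;
}

/* Called from the application side (the top level VI's serial read). */
int app_get_byte(unsigned char *c)
{
    if (rd_idx == wr_idx)
        return 0;               /* buffer empty */
    *c = buf[rd_idx];
    rd_idx = (rd_idx + 1) % BUF_SIZE;
    return 1;
}

If the application falls more than 63 bytes behind between reads, isr_put_byte() starts discarding data, which is consistent with the couple of dropped bytes you described.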
Having said that, if anyone wants to increase the size of the buffer, they can implement the change you mention, keeping in mind the tradeoff with the available heap size. I would like to see if more people encounter this.
Thank you,
Jaidev Amrite
05-28-2009 07:12 PM
Resolving this problem did take us some time: first in identifying why we were losing packets, and second in realising that setting the buffer size on the init VI to 512 wasn't actually taking effect. We had packets (the largest ones) of 130 bytes coming in every 100 ms. Handling these packets takes time and heap space, particularly if you use more than one queue (for more than one consumer loop). For memory efficiency we have stayed away from using too many queues because of their dynamic memory allocation; if too many packets come in and we can't process them quickly enough, the heap fills up and the controller crashes (a "Memory Abort" error, as indicated in the LabVIEW processor status window).
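For anyone in the same position, one way to bound that risk is to pre-allocate a fixed pool of packet slots in static (ZI) memory instead of queueing dynamically, so that a burst drops packets rather than exhausting the heap. A rough sketch of the idea (sizes and names are ours, not from any NI library):

#include <string.h>

#define PKT_MAX   130   /* largest packet we see (one every 100 ms) */
#define PKT_SLOTS 8     /* about 8 x 132 bytes of static RAM, no heap use */

typedef struct {
    unsigned short len;
    unsigned char  data[PKT_MAX];
} packet_t;

static packet_t pool[PKT_SLOTS];
static volatile unsigned int head = 0, tail = 0;

/* Producer side: copy a packet into the pool. Returns 0 and drops the
   packet if the pool is full, instead of letting the heap overflow. */
int pkt_enqueue(const unsigned char *d, unsigned short len)
{
    unsigned int next = (head + 1) % PKT_SLOTS;
    if (next == tail || len > PKT_MAX)
        return 0;
    memcpy(pool[head].data, d, len);
    pool[head].len = len;
    head = next;
    return 1;
}

/* Consumer side: copy the oldest packet out. Returns its length, or 0. */
unsigned short pkt_dequeue(unsigned char *d)
{
    unsigned short len;
    if (tail == head)
        return 0;
    len = pool[tail].len;
    memcpy(d, pool[tail].data, len);
    tail = (tail + 1) % PKT_SLOTS;
    return len;
}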
We previously went over the known issues site and couldn't find any mention of the serial buffer size input on the ...init.vi (see: http://digital.ni.com/public.nsf/websearch/270545BCCF971FE9862574F20049095C?opendocument&Submitted&&...).
You mentioned that this is an intended design, which is surprising. Giving the user the option to control their hardware settings from the firmware (LabVIEW) would have been a real plus for NI.
I also noticed that the serial buffer size appears to be allocated for each port - well, we think so anyway. We have 4 ports on our controller, which would explain the quadruple increase in heap allocation when the buffer size is increased. Is there some way for us to set the buffer size per port, keeping the (default) 64 bytes on the unused ports and increasing the allocation only on those that need it? That would give the user more control to maximise memory usage, especially if you are only using 2 ports and are tight on memory.
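If the driver really does declare one buffer per UART from the single constant, we imagine the change could be as simple as splitting it into per-port constants - a sketch only, since we haven't verified the actual structure of ARM_serial.c (every name below is hypothetical):

/* Keep 64 bytes on the unused ports; spend RAM only where the big packets arrive. */
#define SER_IN_BUF_SIZE_0 512   /* UART0 carries our 130 byte packets */
#define SER_IN_BUF_SIZE_1 64
#define SER_IN_BUF_SIZE_2 64
#define SER_IN_BUF_SIZE_3 64

static unsigned char in_buf0[SER_IN_BUF_SIZE_0];
static unsigned char in_buf1[SER_IN_BUF_SIZE_1];
static unsigned char in_buf2[SER_IN_BUF_SIZE_2];
static unsigned char in_buf3[SER_IN_BUF_SIZE_3];

Any wrap-around arithmetic in the driver that assumes a common SER_IN_BUF_SIZE would of course need adjusting per port as well.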