LabVIEW Embedded


ARM UDP Receive Buffer

Message 11 of 22

Part 3

 

Again, rename them from ZIP to RAR, and download WinRAR from http://www.rarlab.com/download.htm to extract them.

 

Thanks,

 

Tony

Message 12 of 22

Hey  Tony,

 

Thanks for posting this code. Stephen will be working on setting up an example tomorrow and has been talking with R&D about the best next steps.

 

Will keep you posted.

Regards,
Claire Reid
National Instruments
Message 13 of 22

Hi Tony,

 

I have some things for you to try from our R&D team.

 

1. Instead of using a timed loop, replace this with a normal while loop.

2. Verify your network. Try connecting the two devices by using a switch.

3. Try to determine whether this is a LabVIEW-specific problem or an issue with the hardware itself. Are you familiar enough with C to use a UDP example to try the same thing outside of LabVIEW, for testing purposes?

 

Regards,

Stephen S.

National Instruments
Applications Engineering
Message 14 of 22

Stephen,

 

Thanks for the ideas; here are my experiences so far:

 

1.  I've tried this VI with both a normal while loop and a timed loop.  If I use a timed loop, no packets ever arrive - at all.  The only way I can receive packets is to use a normal while loop.  (I'm guessing the timed loop is pre-empting something deeper down in the RTOS?)

2. My original test setup was using a total of 10 of these boards.  (LM3S8962)  They all exhibit the exact same behavior when programmed with the same main VI from LabVIEW.

3. Another engineer here has been able to modify one of the Luminary Micro examples (written in C, using the uIP stack, called "enet_uip", from the LMI CD also included with the eval kit) to do essentially what the VI does, and that program does not show this behavior even when it is loaded onto another 6 boards.

4. I've tried connecting these series of boards with:

  a. Crossover cable (obviously only 2 boards connected)

  b. 8-Port Hub

  c. 5-Port netgear switch

  d. 24-Port netgear L2/L3 managed switch

with no change in this behavior.

 

Thanks again,

 

Tony

Message 15 of 22

Hi Tony,

 

I wanted to give you the heads-up that Stephen was out on Friday, so we'll be pushing this information onto R&D on Monday. You've done a lot of troubleshooting already and that's really helpful. Thanks!

 

 

Product Support Engineer
National Instruments
Message 16 of 22

Hi Tony,

 

Since communication is an inherently indeterministic process, LabVIEW runs the communication stack in a separate thread at the same priority as the main VI thread. Also, in LabVIEW every Timed Loop spawns a new thread with a higher priority than the main VI thread (and the communication thread). So if your VI has no Timed Loops, in an ideal world the main VI thread and the communication thread should get equal processor time. A Timed Loop, if introduced, will preempt both of the aforementioned threads until it sleeps. This should help explain why a Timed Loop with very fast timing will completely preempt the communication. The bunching of packets can be explained as follows: the main VI thread runs for a certain amount of time and queues up as many packets as the number of iterations of the sender loop. The communication thread then kicks in and transmits all the queued packets, and so on.

 

The best way to get a regular transmit rate on the sender would be to use a while loop with some amount of wait, so that the main VI thread sleeps and the communication thread gets enough processor time. Since thread scheduling in this case is handled completely by RTX (our RTOS), I cannot give you a hard timeline - it is something we will have to figure out empirically. However, a rate of 1 kHz seems entirely too fast.

 

LabVIEW uses the RL-TCPnet stack, so comparing its performance to the uIP stack might not be an "apples to apples" comparison. Also, it is likely that the C example does only UDP communication, while LabVIEW runs many other processes in the background.

 

Please let me know if I can help explain this better.

 

Thanks,

Jaidev  

Message Edited by Jaidev on 12-07-2009 05:02 PM
Senior Product Manager
National Instruments
Message 17 of 22

Jaidev,

 

Thanks for the reply. I'm a bit confused, though - it almost sounds like you're answering a different question.

 

1. I would agree that Ethernet communication is - theoretically - inherently indeterministic; however, in special cases (i.e. a quiescent network) its timing properties can be adequately characterized for our needs for "determinism." Your explanation of LabVIEW's threading and communication (as well as the interaction between an "aggressively" [a.k.a. less than 100 ms] timed loop and IP communications) is excellent and details exactly what we don't want to happen. That is, we don't want the main VI thread to queue any packets; we want the packets delivered to the VI immediately. Please note, this buffering/queueing only occurs with received packets, not with transmitted packets.

 

2. The way that we were able to achieve a regular transmit rate on the sender was to use a 1 ms timed loop with a UDP Send call. As the Wireshark capture demonstrates, your RTOS is completely capable of sending a packet every 1 ms +/- a few microseconds. This, however, is immaterial to the problem reported: this behavior was constructed purely to demonstrate the buffering of received packets and will never be used in the actual functionality of the processor, despite its ideal performance. (Pity.)

 

3. NI's choice of IP stack is, as far as I'm concerned, fine. It's the management of that stack that seems to be the issue. True, uIP is a much simpler stack - although it does support IP, TCP, UDP, and several other protocols (http://www.sics.se/~adam/uip/index.php/Main_Page) - but it was chosen for its simplicity, to demonstrate that the hardware itself is not the issue, as requested by another support representative. I'm afraid that if you had opened the C example (which NI distributes as part of the evaluation kit, since the CD containing it comes inside the LM3S8962 box), you would have seen that it also runs a full web server (HTTP) at the same time as our tiny little GPIO-twiddling modification, as well as a few other minor background tasks. Even with the additional burden of a web server, we were able to see (very regularly) less than 5 us worth of "jitter" in the UDP broadcast packet distribution over the above-mentioned network equipment. (Jitter being defined as the difference in time between the first LM3S8962 seeing the packet and the last LM3S8962 seeing the packet.) Unfortunately, in this case, it seems like LabVIEW's ability to multithread is a detriment to our ability to do what we need to do.

 

So, to sum up, we'd like to simply remove any and all UDP packet receive buffering and have the UDP Receive VI return packets as soon as they arrive. (Even if this adversely affects the behavior of other threads running in the default priority pool.) I've looked around through the available source for the LM3S8962 target, and the only mention of RX buffering (in LM3S_EMAC.h line 75, #define RXFC_MASK 0x0000003F) seems to have no effect when changed and recompiled.

 

If that is not feasible, then we would like to increase the priority of the receive packet thread in RTX so that it's able to service the packets as soon as possible after they arrive, instead of semi-randomly. (Our tests indicate the timing was around 27 ms +/- 27 ms, i.e. anywhere from under a microsecond up to ~55 ms.)

 

Thanks again for your time,

 

Tony

 

Message 18 of 22

Hi Tony,

 

I'm sorry - it seems my first response was indeed a little hasty. In this case, it is the polling thread on the receiver that shows you the buffering behavior. As I explained before, this thread runs at the same priority as the main VI thread. I will show you how to raise the priority of this thread; it is then also necessary to add some delay to it so that it doesn't starve the main VI thread. This code can be found in:

 

 <LabVIEW 2009>\Targets\Keil\Embedded\RealView\Drivers\RL-ARM\TCP\RLARM_TCPWrapper.c

 

 

The thread is spawned on this line with the default priority:

 

 

os_tsk_create_user (tcp_task, DEFAULT_PRIORITY, &tcp_stack, sizeof(tcp_stack));

 

You can change DEFAULT_PRIORITY to (DEFAULT_PRIORITY + 10).

 

Now we can add the delay in tcp_task, which is the function that actually processes the received packets:

 

 

__task void tcp_task (void) {
    /* Main Thread of the TcpNet. This task should have */
    /* the lowest priority because it is always READY.  */

    while (1) {
        os_mut_wait(stackMutx, (-1));   // Get stack mutex...
        main_TcpNet();
        os_mut_release(stackMutx);      // Release stack mutex...
        os_tsk_pass();
    }
}

 

You can add os_dly_wait(1); as the last line of the while loop. The function argument is the delay in milliseconds, so the thread will poll the socket every millisecond. Your main VI might be slightly starved, but you can play with the delay to get the behavior that you need.

 

Your final code should look like this:

 

 

__task void tcp_task (void) {
    /* Main Thread of the TcpNet. This task should have */
    /* the lowest priority because it is always READY.  */

    while (1) {
        os_mut_wait(stackMutx, (-1));   // Get stack mutex...
        main_TcpNet();
        os_mut_release(stackMutx);      // Release stack mutex...
        os_tsk_pass();
        os_dly_wait(1);                 // wait one ms
    }
}

 

Rebuild the project once you have made these changes.

Hope this helps,

Jaidev 

Message Edited by Jaidev on 12-09-2009 04:29 PM
Message Edited by Jaidev on 12-09-2009 04:30 PM
Senior Product Manager
National Instruments
Message 19 of 22

Jaidev,

 

Thanks for the instructions.  They were very clear and easy to implement.  I'm really starting to understand more of what's going on under the hood, and I like what I see.

 

Unfortunately, after playing with it a good bit, it appears that the problem lies elsewhere - probably deep inside main_TcpNet().

I tried, as you suggested, increasing the priority of the tcp_task() thread, but there was no discernible effect. Just out of curiosity, I kept increasing the priority until, somewhere around DEFAULT_PRIORITY+30, it appeared that tcp_task() was starving the other threads - sometimes packets would get through and sometimes they wouldn't. When they did get through, it was frequently much, much later. (Sometimes as late as 150 ms!) And, yes, I did add a 1 ms wait at the end of the forever-while block.

 

If, as I understand it, main_TcpNet() is a stock RL-TCPnet function that cannot be modified, then the LabVIEW Embedded system itself is not a suitable choice for what we need. Our timing needs are just too stringent unless this function can be modified.

 

Thanks again,

Tony

Message 20 of 22