Hi,
I am in the process of changing over an old program to what appears to be an improved structure. Originally, the entire program was one big case statement; now, it is four loops - one for reading network packets, one for writing them, one for processing the network commands, and one for checking the front panel.
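Since I think in C, here is roughly the shape of the new structure as I would sketch it there, with pthreads and a hand-rolled queue standing in for the LabVIEW loops and queues. Everything here (the Packet type, queue size, loop names) is just illustrative, not my actual code:

```c
#include <pthread.h>
#include <stdint.h>

typedef struct {              /* one parsed "network" packet */
    uint8_t  command;
    uint16_t length;
    uint8_t  data[256];
} Packet;

/* Tiny single-producer/single-consumer queue (no overflow check in this sketch). */
#define QSIZE 16u
static Packet          queue[QSIZE];
static unsigned        head, tail;
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  qcond = PTHREAD_COND_INITIALIZER;

static void enqueue(const Packet *p)
{
    pthread_mutex_lock(&qlock);
    queue[head++ % QSIZE] = *p;
    pthread_cond_signal(&qcond);
    pthread_mutex_unlock(&qlock);
}

static void dequeue(Packet *p)
{
    pthread_mutex_lock(&qlock);
    while (head == tail)
        pthread_cond_wait(&qcond, &qlock);
    *p = queue[tail++ % QSIZE];
    pthread_mutex_unlock(&qlock);
}

static void *read_loop(void *arg)     /* loop 1: read and frame packets */
{
    (void)arg;
    Packet p = { .command = 0x01, .length = 0, .data = {0} };
    enqueue(&p);                      /* pretend we just parsed a frame */
    return NULL;
}

static void *process_loop(void *arg)  /* loop 2: dispatch on the command */
{
    (void)arg;
    Packet p;
    dequeue(&p);                      /* blocks until a frame arrives */
    return NULL;
}

static void *write_loop(void *arg) { (void)arg; return NULL; }  /* loop 3: send replies */
static void *panel_loop(void *arg) { (void)arg; return NULL; }  /* loop 4: front panel  */

int main(void)
{
    pthread_t t[4];
    pthread_create(&t[0], NULL, read_loop,    NULL);
    pthread_create(&t[1], NULL, process_loop, NULL);
    pthread_create(&t[2], NULL, write_loop,   NULL);
    pthread_create(&t[3], NULL, panel_loop,   NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```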
The problem is that although the new structure is probably more "appropriate" in terms of proper LabVIEW programming, it just doesn't seem to meet my timing requirements. The network read loop reads in a header, length, command, and data, whereas the old structure read in the header, length, and command, branched to the command handler, and then read in the data in the middle of the command processing.
With the old structure, we did not progress from one case in the case statement to the next until the current one was complete, because there was only a single loop. Hence, the entire packet was read in before the command was processed, and even though the VISA read had to wait, the timing was fine. Now, because sometimes there is no character available, the program goes off and does other things, and by the time it gets back to the VISA read, we have dropped a character or two. We are running at 115.2 kbaud, so a character arrives roughly every 87 µs (10 bits per character, assuming 8N1 framing).
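In C terms, what the old structure effectively did was block until the whole frame was in before dispatching anything. A minimal sketch against the NI-VISA C API, where the resource name, field sizes, byte order, and the termchar/timeout settings are all my assumptions for illustration:

```c
#include <visa.h>
#include <stdio.h>

int main(void)
{
    ViSession rm, vi;
    ViUInt32  n;
    ViByte    hdr[2], cmd, data[256];
    ViUInt16  len;

    if (viOpenDefaultRM(&rm) < VI_SUCCESS) return 1;
    if (viOpen(rm, "ASRL1::INSTR", VI_NULL, VI_NULL, &vi) < VI_SUCCESS) return 1;

    viSetAttribute(vi, VI_ATTR_ASRL_BAUD, 115200);
    viSetAttribute(vi, VI_ATTR_ASRL_END_IN, VI_ASRL_END_NONE); /* binary frames */
    viSetAttribute(vi, VI_ATTR_TMO_VALUE, 2000);               /* 2 s timeout   */

    /* Block until the whole frame is in before dispatching anything. */
    viRead(vi, hdr, sizeof hdr, &n);             /* header                        */
    viRead(vi, (ViBuf)&len, sizeof len, &n);     /* length, 2 bytes little-endian */
    viRead(vi, &cmd, 1, &n);                     /* command byte                  */
    if (len > sizeof data) len = sizeof data;
    viRead(vi, data, len, &n);                   /* payload                       */

    printf("cmd 0x%02X, %u data bytes\n", cmd, (unsigned)len);
    /* ...branch to the command handler here... */

    viClose(vi);
    viClose(rm);
    return 0;
}
```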
One possible solution, I suppose, is to make the VISA read interrupt driven. Another is to somehow manipulate the timing, but I am not sure how to ensure that the network read loop gets the highest priority. I know that one can put a millisecond delay block in the loops - I put a 0 ms delay in the network read loop. What really happens inside that delay - how are the other tasks scheduled? Is there a way to control this? I am used to C programming, where I can set thread priorities, but I haven't found such functionality in LabVIEW - does it exist?
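For reference, here is the sort of thing I mean by "interrupt driven", again sketched with the NI-VISA C API: an asynchronous read that is re-armed from a completion callback, so nothing has to poll the port. The resource name and buffer size are placeholders, and real code would need error checking:

```c
#include <visa.h>
#include <stdio.h>

static ViByte  buf[256];
static ViJobId job;

/* VISA calls this from its own thread when the async read completes. */
static ViStatus _VI_FUNCH onReadDone(ViSession vi, ViEventType etype,
                                     ViEvent event, ViAddr user)
{
    ViUInt32 count = 0;
    (void)etype; (void)user;
    viGetAttribute(event, VI_ATTR_RET_COUNT, &count);
    printf("got %u bytes\n", (unsigned)count);  /* hand off to the parser */
    viReadAsync(vi, buf, sizeof buf, &job);     /* re-arm the next read   */
    return VI_SUCCESS;
}

int main(void)
{
    ViSession rm, vi;

    if (viOpenDefaultRM(&rm) < VI_SUCCESS) return 1;
    if (viOpen(rm, "ASRL1::INSTR", VI_NULL, VI_NULL, &vi) < VI_SUCCESS) return 1;
    viSetAttribute(vi, VI_ATTR_ASRL_BAUD, 115200);

    viInstallHandler(vi, VI_EVENT_IO_COMPLETION, onReadDone, VI_NULL);
    viEnableEvent(vi, VI_EVENT_IO_COMPLETION, VI_HNDLR, VI_NULL);
    viReadAsync(vi, buf, sizeof buf, &job);     /* arm the first read */

    getchar();                                  /* run until Enter */
    viClose(vi);
    viClose(rm);
    return 0;
}
```

Is there an equivalent way to get this callback-style behavior in LabVIEW, or a way to give one loop priority over the others?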
Sorry for the long explanation, but I wanted to be clear.
Thanks,
Jason