03-30-2007 05:32 PM
Something stinks even worse now. By the way, this is running in LV RT.
Here is a new diagram showing, in simplified form, what I am trying to do (explanation below the image):
Using Method 4 (to avoid the allocation displayed at the case structure tunnel in Method 2), I was seeing a mysterious 5 microseconds between writing to a shift register at the end of the loop and reading from it at the start.
Just by switching to Method 5 (everything else the same), I got that down to 1 microsecond. Hmm, I didn't think 400% overhead was necessary just to read an int (an enum, in my case) out of a cluster.
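Since I can't paste a LabVIEW diagram as text, here is a rough C analogy of what I suspect the two methods reduce to (the struct contents and names are made up for illustration): carrying the whole cluster across the shift register boundary forces a bulk copy every iteration, while pulling out just the enum does not.

#include <stdio.h>

/* Hypothetical stand-in for my cluster: one enum plus enough
   bulk data to make each copy noticeably expensive. */
typedef enum { IDLE, RUN, STOP } State;

typedef struct {
    State  state;
    double history[512];    /* bulk payload */
} LoopData;

/* Method-4-style: the whole "cluster" crosses the boundary by
   value, so the bulk payload is copied every iteration. */
static State next_state_by_copy(LoopData d)
{
    return d.state;         /* copy already happened at the call */
}

/* Method-5-style: only the enum crosses the boundary; the bulk
   data stays where it is. */
static State next_state_by_ref(const LoopData *d)
{
    return d->state;        /* no bulk copy */
}

int main(void)
{
    LoopData d = { RUN, { 0 } };
    printf("%d %d\n", next_state_by_copy(d), next_state_by_ref(&d));
    return 0;
}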
I also have an additional mysterious 4 usec appearing inside the loop itself (within the case structure), probably caused by something similar.
My guess is that it revolves around cluster copies that the "buffer allocation" tool fails to report. The diagram looks the same to that tool no matter which method I choose.
Thanks to the Execution Trace Toolkit for rooting it out.
If I have a chance, I'll try to create an example that shows this timing without all the complexity of my application.
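Until then, here is the plain-C version of the measurement I have in mind: a minimal sketch, assuming a POSIX clock_gettime is available, where the loop body is just a placeholder for whatever section you want to bracket.

#include <stdio.h>
#include <time.h>

/* Microseconds between two monotonic timestamps. */
static double us_between(struct timespec a, struct timespec b)
{
    return (double)(b.tv_sec - a.tv_sec) * 1e6
         + (double)(b.tv_nsec - a.tv_nsec) / 1e3;
}

int main(void)
{
    struct timespec t0, t1;
    volatile double sink = 0.0;     /* keeps the loop from being optimized away */

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < 100000; i++)
        sink += i * 0.5;            /* placeholder section under test */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    printf("total: %.3f us, per iteration: %.6f us\n",
           us_between(t0, t1), us_between(t0, t1) / 100000.0);
    return 0;
}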