Comparing to C# and VB.NET is unfair because they are .NET languages and probably have various optimizations precisely for this. You should try checking another non-.NET environment.
Unfortunately, I don't know of another non-.NET environment that can call .NET 2.0 code. I understand that C# and VB.NET will be faster -- but I wanted to make sure that my .NET function wasn't taking too long either. I needed a good control measurement.
I think that DLL calls execute in the calling VI's thread, and a VI's thread is usually set by default to be the same as its caller's. If the caller has to do a lot of UI interaction, LV can run the entire VI in the UI thread to avoid the overhead of thread switching. One option to try: change the thread of your .NET-calling VI in the VI Properties>>Execution dialog.
OK, I will try that (on Monday -- I'm not at work now).
I can't look at your code, but I don't know how you would measure the time it takes to actually call into the .NET framework.
If you have LV 8.2 on Windows, you can. It's in the attachment in the original post.
Basically, to measure timing in LV you use a three-stage process: get the ms timer before your operation, perform the operation, get the ms timer after, and subtract. To force the correct order you use the error wire or (since the ms timer does not have error terminals) a flat sequence structure with 3 to 5 frames.
Since the timer's resolution is a single ms, you usually run your actual code in a loop to get a statistical result from many calls.
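Just to make the pattern concrete in text form (LV code itself is graphical), here is a minimal C# sketch of the same three-stage idea; Environment.TickCount stands in for the ms timer, and DoOperation is a placeholder for whatever is being measured:

```csharp
using System;

class TimingSketch
{
    static void DoOperation()
    {
        // placeholder for the code under test (e.g. the .NET call)
    }

    static void Main()
    {
        const int iterations = 400;          // loop to average out the 1 ms resolution

        int before = Environment.TickCount;  // stage 1: read the ms timer
        for (int i = 0; i < iterations; i++)
        {
            DoOperation();                   // stage 2: perform the operation
        }
        int after = Environment.TickCount;   // stage 3: read the ms timer and subtract

        double perCallMs = (after - before) / (double)iterations;
        Console.WriteLine("~{0} ms per call", perCallMs);
    }
}
```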
I did basically what you describe. The only difference is that instead of using the ms timer, I called Win32's QueryPerformanceCounter (available in kernel32.dll), which has an accuracy in the 1/10 to 10 microsecond range depending on your computer. For my computer, I think it's accurate to about 3 microseconds. I also used a statistics object that combined data from 400 runs, and I ran the LV program multiple times with very consistent results.
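For reference, this is roughly what calling QueryPerformanceCounter from .NET via P/Invoke looks like in C# (a sketch, not my actual code); dividing a tick delta by QueryPerformanceFrequency gives seconds, which is where the sub-millisecond resolution comes from:

```csharp
using System;
using System.Runtime.InteropServices;

class HighResTimer
{
    [DllImport("kernel32.dll")]
    static extern bool QueryPerformanceCounter(out long lpPerformanceCount);

    [DllImport("kernel32.dll")]
    static extern bool QueryPerformanceFrequency(out long lpFrequency);

    static void Main()
    {
        long freq, start, stop;
        QueryPerformanceFrequency(out freq);   // ticks per second

        QueryPerformanceCounter(out start);
        // ... operation under test goes here ...
        QueryPerformanceCounter(out stop);

        double microseconds = (stop - start) * 1000000.0 / freq;
        Console.WriteLine("{0} us elapsed", microseconds);
    }
}
```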
For a .NET function, you would probably have to run a function that does nothing, so that you measure just the overhead of calling into the framework itself, but I don't know how accurate that would be.
My .NET function (also viewable in the attachment in the original post) does only one thing -- call QueryPerformanceCounter and store the result in a variable. I know this doesn't take long, because that was my control measurement. A later call into the .NET library would then report the stored time, and I could use that to determine the entry and exit times.
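For anyone who doesn't want to open the attachment, a hypothetical sketch of that kind of class looks something like this (names here are made up for illustration; the stored value lives in a field so a later call can read it back):

```csharp
using System.Runtime.InteropServices;

public class TimestampProbe
{
    [DllImport("kernel32.dll")]
    static extern bool QueryPerformanceCounter(out long lpPerformanceCount);

    private long lastTicks;   // kept in a field so a later call can retrieve it

    // The method LV calls in the timing loop: does nothing but capture a tick count.
    public void Mark()
    {
        QueryPerformanceCounter(out lastTicks);
    }

    // Called afterwards to report the stored timestamp.
    public long GetLastTicks()
    {
        return lastTicks;
    }
}
```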
In any case, I would expect it to be slower than .NET languages.
Me too. But I'm finding LV to be 1000 times slower. A millisecond is a very long time on modern computers.