LabVIEW takes a long time to call a function in .NET

It seems to me that LabVIEW takes a minimum of about a millisecond to make a call into .NET.  This is a huge obstacle as I need to make several hundred calls into a .NET library every second.  By contrast, Visual Basic .NET and C# have overheads of a microsecond or less -- 3 orders of magnitude difference.
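To make the comparison concrete, here is roughly the kind of micro-benchmark I mean, sketched in C# (DoNothing is a stand-in for the actual library call, and the call count is arbitrary -- this is an illustration, not the attached code):

using System;
using System.Diagnostics;

class CallOverhead
{
    // Stand-in for the real .NET library call being timed.
    static void DoNothing() { }

    static void Main()
    {
        const int calls = 100000;

        // Stopwatch wraps QueryPerformanceCounter on Windows.
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < calls; i++)
        {
            DoNothing();
        }
        sw.Stop();

        double perCallUs = sw.Elapsed.TotalMilliseconds * 1000.0 / calls;
        Console.WriteLine("Average per call: {0:F3} us", perCallUs);
    }
}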
 
For reference, I have a 3GHz Pentium 4 computer non-hyperthreading (Family 15, Model 3, Stepping 4), 1 GB of RAM, running Windows XP Professional.
 
I am wondering if:
  • LabVIEW really does have such a huge overhead?
  • If I am measuring the time correctly?
  • If either of the two above questions is "no", what am I doing wrong?  (What is my misunderstanding of LabVIEW control flow and how do I accurately measure the time overhead?)
  • Is there a way to make calls into .NET with less overhead?

I've attached some files to this post.  (I apologize for the names of the VIs.)

I would be extremely grateful for any help or insight into my problem.  Many thanks!

- Kevin Hall

Message 1 of 13
Comparing to C# and VB.NET is unfair because they are .NET languages and probably have various optimizations precisely for this. You should try checking another non-.NET environment.

Calling .NET in LV is still relatively rare and is probably far from optimized.

I think that DLL calls are executed in the calling VI's thread, which by default is the thread of the caller. If the caller has to do a lot of UI interaction, LV can set the VI to run entirely in the UI thread to avoid the overhead of thread switching. As an option, you could try changing the thread of your .NET-calling VI in the VI Properties>>Execution dialog.

I can't look at your code, but I don't know how you would measure the time it takes to do the actual calling of the .NET framework. Basically, to measure timing in LV you use a three-stage process: read the ms timer before your operation, perform the operation, read the ms timer again, and subtract. To force the correct order you use the error wire or (since the ms timer does not have error terminals) a flat sequence structure with 3 to 5 frames.
Since the timer's resolution is a single ms, you usually run your actual code in a loop to get a statistical result from many calls.

For a .NET function, you would probably have to call a function that does nothing, so that you can measure just the overhead of calling the framework itself, but I don't know how accurate that would be. In any case, I would expect it to be slower than .NET languages.
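For illustration only, the three-stage pattern with a do-nothing target, written in C# terms (Environment.TickCount playing the role of LV's ms timer; the stub and the count are invented):

using System;

class MsTimerPattern
{
    // Do-nothing target so that only the call overhead is measured.
    static void Stub() { }

    static void Main()
    {
        const int calls = 10000;

        int before = Environment.TickCount;   // read the ms timer
        for (int i = 0; i < calls; i++)       // perform the operation many times
        {
            Stub();
        }
        int after = Environment.TickCount;    // read the ms timer again

        // Subtract and average, since a single call is far below 1 ms.
        Console.WriteLine("Average per call: {0} ms",
                          (after - before) / (double)calls);
    }
}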

___________________
Try to take over the world!
Message 2 of 13
You might also wish to check out the blog of the former head of the LV .NET team here.

___________________
Try to take over the world!
Message 3 of 13
Comparing to C# and VB.NET is unfair because they are .NET languages and probably have various optimizations precisely for this. You should try checking another non-.NET environment.
 
Unfortunately, I don't know of another non-.NET environment that can call .NET 2.0 code.  I understand that C# and VB.NET will be faster -- but I wanted to make sure that my .NET function wasn't taking too long either. I needed a good control measurement.

I think that DLL calls are executed in the calling VI's thread, which by default is the thread of the caller. If the caller has to do a lot of UI interaction, LV can set the VI to run entirely in the UI thread to avoid the overhead of thread switching. As an option, you could try changing the thread of your .NET-calling VI in the VI Properties>>Execution dialog.

OK, I will try that (on Monday -- I'm not at work now.)
 
I can't look at your code, but I don't know how you would measure the time it takes to do the actual calling of the .NET framework.
 
If you have LV 8.2 on Windows, you can.  It's in the attachment in the original post.
 
Basically, to measure timing in LV you use a three-stage process: read the ms timer before your operation, perform the operation, read the ms timer again, and subtract. To force the correct order you use the error wire or (since the ms timer does not have error terminals) a flat sequence structure with 3 to 5 frames.
Since the timer's resolution is a single ms, you usually run your actual code in a loop to get a statistical result from many calls.
 
I did basically what you describe.  The only difference is that instead of using the ms timer, I called Win32's QueryPerformanceCounter (available in kernel32.dll), which has accuracy in the 1/10 to 10 microsecond range depending on your computer.  For my computer, I think it's accurate to about 3 microseconds.  And I did use a statistics object that combined data from 400 runs.  I ran the LV program multiple times with very consistent results.
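For reference, the C# equivalent of that harness is roughly the following (the commented placeholder marks where the measured call goes; this is a sketch, not the code from the attachment):

using System;
using System.Runtime.InteropServices;

class QpcHarness
{
    [DllImport("kernel32.dll")]
    static extern bool QueryPerformanceCounter(out long count);

    [DllImport("kernel32.dll")]
    static extern bool QueryPerformanceFrequency(out long frequency);

    static void Main()
    {
        const int runs = 400;
        long freq;
        QueryPerformanceFrequency(out freq);   // counts per second

        double sumUs = 0;
        for (int i = 0; i < runs; i++)
        {
            long before, after;
            QueryPerformanceCounter(out before);
            // ... the call being measured goes here ...
            QueryPerformanceCounter(out after);
            sumUs += (after - before) * 1e6 / freq;
        }
        Console.WriteLine("Mean over {0} runs: {1:F2} us", runs, sumUs / runs);
    }
}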

For a .NET function, you would probably have to run a function which does nothing so that you can measure just the overhead of calling the framework itself, but I don't know how accurate that would be.
 
My .NET function (also viewable in the attachment in the original post) does only one thing -- it calls QueryPerformanceCounter and stores the result.  I know this doesn't take long, because that was my control measurement.  Anyway, a later call into the .NET library would report the time, and then I could use that to determine the entry and exit times.
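In outline, a function with that shape might look like this C# sketch (the class and member names are mine, not the ones in the attachment, and I store the count in a field so the later call can report it):

using System.Runtime.InteropServices;

public class TimestampProbe
{
    [DllImport("kernel32.dll")]
    static extern bool QueryPerformanceCounter(out long count);

    private long lastCount;

    // The near-empty method whose call overhead is being measured:
    // it only reads the performance counter and stores it.
    public void Mark()
    {
        QueryPerformanceCounter(out lastCount);
    }

    // Called later to report when Mark() actually executed, so entry
    // and exit times can be reconstructed from outside.
    public long Report()
    {
        return lastCount;
    }
}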
 
In any case, I would expect it to be slower than .NET languages.
 
Me too.  But I'm finding LV to be 1000 times slower.  A millisecond is a very long time on modern computers.
Message 4 of 13
You might also wish to check out the blog of the former head of the LV .NET team here.
 
I have searched his blog, but haven't found anything about this.  Brian was very helpful last year when I had some other problems with .NET and LV.  National Instruments lost a gem in him when he left.  I hope he has fun in Washington. 
Message 5 of 13
I think that DLL calls are executed in the calling VI's thread, which is usually set by default to the thread of the caller. If the caller has to do a lot of UI interaction, LV can set the VI to run the entire VI in the UI thread to avoid the overhead of thread switching. As an option to try, you could try changing the thread of your .NET calling VI in the VI Properties>>Execution dialog.
 
I tried changing the preferred execution system to "data collection" and then to "i/o" (I don't have LV open, so these aren't the exact names).  Neither helped at all.  Then I tried raising the thread priority to "time critical" for both the "data collection" and "i/o" options.  Again, things did not change significantly.
 
It appears that I'm really hitting the limit of LabVIEW's ability to communicate with .NET. 
 
Note to LabVIEW Developers:  .NET is becoming more prominent on Windows, and more libraries -- even instrumentation and data collection libraries -- are being written in .NET.  The speed with which LV accesses .NET needs to improve in the future.
Message 6 of 13
Hi Zmeson,

Thanks for reporting this.  It has been filed with R&D (#3U19DT7K) for further investigation. This issue has come up a couple of other times, so I'm sure R&D will take a close look!
Doug M
Applications Engineer
National Instruments
For those unfamiliar with NBC's The Office, my icon is NOT a picture of me 🙂
Message 7 of 13
Dear App. Engineer,

What's the latest on this?
Message 8 of 13
Hi Churasco_Bob,

This was fixed for LabVIEW version 8.5.

Eric C.
Applications Engineer
National Instruments
Message 9 of 13
Thank you.
I will upgrade and give it a whirl.
Our application relies heavily on the .NET Framework 2.0 SP1.

Churrasco
Message 10 of 13