Where to get documentation for resources/*.dll libraries ?

> I don't think you need to make a big deal about the overhead in calling a subVI versus calling the dll directly. At least at the lower level analysis functions (i.e. power spectrum.vi), there's nothing much in there except the Call Library Function Node. I doubt that you'd even be able to measure the difference between using the subVI or not.

This does not seem to be the case. Using the attached VI, I find around 8% overhead. However, even if the call to the DLL produces the same result as the call to the VI, I'm not sure I'm using it "the good way", since it's... not documented 😉 As for the memory requirements, the direct call _seems_ to perform better.
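For the curious, here is roughly what the "direct call" amounts to when written out in plain C against the Win32 loader. SpectrumH is the export my test VI targets, but the signature below is a pure guess on my part; that's exactly the problem with an undocumented DLL.

/* Hedged sketch: loads lvanalys.dll and calls the SpectrumH export.
 * The typedef below is HYPOTHETICAL -- the real signature is not
 * documented anywhere, which is the point of this thread. */
#include <windows.h>
#include <stdio.h>

typedef int (*SpectrumH_t)(double *data, int n);  /* guessed signature */

int main(void)
{
    HMODULE lib = LoadLibraryA("lvanalys.dll");
    if (!lib) {
        fprintf(stderr, "could not load lvanalys.dll\n");
        return 1;
    }

    SpectrumH_t SpectrumH = (SpectrumH_t)GetProcAddress(lib, "SpectrumH");
    if (!SpectrumH) {
        fprintf(stderr, "export SpectrumH not found\n");
        FreeLibrary(lib);
        return 1;
    }

    double samples[8] = {0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0};
    int status = SpectrumH(samples, 8);  /* call through the raw pointer */
    printf("status = %d\n", status);

    FreeLibrary(lib);
    return 0;
}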
Message 11 of 19
I tried your VI and I got an 8% difference too.
A conclusion may be the following: copy and paste the code from the analysis VIs into your VI. This must be "the good way", as they are provided by NI.
This way, you get the additional benefit of removing some unnecessary steps, such as choosing the right target code and managing the VI reference (I'm not sure why a reference is opened; it's probably done to keep the VI in memory).
Paolo
-------------------
LV 7.1, 2011, 2017, 2019, 2021
Message 12 of 19
I don't see the difference when I compare apples to apples. Your example is calling SpectrumH in the LVAnalysis DLL. The VI you have is Auto Power Spectrum, which does some additional logic and calls Power Spectrum, which is a wrapper around the Call Library Function Node. Replace Auto Power Spectrum with Power Spectrum.vi and see what kind of difference you see.
Message 13 of 19
picpanter wrote:
> A conclusion may be the following: copy and paste the code from the analysis VIs into your VI. This must be "the good way", as they are provided by NI.

This is essentially what I've done for the test program. Yes, NI is usually right when giving code, IMHO, but for the test case I've stripped a little logic. It didn't impact the results, but it dramatically decreased the execution time for the "direct call" case, which would otherwise be around three times SLOWER. To me this means there is something I don't know or understand, which is why I'd like some documentation. Does someone from NI read us? Could you tell me why those DLLs are not documented (or am I missing something obvious)?

picpanter wrote:
> This way, you get the additional benefit of removing some unnecessary steps, such as choosing the right target code and managing the VI reference (I'm not sure why a reference is opened; it's probably done to keep the VI in memory).

Thank you for your effort, I guess I know how to achieve my goal now. I'm still curious about why there is no doc, though.

Dennis wrote:
> I don't see the difference when I compare apples to apples. Your example is calling SpectrumH in the LVAnalysis DLL. The VI you have is Auto Power Spectrum, which does some additional logic and calls Power Spectrum, which is a wrapper around the Call Library Function Node. Replace Auto Power Spectrum with Power Spectrum.vi and see what kind of difference you see.

Did I post the wrong VI? My VI is supposed to measure the execution time of three computations: using the Auto Power Spectrum VI, using the Power Spectrum VI, and using a kind of direct call to an obscure and undocumented DLL. Did you by chance try it?

You would have seen that the overhead induced by calling Auto Power Spectrum rather than Power Spectrum is far smaller than the overhead I am actually talking about (the overhead of using either the Power Spectrum or the Auto Power Spectrum VI rather than a direct call to the DLL).
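For anyone who can't open the attachment, the comparison boils down to this kind of harness (a C sketch of the idea only; the three compute_* functions are empty placeholders standing in for the Auto Power Spectrum subVI, the Power Spectrum subVI, and the raw DLL call):

#include <stdio.h>
#include <time.h>

#define REPS 100000

/* Placeholders for the three code paths being compared. */
static void compute_auto_power_spectrum(void) { /* subVI path 1 */ }
static void compute_power_spectrum(void)      { /* subVI path 2 */ }
static void compute_direct_dll_call(void)     { /* raw DLL path */ }

static double time_path(void (*path)(void), const char *name)
{
    clock_t t0 = clock();
    for (int i = 0; i < REPS; i++)
        path();
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
    printf("%-24s %8.3f s\n", name, secs);
    return secs;
}

int main(void)
{
    time_path(compute_auto_power_spectrum, "Auto Power Spectrum.vi");
    double b = time_path(compute_power_spectrum,  "Power Spectrum.vi");
    double c = time_path(compute_direct_dll_call, "direct DLL call");
    if (c > 0.0)  /* meaningful only once real workloads are plugged in */
        printf("subVI overhead: %.1f%%\n", 100.0 * (b - c) / c);
    return 0;
}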

Message 14 of 19
Sorry, I didn't see the ability to select just Power Spectrum, and when I re-ran it just now, I saw some differences too. I wonder if the fact that the VI is re-entrant is the cause. I'm going to have to experiment some more.
Message 15 of 19

My guess here (and sorry if you're not happy getting a guess from the blue bars 🙂) is that LabVIEW can't guarantee in-placeness when calling the DLL in the subVI. LabVIEW generally checks for in-placeness in subVIs to determine whether it needs to create a new buffer for the output values or whether it can reuse the input buffers. If the input buffers are not resized or otherwise altered, LabVIEW will try to reuse them and thus save the trouble of allocating new memory.

Further, my guess is that LabVIEW can't know whether the DLL call is going to resize the array, since DLLs generally operate like black boxes. Therefore it has to create an additional buffer copy for the output. This takes some time and adds some overhead. Turn on Show Buffer Allocations for your test VI and you can indeed see that an additional buffer is created for the output data of the VI but not for the CLFN.
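In C terms, the trade-off I'm describing looks something like this (an analogy only; LabVIEW's memory manager is not literally malloc/memcpy):

#include <stdlib.h>
#include <string.h>

/* In-place: the output overwrites the input, zero allocations. LabVIEW
 * can do the equivalent when it can prove a subVI behaves this way. */
static void scale_in_place(double *buf, size_t n, double k)
{
    for (size_t i = 0; i < n; i++)
        buf[i] *= k;
}

/* Stand-in for an opaque DLL routine the caller cannot see inside. */
static void black_box_transform(double *buf, size_t n)
{
    for (size_t i = 0; i < n; i++)
        buf[i] = buf[i] * buf[i];
}

/* Because the transform is a black box, the safe move is to allocate a
 * fresh output buffer and copy first -- the extra buffer that Show
 * Buffer Allocations reveals at the subVI boundary. */
static double *call_black_box_safely(const double *in, size_t n)
{
    double *out = malloc(n * sizeof *out);
    if (!out)
        return NULL;
    memcpy(out, in, n * sizeof *out);
    black_box_transform(out, n);
    return out;
}

int main(void)
{
    double x[4] = {1.0, 2.0, 3.0, 4.0};
    scale_in_place(x, 4, 2.0);                /* no allocation */
    double *y = call_black_box_safely(x, 4);  /* one extra buffer */
    free(y);
    return 0;
}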

Regarding the documentation of the DLL, NI has done a great deal of work describing in detail the algorithms used for the standard LabVIEW analysis functions, as well as how to use them in their native implementation (subVIs). I would imagine that if you want to circumvent that implementation, you're on your own, but you certainly have a lot of examples to work with. Hope this helps!

Jarrod S.
National Instruments
Message 16 of 19
You might consider merging the appropriate FFT functions on your palettes. Merging VIs on the functions palette is an old trick where LabVIEW drops the block diagram code of a particular VI, rather than the subVI itself, when you drag it from the functions palette to your block diagram. Merging the LabVIEW FFT VI might be a simple way to use the code correctly while avoiding the overhead associated with an actual subVI call. You can think of this, though it's by no means an exact analogy, as similar to inlining code in a C-based environment.
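A loose C sketch of that analogy (again, merging is an editor feature, not a compiler attribute, so take this only as a picture of where the call overhead goes):

#include <stdio.h>

/* Called as a real function: every call pays for the call frame,
 * much like a subVI call pays for its boilerplate. */
static double sum(const double *x, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += x[i];
    return s;
}

/* Marked inline: the compiler may paste the body at the call site,
 * the way a merge VI pastes its block diagram into yours. */
static inline double sum_inlined(const double *x, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += x[i];
    return s;
}

int main(void)
{
    double x[4] = {1.0, 2.0, 3.0, 4.0};
    printf("%f %f\n", sum(x, 4), sum_inlined(x, 4));
    return 0;
}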
 
Here are instructions on how to merge VIs. Try it out! It's a great way to add prebuilt block diagram code to your VI.
Jarrod S.
National Instruments
Message 17 of 19

Sorry to keep reposting, but I just thought of a good reason why it might not be a good idea to call lvanalys.dll directly rather than going through the proper subVI interface: you might have issues if the DLL function calls change in future versions of LabVIEW. If you use the FFT subVI, a later version of LabVIEW will automatically replace it in your code with the appropriate new version of the FFT VI, one that calls the lvanalys DLL correctly, when you convert your code. If you call the DLL directly, LabVIEW won't know how to convert the DLL function calls appropriately. This could cause you major headaches.
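If you do insist on the direct call, the least a C-style caller would do is resolve every export up front, so that an lvanalys.dll whose interface has changed fails loudly at startup instead of halfway through a run (the SpectrumH name is from earlier in this thread; the rest is illustrative):

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

/* Abort with a clear message if an expected export has disappeared. */
static FARPROC require_export(HMODULE lib, const char *name)
{
    FARPROC p = GetProcAddress(lib, name);
    if (!p) {
        fprintf(stderr, "lvanalys.dll no longer exports %s -- "
                        "its interface must have changed\n", name);
        exit(1);
    }
    return p;
}

int main(void)
{
    HMODULE lib = LoadLibraryA("lvanalys.dll");
    if (!lib) {
        fprintf(stderr, "lvanalys.dll not found\n");
        return 1;
    }

    /* Resolve everything at startup, not lazily at first use. */
    FARPROC spectrum = require_export(lib, "SpectrumH");
    (void)spectrum;  /* would be cast to the (guessed) signature and used */

    printf("all required exports resolved\n");
    FreeLibrary(lib);
    return 0;
}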

I have no idea if this is very likely to be a problem or not likely at all, but it's something to keep in mind. Hope this helps!

Jarrod S.
National Instruments
Message 18 of 19
Thank you, Jarrod, for taking the time; even wild guesses are welcome: they make me feel less stupid 😉
I, too, guess that your technical analysis is right. I agree with you that NI's code and internals are usually great; that's why I originally posted the question: I wanted to know what I was missing. You answered it very clearly: the correct thing is to call the subVI, not the DLL, and that's why it's not documented. I've been convinced of this for some time now, and the timing tests reinforced it. A mere 8% performance difference might seem huge when doing ultimate optimization, but I'm not after such a thing, and since the maximum number of points used is quite low, I'll have no memory problems either.
I still think that these kinds of math functions would be better off public, but that's just a matter of taste after all.
Thank you for your time.

Message 19 of 19