07-12-2017 01:03 AM
@Perin wrote:
@Blokk
"Usually the last step for a mature application is to create an executable from it, and run it using the LabVIEW Run-Time Engine. This usually leads to better performance, since the LabVIEW Development environment does not need to be loaded."
Sorry, I mentioned it as a project, but we got those results only in the built application.
But then why did you show us the development environment memory usage in your first post?
07-12-2017 01:20 AM
@Blokk wrote:
Another way to reduce performance problems is your GUI! Do you really need to show ALL those indicators to the user? A professional GUI gives the user the option to show only the currently important GUI elements. SubPanels are a good tool for such cases, since they reduce the top-level VI's complexity!
We split the indicators into screens by name and show the corresponding screen by inserting it into a subpanel, actually the same way you suggested.
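LabVIEW block diagrams are graphical, so just as a rough textual sketch of that pattern (all names hypothetical), the subpanel idea is essentially: keep every screen available, but render and update only the one currently inserted:

class SubPanelHost:
    """Rough analogy of a LabVIEW subpanel: many screens exist, one is shown."""
    def __init__(self, screens):
        self.screens = screens          # screen name -> dict of indicator values
        self.active = None
    def insert_vi(self, name):
        self.active = name              # like the subpanel's Insert VI method
    def update_indicators(self, values):
        # Only the visible screen pays the update cost; hidden ones are untouched
        if self.active is not None:
            self.screens[self.active].update(values)

host = SubPanelHost({"Overview": {}, "Diagnostics": {}})
host.insert_vi("Overview")
host.update_indicators({"temperature": 25.0})
print(host.screens)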
@Blokk wrote:
The problem with this approach is that it is a real performance hog, especially if you have lots of indicators and therefore lots of references. Have you heard about clusters? Using clusters (with type definitions and Bundle/Unbundle By Name) can solve your "too many wires" problem. You also mentioned OOP; it is even more robust for such cases (I do not program in OOP myself).
Yeah, we are using typedef clusters wherever required.
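For anyone following along: a type-defined cluster is roughly a shared record type, and Bundle/Unbundle By Name replaces or reads one named element instead of routing dozens of separate wires. A loose Python analogy (field names made up):

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ChannelCluster:        # plays the role of a typedef cluster
    voltage: float
    current: float
    status: str

c = ChannelCluster(voltage=3.3, current=0.1, status="OK")
c2 = replace(c, status="ALARM")   # like Bundle By Name: update one element
print(c2.voltage, c2.status)      # like Unbundle By Name: read elements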
07-12-2017 01:23 AM
@Perin wrote:
Yeah, we are using typedef clusters wherever required.
If so, why update the indicators via their references? Sorry, but I got confused 🙂
07-12-2017 01:25 AM
@Blokk wrote:
But then why did you show us the development environment memory usage in your first post?
Sorry for showing the wrong screenshot, but running the built application only reduces memory usage by about 100 MB.
07-12-2017 01:32 AM
@Blokk wrote:
If so, why update the indicators via their references? Sorry, but I got confused 🙂
By using references, we can change the properties of a particular indicator from an INI file.
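As a minimal sketch of that workflow (file name, section, and keys are made up, and a dict stands in for the VI Server references), the INI lookup side would look something like this with Python's configparser:

import configparser

cfg = configparser.ConfigParser()
cfg.read_string("""
[Temperature]
visible = true
caption = Oven temperature (degC)
""")                                   # the real app would call cfg.read("display.ini")

indicator_props = {"Temperature": {}}  # stand-in for indicator references
for name, props in indicator_props.items():
    if cfg.has_section(name):
        props["visible"] = cfg.getboolean(name, "visible")
        props["caption"] = cfg.get(name, "caption")
print(indicator_props)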
07-12-2017 01:47 AM
@Perin wrote:
By using references, we can change the properties of a particular indicator from an INI file.
Setting properties via property nodes and references is totally OK (unless repeated too often). But updating the VALUE of an indicator via a property node has caveats compared to using a wire: the write is synchronous and forces a front-panel update on the UI thread, so it is much slower than a terminal...
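The general cost of indirection is easy to demonstrate. As a loose analogy only (the absolute numbers mean nothing for LabVIEW, where the real cost is the synchronous UI update), compare a direct write with a name-based indirect write in Python:

import timeit

class Indicator:
    def __init__(self):
        self.value = 0.0

ind = Indicator()
n = 1_000_000
direct = timeit.timeit("ind.value = 1.0", globals={"ind": ind}, number=n)
by_ref = timeit.timeit('setattr(ind, "value", 1.0)', globals={"ind": ind}, number=n)
print(f"direct (wire-like): {direct:.3f} s, indirect (reference-like): {by_ref:.3f} s")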
07-12-2017 03:33 AM
@Blokk wrote:
Setting properties via property nodes and references is totally OK (unless repeated too often). But updating the VALUE of an indicator via a property node has caveats compared to using a wire: the write is synchronous and forces a front-panel update on the UI thread, so it is much slower than a terminal...
Initially we used wires only, but we faced a lot of problems with that (it is time consuming, and wiring introduces many human errors), so we switched to references (if any error occurs, we can simply adjust the INI file to correct it).
But my question is: why did the memory usage drop after 30 minutes, and is there any way to minimize it from the beginning?