03-10-2021 11:32 AM - edited 03-10-2021 11:33 AM
@richjoh wrote:
Have you tried lowering "Limit Compiler Optimization" under the Tools/Options menu, Environment tab, at the bottom?
Set that slider to 0, then you need to make LabVIEW recompile its VIs. I think you have to close LabVIEW and reopen it to make it recompile with the new setting. To be sure, make a cosmetic change once LabVIEW has reopened and save the VI.
They recommend setting the value back to 5 when done.
This might've been it. I had tried setting that to 0, but I hadn't tried restarting LabVIEW after that. I've set it back to 5 and it's still somewhat normal.
It seems that stopping and restarting LabVIEW and switching back and forth between those settings has helped.
I'll keep testing and update this if it does stay stable.
03-15-2021 05:06 PM
Other things you could have tried to force a recompile: Ctrl-click the Run arrow to recompile the current VI, or Ctrl+Shift-click the Run arrow to recompile all the VIs in memory.
03-25-2021 09:45 AM
OK, so that was definitely the solution as far as I'm concerned. I had forgotten to mark it as the answer.
After some time testing, I can confirm it's working as normal. LabVIEW is operating decently fast with the default compiler optimizations setting (5). Just flipping it back and forth and restarting LabVIEW multiple times did the trick.
Before, it didn't matter whether I was hogging too much memory with other applications. Now it's back to normal, so that's good.
09-11-2022 11:34 PM
I tried to give a kudo but the software seemed to burp. I'll try again later, because you deserve it.
Why the kudo? Because you took just that little bit more of your time to show where I could find what the manual assumed I knew and didn't say (lest I become bored, I guess). In other words, yes, I checked the manual first.
Now my experience:
Not a large VI, really, but in the subVIs a memory-intensive VI is eventually encountered. The main VI contains only a couple of subVIs. The reason for the intense memory usage is that the FFT is working with a 2.5MB dataset. The f(t) itself is stored in a monster (from my perspective) text file, which is read in and analyzed, and then I display the magnitude and phase information in a couple of X-Y plots. I'm using the two-sided FFT -- I want to see those complex numbers do their thing. (I must admit part of the reason I'm saying this is that perhaps another reader out there can explain a better way to work with such a large dataset.)
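Not LabVIEW, but for anyone who wants to reproduce the shape of this pipeline offline, here is a minimal NumPy sketch of the same steps (load f(t), two-sided FFT, magnitude and phase). A synthetic 1 kHz tone and an assumed 1 MS/s sample rate stand in for the real text file and capture settings:

```python
import numpy as np

# Synthetic stand-in for the captured waveform; the real data would be
# read from a large text file (e.g. with np.loadtxt), which is far slower.
fs = 1.0e6                            # assumed sample rate, samples/s
n = 2_500_000                         # number of captured points
t = np.arange(n) / fs
ft = np.sin(2 * np.pi * 1000.0 * t)   # 1 kHz test tone as f(t)

spectrum = np.fft.fft(ft)             # two-sided complex FFT
mag = np.abs(spectrum)                # magnitude
phase = np.angle(spectrum)            # phase in radians, (-pi, pi]
freqs = np.fft.fftfreq(n, d=1 / fs)   # bin frequencies for plotting
```

One practical note: parsing multi-million-line text files is usually the slow part, in NumPy and in LabVIEW alike, so a binary intermediate format can shave most of the 10-second runtime if the file format is under your control.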
The whole operation takes around 10 seconds. That wasn't the problem. I don't mind that, especially if I'm getting correct information.
My numbers were this:
Prior to run, about 65MB memory.
After one run, jumps to 950MB.
During one run, jumps to 1.3GB, max.
After further runs memory usage stays around 950MB.
Moving controls and then reverting, memory usage drops to 650MB and then returns to 950MB after another run.
CPU usage never exceeds 20%.
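For scale, here is a rough back-of-envelope on what the raw arrays should cost, assuming 2.5e6 double-precision (DBL) samples per the correction below; these are illustrative estimates, not measured LabVIEW allocations:

```python
# Buffer-size arithmetic for a 2.5e6-point waveform.
# Illustrative only -- real LabVIEW memory use includes extra copies.
N = 2_500_000
DBL = 8    # bytes per double-precision float
CDB = 16   # bytes per complex double

waveform_mb = N * DBL / 1e6       # the f(t) array itself -> 20 MB
fft_mb = N * CDB / 1e6            # two-sided complex FFT result -> 40 MB
mag_phase_mb = 2 * N * DBL / 1e6  # magnitude + phase arrays -> 40 MB
print(waveform_mb, fft_mb, mag_phase_mb)
```

Since the arrays themselves only account for around 100 MB, the ~950 MB resident figure suggests most of the footprint is extra buffer copies (text-parsing intermediates, plot-indicator buffers, and so on) rather than the data itself; the Show Buffer Allocations tool can help locate those.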
Moving the slider in question from the default 5 down to 1 dropped a 4-5-second editor lag to a reasonable 1 second and restored normal control-click speed. (The lag was the worst part: clicking a control and wondering whether you actually did. That was so annoying.)
Well, there you go. Perhaps the above will be of assistance to others in some way. (Also, yes, I'm open to advice about the large-dataset thing.)
09-12-2022 02:06 AM
Small error: my data set is 2.5M points, not 2.5MB. There are 2.5x10^6 captured points in f(t), the waveform.