03-01-2015 06:08 AM
What happens if you parallelize the WOAC loop?
--- I tried decimating the array of VI RefNums and running four FOR loops on those. That should reduce the odds of Instance #0 being the last to quit but the first to be waited on.
No change in behavior though.
I think the big problem was that I was CONDITIONALLY doing the WOAC. I later found that that is a no-no - it leaves the VI reference in memory if I fail to COLLECT. So the next time up, the original batch of VIs is STILL in memory, but I've lost the references to them.
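The pitfall generalizes beyond LabVIEW: an asynchronous call that is launched but never collected keeps its handle alive. A rough Python sketch of the same idea, with `concurrent.futures` standing in for Call-and-Collect (the `block` function is illustrative, not anything from the original code):

```python
from concurrent.futures import ThreadPoolExecutor, wait

def block(n):
    # Stand-in for one launched VI instance.
    return n * n

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(block, i) for i in range(4)]
    # The lesson above: collect on EVERY exit path. Waiting only on
    # the "quit" path leaves uncollected handles behind on the other
    # paths - so always wait, even when the results are not needed.
    done, pending = wait(futures)

results = sorted(f.result() for f in done)
```

The fix is structural, not conditional: the wait happens unconditionally, and any path-specific logic only decides what to do with the collected results.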
Blog for (mostly LabVIEW) programmers: Tips And Tricks
03-01-2015 09:33 AM
Strange question. Did the conditional WOAC throw a compiler warning?
Thinking through it, a WOAC in a case structure should warn us that it might not be a good idea.
03-01-2015 09:40 AM
Did the conditional WOAC throw a compiler warning?
No warnings. If I closed the container window from a menu item, I did NOT do the collection (because I did not need to wait on it).
If I closed it from a QUIT operation, then I DID wait on the collection.
But not collecting the results was a bad idea.
03-01-2015 01:09 PM
CoastalMaineBird wrote:
- Async calls are actually a call pool, so you may have 72 refs in that array but they should be equal to one another. Meaning you really just need the ref and a count.
--- I'm not sure why you think that - I don't think that could possibly work. If I had two calls to the same VI, I should be able to do something to one of them without doing it to the other, and if there's only one ref, that cannot happen. In any case, I tested it and I get a different RefNum for each instance, as I would expect:
Ah, I see - I've never used async calls without using 0x40 (in which case it's a call pool whose size you can change with the VI Server method).
If you don't use the 0x40 option, don't you run into the bullet point "Serial or parallel execution" on this help page?
http://zone.ni.com/reference/en-XX/help/371361L-01/glang/start_asynchronous_call/
03-02-2015 04:10 AM
If you only want to wait for all processes, it sounds like a Rendezvous to me. The event-and-counter approach of course also works and does basically the same thing.
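For the "wait until all N processes have arrived" pattern, a rendezvous is exactly a barrier. A minimal Python analogy (with `threading.Barrier` standing in for LabVIEW's Rendezvous VIs; the `worker` function is illustrative):

```python
import threading

N = 4
rendezvous = threading.Barrier(N + 1)   # N workers + the container
finished = []

def worker(i):
    finished.append(i)       # simulate the block completing its shutdown
    rendezvous.wait()        # check in: block until everyone has arrived

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()

rendezvous.wait()            # container proceeds only once all N check in
for t in threads:
    t.join()
```

The container's `wait` cannot return until every worker has reached its own `wait`, which is the same guarantee the event-and-counter scheme provides by hand.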
/Y
03-02-2015 06:47 AM
If you don't use the x40 option don't you run into the bullet point "Serial or parallel execution" in this help page?
I didn't consider option 0x40 as it just didn't come to mind.
That bullet point doesn't apply to the way I'm doing things. I am opening a DIFFERENT reference for each instance. I showed above how the refs are all different numbers.
Does that mean that I have 72 copies of the same VI code in memory?
Perhaps I could save RAM by launching ONE ref with that 0x40 option, multiple times.
I need all 72 to work in parallel, with separate data spaces.
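The reentrancy question maps onto ordinary languages too: there is one copy of the code, and each call gets its own data space. A hedged Python sketch of that separation (the `Block` class and the counts are illustrative, not the actual application):

```python
from concurrent.futures import ThreadPoolExecutor

class Block:
    """One clone: shared code, but a private data space per instance."""
    def __init__(self, ident):
        self.ident = ident
        self.count = 0                    # per-instance state, not shared

    def run(self):
        for _ in range(100):
            self.count += 1               # touches only this instance's data
        return self.ident, self.count

blocks = [Block(i) for i in range(72)]    # 72 data spaces...
with ThreadPoolExecutor(max_workers=72) as pool:
    results = list(pool.map(Block.run, blocks))   # ...one body of code
```

All 72 run in parallel against the same method, yet no instance ever sees another's counter - which is the property the 72 separate data spaces are meant to guarantee.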
03-02-2015 07:43 AM
I still have trouble with this, it seems.
1... Judging by the WORKING SET SIZE memory parameter from Windows, I have a memory leak. When I OPEN the window, the usage goes UP; when I close the window, usage does NOT go down. That may be a separate problem.
2... The CALL and FORGET, or CALL and COLLECT, just seem to be flaky. Sometimes they work great, sometimes they don't. It's like walking thru sludge - it's extremely slow in starting them up. Even after a computer restart, sometimes it's good, sometimes not.
So, I've gone back to my original way of launching, which is to use the RUN VI method. I went away from this because I thought the SET CTL VALUE things a bit inelegant. But in fact, it works, first time every time, lickety split.
This has worked for several years, but in a different situation: in the old system, when I close the container window, I don't care about how long it takes to shut down - it's not possible to quit the program before that happens. But in the new system, it's possible to quit from the main window, so I need to close the container window and know that everything has properly finished, before quitting.
I still have the memory leak, but the operation is smooth as can be. And the empty-queue trick works fine for detecting the closure.
Anybody see a downside to this method?
03-02-2015 08:05 AM - edited 03-02-2015 08:09 AM
Wow, OK - clone pools. 72 clones take some time to launch. All of those data spaces need to be allocated. Use the allocate-clone method and bury the time in your init routine.
And refs: yeah, they are different numbers... run them into an equals primitive... They point to the reentrant original!
03-02-2015 11:53 AM
Are you sure it's a memory leak? LabVIEW is somewhat notorious for not releasing memory back to Windows, but it will happily reuse that already-claimed memory itself if it needs it (in preference to asking for more from the operating system). If you're not seeing steady growth in memory use, it may not be a leak.
03-02-2015 01:19 PM
Are you sure it's a memory leak?
No, I'm not sure. The more I work on this, the less I know.
It turns out it's NOT "first-time every-time" as I said earlier. It's sometimes yes, sometimes no.
I changed the block code so that the first thing it does is to wait 5 sec. That means that nothing is happening except the launch code as shown above. The individual blocks aren't competing for CPU time with the container window.
Still, sometimes it runs lickety split, and sometimes it's trudging thru sludge.