01-17-2013 03:51 AM
I just today tried for the very first time to use the "Wait on multiple" primitive for notifiers.
I noticed rather quickly that it does not do what I thought. It returns as soon as ANY of the notifiers returns new data; I thought it would wait until ALL notifiers have new data.
I have the problem where I am subscribing to a list of values within my program which can vary in length (with no clear upper boundary). These values are published at "more or less" the same time, but sequentially calling "Wait for Notification" results in waiting multiple update cycles, as the act of reading one notifier causes enough delay to "miss" the simultaneous update of the next notifier.
Is there no option to
a) Pass in a timestamp from which point on the wait function should interpret values as new? This would allow calling the notifiers sequentially without inducing the extra wait but at the same time ensuring the data is as fresh as or fresher than the specified timestamp.
b) Have a wait for all function where the function only returns when ALL notifiers have updated their data.
I have seen the "Wait for ALL" example but it does not do what I want. It simply waits for ONE notifier and then reads the rest without waiting. This can lead to race conditions in my code.
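To make it concrete, here's roughly what I mean in Python-style pseudocode (just a sketch of the semantics I'm after; obviously this isn't LabVIEW and the names are made up):

    import threading, time

    class Notifier:
        """Stand-in for a notifier: holds the latest value plus the time it was sent."""
        def __init__(self):
            self._cond = threading.Condition()
            self._value = None
            self._stamp = None                      # time of the last send

        def send(self, value):
            with self._cond:
                self._value, self._stamp = value, time.monotonic()
                self._cond.notify_all()

        def wait_newer_than(self, t):
            """Option a): block until this notifier holds data posted after time t."""
            with self._cond:
                while self._stamp is None or self._stamp <= t:
                    self._cond.wait()
                return self._value

    def wait_on_all(notifiers, t):
        """Option b): return only once EVERY notifier has data newer than t.
        Because each wait is relative to t rather than to "now", calling them
        one after the other doesn't cost an extra update cycle."""
        return [n.wait_newer_than(t) for n in notifiers]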
Shane.
01-17-2013 05:03 AM
Sorted it (more or less) using a parallel for loop with a relatively large number of threads. Notifier counts beyond this number still mean missing an update cycle, but it's a lot better than before.
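In rough Python terms the workaround looks something like this (one queue per notifier as a stand-in, with the pool size playing the role of the For Loop's configured number of parallel instances):

    import queue
    from concurrent.futures import ThreadPoolExecutor

    def wait_on_all_parallel(channels, pool_size=16):
        """channels: one queue.Queue per notifier, each fed by its publisher.
        map() only returns once every get() has completed, i.e. once every
        channel has delivered a new item."""
        with ThreadPoolExecutor(max_workers=pool_size) as pool:
            return list(pool.map(lambda q: q.get(), channels))

With more channels than pool_size, the surplus waits only start once the first batch has finished, which is the extra update cycle I mentioned.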
It'd still be cool to have a primitive "Wait on ALL" for notifiers....
Shane.
01-17-2013 06:07 AM - edited 01-17-2013 06:09 AM
I believe the proper solution for what you want could be a rendezvous, but I suspect it's not what you want as a) RVs don't have data and b) all sides are required to wait and it seems you only want one place to wait.
One thing which I believe should work is calling WoN With History in a non-parallel for loop (iterate over your notifiers) with an infinite timeout and ignore previous set to F. The "With History" part allows each iteration of the for loop* to remember its notification details, so each notifier which already has a notification lets the loop proceed immediately, while those which don't hold the loop until they get one. The result should be that the loop finishes once all notifiers have a notification.
* Technically it's each reference given to WoNWH, but I'm assuming the references are basically unchanging.
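Roughly, the behaviour I'm describing works like this, sketched in Python terms rather than the actual VIs (the serial counter stands in for the notification details WoNWH keeps per reference):

    import threading

    class Notifier:
        def __init__(self):
            self._cond = threading.Condition()
            self._value = None
            self._serial = 0                      # counts "send notification" calls

        def send(self, value):
            with self._cond:
                self._value = value
                self._serial += 1
                self._cond.notify_all()

    def wait_with_history(notifier, history):
        """Return as soon as this notifier has a notification newer than the
        one this waiter last consumed (i.e. ignore previous = F)."""
        with notifier._cond:
            last = history.get(id(notifier), 0)
            while notifier._serial <= last:
                notifier._cond.wait()             # infinite timeout
            history[id(notifier)] = notifier._serial
            return notifier._value

    def wait_on_all(notifiers, history):
        # Plain (non-parallel) loop: notifiers that already fired fall through
        # immediately; the others hold the loop until they do.
        return [wait_with_history(n, history) for n in notifiers]

The history dict plays the role of the per-reference memory the "With History" variant keeps internally, so it has to be kept alive across calls.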
01-17-2013 07:09 AM
Are you recommending an array of WoNWH references to the For Loop?
Since the notifications are not sequential, would that cause erratic behavior?
I guess it would work, but since I have never tried this, I'd be concerned about the behavior.
I've done something similar with Rendez-vous, which worked okay. And I did use WoNWH.
01-17-2013 07:51 AM
Because Ignore Previous is F, there shouldn't be a problem with the order of the notifiers, as notifiers with an existing notification will proceed immediately.
One problem with my suggestion, though, is that Shane suggested a concept of a "cycle", which seems to mean that there is a bunch of notifications which go together. If the code starts running late, it now has a problem where it responds to notifications from the previous cycle and it gets stuck with stale data in further iterations as well.
A solution to this issue might be to use an occurrence or something similar to synchronize the calls (by marking the start or end of the cycle, or both), so that if the code skipped a cycle for some reason, it does one loop pass with a timeout of zero to clear the buffers and get resync'ed to the cycle. Shane would have to say if that's relevant.
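In Python-ish terms the resync pass would look something like this (queues standing in for the notifier buffers; just a sketch of the idea):

    import queue

    def drain(channels):
        """Zero-timeout pass: throw away stale, already-buffered notifications."""
        for q in channels:
            try:
                while True:
                    q.get_nowait()
            except queue.Empty:
                pass

    def resync_and_wait(channels):
        drain(channels)                        # clear the previous cycle's leftovers
        return [q.get() for q in channels]     # then block until genuinely new data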
01-17-2013 08:27 AM
The problem with accessing the notifiers sequentially is that they are "more or less" simultaneously updated, meaning that there are some small timing differences between them and they may also be updated at different rates. For the purpose of generalisation I have to assume they are completely uncoordinated. Their sources are varied, and issuing occurrences all over the place is certainly more trouble than it's worth. We have a lot of modules running in parallel with their own processes and no central synchronisation.
Just because one notifier registers a new value it doesn't automatically mean that I can assume the others have been updated also.
The example shipped with LV does NOT do what I want because it simply waits for the first notifier to update and then reads ALL of them without waiting for new data. Not what I want at all.
Shane.
01-17-2013 10:19 AM
I don't understand. Either you have a "cycle" (all notifiers being updated at more or less the same time) or they're completely unrelated. I'm pretty sure what I suggested should work, but it depends on your exact needs, which I'm not entirely clear about. I think I understand, but I'm not sure.
Can you upload a simple example simulating the situation you have? Basically, it should generate parallel running VIs, with each VI updating some notifier (ideally they should include a cycle number in the data, assuming that's relevant; I would suggest simply logging the start time and doing a Q&R (Quotient & Remainder) of the difference between the current time and the start time by the "cycle length" to get that number). This would help with having a baseline.
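Something along these lines, sketched in Python rather than VIs (all names and rates here are made up for the simulation):

    import threading, time, queue, random

    CYCLE_LEN = 0.1                            # arbitrary cycle length for the simulation
    START = time.monotonic()

    def publisher(q):
        """Stand-in for one parallel VI: publish (cycle number, payload) forever."""
        while True:
            time.sleep(CYCLE_LEN + random.uniform(0, 0.01))   # "more or less" on time
            elapsed = time.monotonic() - START
            cycle = int(elapsed // CYCLE_LEN)  # the quotient from the Q&R suggestion
            q.put((cycle, elapsed))

    channels = [queue.Queue() for _ in range(4)]
    for q in channels:
        threading.Thread(target=publisher, args=(q,), daemon=True).start()

    # baseline listener: just see what arrives on one channel
    for _ in range(5):
        print(channels[0].get())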
01-17-2013 04:50 PM - edited 01-17-2013 04:50 PM
Example code would be kind of hard; it's a rather large application which has grown organically over the years and simply cannot be split up (without investing months of work). Yeah, that's my daily code.
How do I best explain? There is a main source of data which is broadcast via Notifier (say every 20 ms). This is received by several modules which in turn do something with it and publish further data, which may again be listened to by another module and published again.
As such it's a chain reaction of notifiers which is "kind of" synchronised, but any module can take a little longer to process and pass on the data, hence it's not really accurate to call it "synchronous".
I want to listen for the end result of all this communication (the "final" data) from different stages of processing. The stages can be a few microseconds apart or several milliseconds (with a repeat rate of 20 ms, that's kind of chaotic).
I want to be able to define a point in time "X" and have a notifier wait until EVERY notifier has new data AFTER that time X. i.e. the next round of "new" data.
Could a different architecture change the behaviour? Hell yes. Could it be improved? Yes. Will this happen in the next year? Hell no.
Shane.
01-17-2013 05:52 PM
In that case, my suggestion can work IF you're able to let the final destination have a "reset" step, similar to how the attached example (2011) works. Each time you click reset, the listener clears the buffers and waits until all the notifiers have new data.
Note that the example does have one flaw - because there is no sync and the resetting is sequential, there is no single time X for which you can guarantee that the messages will come after. In theory it's possible that the first notifier will get a message after it was reset but before the last notifier was reset. In practice, though, it can be expected that all notifiers will be reset within a single ms, so I think this should do what you want.
01-18-2013 12:54 AM
The "new" values don't have to be part of the same "cycle" at all, the leeway in the system is OK with this. Also, there's no real way of knowing which value is "last" because the modules can be spawned dynamically and they can register and publish dynamically so there's no registration of who does what with which signal and that overhead would probably be rather cumbersome to implement.
I have the solution with the parallel for loop up and running and it's OK within the range of expectations, but it's got an inherent upper limit of threads (N from the For Loop configuration), meaning that a multiple of this number of channels still incurs at least one cycle delay, as the notifiers can only be waited on in parallel groups of N.
The only way to get 100% of what I'm looking for would appear to be a new primitive.
Shane.