DQMH Consortium Toolkits Discussions


Logger and Acquisition modules approach

Hello!
I am currently working on the development of an application for a test bench, which will interact with various devices (such as power supplies, gauges, valves, pumps, thermocouples, etc.) using either serial or TCP protocols. The application will also be able to work with DAQ devices and log and monitor the collected data.

 

At the moment, I am stuck in the development phase of the logger and acquisition modules. For example, I have 4 DAQ acquisition tasks (each on a separate NI card) and about 20 different devices whose data must be collected. It's difficult for me to determine the most effective method for collecting and logging the data.

 

1. At first, I considered making modules for each type of acquisition device (DAQ, thermocouples, vacuum gauges, etc.) and implementing as many helper loops as there are devices. For example, the DAQ module would have 4 helper loops with a private wakeUpLoop request shared by all of them. This approach makes scaling difficult: for each new device added to the system, I would need to create a new helper loop. Additionally, I am not sure how to transfer data from these helper loops to the logger. It would be easier if there were only one helper loop where I could iterate over an array of TaskIDs using DAQmx Read and send a request ("Append Data" with the collected data as payload) to the logger. However, each task's data would then be shifted in time by the duration of the preceding DAQmx Reads (see the sketch after point 2 below).

 

2. So, I thought I should create cloneable modules. For example, Main VI starts as many cloneable DAQ modules as there are DAQ tasks (in this case, 4), each with one helper loop. The drawback is that I can't use requests with replies addressed to all clones (passing module ID = -1), since I would only receive a reply from one module.
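To make the time shift from option 1 concrete, here is a minimal Python sketch (not LabVIEW; the task names and logger are hypothetical, and DAQmx Read is stubbed out with a sleep) of one helper loop polling several tasks in sequence:

```python
import time

def read_task(task_id):
    """Stand-in for DAQmx Read; a real read blocks until samples arrive."""
    time.sleep(0.05)          # simulate a 50 ms read per task
    return [0.0]              # placeholder samples

def poll_once(task_ids):
    """One pass of the single helper loop: read each task in turn."""
    records = []
    for task_id in task_ids:
        data = read_task(task_id)
        # The timestamp is taken after each read finishes, so task N's
        # stamp trails task 0's by roughly N * (read duration) -- the
        # shift described in point 1.
        records.append({"task": task_id, "t": time.time(), "data": data})
    return records

records = poll_once(["task0", "task1", "task2", "task3"])
print([round(r["t"] - records[0]["t"], 2) for r in records])  # ~[0.0, 0.05, 0.1, 0.15]
```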

I'm considering the following solution (see attached picture):

example.jpg

The logger has a public "Send Data" request that every acquisition module (AI DAQ, vacuum gauge, etc.) can use to transfer data to the logger's Event Handling Loop (EHL). The EHL enqueues the data, which is then bundled into a data cluster in the Message Handling Loop (MHL). Additionally, the logger has a helper loop that enqueues a "Save To File" message every N seconds. This message is processed by the MHL, which passes the data cluster to SaveToFile.vi.
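In text form, the shape of that logger might look like the following Python sketch (a queue stands in for the EHL-to-MHL message queue; all names are hypothetical):

```python
import queue
import threading
import time

msg_queue = queue.Queue()   # stands in for the EHL -> MHL message queue
data_cluster = {}           # stands in for the Data typedef cluster

def save_to_file(snapshot):
    print("saving", snapshot)                 # stand-in for SaveToFile.vi

def ehl_send_data(payload):
    """'Send Data' handler: only enqueue, never block the event loop."""
    msg_queue.put(("Append Data", payload))

def helper_loop(period_s, stop):
    """Timer loop: enqueue a 'Save To File' message every N seconds."""
    while not stop.is_set():
        time.sleep(period_s)
        msg_queue.put(("Save To File", None))

def mhl(stop):
    """Message handling loop: bundle incoming data, save on request."""
    while not stop.is_set():
        msg, payload = msg_queue.get()
        if msg == "Append Data":
            data_cluster[payload["source"]] = payload["data"]
        elif msg == "Save To File":
            save_to_file(dict(data_cluster))  # snapshot of the cluster

stop = threading.Event()
threading.Thread(target=helper_loop, args=(2.0, stop), daemon=True).start()
threading.Thread(target=mhl, args=(stop,), daemon=True).start()
ehl_send_data({"source": "AI DAQ 1", "data": [1.2, 3.4]})
time.sleep(2.5)
stop.set()
```

Note that bundling by name this way keeps only the latest value per source between two "Save To File" messages; anything that arrives in between overwrites the previous sample from the same source.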

 

I would like to find a way to store data with the closest possible timestamps, because it would allow us to estimate different parameters within a single time "slice".

 

I'm sure there's a more sensible approach than this. I would greatly appreciate any suggestions you may have on that.

 

Best regards,

Yan

Message 1 of 13

@Shatoshi wrote:

...I am currently working on the development of an application for a test bench, which will interact with various devices (such as power supplies, gauges, valves, pumps, thermocouples, etc.) using either serial or TCP protocols. The application will also be able to work with DAQ devices and log and monitor the collected data.

 

At the moment, I am stuck in the development phase of the logger and acquisition modules. For example, I have 4 DAQ acquisition tasks (each on a separate NI card) and about 20 different devices whose data must be collected. It's difficult for me to determine the most effective method for collecting and logging the data.

...

I'm considering the following solution (see attached picture):

...

I would like to find a way to store data with the closest possible timestamps, because it would allow us to estimate different parameters within a single time "slice".

...

It's not clear whether your requirement is to have exactly one time base that all measurements will use, or whether each set of measurements will have its own time scale. Your post implies that all measurements track to one time base. If so, your best bet is to 'align' the sampling clocks as close to the hardware as possible, rather than in the upper layers of your application.
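For the DAQ tasks specifically, hardware alignment usually means sharing one sample clock. A rough sketch with the nidaqmx Python API (device and channel names are placeholders; routing a clock between separate cards also needs a physical path such as RTSI or a PXI backplane):

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

# Master task generates the sample clock.
master = nidaqmx.Task()
master.ai_channels.add_ai_voltage_chan("Dev1/ai0")
master.timing.cfg_samp_clk_timing(
    1000.0, sample_mode=AcquisitionType.CONTINUOUS)

# Follower task samples off the master's exported clock, so both
# cards take their samples on the same edges.
follower = nidaqmx.Task()
follower.ai_channels.add_ai_voltage_chan("Dev2/ai0")
follower.timing.cfg_samp_clk_timing(
    1000.0, source="/Dev1/ai/SampleClock",
    sample_mode=AcquisitionType.CONTINUOUS)

follower.start()  # arm the follower first...
master.start()    # ...then start the clock source

a = master.read(number_of_samples_per_channel=100)
b = follower.read(number_of_samples_per_channel=100)

master.close()
follower.close()
```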

Your present implementation is a 'pull' kind of architecture. This is suitable when the 'client' wants to be in control of when information is needed. I would not recommend this approach for data acquisition, as you would then have to deal with questions like - what should your DAQ clones do when the request never comes or is delayed enough to cause buffer overruns? There are related considerations like polling (requesting) too fast or too slow, and increased messaging traffic.

On the flip side, a 'push' kind of architecture, where DQMH also shines, is event driven. Your DAQ modules broadcast their waveforms, and it is up to the consumer (the Logger module, or something in between that could stitch your different DAQ sources onto one common time base) to process the data or reject it.
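The 'push' shape, reduced to a minimal Python sketch (plain observer pattern; none of these names come from DQMH itself):

```python
class Broadcaster:
    """Each DAQ module owns one of these and fires 'New Data' broadcasts."""

    def __init__(self):
        self._listeners = []

    def register(self, callback):
        self._listeners.append(callback)

    def broadcast(self, waveform):
        # The producer never waits on consumers: fire and move on.
        for cb in self._listeners:
            cb(waveform)

# Consumer side: the Logger (or an intermediary) decides what to keep.
daq_new_data = Broadcaster()
daq_new_data.register(lambda wf: print("logger got", wf))
daq_new_data.broadcast({"t0": 0.0, "dt": 0.001, "y": [0.1, 0.2]})
```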

Alternatively, if you intend to log at different rates, you could consider logging to different files, one per time base, or choose a binary format like TDMS or HDF.

Message 2 of 13
@Shatoshi wrote:

...

I would like to find a way to store data with the closest possible timestamps, because it would allow us to estimate different parameters within a single time "slice".

...


I guess Shatoshi meant that each set of measurements from the modules would be sent without timestamps and collected in Data (a typedef cluster wired to the MHL) at different moments. Meanwhile, every N seconds, a prioritized message would be sent from the helper loop to the MHL via a queue, triggering the MHL to save the contents of the Data cluster to file.

Therefore, the core question is: "Is it acceptable to continuously send measured data from the modules to the EHL of the Logger module?" I suspect there may be no practical difference between requesting and broadcasting in this context, as the EHL would run continuously in either case.

Message 3 of 13

 

@Shatoshi, there's almost never one correct solution when it comes to app architecture.

 

One way to do that, which would change your approach, would be to send data from your acquisition modules through a broadcast. That way, you are not coupling the acquisition modules to your logger.

Then, instead of making the Logger module listen directly to the broadcasts, I would have a proxy module in charge of listening to all the "New Data" broadcasts using one helper loop. This proxy module would be responsible for broadcasting all the data at a defined frequency. Any module, like the Logger module, that needs the data would listen to the proxy module. Adding a new module that consumes the data then becomes easier.
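A rough Python sketch of that proxy idea (hypothetical names; the DQMH version would be a helper loop with dynamically registered broadcast events):

```python
import threading
import time

class Proxy:
    """Listens to every 'New Data' broadcast, keeps the latest value per
    source, and rebroadcasts the whole set at a fixed frequency."""

    def __init__(self, period_s):
        self._latest = {}
        self._lock = threading.Lock()
        self._listeners = []
        self._period = period_s

    def on_new_data(self, source, data):
        # Called from every acquisition module's broadcast.
        with self._lock:
            self._latest[source] = data

    def register(self, callback):
        self._listeners.append(callback)

    def run(self, stop):
        # Helper-loop equivalent: publish one coherent snapshot per period.
        while not stop.is_set():
            time.sleep(self._period)
            with self._lock:
                snapshot = dict(self._latest)
            for cb in self._listeners:
                cb(snapshot)

proxy = Proxy(period_s=1.0)
proxy.register(lambda snap: print("logger got", snap))
stop = threading.Event()
threading.Thread(target=proxy.run, args=(stop,), daemon=True).start()
proxy.on_new_data("AI DAQ 1", [1.2, 3.4])
proxy.on_new_data("Vacuum Gauge 1", [5.6])
time.sleep(1.5)
stop.set()
```

Consumers then see one coherent snapshot per period, regardless of how many producers exist or how fast they broadcast.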

 

This is a big-picture view. There may be things that prevent you from using this design, but I wanted to share it as an example.

Hope it helps.

 

 


Olivier Jourdan

Wovalab founder | DQMH Consortium board member | LinkedIn

Stop writing your LabVIEW code documentation, use Antidoc!
Message 4 of 13

@Dhakkan wrote:

 

Your present implementation is a 'pull' kind of architecture. This is suitable when the 'client' wants to be in control of when information is needed. I would not recommend this approach for data acquisition, as you would then have to deal with questions like - what should your DAQ clones do when the request never comes or is delayed enough to cause buffer overruns? There are related considerations like polling (requesting) too fast or too slow, and increased messaging traffic.

On the flip side, a 'push' kind of architecture, where DQMH also shines, is event driven. Your DAQ modules broadcast their waveforms, and it is up to the consumer (the Logger module, or something in between that could stitch your different DAQ sources onto one common time base) to process the data or reject it.


Isn't the implementation I described actually a "push" type? The acquisition modules use a request to the Logger module to send new data on every iteration of their helper loops. The consumer (the Logger module) then stitches the data together into its data cluster, for example by bundling it by name.

 

This interaction between the Acquisition and Logger modules is similar to that used in the DQMH CML template example. I'm trying to expand it into a many-to-one relationship.

 

Please let me know if I have misunderstood anything.

Message 5 of 13

@Olivier-JOURDAN wrote:

 

One way to do that, which would change your approach, would be to send data from your acquisition modules through a broadcast. That way, you are not coupling the acquisition modules to your logger.


Thank you for pointing that out! I completely forgot that I was actually coupling the modules by using a request instead of a broadcast.

 


@Olivier-JOURDAN wrote:

 

Then, instead of making the Logger module listen directly to the broadcasts, I would have a proxy module in charge of listening to all the "New Data" broadcasts using one helper loop. This proxy module would be responsible for broadcasting all the data at a defined frequency. Any module, like the Logger module, that needs the data would listen to the proxy module. Adding a new module that consumes the data then becomes easier.


Do I understand correctly that there will be a separate case for each broadcast event source within the proxy's helper loop, or is there a way to have a common case that handles several event sources, similar to what is shown in the attached image?

 

several sources.png

Message 6 of 13

@Shatoshi wrote:

 

Isn't the implementation I described actually a "push" type? The acquisition modules use a request to the Logger module to send new data on every iteration of their helper loops. The consumer (the Logger module) then stitches the data together into its data cluster, for example by bundling it by name.

You are correct! I misread the 'request' between the two modules, DAQ clone and Logger. And I see from your reply to @Olivier-JOURDAN that broadcast is indeed the way to go.

 

Message 7 of 13

@Olivier-JOURDAN, the described design requires adding another helper loop containing an event structure. But how will that structure behave when data from different broadcast sources arrives continuously and frequently? In other words, is it OK for an event case structure to process that much data at once, and could this approach run into race conditions?

As an example, I have around 50 different modules, and every one of them broadcasts data (some are singletons, others are cloneable). Doesn't that mean the event structure in the helper loop has to run continuously and may end up falling behind?

Message 8 of 13

@Shatoshi wrote:

@Olivier-JOURDAN wrote:

 

Then, instead of making the Logger module listen directly to the broadcasts, I would have a proxy module in charge of listening to all the "New Data" broadcasts using one helper loop. This proxy module would be responsible for broadcasting all the data at a defined frequency. Any module, like the Logger module, that needs the data would listen to the proxy module. Adding a new module that consumes the data then becomes easier.


Do I understand correctly that there will be a separate case for each broadcast event source within the proxy's helper loop, or is there a way to have a common case that handles several event sources, similar to what is shown in the attached image?

 

several sources.png


Both are possible, but I would tend to go with one case for all "New Data" events. You will need to ensure that all your "New Data" events use the same data structure. You can also dynamically register the events in an array to make adding a new acquisition module easier.
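In text form, the single-case idea amounts to one handler plus a registration array (a Python sketch with hypothetical names; in DQMH this would be a dynamic event registration array feeding one event case):

```python
latest = {}   # latest data per module, keyed by the payload's source field

def on_new_data(event):
    """One handler for every 'New Data' event: the shared payload
    structure carries the source, so no per-module case is needed."""
    latest[event["module"]] = event["data"]

# Dynamic registration: build the subscription list from an array of
# modules, so adding an acquisition module means appending one entry.
modules = ["AI DAQ 1", "AI DAQ 2", "Vacuum Gauge 1"]
subscriptions = [(m, on_new_data) for m in modules]

for module, handler in subscriptions:
    handler({"module": module, "data": [0.0]})   # simulate one broadcast each
print(latest)
```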


Olivier Jourdan

Wovalab founder | DQMH Consortium board member | LinkedIn

Stop writing your LabVIEW code documentation, use Antidoc!
Message 9 of 13

@Tenru wrote:

@Olivier-JOURDAN, the described design requires adding another helper loop containing an event structure. But how will that structure behave when data from different broadcast sources arrives continuously and frequently? In other words, is it OK for an event case structure to process that much data at once, and could this approach run into race conditions?

As an example, I have around 50 different modules, and every one of them broadcasts data (some are singletons, others are cloneable). Doesn't that mean the event structure in the helper loop has to run continuously and may end up falling behind?


I don't see a risk of race conditions here, but you're right to point out the possibility of having too much data to handle. Although this concern is not specific to the Event Structure, it does have all the features you need to keep your code performant enough for your data flow.

If you're unsure how to do that, I encourage you to check out examples 08a and 08b here. They demonstrate some ways to handle an overflowing Event Structure.
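One common mitigation, sketched in Python (this is the general 'lossy buffer' idea, not necessarily what examples 08a/08b implement): bound the buffer and let the oldest items fall out when the consumer lags.

```python
from collections import deque

class LossyBuffer:
    """Bounded buffer: when full, the oldest item is silently dropped,
    so a slow consumer degrades gracefully instead of stalling or
    accumulating an unbounded backlog."""

    def __init__(self, maxlen):
        self._items = deque(maxlen=maxlen)  # deque discards oldest when full

    def put(self, item):
        self._items.append(item)

    def drain(self):
        """Take everything currently buffered (e.g. once per save period)."""
        items = list(self._items)
        self._items.clear()
        return items

buf = LossyBuffer(maxlen=3)
for i in range(10):
    buf.put(i)
print(buf.drain())   # [7, 8, 9] -- only the newest three survive
```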


Olivier Jourdan

Wovalab founder | DQMH Consortium board member | LinkedIn

Stop writing your LabVIEW code documentation, use Antidoc!
Message 10 of 13