DQMH Consortium Toolkits Discussions


Logger and Acquisition modules approach

Hello, @Olivier-JOURDAN!

 

I hope you can give me some guidance on how to reliably determine which cloneable acquisition module is broadcasting data, so that I can route the data to the corresponding chart and arrange for logging to a file. I don't like the idea of a case structure for distributing the data (if it's module 1 -> graph 1, if it's module 2 -> graph 2, etc.).
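To make the question concrete, here is the kind of routing I have in mind, sketched in Python only because I can't paste a block diagram as text (all the names are made up, and this assumes the broadcast payload carries some identifier): a lookup table keyed on the module ID replaces the per-module case structure.

```python
# Hypothetical sketch: route broadcast data by a key carried in the
# payload instead of a per-module case structure.
from typing import Callable, Dict, List

# One update function per chart, registered once at startup
# (in LabVIEW terms, something like a map of module ID -> chart).
chart_handlers: Dict[str, Callable[[List[float]], None]] = {}

def register_chart(module_id: str, handler: Callable[[List[float]], None]) -> None:
    """Associate a data source with the chart that should display it."""
    chart_handlers[module_id] = handler

def on_new_data(module_id: str, samples: List[float]) -> None:
    """Called on every 'New Data' broadcast; no case structure needed."""
    handler = chart_handlers.get(module_id)
    if handler is not None:
        handler(samples)

# Adding a fourth acquisition module becomes one registration call,
# not a new case in a case structure.
register_chart("DAQ AI 1", lambda s: print("chart 1:", s))
register_chart("DAQ AI 2", lambda s: print("chart 2:", s))
on_new_data("DAQ AI 1", [1.0, 2.0, 3.0])
```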

 

I remember you suggested creating a proxy module "responsible for listening to all 'New Data' broadcasts" instead of having that done inside the Logger module. I might switch to that later, but the same issue will still exist for the proxy module, won't it?

 

For example, I have three CSV files with AI channel settings. The Main Controller module reads these files and starts one cloneable DAQ AI module per file, so there is a relationship between the module ID and the task name (sketched in text right after the screenshots below). When I put the modules into the "Start Acquiring" state, they begin broadcasting data from a helper loop timeout case. Here are the DAQ AI module's broadcast data payload and helper loop:

 

[Image: AI DAQ broadcast payload.png]

 

[Image: DAQ AI Helper Loop.png]
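In text form, the controller-side bookkeeping I mean is roughly this (Python stands in for the block diagram; the file names and the launch call are invented):

```python
# Hypothetical sketch: the Main Controller starts one clone per settings
# file and records which module ID owns which task.
from pathlib import Path
from typing import Dict

def start_daq_clone(settings_file: Path) -> str:
    """Stand-in for starting a cloneable DAQ AI module; the real code
    would parse the channel settings and launch a DQMH clone, then
    return the clone's module ID."""
    return f"clone-for-{settings_file.stem}"

settings_files = [Path("AI_task_1.csv"), Path("AI_task_2.csv"), Path("AI_task_3.csv")]

module_to_task: Dict[str, str] = {}
for f in settings_files:
    module_id = start_daq_clone(f)
    module_to_task[module_id] = f.stem  # task name taken from the file name

# Later, a broadcast from module_id can be resolved to its task name
# (and from there to a chart or a column in the log file) by lookup.
print(module_to_task)
```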

 

The Logger module has its own helper loop that is registered for the broadcast events of every acquisition module. The data is bundled by name into a data cluster, depending on which acquisition module (DAQ, temperature, pressure, etc.) sent it. The Logger also has a second helper loop whose timeout case enqueues the "Save to File" message. Here are the Logger module's data cluster and helper loops (a textual sketch of the two loops follows the screenshots):

 

[Image: Logger Data Cluster.png]

 

[Image: Logger Helper Loops.png]
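In text form, the two helper loops amount to an event-driven update of a shared record plus a periodic flush; a rough Python equivalent, with the cluster fields invented for illustration:

```python
# Hypothetical sketch of the Logger's two helper loops: a structure
# updated on every broadcast, and a timed loop that snapshots it.
import threading
import time

lock = threading.Lock()
# Stand-in for the Logger data cluster, bundled by name per source.
data_cluster = {"DAQ": [], "Temperature": [], "Pressure": []}

def on_broadcast(source: str, samples: list) -> None:
    """Helper loop 1: runs on each 'New Data' broadcast event."""
    with lock:
        data_cluster[source] = samples

def save_to_file_loop(period_s: float, stop: threading.Event) -> None:
    """Helper loop 2: the timeout case that enqueues 'Save to File'."""
    while not stop.wait(period_s):
        with lock:
            snapshot = dict(data_cluster)
        print("Save to File:", snapshot)  # real code appends to the CSV

stop = threading.Event()
threading.Thread(target=save_to_file_loop, args=(1.0, stop), daemon=True).start()
on_broadcast("Temperature", [23.5])
time.sleep(2.5)
stop.set()
```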

I can't think of a way to do this without a case structure for the cloneable modules' broadcast data. I would really appreciate any help!

Message 11 of 13

Hi @Shatoshi,

Before giving you any advice, I have some questions.

What are your requirements for the data file content?

Do you want to save the data acquired by each clone instance in the same file, continuously and without loss?


Olivier Jourdan

Wovalab founder | DQMH Consortium board member | LinkedIn

Stop writing your LabVIEW code documentation, use Antidoc!
Message 12 of 13

@Olivier-JOURDAN wrote:

Hi @Shatoshi,

Before giving you any advice, I have some questions.

What are your requirements for the data file content?

Do you want to save the data acquired by each clone instance in the same file, continuously and without loss?


1) We have an application that requires a file with a fairly basic structure for post-processing the data: the header row is a timestamp column followed by columns for all the parameters (recalculated data, pressures, temperatures). Data is appended line by line throughout the day.

 

2) That's correct, but there is one thing to note: the interval between lines in the file is typically around 0.5-2 seconds (it varies from application to application). This means that some acquired data may be skipped in certain cases. The UI charts, however, are updated in real time, as soon as data is received from the devices.
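Something like this (the column names and values are just placeholders; the real parameter set depends on the application):

```
Timestamp,Flow_calc,Pressure_1,Pressure_2,Temp_1,Temp_2
2024-05-13 09:00:00.000,12.41,1.013,0.998,23.5,24.1
2024-05-13 09:00:01.500,12.38,1.014,0.997,23.5,24.2
```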

 

I would like to share that I have implemented a slightly different approach. Instead of the Logger's data cluster, I believe it is more efficient to use a (local) FGV. This FGV holds a map as its data container (key -> channel name for the cloneable DAQ modules, or module name for the singleton modules; value -> waveforms), and the map is populated with keys as the Logger receives broadcast data from the acquisition devices. The Logger has a timeout case where it saves the data from the FGV to the file. I think I might use the "Get Date/Time" function right there to write each line with its timestamp.
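In text form, the FGV boils down to a guarded map with Update and Read actions, and the timeout case turns the map into one timestamped line; a Python sketch of the idea, with all names invented:

```python
# Hypothetical sketch of the FGV approach: a guarded map of
# channel/module name -> latest waveform, plus a flush that writes
# one timestamped line per timeout.
import threading
from datetime import datetime
from typing import Dict, List

_lock = threading.Lock()
_latest: Dict[str, List[float]] = {}  # key: channel or module name

def fgv_update(key: str, waveform: List[float]) -> None:
    """'Update' action: called whenever a broadcast reaches the Logger."""
    with _lock:
        _latest[key] = waveform

def fgv_read() -> Dict[str, List[float]]:
    """'Read' action: snapshot used by the timeout case."""
    with _lock:
        return dict(_latest)

def write_line(path: str) -> None:
    """Timeout case body: timestamp (cf. 'Get Date/Time') plus one value
    per key; keys are sorted so the column order stays stable."""
    snapshot = fgv_read()
    stamp = datetime.now().isoformat(sep=" ", timespec="milliseconds")
    values = [str(snapshot[k][-1]) for k in sorted(snapshot)]
    with open(path, "a") as f:
        f.write(",".join([stamp] + values) + "\n")

fgv_update("AI_task_1/ch0", [1.01, 1.02])
fgv_update("Pressure Module", [0.998])
write_line("daily_log.csv")
```

One thing the sketch glosses over: if a new key appears after the header has been written, the column set changes mid-file, so the key list probably needs to be fixed when the file is created.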

 

What do you think?

Message 13 of 13