DQMH Consortium Toolkits Discussions


Large data transfer between modules


"Premature optimization" doesn't refer to optimization that is premature.  It is design complications introduced in hope of improving performance.  There is a high danger of the complicated design being weaker in a number of ways, including poorer performance. I've certainly seen uses of DVRs that involve more copying than a by-value implementation.

 

The opposite of "premature optimization" is to design in a simple way and then benchmark. One does want to benchmark early, which is why I asked you how long copying 300 kB takes. I'll guess it is less than 100 µs.
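If you want a quick sanity check of that guess before committing to a design, a throwaway benchmark is enough. This is not LabVIEW, just a rough Python sketch of the idea; the 300 kB payload size comes from the discussion above, everything else is made up:

import time
import numpy as np

# ~300 kB of float64 samples, standing in for one acquisition block
data = np.zeros(300 * 1024 // 8)

N = 10_000
start = time.perf_counter()
for _ in range(N):
    _ = data.copy()                      # force a full by-value copy
elapsed_us = (time.perf_counter() - start) / N * 1e6
print(f"average copy time: {elapsed_us:.1f} us per 300 kB block")

The equivalent LabVIEW benchmark is just a loop around the copy with a high-resolution timer on either side; the point is to measure before adding reference-based complexity.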

Message 11 of 15

@FlatCat wrote:

Wow. I wish I could sneak a peek at your code. 🙂


Unfortunately, it is work for hire, so we cannot share it. That said, I have been taking notes in the hope of coming up with a generic example for a blog post, a presentation, and/or a video. That would be too late for you, though 😉

 


@FlatCat wrote:

As I said above, right now I have a big Main that has separate loops for acquisition/analysis/display, and then the user interface. I will separate them.


We tried multiple approaches before settling on what worked for this application. We decided to keep the acquisition and analysis inside the same module, in separate loops. The acquisition loop acquires as fast as possible and sends all the data via a queue to another loop that assembles the frames; that loop in turn sends the assembled frames, also via a regular queue, to a loop for analysis. The final loop sends a request event to the display module to display the results. We separated the display because that meant the DQMH module could run headless and never swapped threads with the user interface thread. That thread swapping can make a huge difference in this type of application.
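For readers who find text easier to scan than a block-diagram description, here is a very rough, non-LabVIEW Python sketch of that loop/queue topology. The chunk contents, loop bodies, and the display_request stand-in are all invented for illustration; only the acquisition → frame assembly → analysis → display-request shape mirrors what is described above.

import queue
import threading

raw_q = queue.Queue()      # acquisition loop -> frame assembly loop
frame_q = queue.Queue()    # frame assembly loop -> analysis loop

def acquisition_loop():
    # stand-in for the hardware read: push raw chunks as fast as possible
    for i in range(10):
        raw_q.put(f"chunk {i}")
    raw_q.put(None)                                 # sentinel: acquisition finished

def frame_assembly_loop():
    while (chunk := raw_q.get()) is not None:
        frame_q.put(f"frame built from {chunk}")
    frame_q.put(None)

def analysis_loop(send_display_request):
    while (frame := frame_q.get()) is not None:
        send_display_request(f"analyzed {frame}")   # analogous to the DQMH request event

def display_request(result):
    # the display module lives in its own module; print stands in for the UI update
    print(result)

threads = [
    threading.Thread(target=acquisition_loop),
    threading.Thread(target=frame_assembly_loop),
    threading.Thread(target=analysis_loop, args=(display_request,)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()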

 

We have gone back and forth between using queues, using events, and having the payload be a DVR or not be a DVR. It all boils down to the array size and acquisition speed. You will need to benchmark and see what works best for your application. For the majority of the applications I have worked on before, an event with a DVR as a payload was all we needed. In one of those applications, we used the approach of creating the DVR in the originating code and deleting the DVR at the receiving end; the primitive that destroys the DVR returns the last value. I don't remember all the details, but in benchmarking, for that particular application, that approach worked better than just having the same DVR overwritten.
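To make the create-here/destroy-there idea concrete without LabVIEW screenshots, here is a loose Python analogy. The lookup table, function names, and payload are all invented; the point is only that the producer wraps the big buffer once, a small reference travels through the event, and the consumer gets the data back when it destroys the reference, so the payload is never copied along the way.

import uuid

# crude stand-in for LabVIEW data value references: reference -> data
_dvr_store = {}

def new_data_value_reference(data):
    ref = uuid.uuid4()
    _dvr_store[ref] = data
    return ref

def delete_data_value_reference(ref):
    # like the Delete Data Value Reference primitive, returns the last value it held
    return _dvr_store.pop(ref)

# producer (originating module): wrap the big array once, broadcast only the reference
payload_ref = new_data_value_reference(bytearray(300 * 1024))

# ... the small reference travels through the event/queue instead of the data ...

# consumer (receiving module): destroying the reference hands the data back, no copy made
data = delete_data_value_reference(payload_ref)
print(len(data), "bytes received")

The alternative Fab mentions, overwriting the same DVR for every transfer, would keep one long-lived reference instead; which one wins is exactly the kind of thing the benchmarking advice above is meant to settle.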

 

Good luck!

Fab

For an opportunity to learn from experienced developers / entrepreneurs (Steve, Joerg, and Brian amongst them):
Check out DSH Pragmatic Software Development Workshop!

DQMH Lead Architect * DQMH Trusted Advisor * Certified LabVIEW Architect * Certified LabVIEW Embedded Developer * Certified Professional Instructor * LabVIEW Champion * Code Janitor

Have you been nice to future you?
Message 12 of 15

Hi Fab,

 

First of all, I'm one of the admirers of the DQMH concept. Thank you for a simple and great architecture!!

I was looking at using the DQMH architecture for one of our large LabVIEW applications. One of my focuses is having an efficient (minimum resource utilization) and fast way of transferring very large data streams between modules, basically from the Hardware module to the Signal Processing module and on to multiple User Interface modules. A DVR was something I had in mind, and I was browsing around to find case studies, experience discussions, etc. I found your explanation very interesting.

 

"For the majority of the applications I have worked on before, an event with a DVR as a payload was all we needed. In one of those applications, we used the approach of creating the DVR in the originating code and deleting the DVR at the receiving end; the primitive that destroys the DVR returns the last value. I don't remember all the details, but in benchmarking, for that particular application, that approach worked better than just having the same DVR overwritten."

 

Did you get a chance to look into this further? What is your suggestion on the better way of using DVRs for data transfer between DQMH modules (create/destroy for each transfer, or overwrite the same location), especially from a resource utilization and speed perspective?

 

Thank you

Adarsh

LabVIEW from 2006

CLA from 2014

 

Message 13 of 15

@FabiolaDelaCueva wrote:
It all boils down to the array size and acquisition speed. You will need to benchmark and see what works best for your application.

I believe that this statement holds true. There's no way to tell what works better for your specific application, because the differences might be subtle - or even depend on your very specific implementation of any of these designs.

 

PS: For what it's worth, we're designing an application where we have to pass around IMAQ images. We decided to create an IMAQ copy in the acquisition module and then dispose of that copy in the processing module (after processing has finished). On top of that, the processing module will be cloneable, so we can run multiple processing instances in parallel.
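As a loose, non-LabVIEW illustration of that last point, here is a Python sketch of several "clones" each taking ownership of its own image copy and disposing of it when done. The fake image data, the worker count, and the placeholder analysis are all invented for this example.

from concurrent.futures import ThreadPoolExecutor

def processing_clone(image_copy):
    # each clone owns its copy, analyzes it, then disposes of it when done
    result = sum(image_copy) // len(image_copy)   # placeholder analysis: mean pixel value
    del image_copy                                # analogous to IMAQ Dispose on the copy
    return result

# fake image copies handed over by the acquisition side
acquired_copies = [bytearray([i]) * 1024 for i in range(8)]

# several processing clones running in parallel
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(processing_clone, acquired_copies))
print(results)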




DSH Pragmatic Software Development Workshops (Fab, Steve, Brian and me)
Release Automation Tools for LabVIEW (CI/CD integration with LabVIEW)
HSE Discord Server (Discuss our free and commercial tools and services)
DQMH® (Developer Experience that makes you smile)


Message 14 of 15

 

Thank you for the response!

I agree it depends on the data size and acquisition speed.
I just wanted to draw out experience-based comments from a performance perspective with respect to the size of one's application, the reason being that we can't simulate some of the conditions needed to make the design decisions.

 

 

Thank you

Adarsh

LabVIEW from 2006

CLA from 2014

Message 15 of 15