LabVIEW

Combining Map and arrays?


@VinnyAstro wrote:

But according to what you're telling me, wiebe@CARYA, even the first point wouldn't be very effective in terms of performance.


It's important to first establish what performance is needed.

I'd go for a convenient solution, and stick with it if it's fast enough.

 


@VinnyAstro wrote:

@drjdpowell wrote:

Before they created Maps, people often used Variant Attributes for similar applications.  You might search on examples of "Variant Attributes".   Attributes are like Maps but always String names and Variant values.


I've seen this yes, would this be more adapted to what I'm looking for above?


I don't think there's any benefit of a map with a variant over variant attributes.

 

The benefit of a map is that you don't have to use variants. With variant attributes you have to use a variant.

 


@VinnyAstro wrote:

I will also look into SQL, have no idea what this is yet.


SQL is an interface to a database.

Message 11 of 33

wiebe@CARYA wrote:

@VinnyAstro wrote:

@drjdpowell wrote:

Before they created Maps, people often used Variant Attributes for similar applications.  You might search on examples of "Variant Attributes".   Attributes are like Maps but always String names and Variant values.


I've seen this yes, would this be more adapted to what I'm looking for above?


I don't think there's any benefit of a map with a variant over variant attributes.

 

The benefit of a map is that you don't have to use variants. With variant attributes you have to use a variant.


My point there was that one can find existing examples of a Map-like structure built with Variants.  You would do better to implement what you learn with a Map containing a Variant.  There are not, as of yet, a lot of Map examples, as Maps are so new.  Variant Attributes are old, and thus have more examples.

Message 12 of 33

@drjdpowell wrote:

wiebe@CARYA wrote:

@VinnyAstro wrote:

@drjdpowell wrote:

Before they created Maps, people often used Variant Attributes for similar applications.  You might search on examples of "Variant Attributes".   Attributes are like Maps but always String names and Variant values.


I've seen this yes, would this be more adapted to what I'm looking for above?


I don't think there's any benefit of a map with a variant over variant attributes.

 

The benefit of a map is that you don't have to use variants. With variant attributes you have to use a variant.


My point there was that one can find existing examples of a Map-like structure built with Variants.  You would do better to implement what you learn with a Map containing a Variant.  There are not, as of yet, a lot of Map examples, as Maps are so new.  Variant Attributes are old, and thus have more examples.


And it's a good point.

 

I hope it's clear for the OP now... If you want to use maps with variants, look at variant attribute examples for reference.

 

There is actually a slight benefit when reading a variant attribute vs reading a variant map value:

 

[Image: Map vs Variant Attributes.png]

 

The variant output gets the type of the default, avoiding an additional Variant To Data...

 

If keys are strings, and values are variants, using Variant Attributes might not be that bad.
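Since LabVIEW is graphical, here is a rough text-mode analogy of the distinction above, with Python standing in for LabVIEW (the channel names are made up for illustration): a typed map keeps its value type on read, while a variant-style store needs an explicit conversion step on every read, the equivalent of Variant To Data.

```python
from typing import Any

# A "map" whose values keep a concrete type: no cast needed on read.
typed_map: dict[str, list[float]] = {"ch0": [1.0, 2.0, 3.0]}
samples = typed_map["ch0"]              # already a list of floats

# A "variant attribute" style store: values are opaque (Any),
# so every read needs an explicit conversion step, the equivalent
# of LabVIEW's Variant To Data.
variant_store: dict[str, Any] = {"ch0": [1.0, 2.0, 3.0]}
raw: Any = variant_store["ch0"]
samples2: list[float] = list(raw)       # explicit "Variant To Data" step

# Reading with a typed default (the slight benefit noted above):
# the default already carries the wanted type when the key is absent.
missing = variant_store.get("ch9", [0.0])
```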

Message 13 of 33

Wouldn't it make sense to use classes for this?  A parent class which would be the type for the Map, and child classes that would encapsulate the types of data that you want stored in the members of the Map.

Randall Pursley
Message 14 of 33

@rpursley8 wrote:

Wouldn't it make sense to use classes for this?  A parent class which would be the type for the Map, and child classes that would encapsulate the types of data that you want stored in the members of the Map.


A very good idea 😉.

 

See Re: Combining Map and arrays? - NI Community
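The parent/child idea above can be sketched in text form, again with Python standing in for LabVIEW classes (all class and key names here are hypothetical): one map typed to the parent class holds any child, so heterogeneous data lives in a single collection without variants.

```python
class Channel:
    """Hypothetical parent class: the common type stored in the map."""
    def describe(self) -> str:
        raise NotImplementedError

class ScalarChannel(Channel):
    """Child class wrapping a single DBL-like value."""
    def __init__(self, value: float):
        self.value = value
    def describe(self) -> str:
        return f"scalar={self.value}"

class ArrayChannel(Channel):
    """Child class wrapping an array of DBL-like values."""
    def __init__(self, values: list[float]):
        self.values = values
    def describe(self) -> str:
        return f"array[{len(self.values)}]"

# One map typed to the parent class holds any child.
channels: dict[str, Channel] = {
    "temp": ScalarChannel(21.5),
    "trace": ArrayChannel([0.0] * 1024),
}
```

Dynamic dispatch on the parent class (here, `describe`) then replaces the per-key `Variant To Data` casts.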

Message 15 of 33

wiebe@CARYA wrote:

 

Also, if you're going to set\get large amounts of data from a map (or array), it will be hard to avoid copies.

Can you define "Large"?

 

Here is my use case:

For now, while trying to understand all of this a bit better, I have a cluster made of 32 clusters of [1 DBL + 1 array of DBL], plus a few (~10) other single elements (like booleans and numerics).  The size of the arrays isn't defined yet, but I was thinking of 1024 elements, as one set of data points is collected at a frequency of 0.5 to 5 Hz (that would mean ~1 min 40 s to 35 min of displayed data). I collect only single data points, spaced 1 to 25 ms apart depending on the acquisition frequency, so the cluster is updated quite frequently.

When the array is full I'm deleting the oldest element and putting the new data point at the end.

If I'm not wrong, a DBL is 8 bytes? That would represent, in the worst case (I'm never requesting all of the data, and some arrays will be empty), ~256 kB of data to copy. Once for the display, at a rate between 50 ms and 1 s, and another copy for logging the data, at a frequency in the same range as the acquisition.

 

For now, this is for one device; later this will be for up to 4 devices (but the user sees/controls only one device at a time), meaning that these copies happen 4 times.

So the worst of the worst would be 256 kB × 2 × 4 = 2 MB of data copied about every 500 ms.

In reality, most of the time there are about 10 different values requested, so that would represent about 640 kB of data.
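The arithmetic above can be checked with a few lines (the cluster counts and array length are the ones stated in the post; everything else follows from 8-byte DBLs):

```python
DBL_BYTES = 8
clusters = 32          # 32 clusters of [1 DBL + 1 array of DBL]
array_len = 1024       # planned array size per cluster

# Worst case for one device: every array full and requested.
per_device = clusters * array_len * DBL_BYTES   # bytes
copies = 2             # one copy for display, one for logging
devices = 4

worst_case = per_device * copies * devices
print(per_device // 1024, "kB per device")      # 256 kB per device
print(worst_case / 1e6, "MB copied about every 500 ms")
```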

 

Is this a lot of data or not?


Message 16 of 33

Copying 256 kB at 1-5 Hz? I wouldn't be too worried. Of course it's good practice to not be wasteful, but you shouldn't optimize before it's needed.

 

The first thing I would do is invest some time in a stress test. Try out whatever construct(s) you come up with, and see how the solution performs in the test. Pick the simplest and most practical solution that succeeds. Try to keep the interface general, so you can switch implementation later. OO is great for that (when done right).

 

I have a 3-day buffer for >50 values at 1 Hz. That is 99 MiB.

 

To display the data, it needs to be decimated (for each pixel, get the enter, min, max, and exit points of all data in that pixel).
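As a rough illustration of that per-pixel decimation (Python as a stand-in; the even bucketing scheme here is an assumption for the sketch, not the author's actual implementation):

```python
def decimate(samples: list[float], pixels: int) -> list[tuple[float, float, float, float]]:
    """For each pixel column, keep the enter (first), min, max,
    and exit (last) of all samples falling into that column."""
    out = []
    n = len(samples)
    for p in range(pixels):
        lo = p * n // pixels          # first sample index for this pixel
        hi = (p + 1) * n // pixels    # one past the last sample index
        chunk = samples[lo:hi]
        if chunk:
            out.append((chunk[0], min(chunk), max(chunk), chunk[-1]))
    return out
```

This reduces any number of samples to at most four points per pixel, which is why huge buffers stay cheap to draw.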

 

Managing this data takes <10% CPU on my modest 5-year-old laptop, while doing all acquisition and user interaction as well.

 

The data is displayed in up to 18 independent trends... Obviously the CPU goes up a little for each channel that's shown. When the user starts interacting (zooms in, etc.) I create a copy of the real-time buffer (to avoid data disappearing while analyzing). Potentially 18 × 99 MiB. Again, not really a problem.

 

The trick here is to avoid copies. My trend module collects real-time samples and stores them in a preallocated 2D array (aka a circular buffer). A pointer keeps track of the current position.

 

Growing an array requires occasional expensive memory copies. Preallocating this data just costs memory that you're going to need anyway.
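A minimal text-mode sketch of that preallocated buffer plus write pointer (Python stand-in; the class and method names are made up, and a real implementation would use a 2D array for multiple channels):

```python
class CircularBuffer:
    """Preallocated buffer with a write pointer: adding a sample
    never grows, and never reallocates, memory."""
    def __init__(self, capacity: int):
        self._buf = [0.0] * capacity      # allocated once, up front
        self._cap = capacity
        self._pos = 0                     # next write position
        self._count = 0                   # number of valid samples

    def add(self, sample: float) -> None:
        self._buf[self._pos] = sample     # overwrite in place: no copy
        self._pos = (self._pos + 1) % self._cap
        self._count = min(self._count + 1, self._cap)

    def snapshot(self) -> list[float]:
        """Return samples in chronological order (the copy happens here,
        on read, not on every write)."""
        if self._count < self._cap:
            return self._buf[:self._count]
        return self._buf[self._pos:] + self._buf[:self._pos]
```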

 

Circular buffers come in all sorts. Mine is by wire \ by value. You can do global or by reference, but I like to keep things simple.

 

Because the decimation happens on the by wire \ by value data, there's no need for copies. Any global or by reference implementation will either block the buffer while decimating or create a copy to do this. That means CPU load.

Message 17 of 33

wiebe@CARYA wrote:

Copying 256 kB at 1-5 Hz? I wouldn't be too worried. Of course it's good practice to not be wasteful, but you shouldn't optimize before it's needed.


Great, thanks! My main issue at the moment is that I'm mostly learning by doing, but I'm afraid of doing it wrong and then learning it wrong 🙃 So I'm trying to pick up good practices early 🙂

 


wiebe@CARYA wrote:

The first thing I would do is invest some time in a stress test. Try out whatever construct(s) you come up with, and see how the solution performs in the test. Pick the simplest and most practical solution that succeeds. Try to keep the interface general, so you can switch implementation later. OO is great for that (when done right).

Also what I'm trying to do... except I'm not using OO for now. I need training for that, and... it's been refused for now 😒

So what I'm doing is trying one technique (the one I explained in the first thread I mentioned at the top of this conversation), and I'm now trying another technique that should be more scalable, all of it under version control.

 


wiebe@CARYA wrote:

The trick here is to avoid copies. My trend module collects real time samples, and stores them in predefined a 2D array (aka circular buffer). A pointer keeps the current position.

 

Adding (growing) an array requires occasional expensive memory copies. Preallocating this data just costs memory that you're going to need anyway.

Two questions about this:

When is a copy created? And is my example and statements below correct?

[Image: Memory management.png]

The actions on the array (Delete From Array at index 0 + Build Array with the new value) above are basically what I am doing for each of the 32 clusters I mentioned before.

 


wiebe@CARYA wrote:

Circular buffers come in all sorts. Mine is by wire \ by value. You can do global or by reference, but I like to keep things simple.


I thought of using an FGV (or whatever it is named) circulating my big cluster with typically 3 actions: Write, Read, Clear; but that wouldn't be ideal, because the Write will and has to happen very fast and often (writing one element every 1 to 25 ms), while the Read will take a bit more time but happen less often (reading the entire cluster twice (display + log) at 0.5-5 Hz).

I was afraid of bottleneck issues, so I decided to continue with queues, as I already have everything in place for them.
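The bottleneck worry can be made concrete with a sketch of the FGV-style store (Python stand-in; the class name and lock-based serialization are assumptions, mimicking the fact that an FGV serializes all callers): the Read holds the "FGV" while it copies everything out, blocking the fast, frequent Writes.

```python
import threading

class BufferFGV:
    """Sketch of a functional-global-style store with
    Write / Read / Clear actions behind one lock."""
    def __init__(self):
        self._lock = threading.Lock()   # stands in for the FGV's serialization
        self._data: list[float] = []

    def write(self, sample: float) -> None:
        # Fast, frequent action (every 1-25 ms).
        with self._lock:
            self._data.append(sample)

    def read(self) -> list[float]:
        # Slower, less frequent action: copies everything out.
        # Writes are blocked for the whole duration of this copy.
        with self._lock:
            return list(self._data)

    def clear(self) -> None:
        with self._lock:
            self._data.clear()
```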

Message 18 of 33

@VinnyAstro wrote:

Two questions about this:

When is a copy created? And is my example and statements below correct?

[Image: Memory management.png]

The actions on the array (Delete From Array at index 0 + Build Array with the new value) above are basically what I am doing for each of the 32 clusters I mentioned before.


That's 256*(16+8), a timestamp is 16 bytes! So, triple your estimate. 

 

Deleting the 1st element could need a copy (or at least a move). LabVIEW probably adds an "array subset object" (something like a C++ stride/view object) to the original to avoid this. But when you keep adding elements, at some point the memory needs to be either moved or copied, as the memory allocated for the original size (+ x%) will not be enough anymore.
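This grow-then-reallocate behaviour is easy to observe in any language with growable arrays. As an analogy (Python lists, not LabVIEW arrays, so the exact growth factors differ): appending 1024 elements triggers only a few dozen reallocations, because the allocator over-provisions on each growth.

```python
import sys

lst: list[int] = []
prev = sys.getsizeof(lst)
cap_changes = 0                    # how many times the backing store was reallocated

for i in range(1024):
    lst.append(i)
    size = sys.getsizeof(lst)
    if size != prev:               # allocation size changed: a move/copy happened
        cap_changes += 1
        prev = size

# cap_changes is far smaller than 1024: most appends are free,
# and only the occasional append pays for a reallocation.
```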

 

Putting an object on a queue doesn't require a copy, but if you continue to modify the data, there needs to be a copy. That's why it might be faster to enqueue the single samples and add them to a (circular) buffer on the receiving end(s). You might get multiple copies, but you're not copying all the time. So you're trading memory for speed.

 

You'll get these kinds of trade-offs all the time. A circular buffer is much faster to write to but a little slower to read from, compared to a simple array as you used it, which is slower to write but can be read instantly. Another trade-off is that a circular buffer is more complex.
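The "enqueue single samples, buffer on the receiving end" pattern might be sketched like this (Python stand-in; the sentinel-based shutdown and sample count are illustrative assumptions, not part of the original design):

```python
import queue
import threading

sample_q = queue.Queue()     # carries single samples, so each put is cheap
display_buf = []             # the receiver's own buffer (could be circular)

def producer():
    # Acquisition side: enqueue one sample at a time (every 1-25 ms
    # in the OP's case); no large block is ever copied here.
    for i in range(100):
        sample_q.put(float(i))
    sample_q.put(None)       # sentinel: acquisition finished

def consumer():
    # Receiving end: append each sample into its own buffer,
    # which it is then free to read/decimate without blocking the producer.
    while True:
        s = sample_q.get()
        if s is None:
            break
        display_buf.append(s)

t = threading.Thread(target=producer)
t.start()
consumer()
t.join()
```

Each receiver (display, logging) keeps its own copy of the data, which is exactly the memory-for-speed trade mentioned above.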

Message 19 of 33

wiebe@CARYA wrote:


I don't think there's any benefit of a map with a variant over variant attributes.


There were some tests performed when Maps were introduced. Maps did perform slightly better, around 1% or so, probably due to optimization possibilities.

G# - Award winning reference based OOP for LV, for free! - Qestit VIPM GitHub

Qestit Systems
Certified-LabVIEW-Developer
Message 20 of 33