LabVIEW


Slice and sum a 2D-array

Solved!

Hi Flonares,

 


@Flonares wrote:

the index and the length are giving me issues.


What about simple debugging using Highlight execution and probes?

Your VI is doing EXACTLY what you programmed it to do!

Either calculate the used indices in your VI "by hand" or watch your VI with highlighting enabled to understand your own code…

 

  • Your "size" indicator is labelled wrong: you will get an array of "row×col", but you labelled it "col×row"!
  • There is a MatrixSize function that works with any 2D array and provides direct access to row and column numbers…
  • Learn about basic array handling in LabVIEW, especially the way LabVIEW uses the indices (row, col, page, …)!
Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 11 of 20

Hi, I made an example dealing with a 2D array or a 1D array:

LVNinja_0-1721134350688.png

From the front panel you can see that the results are the same:

LVNinja_1-1721134403041.png

 

Message 12 of 20

Hello LVNinja,

 

Many thanks for your example, it helped a lot. I now realize I made a stupid error when explaining my problem:
I stated 64*40000, which led everyone to think of a 64 by 40000 2D array, when I actually wanted to express that the initial input array is a 1D array with 64 sets of 40000 points - in other words, 2,560,000 points in a 1D array.

Now I feel really stupid...

Apologies for this monumental error.

 

Cheers,

Fl0

Message 13 of 20

@Flonares wrote:

Hi Paul, thanks for your reply. I realized that some important information was missing.

 

The array is only composed of two columns with 64*40000 rows. Ideally, the problem can be reduced to a 1D array with 64*40000 points. So one would need to divide the 1D array into 64 equal parts and sum them together.


Is this what you are trying to do?

aa.png
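For readers following along without LabVIEW in front of them, here is a minimal NumPy sketch of that idea (the input data is a hypothetical stand-in, not the poster's actual measurement):

```python
import numpy as np

# Hypothetical stand-in for the acquired data: 64 appended sets of
# 40000 points each, as a single 1D array.
n_sets, set_len = 64, 40000
data = np.random.default_rng(0).normal(size=n_sets * set_len)

# Slice out each 40000-point set and accumulate the element-wise sum,
# mirroring a For loop with Array Subset feeding an add node.
total = np.zeros(set_len)
for i in range(n_sets):
    total += data[i * set_len:(i + 1) * set_len]
```

The result is a single 40000-point array holding the element-wise sum of the 64 sets.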

Message 14 of 20
Solution
Accepted by topic author Flonares

@Flonares wrote:

I actually wanted to express that the initial input array is a 1D array with 64 sets of 40000 points - in other words, 2,560,000 points in a 1D array.


So here's what I might do, and you don't even need to know the size of each set.

 

altenbach_0-1721146912950.png

 

This assumes that the various sets are appended and not e.g. interlaced. Also if the size is not evenly divisible, it will omit the last partial set.
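In text form, the reshape-based approach might look like the following NumPy sketch (assuming, as above, appended sets; the input array here is hypothetical):

```python
import numpy as np

# Hypothetical 1D input: 64 appended sets of 40000 points each.
n_sets = 64
data = np.arange(64 * 40000, dtype=float)

# Only the number of sets needs to be known: drop any trailing partial
# set, reshape to one row per set, and sum the rows element-wise.
set_len = data.size // n_sets                      # 40000 here
summed = data[:n_sets * set_len].reshape(n_sets, set_len).sum(axis=0)
```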

 

Message 15 of 20

Hi altenbach,

 

Many thanks for your example, which is really great.

 

However, I have a question, to which I will point at what I believe would be a possible solution, but I wish to know if there are better ways: how do I guarantee that all 64 sets have exactly 40000 points?

Let's imagine that there are 6400 additional points which are meaningless (well-behaved junk that I can precisely locate, and which is sample-size independent) on top of the 40000 we already have. In your example, if I have an extra 6400 points at the end, then each of the 64 sets would automatically get 100 extra points. My solution would be to use "Array Subset" to trim off the junk data in the 1D array before proceeding with the averaging. (In reality, I have about 800 junk points, due to a 0.8 s start delay between the DAQ (1 kHz acquisition rate) and my laser.)

 

Would you advise me a different tactic?

 

Best,

Fl0

 

 

Message 16 of 20

Hi Flonares,

 


@Flonares wrote:

Would you advise me a different tactic?


Can you explain what you want to show with your two images?

"Different tactic": please attach your VI so we can see the whole picture!

 


@Flonares wrote:

My solution would be to use the "Array Subset" and trim off the junk data in the 1D array before proceeding with the averaging.


This is a valid solution.

A better solution would be not to include that junk in your measurement data while acquiring the data…

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 17 of 20

Hello GerdW,

 

Please disregard the pictures - I just wanted to know what they were called, since at first I could not locate them in my Tools Palette. In the meantime I figured out what they were, and I forgot to delete them as I moved on to the next topic.

 

 


@GerdW wrote:

 

This is a valid solution.

A better solution would be not to include that junk in your measurement data while acquiring the data…


True, unfortunately I am oblivious as to how to overcome that issue. Since that issue concerns a different topic (DAQ) than the one stated here, I will open another question soon and share the VI there - it might be important for someone else too.

 

Right now I am in the process of writing the VI I had in mind to deal with all the topics we discussed, and I can't thank you all enough for your time and help - it has proven invaluable. I will keep you all posted on the progress!

 

Cheers,

Fl0

Message 18 of 20

@Flonares wrote:

Let's imagine that there are 6400 additional points which are meaningless (well-behaved junk that I can precisely locate, and which is sample-size independent) on top of the 40000 we already have. In your example,


If you say "well behaved" there is no problem. It's just math, right?

 

You can easily fix the other constant (# of samples/subset) values and adjust the inputs to "reshape array" accordingly. It's all just basic math. If you know the subset length, you can calculate the number of averages with minor changes to the code. Any excess samples that don't form a full set will be discarded.

 

Note that "reshape array" is special in that it will also pad with zeroes if the math is not right. For example if you have only three points and reshape to 64x40000, you'll get that final size with only the first three points nonzero.
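To make that zero-padding behaviour concrete in text form: NumPy's reshape raises an error on a size mismatch, so a NumPy analog of LabVIEW's Reshape Array has to pad explicitly (a sketch of the behaviour described above, not the actual LabVIEW implementation):

```python
import numpy as np

def reshape_with_zero_pad(arr, rows, cols):
    """Mimic LabVIEW's Reshape Array: truncate or zero-pad to rows*cols."""
    out = np.zeros(rows * cols, dtype=arr.dtype)
    n = min(arr.size, out.size)
    out[:n] = arr[:n]
    return out.reshape(rows, cols)

# Three points reshaped to 64x40000: the full size comes back, with
# only the first three elements nonzero.
m = reshape_with_zero_pad(np.array([1.0, 2.0, 3.0]), 64, 40000)
```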

 

You simply need to know sufficient information about your data, maybe from some "out of band" information (instrument settings, etc.), in order to analyze it correctly.

 

(The signal generation is irrelevant for your problem, because you'll substitute your own data for that.)

 

Message 19 of 20

If you already know ahead of time the amount of junk, just get rid of it before it enters the code.
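With the numbers from this thread (1 kHz DAQ, 0.8 s start delay), that pre-trim step could be sketched like this in NumPy (the raw array here is placeholder data, not a real acquisition):

```python
import numpy as np

fs_hz, delay_s = 1000, 0.8
n_junk = int(fs_hz * delay_s)        # 800 leading junk samples
n_sets, set_len = 64, 40000

# Placeholder data: junk marked as -1, real samples all equal to 1.
raw = np.concatenate([np.full(n_junk, -1.0), np.ones(n_sets * set_len)])

# "Array Subset" analog: drop the junk, then reshape and sum as before.
clean = raw[n_junk:]
summed = clean.reshape(n_sets, set_len).sum(axis=0)
```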

 

Message 20 of 20