07-16-2024 06:18 AM - edited 07-16-2024 06:19 AM
Hi Flonares,
@Flonares wrote:
the index and the length are giving me issues.
What about simple debugging using Highlight execution and probes?
Your VI is doing EXACTLY what you programmed it to do!
Either calculate the used indices in your VI "by hand" or watch your VI with highlighting enabled to understand your own code…
07-16-2024 07:53 AM
Hi, I made an example dealing with a 2D array and a 1D array.
From the front panel you can see that the results are the same.
07-16-2024 09:09 AM
Hello LVNinja,
Many thanks for your example, it helped a lot. I realized now I made a stupid error when explaining my problem:
I stated 64*40000, which led everyone to think of a 64 by 40000 2D array, when I actually wanted to express that the initial input array is a 1D array with 64 sets of 40000 points - in other words, 2,560,000 points in a 1D array.
Now I feel really stupid...
Apologies for this monumental error.
Cheers,
Fl0
07-16-2024 10:25 AM
@Flonares wrote:
Hi Paul, thanks for your reply. I realized that some important information was missing.
The array is only composed of two columns with 64*40000 rows. Ideally, the problem can be reduced to a 1D array with 64*40000 points. So one would need to divide the 1D array into 64 equal parts and sum them together.
Is this what you are trying to do?
07-16-2024 11:22 AM - edited 07-16-2024 11:24 AM
@Flonares wrote:
I actually wanted to express that the initial input array is a 1D array with 64 sets of 40000 points - in other words, 2,560,000 points in a 1D array.
So here's what I might do, and you don't even need to know the size of each set.
This assumes that the various sets are appended and not e.g. interlaced. Also if the size is not evenly divisible, it will omit the last partial set.
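For readers following along outside LabVIEW, here is a rough NumPy equivalent of that idea (a sketch, assuming the sets are appended back to back; the slice and `reshape`/`mean` calls stand in for LabVIEW's Array Subset, Reshape Array, and averaging loop):

```python
import numpy as np

def average_sets(data, n_sets=64):
    """Average n_sets equal, appended segments of a 1D array.

    Any trailing samples that don't fill a complete set are discarded,
    mirroring the behavior described above.
    """
    set_len = len(data) // n_sets          # size of each set (integer division)
    trimmed = data[:n_sets * set_len]      # drop any partial set at the end
    return trimmed.reshape(n_sets, set_len).mean(axis=0)

# Demo: 64 copies of a ramp 0..39999 -> the average is the ramp itself
demo = np.tile(np.arange(40000.0), 64)
avg = average_sets(demo)
print(avg[:3])   # [0. 1. 2.]
```

Note that you never wire in the set length: it is derived from the total size divided by the number of sets, as in the VI described above.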
07-17-2024 05:57 AM
Hi altenbach,
Many thanks for your example, which is really great.
However, I have a question, to which I will point out what I believe would be a possible solution, but I wish to know if there are better ways: how do I guarantee that all 64 sets have exactly 40000 points? Let's imagine that there are 6400 additional points which are meaningless (well-behaved junk that I can precisely locate, and which is sample-size independent) on top of the 64×40000 we already have. In your example, if I have an extra 6400 points at the end, then each of the 64 sets would automatically get 100 extra points. My solution would be to use "Array Subset" and trim off the junk data in the 1D array before proceeding with the averaging. (In reality, I have about 800 junk points due to a 0.8 s start delay between the DAQ (1 kHz acquisition rate) and my laser.)
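A sketch of that Array Subset idea in NumPy terms (assuming the junk sits at the start of the array, since it comes from a start delay; 0.8 s × 1 kHz = 800 samples, the figure from this setup):

```python
import numpy as np

SAMPLE_RATE_HZ = 1000      # DAQ acquisition rate
START_DELAY_S = 0.8        # delay between DAQ start and the laser
JUNK_SAMPLES = int(SAMPLE_RATE_HZ * START_DELAY_S)   # 800 samples

def trim_junk(data, junk=JUNK_SAMPLES):
    # Equivalent of LabVIEW's "Array Subset": keep everything after the junk
    return data[junk:]

# Pretend data: 800 junk samples followed by 64 sets of 100 good samples
raw = np.arange(800 + 64 * 100)
clean = trim_junk(raw)
print(len(clean))          # 6400
```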
Would you advise me a different tactic?
Best,
Fl0
07-17-2024 07:15 AM
Hi Flonares,
@Flonares wrote:
Would you advise me a different tactic?
Can you explain what you want to show with your two images?
"Different tactic": please attach your VI so we can see the whole picture!
@Flonares wrote:
My solution would be to use the "Array Subset" and trim off the junk data in the 1D array before proceeding with the averaging.
This is a valid solution.
A better solution would be not to include that junk in your measurement data while acquiring the data…
07-17-2024 07:55 AM
Hello GerdW,
Please disregard the pictures - I just wanted to know what they were called, since at first I could not locate them in my Tools Palette. In the meantime I figured out what they were and forgot to delete them as I moved on to the next topic.
@GerdW wrote:
This is a valid solution.
A better solution would be not to include that junk in your measurement data while acquiring the data…
True, unfortunately I am oblivious as to how to overcome that issue. Since that issue concerns a different topic (DAQ) than the one stated here, I will open another question soon and share the VI there - it might be of importance to someone else too.
Right now I am in the process of writing the VI I had in mind to deal with all the topics we discussed, and I can't thank you all enough for your time and help - it has proven invaluable. Will keep you all posted on the progress!
Cheers,
Fl0
07-17-2024 08:23 AM - edited 07-17-2024 12:56 PM
@Flonares wrote:
Let's imagine that there are 6400 additional points which are meaningless (well-behaved junk that I can precisely locate, and which is sample-size independent) on top of the 64×40000 we already have. In your example,
If you say "well behaved" there is no problem. It's just math, right?
You can easily fix the other constant (number of samples per subset) and adjust the inputs to "Reshape Array" accordingly. It's all just basic math. If you know the subset length, you can calculate the number of averages with minor changes to the code. Any excess samples that don't form a full set will be discarded.
Note that "Reshape Array" is special in that it will also pad with zeroes if the math is not right. For example, if you have only three points and reshape to 64x40000, you'll get that final size with only the first three points nonzero.
You simply need to know sufficient information about your data, maybe from some "out of band" information (instrument settings, etc.), in order to analyze it correctly.
(The signal generation is irrelevant for your problem, because you'll substitute your own data for that.)
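That arithmetic sketched in NumPy (with one difference worth knowing: NumPy's `reshape` raises an error on a size mismatch rather than zero-padding the way LabVIEW's Reshape Array does, so the excess samples are trimmed explicitly first):

```python
import numpy as np

def average_known_subset(data, subset_len=40000):
    """Average appended subsets of a known length, discarding any excess."""
    n_sets = len(data) // subset_len       # number of complete subsets
    full = data[:n_sets * subset_len]      # drop samples that don't form a full set
    return full.reshape(n_sets, subset_len).mean(axis=0)

# 64 full sets plus 6400 trailing junk samples: the junk is discarded
data = np.tile(np.arange(40000.0), 64)
data = np.concatenate([data, np.zeros(6400)])
avg = average_known_subset(data)
print(len(avg))   # 40000
```

Here the subset length is the known constant and the number of sets is derived from it, the reverse of fixing the set count.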
07-17-2024 12:39 PM
If you already know the amount of junk ahead of time, just get rid of it before entering the code.