LabVIEW

Proper use of case structures for array insertion

Salutations,

I have an array of the form:

0 0 0 0
1 2 3 0
0 1 0 3
0 0 0 0

I want to remove the rows that are all zeros.  My idea was to sum each row and check whether the result was greater than zero.  That True/False value would drive a case structure that inserts the row into a new array when the sum is nonzero (True) and simply ignores it when the sum is zero (False).  This idea has failed me, because the case structure requires something to be wired out of every case.  I was hoping someone would have a clever trick for removing the zeroed rows.

Thank you,
E. Smith
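Since LabVIEW is graphical, here is a rough Python sketch of the idea described above (the function and variable names are mine, not from any VI), which also makes it easy to see where the row-sum test goes wrong:

```python
# Hypothetical sketch of the original idea: keep rows whose sum is > 0.
# (Replies below point out this test is wrong once negative values appear.)

def remove_zero_rows_by_sum(rows):
    """Keep rows whose element sum is greater than zero (the flawed test)."""
    return [row for row in rows if sum(row) > 0]

data = [
    [0, 0, 0, 0],
    [1, 2, 3, 0],
    [0, 1, 0, 3],
    [0, 0, 0, 0],
]
print(remove_zero_rows_by_sum(data))   # [[1, 2, 3, 0], [0, 1, 0, 3]]

# The flaw: a row like [1, -1, 1, -1] sums to 0 and would be dropped
# even though it is not all zeros.
```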
Message 1 of 7

Hello,

I have attached a VI that does what you're looking for.  Notice that in the VI I use the Array Max & Min function to determine whether a row is full of zeroes.  Adding the row elements won't work...what if you have a row that contains 1, -1, 1, -1?  Notice I also use the Delete From Array function and a shift register, and I keep track of the current row index based on whether or not I have deleted a row.

Hope this helps...and don't worry, the more you program in LabVIEW, the easier it will be for you to come up with stuff like this... 🙂

-D
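For readers without the attachment, the approach described above translates roughly to the following Python sketch (names are mine, not from the VI): test each row with max/min, delete matching rows, and only advance the index when nothing was deleted.

```python
# Rough text-language analogue of the max/min + Delete From Array approach.
# If both the max and min of a row are zero, every element is zero.

def remove_zero_rows_delete(rows):
    rows = [list(r) for r in rows]  # work on a copy
    i = 0
    while i < len(rows):
        if max(rows[i]) == 0 and min(rows[i]) == 0:
            del rows[i]            # like Delete From Array; don't advance i
        else:
            i += 1                 # row kept; move to the next one
    return rows
```

Note the max/min test correctly keeps a row like `[1, -1, 1, -1]`, which the row-sum test would have dropped.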

Message 2 of 7
In the attached VI are a couple more ways of doing it...
Message 3 of 7
To test a row for all zeroes, simply feed it through an "Equal To 0?" from the Comparison palette, then through an "And Array Elements" from the Boolean palette. Summing the array elements can fail once you allow negative numbers.
 
Attached is one other possibility.
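In a text language, the equal-zero-then-AND test described above amounts to this short sketch (function names are mine):

```python
# Analogue of "Equal To 0?" on each element followed by "And Array Elements":
# the row is all zeros only if every element-wise comparison is True.

def row_is_all_zero(row):
    return all(x == 0 for x in row)

def remove_zero_rows(rows):
    """Keep only rows that are not entirely zero."""
    return [row for row in rows if not row_is_all_zero(row)]
```

Unlike the sum test, this handles rows with negative values correctly.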
Message 4 of 7

So who wants to benchmark and see which of the myriad options presented here is the fastest one?  🙂

-D
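A benchmark in a text language would look something like the sketch below: a self-contained comparison of a delete-in-place style against a build-a-new-array style (the data and the function names are made up for illustration, and the timings will of course vary by machine).

```python
import random
import timeit

def delete_style(rows):
    """Delete zero rows from a (shallow) copy, like Delete From Array."""
    rows = list(rows)
    i = 0
    while i < len(rows):
        if all(x == 0 for x in rows[i]):
            del rows[i]
        else:
            i += 1
    return rows

def build_style(rows):
    """Build a fresh list containing only the nonzero rows."""
    return [row for row in rows if any(x != 0 for x in row)]

# Random test data; roughly 1 row in 16 will be all zeros.
data = [[random.choice([0, 0, 1, -1]) for _ in range(4)] for _ in range(5000)]

for name, fn in [("delete", delete_style), ("build", build_style)]:
    t = timeit.timeit(lambda: fn(data), number=20)
    print(f"{name}: {t:.4f} s")
```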

Message 5 of 7

😄

Here's my wild guess, assuming very large arrays:

Darren: "Delete From Array" is potentially problematic, because for each row you delete, ALL higher rows need to be moved down a row. That is a lot of data shuffling if you do this thousands of times.

Warren #1: Building an array in a loop is potentially expensive, because of memory reallocations.

Warren #2: Same problem as Darren.

Altenbach: All array operations are "in place", followed by a single trimming at the end. One might try to use Index Array inside the loop instead of autoindexing, but casual tests in the past did not show a difference.

I could be way off, of course. Anyone want to benchmark them and post the results? 🙂
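For reference, the "in place" strategy described above, overwriting kept rows at a write index and trimming once at the end, might look like this in a text language (a sketch with invented names, not the actual VI):

```python
# In-place compaction: copy each kept row forward to a write index
# (like Replace Array Subset), then trim the array once at the end.
# No per-row deletions, so no repeated shuffling of higher rows.

def remove_zero_rows_inplace(rows):
    write = 0
    for row in rows:
        if any(x != 0 for x in row):
            rows[write] = row   # overwrite in place
            write += 1
    del rows[write:]            # single trim at the end
    return rows
```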

Message 6 of 7

@altenbach wrote:
To test a row for all zeroes, simply feed it through an "Equal To 0?" from the Comparison palette, then through an "And Array Elements" from the Boolean palette. Summing the array elements can fail once you allow negative numbers.
 
Attached is one other possibility.


You are of course right about the flaw in the summing. I just wasn't thinking past the problem as it was posed. I'm also impressed by your method of sifting through the array in place. Later in the day yesterday I noticed that your method would benefit one of the OpenG library routines and suggested as much. It looks like it will be incorporated there.

Thanks!

Message 7 of 7