07-01-2020 02:58 PM
See the lower half of the VI.
This will work for any number of columns, and any number of rows (but each column must have the same number of rows.)
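Since the VI itself isn't visible here, a minimal Python sketch of the same idea (every combination of one value per column, assuming equal-length columns) might look like this:

```python
from itertools import product

# Each column is a list of values; as noted above, every column
# must have the same number of rows for this layout.
columns = [["A0", "A1", "A2"],
           ["B0", "B1", "B2"],
           ["C0", "C1", "C2"]]

# All combinations, one value per column (rightmost column cycles fastest).
rows = list(product(*columns))
print(len(rows))  # 27 rows for 3 columns of 3 values each
```

The column contents and names are placeholders; the point is just that the row count is the product of the column lengths.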
07-01-2020 03:45 PM - edited 07-01-2020 03:48 PM
I assume you are aware that your code is faulty, because e.g. row [A0,B2,C0] occurs at least twice in your output.
It is also a really bad idea to test with square arrays. Here's a scalable solution that works with any possible array dimensions (until you run out of memory, of course).
Note: If you want to cycle the leftmost element fastest, use two inner FOR loops: first create the array of indices, then reverse the blue array before autoindexing on the loop where you index out the elements.
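For readers without the VI in front of them, the scalable index-based approach can be sketched in Python: map a single linear index to one index per column using quotient/remainder (the sizes below are just example column lengths):

```python
def combination(index, sizes):
    """Map a linear index to per-column indices via quotient/remainder.
    With this ordering the rightmost element cycles fastest; reverse
    `sizes` (and the result) to cycle the leftmost element fastest."""
    digits = []
    for n in reversed(sizes):
        index, r = divmod(index, n)
        digits.append(r)
    return list(reversed(digits))

sizes = [2, 3, 2]  # e.g. columns with 2, 3, and 2 rows
total = 1
for n in sizes:
    total *= n     # total number of combinations is the product of sizes

all_rows = [combination(i, sizes) for i in range(total)]
```

Because every linear index maps to a unique combination, no row is ever produced twice, and the columns can have different lengths.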
07-01-2020 03:58 PM
@RavensFan wrote:
See the lower half of the VI.
Ouch! This is either a FOR loop equivalent (if we are lucky! :)) or an infinite while loop (if we are unlucky and the equal comparison involving a DBL fails! :( Unlikely here, but still...).
07-01-2020 04:02 PM
I'm assuming that comparing an integer to an integer value stored in a double will still work.
You're right, I should've made that a For Loop with the original x^y wired to the N terminal.
07-01-2020 04:19 PM
@RavensFan wrote:
I'm assuming that integers compared to an integer within a double will still work.
I agree that you are probably right but the exponentiation operates purely on DBL (notice the coercion dots) and I have no idea what's under the hood (Intel MKL?) and if it is guaranteed that all possible integer inputs really result in true integers in DBL representation. For example if they internally use logarithms etc., all bets are off. I haven't tested this. Probably safer to set the output configuration of "-1" to I32. 😄
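The concern can be made concrete in Python (the exp/log route below is only a stand-in for whatever the exponentiation might do internally, not what LabVIEW is known to do):

```python
import math

# A DBL (IEEE 754 binary64) represents every integer exactly up to
# 2**53, so comparing small integer-valued doubles is safe in principle.
assert float(2**53 + 1) == float(2**53)  # first integer a DBL cannot hold

# But if the exponentiation were computed via logarithms internally,
# the result could land slightly off the exact integer, breaking an
# equality comparison on the DBL result:
x = math.exp(5 * math.log(3))  # "3**5" computed via exp/log
safe = round(x)                # coerce back to an exact integer (or use I32)
```

This is why configuring the output as I32 (or rounding before comparing) is the safer choice.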
07-01-2020 05:29 PM
This works... only problem is I have so many possibilities I keep getting memory errors... DOH! I guess there will just be some cases I will not be testing, lol.
Thanks for your help everyone! I really appreciate it!
07-01-2020 06:38 PM
Like I said, it gets exponential!
You may have to look at a Design of Experiments scenario where you vary multiple parameters between tests so you can reduce the number of tests. Then the final analysis methods have means of attributing which degrees of freedom had more effect on the results and which did not.
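To make the size of the problem concrete, here is a back-of-the-envelope comparison with made-up numbers (10 parameters, 5 levels each) between a full factorial sweep and the cheapest possible alternative, varying one factor at a time:

```python
# Full factorial: every combination of every parameter level.
levels, params = 5, 10
full = levels ** params
print(full)   # 9,765,625 test cases

# One-factor-at-a-time: baseline plus each level change in isolation
# (cheap, but captures no interactions between parameters; DoE methods
# like fractional factorials sit between these two extremes).
ofat = 1 + params * (levels - 1)
print(ofat)   # 41 test cases
```

The gap between those two numbers is exactly the space a Design of Experiments approach trades within.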
07-01-2020 08:04 PM - edited 07-01-2020 08:11 PM
Took a bit to figure it out but it was a really fun challenge, and I'm definitely gonna clean it up some and add it to my useful VI arsenal. This will work even if each attribute has a different number of values. Not sure if you will run into memory issues but it could maybe be optimized a bit more.
Config files would work nicely for attribute/value combinations because each section can be an attribute and each key can be a possible value, but you could get the "original array" easily enough from Excel, database, etc.
Hope this helps 😁
Saying "Thanks that fixed it" or "Thanks that answers my question" and not giving a Kudo or Marked Solution, is like telling your waiter they did a great job and not leaving a tip. Please, tip your waiters.
07-02-2020 02:09 AM
Typically you would represent ragged arrays as a 1D array of clusters, where each cluster contains a 1D array of strings. Now each inner array can have a different size. This has the added advantage that empty strings can also be valid values to be shuffled. 😉
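In Python terms the "array of clusters" structure is just a list of lists of different lengths, and the combination logic is unchanged (attribute names below are placeholders):

```python
from itertools import product

# Ragged "array of clusters": each attribute has its own number of
# values, and an empty string is still a legitimate value.
attributes = [["A0", "A1"],
              ["B0", "B1", "B2"],
              ["", "C1"]]        # empty string is a valid entry

rows = list(product(*attributes))
print(len(rows))  # 2 * 3 * 2 = 12 combinations
```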
@FireFist-Redhawk wrote:
...and I'm definitely gonna clean it up some and add it to my useful VI arsenal.
Here are some ideas. Arguably simpler. 😉 Currently it gives the same rows as yours, but in a different order. That can be fixed, of course, with a little more code.
07-02-2020 02:44 AM
@altenbach wrote:
Typically you would represent ragged arrays as 1D arrays of cluster where each cluster contains a 1D array of strings. Now each inner array can have a different size. This has the advantage that also empty strings can be a valid value to be shuffled. 😉
Here's how that could look.