04-20-2026 01:44 PM
The recently concluded GDevCon ASIA 2026 had a coding challenge (it is now open to all, see here) where interested participants needed to develop a bot VI to play a card game against other similarly coded bots. The coding challenge was bound to put everyone's skills to the test, so I decided to test mine right back! I wanted to win, of course, but I also wanted to use the challenge as an opportunity to level up my unit testing and test driven development (TDD) skills, and boy, did I have fun doing so!
The Setup
Contestants were provided a template in a zip file. The zip file contained a main VI with some inputs and outputs, some default typedefs, and empty folders where you could add the subVIs and other typedefs you created. Once you completed the logic inside the provided main VI, you zipped up all the folders and uploaded your code to the website. The site would then play your bot against those submitted by other contestants and rank them by how many matches each bot won. The submitted solution could include only VIs, subVIs, and typedefs; no projects, classes, libraries, or malleable VIs were allowed.
My Repository and Code Setup
Setting up a git repository was the obvious choice because I knew I might need to revert to older versions of the code. I am a proponent of creating multiple branches to try things out, but this time I wanted to try working with a single branch and use the commands git provides to restore the working copy to previous commits when needed. I learnt something new while experimenting with those git commands, but more on that in its own section towards the end.
The code setup took a little thinking. Although projects could not be part of the submitted solution, they could be used as scaffolding for all of my code and tests, and even better, for zipping up all of the code for uploading! If you are just starting out in LabVIEW, one piece of advice I would like to give is to start using projects early. They make life very easy and code organization a lot simpler than working without them.
After a little tinkering, my repo folder structure and project looked like this:
One advantage of using projects is that you do not need to mirror the folder structure on disk. As you may have noticed, the problem description sits next to the project file on disk, but I keep it inside a virtual folder in my project for better organization. When you open the project, the code is organized into the following virtual folders:
These settings let me implement the requirements of the challenge, namely, zipping everything inside the Source folder without including the folder itself. I could now do this by simply right-clicking the build spec and clicking "Build". Super convenient! The one sticking point was that there was no version-number placeholder, so I had to increment the attempt number in the build destination manually before each build.
Tests Organization
One thing I did not touch upon in my presentation at GDevCon ASIA 2026 is where to store your tests. You could do what I did here: store the tests in a separate folder, but keep them in the same project as your source code. This makes it very easy to write tests. The downside, though, is that if someone opens your project without installing the toolkits the tests require, the project will have broken code. The source code will still run, but the tests will be broken.
Another school of thought is to keep the tests in a separate project, which you open only when you want to write or run your tests. Your source-code project can then be shipped/deployed to others who do not have the testing framework installed. Both ways work; pick whichever is more convenient for you.
The Way of the TDD
The first thing I wanted to do was convert each card to a numeric value. So I wrote a parameterized test for it. Parameterized tests are a special kind of test you can create with LUnit (after you install the VIPM package). A single parameterized test is repeated across the set of parameters you define. You can also build the test description from the inputs and expected outputs. For example, when I run the "Test Card to Score" parameterized test, I see the following output:
I defined the test parameters (card typedef and score), built the test description, and wrote the test (which looks a little ugly because I had to handle and equate NaNs):
Then in a separate VI that the framework created for me, I defined the test cases:
One test, one VI to programmatically build the test descriptions, and a 90-element parameters array (programmatically generated) later, I had this test ready. Of course, I first made the test fail by having my VI output "Inf" for all cases, then filled in the logic and confirmed everything passed.
A similar VI I wanted to develop was "Card to Face", which maps each card to a distinct "Face Value". This was similar to "Card to Score", except that the score was capped at 10, while the face values go up to 13. This let me find identical cards, and so on. Same as above: create a parameterized test that fails, fill in the logic that makes it pass, confirm the tests pass, and refactor if needed.
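LUnit tests are LabVIEW VIs, so I cannot show them in text, but the parameterized-test idea translates to any language. Here is a rough Python sketch of the pattern, under my own assumptions: the names (`card_to_score`, `card_to_face`, the suit and rank tables) are mine, not the challenge template's, and the parameter list is a tiny sample rather than the full 90-element array.

```python
import math

# Hypothetical stand-ins for the challenge's typedefs -- all names are
# mine, not the challenge template's.
SUITS = {"Hearts", "Diamonds", "Clubs", "Spades"}
SCORES = {"Ace": 1, **{str(n): n for n in range(2, 11)},
          "Jack": 10, "Queen": 10, "King": 10}     # score capped at 10
FACES = {"Ace": 1, **{str(n): n for n in range(2, 11)},
         "Jack": 11, "Queen": 12, "King": 13}      # face values go to 13

def card_to_score(rank, suit):
    """Nonsensical combos (e.g. an 'Ace of Joker') yield NaN."""
    if suit not in SUITS or rank not in SCORES:
        return float("nan")
    return SCORES[rank]

def card_to_face(rank, suit):
    """Same shape as card_to_score, but distinct values up to 13."""
    if suit not in SUITS or rank not in FACES:
        return float("nan")
    return FACES[rank]

# One test body repeated across a parameter set -- the LUnit
# "parameterized test" idea, minus the generated descriptions.
PARAMS = [
    ("Ace",  "Hearts", 1,  1),
    ("10",   "Clubs",  10, 10),
    ("King", "Spades", 10, 13),
    ("Ace",  "Joker",  float("nan"), float("nan")),  # invalid pairing
]

def equal(a, b):
    # NaN != NaN, so equate NaNs explicitly (the "ugly" part of the test)
    return (math.isnan(a) and math.isnan(b)) or a == b

for rank, suit, exp_score, exp_face in PARAMS:
    assert equal(card_to_score(rank, suit), exp_score), (rank, suit)
    assert equal(card_to_face(rank, suit), exp_face), (rank, suit)
print("all parameterized cases passed")
```

The loop over `PARAMS` is the textual equivalent of one test VI being fed an array of parameter clusters; the `equal` helper mirrors the NaN-equating step the real test needed.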
Did I need these tests?
Honestly, when I started with tests for these bits of "primitive" code, I did ask myself, "Do I really need tests for these?" I knew I could code them to output the correct scores and face values, so did I really need to bother with tests? Turns out, I did.
First, as you may have noticed, the test cases output NaN in a few cases. These are inputs where the card and suit do not make sense when paired (such as an "Ace of Joker"). By writing the tests, I knew the only time I would see a NaN was for one of these weird combos. So if I did see a NaN in my actual simulator Main VI, it meant a card combination was not genuine; a genuine card combo would never generate NaN.

Second, I decided to refactor from a case structure to a map lookup and a variant attribute table lookup to speed up my code. (Spoiler alert: for some reason, the decision times (a benchmark metric provided by the web UI) increased with those lookups compared to calculating the output via case structures.) But every time I refactored, all I needed to do was run my tests and assure myself I had not broken anything by adding an incorrect lookup entry. I also used the same parameters array I had previously created for the tests to create said map and variant, so that was an added bonus!

And lastly, when I was writing the test for a particular VI, I was not thinking about what the rest of my code should be doing! I was focused on just what that VI should output given a particular input. It was amazing!
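In textual-language terms, that refactor is roughly trading a branch-per-case implementation (the LabVIEW case structure) for a prebuilt lookup table (the map / variant attribute table), with the same parameterized test guarding both. A small hedged sketch, with illustrative names and a tiny sample of the real 90-entry parameter array:

```python
import math

# The same (rank, suit) -> score parameter list drives both the test
# and the lookup table -- the "added bonus" mentioned above.
PARAMS = [
    (("Ace", "Spades"), 1),
    (("King", "Hearts"), 10),
    (("Ace", "Joker"), float("nan")),  # invalid combo
]

# Version 1: branch per case (the case-structure style)
def score_by_cases(card):
    rank, suit = card
    if suit == "Joker":
        return float("nan")
    if rank == "Ace":
        return 1
    if rank in ("10", "Jack", "Queen", "King"):
        return 10
    return int(rank)

# Version 2: prebuilt lookup (the map / variant-attribute style),
# seeded directly from the test's parameter array
LOOKUP = dict(PARAMS)
def score_by_lookup(card):
    return LOOKUP.get(card, float("nan"))

def equal(a, b):
    # equate NaNs explicitly, since NaN != NaN
    return (math.isnan(a) and math.isnan(b)) or a == b

# One parameterized test validates both implementations, so the
# refactor cannot silently break a case.
for card, expected in PARAMS:
    assert equal(score_by_cases(card), expected), card
    assert equal(score_by_lookup(card), expected), card
```

The point is not which version is faster (as noted above, the lookup was surprisingly slower in my case); it is that the same test run settles whether a refactor preserved behavior.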
Creating more tests and a bit of "Cheating"
I proceeded that way, creating tests for modules (converting between Face Value and Score, testing that I only output "None" from "Your Play" under the right circumstances and not any other time, etc.). I also "cheated" a little. There were one or two small, straightforward subVIs I created first, before realizing I might be better off with tests for them since I might need to refactor those VIs later. In those cases (just one or two), I wrote the parameterized test after the fact and ran it to ensure it passed once the code was developed. I am curious whether more experienced TDD developers catch themselves doing this from time to time (and if so, how often?).
Integration Tests (kinda)
After all my basic subVIs were in place, along with their associated tests, it was time to finish my player bot. I thought to myself that, since it was my whole application, it could not have a "unit" test. I confess I was also eager to see how my player bot would perform against the others, so I just finished the logic, submitted, and came first! However, as the other participants improved their logic and my rank started slipping, I realized I was wrong about not needing a test. You see, as I replayed the games I lost on the web UI, I came across scenarios where the real me would have played differently. To ensure my player VI behaved correctly, I created another parameterized test that I called "Integration Test Decision Logic". This test defined all the inputs (which I entered manually by pausing the move on the web UI) and the expected outputs of the player VI. When I ran the test, it would fail, because the VI was doing something else. Then I implemented the logic until the test passed. Whenever I found a new behavioral improvement to make, I added a new element to the parameters array and implemented logic that made the test pass.
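The shape of that integration test is worth sketching: each parameter set is a full game state (captured by pausing the move on the web UI) plus the move the bot should make. Everything below is hypothetical, the `decide_play` logic included; it stands in for the real player VI, whose actual strategy is not shown here.

```python
# Hypothetical sketch of the "Integration Test Decision Logic" idea.
# Each scenario is a captured game state plus the expected play;
# all field names and the toy strategy are mine, not the real bot's.
SCENARIOS = [
    # state (inputs paused on the web UI)              -> expected play
    ({"hand": ["King", "3"], "pile_top": "3", "opp_cards": 5}, "3"),
    ({"hand": ["King"], "pile_top": "Ace", "opp_cards": 1}, "None"),
]

def decide_play(state):
    """Toy decision logic: play a card matching the pile, else pass."""
    matches = [card for card in state["hand"] if card == state["pile_top"]]
    return matches[0] if matches else "None"

# Replaying a lost game reveals a better move? Add one more scenario
# here, watch it fail, then change decide_play until it passes.
for state, expected in SCENARIOS:
    assert decide_play(state) == expected, (state, expected)
```

Each "behavioral improvement" becomes one new element in `SCENARIOS`, and the old elements keep guarding the old behavior, which is exactly what made the refactoring later on safe.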
The best part? I could confirm none of my new changes were causing the old tests to fail! I also spent some time converting parts of my decision logic into subVIs that I needed to reuse in a few places, and then refactored those subVIs to remove unnecessary inputs. Having the tests helped ensure my changes did not break anything.
One more important thing to note is that I did have to refactor the tests. My initial version of the decision-logic tests did not have many parameters, because my code used very few inputs. As I kept adding new inputs, I had to modify the tests to account for them.
At the very end of the contest, I deliberately changed my strategy for when cards would be dropped, to improve my rank. I did not update the test cases to reflect this changed behavior. So if you run the tests now, you will see two failures:
The very fact that those two tests failed showed me my VI was behaving differently. Feel free to explore the failures to understand what the old behavior was and try to figure out what the new logic is doing. Consider it homework! 😁
Git Lesson
Being on a single branch, I ran into an issue. Some of my optimizations actually worsened my decision time in the UI, so I needed to go back to an older version of the code. I also did not want to remove the intervening commits (so a hard reset was out!). I just wanted the source code from the older commit, while keeping all the zip files from the latest commit.
The naive way to do this would have been to check out the older commit, copy everything I wanted to a temp folder, come back to the new commit, delete the source code, and paste from the temp folder. And I almost caved and did it. But a little experimentation helped me figure out a more elegant way:
git restore . --source HEAD~9
git restore "Zip for Submission"/. --source HEAD
git add .

The first line replaced the working copy with code from the tenth-latest commit (the commit nine commits behind HEAD). This also removed all the zips created after that point. The second line brought back the contents of the zips folder from the latest commit. The last line staged all the files. Beautiful, elegant, simple!
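You can try the trick safely in a throwaway repo. The sketch below builds a toy two-commit history (the file names and contents are illustrative, not my actual repo) and then runs the same two restores; note how the first restore also removes files that are tracked at HEAD but absent in the source commit, which is why the second restore is needed to bring the zips back.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

# Commit 1: original source plus the first submission zip
mkdir "Zip for Submission"
echo "v1 logic" > Player.vi.txt                # stand-in for source code
echo "attempt1" > "Zip for Submission/a1.zip"  # stand-in for a zip
git add . && git commit -qm "attempt 1"

# Commit 2: an "optimization" that turned out slower, plus its zip
echo "v2 logic (slower)" > Player.vi.txt
echo "attempt2" > "Zip for Submission/a2.zip"
git add . && git commit -qm "attempt 2"

# Roll the source back one commit, but keep the zips folder at HEAD
git restore . --source HEAD~1
git restore "Zip for Submission"/. --source HEAD
git add .

cat Player.vi.txt          # old logic is back
ls "Zip for Submission"    # both zips survive
```

The history itself is untouched: `git log` still shows both commits, and the rollback is just staged working-copy content you can commit as a new attempt.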
Want to explore the code and tests for yourself?
I made the repo public. Look at the tests first. (Leave the integration tests to the end.) Run the tests to see what the expected behavior should be for the VIs. Then study the VIs. Finally, look at the integration tests to understand the strategy change.
Note: You will need to install the LUnit and Parameterized Tests Add On packages from VIPM to run the tests.
If you have any comments/thoughts/questions, I look forward to hearing from you!