Tic Tac Toe Coding Challenge



altenbach wrote: ...Could you post them with a passworded diagram (instead of a removed diagram)? I cannot open them in LabVIEW 8.0, where I develop.

Sorry Christian, I never remember the pros and cons of diagram removal!
Here is a more compact, password-protected version (7.0).

Since it is slightly different from the previous one, please use this latest version as the provisional reference instead.

Some scores, averaged over 100 games and calculated according to the challenge rules (2 for a win, 1 for a draw):

CC ref player against the random player: 198 as first player, 188 as second player

CC ref player against itself: 108/92

My own score, against the CC ref player: 152/108
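
For anyone checking the arithmetic, here is a minimal sketch (in Python rather than LabVIEW, with hypothetical win/draw counts) of how a score like 198 falls out of the 2-per-win, 1-per-draw rule over 100 games:

def challenge_score(wins, draws):
    # Total points over a series of games: 2 per win, 1 per draw, 0 per loss.
    return 2 * wins + 1 * draws

# Hypothetical breakdown: 98 wins and 2 draws out of 100 games
# gives 2*98 + 1*2 = 198, the kind of score quoted above.
print(challenge_score(wins=98, draws=2))   # -> 198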

Chilly Charly    (aka CC)
Message 41 of 183
Hello Everybody,

Is there a chance that some nice person could save the templates and CC's version as LV 6.1 please?

I want to avoid having to install an evaluation version of LV 7.0 just to do the challenge......

Thanks,

Shane.
Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 42 of 183

This will be my last entry for a few days.

Although I don't think I've gone in the right direction, my best score against the ref player is now 176/150. Try to beat this!

I hope I'll be able to enjoy your progress. Good wiring!

 

Chilly Charly    (aka CC)
Message 43 of 183
Shane,
 
Here is a 6.1 version; I'm not sure it works.
Hope to see your contributions soon!

Message edited by chilly charly on 05-03-2006 12:07 PM

Chilly Charly    (aka CC)
Message 44 of 183


Bruce Ammons wrote: ... Kevin, you raise some interesting points.  I didn't expect the possibility of different outcomes from the same players.  Perhaps I will put a loop in so each match is played 100 times as you suggested.  The scoring system is designed to be automatic, so I don't have to set up every match.  Another possibility is running each player against the random player a million times or so to see how they do.  Then we could have a tie breaker for all the ones that win or draw 100% of the time...
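
The automated scoring loop Bruce describes might look roughly like this sketch (Python pseudocode, not the actual challenge harness; play_game is a hypothetical stand-in for calling the two contestants' VIs):

import random

def play_game(player_a, player_b):
    # Stand-in for one tic-tac-toe game between two players; the real
    # harness would call the contestants' VIs and return the outcome.
    return random.choice(['a', 'b', 'draw'])

def run_match(player_a, player_b, games=100):
    # Play `games` games and total the points per player
    # (2 for a win, 1 for a draw), as in the challenge scoring.
    score_a = score_b = 0
    for _ in range(games):
        result = play_game(player_a, player_b)
        if result == 'a':
            score_a += 2
        elif result == 'b':
            score_b += 2
        else:
            score_a += 1
            score_b += 1
    return score_a, score_b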

Bruce and Kevin,

One possible approach is to evaluate the way the opponent is playing the game and adapt one's strategy accordingly: I believe there are ways to always end up with a draw against strong opponents. However, systematically taking that attitude would be a mistake against weaker opponents, against which a winning strategy could be applied. Of course, this requires a few runs before deciding which approach is best. So I strongly support Kevin's suggestion to score the players over a significant number of runs. I'll post my first submission as soon as the evaluation details are decided.
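
Very loosely, that adaptive idea could be sketched like this (Python, with hypothetical strategy names; the real decision would of course live inside the LabVIEW player):

def choose_strategy(results_so_far, probe_games=10):
    # results_so_far: list of 'win' / 'draw' / 'loss' outcomes from earlier
    # games against the same opponent (names here are hypothetical).
    if len(results_so_far) < probe_games:
        return 'safe_draw'                 # not enough data yet: never lose
    win_rate = results_so_far.count('win') / len(results_so_far)
    if win_rate > 0.5:
        return 'aggressive'                # weak opponent: go for the win
    return 'safe_draw'                     # strong opponent: settle for draws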

Bruce, why do you think you can't compete? Even if you can't expect to be ranked as a normal competitor, since you'll have access to others' code (although that's not mandatory), I think you shouldn't remain a "passive" observer... 😉

Chilly Charly    (aka CC)
Message 45 of 183

Very nice CC!

I have lots of improvements to do!  😉

Message 46 of 183

I should be able to post something up tonight.  I threw some stuff together quick last night and faced off with CC ref.  As 1st player, I "succeeded in losing" slightly less often than CC.  As 2nd player, I got trounced.  With one extremely minor tweak in my evaluation function, I was able to only barely get beat as 2nd player.  When I was 1st player we had a dead heat.  I let it run for hours and we had over 2.5 million draws with no victories either way.

I have rethought my evaluation function a bit, and I plan on trying some new weightings that (for the moment) seem to make a little more sense.   As I considered more specific scenarios, I realized that I as a human would have occasionally chosen differently than my algorithm might.  I'll try to fix that tonight, and then I'll see how good my intuition is...

Bruce -- heartily agreed that we should let the competition progress a bit before tinkering with rules / scoring.  I think some tweaks *will* be helpful, but I'm not prepared to argue exactly *which* tweaks will be best.  Further discussion here should hopefully start leading to some fair degree of consensus.   I also support your goal to make this competition one where people can do well without being LabVIEW experts.  As many of us are seeing, it isn't terribly hard nor does it take very long to hack up a pretty decent algorithm.  Hopefully that'll result in dozens of entries and a very lively competition / discussion.  With several past challenges, I would tinker a bit but generally abandon the effort before completion due to time constraints.  The couple I saw through to completion *did* require quite a bit of time, and the winners clearly needed to call on their in-depth LabVIEW expertise.

Extremely vague outline of my base concept: Consider all possible plays, perform an evaluation on them, select the "best" play.  If there are several that tie for best, pick one of them at random.  At this point, the evaluation is purely static, based only on the current state of the board.  No trending from the past and no predictions of the future.
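
In conventional text-language terms that outline amounts to something like the following (Python; the board encoding and the evaluate function are assumptions, not Kevin's actual VI):

import random

def best_move(board, player, evaluate):
    # board: list of 9 cells ('X', 'O' or None); evaluate: any static
    # board-scoring function (higher = better for `player`).
    moves = [i for i, cell in enumerate(board) if cell is None]
    scored = []
    for m in moves:
        trial = board[:]
        trial[m] = player
        scored.append((evaluate(trial, player), m))
    best = max(score for score, _ in scored)
    # random tie-break among the equally good moves
    return random.choice([m for score, m in scored if score == best])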

Other vague thoughts: may want to break the algorithm down somewhat like chess -- one algorithm for the opening, something different for the mid-game, and yet something else for the end game.  My approach "feels like" more of a mid-game algorithm and is probably not too bad for openings either.  I'll probably mess around with some different endgame ideas though.
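
The opening / mid-game / end-game split could be dispatched on something as simple as the number of pieces already on the board; a hypothetical Python sketch, not Kevin's code:

def first_free(board, player):
    # Placeholder strategy: take the first free cell (stands in for a real phase strategy).
    return next(i for i, cell in enumerate(board) if cell is None)

# Hypothetical phase strategies; each would be a separately tuned algorithm.
opening_move = midgame_move = endgame_move = first_free

def choose_move(board, player):
    # Route to a phase-specific strategy by counting filled cells.
    filled = sum(cell is not None for cell in board)
    if filled <= 2:
        return opening_move(board, player)   # opening book / simple preferences
    elif filled <= 6:
        return midgame_move(board, player)   # static evaluation, as above
    return endgame_move(board, player)       # exhaustive look-ahead is cheap here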

Ta ta for now,

-Kevin P.

 

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 47 of 183


@Kevin Price wrote:

I should be able to post something up tonight.  I threw some stuff together quick last night and faced off with CC ref.  As 1st player, I "succeeded in losing" slightly less often than CC.  As 2nd player, I got trounced. 


I achieved similar results.

With a minor tweak, his wins increased by 10%, which is still far off when he is the 1st player, but they improved by approximately 20% when he is the 2nd player.  There were also twice as many draws (which is still insignificant...).  I haven't spent much time developing a gaming strategy, but thank you CC for setting the initial bar!  (You deserve many stars for that.)

R.

Message 48 of 183


altenbach wrote:

It seems that it is easy to fully analyze the problem and make a god-like program that fits in a few MB.



Let's call it Deep Toe 😄


LabVIEW, C'est LabVIEW

Message 49 of 183
OK, here's what I came up with.  Results are mixed.

When matched against CC's reference player, it does better at losing than CC's.  But when matched against the Random player, it does worse at losing than CC's.

Results for KP_2a (matched vs. CC_ref)
--------------------------------------
KP_2a plays 1st: 4.0% forced losses, 0 forced wins, 96% draws
KP_2a plays 2nd: 0 forced losses, 0.1% forced wins, 99.9% draws

Results for KP_2a (matched vs. Random)
--------------------------------------
KP_2a plays 1st: 86% forced losses, 0 forced wins, 14% draws
KP_2a plays 2nd: 75% forced losses, 0 forced wins, 25% draws


Re: discussion on scoring.  While KP_2a slightly beats out CCref head-to-head, CCref shows much more dominance against Random.  If ranked by win-loss record from head-to-head matchups, KP_2a wins.  But if ranked by "dominance", i.e., point total from all matches, CCref wins due to its dominance over the weak Random player.

As a point of reference regarding code contests where algorithms get matched up, check out the Roshambo (Rock-Paper-Scissors) Programming Competition page(s) starting here.  (And don't miss the tongue-in-cheek strategic gambits here.)  That's a game where no strategy can reliably beat Random play head-to-head, so the idea of dominance over weak players was used in the scoring system.  Another concept present was that to be awarded a match Win in a head-to-head match, you needed to beat your opponent by an amount that's more-or-less statistically significant.
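
The two ranking philosophies can be made concrete with a tiny example (Python; the match results below are invented purely to illustrate the point, not actual contest data):

# Points per 100-game match (2 per win, 1 per draw); numbers are invented.
matches = {
    ('KP_2a', 'CC_ref'):  (104, 96),   # KP_2a edges out CC_ref head-to-head
    ('KP_2a', 'Random'):  (160, 40),
    ('CC_ref', 'Random'): (195, 5),    # CC_ref dominates the weak Random player
}

totals = {}
for (a, b), (pa, pb) in matches.items():
    totals[a] = totals.get(a, 0) + pa
    totals[b] = totals.get(b, 0) + pb

# Head-to-head ranking: KP_2a beats CC_ref directly (104 > 96).
# Dominance ranking (total points): CC_ref wins, 291 vs 264.
print(sorted(totals.items(), key=lambda kv: -kv[1]))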

-Kevin P.

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 50 of 183