Latest: oddshrimp4.3 of 4 June 2014.
I participated in the 2010 Google AI Challenge game programming competition, Planet Wars, ranking 130 out of several thousand entrants with the entry oddshrimp. Now, three years later, I have returned to my old program to see how much better I could have done if only I were not so confused. And, to be fair, if I had other natural advantages like more time and the source code of my opponents. Answer: A lot!
Haskell source code downloads at the bottom, including the original contest entry and far stronger new bots. The latest beats all other than bocsimacko.
27 November 2010. The contest version. My goal was to get into the top 100, and going into the finals I thought my odds were good: I had just managed to add alpha-beta and that had to be an improvement, right? My tests showed that at least it was no worse, so I imagined I could hold the line as other bots also made eleventh-hour improvements.
But no, oddshrimp ended up at rank 130. More careful testing after the contest showed that, just as other contestants from #1 Gabor Melis to #350 krokokrusa found, alpha-beta was often a disimprovement, or at least difficult to make work well. I had shot myself in the foot with a classic weapon, the insufficiently tested last-minute change.
Looking back at it, this code is a disaster area. Rubble and fires everywhere. The evaluation function, with no particular justification, scores each planet according to its ships and growth as soon as the last fleet lands there, so that each planet is scored at a different time. I must have tested it and found it reasonable compared to some alternative, but I don’t remember that at all.... But peculiarities of the evaluator hardly matter, because the move generator is insane. It generates attacks against individual targets and ranks them according to an internal score which is at best loosely related to the global evaluator, and then instead of returning a thoughtfully selected set of alternatives it returns the set of the top attack, the top two compatible attacks, the top three compatible attacks, and so on. That is all the evaluator can choose between. It’s a surprise that the bot plays as well as it does, and I instantly understand its weird decisions to, for example, attack the enemy’s easily defended rear planets: “Oh, that’s because it’s insane.”
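To make the insanity concrete, the alternative sets it returned amounted to something like this sketch (my reconstruction of the behavior with invented types, not the original code):

```haskell
import Data.List (inits)

type Attack = String  -- stand-in for the real attack representation

-- Greedily keep each ranked attack that is compatible with the ones
-- already kept, then offer every nonempty prefix of that list as an
-- "alternative." The evaluator could choose only among these prefixes.
alternatives :: (Attack -> [Attack] -> Bool) -> [Attack] -> [[Attack]]
alternatives compatible ranked = drop 1 (inits (foldl keep [] ranked))
  where
    keep kept a = if compatible a kept then kept ++ [a] else kept
```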
I think I felt the time pressure of the contest too much! Gotta relax a little and think.
13 November 2013. For this version I turned off the anti-helpful alpha-beta, ripped out complicated features that did not improve results, and fixed bugs. A few of the bugs, at least. The overall behavior is similar, but like the code it is cleaner and less accident-prone.
I did not touch the best part, the positioner which moves ships to the front lines. Various people have called it “supply lines” or “reinforcement” and a few other names. It runs as a separate pass after the main move selection, and all it does is find planets closer to the enemy and splice in moves to forward ships there. It’s simple-minded but it works great for a bot at this strength level.
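In outline the pass amounts to something like this (a minimal sketch with types of my own; the real positioner is in the downloads below):

```haskell
import Data.List (minimumBy)
import Data.Ord (comparing)

type PlanetId = Int
type Dist = PlanetId -> PlanetId -> Int
data Move = Move { src, dst, ships :: Int } deriving Show

-- Turns for a planet's ships to reach the nearest enemy planet
-- (assumes at least one enemy planet exists).
frontDist :: Dist -> [PlanetId] -> PlanetId -> Int
frontDist dist enemies p = minimum [dist p e | e <- enemies]

-- For each friendly planet with idle ships, splice in a move that
-- forwards them to the nearest friendly planet closer to the front.
positionerPass :: Dist -> [PlanetId] -> [(PlanetId, Int)] -> [Move]
positionerPass dist enemies friendlies =
  [ Move p (forward p) idle
  | (p, idle) <- friendlies, idle > 0, forward p /= p ]
  where
    forward p =
      case [ q | (q, _) <- friendlies
               , frontDist dist enemies q < frontDist dist enemies p ] of
        []     -> p
        closer -> minimumBy (comparing (dist p)) closer
```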
Also unchanged since oddshrimp12 is another separate pass, the move splitter. I thought of it as my secret weapon, since I didn’t notice any other bot that seemed to behave similarly (though now I know that a lot of people had similar ideas—it’s just hard to notice). If a move was planned from A to B and there was a friendly planet X exactly in between, so that the distance A to X plus X to B was the same as A to B, then it would repoint the destination to X. The theory was that this would increase the bot’s later options; when the ships arrived at X, the situation might have changed so that B was no longer the best final destination. It helped in my tests when I added it—which was before the controversial map change during the contest.
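The rule fits in a few lines. Here is a sketch with invented types, not the contest code:

```haskell
import Data.List (find)

type PlanetId = Int
type Dist = PlanetId -> PlanetId -> Int
data Move = Move { src, dst, ships :: Int } deriving Show

-- Repoint a move A -> B to a friendly planet X exactly on the way,
-- i.e. where dist A X + dist X B == dist A B (distances in whole turns).
splitMove :: Dist -> [PlanetId] -> Move -> Move
splitMove dist friendly m@(Move a b n) =
  case find onTheWay friendly of
    Just x  -> Move a x n
    Nothing -> m
  where
    onTheWay x = x /= a && x /= b && dist a x + dist x b == dist a b
```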
My tests suggest that oddshrimp14 would have ranked around 90 if it had run in the final contest. And I reasonably could have done it, too, if I were only a little bit smarter.
26 November 2013. But if I were a lot smarter I could have done a lot better. For this version I junked the move generator and wrote a sane one from scratch. The bot simulates the future until the last fleet lands and stores it as Fate; this code is inherited, not new. It examines Fate for goals: Attack that enemy, take that neutral, defend this friendly planet. Goals which are not worth it or are infeasible are dropped immediately. Then it looks for moves to achieve each goal. It scores the moves in a principled way, by expected gain in ships by a cutoff time, which it calculates not by re-simulating the future but in a few lines of code with static analysis. It’s straightforward and fast and extremely limited: It offers only one alternative, so the evaluator isn’t needed; it chooses only one goal per turn; it can launch from more than one planet (toward the single goal) but all launches are immediate, so only planets which happen to be at the same distance from the goal can share the load.
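The shape of the pipeline, sketched with invented types (the real Fate is a full future simulation; here it is boiled down to a per-planet forecast, and the score is deliberately crude):

```haskell
type PlanetId = Int
data Owner = Me | Enemy | Neutral deriving (Eq, Show)

-- A planet's forecast owner and growth at the moment the last fleet lands.
data Forecast = Forecast { futureOwner :: Owner, growth :: Int }

data Goal = Attack PlanetId | TakeNeutral PlanetId | Defend PlanetId
  deriving Show

-- Read goals off the forecast: defend friendly planets forecast to be
-- lost, attack planets that end up enemy, take planets that stay neutral.
goalsFrom :: [(PlanetId, Owner, Forecast)] -> [Goal]
goalsFrom = concatMap pick
  where
    pick (p, now, f) = case (now, futureOwner f) of
      (Me, o) | o /= Me -> [Defend p]
      (_, Enemy)        -> [Attack p]
      (_, Neutral)      -> [TakeNeutral p]
      _                 -> []

-- A deliberately crude version of the static score: expected gain in
-- ships by the cutoff is growth banked over the remaining turns minus
-- the ships the capture costs.
scoreCapture :: Int -> Forecast -> Int -> Int
scoreCapture turnsToCutoff f cost = growth f * turnsToCutoff - cost
```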
It is already about the same strength as oddshrimp14, maybe even a little stronger. Simple and principled beats capable but haphazard. Its style of play is quite different; it goes for aimed hammer-blow attacks rather than the old bot’s hailstorm attacks. And it already shows the most striking feature of the oddshrimp2.x family: It has the killer instinct. When it gets a decisive advantage it finishes off the opponent quickly and viciously (well, as viciously as it can when launching against only one enemy planet per turn). Many contestants, even much stronger ones, were satisfied to exploit advantages slowly and look milquetoast in comparison.
28 November 2013. 2.2 extends the bare-bones move generator to achieve multiple goals at the same time. It returns the goal combination with the best score by essentially the same trivial static analysis. The other limitations stand. The goals are so few, and the move assignment so fast, that it tries to assign moves for every combination of goals with no simplifying assumptions (it’s order 2^n and still never times out; it could use dynamic programming to speed up assignment but doesn’t). That turned out to be a strength: It can discover, for example, that a combination of little goals is better than one big goal which uses up the available ships. Most top bots, including #1 bocsimacko and #2 iouri, make more complicated calculations and can’t afford to compare all alternatives. (Of course, that could also be a sign that it’s not a big advantage!)
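A toy version of the subset enumeration, with invented types and numbers, shows why the exhaustive comparison pays off:

```haskell
import Data.List (maximumBy, subsequences)
import Data.Ord (comparing)

data Goal = Goal { gName :: String, gCost, gGain :: Int } deriving Show

-- Score every subset of goals (2^n of them) and keep the best that
-- fits the ship budget. The empty subset is always feasible.
bestCombination :: Int -> [Goal] -> [Goal]
bestCombination shipsAvailable goals =
  maximumBy (comparing (sum . map gGain)) feasible
  where
    feasible = [ gs | gs <- subsequences goals
                    , sum (map gCost gs) <= shipsAvailable ]

-- Two small goals together beat the one big goal that eats the budget.
main :: IO ()
main = mapM_ print (bestCombination 50
  [ Goal "big attack" 45 30
  , Goal "take neutral" 20 18
  , Goal "defend" 25 16 ])
```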
This version is both clearly stronger and clearly incomplete, so I didn’t bother to run a large tournament to estimate its strength closely. The 1000 games I did run against 10 opponents suggest it might have come out in the 60s if it had been in the contest, but that could be off.
7 December 2013. 2.3 extends the move assignment step to find moves in the future, in a limited way. Move assignment works outward from the goal planet: If we don’t have enough ships at the nearest neighbors to achieve the goal, it moves on to the next nearest neighbors. This version relaxes the assumption that all launches are simultaneous, so that when it’s considering a launch from the ring of neighbors at range r from the goal, it also includes ships available at neighbors closer than r—all ships are to arrive at the goal simultaneously, so those ships will be launched in the future. Usually more ships are available in the future than now, and the algorithm favors shorter range attacks that give the enemy less reaction time, so it can happen that a goal is achieved entirely with future moves.
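The scheduling itself is the easy part. A sketch, with invented types:

```haskell
type PlanetId = Int
data Launch = Launch { from :: PlanetId, delay, count :: Int } deriving Show

-- Schedule launches so every fleet lands on the same turn: if the
-- farthest contributor is at range r, a planet at range d launches
-- r - d turns in the future (assumes at least one contributor).
schedule :: (PlanetId -> Int)   -- distance from each planet to the goal
         -> [(PlanetId, Int)]   -- contributing planets and ships committed
         -> [Launch]
schedule dist contributors =
  [ Launch p (r - dist p) n | (p, n) <- contributors ]
  where r = maximum [ dist p | (p, _) <- contributors ]
```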
It sounds basic, but it adds a lot of complexity. I still didn’t feel like re-simulating the future (even though the whole mechanism is right there) to find out the future ship availability when there is more than one launch at different times from the same planet, so I worked out another simple static analysis of Fate that gave a conservative but mathematically safe bound—which then led me to find rare but deadly bugs in the calculation of Fate. I also had to extend the positioner, which I had left unchanged since oddshrimp12, to avoid moving away ships that would be needed in the future. Positioning takes future moves into account in deciding where the front line is, which makes it occasionally position ships to a planet only because it intends to capture it later, sometimes to bad effect.
The bot does not save future moves between turns but recalculates them from scratch. That’s why it sometimes makes seemingly uncoordinated attacks, or saves up ships on a back planet for a while only to later send them to the front without making an attack. The opponent did something and oddshrimp changed its mind.
In this version I also experimented with the move splitter and ended up dropping it. No more secret weapon. Testing showed that the splitter made play slightly worse; it wasn’t statistically significant, but it has to be an improvement to be worth it. It may have lost its value when the map generator changed, way back during the contest.
12 December 2013. Version 2.4 has no big advances, only bug fixes and minor improvements. One improvement, the biggest single one, was to allow recapture as well as defense when a planet is forecast to be lost; it was an oversight in 2.1 that survived until now. This finally caused the exponential goal selection algorithm to bite in rare complicated situations. Speeding up move assignment was enough to work around the problem for now.
2.4 has both devastating weaknesses and impressive strengths. The most obvious weakness is that it doesn’t realize that the opponent gets to make moves too. It is perfectly happy, when starting a game where the two home planets are next to each other, to send away all its ships to take distant neutrals and get crushed instantly. Nearly as bad is its reluctance to take neutrals at certain times, especially late in the opening phase. That reluctance causes the bot to score only 98% against bswolf (rank 950). It also scores only 90% against InvaderZim (rank 396), when I would expect better. Most decisions are made by grossly simplified static analysis and are often unrealistic. “Sure, I can take that enemy planet!” Maybe not: the static analysis assumes that the enemy does not act until the next turn, so a planet at the end of a supply line (the most likely target) may survive. “I can send ships there, but then I’m using up all these ships.” No you’re not: surviving ships can relaunch from there, you idiot!
The deadly weaknesses make the strengths look even more impressive. I estimate that oddshrimp2.4 would have ranked roughly 35 in the contest—see tournament results. Though it barely has a concept of good ship positioning itself, when the opponent’s ships are in a bad position it pounces instantly. It’s that killer instinct. It especially shines in messy tactical situations like base trades—which its combination of hammer-blow attacks and neglect of defense can provoke. I’ve seen it win lost games against tough opponents when the would-be winner started a correct finishing attack but went wrong after oddshrimp counterattacked and stirred up a mess. Oddshrimp’s tactical skill doesn’t come from seeing the future clearly like bocsimacko’s, it comes from seeing every way to combine the immediate possibilities, a very different ability and apparently a valuable one that is not common in the contest bots.
There’s tons of room to improve the move generator, but all along I’ve been thinking of it as a move generator alone, one part of a larger program to be written around it later. That’s why I didn’t fix the weaknesses—I’m startled that they hurt as little as they do. It’s time to figure out how to take the opponent’s options into account and to look farther into the future, with smarter evaluation, and search, and game theory, and maybe (this is a key idea and I have yet to see anyone mention it by name) progressive move refinement. Once the wider program takes shape it should become clear which direction to take the move generator in, whether to make it fast and dumb or slow and thoughtful, or both for different purposes, or perhaps break it into multiple stages. It could be “Here are the likely moves. No good? OK, try the unlikely moves too,” or in the case of move refinement “Here’s my first try. Oh, then that happens? Let me fix it, here’s my second try.”
3 January 2014. The key improvement to start the oddshrimp3.x family is lookahead search. It works a little strangely. The bot assumes that it plays a move while the opponent sits still, then on the next turn it sits and the opponent plays a move in reaction. It’s a way to accept the opponent’s possibilities without the trouble of simultaneous moves.
I gave the move generator two levels of effort. For the bot’s moves, it lists every possibility that it thinks might be worthwhile, whether only 1 move (doing nothing) or thousands. If there are thousands, a time controller cuts the search short near the time limit. For the opponent, it picks only 1 move, the one that the static analyzer prefers, as in oddshrimp2.4. So the search can be seen either as a severely pruned 2-ply search or as the stump of a heavy playout.
Picking the opponent’s move may happen many times, so it must be blazing fast. Instead of Just Trying Everything, it now combines goals best-first, efficiently cuts off combinations that can’t beat the current best or that are infeasible because they need more ships than there are, and limits its work to no more than 100 trial move assignments (so that it usually, but not always, finds the best move). Time to pick the opponent’s move averages under 600 microseconds, and the assumed opponent’s play is slightly different from oddshrimp2.4’s but virtually the same strength (actually a few percent better, due to a bug fix and some tweaks).
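A branch-and-bound sketch in the same spirit (the real selector is best-first and caps trial move assignments; this toy considers goals in gain order and caps completed combinations, and all the names are mine):

```haskell
import Data.List (sortBy)
import Data.Ord (comparing, Down (..))

data Goal = Goal { cost, gain :: Int } deriving Show

-- Include or exclude each goal in order of gain, skip branches that are
-- over budget or whose optimistic bound can't beat the best so far, and
-- charge each completed combination against a fixed trial budget.
bestComboGain :: Int -> [Goal] -> Int
bestComboGain budget goals = snd (go (100 :: Int) sorted budget 0 0)
  where
    sorted = sortBy (comparing (Down . gain)) goals
    -- go trials pending shipsLeft gainSoFar bestSoFar -> (trials', best')
    go t _ _ acc best | t <= 0 = (t, max best acc)
    go t [] _ acc best = (t - 1, max best acc)
    go t (g : gs) left acc best
      | acc + sum (map gain (g : gs)) <= best = (t, best)  -- optimistic bound
      | cost g <= left =
          let (t', best') = go t gs (left - cost g) (acc + gain g) best
          in go t' gs left acc best'
      | otherwise = go t gs left acc best
```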
With an actual search, oddshrimp needs an actual evaluation function. The evaluator counts ships at a deadline, so its answers almost match the static analysis. But there are 3 adjustments to make it a little cleverer: 1. It recognizes wins and losses. It’s better to play on at 101 ships down than to be 100 ships down with the game over, because then all 100 remaining ships are the opponent’s. 2. It adds a tiny tradedown bonus to encourage trading ships when ahead and discourage it when behind. 3. It subtracts 0.5 ship if the bot is behind and the move is “do nothing”, to prefer action over inaction when losing.
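Boiled down, with made-up fields and constants, the three adjustments look something like this:

```haskell
data Eval = Eval
  { myShips, itsShips :: Int
  , iAmDead, itIsDead :: Bool
  , didNothing        :: Bool }

evaluate :: Eval -> Double
evaluate e
  | itIsDead e = 1 / 0      -- 1. a win beats any ship count,
  | iAmDead e  = -1 / 0     --    a loss is worse than any deficit
  | otherwise  = material + tradedown + lethargy
  where
    material  = fromIntegral (myShips e - itsShips e)
    total     = fromIntegral (myShips e + itsShips e)
    -- 2. a tiny bonus for trading ships when ahead, a penalty when behind
    tradedown = negate (0.001 * signum material * total)
    -- 3. half a ship against "do nothing" while losing
    lethargy  = if material < 0 && didNothing e then -0.5 else 0
```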
There is also a tweak to positioning and a few others, but search makes the big difference. I estimate oddshrimp3.1’s contest rank as roughly 20-25—see tournament results. Compared to the reckless oddshrimp2.4, the style is like night and... morning twilight, at least. Oddshrimp3.1 is still aggressive, but it is no longer reckless and avoids moves that lose a planet without compensation. It’s much tougher overall, stronger against most opponents and weaker against none, but unfortunately its new caution reinforces its slowness in taking neutrals. Its losses are most often due to not taking enough neutrals early. I know what to fix next.
8 January 2014. I have big ideas to improve the move generator, the search, and the evaluator, and I wasn’t intending to make another release until I’d found a major improvement, but I hit on a minor improvement that is interesting to write up. A strong program needs continuous attention to detail, it does not work to hew it into rough shape and only then attend to the fine points. (Compare the code with 3.1 and you’ll see oddments of four other experiments, one failed, one untested, two in progress.)
I changed the static analysis of ships needed to capture an enemy planet to be looser for the bot’s move on the current turn than for the opponent’s move on the next turn. For the bot’s move, the opponent is assumed to defend by sending ships from other planets only to the point that the other planets won’t be lost to ships in flight. Thus the opponent has fewer defenders and the bot can make smaller attacks, or in other words, aggression is easier and so it’s favored. Search will decide whether the aggression is successful, and the positioner will add ships to the attack force if appropriate. For the opponent’s move there is no further search, so the opponent cautiously assumes that the bot can defend with all ships, preventing the evaluator from overestimating the opponent’s chances. It’s a small step in a process of freeing the move generator from constraints that it doesn’t need any more.
The point of the improvement may be hard to feel without sensitive intuition for how search works. The test gains are just big enough and consistent enough against the top ranks to convince me that the improvement is likely real. I get best estimate elo 3420 and rank 21-22, only a hair better than 3.1, so it’s not enough to be sure. It’s satisfying that oddshrimp now beats its recent nemesis Manwe56 with a score of, fittingly, 56% (and the result is statistically significant; I laid in extra games to verify it).
24 January 2014. I thought the most natural way to make neutrals more attractive was to increase the evaluation horizon that is shared between the static analyzer and the evaluator. Then the cost in ships would be amortized over a longer period. I also sometimes saw oddshrimp go wrong by taking a neutral in the midst of a hot fight, so a dynamic horizon made sense. I settled on a scheme where the horizon increases when the sides are far apart and decreases when the closest friendly and enemy planets are close together. Taking neutrals more greedily in the early part of the game did not work as well, either by itself or as an adjustment to the distance scheme.
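The scheme fits in one function. A sketch with made-up constants, where frontGap is the distance between the closest friendly and enemy planets:

```haskell
-- The evaluation horizon stretches when the armies are far apart and
-- shrinks when the front lines touch; all three constants are mine.
horizon :: Int -> Int
horizon frontGap = clamp (base + frontGap - typicalGap)
  where
    base       = 20
    typicalGap = 8
    clamp      = max 10 . min 40
```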
I also fixed a primordial bug in calculating the number of ships needed for defense and lifted a limitation in figuring the ranges at which a goal is achievable, improvements which together made at least as big a difference as the dynamic horizon. I loosened restrictions on taking center neutrals which oddshrimp2.x versions needed to avoid suicidal moves. And I added a growth shortfall calculation to the evaluator so that the bot can understand earlier when the game has passed the point of no return and it is no longer possible to change the result. The growth shortfall makes virtually no contribution, since by the time you notice the abyss under your feet you have already stepped into it, but it is cheap and mostly accurate so I kept it.
This version shows a more balanced style of play. It is greedier about taking neutrals and less aggressive in snapping up your planets, but no less dangerous if you leave an opening. I estimate its rank as roughly 15—see tournament results.
Now that the glaring weaknesses are dimmed a bit, it’s time to work on smarts. #1 bocsimacko and #2 iouri both get a lot of their strength from smart evaluation that tells them what planets can be taken and how long they can be held. And I have already written similar code in Evaluator.hs that’s just waiting to be finished up and tried out. But I notice two things about oddshrimp3.3. First, it still plays 90% of its moves within 10 milliseconds, 1% of the time limit. Second, the move generator still wastes a lot of work and could be blinding fast instead of merely blazing fast. The bot has a lot of room to think harder. I kind of want to do the different thing and try smart search before smart evaluation. I think smart search can beat bocsimacko.
8 February 2014. One more weakness dazzled me: In oddshrimp3.3 the positioner still runs as a separate pass after move selection, meaning that the bot does not evaluate the same move that it plays. It made sense in oddshrimp2.4 which could not evaluate its moves, but with the search in oddshrimp3.1 it became a drawback. I was slow to realize. Oddshrimp3.4 runs the positioner for each move, so it now has a clearer understanding of what it’s getting itself into.
I also made a further shift toward greedy play with a change to the positioner that I call “hang-back positioning”. If there are neutrals behind the front lines, then rear planets which are closer to the front than the enemy is (so that they can react in time to attacks) hold on to their ships instead of sending them forward. This puts less pressure on the enemy but makes it easier to take neutrals, and taking neutrals was the weak spot. The bot’s play is now close to evenly balanced between greed and aggression.
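The test itself is simple. A sketch, with types of my own:

```haskell
type PlanetId = Int
type Dist = PlanetId -> PlanetId -> Int

-- A rear planet keeps its ships when neutrals remain behind the lines
-- and it can reach the front-line planet no later than any enemy planet
-- can (assumes at least one enemy planet).
hangsBack :: Dist -> [PlanetId] -> [PlanetId] -> PlanetId -> PlanetId -> Bool
hangsBack dist enemies neutralsBehind front p =
  not (null neutralsBehind)
    && dist p front <= minimum [dist e front | e <- enemies]
```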
Each of these changes tests as a clear improvement. The estimated rank is still 15—see tournament results—but that’s because there’s an unusually wide gap between rank 15 and rank 14.
23 February 2014. A new search algorithm can consider alternatives deep into the future. For no strong reason, I named it “lattice search”. Here was my thinking: Virtually all bots project the game state into the future considering ships in flight. That amounts to searching one branch of the full game tree, the branch where both players do nothing each turn. Game tree search traditionally considers alternatives at every choice point, but in this game there’s no need; the null move is often good, and we can pick choice points at any depth in the tree and simply assume that all other choices are to stand pat. Lattice search is just alpha-beta with two decisions: 1. Choice points may be at arbitrary depths in the tree (it’s still alpha-beta, so players alternate and simultaneous moves are not supported). This version has a hardcoded list of depths, but another idea is to choose depths based on game events. 2. At the first ply the move generator returns all choices. At each later choice point, the move generator provides one or two choices. They are the null move and (if it’s different) the static analyzer’s 1-ply move à la oddshrimp2.4. Fewer choices means deeper search.
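A compressed sketch of the idea, with stub types and the pruning left out (the real thing is alpha-beta):

```haskell
type State = ()  -- stand-in for the simulated game state

-- Only the listed plies branch; every other ply stands pat, advancing
-- the simulation one turn under the null move. The generator must
-- return at least one successor (it always includes the null move).
latticeSearch
  :: [Int]                       -- plies at which a player gets a choice
  -> Int                         -- total depth in plies
  -> (Int -> State -> [State])   -- successors at a choice ply (all moves
                                 -- at ply 1, null + static move later)
  -> (State -> State)            -- advance one turn under the null move
  -> (State -> Double)           -- evaluate from the bot's point of view
  -> State -> Double
latticeSearch choices maxDepth gen advance eval = go 1 True
  where
    go d maximizing s
      | d > maxDepth     = eval s
      | d `elem` choices = pick [ go (d + 1) (not maximizing) s'
                                | s' <- gen d s ]
      | otherwise        = go (d + 1) (not maximizing) (advance s)
      where pick = if maximizing then maximum else minimum
```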
The move generator has limitations in finding future moves and the evaluator is simple-minded, so I designed the algorithm thinking that the deepest choices should be far in the future. I tested depths up to 30 turns ahead. Total bust! The winner was the most traditional version, which makes choices on alternate turns with no gaps. Good one, world! Crush my creativity, why don’t you! Oddshrimp4.1 has a plain, fixed 6-ply search.
Luckily the arbitrary-depth feature was easy to implement. I put more work into speeding up reinforcement and ship-availability calculation and eliminating wasted computation in the move generator. Choosing a single 1-ply move is now too fast to measure correctly without writing a benchmark for the purpose, which I won’t bother with (I got a mean time under 1 microsecond, which I don’t trust because I think it’s less than the timer’s granularity). Generating all moves improved even more. The bot moves on average in around 30 milliseconds (though that includes moves where no alternatives are found at the top level and so no search is done), and only rarely has to be cut off at the time limit when there are too many ply-1 moves. Iterative deepening should be a big win in a future version.
Oddshrimp4.1’s best-estimate elo is about 3500, contest rank 5. It now puts up a fight against bocsimacko before losing, and it crushes weaker opponents with improved openings and short-term tactical skill. The melee battles that it mastered early went from “you have a whiff of hope if you start even” to “you’d better start with an edge or you’re dead for sure” against all opponents that I have. Old strengths are amplified, but now old weaknesses stand out. To my new eyes, everything looks weak. The positioner doesn’t consider the situation, the move generator is oblivious to some important moves, the evaluator is dumb, and the search itself is unable to do what I wanted and surface information from deep in the future.
19 April 2014. Following the pattern of past versions, I worked on every component other than the search. I made improvements to the move generator, the positioner, and the evaluator. No one improvement is decisive, but the overall effect is big.
The biggest improvements are in the move generator. “Safe availability”, a change to the ship availability calculation, enables desperado attacks (see Planet Wars strategy) and improves tactical skill in sharp positions. A complex of changes related to capturing neutrals was the top win. Version 4.1 classifies a neutral as safe to take or not depending on which side can reach it sooner, and takes safe neutrals with minimal force (the positioner may add more ships). 4.2 calculates whether each neutral could be sniped by the enemy after it is taken and classifies it as safe, contested, or unsafe. It takes safe neutrals with minimal force and contested neutrals with enough force to prevent immediate sniping (or skips them if that can’t be done). It makes the ply-1 moves a little cleaner, but the main benefit is in the extremely narrow lattice search, where the higher quality moves improve oddshrimp’s understanding not only of neutrals, but of everything that happens. Contested neutral captures had been polluting the search tree and obscuring the truth of the position. I was surprised by the degree of improvement.
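The classification reduces to a few comparisons. A sketch with simplified inputs of my own:

```haskell
data Safety = Safe | Contested | Unsafe deriving (Eq, Show)

classifyNeutral
  :: Int   -- my earliest arrival turn
  -> Int   -- the enemy's earliest arrival turn
  -> Int   -- extra ships I can land beyond the capture cost
  -> Int   -- ships the enemy could land right after the capture
  -> Safety
classifyNeutral myArrival itsArrival surplus snipe
  | myArrival >= itsArrival = Unsafe      -- the enemy gets there first
  | snipe <= 0              = Safe        -- take with minimal force
  | surplus > snipe         = Contested   -- take with anti-snipe garrison
  | otherwise               = Unsafe      -- snipe can't be prevented; skip
```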
The positioner is logically part of the move generator, but in the code it’s a separate module. I spent over a week trying to write a whole new positioner, but never got one better than the original. It looks simple but it’s surprisingly nuanced. Instead I taught the old positioner which neutrals the move generator was interested in taking, so that it no longer hangs back near boring neutrals. The change makes the bot more aggressive without making it less greedy—basically, it’s more efficient. I added two different methods for the positioner to propose alternate ways to position ships, on the theory that the search could compare the alternatives and choose the best one. It turned out that the search was not quite able to choose correctly, but I left the failed experiments in the code (turned off) because the next search will be better.
In the evaluator, I dropped the terms that measure doing nothing, tradedown, and growth shortfall because they didn’t help. I added two new terms that each improved results. One penalizes long moves for both sides. It tries to measure the opportunity cost of having ships in flight. The other is called “indirect wealth” and makes a planet more valuable if it is close to high-growth planets. That encourages oddshrimp to send its ships where they’ll be wanted later. Every nearby planet contributes to indirect wealth, even neutrals that are too expensive to take (I tried fancier measures but they worked no better).
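One plausible shape for the indirect wealth term (the exact form and discounting are mine, not necessarily the bot's):

```haskell
type PlanetId = Int

-- Each nearby planet contributes its growth discounted by distance;
-- neutrals count too, even ones too expensive to take.
indirectWealth :: (PlanetId -> PlanetId -> Int)  -- distance
               -> (PlanetId -> Int)              -- growth rate
               -> [PlanetId]                     -- all planets
               -> PlanetId -> Double
indirectWealth dist growth planets p =
  sum [ fromIntegral (growth q) / fromIntegral (1 + dist p q)
      | q <- planets, q /= p ]
```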
I fixed no bugs. I dug for coding errors and could not turn one up. Oddshrimp is starting to get fairly reliable.
Oddshrimp4.2 scores about 1:2 versus #1 bocsimacko and, head-to-head, outscores every other opponent I have including #2 iouri and #3 Slin. Compared to the previous version, 4.2 is smoother, more aggressive, and more accurate, avoiding both blunders and small slip-ups that 4.1 is prone to. It appears to have gained an inkling of what a space advantage is, and often wins by dominating center planets that threaten to snipe or counter and so restrict the opponent’s space to expand. I think that the new understanding of neutrals, the more aggressive positioner, and the indirect wealth evaluation all contribute to that inkling. (There is no evaluation term to encourage taking the center. I tried one and couldn’t get it to work well.)
Next up is search. I still think that smart search can beat bocsimacko, and oddshrimp’s search is not yet smart.
4 June 2014. Ack! I found out that I’m not using the best map generator. I knew that there was an old map generator and a new map generator, but I didn’t know that the new map generator went through different versions. The latest contest map generator, used to create the final maps, makes slightly larger and more interesting maps than the older version I’d been using, and more interesting is better. Oddshrimp4.2 plays worse on the new maps and is weaker than #2 iouri rather than stronger. The goalposts moved back.
4.3 regains my bragging rights. The search now uses iterative deepening. It searches 4 ply, then 6 ply, then 8 ply, as time is available. Increasing 2 ply at a time is an old trick to reduce unstable move ordering; the player that moves last in the tree has an advantage. Play with iterative deepening to 8 ply is about 20% slower than straight searching to 8 ply, since move ordering information is only saved at ply 1, but it still produces stronger play because the previous iteration offers a better move if the search doesn’t finish (which mostly happens in complex positions where good moves matter). Most moves can be searched to 10 or 12 ply within the time limit, and many deeper, but for some reason that made play worse. Apparently the extreme narrowness of the search is starting to pinch. Also the time controller is not quite capable enough, and it can allow deep searches to time out and lose. Have to fix that soon.
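The driver loop, sketched with names and time handling of my own (a real controller would shrink the budget as iterations complete; this one charges each iteration the full remainder):

```haskell
import System.Timeout (timeout)

-- Finish 4 ply, then try 6 and 8 within the budget, keeping the
-- deepest iteration that completes.
deepeningSearch :: (Int -> IO move) -> Int -> IO move
deepeningSearch searchToPly budgetMicros = do
  first <- searchToPly 4           -- the first iteration always completes
  go first [6, 8]
  where
    go best []       = return best
    go best (d : ds) = do
      r <- timeout budgetMicros (searchToPly d)
      case r of
        Just m  -> go m ds
        Nothing -> return best     -- out of time: keep the previous answer
```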
The search is still none too smart, but I found 2 changes to make it a little less dumb. First, the ply-skipping feature turned out to be a win if used with a light touch after ply 6, so the last iteration searches plies 1 2 3 4 5 6 8 9, skipping ply 7. I’m glad I didn’t rip that out. Second, and this is my favorite idea, the search is a little wider at ply 2. Oddshrimp4.2 blunders when the move generator wants the opponent to take neutrals at ply 2 even though the opponent has a deadly attack. Searching widely for the deadly attack at ply 2 was painfully expensive. Unhappy with pruning, I finally realized that I could call on the move generator to ignore neutrals and generate only aggressive moves. At ply 2 the move generator now generates 3 moves, the null move, its best move, and its best aggressive move ignoring neutrals. So the search width now tapers like this: Search all plausible moves at ply 1, up to 3 moves at ply 2, and up to 2 moves at other depths. It’s a cheap change, and it eliminates a class of blunders which sometimes caused quick losses.
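The resulting taper, with the generators abstracted away and stub types of my own:

```haskell
import Data.List (nub)

type State = ()
data Move = NullMove | Move String deriving (Eq, Show)  -- stand-ins

-- Everything plausible at ply 1, up to 3 moves at ply 2 (including the
-- aggressive move that ignores neutrals), up to 2 moves everywhere deeper.
movesAtPly :: (State -> [Move])    -- full generator
           -> (State -> Move)      -- static analyzer's best move
           -> (State -> Move)      -- best aggressive move, neutrals ignored
           -> Int -> State -> [Move]
movesAtPly full best aggressive ply s
  | ply == 1  = full s
  | ply == 2  = nub [NullMove, best s, aggressive s]
  | otherwise = nub [NullMove, best s]
```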
One tiny tweak is to add a small bonus in the move generator for captures of enemy planets. It makes the bot a tad more aggressive, but the real point is to allow it to take 0-growth enemies if nothing else beckons. It stops a tiny proportion of games from looking silly when the enemy survives to the end owning only a 0-growth planet surrounded by oddshrimp’s vast fleets.
Oddshrimp4.3 looks down on all opponents except bocsimacko. It is devastatingly strong in the middle game but comparatively much weaker in the opening, the most important phase of the game. My plans for the next major features are all aimed at improving opening play. 1. Convert the evaluator and the move generator to a Bayesian framework. That should allow better comparison of greed versus aggression in the opening. 2. Add smarts to the search with a quiescence search. That may include tuning move generation to prefer quieter moves deep in the tree. Most short losses are now due to miscalculating opening tactics because the search did not reach quiet positions that can be evaluated accurately. 3. Write a special-purpose (more expensive) opening move generator, or a special-purpose (more selective) opening search. Those three ideas should be enough to beat bocsimacko, and if not then I also have ideas about how to push smarts into the whole search. Oddshrimp’s nearly-constant-width search hardly makes sense for this game.
Haskell source code. If you don’t have Haskell, get the Haskell Platform. Compile with “ghc --make -O2 MyBot.hs” or for a small speed bump “ghc --make -O2 -funbox-strict-fields MyBot.hs”. The zip files include the bot only. One way to get a full setup is with a starter package from the original website. See this forum thread for other bots.
Versions 12 and 14 have an implicit “1.” in front: 1.12 and 1.14. I didn’t add it at the time because I didn’t realize there’d be more.
download | rank (mostly estimated) | test tournaments | date | notes
oddshrimp12.zip | 130 | | 27 Nov 2010 | contest entry
oddshrimp14.zip | 90, maybe a little worse | | 13 Nov 2013 | fixed-up contest entry
oddshrimp2.1.zip | 90, maybe a little better | | 26 Nov 2013 | new move generator
oddshrimp2.2.zip | | | 28 Nov 2013 | multiple targets
oddshrimp2.3.zip | | | 7 Dec 2013 | future moves
oddshrimp2.4.zip | about 35 | 2.4 results | 12 Dec 2013 |
oddshrimp3.1.zip | 20-25 | 3.1 results | 3 Jan 2014 | 2-ply search
oddshrimp3.2.zip | 20-25 | | 8 Jan 2014 |
oddshrimp3.3.zip | about 15, barely | 3.3 results | 24 Jan 2014 |
oddshrimp3.4.zip | about 15, solidly | 3.4 results | 8 Feb 2014 |
oddshrimp4.1.zip | about 5 | 4.1 results | 23 Feb 2014 | 6-ply “lattice search”
oddshrimp4.2.zip | about 2 | 4.2 results | 19 Apr 2014 |
oddshrimp4.3.zip | better than 2 | 4.3 results | 4 Jun 2014 | up to 8 ply
minor updates 15 July 2014
original version December 2013