
Steamhammer’s opening selection surprised me

Steamhammer lost a game to XIMP by Tomas Vajda, for the first time since December. When I saw the result listing I thought “That has to be due to a new bug.” But no, it is actually an unplanned effect of a deliberate feature.

After Steamhammer finishes the exploration phase of opening learning (the first 5 games), it does one extra check that I didn’t mention in my post on the topic (you can insert this as step 1.5 in that post’s list). If all games so far are wins, or if all games with the current expected enemy plan are wins and the enemy is believed to always follow the same plan (even if we don’t always recognize it), then Steamhammer takes a “business as usual” shortcut and bypasses the rest of its analysis. “So far I’ve been playing openings for this matchup, or openings to counter the enemy plan,” it thinks. “And it worked perfectly. All I have to do is more of the same.”

But since I carried over the old game record files, it wasn’t playing its standard openings against XIMP. Before this version, Steamhammer was hand-configured to play a specific counter that always wins. That opening is the only one to appear in the game records before this. Now the hand configuration is gone, and the business-as-usual shortcut applied, so Steamhammer chose a different opening to counter XIMP’s cannons, a 3 hatch ling bust. But XIMP makes too many cannons for that to work, and zerg lost.

Hmm... I have 3 different thoughts. First, this should be the only loss. In the next game, Steamhammer will notice “Hey, I know an opening that always wins. Play that!”

Second, retaining the old game records has been an interesting test. And I think I want to keep it up for a while, because I’m still learning from it. But I also need to delete the old game records and do a blank slate test, because that is how the system was designed to work. It will behave differently when learning from scratch. For one thing, it will likely take several tries to hit on its XIMP-beating opening, doing worse at first. For another, it should discover quickly that it knows a way to put up a fight against Iron, and start doing better in that matchup. I was originally planning to do a blank slate test on SAIL, but SAIL remains down.

Third, I think the opening selection has a lot of room for improvement. It barely takes the predicted plan into account, it is not skilled at taking the map into account, the 5 game exploration phase is sometimes too long and sometimes too short, the exploration that happens after the exploration phase is not tuned correctly, and the whole system is rather ad hoc. I should drop in a proper machine learning algorithm and figure out a correct exploration policy. Those steps will make the rest of the opening selection apparatus structurally simpler.
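One standard candidate for a "correct exploration policy" is a bandit algorithm such as UCB1, which balances exploiting the best-known opening against exploring under-sampled ones. A minimal sketch, with invented names (this is not Steamhammer's actual selection code):

```cpp
#include <cmath>
#include <vector>

// Win/loss tally for one opening in this matchup.
struct OpeningStats {
    int wins = 0;
    int games = 0;
};

// UCB1: pick the opening with the best optimistic winrate estimate,
// mean winrate plus an exploration bonus that shrinks as the opening
// is played more often. Untried openings are tried first.
int chooseOpening(const std::vector<OpeningStats> & openings, int totalGames)
{
    int best = 0;
    double bestScore = -1.0;
    for (size_t i = 0; i < openings.size(); ++i) {
        if (openings[i].games == 0) {
            return int(i);  // play every opening at least once
        }
        const double mean = double(openings[i].wins) / openings[i].games;
        const double bonus =
            std::sqrt(2.0 * std::log(double(totalGames)) / openings[i].games);
        const double score = mean + bonus;
        if (score > bestScore) {
            bestScore = score;
            best = int(i);
        }
    }
    return best;
}
```

A policy like this would subsume both the fixed 5-game exploration phase and the later tuned exploration: the bonus term decides automatically when an opening has been sampled enough.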

Still a lot to do!


Comments

McRave on :

It's good you're keeping with your strict rule of no hardcoding. I have a lot of respect for you and for any code which doesn't contain:

if (Broodwar->enemy()->getName())

I think that's true progress in BWAPI.

jtolmar on :

One thing you could try is putting Steamhammer's version numbers in its match logs. Each version/matchup has an unknown true winrate and some number of win/loss samples. You can use Fisher's exact test or a chi-squared test to find the probability that those two winrates are the same, and if that probability gets below some threshold, you know to start ignoring old data.
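jtolmar's idea could be sketched with the chi-squared test on a 2x2 table of (old version, new version) x (wins, losses). A minimal illustration with invented names; the 3.841 threshold is the standard chi-squared critical value for 1 degree of freedom at p = 0.05:

```cpp
// Chi-squared statistic for a 2x2 contingency table:
// (oldWins, oldLosses) vs. (newWins, newLosses).
double chiSquared2x2(double w1, double l1, double w2, double l2)
{
    const double n = w1 + l1 + w2 + l2;
    // Expected counts under the null hypothesis of equal winrates.
    const double ew1 = (w1 + l1) * (w1 + w2) / n;
    const double el1 = (w1 + l1) * (l1 + l2) / n;
    const double ew2 = (w2 + l2) * (w1 + w2) / n;
    const double el2 = (w2 + l2) * (l1 + l2) / n;
    auto term = [](double o, double e) { return (o - e) * (o - e) / e; };
    return term(w1, ew1) + term(l1, el1) + term(w2, ew2) + term(l2, el2);
}

// True if the two versions' winrates plausibly differ, in which case
// the old version's game records could be discounted or discarded.
bool winratesDiffer(double w1, double l1, double w2, double l2)
{
    const double critical95 = 3.841;  // chi-squared, 1 df, p = 0.05
    return chiSquared2x2(w1, l1, w2, l2) > critical95;
}
```

With small samples (the usual case in bot ladders) Fisher's exact test would be more accurate than this chi-squared approximation, at the cost of a slightly longer implementation.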
