
CIG 2018 - 8 game limit and other problems

I checked ISAMind and verified that it is affected by the same 8 game problem as Locutus and McRave: Its learning files store data for only 8 games total, not about 125 games as expected. Tscmoo also has a suspiciously small amount of learning data and may be affected. Ziabot has a problem with its learning data, but it looks like a different problem. Other bots appear unaffected, as far as I can judge—I could be wrong, because I don’t understand how they all work.

It’s unclear what effect the 8 game problem had on the tournament. In the best case, the CIG organizers pulled data incorrectly and the tournament itself ran normally. That seems unlikely to me. More likely, learning data for some bots was lost 8 rounds before the end. In that case, it is possible that most of the tournament ran normally, and one error near the end did not much affect results. The fact that the affected bots finished high supports the hypothesis—though, like PurpleWave, they could have finished high because they’re that good even when handicapped. In the worst case, there may have been repeated problems throughout the tournament. I’ll see if I can think of a way to use the detailed results log to narrow down the possibilities.
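One rough way to narrow it down: a minimal C++ sketch, assuming the detailed results log can be exported as a CSV with one game per line in the form round,bot1,bot2,winner (the file name and column layout here are my assumptions, not the actual CIG log format). It prints a bot's win rate per block of rounds, so a one-time loss of learning data near the end should show up as a dip in the last block or two, while repeated problems would show repeated dips.

#include <cctype>
#include <fstream>
#include <iostream>
#include <map>
#include <sstream>
#include <string>
#include <utility>

// Sketch: per-block win rate of one bot, from an assumed CSV export of the
// detailed results log with lines like "round,bot1,bot2,winner".
int main(int argc, char **argv)
{
    const std::string bot = argc > 1 ? argv[1] : "Locutus";
    const int blockSize = 10;                      // rounds per block
    std::map<int, std::pair<int, int>> blocks;     // block -> {wins, games}

    std::ifstream log("cig2018_results.csv");      // assumed file name
    std::string line;
    while (std::getline(log, line))
    {
        std::stringstream ss(line);
        std::string round, bot1, bot2, winner;
        std::getline(ss, round, ',');
        std::getline(ss, bot1, ',');
        std::getline(ss, bot2, ',');
        std::getline(ss, winner, ',');
        if (bot1 != bot && bot2 != bot) continue;  // game doesn't involve this bot
        if (round.empty() || !std::isdigit((unsigned char)round[0])) continue;  // skip a header line
        const int block = std::stoi(round) / blockSize;
        blocks[block].second += 1;
        if (winner == bot) blocks[block].first += 1;
    }

    // A single loss of learning data near the end should show as a dip in
    // the final blocks; repeated problems would show repeated dips.
    for (const auto &b : blocks)
    {
        const double rate = b.second.second ? 100.0 * b.second.first / b.second.second : 0.0;
        std::cout << "rounds " << b.first * blockSize << "-"
                  << (b.first + 1) * blockSize - 1 << ": "
                  << rate << "% over " << b.second.second << " games\n";
    }
    return 0;
}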

I feel that CIG 2018 had a lot of problems.

  • The 8 game problem, affecting Locutus, McRave, ISAMind, and possibly Tscmoo.
  • JVM bots did not write learning data at all, affecting PurpleWave, Tyr, and Ecgberht.
  • Ziabot’s learning problem (it might be a bot bug rather than a tournament bug, but Zia has always been reliable for me).
  • Overkill’s build order breakdown.

That’s a large proportion of the entrants affected by tournament surprises of one kind or another. What other problems are there that I haven’t noticed? I’ve only watched a few replays so far.

When I wrote to the CIG organizers to warn them that Steamhammer might crash a lot, they sent back what I found to be a rude reply, implying that giving them a heads-up was the wrong thing to do. That is of course down to language and cultural differences. But still, communicating with the participants is part of running a tournament.

I may skip CIG next year.

Comments

Barcode on :

Ouch! Well, luckily there are enough other tournaments! Looking forward to Steamhammer's performance on SSCAIT this year.

MicroDK on :

Yah, a lot of problems with CIG 2018. The worst being lack of communication about these problems with learning data. Did anyone notify the organizers about the problems with learning data? And did they get a reply from the organizers? Luckily Microwave does not seem to be affected. LetaBot is also affected by the 8 game problem, though it has 9 games in its write data file.

LetaBot on :

My bot isn't affected, because there is no learning in my CIG 2018 bot. It would be nice to have the extra output in full, but that is more for my own analysis.

Quatari on :

I don't know whether you follow the SSCAIT Discord, so I'll pass on what I wrote there:

The 8-9 rounds problem also affected CUNYbot and LetaBot. Zia and CUNYbot had problems learning, but that is Zia's own fault (and CUNYbot's fault, the 8-9 rounds problem notwithstanding) because they do not use a different file per opponent like the instructions on the Rules page say you should. LetaBot also doesn't use a different filename per opponent, so data was clobbered, but it didn't affect the win rates for CIG 2018 because LetaBot doesn't use the data for learning purposes. The data for tscmoo looks complete to me (exactly 125 records for every opponent), so if it turns out that it did have a problem learning, maybe it was due to LF6 or due to bugs within tscmoo, as opposed to a problem with how the competition was run. The data for all the other bots that you didn't mention looks complete, as far as I can tell once you allow for crashes and for how the bots are coded, e.g. Microwave only has relative win/loss numbers capped at 10 for each build order, and Overkill writes extra records.
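For bots that handle their own file I/O, keeping data separate per opponent only takes a few lines. Here is a minimal C++ sketch against BWAPI (the file names and the wins/losses record format are purely illustrative, not any particular bot's actual code):

#include <BWAPI.h>
#include <fstream>
#include <string>

// Sketch: one learning file per opponent, as the rules describe.
std::string opponentFile()
{
    // Keying the file on the enemy's name keeps each opponent's data
    // separate, so results against one bot never clobber another's.
    return BWAPI::Broodwar->enemy()->getName() + ".txt";
}

void loadLearningData(int &wins, int &losses)
{
    // The tournament manager copies last round's write folder into read.
    std::ifstream in("bwapi-data/read/" + opponentFile());
    if (!(in >> wins >> losses)) { wins = 0; losses = 0; }
}

void saveLearningData(int wins, int losses)
{
    // Always write to bwapi-data/write; read may be missing or read-only.
    std::ofstream out("bwapi-data/write/" + opponentFile());
    out << wins << ' ' << losses << '\n';
}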

From what I can see, it looks like the read/write folders were cleared for some bots (CUNYbot, ISAMind, Locutus, McRave, LetaBot) but not for other bots, near the end of the run, sometime between game number 40813 and game number 40975 (i.e. sometime between 13:15:42 and 16:00:24 on 7 Aug 2018). It's hard to figure out which games the data corresponds to for bots that don't store the map name / timestamp for each game (e.g. just the total wins and total losses for each strategy), or only write something when the game ends (not also write something when the game starts). Note that the data that was stored for these bots all looks fine (i.e. contains 125 rounds ignoring the rough figures for expected crashes): Overkill, SRbotOne, Stormbreaker, MegaBot, tscmoo, UAlbertaBot, ZZZKBot, Aiur, Microwave (and Steamhammer), and Zia (in spite of Zia not using a different filename for each opponent).

I e-mailed the CIG organizers and got a reply about why only the last 8-9 rounds of data were retained for Locutus, McRave, ISAMind, CUNYBot, LetaBot. Paraphrasing what I can, because their translation to English isn't great: They said they don't know what happened - nobody touched the computers while the competition was running, and their computer resource management team investigated and couldn't find the problem. They assume that the agents managed by the tournament manager at the time of the problem (guessing CUNYbot, ISAMind, Locutus, McRave, LetaBot) were missing due to a crash by something else. They will try other ways to prevent this issue from recurring at the next competition.

They also seemed to think that learning was working for those bots throughout the competition until near the end (as opposed to not working at all until near the end), then the data was lost, and only the data since then was retained. It is not so clear to me that that is true, though - I think checking it would require some analysis of the results, looking only at the win rate graphs of those bots against the bots that don't have learning capabilities (i.e. ignore results against bots like tscmoo that learn, and look at whether the trend is climbing/falling vs level). I also told them about why learning didn't work for PurpleWave, Tyr, Ecgberht, but more about that later...

Re. learning not working at all for PurpleWave, Tyr, Ecgberht:

Ecgberht didn't include run_proxy.bat in his submission, so it seems to be a problem with what the competition organizers wrote in run_proxy.bat. Learning worked for tscmoo because he included a run_proxy.bat file in his submission that uses hardcoded paths but is ok apart from that, i.e.:
cd C:\tm\starcraft\
C:\tm\starcraft\bwapi-data\ai\tscmoo.exe
It would have been more portable to write just:
bwapi-data\AI\tscmoo.exe
PurpleWave didn't include run_proxy.bat in his submission, so I think the competition organizers wrote this (which is wrong):
cd C:\tm\starcraft\bwapi-data\AI
java -jar PurpleWave.jar
Similarly for Tyr.

Note: The other client/proxy bots are Korean and Sling. In their questionnaire answers, Korean said they would use File I/O. Neither Sling nor Korean has any logic to write to files, though. Korean and Sling appear to have been set up correctly (even though their submissions don't contain run_proxy.bat).

By the way, I really love your blog - it's fantastic!

Quatari on :

Just to explain what was wrong with the run_proxy.bat files: I investigated, and it appears that the CIG organizers wrote a run_proxy.bat for each of them (because those authors' submissions didn't include a run_proxy.bat file to use) that uses the wrong working directory. E.g. for Ecgberht, a run_proxy.bat file containing:
cd C:\tm\starcraft\bwapi-data\AI
java -jar Ecgberht.jar

The above is incorrect. Instead, it should have contained e.g. just this (the cd command shouldn't be necessary, and cd commands should be discouraged anyway, because competitions like SSCAIT have rules forbidding changing the working directory):
java -Djava.library.path=bwapi-data\AI -jar bwapi-data\AI\Ecgberht.jar
or just this if it doesn't need any libraries:
java -jar bwapi-data\AI\Ecgberht.jar

Note that by default, the working directory is the Starcraft program folder (e.g. C:\TM\Starcraft if that is where Starcraft is installed), not the C:\TM\Starcraft\bwapi-data\AI directory, so run_proxy.bat shouldn't need to change the working directory, and neither should any bots while they are starting/running.

I suggested to the CIG organizers that next year they check that the bots which state they use File I/O in their questionnaire actually produce data in the read/write folders, as a sanity check that the run_proxy.bat file is working and that the bot's logic for writing data is working. I also suggested adding a note to the submission instructions that client/proxy bots should submit a run_proxy.bat file that starts the bot without changing the working directory (the working directory will actually be the Starcraft program folder, not the bwapi-data\AI folder under it), e.g. just:
java -Djava.library.path=bwapi-data\AI -jar bwapi-data\AI\MyBot.jar
or just:
bwapi-data\ai\MyBot.exe

Jay Scott on :

A very thorough analysis. I am impressed. There should be something in the rules and/or submission instructions about run_proxy, so that people have a chance to get it right themselves. It’s not an obvious point; the tournament manager’s workings are opaque.
