TY - GEN
T1 - Replay-based strategy prediction and build order adaptation for StarCraft AI bots
AU - Cho, Ho Chul
AU - Kim, Kyung Joong
AU - Cho, Sung Bae
PY - 2013
Y1 - 2013
AB - StarCraft is a real-time strategy (RTS) game in which the choice of strategy has a strong impact on the final outcome. For human players, the most important early-game decisions are selecting a strategy and recognizing the opponent's strategy as quickly as possible. Because of the 'fog of war', a player must send a scouting unit into the opponent's hidden territory and predict the opponent's strategy from the partially observed information. Expert players are familiar with the relationships between build orders and can change their current build order when it is weak against the opponent's strategy. Players in AI competitions, however, behave quite differently from those in human leagues: they usually follow a pre-selected build order and rarely change it during the game. In fact, computer players show little interest in recognizing the opponent's strategy, and scouting units are used only in a limited manner, because implementing scouting behavior and changing the build order based on scouting information is not a trivial problem. In this paper, we propose to use replays to predict the opponent's strategy and to decide whether to change the build order. Experimental results on public replay files show that the proposed method predicts the opponent's strategy accurately and increases the chance of winning the game.
UR - http://www.scopus.com/inward/record.url?scp=84892392317&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84892392317&partnerID=8YFLogxK
U2 - 10.1109/CIG.2013.6633666
DO - 10.1109/CIG.2013.6633666
M3 - Conference contribution
AN - SCOPUS:84892392317
SN - 9781467353113
T3 - IEEE Conference on Computational Intelligence and Games, CIG
BT - 2013 IEEE Conference on Computational Intelligence in Games, CIG 2013
T2 - 2013 IEEE Conference on Computational Intelligence in Games, CIG 2013
Y2 - 11 August 2013 through 13 August 2013
ER -