Hyper-Q Learning of Mixed Strategies in Multi-Player Normal Form Games

This paper proposes an extension of Q-Learning, dubbed "Hyper-Q" Learning,
which can learn mixed strategies in multi-player normal-form (matrix) games or stochastic games. Factors governing the possible convergence of Hyper-Q learning are addressed, including observability of the opponents' mixed strategies. A model-free Bayesian technique is proposed for estimating an opponent's mixed strategy from the history of its observed actions. Hyper-Q is tested in Rock-Paper-Scissors against an Infinitesimal Gradient Ascent (IGA) player and a Policy Hill Climbing (PHC) player. The Hyper-Q learner is able to significantly exploit both of these opponents, and with Bayesian estimation it achieves much better results than with simple Exponential Moving Average (EMA) estimation.
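To make the idea concrete, here is a minimal sketch (not from the report; the grid resolution, learning rates, and the fixed biased opponent are illustrative assumptions) of a tabular Hyper-Q learner in Rock-Paper-Scissors. Mixed strategies are discretized on a simplex grid, the opponent's mixed strategy is estimated with a simple Exponential Moving Average, and the Q-function is defined over (estimated opponent strategy, own strategy) pairs.

```python
# Hedged sketch: Hyper-Q with EMA opponent-strategy estimation in
# Rock-Paper-Scissors.  Grid size, step sizes, and the biased opponent
# are assumptions for illustration, not the paper's exact settings.
import numpy as np

rng = np.random.default_rng(0)

# Payoff to the row (Hyper-Q) player: rows/cols are rock, paper, scissors.
PAYOFF = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]])

# Discretize the probability simplex: all mixed strategies whose entries
# are multiples of 1/N.
N = 10
GRID = [np.array([i, j, N - i - j]) / N
        for i in range(N + 1) for j in range(N + 1 - i)]

def nearest(p):
    """Index of the grid point closest to probability vector p."""
    return int(np.argmin([np.linalg.norm(p - g) for g in GRID]))

# Q table indexed by (estimated opponent strategy, own strategy) grid points.
Q = np.zeros((len(GRID), len(GRID)))

alpha, gamma = 0.1, 0.9   # Q-learning step size and discount
mu = 0.1                  # EMA estimation rate
epsilon = 0.1             # exploration probability

y_hat = np.ones(3) / 3    # running estimate of the opponent's mixed strategy

for t in range(20000):
    s = nearest(y_hat)
    # Epsilon-greedy choice of our own mixed strategy (a grid point).
    if rng.random() < epsilon:
        a_idx = int(rng.integers(len(GRID)))
    else:
        a_idx = int(np.argmax(Q[s]))
    x = GRID[a_idx]

    # Sample concrete actions; the opponent plays a fixed biased strategy here.
    my_act = rng.choice(3, p=x)
    opp_act = rng.choice(3, p=[0.5, 0.3, 0.2])
    r = PAYOFF[my_act, opp_act]

    # EMA update of the opponent-strategy estimate from the observed action.
    obs = np.zeros(3)
    obs[opp_act] = 1.0
    y_hat = (1 - mu) * y_hat + mu * obs

    # Standard Q-learning update over the "hyper" state (estimated strategy).
    s_next = nearest(y_hat)
    Q[s, a_idx] += alpha * (r + gamma * Q[s_next].max() - Q[s, a_idx])

print("Greedy reply to the biased opponent:",
      GRID[int(np.argmax(Q[nearest(y_hat)]))])
```

The discretized simplex stands in for a function approximator over mixed strategies; the paper's Bayesian estimator would replace the EMA line with a posterior over opponent strategies computed from the recent action history.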

By: Gerald J. Tesauro

Published as: IBM Research Report RC22801, 2003


