Reinforcement Learning for Dynamic Pricing in Service Markets

We study price dynamics in an electronic service market consisting of buyers and competing service providers, where each provider has limited capacity to serve buyers. We characterize price dynamics in a two-seller market in which buyers use comparison-shopping agents to learn the price and expected delay at each service provider. Each seller employs an automated pricing agent that resets its price at random intervals so as to maximize the seller's expected profit. We develop a Q-learning algorithm for the pricing agent and present a comparative experimental study against various other adaptive strategies.
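
For concreteness, the sketch below shows one way such a Q-learning pricing agent could be structured: a discrete grid of admissible prices, an epsilon-greedy choice over that grid at each repricing instant, and a one-step Q-learning update driven by observed per-period profit. The state encoding (competitor's posted price and own queue length), the reward shape, and all parameter values are illustrative assumptions rather than the report's exact formulation.

```python
import random
from collections import defaultdict

class QLearningPricingAgent:
    """One seller's pricing agent: tabular Q-learning over a discrete price grid."""

    def __init__(self, price_levels, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.price_levels = price_levels      # admissible posted prices (assumed discrete)
        self.alpha = alpha                    # learning rate
        self.gamma = gamma                    # discount factor
        self.epsilon = epsilon                # exploration probability
        self.q = defaultdict(float)           # Q[(state, price)] -> estimated value

    def choose_price(self, state):
        # epsilon-greedy choice over the price grid at a repricing instant
        if random.random() < self.epsilon:
            return random.choice(self.price_levels)
        return max(self.price_levels, key=lambda p: self.q[(state, p)])

    def update(self, state, price, profit, next_state):
        # one-step Q-learning backup on the profit observed since the last repricing
        best_next = max(self.q[(next_state, p)] for p in self.price_levels)
        target = profit + self.gamma * best_next
        self.q[(state, price)] += self.alpha * (target - self.q[(state, price)])

# Illustrative use: state = (competitor's last posted price, own queue length)
agent = QLearningPricingAgent(price_levels=[4, 5, 6, 7, 8])
price = agent.choose_price(state=(6, 2))
agent.update(state=(6, 2), price=price, profit=12.0, next_state=(5, 3))
```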

Further, we present a new multi-time-scale actor-critic algorithm for multi-agent learning in the underlying stochastic games. We report preliminary experimental results on the convergence of the proposed algorithm in a degenerate version of the dynamic pricing game and in iterated general-sum bi-matrix games.
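
As a rough illustration of the multi-time-scale actor-critic idea in the bi-matrix setting, the sketch below lets each player run a fast critic that tracks per-action payoffs and a slow actor that takes softmax policy-gradient steps using the critic's estimates. The two step sizes, the softmax parameterization, and the example payoff matrices are assumptions made for illustration; the report's algorithm and its exact update rules may differ.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - np.max(x))
    return z / z.sum()

class ActorCriticPlayer:
    """Two-time-scale learner: fast critic over actions, slow softmax actor."""

    def __init__(self, n_actions, critic_step=0.05, actor_step=0.005):
        self.theta = np.zeros(n_actions)   # actor parameters (updated on the slow scale)
        self.q = np.zeros(n_actions)       # critic's per-action payoff estimates (fast scale)
        self.critic_step = critic_step
        self.actor_step = actor_step

    def act(self, rng):
        return rng.choice(len(self.theta), p=softmax(self.theta))

    def update(self, action, reward):
        # critic: fast time scale, tracks the payoff of the action just played
        self.q[action] += self.critic_step * (reward - self.q[action])
        # actor: slow time scale, softmax policy-gradient step toward actions
        # the critic currently rates above the policy's average payoff
        pi = softmax(self.theta)
        self.theta += self.actor_step * pi * (self.q - pi @ self.q)

# Iterated play on an illustrative 2x2 general-sum bi-matrix game
A = np.array([[3.0, 0.0], [5.0, 1.0]])   # row player's payoffs
B = np.array([[3.0, 5.0], [0.0, 1.0]])   # column player's payoffs
rng = np.random.default_rng(0)
row, col = ActorCriticPlayer(2), ActorCriticPlayer(2)
for _ in range(20000):
    a, b = row.act(rng), col.act(rng)
    row.update(a, A[a, b])
    col.update(b, B[a, b])
print(softmax(row.theta), softmax(col.theta))
```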

By: K Ravikumar, Gaurav Batra, Rohin Saluja

Published in: RI02006, 2002
