Sunday, November 04, 2007

Smoothed Analysis of the Price of Anarchy

Given some system in which a lot of selfish agents are interacting (a game), we would like to predict what might happen. If we have some way of measuring the quality of an outcome, we might want to know how much worse an outcome the selfish players might light upon compared to the best solution possible. But what will the selfish players do? Typically, people assume that they will play according to a Nash equilibrium. This is a solution concept with lots of problems for computationally limited agents, which I have written about here. Let's just assume for this post that players do fall into Nash equilibrium. (This isn't crazy in certain games, like potential games, in which natural iterative better-response play eventually leads to a Nash equilibrium.) Which Nash equilibrium will the agents end up in?

Typically we skirt this issue by defining the "price of anarchy" to be the ratio of the value of the worst equilibrium to the value of the optimal solution -- a pessimistic bound. Will we really end up in the worst Nash equilibrium?

A related problem comes up in the design of algorithms: how should we measure the running time of an algorithm? The worst-case running time? Some notion of average case? Average-case running time is misleading because our notion of "average" probably won't coincide with the distribution actual inputs are drawn from. But worst-case running time can also be misleading if all worst-case inputs are somehow contrived and "brittle" to small changes. Smoothed analysis is a compromise: the smoothed complexity of an algorithm is measured by taking a worst-case input, but then measuring the running time of the algorithm with respect to small perturbations of that worst-case input. The idea is that if the bad inputs are extremely brittle to perturbation, we are unlikely to actually encounter them.

So maybe we can do a smoothed analysis of the price of anarchy. In a recent paper, Christine Chung, Katrina Ligett, Kirk Pruhs, and I propose studying the stochastically stable states of a game -- the states in which computationally simple agents, who make mistakes (noise) in their play with some small probability each turn, end up spending most of their time. In general, the solution concept that this leads to can be completely different from the Nash equilibria of the game, but in potential games, it turns out that the stochastically stable states are a subset of the Nash equilibria. What does this mean? The stochastically stable states in these games are the Nash equilibria that are resilient to these occasional mistakes in play, and the price of anarchy taken over the worst-case stochastically stable states ("the price of stochastic anarchy") can be viewed as a smoothed analysis of the price of anarchy -- the worst we are likely to encounter in a noisy, imperfect world.

Here's an example. Suppose Aaron and Bob each have a program they want to run, and there are two computers, X and Y. Aaron's program costs epsilon to run on X, and 1 to run on Y. Bob's program costs epsilon to run on Y, and 1 to run on X. Each player chooses which machine to run his program on, and pays the sum of the costs of all programs on the machine he picked. For example, if both Aaron and Bob pick machine X, they both pay 1 + epsilon.

The optimal solution is obvious: Aaron plays on X, Bob plays on Y, and both pay epsilon. This is also a Nash equilibrium -- no player can unilaterally improve his payoff by switching machines (Bob could switch machines, but that would increase his cost from epsilon to 1 + epsilon).

If instead Aaron plays on Y and Bob plays on X, both players pay 1, and this is again a Nash equilibrium, although a much worse one -- either player could switch machines, but because of congestion, doing so would increase his cost from 1 to 1 + epsilon.
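For concreteness, here is a small Python sketch of this analysis -- not from the paper, with epsilon fixed at 0.01 and all names my own -- that enumerates the four joint choices in this game, checks which are Nash equilibria, and computes the price of anarchy:

```python
# Two-player load balancing game from the post.
# EPS = 0.01 is an arbitrary illustrative choice of epsilon.
EPS = 0.01

# Base cost of each player's program on each machine.
BASE = {"Aaron": {"X": EPS, "Y": 1.0}, "Bob": {"X": 1.0, "Y": EPS}}

def costs(aaron_machine, bob_machine):
    """Return (Aaron's cost, Bob's cost): each player pays the total
    load on the machine he picked."""
    a, b = BASE["Aaron"][aaron_machine], BASE["Bob"][bob_machine]
    if aaron_machine == bob_machine:  # congestion: both pay the sum
        a = b = a + b
    return a, b

def is_nash(a_m, b_m):
    """Neither player can lower his own cost by unilaterally switching."""
    a_cost, b_cost = costs(a_m, b_m)
    other = {"X": "Y", "Y": "X"}
    return (costs(other[a_m], b_m)[0] >= a_cost
            and costs(a_m, other[b_m])[1] >= b_cost)

equilibria = [(a, b) for a in "XY" for b in "XY" if is_nash(a, b)]
# equilibria == [('X', 'Y'), ('Y', 'X')]: the good and the bad equilibrium

worst = max(sum(costs(a, b)) for a, b in equilibria)        # 2.0
best = min(sum(costs(a, b)) for a in "XY" for b in "XY")    # 2 * EPS
# worst / best is 1/EPS -- the price of anarchy, which blows up as EPS -> 0
```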

So the price of anarchy of this game is 1/epsilon, which can be arbitrarily large. But is the bad equilibrium really realistic? If Aaron is playing on Y every turn and Bob is playing on X, then if Aaron makes a mistake one turn (and plays on X), Bob can take advantage of this and lower his cost by moving to Y. The optimal equilibrium, on the other hand, is resilient to mistakes like this -- both players are already paying the smallest cost that they could ever pay in this game. It turns out that although the price of anarchy in the 2-player "load balancing" game that I just described is unbounded, the price of stochastic anarchy is just 2. The price of anarchy gave us an unrealistic measure of how bad things could get because of the existence of bad, but brittle, equilibria.
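The one-mistake argument can be checked mechanically too. A minimal sketch (again with epsilon fixed at 0.01 and names of my own choosing -- this illustrates the post's intuition, not the dynamics analyzed in the paper):

```python
EPS = 0.01
BASE = {"Aaron": {"X": EPS, "Y": 1.0}, "Bob": {"X": 1.0, "Y": EPS}}

def cost(player, machine, other, other_machine):
    """A player's cost: his program's base cost, plus the other
    player's base cost if they share a machine."""
    c = BASE[player][machine]
    if machine == other_machine:
        c += BASE[other][other_machine]
    return c

# Bad equilibrium: Aaron on Y, Bob on X, both paying 1. Suppose Aaron
# slips for one turn and plays X. Bob's best response is now to move:
bobs_response = min("XY", key=lambda m: cost("Bob", m, "Aaron", "X"))
# bobs_response == 'Y', landing the players in the optimal state.

# The good equilibrium is resilient in the sense the post describes:
# there, each player already pays the least he can ever pay in this game.
aarons_minimum = min(cost("Aaron", m, "Bob", o) for m in "XY" for o in "XY")
# aarons_minimum == EPS, exactly what Aaron pays in the good equilibrium
```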

In our paper, we consider the general case of the n-player, m-machine load balancing game, and prove that the price of stochastic anarchy is bounded. What other games could benefit from a "smoothed" analysis of the price of anarchy?

UPDATE: Our paper is now available here.


Anonymous said...

Is the paper available?

Aaron said...


It will be shortly. Send me an email if you want a heads up.