Wednesday, September 04, 2013

Coordination when Information is Scarce: How Privacy can Help.

This post is (hopefully) the first in a series of posts about how tools and ideas from differential privacy can be useful in solving problems in game theory. The connection might at first be surprising, but it is in fact very natural. The field of differential privacy studies algorithms whose inputs are generated by the reports of \(n\) players, and it provides tools for bounding, in a precise, probabilistic sense, the sensitivity of the algorithm's output to unilateral changes in the input by any single one of these players. This notion of unilateral deviation is very similar to the notions of stability at equilibrium that game theory is concerned with, and this is the root of the connection. For a general overview of the area, let me plug the survey on Differential Privacy and Mechanism Design that Mallesh and I wrote for the most recent issue of SIGecom Exchanges. In this post, I'll discuss a result that Michael Kearns, Mallesh Pai, Jon Ullman, and I have had kicking around for a while. We only recently hit upon the interpretation of the result that I'll discuss here, though, which I rather like. For more details, you can check out the arXiv paper, which we revised a couple of weeks ago. I should add that the story that accompanies our technical contributions is the result of discussions with many people as we've gone around presenting it. Tim Roughgarden and Ricky Vohra deserve special thanks.



In game theory, there are several popular abstractions for what players in a game know about their opponents.

  • In games of complete information, players know exactly the utility functions that each of their opponents is trying to optimize. 
  • In games of incomplete information, players don't know what their opponents' utility functions are. In Bayesian games, players' utility functions are parameterized by "types", which are drawn from a known (possibly correlated) prior distribution. Agents know the distribution from which their opponents' types are drawn, but not the types themselves. 

In games of complete information, the standard solution concept is the Nash equilibrium: informally, everyone should be playing an action such that no single player can benefit by making a unilateral deviation. Players can randomize, so actions are evaluated in expectation over the random choices of one's opponents, whose utility functions are known.

In games of incomplete information, strategies are thought of as mappings from types to actions. The standard solution concept is the "Bayes Nash Equilibrium", which informally states that everyone should be playing a strategy such that no single player can benefit by making a unilateral deviation, where now strategies are evaluated in expectation both over the random choices of one's opponents, and also over the random realization of the types of one's opponents, as drawn from the known prior distribution.
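
To pin down notation (a standard formalization; the symbols here are mine, not taken from the post's references): a mixed strategy profile \(\sigma\) is a Nash equilibrium if for every player \(i\) and every alternative action \(a_i'\),

\[ \mathbb{E}_{a \sim \sigma}\big[u_i(a)\big] \;\ge\; \mathbb{E}_{a_{-i} \sim \sigma_{-i}}\big[u_i(a_i', a_{-i})\big]. \]

In the Bayesian setting, player \(i\)'s utility \(u_i(\cdot\,; t_i)\) depends on her type \(t_i\), strategies are maps \(s_i\) from types to actions, and a strategy profile \(s\) is a Bayes-Nash equilibrium if for every player \(i\), every type \(t_i\), and every deviating action \(a_i'\),

\[ \mathbb{E}_{t_{-i} \mid t_i}\Big[u_i\big(s_i(t_i), s_{-i}(t_{-i}); t_i\big)\Big] \;\ge\; \mathbb{E}_{t_{-i} \mid t_i}\Big[u_i\big(a_i', s_{-i}(t_{-i}); t_i\big)\Big], \]

where the expectation is over the other players' types drawn from the prior conditioned on \(t_i\) (and over any randomization in the strategies).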

Games of complete information are attractive for a number of reasons, not least of which is that they are much cleaner and more analytically tractable than games of incomplete information. However, when talking about large \(n\)-player games, where players don't all know each other, have limited abilities to communicate, and might think of their types as proprietary information, games of incomplete information are probably a more reasonable model of reality.

Unfortunately, the quality of Bayes-Nash equilibria can be much worse than that of the corresponding Nash equilibria of the full information game, even in the worst case over type realizations. The reason is that not knowing the utility functions of your opponents can make coordination very difficult. Consider the following toy example, borrowed from Bhawalkar and Roughgarden:

There are \(n\) players, and \(n + \sqrt{n}\) "goods" to be distributed. Each player \(i\) has utility \(u_{i,j} \in \{0,1\}\) for receiving a good \(j\). The game is played as follows: each player \(i\) names a single good \(s_i\), and each good \(j\) is allocated at random to one of the players who named it. Now suppose that types are drawn from the following distribution. The goods are randomly partitioned into two sets, \(S\) and \(T\), with \(|S| = n\) and \(|T| = \sqrt{n}\). For every player \(i\) and good \(j \in T\), \(u_{i,j} = 1\). A perfect matching \(\mu\) between the agents and the goods in \(S\) is selected at random. Agents have value \(u_{i,\mu(i)} = 1\) for the good \(\mu(i) \in S\) that they are matched to, and value \(u_{i,j} = 0\) for every good \(j \neq \mu(i)\) in \(S\) that they are not matched to. In other words, every player has a "special" good in \(S\) that she uniquely desires, and everyone desires all of the goods in \(T\).

If we are in the complete information setting, then since everyone has the option of requesting her special good \(s_i = \mu(i)\), in any reasonable equilibrium (with nobody requesting a good they don't want) \(n\) goods are allocated to bidders who desire them, for a total social welfare of \(n\). In the incomplete information setting, however, because of the symmetry of the distribution, no player can distinguish her special good from amongst the \(\sqrt{n} + 1\) goods that she values. It's not hard to see that in this case, in equilibrium only \(O(\sqrt{n})\) goods get allocated.
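
To get a feel for the size of the gap, here is a quick simulation of the incomplete information game (my own illustrative sketch, not code from the paper), in which each player follows the natural symmetric strategy of requesting a uniformly random good from among the \(\sqrt{n} + 1\) goods she values:

```python
import math
import random
from collections import defaultdict

def incomplete_info_welfare(n, trials=20):
    """Average welfare in the toy allocation game when each player requests a
    uniformly random good from among the sqrt(n) + 1 goods she values."""
    root = math.isqrt(n)
    total = 0
    for _ in range(trials):
        goods = list(range(n + root))
        random.shuffle(goods)
        S, T = goods[:n], goods[n:]        # |S| = n special goods, |T| = sqrt(n) common goods
        mu = {i: S[i] for i in range(n)}   # player i's unique "special" good in S

        requests = defaultdict(list)
        for i in range(n):
            requests[random.choice(T + [mu[i]])].append(i)

        # Every requester values the good she names, so each good that receives
        # at least one request contributes 1 to welfare, whoever wins it.
        total += len(requests)
    return total / trials

for n in [100, 400, 1600]:
    print(f"n = {n}: complete info welfare = {n}, "
          f"incomplete info welfare ~ {incomplete_info_welfare(n):.1f}")
```

Roughly \(\sqrt{n}\) players happen to request their own special good, and the \(\sqrt{n}\) goods in \(T\) absorb nearly all of the remaining requests, so the simulated welfare hovers around \(2\sqrt{n}\) rather than \(n\).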

So if we find ourselves in a setting of incomplete information, we might nevertheless prefer to implement an equilibrium of the game of complete information defined by the realized types of the players. How can we do that, especially if we have limited power to modify the game?

One tempting solution is just to augment the game to allow players to publicly announce their types (and thus make the game one of complete information). Of course, equilibria of the complete information game might not be unique, so to help solve the coordination problem, we could introduce a weak "proxy" that players can choose whether or not to use. Players can "opt in" to the proxy, which means that they report their type to the proxy, which then recommends an action for them to play; at that point they are free to follow the recommendation or not. Alternatively, they can "opt out" of the proxy, which means they never report their type, and instead just choose an action on their own. It would be nice if we could design a proxy such that:

  1. Simultaneously for every prior on agent types, it is a Bayes-Nash equilibrium for agents to opt in to the proxy and then follow its suggested action, and
  2. Given that all players behave as in (1), the resulting play forms an equilibrium of the complete information game induced by the actual, realized types of the players. 
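
To make the interaction concrete, here is a minimal sketch of the augmented game described above (the interface and names are my own, purely for illustration): opted-in players report their types and receive suggestions that they are free to ignore; opted-out players never report and act on their own.

```python
from typing import Callable, Dict, List

def run_augmented_game(
    types: List[str],                                   # realized types, one per player
    opt_in: List[bool],                                 # which players use the proxy
    proxy: Callable[[Dict[int, str]], Dict[int, str]],  # reports -> suggested actions
    follow: List[bool],                                 # does an opted-in player obey?
    outside_play: Callable[[int, str], str],            # how a player acts on her own
) -> List[str]:
    reports = {i: t for i, t in enumerate(types) if opt_in[i]}
    suggestions = proxy(reports)  # the proxy sees only the opt-in reports
    return [
        suggestions[i] if opt_in[i] and follow[i] else outside_play(i, types[i])
        for i in range(len(types))
    ]
```

The proxy we are after is one for which "everyone opts in and follows" is a Bayes-Nash equilibrium of this augmented game, and for which the resulting action profile is an equilibrium of the complete information game on the realized types.
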
It's tempting to think that to design such a proxy, it is sufficient to have it compute a Nash equilibrium of the complete information game defined by the types of the players who opt in, and then suggest that each player play her part of this Nash equilibrium. After all, if everyone is opting in and playing their part of a Nash equilibrium, how can you do better than to do the same? By definition, in a Nash equilibrium, your suggested action is a best response given what all other players are playing. And in fact, in the toy allocation example we discussed above, this works, since there is an equilibrium of the complete information game that is simultaneously optimal for all players.

In general, however, this approach fails. The flaw in our reasoning is that by opting out, you change what the proxy is computing: it is now computing a different equilibrium, for a different game. So by opting out, you are not merely making a unilateral deviation from a Nash equilibrium (which cannot be beneficial); you are potentially dramatically changing what actions all of the other players are playing. The Nash equilibrium condition does not protect against deviations like that, and in fact it's not hard to construct an example in which this is a real problem.

Consider, e.g., toy example #2: there are \(n\) players who each have one of two types -- (M)ountain or (B)each. Each player has two actions -- they can go to the beach, or go to the mountain. Players prefer the activity that corresponds to their own type, but they also like company. So if a \(p\) fraction of people go to the mountain, an M type gets utility \(10\cdot p\) if he goes to the mountain, and \(5\cdot (1-p)\) if he goes to the beach; a B type gets utility \(5\cdot p\) if she goes to the mountain, and \(10 \cdot (1-p)\) if she goes to the beach. A proxy that suggests that all players go to the beach if type M is in the majority, and otherwise suggests that all agents go to the mountain, always computes a Nash equilibrium of the complete information game defined by the reported types. Nevertheless, any player who is pivotal has an incentive to opt out of the proxy, since this causes the proxy to send all of the other players to her preferred location.
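
To see the failure numerically, here is a small sketch of that majority proxy (again, my own illustration): with \(n = 101\) players and a bare M majority, a pivotal M player earns 5 by opting in and following, but 10 by opting out and flipping what the proxy tells everyone else.

```python
def utility(own_type, own_action, p_mountain):
    """Payoffs in the Beach/Mountain game; p_mountain is the fraction of all
    players (including yourself) who end up at the mountain."""
    if own_type == 'M':
        return 10 * p_mountain if own_action == 'mountain' else 5 * (1 - p_mountain)
    return 5 * p_mountain if own_action == 'mountain' else 10 * (1 - p_mountain)

def naive_proxy(reports):
    """Beach for everyone if M is the majority of the reports received,
    otherwise mountain for everyone."""
    m_count = sum(1 for t in reports if t == 'M')
    return 'beach' if m_count > len(reports) - m_count else 'mountain'

n = 101
types = ['M'] * 51 + ['B'] * 50   # M holds a bare majority; player 0 (type M) is pivotal

# Everyone opts in: the proxy recommends the beach and everyone follows.
u_in = utility('M', naive_proxy(types), p_mountain=0.0)   # 5.0

# Player 0 opts out: the other 100 reports are tied, so the proxy sends all the
# other players to the mountain, and player 0 happily joins them there.
assert naive_proxy(types[1:]) == 'mountain'
u_out = utility('M', 'mountain', p_mountain=1.0)           # 10.0

print(f"opt in and follow: {u_in},  opt out: {u_out}")
```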

But what if it were possible to compute an equilibrium in such a way that whether or not any player \(i\) opted in had very little effect on the distribution over actions suggested by the proxy to all other players \(j \neq i\)? In this case, the problem would be solved: any unilateral deviation from the all-opt-in-and-follow-the-proxy's-suggestion solution would not (substantially) affect the play of one's opponents, and so they would all continue playing their part of an equilibrium of the complete information game defined on all players' realized types. The equilibrium condition would now directly guarantee that such a deviation could not be beneficial.

So to design a good proxy, it's not enough just to have an algorithm that computes an equilibrium of the game defined by the reported types, but it is enough if that algorithm also satisfies a strong enough stability condition. It turns out that a sufficient "stability" condition is a variant of differential privacy: informally, that any unilateral deviation by a single player \(i\) should not change (by more than a \( (1\pm \epsilon)\) factor) the probability that any particular profile of \(n-1\) actions is suggested to the \(n-1\) players \(j \neq i\).
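
In symbols (this is essentially the "joint differential privacy" condition from the paper, though I'm paraphrasing the quantifiers here): writing \(\mathcal{M}\) for the randomized proxy and \(\mathcal{M}(t)_{-i}\) for the profile of actions it suggests to everyone other than player \(i\), we ask that for every player \(i\), every pair of reports \(t_i, t_i'\), every profile \(t_{-i}\) of the other players' reports, and every set \(A\) of action profiles for the players \(j \neq i\),

\[ \Pr\big[\mathcal{M}(t_i, t_{-i})_{-i} \in A\big] \;\le\; e^{\epsilon} \cdot \Pr\big[\mathcal{M}(t_i', t_{-i})_{-i} \in A\big]. \]

For small \(\epsilon\), \(e^{\epsilon} \approx 1 + \epsilon\), which is the \((1 \pm \epsilon)\) factor above. Note that the condition constrains only the suggestions made to the other players; the proxy remains free to tailor player \(i\)'s own suggestion to her report, which is what lets it compute a nontrivial equilibrium at all.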

And in fact it is possible to implement this plan, at least for certain classes of games. We study the class of "large games", which, informally, are \(n\)-player games in which the effect of any player \(i\)'s action on the utility of any player \(j \neq i\) is bounded by some quantity that is diminishing in \(n\). The Beach/Mountain example is of this type, as are many large-population games. Our main technical result is an algorithm satisfying the required differential privacy stability condition, which computes a correlated equilibrium of any large game. The upshot is that in large games, it is possible to design a very weak proxy (one that doesn't have the power to make payments, force players to opt in, or enforce actions once they do opt in) that implements an \(\eta\)-approximate correlated equilibrium of the complete information game as an \(\eta\)-approximate Bayes-Nash equilibrium of the partial information game, no matter what the prior distribution on agent types is. Here \(\eta\) is an approximation parameter tending to zero in the size of the game -- i.e., as the game grows large, the equilibria become exact.
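
One standard way to make "large" precise (my phrasing; the exact constants in the paper may differ): call a game \(\lambda\)-large if no single opponent's change of action can move any other player's utility by more than \(\lambda\), i.e., for all players \(i \neq j\), all action profiles \(a\), and all deviations \(a_j'\),

\[ \big| u_i(a_j, a_{-j}) - u_i(a_j', a_{-j}) \big| \;\le\; \lambda, \]

with \(\lambda = O(1/n)\) the prototypical case. The Beach/Mountain game, for instance, is \(10/n\)-large: changing one other player's destination moves the fraction \(p\) by exactly \(1/n\), and hence anyone else's utility by at most \(10/n\).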

To emphasize the purely game theoretic aspects of this problem, I have ignored the fact that differential privacy of course also provides a good guarantee of "privacy". Aside from straightforward incentive issues, there are other reasons why players might be reluctant to announce their types and participate in a complete information game: perhaps their types are valuable trade secrets, or would be embarrassing admissions. Because our solution also happens to provide differential privacy, it is able to implement an equilibrium of the complete information game, while maintaining the privacy properties inherent in a game of incomplete information.

To conclude: differential privacy has thus far been primarily a problem-driven field that has borrowed techniques from many other areas to solve problems in private data analysis. But in the process, it has also built up a strong conceptual and technical toolkit for thinking about the stability of randomized algorithms to small changes in their inputs. I hope that this and future posts serve to convince you that there is something to be gained by borrowing the tools of differential privacy and applying them to solve problems in seemingly unrelated fields.

