Friday, November 28, 2008

Incentives, computation, and privacy

What do computer scientists bring to the study of game theory? One thing is simply a computer-science style of analysis: worst-case inputs, competitive ratios, approximation factors, etc. Perhaps the more interesting contribution, however, is the study of how incentive constraints and computation interact: since we want (in principle) to actually implement any mechanism we might dream up, it is subject not only to traditional game-theoretic constraints such as truthfulness, but also to standard computational limitations.

One fundamental question is: "What can be computed by algorithms that satisfy constraint X?", and it is natural to substitute "incentive compatibility" for "constraint X". An interesting study along these lines appeared at the last FOCS: "On the Hardness of Being Truthful" by Papadimitriou, Schapira, and Singer. This paper gives an example of a natural NP-hard problem that can be well approximated efficiently, and that can be implemented truthfully, but that cannot be efficiently well approximated by any truthful implementation: that is, they show that for this problem, truthfulness and computational limitations together restrict what is algorithmically possible, although neither limitation is insurmountable on its own.

They call the problem the "Combinatorial Public Projects Problem" (CPPP), although it is really just submodular maximization. In the CPPP, there are n agents and m potential public goods. A mechanism for the CPP problem will select a subset of k of these goods. The agents have opinions about which subsets are best: each agent i has a submodular valuation function f_i over sets of goods. The mechanism wishes to maximize the social welfare: it wants to pick the set of k goods that maximizes the sum of the agents' valuation functions. We'll call this sum F. Notice that since each f_i is submodular, F is submodular as well, and so the problem the mechanism faces is simply submodular maximization; it is easy to show that the greedy algorithm (which selects at each stage the good with the largest marginal utility, until k goods are selected) achieves a (1-1/e) approximation. Not bad! But also not truthful: agents may have an incentive to misreport their valuations. What's more, the classic VCG mechanism achieves optimal social welfare and enforces truthfulness -- but of course it cannot be implemented efficiently, since it involves solving an NP-hard problem exactly. What Papadimitriou, Schapira, and Singer show is that no efficient truthful algorithm can guarantee social welfare within any polynomial factor better than Sqrt(m) -- a huge gap from what is possible under either constraint alone.
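To make the greedy procedure concrete, here is a minimal Python sketch. The names (greedy_select, goods, valuations) and the representation of valuations as plain set functions are my own illustration, not anything from the paper; it assumes the f_i are monotone submodular.

    import math

    def greedy_select(goods, k, valuations):
        # Greedy submodular maximization: at each step, add the good with
        # the largest marginal gain in total welfare F = sum_i f_i.
        # For monotone submodular F this gives a (1 - 1/e) approximation.
        chosen = set()
        F = lambda S: sum(f(S) for f in valuations)
        for _ in range(k):
            best_good, best_gain = None, -math.inf
            for g in goods:
                if g in chosen:
                    continue
                gain = F(chosen | {g}) - F(chosen)
                if gain > best_gain:
                    best_good, best_gain = g, gain
            chosen.add(best_good)
        return chosen

    # Toy example: coverage valuations (a classic submodular function) --
    # agent i values a set of goods by how many of her desired items it covers.
    desired = [{0, 1}, {1, 2}, {2, 3}]
    valuations = [lambda S, d=d: len(S & d) for d in desired]
    print(greedy_select(goods=range(4), k=2, valuations=valuations))

Note that nothing in this sketch depends on the reports being honest, which is precisely the problem: an agent who exaggerates her marginal values can steer the greedy choices.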

Now whenever you see a striking impossibility result like this, it's tempting to ask how resilient it is. Does it still hold if no single player accounts for a large fraction of the total social welfare? What if we relax truthfulness to approximate truthfulness, so that players can perhaps benefit by lying, but cannot gain more than some small epsilon?

It turns out this particular problem is no longer so hard when we relax some of our constraints. Anupam Gupta, Katrina Ligett, Frank McSherry, Kunal Talwar, and I have a mostly unrelated paper called "Differentially Private Approximation Algorithms". In it we also ask "What can be computed by algorithms that satisfy constraint X?", but here constraint X is differential privacy. We get mostly positive results: many NP-hard problems, including the CPPP, can be well approximated efficiently by privacy-preserving algorithms. So what does this have to do with truthfulness?

In their 2007 FOCS paper "Mechanism Design via Differential Privacy", McSherry and Talwar observed that mechanisms satisfying differential privacy are also approximately truthful: privacy guarantees that any single player who changes his input changes the probability of any output by at most a (1+epsilon) factor; therefore, he cannot increase his expected utility by more than a (1+epsilon) factor by deviating from a truthful report. In general, let's say that a mechanism satisfies gamma-truthfulness if no agent can gain more than an additive gamma by deviating from truthful behavior. So suppose in the CPP problem we scale every agent's valuation function so that her maximum valuation is 1. As a corollary of one of our approximation results, we show that for any gamma, there exists an efficient, gamma-truthful mechanism that achieves an additive O(k log m * log(1/gamma)/gamma) approximation guarantee (contrast this with the impossibility of doing better than a multiplicative Sqrt(m) factor for 0-truthful mechanisms). In particular, on instances where OPT = Ω(k log m * log(1/gamma)/gamma), this is an approximately truthful constant-factor approximation algorithm.
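To give a flavor of how a private (and hence approximately truthful) mechanism for this problem might look, here is a toy Python sketch in the spirit of McSherry and Talwar's exponential mechanism, run greedily: at each of the k steps, rather than deterministically taking the best good, we sample a good with probability proportional to exp of a scaled marginal gain. This is only an illustration under assumed scaling (valuations in [0,1], privacy budget split evenly across rounds), not the exact mechanism from our paper.

    import math
    import random

    def private_greedy(goods, k, valuations, eps):
        # Exponential-mechanism-style greedy: sample each of the k goods with
        # probability proportional to exp((eps / (2k)) * marginal gain).
        # With valuations scaled into [0, 1], one agent's report changes each
        # gain by at most 1, so each round is (roughly) eps/k-differentially
        # private and the whole run is (roughly) eps-private.
        chosen = set()
        F = lambda S: sum(f(S) for f in valuations)
        for _ in range(k):
            candidates = [g for g in goods if g not in chosen]
            gains = [F(chosen | {g}) - F(chosen) for g in candidates]
            weights = [math.exp(eps * gain / (2 * k)) for gain in gains]
            chosen.add(random.choices(candidates, weights=weights)[0])
        return chosen

The randomness is exactly where the approximate truthfulness comes from: a single agent's misreport shifts each sampling weight by at most a small multiplicative factor, so her expected benefit from lying is correspondingly small, at the price of the additive loss in social welfare described above.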

I think that computer scientists are just starting to scratch the surface of the interplay between game theory and computation. It seems that there is a rich theory of incentive compatible computation waiting to be discovered, and that algorithmic game theory will prove to fit in nicely with the rest of theoretical computer science as a discipline that studies the fundamental abilities and limits of computation.
