Friday, February 21, 2014

Lecture 6 -- Implementing Nash equilibria ex-post in incomplete information games (Part 2)

In Lecture 6 of our Privacy and Mechanism Design course, we continue our study of the use of differential privacy to implement equilibria of complete information games ex post in settings of incomplete information augmented with an extremely weak "mediator".

Last class, we proved that to accomplish this task, it would be sufficient to design an algorithm that:

  1. Satisfies (joint) differential privacy, and
  2. Computes an (asymptotic) Nash equilibrium of the complete information game induced by the reported types. 
We also took a brief detour into private algorithmic techniques, and learned how to privately maintain a count on a stream of numbers. 

This class, we partially implement this agenda. We define large games and congestion games, and give an algorithm for privately computing Nash equilibria in large congestion games, which uses our newly learned ability to count. We conclude that in such games, with the help of a weak mediator, we can enjoy the nice properties of the pure strategy Nash equilibria of the complete information game even in incomplete information settings, where player types may be arbitrary (i.e. not necessarily drawn from any prior distribution). 
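To give a flavor of how counting comes in, here is a toy Python sketch (my own illustration, not the algorithm from the notes): best-response dynamics in a singleton congestion game where players only ever see noisy facility loads. In the actual construction the loads are maintained by the private streaming counters from last lecture, and the noise and the number of best-response moves are calibrated so that the whole computation is jointly differentially private; none of that accounting appears below.

    import random

    def noisy_best_response(n_players, n_facilities, rounds=200, noise=1.0, seed=0):
        """Toy sketch: best-response dynamics in a singleton congestion game,
        driven by noisy facility loads.  Each player uses exactly one facility,
        and a facility's cost is its load divided by n, so the game is "large":
        a single player barely moves anyone else's cost.  The Laplace noise
        below is a stand-in for the private counters from the lecture."""
        rng = random.Random(seed)
        choice = [rng.randrange(n_facilities) for _ in range(n_players)]
        load = [0] * n_facilities
        for f in choice:
            load[f] += 1
        for _ in range(rounds):
            moved = False
            for p in range(n_players):
                # Noisy view of the loads (Laplace noise of scale `noise`).
                noisy = [load[f] + rng.expovariate(1.0 / noise)
                         - rng.expovariate(1.0 / noise) for f in range(n_facilities)]
                best = noisy.index(min(noisy))
                # Move only if it looks strictly better, counting yourself at the target.
                if noisy[best] + 1 < noisy[choice[p]]:
                    load[choice[p]] -= 1
                    load[best] += 1
                    choice[p] = best
                    moved = True
            if not moved:   # no one wants to move: an approximate equilibrium
                break
        return choice, load

    print(noisy_best_response(n_players=1000, n_facilities=5)[1])  # roughly balanced loads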

There remains one more lecture in this series -- we will show that if we strengthen the power of our weak mediators somewhat, by giving them the ability to verify reported player types, then we can extend these results to arbitrary large games (i.e. not necessarily congestion games) by privately computing correlated equilibria. 

This last part will have to wait a couple of weeks though. Next week, we will have Jamie Morgenstern giving us a guest lecture on her very interesting recent work, joint with Avrim Blum, Ankit Sharma, and Adam Smith. 

Sunday, February 16, 2014

Lecture 4 -- The Empirical Implications of Privacy Aware Choice

Just barely out of order, the notes for Lecture 4 of our privacy and mechanism design class are now online.

In this lecture, Rachel Cummings told us about her very interesting work with Federico Echenique and Adam Wierman, studying what implications preferences for privacy might have in the revealed preferences setting.

The idea, in a nutshell, is this: suppose some entity is observing all of your purchase decisions online, and trying to deduce from them what your utility function over goods is. For example, if it observes you purchasing beer when you could have had milk for the same price, it will deduce that you strictly prefer beer to milk.

Now suppose that you understand that you are being observed, and you have preferences not only over the goods that you buy, but also over the set of deductions that the observer is going to make about you. Given a set of prices and a budget, you will choose which bundle of goods to buy to optimize this tradeoff between your utility for the bundle, and your cost for what is learned about you by the observer. As always, however, the observer only gets to see what you buy.
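In rough symbols (my own paraphrase, not the paper's exact notation): write D for the purchase data the observer has already seen, R(.) for the set of preference orderings over goods consistent with a data set, p_t for the prices at time t, and w for the budget. A privacy-aware consumer then chooses

    x_t \in \arg\max_{x : p_t \cdot x \le w} U\big(x, \; R(D \cup \{(p_t, x)\})\big),

so utility depends both on the bundle bought and on what the observer can now infer. The standard, privacy-indifferent consumer is the special case in which U ignores its second argument.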

Can the observer tell whether or not you care about privacy? Can he deduce what your preferences over goods are? Can he prove that you are not buying goods totally at random?

One of the main results discussed in this lecture is that for sufficiently general forms of the preferences that agents may hold, the answer to all of these questions is "no". Specifically, all sets of observations are rationalizable by some privacy-aware utility function.

Friday, February 14, 2014

Lecture 5 -- Implementing Nash equilibria ex-post in incomplete information games (Part 1)

Welcome back (after a brief hiatus) to the privacy and mechanism design course. Last week we had a guest lecture by Rachel Cummings about her work on the Empirical Implications of Privacy Aware Choice.  We will have notes and a blog post about that soon, but for now, welcome to Lecture 5!

In this lecture, we consider the following problem. Suppose we have an n player game. We might want to implement a Nash equilibrium of the complete information game (such equilibria often have nice properties), but we naturally find ourselves in the common situation in which the n players do not know each other's types. Is there a modification of the game that can implement the complete-information game equilibrium as a simple ex-post Nash equilibrium, which does not require that players know anything about each other's types, and which requires no coordination?

The modification of the game we consider is the introduction of a very weak "mediator", which essentially only has the power to listen and to give advice, but not to verify or to enforce. Players will have the option of reporting their type to the mediator, in which case it will suggest an action for them to play. However, they can:

  1. Completely ignore the mediator and choose an action to play on their own,
  2. Lie to the mediator about their types, and/or
  3. Use the mediator's suggestion however they like, not necessarily following it (i.e. the mediator cannot enforce its suggestion).
What we would like is for all players to follow "good behavior" (sketched below), which involves:

  1. Truthfully reporting their type to the mediator, and then
  2. Faithfully following the mediator's suggestion.
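Here is a minimal Python sketch of that interaction (the function name and signature are mine, and the equilibrium computation is just a placeholder for the private computation discussed below):

    def mediated_play(true_types, reports, compute_equilibrium, strategies):
        """Sketch of the weak mediator: it only listens and advises.  It collects
        whatever type reports it receives (players may opt out by reporting None,
        or lie), computes an equilibrium of the complete information game defined
        by the REPORTED types, and hands each participating player a suggested
        action.  Each player then plays whatever their own strategy dictates, so
        nothing is enforced.  "Good behavior" = report truthfully, then play the
        suggestion as given."""
        participants = {i: r for i, r in reports.items() if r is not None}
        suggestions = compute_equilibrium(participants)   # player -> suggested action
        return {i: strategies[i](suggestions.get(i)) for i in true_types}

    # Tiny usage example; the "equilibrium" computation here is a stand-in that
    # just maps each reported type to a canonical action.
    follow = lambda suggestion: suggestion      # good behavior: play the suggestion
    actions = mediated_play(
        true_types={1: "A", 2: "A", 3: "B"},
        reports={1: "A", 2: "A", 3: None},      # player 3 ignores the mediator entirely
        compute_equilibrium=lambda types: {i: "action-for-" + t for i, t in types.items()},
        strategies={1: follow, 2: follow, 3: lambda s: "solo-choice"},
    )
    print(actions)   # {1: 'action-for-A', 2: 'action-for-A', 3: 'solo-choice'}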

We prove that if the mediator offers its suggestions by computing a Nash equilibrium of the complete information game defined by the players' reported types, and satisfies (a variant of) differential privacy, then it makes "good behavior" an asymptotic ex-post Nash equilibrium of the incomplete information game. Moreover, in this ex-post equilibrium, players end up playing a Nash equilibrium of the original complete information game (the one defined by the realized player types).
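Very roughly, in symbols (my paraphrase; the exact constants and the precise privacy variant are in the notes): if the mediator's suggestion algorithm is epsilon-jointly differentially private in the reported types, always outputs an alpha-approximate Nash equilibrium of the complete information game defined by those reports, and utilities are normalized to [0,1], then good behavior is an eta-approximate ex-post Nash equilibrium with

    \eta = O(\alpha + \epsilon),

and under good behavior the realized play is itself an alpha-approximate Nash equilibrium of the true complete information game.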

This leaves us with the minor task of actually figuring out how to compute a Nash equilibrium privately. Here, we realize that thus far in the class, we have been skating by knowing nothing about private algorithm design except for the exponential mechanism. So we take a detour to learn some more basics. We get as far as learning how to count, which turns out to be enough. This will set us up for next lecture, when we can actually implement this agenda and design a mediator that satisfies the hypotheses we need.
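For concreteness, here is a toy Python sketch of one standard way to do this kind of counting -- the tree-based "binary mechanism", which keeps Laplace-noised partial sums over dyadic intervals so that the entire sequence of running counts is differentially private while the error grows only polylogarithmically in the stream length. (This is my own illustration of the technique; the class name, noise scale, and other details are not taken from the lecture notes.)

    import math
    import random

    def laplace(scale):
        """One draw from Laplace(0, scale), as a difference of two exponentials."""
        return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

    class BinaryCounter:
        """Tree-based ("binary mechanism") counter for a stream of bits in {0,1}.
        It maintains noisy partial sums over dyadic intervals; each bit touches at
        most `levels` partial sums, so Laplace noise of scale levels/epsilon on each
        makes the whole sequence of released counts epsilon-differentially private,
        with error roughly polylog(T)/epsilon."""

        def __init__(self, T, epsilon):
            self.levels = max(1, math.ceil(math.log2(T))) + 1
            self.scale = self.levels / epsilon
            self.alpha = [0.0] * self.levels       # exact partial sums, one per level
            self.alpha_hat = [0.0] * self.levels   # noisy partial sums, one per level
            self.t = 0

        def feed(self, bit):
            """Consume the next bit and return the current noisy running count."""
            self.t += 1
            i = (self.t & -self.t).bit_length() - 1          # lowest set bit of t
            self.alpha[i] = sum(self.alpha[:i]) + bit        # close the level-i block
            for j in range(i):                               # lower blocks are absorbed
                self.alpha[j] = self.alpha_hat[j] = 0.0
            self.alpha_hat[i] = self.alpha[i] + laplace(self.scale)
            # The count at time t is the sum of noisy blocks for the set bits of t.
            return sum(self.alpha_hat[j] for j in range(self.levels) if (self.t >> j) & 1)

    # Usage: stream in T random bits and compare the noisy count to the truth.
    T, epsilon = 1024, 1.0
    counter, truth = BinaryCounter(T, epsilon), 0
    for _ in range(T):
        b = random.randint(0, 1)
        truth += b
        released = counter.feed(b)
    print(truth, round(released, 1))

The point for us: with a counter like this we can privately track quantities such as "how many players are currently using a given facility" as play evolves, which is exactly the kind of statistic the equilibrium computation in the next lecture needs.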

(The idea of designing a mediator by privately computing an equilibrium comes from joint work with Michael Kearns, Mallesh Pai, and Jon Ullman, which shows how to privately compute correlated equilibria in arbitrary large games. We will cover this later. The implication in this lecture about mediators that privately compute Nash equilibria comes from joint work with Ryan Rogers, which shows how to privately compute Nash equilibria in large congestion games.)