In the third lecture of our course on privacy and mechanism design, we go back to using differential privacy as a tool in mechanism design, and cover a remarkable result of Nissim, Smorodinsky, and Tennenholtz.
Traditionally in mechanism design, social welfare holds a special place in the pantheon of objective functions: in any social choice setting, we can use the VCG mechanism to exactly optimize for social welfare while guaranteeing dominant strategy truthfulness. In general, it is not possible to truthfully optimize other objective functions -- and even for social welfare, the VCG mechanism requires the ability to charge payments. It is the rare setting in which we can achieve some objective truthfully without payments, and rarer still when that objective is not social welfare.
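To make the VCG guarantee concrete, here is a minimal sketch of VCG in a finite social choice setting: pick the outcome maximizing reported welfare, and charge each agent the externality it imposes via the Clarke pivot rule. The function name and representation (one dict of outcome values per agent) are illustrative, not from the lecture.

```python
def vcg(reports):
    """reports: list of dicts mapping outcome -> reported value.

    Returns the welfare-maximizing outcome and the Clarke-pivot
    payment charged to each agent.
    """
    outcomes = list(reports[0].keys())
    welfare = lambda o, agents: sum(r[o] for r in agents)

    # Choose the outcome maximizing reported social welfare.
    chosen = max(outcomes, key=lambda o: welfare(o, reports))

    payments = []
    for i in range(len(reports)):
        others = reports[:i] + reports[i + 1:]
        # Clarke pivot: the best welfare the others could achieve
        # without agent i, minus their welfare at the chosen outcome.
        best_without_i = max(welfare(o, others) for o in outcomes)
        payments.append(best_without_i - welfare(chosen, others))
    return chosen, payments
```

Because each agent's payment depends only on the others' reports, reporting truthfully is a dominant strategy; but note that the mechanism is useless without the ability to actually collect those payments.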
In today's lecture, we give a general mechanism that, in any social choice setting (satisfying certain technical conditions), approximately optimizes any low-sensitivity objective function (social welfare satisfies this condition, but so do many others). Moreover, this mechanism is strictly dominant strategy truthful, and crucially does not require the use of payments. Of course, we can't get around classical impossibility results for free -- the tradeoff, as always, is that we only optimize our objective approximately, up to some additive loss. However, in large economies, this loss often becomes negligible as the size of the population grows.
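The standard differentially private tool for approximately optimizing a low-sensitivity objective is the exponential mechanism of McSherry and Talwar, sketched below; whether the lecture's construction uses exactly this primitive is an assumption here, and all names are illustrative. The additive loss is on the order of (sensitivity/epsilon) * log(number of outcomes), which is negligible when the objective (like social welfare) grows with the population.

```python
import math
import random

def exponential_mechanism(database, outcomes, score, sensitivity, epsilon):
    """Sample an outcome with probability proportional to
    exp(epsilon * score / (2 * sensitivity)).

    With high probability the sampled outcome's score is within an
    additive O((sensitivity / epsilon) * log(len(outcomes))) of the
    best achievable score -- the kind of additive loss discussed above.
    """
    weights = [math.exp(epsilon * score(database, o) / (2.0 * sensitivity))
               for o in outcomes]
    # Sample proportionally to the weights.
    r = random.random() * sum(weights)
    for o, w in zip(outcomes, weights):
        r -= w
        if r <= 0:
            return o
    return outcomes[-1]
```

Low sensitivity is what makes this work: because no single agent's report can move any outcome's score by more than the sensitivity, no agent can appreciably shift the output distribution by misreporting.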
After class, Rachel Cummings will give our theory seminar on her own work in privacy and mechanism design: The Empirical Implications of Privacy Aware Choice, which is joint work with Federico Echenique and Adam Wierman, both from Caltech. Be there or be square!