Just barely out of order, the notes for Lecture 4 of our privacy and mechanism design class are now online.
In this lecture, Rachel Cummings told us about her very interesting work with Federico Echenique and Adam Wierman, studying what implications preferences for privacy might have in the revealed preferences setting.
The idea, in a nutshell, is this: suppose some entity is observing all of your purchase decisions online, and trying to deduce from them what your utility function over goods is. For example, if it observes you purchasing beer when you could have had milk for the same price, it will deduce that you strictly prefer beer to milk.
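To make the observer's deduction concrete, here is a minimal sketch in Python. The goods, data, and helper names are all illustrative assumptions of mine, not anything from the paper: a chosen bundle is directly revealed preferred to any other observed bundle that was affordable at the prices in force when the choice was made.

```python
# Illustrative sketch of the observer's deduction; the goods, data, and
# helper names here are hypothetical, not taken from the paper.

# Each observation is (price vector, chosen bundle) over (beer, milk).
observations = [
    ((3.0, 3.0), (1.0, 0.0)),  # beer and milk cost the same; the agent buys beer
    ((2.0, 3.0), (0.0, 1.0)),  # beer is strictly cheaper; the agent buys milk
]

def cost(prices, bundle):
    """Total expenditure on a bundle at the given prices."""
    return sum(p * q for p, q in zip(prices, bundle))

# A chosen bundle is directly revealed preferred to any other observed
# bundle that was affordable at the prices in force when it was chosen.
revealed = set()
for prices, chosen in observations:
    for _, other in observations:
        if other != chosen and cost(prices, other) <= cost(prices, chosen):
            revealed.add((chosen, other))

for better, worse in sorted(revealed):
    print(better, "revealed preferred to", worse)

# Note: these two observations form a revealed-preference cycle, which a
# classical (privacy-oblivious) utility maximizer cannot generate; the
# result discussed below says privacy-aware preferences can.
```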
Now suppose that you know you are being observed, and you have preferences not only over the goods that you buy, but also over the deductions that the observer will make about you. Given a set of prices and a budget, you choose which bundle of goods to buy to optimize the tradeoff between your utility for the bundle and the cost of what the observer learns about you. As always, however, the observer only gets to see what you buy.
Can the observer tell whether or not you care about privacy? Can he deduce what your preferences over goods are? Can he prove that you are not buying goods totally at random?
One of the main results discussed in this lecture is that for sufficiently general forms of the preferences that agents may hold, the answer to all of these questions is "no". Specifically, all sets of observations are rationalizable by some privacy-aware utility function.
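To pin this down a bit, what follows is my rough paraphrase of the setup, with budgets normalized to observed expenditure; the precise definitions are in the lecture notes.

```latex
% Rough paraphrase of the model and result; the precise definitions are
% in the lecture notes. Budgets are normalized to observed expenditure.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
A dataset is a finite set of observations $\{(p_t, x_t)\}_{t=1}^{T}$,
where $x_t$ is the bundle chosen at prices $p_t$. A privacy-aware agent
has utility $u$ over pairs $(x, \succeq)$, where $\succeq$ is the
preference relation the observer would infer upon seeing the choice $x$,
and at each observation solves
\[
  x_t \in \operatorname*{arg\,max}_{x \,:\, p_t \cdot x \le p_t \cdot x_t}
  u\bigl(x, \succeq(x)\bigr).
\]
The rationalizability result: \emph{every} dataset, including ones that
violate the classical revealed-preference axioms, is generated by some
privacy-aware $u$.
\end{document}
```

In other words, the class of privacy-aware preferences is rich enough that the model places no testable restrictions on observed behavior, which is why the observer's questions above all get the answer "no".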