Wednesday, January 10, 2018

Fairness and The Problem with Exploration: A Smoothed Analysis of the Greedy Algorithm for the Linear Contextual Bandit Problem

Bandit Problems

"Bandit problems" are a common abstraction in machine learning. The name is supposed to evoke the image of slot machines, which are also known as "One-armed bandits" (or so I am told... Somehow nobody speaks like this in the circles I run in.) In the classic formulation, you imagine yourself standing in front of a bank of slot machines, each of which is different and might have a different distribution on payoffs. You can sequentially decide which machine's arm to pull: when you pull the arm, you get a sample from that machine's reward distribution. Your goal is to pull the arms in such a way so that your average reward approaches what it would have been had you played the optimal strategy in hindsight: i.e. always pulled the arm of the machine that had the highest mean reward. The problem is tricky because you don't know the reward distributions to start. The key feature of the problem is that in order to observe a sample from the reward distribution of a particular machine, you actually have to pull that arm and spend a turn to do it.

There are more complicated variants of this kind of problem, in which your choices may vary from round to round, but have differentiating features. For example, in the linear contextual bandit problem, in every round you are presented with a number of choices, each of which is represented by some real-valued feature vector, which may be different from round to round. You then get to choose one of them. The reward you get, corresponding to the action you choose, is random --- but its expectation is equal to some unknown linear function of the action's features. The key property of the problem is that you only get to observe the reward of the action you choose. You do not get to observe the reward you would have obtained had you chosen one of the other actions --- this is what makes it a bandit problem. The goal again is to choose actions such that your average reward approaches what it would have been had you played the optimal strategy in hindsight. In this case, with full knowledge of the underlying linear functions, the optimal strategy would simply compute the expected reward of each action, and play the action with the highest expected reward (this may now be a different action at each round, since the features change). Again, the problem is tricky because you don't know the linear functions which map features to expected reward.
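As a concrete picture of a single round of this problem, here is a minimal sketch in Python. The dimension, the number of actions, the Gaussian contexts and reward noise, and the per-action parameter vectors are all made-up quantities for illustration; the only point is the feedback structure:

```python
import numpy as np

rng = np.random.default_rng(1)

d, K = 4, 3                       # feature dimension, actions per round (illustrative)
theta = rng.normal(size=(K, d))   # unknown linear function for each action

def play_one_round(choose):
    # Each round presents K actions, each described by a feature vector.
    contexts = rng.normal(size=(K, d))
    i = choose(contexts)                                 # the learner picks one action
    expected = contexts[i] @ theta[i]                    # its (unknown) mean reward
    reward = expected + rng.normal(0, 0.1)               # noisy realized reward
    best = max(contexts[j] @ theta[j] for j in range(K))
    # Bandit feedback: only the chosen action's reward is revealed,
    # never the rewards the other actions would have produced.
    return contexts, i, reward, best - expected          # last entry: per-round regret

# Example: a learner that ignores the features and picks uniformly at random.
ctx, i, r, per_round_regret = play_one_round(lambda ctxs: rng.integers(len(ctxs)))
print(i, round(r, 3), round(per_round_regret, 3))
```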

Bandit problems are characterized by the tension between exploration and exploitation. At any given moment, the algorithm has some guess as to what the best action is. (In a linear contextual bandit problem, for example, the algorithm might just run least-squares regression on the examples it has observed rewards for: the point predictions of the regression estimate on the new actions are these best guesses.) In order to be able to compete with the optimal policy, the algorithm definitely needs to (eventually) play the action it thinks is the best one, a lot of the time. This is called exploitation: the algorithm is exploiting its knowledge in order to do well. However, since the algorithm does not observe the rewards for actions it does not play, it also will generally need to explore. Consider someone in front of those slot machines: if he has played each machine once, he has some weak beliefs about the payoff distribution for each machine. But those beliefs have resulted from just a single sample, so they may well be very wrong. If he just continues forever to play the machine that has the highest empirical reward, he might play the same machine every day, and never learn more about the other machines --- even though one of them might really have turned out to have higher average reward. He has fooled himself, and because he never explores, he continues to be wrong forever. So optimal algorithms have to carefully balance exploration and exploitation in order to do well.

There are various clever schemes for trading off exploration and exploitation, but the simplest one is called epsilon-Greedy. At every round, the algorithm flips a biased coin. With probability 1-epsilon, the algorithm exploits: it just greedily chooses the action that it estimates to be best. With probability epsilon, the algorithm explores: it chooses an action uniformly at random. If you set epsilon appropriately, in many settings, you have the guarantee that the average reward of the algorithm will approach the average reward of the optimal policy, as time goes on. Fancier algorithms for bandit problems more smoothly interpolate between exploration and exploitation: but all of them inevitably have to manage this tradeoff.
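Here is a minimal sketch of epsilon-Greedy on the slot-machine version of the problem. The number of arms, the horizon, the noise level, and the fixed value of epsilon are arbitrary illustrative choices (in practice epsilon is tuned, or decayed over time):

```python
import numpy as np

rng = np.random.default_rng(2)

K, T, eps = 5, 5000, 0.05
true_means = rng.uniform(0, 1, K)        # unknown to the algorithm

counts = np.zeros(K)                     # how many times each arm was pulled
sums = np.zeros(K)                       # total observed reward per arm

for t in range(T):
    if rng.random() < eps:                               # explore with probability epsilon
        arm = int(rng.integers(K))
    else:                                                # exploit with probability 1 - epsilon
        estimates = np.where(counts > 0, sums / np.maximum(counts, 1), np.inf)
        arm = int(np.argmax(estimates))                  # untried arms look infinitely good
    reward = true_means[arm] + rng.normal(0, 0.1)        # bandit feedback: one sample
    counts[arm] += 1
    sums[arm] += reward

print("empirically best arm:", int(np.argmax(sums / np.maximum(counts, 1))),
      "| true best arm:", int(np.argmax(true_means)))
```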

Fairness Concerns

(Contextual) bandit problems are not just an interesting mathematical curiosity, but actually arise all the time in important applications. Here are just a few:
  • Lending: Banks (and increasingly algorithms) consider applicants for loans. They can look at applicant features, and observe the repayment history of applicants they grant loans to, but they cannot observe counterfactuals: would an applicant not given a loan have defaulted if he had been given the loan? 
  • Parole Decisions: Judges (and increasingly algorithms) consider inmates for parole. In part, they want to predict which inmates will go on to re-offend if released, and not release those inmates. But it is not possible to observe the counterfactual: would an inmate who was not released have gone on to commit a crime if he had been released? 
  • Predictive Policing: Police chiefs (and increasingly algorithms) consider where in their city they want to deploy their police. In part, they want to predict where crimes will occur, so they can deploy a heavier police presence in those areas. But they also disproportionately observe crimes in the areas in which police are deployed.
  • Sequential Clinical Trials: Certain kinds of drugs affect patients differently, depending on their genetic markers. But for a new drug, the relationship might be unknown. As patients show up, they can be assigned to different clinical trials, corresponding to different drugs --- but we cannot observe the counterfactual: how would a patient have reacted to a drug she was not given? 
I picked these four examples (rather than the more commonly used example of ad targeting) because in each of the above examples, the decision made by the algorithm has an important effect on someone's life. This raises issues of fairness, and as we will see, these fairness questions relate to exploration in at least two different ways. 

First, there is an issue that was recently raised by Bird, Barocas, Crawford, Diaz, and Wallach: that exploring can be unfair to the individuals who have the misfortune of being present during the exploration rounds. Consider the example of sequential clinical trials: if a patient shows up on an "exploitation" round, then she is given the treatment that is believed to be the best for her, using a best-effort estimate given the information available. But if she shows up during an "exploration" round, she will be given a random treatment --- generally, one that current evidence suggests is not the best for her. Is this fair to her? What if her life depends on the treatment? Exploration is explicitly sacrificing her well-being for the possible good of future individuals, who might be able to benefit from what we learned from the exploration. It may be that when we are dealing with important decisions about people's lives, we don't want to sacrifice the welfare of an individual for the good of others. Certainly, our notional patient would prefer not to show up on an exploration round. In various other settings, we can also imagine exploration being repugnant, even though it is in principle necessary for learning. Can we (for example) randomly release inmates on parole, even if we believe them to be high risks for committing more violent crimes?

There is another, conflicting concern: if we do not explore, we might not correctly learn about the decisions we have to make. And there are a number of reasons we might expect to see insufficient exploration. Exploration is necessary to maximize long-term reward, but decision makers might be myopic. For example, the loan officers actually making lending decisions might be more interested in maximizing their annual bonuses than maximizing the long-term profits of the bank. Even the CEO might be more interested in quarterly share prices. But myopic decision makers won't explore (tangent: we had a paper at EC thinking about how one might incentivize myopic decision makers to nevertheless be meritocratically fair). The algorithms in use in many settings might also not have been designed properly --- if our learning algorithms don't explicitly take into account the bandit nature of the problem, and just treat it as a supervised classification problem, then they won't explore --- and this kind of mistake is probably extremely common. And finally, as we argued above, we might believe that it is simply not "fair" to explore, and so we intentionally avoid it. But a lack of exploration (and the resulting failure to properly learn) can itself be a source of unfairness. The "feedback loops" that result from a lack of exploration have been identified by Lum and Isaac, and by Ensign, Friedler, Neville, Scheidegger, and Venkatasubramanian, as a primary source of unfairness in predictive policing. (See Suresh's recent blog post.) Such algorithms can overestimate crime in the poor neighborhoods in which police are deployed, and underestimate crime in the rich neighborhoods. If they don't explore to correct these misestimates, they will deploy more police to the poor neighborhoods, and fewer to the rich neighborhoods, which only exacerbates the data collection problem.

Our New Paper

These two concerns both lead us to wonder how bad things need to be if a decision maker doesn't explore, and instead simply runs a greedy algorithm that exploits at every round. Maybe this is because we believe that greedy algorithms are already being run in many cases, and we want to know how much risk we run of developing pernicious feedback loops. Or maybe we want to run a greedy algorithm by design, because we are in a medical or legal setting in which exploration is unacceptable. This is the question we consider in a new paper, joint with Sampath Kannan, Jamie Morgenstern, Bo Waggoner, and Steven Wu. (P.S. Bo is on the job market right now!)

Specifically, we consider the linear contextual bandit problem, described at the beginning of this post. A decision maker must choose amongst a set of actions every day, each of which is endowed with a vector of features. The reward distribution for each action is governed by an unknown linear function of the features. In this case, the greedy algorithm is simply the algorithm that maintains ordinary least squares regression estimates for each action, and always plays the action that maximizes its current point prediction of the reward, using its current regression estimate. 
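As a concrete sketch of that greedy algorithm, here is a minimal Python version. To keep the illustration simple it assumes i.i.d. Gaussian contexts and adds a tiny ridge term so that the per-action regressions are well defined from the first round; both choices are conveniences of this sketch, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

d, K, T = 5, 3, 2000
theta = rng.normal(size=(K, d))      # unknown: one linear reward function per action

# Per-action sufficient statistics for least squares: A[i] ~ X^T X, b[i] ~ X^T y,
# with a small regularizer so theta_hat is defined before any data is seen.
A = np.stack([1e-3 * np.eye(d) for _ in range(K)])
b = np.zeros((K, d))
total_regret = 0.0

for t in range(T):
    contexts = rng.normal(size=(K, d))                    # features of today's actions
    theta_hat = np.stack([np.linalg.solve(A[i], b[i]) for i in range(K)])
    predictions = np.einsum('kd,kd->k', theta_hat, contexts)
    i = int(np.argmax(predictions))                       # greedy: no exploration at all
    reward = contexts[i] @ theta[i] + rng.normal(0, 0.1)  # bandit feedback
    total_regret += max(contexts[j] @ theta[j] for j in range(K)) - contexts[i] @ theta[i]
    # Update only the chosen action's regression; the others go unobserved.
    A[i] += np.outer(contexts[i], contexts[i])
    b[i] += reward * contexts[i]

print(f"average per-round regret after {T} rounds: {total_regret / T:.3f}")
```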

Motivated in part by the problem with exploration in sequential drug trials, Bastani, Bayati, and Khosravi previously studied a two-action variant of this problem, and showed that when the features are drawn stochastically from a distribution satisfying certain strong assumptions, then the greedy algorithm works well: its reward approaches that of the optimal policy, without needing exploration! We take their result as inspiration, and prove a theorem of this flavor under substantially weaker conditions (although our model is slightly different, so the results are technically incomparable). 

We consider a model in which the number of actions can be arbitrary, and give a smoothed analysis. What that means is that we don't require the actions or their features to be drawn from any particular distribution. Instead, we let them be chosen by an adversary, who knows exactly how our algorithm works, and can be arbitrarily sneaky in his attempt to cause our algorithm to fail to learn. Of course, if we stopped there, then the result would be that the greedy algorithm can be made to catastrophically fail to learn with constant probability: it has long been known that exploration is necessary in the worst case. But our adversary has a slightly shaky hand. After he chooses the actions for a given round, the features of those actions are perturbed by a small amount of Gaussian noise, independently in each coordinate. What we show is that this tiny amount of noise is sufficient to cause the greedy algorithm to succeed at learning: it recovers regret bounds (regret is the difference between the cumulative reward the optimal policy would have obtained and the cumulative reward of the greedy algorithm) that are close to optimal, up to a factor polynomial in the inverse of the standard deviation of the Gaussian noise. (The regret bound must grow with the inverse of the standard deviation of the noise in any smoothed analysis, since we know that if there is no noise, the greedy algorithm doesn't work...)
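The perturbation step itself is easy to picture. Here is a hypothetical sketch, with an arbitrary sigma and a placeholder adversary of my own invention; in a simulation, the greedy learner above would simply be fed these perturbed features in place of its i.i.d. contexts:

```python
import numpy as np

rng = np.random.default_rng(4)

d, K, sigma = 5, 3, 0.1          # sigma: std. dev. of the perturbation (illustrative)

def adversary_chooses(t):
    """Placeholder for an arbitrary, possibly adaptive adversary:
    this one stubbornly replays the same fixed feature vectors."""
    return np.ones((K, d))

def perturbed_round(t):
    chosen = adversary_chooses(t)
    # The adversary's "shaky hand": independent Gaussian noise in each coordinate.
    return chosen + rng.normal(0, sigma, size=(K, d))

print(perturbed_round(0).round(2))   # what the greedy learner actually sees
```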

The story is actually more nuanced: we consider two different models, derive qualitatively different bounds in each model, and prove a separation: see the paper for details. But the punchline is that, at least in linear settings, what we can show is that "generically" (i.e. in the presence of small perturbations), fast learning is possible in bandit settings, even without exploration. This can be taken to mean that in settings in which exploration is repugnant, we don't have to do it --- and that perhaps pernicious feedback loops that can arise when we fail to explore shouldn't be expected to persist for a long time, if the features we observe are a little noisy. Of course, everything we prove is in a linear setting. We still don't know the extent to which these kinds of smoothed analyses carry over into non-linear settings. Understanding the necessity of exploration in more complex settings seems like an important problem with real policy implications.  

