bad because you are certain that you have done your best to represent the values of the securities as objectively as possible.

Moreover, you aren’t dealing with real cash; you are only playing with numbers that are many steps removed from cash. Their abstractness allows you to view your actions more as a game, and not as something that actually affects people’s homes, livelihoods, and retirement accounts. You are also not alone. You realize that the smart financial engineers in the offices next to yours are behaving more or less the same way as you, and when you compare your evaluations to theirs, you realize that a few of your coworkers have chosen even more extreme values than yours. Believing that you are a rational creature, and believing that the market is always correct, you are even more inclined to accept what you’re doing—and what everyone else is doing (we’ll learn more about this in chapter 8)—as the right way to go. Right?

Of course, none of this is actually okay (remember the financial crisis of 2008?), but given the amount of money involved, it feels natural to fudge things a bit. And it’s perfectly human to behave this way. Your actions are highly problematic, but you don’t see them as such. After all, your conflicts of interest are supported by the facts that you’re not dealing with real money; that the financial instruments are mind-bogglingly complex; and that every one of your colleagues is doing the same thing.

The riveting (and awfully distressing) Academy Award–winning documentary Inside Job shows in detail how the financial services industry corrupted the U.S. government, leading to a lack of oversight on Wall Street and to the financial meltdown of 2008. The film also describes how the financial services industry paid leading academics (deans, heads of departments, university professors) to write expert reports in the service of the financial industry and Wall Street. If you watch the film, you will most likely feel puzzled by the ease with which academic experts seemed to sell out, and think that you would never do the same.

But before you put a guarantee on your own standards of morality, imagine that I (or you) were paid a great deal to be on Giantbank’s audit committee. With a large part of my income depending on Giantbank’s success, I would probably not be as critical as I am currently about the bank’s actions. With a hefty enough incentive I might not, for example, repeatedly say that investments must be transparent and clear and that companies need to work hard to try to overcome their conflicts of interest. Of course, I’ve yet to be on such a committee, so for now it’s easy for me to think that many of the actions of the banks have been reprehensible.

Academics Are Conflicted Too

When I reflect on the ubiquity of conflicts of interest and how impossible they are to recognize in our own lives, I have to acknowledge that I’m susceptible to them as well.

We academics are sometimes called upon to use our knowledge as consultants and expert witnesses. Shortly after I got my first academic job, I was invited by a large law firm to be an expert witness. I knew that some of my more established colleagues provided expert testimonials as a regular side job for which they were paid handsomely (though they all insisted that they didn’t do it for the money). Out of curiosity, I asked to see the transcripts of some of their old cases, and when they showed me a few I was surprised to discover how one-sided their use of the research findings was. I was also somewhat shocked to see how derogatory they were in their reports about the opinions and qualifications of the expert witnesses representing the other side—who in most cases were also respectable academics.

Even so, I decided to try it out (not for the money, of course), and I was paid quite a bit to give my expert opinion.* Very early in the case I realized that the lawyers I was working with were trying to plant ideas in my mind that would buttress their case. They did not do it forcefully or by saying that certain things would be good for their clients. Instead, they asked me to describe all the research that was relevant to the case. They suggested that some of the less favorable findings for their position might have some methodological flaws and that the research supporting their view was very important and well done. They also paid me warm compliments each time that I interpreted research in a way that was useful to them. After a few weeks, I discovered that I rather quickly adopted the viewpoint of those who were paying me. The whole experience made me doubt whether it’s at all possible to be objective when one is paid for his or her opinion. (And now that I am writing about my lack of objectivity, I am sure that no one will ever ask me to be an expert witness again—and maybe that’s a good thing.)

The Drunk Man and the Data Point

I had one other experience that made me realize the dangers of conflicts of interest; this time it was in my own research. At the time, my friends at Harvard were kind enough to let me use their behavioral lab to conduct experiments. I was particularly interested in using their facility because they recruited residents from the surrounding area rather than relying only on students.

One particular week, I was testing an experiment on decision making, and, as is usually the case, I predicted that the performance level in one of the conditions would be much higher than the performance level in the other condition. That was basically what the results showed—aside from one person. This person was in the condition I expected to perform best, but his performance was much worse than everyone else’s. It was very annoying. As I examined his data more closely, I discovered that he was about twenty years older than everyone else in the study. I also remembered that there was one older fellow who was incredibly drunk when he came to the lab.

The moment I discovered that the offending participant was drunk, I realized that I should have excluded his data in the first place, given that his decision-making ability was clearly compromised. So I threw out his data, and instantly the results looked beautiful—showing exactly what I expected them to show. But, a few days later I began thinking about the process by which I decided to eliminate the drunk guy. I asked myself: what would have happened if this fellow had been in the other condition—the one I expected to do worse? If that had been the case, I probably would not have noticed his individual responses to start with. And if I had, I probably would not have even considered excluding his data.

In the aftermath of the experiment, I could easily have told myself a story that would excuse me from using the drunk guy’s data. But what if he hadn’t been drunk? What if he had some other kind of impairment that had nothing to do with drinking? Would I have invented another excuse or logical argument to justify excluding his data? As we will see in chapter 7, “Creativity and Dishonesty,” creativity can help us justify following our selfish motives while still thinking of ourselves as honest people.

I decided to do two things. First, I reran the experiment to double-check the results, which worked out beautifully. Then I decided it was okay to create standards for excluding participants from an experiment (that is, we wouldn’t test drunks or people who couldn’t understand the instructions). But the rules for exclusion have to be made up front, before the experiment takes place, and definitely not after looking at the data.

What did I learn? When I was deciding to exclude the drunk man’s data, I honestly believed I was doing so in the name of science—as if I were heroically fighting to clear the data so that the truth could emerge. It didn’t occur to me that I might be doing it for my own self-interest, but I clearly had another motivation: to find the results I was expecting. More generally, I learned—again—about the importance of establishing rules that can safeguard ourselves from ourselves.

Disclosure: A Panacea?

So what is the best way to deal with conflicts of interest? For most people, “full disclosure” springs to mind. Following the same logic as “sunshine policies,” the basic assumption underlying disclosure is that as long as people publicly declare exactly what they are doing, all will be well. If professionals were to simply make their incentives clear and known to their clients, so the thinking goes, the clients can then decide for themselves how much to rely on their (biased) advice and then make more informed decisions.
