If full disclosure were the rule of the land, doctors would inform their patients when they own the equipment required for the treatments they recommend, or when they are paid to consult for the manufacturer of the drugs they are about to prescribe. Financial advisers would inform their clients about all the different fees, payments, and commissions they get from various vendors and investment houses. With that information in hand, consumers should be able to appropriately discount the opinions of those professionals and make better decisions. In theory, disclosure seems to be a fantastic solution; it both exonerates the professionals who acknowledge their conflicts of interest and gives their clients a better sense of where their information is coming from.
HOWEVER, IT TURNS out that disclosure is not always an effective cure for conflicts of interest. In fact, disclosure can sometimes make things worse. To explain how, allow me to run you through a study conducted by Daylian Cain (a professor at Yale University), George Loewenstein (a professor at Carnegie Mellon University), and Don Moore (a professor at the University of California, Berkeley). In this experiment, participants played a game in one of two roles. (By the way, what researchers call a “game” is not what any reasonable kid would consider a game.) Some of the participants played the role of estimators: their task was to guess the total amount of money in a large jar full of loose change as accurately as possible. These players were paid according to how close their guess was to the real value of the money in the jar. The closer their estimates were, the more money they received, and it didn’t matter if they missed by overestimating or underestimating the true value.
The other participants played the role of advisers, and their task was to advise the estimators on their guesses. (Think of someone akin to your stock adviser, but with a much simpler task.) There were two interesting differences between the estimators and the advisers. The first was that whereas the estimators were shown the jar from a distance for a few seconds, the advisers had more time to examine it, and they were also told that the amount of money in the jar was between $10 and $30. That gave the advisers an informational edge. It made them relative experts in the field of estimating the jar’s value, and it gave the estimators a very good reason to rely on their advisers’ reports when formulating their guesses (comparable to the way we rely on experts in many areas of life).
The second difference concerned the rule for paying the advisers. In the control condition, the advisers were paid according to the accuracy of the estimators’ guesses, so no conflicts of interest were involved. In the conflict-of-interest condition, the advisers were paid more the more the estimators overguessed the value of the coins in the jar. So if the estimators overguessed by $1, it was good for the advisers—but it was even better if they overguessed by $3 or $4. The higher the overestimation, the less the estimator made but the more the adviser pocketed.
So what happened in the control condition and in the conflict-of-interest condition? You guessed it: in the control condition, advisers suggested an average value of $16.50, while in the conflict-of-interest condition, the advisers suggested an estimate that was over $20. They basically goosed the estimated value by almost $4. Now, you can look at the positive side of this result and tell yourself, “Well, at least the advice was not $36 or some other very high number.” But if that is what went through your mind, you should consider two things: first, the advisers could not give obviously exaggerated advice because, after all, the estimators did see the jar; if the suggested value had been dramatically too high, the estimators would have dismissed it altogether. Second, remember that most people cheat just enough to still feel good about themselves. In that sense, the fudge factor was an extra $4 (or about 25 percent of the amount).
The importance of this experiment, however, showed up in the third condition—the conflict-of-interest-plus-disclosure condition. Here the payment for the adviser was the same as it was in the conflict-of-interest condition. But this time the adviser had to tell the estimator that he or she (the adviser) would receive more money when the estimator overguessed. The sunshine policy in action! That way, the estimator could presumably take the adviser’s biased incentives into account and discount the advice of the adviser appropriately. Such a discount of the advice would certainly help the estimator, but what about the effect of the disclosure on the advisers? Would the need to disclose eliminate their biased advice? Would disclosing their bias stretch the fudge factor? Would they now feel more comfortable exaggerating their advice to an even greater degree? And the billion-dollar question is this: which of these two effects would prove to be larger? Would the discount that the estimator applied to the adviser’s advice be smaller or larger than the extra exaggeration of the adviser?
The results? In the conflict-of-interest-plus-disclosure condition, the advisers increased their estimates by another $4 (from $20.16 to $24.16). And what did the estimators do? As you can probably guess, they did discount the estimates, but only by $2. In other words, although the estimators did take the advisers’ disclosure into consideration when formulating their estimates, they didn’t subtract nearly enough. Like the rest of us, the estimators didn’t sufficiently recognize the extent and power of their advisers’ conflicts of interest.
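The net effect of disclosure can be checked with a few lines of arithmetic. The sketch below uses only the dollar figures reported above; the variable names are my own labels for those figures:

```python
# Average adviser suggestions reported in the study (figures from the text).
conflict_advice = 20.16     # conflict of interest, no disclosure
disclosure_advice = 24.16   # conflict of interest plus disclosure

estimator_discount = 2.00   # how much estimators subtracted after disclosure

# Extra exaggeration that disclosure itself introduced into the advice:
extra_exaggeration = disclosure_advice - conflict_advice

# Net shift in what the estimators acted on: exaggeration minus discount.
net_shift = extra_exaggeration - estimator_discount

print(f"Extra exaggeration after disclosure: ${extra_exaggeration:.2f}")
print(f"Estimators' discount:                ${estimator_discount:.2f}")
print(f"Net shift against the estimators:    ${net_shift:.2f}")
```

The $2 discount covers only half of the $4 extra exaggeration, which is why the estimators ended up worse off under disclosure than without it.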
The main takeaway is this: disclosure created even greater bias in advice. With disclosure the estimators made less money and the advisers made more. Now, I am not sure that disclosure will always make things worse for clients, but it is clear that disclosure and sunshine policies will not always make things better.
So What Should We Do?
Now that we understand conflicts of interest a bit better, it should be clear that they cause serious problems. Not only are they ubiquitous, but we don’t seem to fully appreciate the degree to which they influence us and others. So where do we go from here?
One straightforward recommendation is to try to eradicate conflicts of interest altogether, which of course is easier said than done. In the medical domain, that would mean, for example, that we would not allow doctors to treat or test their own patients using equipment that they own. Instead, we’d have to require that an independent entity, with no ties to the doctors or equipment companies, conduct the treatments and tests. We would also prohibit doctors from consulting for drug companies or investing in pharmaceutical stocks. After all, if we don’t want doctors to have conflicts of interest, we need to make sure that their income doesn’t depend on the number and types of procedures or prescriptions they recommend. Similarly, if we want to eliminate conflicts of interest for financial advisers, we should not allow them to have incentives that are not aligned with their clients’ best interests—no fees for services, no kickbacks, and no differential pay for success and failure.
Though it is clearly important to try to reduce conflicts of interest, it is not easy to do so. Take contractors, lawyers, and car mechanics, for example. The way these professionals are paid puts them into terrible conflicts of interest because they both make the recommendation and benefit from the service, while the client has no expertise or leverage. But stop for a few minutes and try to think about a compensation model that would not involve any conflicts of interest. If you are taking the time to try to come up with such an approach, you most likely agree that it is very hard—if not impossible—to pull off. It is also important to realize that although conflicts of interest cause problems, they sometimes happen for good reason. Take the case of physicians (and dentists) ordering treatments that use equipment they own. Although this is a potentially dangerous practice from the perspective of conflicts of interest, it also has some built-in advantages: professionals are more likely to purchase equipment that they believe in; they are likely to become experts in using it; it can be much more convenient for the patient; and the doctors might even conduct some research that could help improve the equipment or the ways in which it is used.
The bottom line is that it is no easy task to come up with compensation systems that don’t inherently involve—and sometimes rely on—conflicts of interest. Even if we could eliminate all conflicts of interest, the cost of doing so in terms of decreased flexibility and increased bureaucracy and oversight might not be worth it—which is why we should not overzealously advocate draconian rules and restrictions (say, that physicians can never talk to pharma reps or own medical equipment). At the same time, I do think it’s important for us to realize the extent to which we