If collaboration increases dishonesty, what can we do about it? One obvious answer is to increase monitoring. In fact, this seems to be the default response of the government’s regulators to every instance of corporate misconduct. For example, the Enron fiasco brought about a large set of reporting regulations known as the Sarbanes-Oxley Act, and the financial crisis of 2008 ushered in an even larger set of regulations (largely emerging from the Dodd-Frank Wall Street Reform and Consumer Protection Act), which were designed to regulate and increase the supervision of the financial industry.
There is no question that monitoring can be helpful to some degree, but it is also clear from our results that increased monitoring alone is unlikely to fully counteract our capacity to justify our own dishonesty—particularly when others stand to gain from our misbehavior (not to mention the high financial cost of complying with such regulations).
In some cases, instead of adding layers and layers of rules and regulations, perhaps we could set our sights on changing the nature of group-based collaboration. An interesting solution to this problem was recently implemented in a large international bank by a former student of mine named Gino. To allow his team of loan officers to work together without risking increased dishonesty (for example, by recording the value of the loans as higher than they really were in an effort to show larger short-run profits), he set up a unique supervisory system. He told his loan officers that an outside group would review their processing and approval of loan applications. The outside group was socially disconnected from the loan-making team and had no loyalty or motivation to help the loan officers. To keep the two groups separated, Gino located them in different office buildings, and he ensured that they had no direct dealings with each other and didn’t even know the individuals in the other group.
I tried to get the data from Gino in order to evaluate the success of his approach, but the lawyers of this large bank stopped us. So I don’t know whether this approach worked or how his employees felt about the arrangement, but I suspect that this mechanism had at least some positive outcomes. It probably decreased the fun that the loan work group had during its meetings. It likely also increased the stress surrounding the group’s decisions, and it was certainly not cheap to implement. Nevertheless, Gino told me that overall, adding the objective and anonymous monitoring element seemed to have a positive effect on ethics, morals, and the bottom line.
CLEARLY, THERE ARE no silver bullets for the complex issue of cheating in group settings. Taken together, though, our findings have serious implications for organizations, especially considering the predominance of collaborative work in our day-to-day professional lives. There is also no question that coming to understand the extent and complexity of dishonesty in social settings is rather depressing. Still, by understanding the possible pitfalls involved in collaboration, we can take some steps toward curbing dishonest behavior.
CHAPTER 10
A Semioptimistic Ending
Throughout this book, we’ve seen that honesty and dishonesty are based on a mixture of two very different types of motivation. On the one hand, we want to benefit from cheating (this is the rational economic motivation), while on the other, we want to be able to view ourselves as wonderful human beings (this is the psychological motivation). You might think that we can’t achieve both of these objectives at the same time—that we can’t have our cake and eat it too, so to speak—but the fudge factor theory we have developed in these pages suggests that our capacity for flexible reasoning and rationalization allows us to do just that. Basically, as long as we cheat just a little bit, we can have the cake and eat (some of) it too. We can reap some of the benefits of dishonesty while maintaining a positive image of ourselves.
As we’ve seen, certain forces—such as the amount of money we stand to gain and the probability of being caught—influence human beings far less than we might expect. At the same time, other forces influence us far more than we might expect: moral reminders, distance from money, conflicts of interest, depletion, counterfeits, reminders of our fabricated achievements, creativity, witnessing others’ dishonest acts, caring about others on our team, and so on.
ALTHOUGH THE FOCUS of the various experiments presented here was on dishonesty, it is also important to remember that most of the participants in our experiments were nice people from good universities who will likely attain positions of some power and influence later on in life. They were not the kind of people one typically associates with cheating. In fact, they were just like you, me, and most of the people on this planet, which means that all of us are perfectly capable of cheating a little bit.
Though that may sound pessimistic, the half-full part of the story is that human beings are, by and large, more moral than standard economic theory predicts. In fact, seen from a purely rational (SMORC) perspective, we humans don’t cheat nearly enough. Consider how many times in the last few days you’ve had the opportunity to cheat without getting caught. Perhaps a colleague left her purse on her desk while she was away at a long meeting. Maybe a stranger in a coffee shop asked you to watch her laptop while she went to the restroom. Maybe a grocery clerk missed an item in your cart, or you passed an unlocked bicycle on an empty street. In any of those situations, the SMORC thing to do would be to take the money, the laptop, or the bike, or to say nothing about the missed item. Yet we pass up the vast majority of these opportunities every day without thinking that we should take them. This means that we’re off to a good start in our effort to improve our moral fiber.
What About “Real” Criminals?
Across all of our experiments we’ve tested thousands of people, and from time to time we did see aggressive cheaters who kept as much money as possible. In the matrix experiment, for example, we never saw anyone claim to solve eighteen or nineteen out of the twenty matrices. But once in a while, a participant claimed to have solved all twenty matrices correctly. These are the people who, having made a cost-benefit analysis, decided to get away with as much money as possible. Fortunately, we didn’t encounter many of those folks, and because they seemed to be the exception and not the rule, we lost only a few hundred dollars to them. (Not exactly thrilling, but not too bad.) At the same time, we had thousands and thousands of participants who cheated by “just” a few matrices, but because there were so many of them, we lost thousands and thousands of dollars to them—much, much more than we lost to the aggressive cheaters.
I suspect that the ratio of my financial losses to aggressive cheaters and to small cheaters reflects dishonesty in society at large. Very few people steal to a maximal degree. But many good people cheat just a little here and there by rounding up their billable hours, claiming higher losses on their insurance claims, recommending unnecessary treatments, and so on. Companies also find many ways to cheat a little bit. Think about credit card companies that raise interest rates ever so slightly for no apparent reason and invent all kinds of hidden fees and penalties (which are often referred to, within companies, as “revenue enhancements”). Think about banks that slow down check processing so that they can hold on to our money for an extra day or two, or that charge exorbitant fees for overdraft protection and for using ATMs. All of this means that although it is obviously important to pay attention to flagrant misbehaviors, it is probably even more important to discourage the small, more ubiquitous forms of dishonesty—the misbehaviors that affect all of us most of the time—both as perpetrators and as victims.
A Word About Cultural Differences