Free speech is a big one. The U.S. is where these platforms are based, so they have inherited a lot of norms about free speech and the First Amendment. We pay a price for our love of free speech in the U.S., and that price includes some of the world’s most lax defamation laws. There are plenty of countries where it’s a lot easier to sue somebody for defaming you, where satire isn’t as well protected, where hate speech laws are stricter—in Europe, for instance, they take a different approach to these questions. But these are American companies, so they tend to take a more maximalist view of free speech. How do you fulfill that expectation at scale while balancing other considerations around safety and accountability?

What if the things that people say cause harm?
Then it has to be dealt with. But what I’ve seen is that everyone wants it both ways. They want the platforms to be responsible for everything that’s said on them, but they also want the platforms to stand down when it’s their own speech that might be interfered with. It’s just an incredibly challenging thing to get right.

It sounds like you’re saying that platforms of a certain scale and significance always face difficult choices that involve trade-offs. But if such trade-offs are inevitable, one could argue that those choices shouldn’t be left in the hands of private companies, particularly when the social and political consequences can be so severe. If there are trade-offs that must be made, they should be made through an open, democratic process, using a political forum or a regulatory mechanism of some kind.
If what you’re saying is that these kinds of debates should take place in the public arena, then yes, absolutely. There needs to be transparency and accountability. And there’s certainly room for smart regulation by smart governments with the right motives. But personally, based on what I’ve seen and observed, regulation can be slow-moving. It can be inefficient. It can be counterproductive. And it can cause a whole new set of controversies.
Returning to the Russia question: Fundamentally, the government didn’t do its job. Preventing Russian interference in the election should’ve been the business of the intelligence community from the start. I never personally felt like they were involved. But if they had been, that in itself would have been a different scandal. Imagine people’s reaction to finding out the FBI and the CIA or whoever were collaborating closely with the big platforms to combat Russian disinformation. People would freak out about government censorship and surveillance.
Even smart regulation will raise those kinds of concerns. Then, globally, it’s a very different picture. Liberals in the U.S. have a tendency to assume that government regulation has our interests at heart. That’s not true in many of the more explicitly authoritarian countries where these companies operate.

Another perspective is that, so long as these platforms are private entities trying to maximize profit and shareholder value, they will always be incentivized to put their bottom line over the well-being and interests of their users, and society more broadly.
Well, companies certainly act in their own interests. They have competition. They have to make money to stay in business.
But when I think about how new consumer features come about, it’s generally not driven by people thinking about growth or market share in a systematic way. It’s certainly not driven by people deliberately thinking about how to take advantage of users. On the contrary: it’s usually about people trying to create value for users.
There are different ways to approach that. One is the more extreme, visionary version: You think you know what the world needs. You say, this is how people should interact with each other on the internet, or this is how businesses should do payroll, or whatever.

We might call that the Steve Jobs approach.
Yeah. The other approach is more evidence-driven, which is what I’m more familiar with. It involves gathering data about what people want, what people are frustrated by. At this particular company, it might look like doing research into the pain points of a particular user experience. Then you look for ways to address those concerns.
That incremental development is first and foremost about creating value for users. And sure, if it takes off, business imperatives come into play—how is this going to affect growth, market share, that sort of thing. But that’s usually not where it starts. It starts from more of a problem-solving mindset. I would say it’s an engineer’s approach to the world. It works by identifying friction in existing systems and trying to make them more efficient.
At the end of the day, you need to be building something that people want. You need people to get value from it. Otherwise they won’t use it.
That said, with platforms this big, managing that value can get difficult. You have to balance the different considerations we discussed earlier, across many different groups of people at a global scale. You have to find a way to serve one audience while trying not to frustrate or alienate another audience. It’s complicated.
Inside Voices

Your job was to tell the public a story about a product. But companies don’t just talk to the public—they also talk to themselves. They tell themselves a story about what they’re doing, whether in the form of an explicit mission statement or an unofficial mantra. And these internal stories seem especially important in Silicon Valley: don’t be evil, move fast and break things, and so on.
Humans need stories. They need to create an accounting for themselves, a sense of how it all fits together. But sometimes people are skeptical about corporate missions or mantras. They think they’re too vague or idealistic.

I think of the scene in Silicon Valley where all the startup founders are onstage explaining how their complicated technical product is going to “make the world a better