The architecture of the modern web poses grave threats to humanity. It’s not too late to save ourselves.
By Adrienne LaFrance
The Doomsday Machine was never supposed to exist. It was meant to be a thought experiment that went like this: Imagine a device built with the sole purpose of destroying all human life. Now suppose that machine is buried deep underground, but connected to a computer, which is in turn hooked up to sensors in cities and towns across the United States.
The sensors are designed to sniff out signs of the impending apocalypse—not to prevent the end of the world, but to complete it. If radiation levels suggest nuclear explosions in, say, three American cities simultaneously, the sensors notify the Doomsday Machine, which is programmed to detonate several nuclear warheads in response. At that point, there is no going back. The fission chain reaction that produces an atomic explosion is initiated enough times over to extinguish all life on Earth. There is a terrible flash of light, a great booming sound, then a sustained roar. We have a word for the scale of destruction that the Doomsday Machine would unleash: megadeath.
Nobody is pining for megadeath. But megadeath is not the only thing that makes the Doomsday Machine petrifying. The real terror is in its autonomy, this idea that it would be programmed to detect a series of environmental inputs, then to act, without human interference. “There is no chance of human intervention, control, and final decision,” wrote the military strategist Herman Kahn in his 1960 book, On Thermonuclear War, which laid out the hypothetical for a Doomsday Machine. The concept was to render nuclear war unwinnable, and therefore unthinkable.
Kahn concluded that automating the extinction of all life on Earth would be immoral. Even an infinitesimal risk of error is too great to justify the Doomsday Machine’s existence. “And even if we give up the computer and make the Doomsday Machine reliably controllable by decision makers,” Kahn wrote, “it is still not controllable enough.” No machine should be that powerful by itself—but no one person should be either.
The Soviets really did make a version of the Doomsday Machine during the Cold War. They nicknamed it “Dead Hand.” But so far, somewhat miraculously, we have figured out how to live with the bomb. Now we need to learn how to survive the social web.
People tend to complain about Facebook as if something recently curdled. There’s a notion that the social web was once useful, or at least that it could have been good, if only we had pulled a few levers: some moderation and fact-checking here, a bit of regulation there, perhaps a federal antitrust lawsuit. But that’s far too sunny and shortsighted a view. Today’s social networks, Facebook chief among them, were built to encourage the things that make them so harmful. It is in their very architecture.
I’ve been thinking for years about what it would take to make the social web magical in all the right ways—less extreme, less toxic, more true—and I realized only recently that I’ve been thinking far too narrowly about the problem. I’ve long wanted Mark Zuckerberg to admit that Facebook is a media company, to take responsibility for the informational environment he created in the same way that the editor of a magazine would. (I pressed him on this once and he laughed.) In recent years, as Facebook’s mistakes have compounded and its reputation has tanked, it has become clear that negligence is only part of the problem. No one, not even Mark Zuckerberg, can control the product he made. I’ve come to realize that Facebook is not a media company. It’s a Doomsday Machine.
The social web is doing exactly what it was built for. Facebook does not exist to seek truth and report it, or to improve civic health, or to hold the powerful to account, or to represent the interests of its users, though these phenomena may be occasional by-products of its existence. The company’s early mission was to “give people the power to share and make the world more open and connected.” Instead, it took the concept of “community” and sapped it of all moral meaning. The rise of QAnon, for example, is one of the social web’s logical conclusions. That’s because Facebook—along with Google and YouTube—is perfect for amplifying and spreading disinformation at lightning speed to global audiences. Facebook is an agent of government propaganda, targeted harassment, terrorist recruitment, emotional manipulation, and genocide—a world-historic weapon that lives not underground, but in a Disneyland-inspired campus in Menlo Park, California.
The giants of the social web—Facebook and its subsidiary Instagram; Google and its subsidiary YouTube; and, to a lesser extent, Twitter—have achieved success by being dogmatically value-neutral in their pursuit of what I’ll call megascale. Somewhere along the way, Facebook decided that it needed not just a very large user base, but a tremendous one, unprecedented in size. That decision set Facebook on a path to escape velocity, to a tipping point where it can harm society just by existing.
Limitations to the Doomsday Machine comparison are obvious: Facebook cannot in an instant reduce a city to ruins the way a nuclear bomb can. And whereas the Doomsday Machine was conceived of as a world-ending device so as to forestall the end of the world, Facebook started because a semi-inebriated Harvard undergrad was bored one night. But the stakes are still life-and-death. Megascale is nearly the existential threat that megadeath is. No single machine should be able to control the fate of the world’s population—and that’s what both the Doomsday Machine and Facebook are built to do.
The cycle of harm perpetuated by Facebook’s scale-at-any-cost business model is plain to see. Scale and engagement are valuable to Facebook because they’re valuable to advertisers. These incentives lead to design choices such as reaction buttons that encourage users to engage easily and often, which in turn encourage users to share ideas that will provoke a strong response. Every time you click a reaction button on Facebook, an algorithm records it, and sharpens its portrait of who you are. The hyper-targeting of users, made possible by reams of their personal data, creates the perfect environment for manipulation—by advertisers, by political campaigns, by emissaries of disinformation, and of course by Facebook itself, which ultimately controls what you see and what you don’t see on the site. Facebook has enlisted a corps of approximately 15,000 moderators, people paid to watch unspeakable things—murder, gang rape, and other depictions of graphic violence that wind up on the platform. Even as Facebook has insisted that it is a value-neutral vessel for the material its users choose to publish, moderation is a lever the company has tried to pull again and again. But there aren’t enough moderators speaking enough languages, working enough hours, to stop the biblical flood of shit that Facebook unleashes on the world, because 10 times out of 10, the algorithm is faster and more powerful than a person. At megascale, this algorithmically warped personalized informational environment is extraordinarily difficult to moderate in a meaningful way, and extraordinarily dangerous as a result.
These dangers are not theoretical, and they’re exacerbated by megascale, which makes the platform a tantalizing place to experiment on people. Facebook has conducted social-contagion experiments on its users without telling them. Facebook has acted as a force for digital colonialism, attempting to become the de facto (and only) experience of the internet for people all over the world. Facebook has bragged about its ability to influence the outcome of elections. Unlawful militant groups use Facebook to organize. Government officials use Facebook to mislead their own citizens, and to tamper with elections. Military officials have exploited Facebook’s complacency to carry out genocide. Facebook inadvertently auto-generated jaunty recruitment videos for the Islamic State featuring anti-Semitic messages and burning American flags.
Even after U.S. intelligence agencies identified Facebook as a main battleground for information warfare and foreign interference in the 2016 election, the company has failed to stop the spread of extremism, hate speech, propaganda, disinformation, and conspiracy theories on its site.
Neo-Nazis stayed active on Facebook by taking out ads even after they were formally banned. And it wasn’t until October of this year that Facebook announced it would remove groups, pages, and Instagram accounts devoted to QAnon, as well as any posts denying the Holocaust. (Previously Zuckerberg had defended Facebook’s decision not to remove disinformation about the Holocaust, saying of Holocaust deniers, “I don’t think that they’re intentionally getting it wrong.” He later clarified that he didn’t mean to defend Holocaust deniers.) Even so, Facebook routinely sends emails to users recommending the newest QAnon groups. White supremacists and deplatformed MAGA trolls may flock to smaller social platforms such as Gab and Parler, but without megascale, those platforms offer little beyond a narrative of martyrdom.
In the days after the 2020 presidential election, Zuckerberg authorized a tweak to the Facebook algorithm so that high-accuracy news sources such as NPR would receive preferential visibility in people’s feeds, and hyper-partisan pages such as Breitbart News’s and Occupy Democrats’ would be buried, according to The New York Times, offering proof that Facebook could, if it wanted to, turn a dial to reduce disinformation—and offering a reminder that Facebook has the power to flip a switch and change what billions of people see online.
The decision to touch the dial was highly unusual for Facebook. Think about it this way: The Doomsday Machine’s sensors detected something harmful in the environment and chose not to let its algorithms automatically blow it up across the web as usual. This time a human intervened to mitigate harm. The only problem is that reducing the prevalence of content that Facebook calls “bad for the world” also reduces people’s engagement with the site. In its experiments with human intervention, the Times reported, Facebook calibrated the dial so that just enough harmful content stayed in users’ news feeds to keep them coming back for more.
Facebook’s stated mission—to make the world more open and connected—has always seemed, to me, phony at best, and imperialist at worst. After all, today’s empires are born on the web. Facebook is a borderless nation-state, with a population of users nearly as big as China and India combined, and it is governed largely by secret algorithms. Hillary Clinton told me earlier this year that talking to Zuckerberg feels like negotiating with the authoritarian head of a foreign state. “This is a global company that has huge influence in ways that we’re only beginning to understand,” she said.
I recalled Clinton’s warning a few weeks ago, when Zuckerberg defended the decision not to suspend Steve Bannon from Facebook after he argued, in essence, for the beheading of two senior U.S. officials, the infectious-disease doctor Anthony Fauci and FBI Director Christopher Wray. The episode got me thinking about a question that’s unanswerable but that I keep asking people anyway: How much real-world violence would never have happened if Facebook didn’t exist? One of the people I’ve asked is Joshua Geltzer, a former White House counterterrorism official who is now teaching at Georgetown Law. In counterterrorism circles, he told me, people are fond of pointing out how good the United States has been at keeping terrorists out since 9/11. That’s wrong, he said. In fact, “terrorists are entering every single day, every single hour, every single minute” through Facebook.
Adrienne LaFrance is the executive editor of The Atlantic. She was previously a senior editor and staff writer at The Atlantic and the editor of TheAtlantic.com.
Courtesy: The Atlantic
Full Article:
https://www.theatlantic.com/technology/archive/2020/12/facebook-doomsday-machine/617384/