BY MATTHEW HERBERT
We humans are forever predicting the end of the world. My own guess is roughly five billion years from now, when the sun will swell into a red giant and burn out. This eventuality probably wouldn’t come up much as a topic of conversation, but my seven-year-old routinely asks about it.
Like me, he is made vaguely sad by the idea that everything everywhere has an expiration date, even if that date is unimaginably far in the future. Not only will we not be around to worry about the demise of the solar system, but even if there are any descendants of Homo sapiens alive to contemplate the last sunset, they will be as different from us as we are from bacteria, their faculties for grasping and representing reality utterly alien to our modes of cognition and perception.
Of course what is astronomically more likely is that all the earth’s life forms will have run their course eons before our tiny corner of the Milky Way unwinds according to the laws of thermodynamics.
But there’s something definitive about the end of the world that brings it within the scope of our imagination nonetheless. It doesn’t matter how far in the indeterminate future it may lie; it still looms as a finality. We can’t let it go.
Why not, though? I mean, five billion years on a human scale is eternity. There’s a strong case for just calling the world never-ending, especially in conversations with seven-year-olds.
But we don’t.
The prospect of the world’s end doesn’t just haunt us vaguely, fluttering in the backs of our minds. As Ian McEwan writes, it grips us and actively shapes far too much of our public lives:
Thirty years ago, we might have been able to convince ourselves that contemporary religious apocalyptic thought was a harmless remnant of a more credulous, superstitious, pre-scientific age, now safely behind us. But today, prophecy belief, particularly within the Christian and Islamic traditions, is a force in our contemporary history, a medieval engine driving our modern moral, geopolitical, and military concerns. The various jealous sky gods–and they are certainly not one and the same god–who in the past directly addressed Abraham, Paul, or Mohammed, among others, now indirectly address us through the daily television news. These different gods have wound themselves around our politics and our political differences.
Our secular and scientific culture has not replaced or even challenged these mutually incompatible, supernatural thought systems. Scientific method, skepticism, or rationality in general, has yet to find an overarching narrative of sufficient power, simplicity, and wide appeal to compete with the old stories that give meaning to people’s lives.
This passage is from McEwan’s 2007 essay, “End of the World Blues,” one of the best essays of the 2000s in my opinion. In it, McEwan takes a cool, dissecting look at our tendency to create and believe in stories about the way(s) we think the world will end. Most of these stories–lurid, violent, and deeply unintelligent–are clothed as religious prophecies. They tend to involve plagues, fiery demons, scarlet whores, sometimes mass suicides, almost always a culling of the unrighteous.
The most distressing thing about apocalypse stories, McEwan writes, is not (just) their power to make people believe them, but their power to make people wish for them to come true. It was not just Hitler, gun in his hand, catatonic in his bunker, who cursed the world as worthy of extinction once it had shown itself undeserving of his gift. It’s a thought that crosses many people’s minds. Christopher Hitchens called it “the wretched death wish that lurks horribly beneath all subservience to faith.”
Even at the core of the apparently consoling belief that life is a mere vale of tears and its tribulations, too, shall pass lies a fetid and dangerous corruption of the human spirit. Anyone who compensates for the hardships of life by contemplating the pulling down of the earthly scenery and the unmasking of the whole world as fraudulent or second rate is vulnerable, perhaps even prone, to an all-encompassing death wish. What are we to make of the 907 followers of Jim Jones killed by cyanide poisoning in 1978, who gave the poison first to children, then drank it themselves? They had arrived at the end of the world; they were pulling down its scenery to expose it as a fake. If you accept the article of faith that this life is not the “real” one, take care; your consolation differs only in degree, not kind, from the ghastly nihilism of the Jonestowners.
It is natural to understand our lives as narratives, with beginnings, middles, and ends. But the story’s subject is so inconsequential against the backdrop of all of history, the telling so short! Seen sub specie aeternitatis, each of us is a mere speck of consciousness, animated by accident and gone again in a microsecond. We are, as Kurt Vonnegut put it in Deadeye Dick, “undifferentiated wisps of nothing.”
“What could grant us more meaning against the abyss of time,” McEwan proposes, “than to identify our own personal demise with the purifying annihilation of all that is.” This is a powerful alternative to accepting our status as candles in the wind. Longing for the apocalypse, McEwan is saying, is simply narcissism amped up to the max: If I have to check out, so does everyone and everything else. And merely believing in the apocalypse, as more than half of all Americans do, is the prelude to this totalitarian fantasy.
While we may think we are past the point where another Jim Jones could arise to command the imaginations of a group of benighted, prophecy-obsessed zealots, we are not. The apocalyptic personality is still alive and even walks among our elites. Retired Army General William “Jerry” Boykin, who once commanded Delta Force and the Army Special Operations Command, famously identified the United States’ enemy in the War on Terrorism in 2003 as “a guy named Satan.” Boykin also boasted that his pursuit of a Somali warlord in 1993 was fueled by the knowledge that “my God was bigger than his. I knew that my God was a real God, and his was an idol.”
As the Special Operations Commander, Boykin sought to host a group of Baptist pastors for a prayer meeting followed by live-fire demonstrations of urban warfare. The holy shoot-em-up was meant to inspire the invited Christian shepherds to show more “guts” in the defense of the faith.
Today Boykin teaches at a private college in Virginia and leads a think tank identified by the Southern Poverty Law Center as a hate group for its activism against the LGBTQ community. He believes the United States has a mission from God to defend Christendom, and in 2018 he said that the election of Donald Trump as president bore “God’s imprint.” Boykin’s professional success raises a serious question about the enduring power of religious apocalyptic prophecies. If Boykin, when he was a general, had had to give an earnest account of his faith to his political leashholders, it would clearly have come across to that polite and educated class as slightly bonkers. How, then, does someone like Boykin rise to the position he did? Nursing a rapturous death wish and a longing for spiritual warfare is no disqualifier for high official success, it seems, as long as such mental disturbances bear the imprint of sacred scripture.
When Boykin was tasked in 1993 to advise the Justice Department on how to remove the Branch Davidians from their compound, he would have confronted in his opponent across the Waco plain a kindred spirit–a fellow scripture-quoting, God-and-guns Christian demonologist who saw the world as a Manichean battlefield. All Americans should be disquieted by the fact that Boykin was closer in worldview to the armed, dangerous, and deranged David Koresh than he was to most of his fellow Army generals. His type is more likely to bring on the end of the world than to prevent it.
A second essay that, for me, helps define the distinct unease with humanity’s destiny that took shape in the 2000s is Bill Joy’s dystopian “Why the Future Doesn’t Need Us.” It serves as a reminder that it is not enough for enlightened societies simply to repudiate the lunatic fantasies of religion that titillated the minds of Jim Jones, David Koresh, Jerry Boykin, and so forth. We must also contend with the societal changes that will be wrought by our secular commitment to knowledge, science, and reason.
The foundation of a rational society consists in what the philosopher Immanuel Kant called emancipation. Emancipation is the idea that humans are essentially alone, unaided by supernatural beings. We have only our own, fallible minds with which to try to understand the world and to order our relations with one another.
The American founders believed strongly in emancipation. They were deists, which meant they believed that although God had set the universe in motion, he no longer supervised or intervened in his creation. So it came naturally to the founders to think of themselves as not being under the discipline of a heavenly parent. Many of England’s scientists in the 18th century had come to Philadelphia, in particular, to escape the oppressive “parenting” of the church back home and to follow scientific discovery wherever it led. It was a great leap forward for humankind.
The thing about emancipation, though, is that it does not guarantee that free-thinking humans will choose wisely or act in a way that shapes their societies for the best. All it says is that we are unburdened by the dead hand of the past. Our future is yet to be created.
In 1998 Bill Joy, who had helped invent much of the technological architecture of the internet, attended a conference at which luminaries of computer science made persuasive and, to him, unsettling arguments for the power of artificial intelligence to augment and even replace human cognition. It was a disturbing, formative moment for Joy. It crystallized a dilemma that he thought was rapidly taking shape for the whole of humankind. Our vaunted intelligence and talent for automation were setting in motion a new kind of creation, and it was not clear at all to Joy that humans would have a place in it.
In his essay he quotes a scenario that frames the dilemma:
First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.
Even if you disagree with Joy’s postulation as strictly stated, it is futile to deny the progress we’ve made since 2000 in what he’s getting at–having our work done for us by organized systems of increasingly intelligent machines. Even if we never reach the “utopia” of not doing any of our own work at all, we will, it seems, approach that limit asymptotically, and the difference between the real world and machine utopia will become practically insignificant.
Which could mean this, according to Joy:
If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions.
Again, the trends of our knowledge-driven society indicate that Joy is describing a highly plausible future, not a science fiction scenario. Already, algorithms, not doctors, identify which strains of seasonal flu the annual vaccine should target. Search engines, not lawyers, collate the case law necessary for constructing legal briefs and going to trial. On German roads, speed cameras measure your speed, scan your license plate, and use networked databases to generate a citation and mail it to you. And don’t even get started on Alexa locking and unlocking your doors, adjusting your thermostat, and playing lullabies for your kids on cue. Our lives today are filled with anecdotal evidence that reliance on technology is rendering our human grasp of the world increasingly obsolete.
But wait a minute. All this technology would be utterly inert and meaningless without a pre-established connection to human activity, right? German officials had to set up the system for enforcing speed limits: the technology is just the spiffy means for implementing it. The internet, to take another example, is as powerful as it is because it was designed to serve human purposes. Its proper functioning still requires the imaginative work of millions of computer scientists; its power is shaped and harnessed by millions of knowledge managers; its downstream systems require the oversight and active intervention of a phalanx of help desk workers and network engineers.
Fine, point taken. Let’s say humans will always have to man the controls of technology, no matter how “intelligent” machines become. In Joy’s view, though, this more promising-looking scenario still doesn’t get us out of the woods. It’s the other horn of the dilemma about our ultimate destiny:
On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite—just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone’s physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes “treatment” to cure his “problem.” Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or make them “sublimate” their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they will most certainly not be free. They will have been reduced to the status of domestic animals.
If anything, Joy seems to have been even more prescient about this set of trends. There are clearly still human power centers managing technology, and they are just as clearly pursuing the broad purposes Joy indicates they would. To take just one example, it is abundantly evident that a pro-Trump campaign meme in 2016 would have been algorithmically designed to micro-target the aimless poor and attract their support for policies meant to speed up their extinction (such as “guns everywhere” laws, the repeal of the ACA, and the defunding of public schools). If Trump’s supporters felt increasingly voiceless in 2016, it was not for lack of willing spokesmen. It was more likely because technology-enabled chronic underemployment had drained their lives of any purpose that might be given a voice.
This anomie is coming for us all, by the way, not just the (former) laboring class. The rise of “bullshit jobs,” described by David Graeber in his book of the same name, is a disorienting leading indicator of the near future of office work. Increasingly, knowledge workers will have to wring their paychecks from a fast-shrinking set of whatever meaningful tasks automation leaves for us to do. We will mostly be left, though, with what Graeber calls “the useless jobs that no one wants to talk about.”
Humans are good at struggling. What we are not good at is feeling useless. The working poor who used to make up the middle class are now confronted by a future whose contours are literally unimaginable to them. They cannot place themselves in its landscape. Every activity of life that used to absorb human energy and endow it with purpose is increasingly under the orchestration of complex, opaque systems created by elites and implemented through layers of specialized technology. Farmers, to take one example, are killing themselves in despair over this system. They cannot compete with agribusinesses scaled for international markets and underwritten by equity instruments so complex they are unintelligible to virtually everyone but their creators, traded by artificial intelligence agents at machine speed, around the clock.
This world that never shuts off and never stops innovating was supposed to bring prosperity and, with it, human flourishing. To a marvelous extent it has. It would be redundant to review the main benefits that technological advances have brought to human life.
But the thing about technological advances is they just keep extending themselves, and as they create ever more complex systems, it becomes harder to anticipate whether they will help or harm us in the long run.
For Joy, the advent of genetics, nanotechnology, and robotics at the turn of the century was a sea change in terms of risk. It turned scientific innovation into a non-linear phenomenon. He writes:
What was different in the 20th century? Certainly, the technologies underlying the weapons of mass destruction (WMD)—nuclear, biological, and chemical (NBC)—were powerful, and the weapons an enormous threat. But building nuclear weapons required, at least for a time, access to both rare—indeed, effectively unavailable—raw materials and highly protected information; biological and chemical weapons programs also tended to require large-scale activities.
The 21st-century technologies—genetics, nanotechnology, and robotics (GNR)—are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them.
This is the ecology within which technological threats to humanity’s future will evolve. Armed with massively powerful computers that churn through terabytes of data derived from exquisitely accurate genetic maps, and with robots as small as human cells (molecular-level “assemblers”) to carry the results out into the biosphere and act on them, humans increasingly have the capacity to redesign the world. “The replicating and evolving processes that have been confined to the natural world are about to become realms of human endeavor,” Joy writes.
What might this lead to? Well, hundreds of nightmare scenarios that we can imagine, and an indefinite number that we can’t. Joy quotes from the physicist Eric Drexler, author of Unbounding the Future: The Nanotechnology Revolution: “‘Plants’ with ‘leaves’ no more efficient than today’s solar cells could out-compete real plants, crowding the biosphere with an inedible foliage. Tough omnivorous ‘bacteria’ could out-compete real bacteria: They could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, small, and rapidly spreading to stop—at least if we make no preparation.”
In other words, just like our relatively dumb personal computers, GNR technology will do exactly what we tell it to do, regardless of the depth of our ignorance of the potential consequences. And then it will do its own thing, because it will re-design itself. Until the end of the world, amen.
So pick your poison, as offered up by McEwan or Joy. It may be that we are too stupid to think seriously about our ultimate destiny, or it may be that we are too smart to settle for a future that is safe and humanly meaningful. Or, as seems most dismally likely, there is room in our world for both types.