It’s the End of the World as We Know It

BY MATTHEW HERBERT

We humans are forever predicting the end of the world. My own guess is roughly five billion years from now, when the sun will swell into a red giant and burn out. This eventuality probably wouldn’t come up much as a topic of conversation, but my seven-year-old routinely asks about it.

Like me, he is made vaguely sad by the idea that everything everywhere has an expiration date, even if that date is unimaginably far in the future. Not only will we not be around to worry about the demise of the solar system, but even if there are any descendants of Homo sapiens alive to contemplate the last sunset, they will be as different from us as we are from bacteria, their faculties for grasping and representing reality utterly alien to our modes of cognition and perception.

Of course, what is astronomically more likely is that all the earth’s life forms will have run their course eons before our tiny corner of the Milky Way unwinds according to the laws of thermodynamics.

But there’s something definitive about the end of the world that brings it within the scope of our imagination nonetheless. It doesn’t matter how far in the indeterminate future it may lie; it still looms as a finality. We can’t let it go.

Why not, though? I mean, five billion years on a human scale is eternity. There’s a strong case for just calling the world never-ending, especially in conversations with seven-year-olds.

But we don’t.

The prospect of the world’s end doesn’t just haunt us vaguely, fluttering at the back of our minds, Ian McEwan writes; it grips us and actively shapes far too much of our public lives:

Thirty years ago, we might have been able to convince ourselves that contemporary religious apocalyptic thought was a harmless remnant of a more credulous, superstitious, pre-scientific age, now safely behind us. But today, prophecy belief, particularly within the Christian and Islamic traditions, is a force in our contemporary history, a medieval engine driving our modern moral, geopolitical, and military concerns. The various jealous sky gods–and they are certainly not one and the same god–who in the past directly addressed Abraham, Paul, or Mohammed, among others, now indirectly address us through the daily television news. These different gods have wound themselves around our politics and our political differences.

Our secular and scientific culture has not replaced or even challenged these mutually incompatible, supernatural thought systems. Scientific method, skepticism, or rationality in general, has yet to find an overarching narrative of sufficient power, simplicity, and wide appeal to compete with the old stories that give meaning to people’s lives.

This passage is from McEwan’s 2007 essay, “End of the World Blues,” one of the best essays of the 2000s in my opinion. In it, McEwan takes a cool, dissecting look at our tendency to create and believe in stories about the way(s) we think the world will end. Most of these stories–lurid, violent, and deeply unintelligent–are clothed as religious prophecies. They tend to involve plagues, fiery demons, scarlet whores, sometimes mass suicides, almost always a culling of the unrighteous.

The most distressing thing about apocalypse stories, McEwan writes, is not (just) their power to make people believe them, but their power to make people wish for them to come true. It was not just Hitler, gun in his hand, catatonic in his bunker, who cursed the world as worthy of extinction once it had shown itself undeserving of his gift. It’s a thought that crosses many people’s minds. Christopher Hitchens called it “the wretched death wish that lurks horribly beneath all subservience to faith.”

Even at the core of the apparently consoling belief that life is a mere vale of tears and its tribulations, too, shall pass lies a fetid and dangerous corruption of the human spirit. Anyone who compensates for the hardships of life by contemplating the pulling down of the earthly scenery and the unmasking of the whole world as fraudulent or second rate is vulnerable, perhaps even prone, to an all-encompassing death wish. What are we to make of the 907 followers of Jim Jones killed by cyanide poisoning in 1978, who gave the poison first to children, then drank it themselves? They had arrived at the end of the world; they were pulling down its scenery to expose it as a fake. If you accept the article of faith that this life is not the “real” one, take care; your consolation differs only in degree, not kind, from the ghastly nihilism of the Jonestowners.

It is natural to understand our lives as narratives, with beginnings, middles and ends. But the story’s subject is so inconsequential against the backdrop of all of history, the telling so short! Seen sub specie aeternitatis, each of us is a mere speck of consciousness, animated by accident and gone again in a microsecond. We are, as Kurt Vonnegut put it in Deadeye Dick, “undifferentiated wisps of nothing.”

“What could grant us more meaning against the abyss of time,” McEwan proposes, “than to identify our own personal demise with the purifying annihilation of all that is.” This is a powerful alternative to accepting our status as candles in the wind. Longing for the apocalypse, McEwan is saying, is simply narcissism amped up to the max: If I have to check out, so does everyone and everything else. And merely believing in the apocalypse, as more than half of all Americans do, is the prelude to this totalitarian fantasy.

While we may think we are past the point where another Jim Jones could arise to command the imaginations of a group of benighted, prophecy-obsessed zealots, we are not. The apocalyptic personality is still alive and even walks among our elites. Retired Army General William “Jerry” Boykin, who once commanded Delta Force and the Army Special Operations Command, famously identified the United States’ enemy in the War on Terrorism in 2003 as “a guy named Satan.” Boykin also boasted that his pursuit of a Somali warlord in 1993 was fueled by the knowledge that “my God was bigger than his. I knew that my God was a real God, and his was an idol.”

As the Special Operations Commander, Boykin sought to treat a group of Baptist pastors to a prayer meeting followed by live-fire demonstrations of urban warfare. The holy shoot-em-up was meant to inspire the invited Christian shepherds to show more “guts” in the defense of the faith.

Today Boykin teaches at a private college in Virginia and leads a think tank identified by the Southern Poverty Law Center as a hate group for its activism against the LGBTQ community. He believes the United States has a mission from God to defend Christendom, and in 2018 he said that the election of Donald Trump as president bore “God’s imprint.” Boykin’s professional success raises a serious question about the enduring power of religious apocalyptic prophecies. If Boykin had had to give an earnest account of his faith to his political leash-holders when he was a general, it would clearly have come across to that polite and educated class as slightly bonkers. How, then, does someone like Boykin rise to the position he did? Nursing a rapturous death wish and a longing for spiritual warfare is no disqualifier for high official success, it seems, as long as such mental disturbances bear the imprint of sacred scripture.

When Boykin was tasked in 1993 with advising the Justice Department on how to remove the Branch Davidians from their compound, he would have confronted in his opponent across the Waco plain a kindred spirit–a fellow scripture-quoting, God-and-guns Christian demonologist who saw the world as a Manichean battlefield. All Americans should be disquieted by the fact that Boykin was closer in worldview to the armed, dangerous, and deranged David Koresh than he was to most of his fellow Army generals. His type is more likely to bring on the end of the world than to prevent it.

A second essay that, for me, helps define the distinct unease with humanity’s destiny that took shape in the 2000s is Bill Joy’s dystopian “Why the Future Doesn’t Need Us.” It serves as a reminder that it is not enough for enlightened societies simply to repudiate the lunatic fantasies of religion that titillated the minds of Jim Jones, David Koresh, Jerry Boykin, and so forth. We must also contend with the societal changes that will be wrought by our secular commitment to knowledge, science and reason.

The foundation of a rational society consists in what the philosopher Immanuel Kant called emancipation. Emancipation is the idea that humans are essentially alone, unaided by supernatural beings. We have only our own, fallible minds with which to try to understand the world and to order our relations with one another.

The American founders believed strongly in emancipation. They were deists, which meant they believed that although God had set the universe in motion, he no longer supervised or intervened in his creation. So it came naturally to the founders to think of themselves as not being under the discipline of a heavenly parent. Many of England’s scientists in the 18th century had come to Philadelphia, in particular, to escape the oppressive “parenting” of the church back home and to follow scientific discovery wherever it led. It was a great leap forward for humankind.

The thing about emancipation, though, is that it does not guarantee that free-thinking humans will choose wisely or act in a way that shapes their societies for the best. All it says is that we are unburdened by the dead hand of the past. Our future is yet to be created.

In 1987 Bill Joy, who would go on to invent much of the technological architecture of the internet, attended a conference at which luminaries of computer science made persuasive and, to him, unsettling arguments for the power of artificial intelligence to augment and even replace human cognition. It was a disturbing, formative moment for Joy. It crystallized a dilemma that he thought was rapidly taking shape for the whole of humankind. Our vaunted intelligence and talent for automation were setting in motion a new kind of creation, and it was not clear at all to Joy that humans would have a place in it.

In his essay he quotes a long passage–from Ted Kaczynski’s manifesto, as it happens, a provenance that unsettled Joy himself–which proposes:

First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.

Even if you disagree with the postulation as strictly stated, it is futile to deny the progress we’ve made since 2000 toward what it describes–having our work done for us by organized systems of increasingly intelligent machines. Even if we never reach the “utopia” of not doing any of our own work at all, we will, it seems, approach that limit asymptotically, and the difference between the real world and machine utopia will become practically insignificant.
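
To put the asymptote a touch more precisely (my formalization, not Joy’s): if $w(t)$ is the fraction of the world’s work still done by humans at time $t$, the claim is

$$w(t) > 0 \ \text{for all finite } t, \qquad \lim_{t \to \infty} w(t) = 0,$$

so at no point do we formally hand everything over, yet the gap between the real world and machine “utopia” shrinks below any threshold that matters in practice.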

Which could mean this, the passage continues:

If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions.

Again, the trends of our knowledge-driven society indicate that Joy is describing a highly plausible future, not a science-fiction scenario. Already, algorithms, not doctors, identify which strains of seasonal flu should be immunized against each year. Search engines, not lawyers, collate the case law necessary for constructing legal briefs and going to trial. On German roads, speed-trap cameras detect your speed, scan your license plate, and use networked databases to generate a citation and mail it to you. And don’t even get started on Alexa locking and unlocking your doors, adjusting your thermostat, and playing lullabies for your kids on cue. Our lives today are filled with anecdotal evidence that reliance on technology is rendering our human grasp of the world increasingly obsolete.
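
The speed-trap pipeline is a neat miniature of what “no human effort” looks like in practice. Here is a minimal sketch of that logic in Python; every name, the threshold, and the toy plate registry are my own illustrative assumptions, not a description of any real enforcement system:

    from dataclasses import dataclass
    from typing import Optional

    SPEED_LIMIT_KMH = 50.0  # assumed urban limit, purely illustrative

    # Hypothetical stand-in for the networked vehicle registry.
    REGISTRY = {"B-XY 1234": "Musterstrasse 1, 10115 Berlin"}

    @dataclass
    class CameraReading:
        plate: str        # output of automatic plate recognition
        speed_kmh: float  # measured speed

    def generate_citation(reading: CameraReading) -> Optional[str]:
        """Turn a camera reading into a mailed citation, no human in the loop."""
        if reading.speed_kmh <= SPEED_LIMIT_KMH:
            return None  # no violation, nothing to mail
        address = REGISTRY.get(reading.plate)
        if address is None:
            return None  # unknown plate; a real system would flag it for review
        overage = reading.speed_kmh - SPEED_LIMIT_KMH
        return (f"Citation for {reading.plate}: {overage:.0f} km/h over the "
                f"limit. Mail to: {address}")

    print(generate_citation(CameraReading("B-XY 1234", 68.0)))

What is striking is how little of this is judgment: once officials set the limit and wire up the databases, the rest is mechanical.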

But wait a minute. All this technology would be utterly inert and meaningless without a pre-established connection to human activity, right? German officials had to set up the system for enforcing speed limits: the technology is just the spiffy means for implementing it. The internet, to take another example, is as powerful as it is because it was designed to serve human purposes. Its proper functioning still requires the imaginative work of millions of computer scientists; its power is shaped and harnessed by millions of knowledge managers; its downstream systems require the oversight and active intervention of a phalanx of help desk workers and network engineers.

Fine, point taken. Let’s say humans will always have to man the controls of technology, no matter how “intelligent” machines become. In Joy’s view, though, this more promising-looking scenario still doesn’t get us out of the woods. It’s the other horn of the dilemma about our ultimate destiny:

On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite—just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone’s physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes “treatment” to cure his “problem.” Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or make them “sublimate” their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they will most certainly not be free. They will have been reduced to the status of domestic animals.

If anything, Joy seems to have been even more prescient about this set of trends. There are clearly still human power centers managing technology, and they are just as clearly pursuing the broad purposes Joy indicates they would. To take just one example, it is abundantly evident that a pro-Trump campaign meme in 2016 would have been designed by algorithm to micro-target the aimless poor and attract their support for policies meant to speed up their extinction (such as “guns everywhere” laws, the repeal of the ACA, and the defunding of public schools). If Trump’s supporters felt increasingly voiceless in 2016, it was not for lack of willing spokesmen. It was more likely because technology-enabled chronic underemployment had drained their lives of any purpose that might be given a voice.

This anomie is coming for us all, by the way, not just the (former) laboring class. The growing trend of “bullshit jobs,” as described by David Graeber in his book of that name, is a disorienting leading indicator of the near future of office work. Increasingly, knowledge workers will have to wring their paychecks from a fast-shrinking set of whatever meaningful tasks automation leaves for us to do. Mostly, though, we will be left with what Graeber calls “the useless jobs that no one wants to talk about.”

Humans are good at struggling. What we are not good at is feeling useless. The working poor who used to make up the middle class are now confronted by a future whose contours are literally unimaginable to them. They cannot place themselves in its landscape. Every activity of life that used to absorb human energy and endow it with purpose is increasingly under the orchestration of complex, opaque systems created by elites and implemented through layers of specialized technology. Farmers, to take one example, are killing themselves in despair of this system. They cannot compete with agribusinesses scaled for international markets and underwritten by equity instruments so complex they are unintelligible to virtually everyone but their creators, and which are traded by artificial-intelligence agents at machine speed, around the clock.

This world that never shuts off and never stops innovating was supposed to bring prosperity and, with it, human flourishing. To a marvelous extent it has. It would be redundant to review the main benefits that technological advances have brought to human life.

But the thing about technological advances is they just keep extending themselves, and as they create ever more complex systems, it becomes harder to anticipate whether they will help or harm us in the long run.

For Joy, the advent of genetics, nanotechnology, and robotics at the turn of the century was a sea change in terms of risk. It turned scientific innovation into a non-linear phenomenon. He writes:

What was different in the 20th century? Certainly, the technologies underlying the weapons of mass destruction (WMD)—nuclear, biological, and chemical (NBC)—were powerful, and the weapons an enormous threat. But building nuclear weapons required, at least for a time, access to both rare—indeed, effectively unavailable—raw materials and highly protected information; biological and chemical weapons programs also tended to require large-scale activities.

The 21st-century technologies—genetics, nanotechnology, and robotics (GNR)—are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them.

This is the ecology within which technological threats to humanity’s future will evolve. Armed with massively powerful computers that churn terabytes of data derived from exquisitely accurate genetic maps, and with robots as small as human cells (molecular-level “assemblers”) to carry the results out into the biosphere, humans increasingly have the capacity to redesign the world. “The replicating and evolving processes that have been confined to the natural world are about to become realms of human endeavor,” Joy writes.

What might this lead to? Well, hundreds of nightmare scenarios that we can imagine, and an indefinite number that we can’t. Joy quotes from the physicist Eric Drexler, author of Unbounding the Future: The Nanotechnology Revolution: “‘Plants’ with ‘leaves’ no more efficient than today’s solar cells could out-compete real plants, crowding the biosphere with an inedible foliage. Tough omnivorous ‘bacteria’ could out-compete real bacteria: They could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, small, and rapidly spreading to stop—at least if we make no preparation.”

In other words, just like our relatively dumb personal computers, GNR technology will do exactly what we tell it to do, regardless of the depth of our ignorance of the potential consequences. And then it will do its own thing, because it will redesign itself. Until the end of the world, amen.

So pick your poison, as offered up by McEwan or Joy. It may be that we are too stupid to think seriously about our ultimate destiny, or it may be that we are too smart to settle for a future that is safe and humanly meaningful. Or, as seems most dismally likely, there is room in our world for both types.

Review of “The Plague” by Albert Camus

BY MATTHEW HERBERT

Albert Camus’s 1947 novel The Plague is almost always read as an allegory. It is said to be about the spread of Nazism among the French during World War Two. For Christopher Hitchens, it is a warning about the underlying malice of religion. The desire to burn heretics only goes dormant under the civilizing forces of science, politics and common sense, Hitchens believed. The Plague shows us that tyranny can always break out anew under the right conditions.

But today it is instructive to read Camus’s novel as simply about what it says it is about–an epidemic. We need no deeper symbols to give it meaning.

The focus of the story is on the progression of people’s responses to the sudden onset of a lethal, contagious disease. One day in nineteen forty-something, the denizens of the French colonial city of Oran, Algeria were going about their lives, “with blind faith in the immediate future,” as Camus puts it. With unthinking certainty, they expected every day to be followed by another one, differing in no important respect from the last. Love, ambition, work–everything that requires the positing of a future for its fulfillment–unfolded in glorious normalcy.

Then the rats start to die. First in ones or twos, soon after in large groups. Building supervisors and trash haulers have to gather them up and carry them away. People step on them unawares, feeling something soft underfoot, then kicking them away in disgust.

Soon, people start dying too. Two Oran doctors evaluate the evidence and hypothesize that a plague is underway. Their first reaction when they speak the word to themselves is to anticipate what will happen next–a large-scale official denial of the threat even as it unfolds before everyone’s eyes. “You know,” one of the doctors says, “what they’re going to tell us? That it vanished from temperate countries long ago.”

It sounds funny, but we European-Americans think this kind of thing too. We lived in such close quarters with our animals for so long that we contracted a whole range of ravaging diseases, then became immune to many of them, then conquered the New World through a global campaign of germ warfare we didn’t even know we were waging. So it goes. Millions died.

Late in The Plague, as Oran is dying, the protagonist, Dr. Rieux, reflects on how little human agency matters once a brutal, unthinking pandemic is unleashed. Rieux has been working 20-hour days, and he is beginning to realize he will eventually lose the fight against the plague’s exponential spread. Amidst the stench of the dead and dying people of Oran, he achieves a kind of clarity:

Had he been less tired, his senses more alert, that all-pervading odor of death might have made him sentimental. But when a man has had only four hours’ sleep, he isn’t sentimental. He sees things as they are; that is to say, he sees them in the garish light of justice–hideous, witless justice.

I have read all of Camus’s books, and I am confident that this is a statement of record: it is Camus speaking directly to the reader. Camus believes each person lives alone beneath a “vast, indifferent sky,” and must confront an ultimate absurdity–that life, the one thing we humans are encoded and conditioned to seek with all our energy, is precisely the thing the universe will deny us. We are guaranteed not to get it. Some justice, right? You can see why Camus calls it hideous and witless.

The plague comes for us all.

But this is not the whole of the human situation for Camus. Faced with desperate absurdity, we invent things. One of these is society. We create all kinds of groups whose overarching purposes endow our individual lives with meaning. We subordinate our selfish desires to higher ends. They give us a reason, as Marcus Aurelius put it long before Camus, to rise each morning and do the work of a human. Having a society is what enables us to be fully human.

But the thing about society is that it does not come for free. It is not just there, like the elements of the periodic table. We create it, and we are responsible for sustaining it. And this is actually what The Plague is about, whether you take the plotline straight or as an allegory–it is about moral responsibility.

About one-third of the way through The Plague, the people of Oran start to understand that the epidemic ravaging their home will soon re-shape their lives. With the shit getting real and minds suddenly focused, a prominent priest decides to give a straight-talk sermon. It is, ahem, a come-to-Jesus moment:

If today the plague is in your midst, that is because the hour has struck for taking thought. The just man need have no fear, but the evildoer has good cause to tremble. For plague is the flail of God and the world his threshing floor, and implacably he will thresh out his harvest until the wheat is separated from the chaff. There will be more chaff than wheat, few chosen of the many called. Yet this calamity was not willed by God. Too long this world of ours has connived at evil, too long has it counted on the divine mercy, on God’s forgiveness.

Camus was not a religious man. Quite the opposite. The priest’s claim that this world of ours has connived at evil, though, is something Camus believed in, in a way, with great passion. So do I. It is really about society, responsibility and solidarity.

For too long we have connived at evil by pretending that society gets by on its own or, as Margaret Thatcher thought, that it simply doesn’t exist. Americans tend to take this rugged pose in various forms: by pretending that we’re all atomized individualists; or that the market will solve all problems; or that government itself is the problem, not the solution; or that if we all had enough guns everything would work itself out; or that if we just wait for the super-rich to sprinkle a few dollars down on the poor through gig work and McJobs, they will get by. Or my personal favorite: as long as I have a big enough pile of money, everything else is as good as it needs to be.

These are all variations on the same kind of moral illiteracy.

For many years I had the privilege to live among adults who did not believe any of these childish fantasies–or at least they did not act in accordance with them. They knew that society was a human invention. If you wanted a decent society, you would have to pay for it.

And I don’t just mean money. Money is just a start. You would have to pay by believing that you really are responsible to your neighbors. You really do have to help set up good schools for everyone, even if you think your kids are more deserving than theirs. You have to build clinics and hospitals on the same model. Libraries, roads, tramlines. It all has to be good, and it has to be good for everyone.

We need these things all the time if we are to indulge our blind faith in the immediate future–the assumption that tomorrow will bless us with the same certainty today did.

What we are discovering through our current plague is how fragile our society is. It is fragile because we have allowed the rich and greedy to set its priorities. And so we inhabit a system designed only for the best of times–the only thing the rich can envision. Our healthcare system is set up to function well for the rich, just barely for the middle class, and not at all for the poor. Under “normal” circumstances, this is tolerable. Well, it is tolerable in the sense that it does not incite a general insurrection.

Same goes for labor and wages. The Iron Law of Wages is viable, but only under the best conditions. Our country is constantly running an experiment designed to discover how low wages can be driven for the maximum number of people. Sure, we can have a country where hundreds of thousands of people use payday loans to survive and never send their kids to a dentist, but only as long as widespread disaster does not strike. We need feel no responsibility for those people. It’s part of the American story to watch them struggle, alone, for survival. It’s interesting.

But when the plague landed on our shores, all our lives suddenly threatened to become more interesting. The moral corruption of our system was exposed. Suddenly it has become an urgent matter to supply people with money, goods and services they haven’t, strictly speaking, earned. But what if we had already had a system in place in which we collectivized our responsibility for one another–a system that normalized the impulse to take care of each other?

Writing in the Atlantic this week, Anne Applebaum counts the cost we are now paying for letting our society believe the lie of rugged individualism. That lie has led to institutional rot and a decline of not just governmental but civilizational capacity:

The United States, long accustomed to thinking of itself as the best, most efficient, and most technologically advanced society in the world, is about to be proved an unclothed emperor. When human life is in peril, we are not as good as Singapore, as South Korea, as Germany. And the problem is not that we are behind technologically, as the Japanese were in 1853. The problem is that American bureaucracies, and the antiquated, hidebound, unloved federal government of which they are part, are no longer up to the job of coping with the kinds of challenges that face us in the 21st century. Global pandemics, cyberwarfare, information warfare—these are threats that require highly motivated, highly educated bureaucrats; a national health-care system that covers the entire population; public schools that train students to think both deeply and flexibly; and much more.

The plague comes for us all. That is undeniable. But we need not pretend we are up against it alone. We need a system that takes care of everyone all the time, before emergencies happen. That’s what society is for. And, yes, Maggie, society does exist. It’s been one of our best inventions.

Review of “The Whites of Their Eyes: The Tea Party’s Revolution and the Battle over American History” by Jill Lepore

BY MATTHEW HERBERT

I wonder how many Americans know about the first suicide terrorist attack by airplane after 9/11.

It happened on February 18th, 2010. Fifty-three-year-old Andrew Stack III flew his single-engine Piper Dakota into an office building housing a local branch of the IRS. In addition to killing himself, Stack killed a civil servant, a father of six.

Stack was mad at the IRS over a long, drawn-out tax dispute. But he was mad about lots of other things too. In a suicide note he wrote shortly before setting fire to his home and launching his attack, he railed against corporate greed, the government’s bailout of the financial sector, health insurance, and the Catholic Church.

At bottom, though, what angered Stack most was having been defrauded by his own country, as he saw it. Its institutions had indoctrinated him with phony beliefs about the interconnectedness of freedom, hard work, and prosperity. The founding fathers, he wrote in his suicide note, had fought against taxation without representation, but the country today was throwing that legacy in the trash. President Obama, with his tax-and-spend healthcare plan, was not just leading us to ruin but was abandoning what it means to be American.

Austin Hess, a young Boston engineer, could relate. At a rally against Obamacare the month after Stack’s suicide attack, Hess protested, “All the government does is take my money and give it to other people.” (We later learn the delicious fact that Hess’s paycheck comes from the Departments of Defense and Homeland Security. As the world’s largest employer, the federal government gives as well as it takes.)

Andrew Stack’s strange, sad terrorist attack caught the same reactionary Zeitgeist that gripped Hess and other members of the Tea Party movement then spreading across the country. Less than one month into the Obama administration, a cacophony of conservative voices was buffeting the new president with accusations of socialism, godlessness, and crypto-Islamism, among other things.

Fox News commentators, business leaders, right-wing think tanks, and Christian fundamentalists all accused Obama of betraying the American Revolution. They gathered in Boston on April 15th, tax day, to protest the new regime of taxation without representation. They could feel the Founding Fathers returned to life and standing shoulder-to-shoulder with them, seething in anger, sensing some kind of alien menace they couldn’t, or wouldn’t, quite define. Well, one Tea Party sign dared to define it: “Spell-Check says that OBAMA is OSAMA,” it read.

In the formative days of the Tea Party, Fox News commentator Glenn Beck set up his studio to look like a schoolroom, the better to instruct his viewers on the real meaning of the American Revolution. Unburdened by the slightest sense of irony, Beck said he was fighting the forces of “indoctrination,” the same thing that had gotten under Andrew Stack’s skin before he crashed his plane into the IRS building. Beck implored his viewers to “hold your kids close to you” and teach them about the revolution that George Washington had led–a revolution rooted in “God and the Bible.”

Americans, rich and poor, dumb and smart, high- and low-born, are forever invoking the Revolution, the Founding Fathers, and the Spirit of 1776 to sanctify their political claims about the present. When it comes to having political arguments, “[n]othing trumps the Revolution,” writes Jill Lepore in her wonderful 2010 book The Whites of Their Eyes: The Tea Party’s Revolution and the Battle over American History.

As is the case with all of Lepore’s books, The Whites of Their Eyes is a wise, humane, intricately argued work of history. There is nothing reductive about it. I risk betraying Lepore’s generous intelligence, then, by beginning on a slightly reductivist note: a list Lepore compiles of things that have been thrown into Boston Harbor as acts of political theater. They have included:

A fake container of crack cocaine

The 2007 federal tax code

Cans of (non-union-produced) beer

Annual HMO reports

No doubt there have been other things flung into Boston Harbor as well. The point Lepore wants to make is that it is reflexive for us Americans to escalate our political protests to heaven, always beseeching our gods, so to speak. The objects thrown into Boston Harbor are meant to symbolize not just wrongs in need of remedy but fundamental betrayals of one or another of our founding principles. From crack to non-union beer, they are the kinds of thing that should cause our Founders to roll over in their graves, we are told.

But here is a paradox: anyone with an axe to grind can play this game. The conflicted political discourse that produced the American Revolution was so capacious and so contentious, it can accommodate almost any post-Enlightenment political idea. Lepore writes:

The remarkable debate about sovereignty and liberty that took place between 1761, when James Otis argued the writs of assistance case [about British laws that basically established police powers in the Colonies], and 1791, when the Bill of Rights was ratified, contains an ocean of ideas. You can fish almost anything out of it.

And fish we do. Almost any set of opposing causes can be found and seized upon in the body of historical writings comprising the record of the American Revolution. Perhaps the most (in)famous is the matter of religion and theism. Cast your line in the shallowest waters of the revolutionary texts and you find Christian theism plain and simple–rights being endowed by God and all that. Fish a little deeper and you’ll find founders who betray not one whit of genuine theism–even outright rejections of it. Thomas Paine, with whom Glenn Beck likes to compare himself, used his dying breath to repudiate Christianity, telling his doctor he had “no wish to believe.” Benjamin Franklin, an expert bookbinder, inserted a lampoon of Biblical-sounding nonsense into his Bible to prank anyone who would listen. Faith of our fathers?

History, the making of it and the writing of it, is an argument, Lepore says over and over. Its outcomes and processes were never fixed in the stars and cannot be chiseled into stone tablets. But the problem is, we treat our history, especially of the Revolution, as if it were so fixed. Using history to make political arguments requires creativity, empathy and reason, but our attitude is all too often self-assured idolatry, writes Lepore:

People who ask what the founders would do quite commonly declare that they know, they know, they just know, what the founders would do and, mostly, it comes to this: if only they could see us now, they would be rolling over in their graves. They might even rise from the dead and walk among us. We have failed to obey their sacred texts, holy writ. They suffered for us, and we have forsaken them. Come the Day of Judgment, they will damn us.

This is not an appeal to history, says Lepore. It’s fundamentalism, or what she sometimes terms “anti-history.”

What she means is that history is not simply a transcribing of facts established in the firmament. Any historical approach that posits its subject is a fossil record of this kind is bound to fail; it goes against the spirit of studying and writing history. As Orwell once wrote, good history should “make the past not only intelligible but alive.”

The facts and the character of the American Revolution were never fixed. They were contested from the outset and continued to be contested immediately after independence. John Adams and Thomas Jefferson disagreed entirely on what the revolution was: Adams maintained it consisted in the legal and political actions leading up to the Declaration of Independence; Jefferson said it was the war for independence itself. For Benjamin Franklin, the revolution began with and consisted primarily in a crusade against established religion.

It did not take long for fights over the meaning of the Revolution to escalate into terms we would recognize today. Jefferson praised Shays’s Rebellion in 1787 as a sign of patriotic vigor. Adams said the rebels should be violently suppressed, as the constitution demanded. And the ink was not even dry on the document that was to ground this authority.

More fundamental disagreements soon followed.

When Jefferson was elected as the third president, succeeding Adams, a Boston newspaper declared that Jefferson had ridden “into the temple of liberty on the shoulders of slaves.” This was because his win had been made possible by electoral votes created by the three-fifths clause. Had there been any founding fathers in their graves at that early date, surely Americans would have been alerted to their rolling over in them.

And what did Jefferson himself think of the power and meaning of the constitution? Toward the end of his life Jefferson wrote, “Some men look at constitutions with sanctimonious reverence and deem them like the ark of the covenant, too sacred to be touched. They ascribe to the men of the preceding age a wisdom more than human.”

Jefferson was better positioned than any other framer to recognize how deeply disputation ran through our founding documents, what human creations they were. A lifelong slaveholder, Jefferson was so profoundly conflicted over slavery that his first draft of the Declaration of Independence included what could fairly be called an argument with himself over it. In “a breathless paragraph, his longest and angriest grievance against the king, Jefferson blamed George III for slavery,” Lepore writes, specifically for not abolishing the slave trade with the colonies. Jefferson’s contemporaries disagreed with his anti-slavery passage, some saying it went too far, others not far enough. In the end, the passage was left out as a tactical expedient: the other framers thought it would open the colonists up to charges of hypocrisy, given how thoroughly slavery was embedded in their economy and culture.

In 2010 the Tea Party’s sanctimony would sometimes become more than a little bathetic, as when its members insisted on Congressmen and other adults reciting the Pledge of Allegiance. The Pledge’s author, Francis Bellamy, was a socialist who was once chased from the pulpit for demanding that the rich be taxed heavily and their wealth given to the poor. Bellamy wrote the Pledge as part of an ad campaign to promote something his boss’s company invented called the “flag movement.” They wanted to sell flags to every school in America. The Pledge helped their business. It was meant to indoctrinate children.

Lepore urges us to understand that there are no political conclusions that can be lifted directly from the founders’ principles or the history of the American Revolution. History is alive, but not in the sense the Tea Party says it is. If we wish to understand anything of the founders’ principles, we have to go back and examine them in the tension from which they arose, and the tension they never escaped. The idea that the wisdom of an earlier generation could resolve political contests with sacred revelations was precisely what the founders rejected. And we know this because of the disputes they had with one another, which never stopped. “They believed that to defer without examination to what your forefathers believed,” writes Lepore, “is to become a slave to the tyranny of the past.”

Orwell’s Review of “The Soul of Man under Socialism” by Oscar Wilde

BY MATTHEW HERBERT

Crack open the first volume of Prejudices, H.L. Mencken’s career-spanning collection of essays, and what’s the first chapter you see? “Criticism of Criticism of Criticism.”

It’s really good. You can read it here.

About that title. Is it Mencken being self-deprecatingly funny? Yes. Is it Mencken being earnest and passionate? Also yes.

The part of humanity I feel closest to is the part that, like Mencken, gets worked up over words, and I mean worked up to the point of life and death. But Mencken was also lighthearted. He played in a brass band, wrote nonsense poems and drank a lot of beer. He knew all those words, earnest and passionate as they were, might just be leading us in circles.

So here’s a bit of circling around and around–some criticism of criticism of criticism.

In May 1948 George Orwell wrote a review of Oscar Wilde’s essay “The Soul of Man under Socialism,” a beauteous vision of the future in which life’s necessities would be so plentiful as to obviate the need to own things or even to work. The thing that prompted Orwell to write the review was the essay’s surprising durability. “Although [Wilde’s] prophecies have not been fulfilled,” Orwell wrote, “neither have they been made irrelevant by the passage of time.”

And this was saying a lot. Wilde had written the essay in 1891 at the peak of Europe’s Gilded Age. Wilde was no economist and, as Orwell points out, not really a socialist, just an admirer of the cause. The rich, old Victorian world Wilde lived in would have been unrecognizable to Orwell’s peers in 1948–many of them survivors of the two most destructive wars in world history, the most lethal pandemic since figures had been kept, and the worst economic depression since the dawn of the industrial age. Britain was on food rations in 1948, despite winning the war. If anything in “The Soul of Man under Socialism” still rang true after such extensive trauma, Orwell thought, it deserved another look.

Actually, it might be more accurate to say Wilde’s essay was just starting to ring true in 1948. It had a long latency. When Wilde wrote about socialism taking over the rich world, the prospect was clearly a pipe dream. But in 1948 Orwell sat up and took notice of the rise of communism in China and much of what would soon be called the Third World: “Socialism,” he wrote, “in the sense of economic collectivism, is conquering the earth at a speed that would hardly have seemed possible sixty years ago.”

The broad, gathering march of communism was what made Wilde’s essay relevant in a general way, but it was two of Wilde’s particular observations that really grabbed Orwell’s attention. One was that Wilde correctly perceived socialism’s inborn tendency toward authoritarianism. Any government given the power to control industry, markets and wages would be tempted to rule over all of society. Wilde admitted this, tangentially.

But he largely dismissed this threat, saying, “I hardly think that any Socialist, nowadays, would seriously propose that an inspector should call every morning at each house to see that each citizen rose up and did manual labor for eight hours.” And of course communist regimes did this kind of thing and much, much worse. Wilde’s error about socialism was basically the same one Americans have been making about democracy for the last 30 years. He assumed the system would work because the elites who implemented it would be rational and benevolent. What Orwell knew was that socialism’s ruling class–any ruling class–would entrench itself as an authoritarian regime once it accrued enough power to dictate to the masses. This is the basic plotline of Animal Farm.

In Oregon, the state legislature has recently been proving that even a highly developed democratic system can break down if it is not implemented by benevolent elites. Bad faith is not a special problem of socialism. According to reporting by Vox, Republican members of the legislature, minorities in both houses, have been walking out of their jobs every time a bill they oppose comes to a vote. A rule written long ago on the assumption that lawmakers would be good stewards of democracy requires a two-thirds supermajority for a quorum. The Democratic party holds a majority big enough to pass a law but not big enough to convene a quorum. So each time the legislature comes to the cusp of passing a law that the people of Oregon elected it to pass, the Republicans desert their posts.

There may be less to the much-vaunted political culture of democracy than meets the eye. In the triumphalist mood at the end of the Cold War, we thought of liberal democracy as intrinsically superior to other ideologies. You could just see that collectivism of any kind was bad: look how corrupt its elites were and what failures of governance it produced. Well, it seems that socialists don’t have a lock on bad-faith failures of governance. Democrats too may falter in–or even openly reject–their commitments to reason, decency, and fair play. We too are capable of wrecking a good system.

Another area where Wilde was sort of right, but wrong in an interesting way, was his thinking about technology and leisure. He thought machines would relieve humans of drudge work and, hence, of the “sordid necessity of living for others.” Freed of the need to work for wages, humans would seek something like Maslow’s self-actualization. “In effect,” Orwell writes, “the world [would] be populated by artists, each striving after perfection in the way that seems best to him.”

As Orwell points out, Wilde had tunnel vision on this matter. The utopia he envisioned blanketing the whole world was really only conceivable in the most developed economies, such as the one he lived in. Africa and Asia in 1948 were far behind this level of development. A political ideology based on the equality of all humans that left most humans out of its equations would badly miss the mark.

Furthermore, Orwell saw that the minute technical challenges of machine work would have to be tackled before robots could do our jobs for us. Wilde glossed over this problem as trivial. Orwell wrote that machines lacked human “flexibility,” possibly referring to their lack of fine motor skills, or possibly to their inability to think. “In practice, even in the most highly mechanised countries,” Orwell wrote, “an enormous amount of dull and exhausting work has to be done by unwilling human muscles.” And today? I think any Amazon Prime delivery worker would still give Orwell an amen.

So Wilde put too much faith in robots, too soon. But things are changing now. The quixotic presidential campaign of Andrew Yang hinted in a fascinating way at a step change in the march of the machines. You know that weird idea Yang had of a universal basic income–where we just sit home and draw a check each month? Well, it’s coming, and it’s coming because thinking machines really are overtaking our jobs. The artificial intelligence revolution looks set to change the relationship between humans and labor forever.

If a computer can do a better job of, say, balancing a complex set of business accounts, why pay a roomful of sweaty, fallible humans to do the same job less well? And if machines can think creatively (which they can, possibly beyond our ability to grasp), they will eventually reach a tipping point where they exceed the human ability to design algorithms and apply them to real-world problems. Machines will design even better machines. Economically, what this means is that machines will create value.

Let me say that again: machines will create value. This development is unprecedented. For the 20,000-odd years we’ve had civilization, it has been up to humans, and only humans, to add our labor to nature, as John Locke phrased it, and create something of sufficient value that we feel a claim of ownership toward it. Plant and hoe a garden, and it’s yours. Eat your vegetables or sell them, but by god they are yours to do with as you please. This conception of property is a bedrock assumption of economics.

And, as Yang understands, it’s about to fall out from under us. If machines create the value that drives GDP, we will, to an uncertain but large extent, live off the proceeds and taxes derived from their creations. Just like that, the most improbable of Wilde’s prophecies will effectively come true: we will be freed of the “sordid necessity of living for others.” What then?
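
The scale of that transfer is worth a back-of-envelope check (my numbers, purely illustrative, not Yang’s): with roughly 250 million American adults, a $1,000-a-month dividend works out to

$$250{,}000{,}000 \times \$12{,}000/\text{year} \approx \$3\ \text{trillion per year},$$

something like a seventh of current U.S. GDP–value the machines would have to create, and surrender in taxes, year after year.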

One of the recipients of Yang’s “Freedom Dividend,” the $1,000-a-month prototype of universal basic income that Yang passed out during his campaign, said he had bought a guitar with his thousand bucks. I thought it was kind of lame when I heard it, but slowly its meaning began to sink in. Guitar Man was basically an example of Orwell’s interpretation of Wilde: without the need to work for our wages, we are all artists waiting to happen. We will still be as busy as we have been for the last 20,000 years creating ourselves, but not in response to the biological imperatives that have driven us thus far and the social structures that have evolved to organize those imperatives.

Thinking about what to do with your Freedom Dividend is vertigo-inducing. This is not just because the money fell from the sky, produced by a being that can feel no claim of ownership and does not know the meaning of the phrase “by the sweat of one’s brow.” Spending your Freedom Dividend is unsettling because what you are really doing is choosing what you want to be in a world that is no longer regimented by the conventions of work.