Where We Were From

BY MATTHEW HERBERT

The title of Joan Didion’s personal history of California, Where I Was From, presents you with a wrinkle in time. Don’t we usually say “where I am from”? Why the extra layer of past-ness? What the 70-year-old Didion was implying was that by the time she looked back at her home state in 2003, not only had it changed beyond recognition, but so had she. She left home to write for Vogue in 1961, and then life kept happening, mostly in New York but also in Paris, Hawaii, Central America. She could no longer say she was the Joan Didion who comes from California; now she was someone who once came from there.

The realization that you can’t go home again is a common one; you try to go back only to discover it was an earlier, different version of yourself who lived there. The lack of fit is overdetermined: it runs both ways.

When we try to imagine the world after COVID-19 (the “new normal”), we can easily assume the same naivete Didion sheds in the title of her memoir. We picture ourselves dazed, enervated, bereaved, and disoriented but essentially still who we were before the pandemic. We will blink our eyes and try to work out how far magnetic north has shifted, try to get our minds around where the world’s coordinates have come unfixed. How will we travel? Eat out? Vote? Change jobs? Send kids to camp?

But we will be mistaken to think it is the same old us navigating these questions. Because, like Didion, we will have changed too. And we will have no time to mourn the loss of our old selves. The grief of this loss will slip silently onto those already accumulated. So soon, we will have to say we were from the world before COVID-19.

Historical change is nothing new, of course, and individuals never emerge from it unscathed. My dad was from the world before Vietnam, for example. Lots of things were different after the war, including him. Three books I’ve read recently, alongside Didion’s Where I Was From, have helped shape my thoughts on what is happening to us. In particular, they’ve helped me meditate on how entrenched our assumptions are about who we will be on the other side of this crisis.

Milan Kundera’s Ignorance is a novel about the urge that central European exiles felt to return home when the Iron Curtain fell in 1989. Kundera himself had fled his native Czechoslovakia after the Soviet invasion of 1968, so there is a certain amount of factual autobiography that informs Ignorance. I remember that heady time too. The Berlin Wall fell down.

In Ignorance, as communism is swept off Europe’s map with dizzying speed, the novel’s protagonists, living in Paris, are encouraged by their friends to join the “Great Return.” Go home and celebrate the liberation of your homeland, they are urged. There is dancing in the streets. This call is a nearly irresistible force. But Kundera asks why this is the case. The thing is, he has not just been waiting, holding his breath in France. He built a life there. He writes his novels in French, about themes that concern the French, and all of literate humankind. His protagonists in Ignorance follow suit, in their own ways; they have misgivings.

The urge to go home is strong, but unfiltered nostalgia freezes the exile in time, takes no account of the effects of choice and circumstance that have accumulated over the exile’s intervening lifetime. Kundera goes back to the beginning of European literature to locate this paradox. What he finds is revelatory. “In Book Five of the Odyssey,” Kundera writes, “Odysseus tells Calypso [who had captured and held him for seven years in a life of erotic luxury]: ‘As wise as she is, I know that Penelope [Odysseus’s wife] cannot compare to you in stature or beauty. . . . And yet the only wish I wish each day is to go back there.’” He had been seeking home for 10 years. Kundera goes on a little later:

Homer glorified nostalgia with a laurel wreath and thereby laid out a moral hierarchy of emotions. Penelope stands at its summit, very high above Calypso.

Calypso, ah, Calypso! I very often think about her. She loved Odysseus. They lived together for seven years. We do not know how long Odysseus shared Penelope’s bed, but certainly not so long as that. And yet we extol Penelope’s pain and sneer at Calypso’s tears.

What happens to the life we lived and the world we constructed during our pandemic exile? Do they simply lose all meaning once the conditions of exile have been lifted? Whose attentions will we abandon when we blindly make the same choice as Odysseus?

It strikes me that I spend several hours a day now in intense contact with my son, whom I am home-schooling. This will almost certainly never happen again, however far back toward normal we are able to go after COVID-19. I don’t know how long our present routine will last–almost certainly not seven years!–but when my son and I do go back toward normal, we will be different people than we were just a few months ago. The “moral hierarchy of emotions” established by Homer says we must bend our entire will to just going “home.” We will reassert the old ways of school for him, office for me. But will we not have cause to wonder why not everything fits the same? I imagine a day in my son’s future adolescence when he and I speak to each other not at all. He’s studying for finals and I’m puttering; our paths don’t cross, even under the same roof. Won’t I think of the present days with nostalgia? Won’t he? We used to talk for hours.

In his 1939 novel Coming Up For Air, George Orwell anticipates how the approaching cataclysm of World War II will change the politics of all the warring countries. Even if they win, the western Allies, liberal democrats all, won’t be able to escape the coming darkness, Orwell believes. Contending against monsters will put them at risk of turning monstrous themselves.

Coming Up For Air is a very pessimistic novel. It postulates that to win the war, Great Britain and the United States will have to adopt as expedients certain authoritarian powers that could prove corrupting of decency over the longer term. Nations will become militarized, weapons massively lethal; police will rule the streets; citizen surveillance will become pervasive; propaganda will condition the masses to hate outsiders; economies will remain on a permanent war footing, beating plowshares back into swords. All the societal movement in Coming Up For Air is downward. The protagonist, George Bowling, looks ahead, to what will happen after the crisis has passed:

But it isn’t the war that matters, it’s the after-war. The world we’re going down into, the kind of hate-world, slogan world. The coloured shirts, the barbed wire, the rubber truncheons. The secret cells where the electric light burns night and day, and the detectives watching you while you sleep. And the processions and the posters with enormous faces, and the crowds of a million people all cheering for the Leader till they deafen themselves into thinking that they really worship him . . . . It’s all going to happen.

Bowling believes he just might survive the descent into darkness if he manages to take one last breath of his former innocence, England’s former innocence. And so he arranges a weekend trip to his boyhood home of Lower Binfield. There’s a fishing spot there that he knows, he just knows, no one else has discovered. The fish he had to abandon there one summer day long ago must be huge and unwary, he believes. His anticipation builds for weeks as he plans his escape. He hasn’t fished since he was a boy.

The reader doesn’t find out whether any anglers discovered Bowling’s secret pond and fished it out, and neither does Bowling. The developer’s bulldozer certainly found the pond–and drained it and flattened it for houses to be built on top of it. Lower Binfield is unrecognizable when Bowling arrives. Its center has been dwarfed by new factories, its old, well-formed town boundaries obliterated by sprawl. There are new sorts of people there, drawn by factory work, who don’t know anything about the town’s past. The pubs have fake wooden beams. Bowling comes face to face with an old lover who fails entirely to recognize him. She’s addled, unpleasant and gone to seed.

So Bowling gives up and goes home. The novel ends, or simply stops, one almost feels, with him contemplating which shabby lies he should tell his wife to put her off the scent of the real nature of his trip. (He’d told her it was a business trip, but she sniffed out that part of the lie.)

Is this the plainly bathetic ending it appears to be? Not really. The reason Bowling had kept his getaway secret was that he, a plain, fat, dull and inconsequential salesman, had political motives for trying to take a last gulp of clean air, and he didn’t think anyone would understand his grandiosity. In a changing world that threatened even the common man with propaganda, secret police, and wage slavery, there was no such thing as staying out of politics anymore, even for someone with such a trivial, dead-end life as Bowling’s. This was something Orwell said over and over again in his essays of the time: the dynamics of what he called “after-war” dictated that even strong, successful nations would be perpetually in the preparatory stages of new war and, therefore, subject to emergency rule and contingency planning, habits that shade into authoritarianism. We can dream of the world that existed before we all became national-security states, but we can never go back to it. It has been obliterated by things we have done.

If we try to go back to the world before COVID-19, it won’t be there either. Biology is an even more efficient equalizing force than politics, and none of us can escape the tyrannizing effects of a pandemic. As Bob Dylan once put it, “You may be an ambassador to England or France; you may like to gamble, you might like to dance; you may be the heavyweight champion of the world; you may be a socialite with a long string of pearls.” Dylan was making a point about a different threat to our existence, but his reasoning remains solid: no one will be able to escape thinking about, and, in some sense, planning for, contagion in the future. We’ll just have to breathe that air.


Memory and longing will always reach backward in time. But as a basis for living, we can only look forward. Orwell knew this, and we are about to know it. The unspoken optimistic message of Coming Up For Air is that, despite the grief of abandoning what is lost, it is always for the best to strike a new course and move forward. That is almost literally the definition of progress. The new normal will improve upon the old normal.

Adolescence is, for almost everyone, a period happily left in the past. It may provide useful marks for measuring one’s intervening life journey, but it usually offers little worth dwelling on as such. My adolescence was a particular wreck. I was in the grip of a religious “theory” of morality that held up sexual purity as an ultimate ideal. When I wasn’t reading Paul saying that sex was bad, I was reading how Lancelot ruined not just his own life by sleeping with Guinevere, but also his friendship with Arthur and the whole basis for British monarchy, which I took to be synonymous with heroic virtue at the time.

The point is, I was morally serious but entirely misdirected. Because I exhausted all my energy in pursuit of Paul’s ridiculous and sociopathic ideal, I gave myself a free pass on all other issues of moral consideration. Which is to say, I was an asshole. I simply had no conception that morality involved other people and that being kind to them or at least avoiding harming them was an important goal in life. Luckily, I met good, patient people who would lead me out of the wilderness. I also read new books that would raise me up like Saul Bellow’s Augie March to where I could try to navigate “by the great stars, the highest considerations.” I got lucky.

It took me years to understand the moral wreck that was my adolescence. I would recall episodes of casually bullying someone or flagrantly harassing someone else, and they revealed how unreflectively cruel I was. No surprise in retrospect, though: my entire capacity to reflect was absorbed into a doomed, Quixotic scheme for overriding the biological drive that perpetuates the species. Oops. (And which, if piloted with care, gives way to the best things in life.)

In his quietly devastating novel The Sense of an Ending, Julian Barnes’s protagonist Tony Webster looks back on a youth 40 years gone by. He has long understood his adolescence as merely conventionally awkward, or maybe squalid around the edges, at worst. He had girl trouble, yes, he was slightly pompous and insecure, but wasn’t everyone? Then, tracing the lines of an old love triangle that led to the suicide of a boyhood friend, Webster comes face to face with documentary evidence of his own blatant cruelty. Stung by love betrayed, 20-year-old Webster wishes the worst on his friends in clear, vigorous terms, and it comes to pass. Tragedy ensues in a way that ruins the lives of people who appeared nowhere in young Webster’s moral calculations. They couldn’t have: he was so selfish and stupid–blind, really. Like I said, he was twenty.

When we sketch out the new, post-pandemic normal and inevitably look back to the old one for reference points, we will be forced to see how ill prepared we were for the shock. Then we will realize how little moral fiber ran through our national character when our time of trial arrived.

Here is what the post-COVID-19 world will bring sharply into focus: We, the generations that have shaped political culture since Ronald Reagan and Margaret Thatcher, have built a system that probes the upper limit of how far people can be deprived of the means to care for themselves. And after we’ve watched them on reality TV, we condemn the same people to the impersonal forces of social Darwinism to render hard but just rulings on them. Back when we came up with this arrangement, we thought of it as a capitalist Shangri-La. It was so fraudulent in its inception that we needed a movie–a movie!–to produce its slogan: “Greed is good.”

The Triumph of Death, by Pieter Bruegel the Elder
Where we are from

If the real-life truth of the matter could be put into a novelistic document like the one that jolts Webster out of his moral complacency in Barnes’s The Sense of an Ending, it would be a letter from us–the ruling class and its close aspirants–to the other 80 percent of our fellow humans, telling them, why don’t you just fucking die? If you don’t like the $8.00-an-hour no-benefits service jobs that retail provides you, please take the suicide pills that the pharmaceutical industry so generously supplies you. Or maybe a gun in the mouth is more your style. Or eat your way to diabetes and wait to be priced out of insulin. Or take a long turn, as so many of your fellows do, in a prison run for profit. But in any case if you can’t hack it, just get out of the way. The future does not need you.

When we stopped believing that society existed, we stopped investing in it. Many moral outrages like the few I just alluded to took root. I agree, by the way, with Reagan and Thatcher that society does not exist. But unlike them, I believe it is one of the best and deservedly most robust fictions we have come up with. It is what the historian Yuval Noah Harari would call a shared fiction worth believing in. The thing about shared fictions, though, is that they abhor a vacuum. Stop believing in one, and another will rush in to take its place. So, we may have weakened our collective belief in society, but oh boy, do we believe in corporations now. That’s who came rushing in to fill the vacuum created by Reagan-Thatcherism.

Society is the bedrock of democracy. If individuals fail to identify with a national group with whom they are willing to pull together, they cannot define any national interests. So those interests get defined for them. In our case, the defining of interests is done by organized money in the form of corporations. This is all fine for the few who own the corporations, but for the rest of us it means we are living in a failed state. Don’t believe me? Try this brief summary of our symptoms on for size (from an Atlantic Monthly article by George Packer):

When the virus came here, it found a country with serious underlying conditions, and it exploited them ruthlessly. Chronic ills—a corrupt political class, a sclerotic bureaucracy, a heartless economy, a divided and distracted public—had gone untreated for years. We had learned to live, uncomfortably, with the symptoms. It took the scale and intimacy of a pandemic to expose their severity—to shock Americans with the recognition that we are in the high-risk category.

The crisis demanded a response that was swift, rational, and collective. The United States reacted instead like Pakistan or Belarus—like a country with shoddy infrastructure and a dysfunctional government whose leaders were too corrupt or stupid to head off mass suffering.

Our moral past–like Tony Webster’s, like my own–is a mirage. It never existed as we pictured it. What we thought of as the mere, regrettable side effects of greed–a little functional poverty here, a decline of infrastructure there–came into focus over the years as the essential aims of the system we invited to take over our lives. Greed is good?–That’s the system. The undeniable fact that we are presently subjects of corporate profit-seeking, not citizens in a democracy, should tell us that our past is not what we thought it was either. It was a different country.

 

Review of “Fantasyland: How America Went Haywire: A 500-Year History” by Kurt Andersen

BY MATTHEW HERBERT

This seems like as good a time as any to admit that we are batshit crazy.

It’s not the pandemic. The Clorox briefings. The conspiracy theories.

It’s everything. For all the good traits that make America great, there is a dark side to our exceptionalism. It is nearly impossible here to grow up to become a sane adult. Something in our culture wars against it.

In America, a child born today has a one in three chance of growing up to believe UFOs are visiting our planet and the government is covering it up. One in five will believe in alien abductions. Ditto for chemtrails. Eleven percent will believe the government is trying to achieve mass mind control through new kinds of light bulbs. A whopping sixty percent will believe that end-time events foretold in the biblical book of Revelation will actually happen, and forty percent will believe the Jesus-versus-Satan apocalypse will play out while they’re still alive. (They will take no lessons from recently dead generations who believed the same thing.)

Speaking of smackdowns, many millions of new American adults 25 years from now will tune into professional wrestling, suspending belief in the distinction between real spectacle and fake sport. Many millions more will forget or never notice that “authentic” sports such as football are also staged fantasies that mix real violence with simulations of warfare. Hordes of new adults will acquire the belief that monster truck rallies are awesome. And so on.

As forecasts, these claims are, of course, off the cuff. They warrant the usual caveat, “On current trends, . . . .” After all, who’s to say that a quarter century from now Americans will be just as likely to believe in UFOs or that new light bulbs will even be a thing? Before I propose a response, though, consider this observable fact: our tendency to believe in fantasies sets us off sharply from most other people who are otherwise like us. In most countries in the developed world, a child stands almost no chance of growing up to believe even a single instance of the lurid flimflammery our children will believe about end-times, alien abductions, or the UN’s master plan to rule the world, let alone the whole shebang. In the rest of the Global North, the institutions of society seem to coordinate–or conspire, if you like–to shield children from believing exciting untruths and indulging in louche cons and quackery.

You can actually see it. Or rather, not see it. Just as there are simply no WWE matches or faith-healing tent revivals to attend in, say, France, there is a corresponding lack of false, histrionic ideas about life, the universe, and everything designed to indoctrinate children. There are no support networks of creationist home-schoolers, because–guess what–no one keeps their kids home to avoid what they think are the dwellings of Satan but which are really just schools. In these nice places, where one is unmolested by charlatans at every turn, it is possible to actually grow up. (Indeed one does better than merely grow up. Most European and East Asian high school students significantly outperform American kids on all key educational indices.)

About those current American trends, though, and the question whether they will hold steady–if anything, Americans will likely proliferate newer, nuttier beliefs than the ones we have now. New avatars of what Kurt Andersen calls America’s “fantasy-industrial complex” will emerge with even more outlandish myths, conspiracies, and lies for the up-and-coming generation to believe in. Our children don’t stand a chance. Grown up, they will go sweaty and red in the face defending preposterous nonsense like the Prosperity Gospel while Koreans and Finns coolly do math and science.

How did we get here? According to Kurt Andersen’s Fantasyland: How America Went Haywire: A 500-Year History, we have always been here. We have always had a “promiscuous devotion to the untrue,” as he puts it. Sure, new things like the internet happened along the way that accelerated and expanded it, but for Andersen, excitable credulity is in our DNA.


European settlers came to America in the 17th century for two reasons: to find gold and to establish religious utopias. Both groups, the gold seekers in Virginia and the Puritans in Massachusetts, based their ambitions on wild hopes for the future. They were gullible by definition. The Virginians were a self-selected crew of schemers “wide-eyed and desperately wishful enough” to believe the hyperbolic ad men of the Virginia Charter Company shouting in the streets of London that the New World positively gleamed with gold.

The Puritans, for their part, came to America determined to fulfill the religious revolution begun by Luther in 1517. All over western Europe, “Millions of ordinary people,” reading Gutenberg Bibles hot off the presses, “decided that they, each of them, had the right to decide what was true or untrue, regardless of what fancy experts said. And furthermore, they believed, passionate fantastical belief was the key to everything.” This revolution would continue to be fought over in Europe until at least 1648 with the Treaty of Westphalia, but the American Puritans escaped this contest and brought their radical new faith to a place where it would suffer no friction. There was no popery or any other kind of adult supervision waiting when they landed on Plymouth Rock.

The most interesting thing about the Protestant revolution in America is how quickly it became the new establishment. Rather than having to fight back against official repression by kings and popes, it was forced to deal with unruly spinoffs of its own, led by new, more extreme rebels. So began a pattern of innovation and fracturing that continues to this day. America blooms with ever-daffier religious sects. Waco’s Branch Davidians spun off from the Seventh-day Adventists, who could trace their line of religious entrepreneurship all the way back to the founding.

Of course reality set in in colonial America and tempered our fantasies. Not all roads would lead to Waco. Reason largely held sway; science thrived, especially in cosmopolitan Philadelphia. Look only to the secular, pragmatic character of the founders for evidence that America did not go all-out gonzo crazy when it had the chance. Benjamin Franklin was officially a Puritan, but you couldn’t tell it from the way he attacked Cotton Mather’s religious establishment in his newspaper columns. Ben philandered, opined freely, and did science experiments. For him, all that was left of the old Puritan ways was a keen work ethic and a desire to get ahead.

But it is not Andersen’s contention in Fantasyland that we’ve ever gone all-out crazy. Rather, he argues that a large-enough number of our fantasies have survived the winnowing process of reality to tip us toward an anything-goes epistemology that could swamp what’s left of our objectivity. The kind of society that reserves the right to believe the fabulous is fun at times but ultimately cannot serve the purposes of human dignity, which include being governable by democracy. We’re not in a good way. “The American experiment,” Andersen writes,

the original embodiment of the great Enlightenment idea of intellectual freedom, every individual free to believe anything she wishes, has metastasized out of control. From the start, our ultra-individualism was attached to epic dreams, sometimes epic fantasies–every American one of God’s chosen people building a custom-made utopia, each of us free to reinvent himself by imagination and will.

For a long time, we kept our will to believe woo-woo in a kind of rolling stasis. Beliefs in snake oil, tongue-speaking, rapture, levitation, and so forth would ebb and flow (for example, through the course of three “Great Awakenings” of fundamentalist religious faith), but for the most part, sober-minded adults steered and sustained the institutions that kept (many of us) tethered to reality. Then, in 2004, one of the adults, the political operative Karl Rove, announced a dramatic shift. He told a reporter that solutions to American political problems no longer “emerge from judicious study of discernible reality. That’s not the way the world really works anymore. We create our own reality.” At that moment, Andersen thinks, we witnessed a disruption in the woo-woo/reality stasis so powerful that it should have warned us things might not go back to normal. “America,” he writes, “was the dreamworld creation of fantasists, some religious, some out to get rich quick, all with a freakish appetite for the amazing.” This appetite, not the judicious study of discernible reality, would define us.

By 2004 the unquestioned default setting of most Americans was a preference for the amazing over the non-amazing. As luck would have it, this was also when the necessary bits of machinery for delivering non-stop information miracles–smart phones and social media–converged, approximating Arthur C. Clarke’s definition of technology so whiz-bang it was indistinguishable from magic. This convergence put America’s time-tested, highly refined capacity for merging reality with fiction–what Andersen calls our fantasy-industrial complex–unassailably in charge of our culture, politics, and everything else that makes us who we are.

As a philosophy student, I would have appreciated a precise definition from Andersen of the fantasy-industrial complex (the FIC), but he doesn’t provide one, instead using his highly absorbing narrative of credulous America to draw out its main attributes. That’s fine, though. He’s a journalist and novelist by trade, so Fantasyland is probably a better book for following the forms he’s good at. Still, the idea of the FIC is the centerpiece of the book and deserves some precision. Basically what Andersen says is that America has an FIC, and that’s what sets us apart from everyone else.

The bedrock of the FIC is the belief that everyone can believe whatever one wants. It’s a basic right. The Puritans brought this belief in belief with them, and Thomas Jefferson was the first prominent American to analyze it. He wrote in his Notes on the State of Virginia that his compatriots were free to hold any zany articles of religious faith they wished (or none at all) so long as believing them didn’t “pick his pocket or break his leg.” This idea became one of the most durable of American values. In fact we have expanded on it since Jefferson, broadening it out to apply to many things outside religious faith. Today we believe that any private belief is permissible–in Wicca, chakra healing, past lives, Bulletproof Coffee, what have you–so long as it has no negative public externalities. For a short while in the mid-2000s, California’s courts even recognized people’s right not to believe in childhood vaccination safety on “personal” (not necessarily religious) grounds. That was, until kids in California started coming down in droves with childhood diseases that hadn’t threatened humanity in 100 years. Then the courts reversed the ruling. In a nice touch, Andersen calls the Californians’ (short-lived) opt-out the “just because” exemption. As Americans, we consider ourselves entitled to believe almost anything just because.

And we tell ourselves that this kind of cognitive promiscuity is okay, because the impersonal forces of nature will keep our beliefs in check if they go too far (as in California). Maybe. You can indeed find people voicing this kind of attitude, but much more prevalent is America’s broad embrace of ever-weirder magical thinking, which reinforces and multiplies our set of just-because beliefs. From the Salemites’ belief they were being bewitched at every turn to the New Ager’s nostrum that you can make anything happen by believing it, to Dr. Oz’s belief in homeopathy, we are addicted to the idea that we can will unreality into reality.

Magical beliefs that seem wispy are abetted in America by real, concrete actions taken to turn fiction into fact. We lead the world in the production of fictional breasts and artificially young faces. Only a few decades ago most women over 50 had gray hair; today, virtually none do, thanks to the ubiquity of hair dye. It’s a harmless vanity, of course. In a much more bizarre vein, though, a portion of the ever-growing community of cosplay gamers in America strive for a transformative level of immersion in their fictions. They call it “bleed” when they inhabit their fantasy worlds to such an extent that they experience real, comprehensive emotional lives whose referents exist only in the game.

The threshold to magical thinking has been lowered in great part, according to Andersen, by the compromise of America’s intellectual gatekeepers. Back in the day, Thomas Jefferson, Benjamin Franklin and Thomas Paine all said sure, go ahead and believe what you want, but they said it in full confidence that if you let wacky truth claims “be submitted to a candid world,” the sober facts would win out. It’s in the Declaration of Independence. Well, it was a long, strange trip, but by the 1960s our intellectual gatekeepers were saying there was no such thing as facts (of the kind Jefferson & Co. had in mind), and furthermore the proper judges of what was “true” and “false” were not a “candid world” of clear-eyed observers but a wised-up clan of social theorists who squinted at “reality” and saw that it was a figment of our collective imagination. This was the upshot of several widely influential books by highly respected scholars in the 1960s, including The Structure of Scientific Revolutions, published by Thomas Kuhn in 1962, and The Social Construction of Reality by Peter Berger and Thomas Luckmann in 1966.

(One of the more adventurous books I’ve ever read from this genre is Against Method, by Paul Feyerabend. All I really knew about Feyerabend before I read Fantasyland was that he was a philosopher of science at Berkeley, and you could tell. Against Method is basically Feyerabend saying that scientists, rather than being paragons of objectivity, play professional games with the truth all the time, based on fudging data and outright lying and even bullying. I will be forever grateful to Andersen for revealing just how weird a dude Feyerabend was. I had no idea. He grew up in Austria in the 1930s, and when the Nazis annexed his country, he saw the occupation and war as an “inconvenience, not a moral problem.” He joined the Wehrmacht and commanded troops in combat. After emigrating to the US after the war and landing a job in Berkeley as a philosophy professor, he had what Andersen calls a “full 1960s conversion.” Feyerabend had always been “excited by the world falling apart around him,” according to Andersen, and this side of him came out big time at Berkeley, even though he played things pretty straight early on, giving dull lectures about the scientific pursuit of truth. “It dawned on me,” Feyerabend wrote in a memoir he called Farewell to Reason, “that the intricate arguments and wonderful stories I had so far told to my more or less sophisticated audience might just be dreams, reflections of the conceit of a small group who had succeeded in enslaving everyone else with their ideas.” Reason was no steady, reliable guide to the truth. It was the man keeping everyone down, man. Feyerabend was trying to pull down the pillars of the temple of science and reason, and, as Andersen writes, he was celebrating the very chaos he was trying to sow.)

Once the weirdness lid came off mainstream academia, it was off to the races for less noted but still notable scholars. Take C. Peter Wagner, a prominent Christian theologian and one-time professor at the (relatively) staid Fuller Seminary. Until he died in 2016, Wagner led a movement of pastors who preach the “dominion” gospel to millions–the idea that Christians should dominate American society and seize control of the government. (Basically the dominionists want to be the bad guys in The Handmaid’s Tale, but in real life.) In a 2011 NPR interview Wagner went on the record to claim with a straight face that Japan was suffering from demon possession because its emperor had arranged to have sex with the sun goddess. (I normally don’t take a position on this kind of thing, but I say if you can figure out the logistics of the deal, by all means have sex with the sun goddess, because–sun goddess!) Millions of Americans thought of Wagner as a normal, sane adult, and, judging from the strength of the dominionist movement he left behind, millions still do. If this does not have your cognitive disaster light blinking red, you should probably get it checked.

In the 19th century you had to be a nobody and a self-conscious fraud to sell snake oil. But the erosion of gatekeeper standards means that today, you can come out of the closet and stay out, cultivating enough credibility to both be a real, celebrated expert in something and also propagate childish nonsense. Deepak Chopra, for example, trained as an endocrinologist in the 1970s and then taught medicine at Tufts and Harvard while rising to become chief of staff at a large Boston hospital. Today, though, he believes that all ill-health is an illusion, which people can disabuse themselves of if they tune into their bodies’ “quantum mechanical” energy fields. He writes books about this which are bought by millions of Americans, and he has been heavily promoted by Oprah. It is safe to say that tens of millions of Americans believe the loony things Chopra says. And, in a new, very American twist, he may just believe what he says too. This is a new place for us. Back in the snake oil days, the impresario knew he had to hotfoot it out of town after making his sales, but today he sticks around and builds an empire of outlandish credulity which he cohabits with his dupes. But since we create our own realities now, it’s all good, right? Also see Dr. Oz.

Another outstanding feature of Andersen’s fantasy-industrial complex is the outsized role the 1960s played in our cognitive decline. While the eggheads at elite universities were busy bashing truth, science and objectivity, other groups were working away at eroding other conventions and cultural power structures across all of society. Andersen argues that the burgeoning use of marijuana was a good proxy measure for just how much the times they were a-changin’. “In 1965,” he writes, “fewer than a million Americans had smoked pot; in 1972 the number was 24 million. In 1967 only 5 percent of American college students had smoked; four years later it was a majority, and a third were getting high every day.” The use of psychedelics increased too. Woodstock happened, plus Transcendental Meditation. The Beatles turned on, and they also wanted to turn you on. No wonder so many young Americans suddenly found it so natural and appealing to commune with one another. The “Gestalt Prayer”–written in 1969 at Esalen, a fake, and highly successful, psychological research “institution” in California–is a wonderful distillation of the times, with its standing invitation to what Andersen calls a “concoct-your-own-truth” society:

I do my thing and you do your thing. I am not in this world to live up to your expectations, and you are not in this world to live up to mine. You are you, and I am I, and if by chance we find each other, it’s beautiful. If not, it can’t be helped.

(Here is an excellent profile of Esalen from the August 19, 2019 New Yorker. Yes, Esalen is still around, as strong as ever. Many of the tech gurus who bend our attention wherever they wish go there to learn wisdom.)

Surprisingly, increased grooviness was not the only outcome of the 1960s. A vastly underappreciated side of the onslaught against the establishment was how it boomeranged back around to aid and abet precisely those Americans who were rooting for the establishment all along. The buzzcuts over in DoD started doing data-heavy “systems analysis” showing that nukes were a force for good and we were actually winning the Vietnam war. Even hard data could mean whatever you wanted it to mean. Somehow, the hippies and Berkeley professors missed the fact that anything-goes relativism was a game that anyone could play. The establishment never went away, of course, but it sure did learn a thing or two from the left-leaning cultural revolution of the 1960s, as Andersen observes:

In fact, what the left and right respectively love and hate are mostly flipsides of the same coins minted around 1967. All the ideas we call countercultural barged onto the cultural main stage in the 1960s and ’70s, it’s true, but what we don’t really register is that so did extreme Christianity, full-blown conspiracism, libertarianism, unembarrassed greed, and more. Anything goes meant anything went.

The conquest of talk radio by conservative voices in the 1980s and 90s was just one consequence of this shift. There was never a need for Rush Limbaugh to slow down for fact-checking because, hey man, facts are like totally made up anyway. (He might not have believed this “argument,” but all of his audience had recourse to it, and likely used it liberally.) Today a popular conservative broadcaster closes his newscast with the line, “Even when I’m wrong I’m right.” Whether he knows it or not, the 1960s helped gift him this bounty of fantasy.

Anything-goes relativism also did heroic work for Biblical literalism, which had been receding steadily for the decades between the Scopes trial, in 1925, and the 1960s. The Genesis Flood, a 1961 book laying out the literalist “theory” that the earth is only 6000 years old, “almost single-handedly retrieved creationism from the dustbin of Christian intellectual history–just as the academic mainstream was starting to say that science couldn’t necessarily be trusted as the arbiter of truth.” Today, 76 percent of Americans believe god created humans; half of these believe creation happened literally (clay, Adam’s rib and all that) as described in Genesis. Another poll indicates 40 percent of Americans believe the young earth theory espoused in The Genesis Flood. Virtually no one else in the Global North believes these things.

The Intelligent Design movement has its own body of “science,” some of which is impressively complex. This is the point. It is written to inspire the faithful, cajole the skeptical, bamboozle everyone. Intelligent Design “science” is emblematic of a broader characteristic of American fantasyland, according to Andersen–its hybridity. Fact and fiction are made to incorporate one another in an endless feedback loop reminiscent of The Matrix, so that no one can tell what’s real and what’s not. This was perhaps inevitable for a country that literally wrote such fictions as human rights into existence. Human rights were made real by real people’s collective recognition of them, and a damn good thing, too. They are great. I’m all for this kind of boundary-crossing creativity if you have the wisdom of Aristotle or at least the realism of Thomas Hobbes, but we are not Aristotle, or even Hobbes. We are us, so we also pour great, slopping buckets of error, cant, bigotry, schlock, and malice into our hybrid inventions. We created reality TV. “Professional” wrestling. And a reality TV-pro wrestling president, which should not surprise us. We created Disneyland and then suburbs made to look like Disneyland. Or was it the other way around? Who knows.

Open admission: I pretty much hate the French philosopher Jean Baudrillard, mostly because he wrote a ridiculous book claiming that a biggish war I happen to have fought in, the Persian Gulf War, did not actually take place. He says it was all done with mirrors and CNN. Whatever. If you dial Baudrillard back from a 14 to an 11, though, here’s one thing he most certainly got right (and which Andersen draws out): America’s mind-bending capacity to create fantasies and blend them with real life has reached disorienting proportions. Baudrillard calls it hyperreality. The Stanford Encyclopedia of Philosophy defines hyperreality as a world

in which entertainment, information, and communication technologies provide experiences more intense and involving than the scenes of banal everyday life, as well as the codes and models that structure everyday life. The realm of the hyperreal (e.g., media simulations of reality, Disneyland and amusement parks, malls and consumer fantasylands, TV sports, virtual reality games, social networking sites, and other excursions into ideal worlds) is more real than real, whereby the models, images, and codes of the hyperreal come to control thought and behavior.

Well, ain’t that America?–except I would definitely add megachurches to the list of Disneyficators that dominate our landscape. I usually don’t like avant-garde intellectual terms, but I think hyperreality is a good one. If it strikes you as a joke or an exaggeration, consider this: Baudrillard coined it two decades before the tech wonks decided to call the real-time blending of maps (or images) with textual information (and who knows what else) augmented reality.

But prominent Americans have been augmenting reality for a long time now. In a world where you can uncritically combine fact with fiction, message is all that matters, especially if you communicate for a living. It doesn’t matter whether your claims are true or false, or even where your terms come from. Do you know that famous speech by Ronald Reagan where he called the Soviet Union the evil empire? Reagan didn’t just cherry-pick one made-up term from Star Wars for that speech. It was a cornucopia of fantasies wrapped in a smorgasbord of fakery. Andersen recalls it:

Reagan delivered his “evil empire” speech in Orlando to the National Association of Evangelicals, an hour after he had been at Disney World. “I just watched a program–I don’t know just what to call it–a show, a pageant, . . . at one point in the movie Mark Twain, speaking of America, says, ‘We soared into the twentieth century on the wings of inventions and the winds of change.'” He’d seen Disney’s The American Adventure, featuring an animatronic Mark Twain saying things Mark Twain never said.

Americans loved that speech.

The fact that the USSR really was an empire and it really was evil kind of deflates any quibbles about truth and historicity in this case, right? I mean, Reagan was right in every way that counted. Maybe message really is all that matters.

But America’s hybridization of fact and fiction is not all about political speeches. Some of it matters. In the 1960s the LAPD created the country’s first SWAT team. They trained at a Universal Studios lot. After the TV show S.W.A.T. came out in the 1970s, police departments started copying what they saw on the show, according to investigative reporting by the New York Times. This hybridization led to the proliferation of SWAT units across the country and eventually to America having the most militarized police force on earth. It also connects to other hybridizations that led to real-life increases in life’s nastiness and brutishness here in America. Fictional apocalypses and sadistic crime dramas helped lead to real prepper movements and advocates of “guns everywhere” laws. Rather than having to stop playing army as kids, hundreds of thousands of adults now play it much better, with entirely-realistic-looking Airsoft and paintball guns, and loads of cool military accessories. The Second Amendment language about “well-regulated militia[s]” was reinterpreted to mean that U.S. citizens have the right to form armed bands to fight back against their own constitutional government, which the NRA described in 1995 as “jackbooted thugs.”

With their faith in exciting TV fabrications, deep government conspiracy theories, and nostalgic Daniel-Boone individualism, these movements evince another key feature of our fantasy-industrial complex as Andersen interprets it–the open borders of fantasyland. Belief in weird theory X is often linked to belief in weirder theory Y. If you believe, say, that you actually sup each week on the blood and body of a Bronze Age mystery cult god who is also your lord and savior, equally preposterous fictions will come easier too–maybe QAnon or power-line epidemiology. Indeed the polls indicate this kind of overlap, and they have for a long time. In a survey of people who listened to Orson Welles’s prankish broadcast of War of the Worlds in 1938, a clear majority of those who freaked out were also devout religious believers.

In a way, this is good news. Our cognitive bottom-feeders will tend to sort themselves and thus the nonsense they believe into one place–the bottom. It’s possible but pretty unlikely that your cardiologist will also be a Reiki master healer or that a federal judge will spend her nights feverishly connecting the dots of the next Pizza Gate conspiracy. This is not rocket science. Even in fantasy-besotted America, plain old education helps keep crazy down. The same Pew poll that showed 40 percent of Americans believe Jesus will come blazing down from the sky during their lifetimes indicated that people with no college were three times more likely to believe this than college grads.

The bad news is: that murky place where lunkheads, troglodytes and enthusiasts get together to cross-fertilize outrageously stupid ideas?–It is HUGE. It contains multitudes. Our basement may only make up the bottom quarter of our house, but it remains a very big house. We clearly lead the world in producing and propagating mad fantasies and then connecting the dots that link them together into stupendously false worldviews. Think Alex Jones here. It’s embarrassing.

Obviously, I like Andersen’s book. I think, in the tradition of Orwell, he faces up to a lot of unpleasant facts. But I think Andersen misses one key feature of America’s obsession with exciting, irrefutable beliefs in fiction. That is, these kinds of beliefs are massively empowering to the individuals who hold them. Do you know those memes of Sam Elliott where he’s looking at you with that wonderful, omniscient gaze and telling you you’ve got to be a special kind of stupid to believe in immigration statistics, or climate change science, or some other dogma that the elite establishment wants to force on you? Thanks to a convergence of technology and a widespread corruption of intellectual leadership in America, that attitude is now available to everyone. Each American can now be and feel smarter than anyone who dares to tell him the facts just aren’t on his side.

Why have I written a 5,000-word review of Andersen on this gloomy topic? Is it because I think real, concrete harm is being done by Americans’ obsession with connecting random dots into self-flattering fabricated pictures of a non-world? Sure. You can count up a certain number of innocent parties who will suffer or die because of our willful stupidity. That’s bad, of course, but I doubt the danger of our faith in nonsense is all that worrisome compared to real problems that we’re actually tackling in intellectually honest ways (for example, the link between prisons for profit and skyrocketing incarceration rates).

Ultimately, fantasyland is repulsive because it is harmful to human dignity. It perverts the one thing that sets us off from other big mammals, our ability to mentally represent complex things happening outside our heads–the real world. Our intoxicating ability to add our own thoughts to the ones impinging from the outside world is, I believe, turning toxic. And, worse, the toxin is a cheap one. It openly advertises its fraudulent character. But nonetheless, we prefer the delusion to honesty. Andersen quotes the alternative-history fantasy writer Philip K. Dick at length on this dreary condition, the outcome of our peculiar addiction to the habitual blending of fact and fiction:

The problem is a real one, not a mere intellectual game. Because today we live in a society in which spurious realities are manufactured by the media, by governments, by big corporations, by religious groups, political groups–and the electronic hardware exists by which to deliver these pseudo-worlds right into the heads of the reader, the viewer, the listener . . . .

I consider that the matter of defining what is real . . . is a serious topic, even a vital topic. And in there somewhere is the other topic, the definition of the authentic human. Because the bombardment of pseudo-realities begins to produce inauthentic humans very quickly, spurious humans–as fake as the data pressing at them from all sides. . . . Fake realities will create fake humans. Or, fake humans will generate fake realities and then sell them to other humans, turning them, eventually, into forgeries of themselves. . . . It is just a very large version of Disneyland.

This, I think, is the real horror of our passionate faith in nonsense. It’s nothing that picks my pocket or breaks my leg. Our belief in life-coaching, end-times, young earth, the prosperity gospel, speaking in tongues, commodity bubbles, crop circles, reptilian overlords, porn fantasy and so on probably doesn’t draw a lot of blood. But it is all so degrading–a pathological attempt to dodge our adult responsibilities. The philosopher Immanuel Kant defined enlightenment as intellectual emancipation, the conscious recognition that we have no thought supervisors, supernatural or otherwise. Kant thought it was a great and beautiful moment for mankind and that enlightenment would help us stand up straight to face the world anew. Our insistence instead on slouching toward a hell of fantasy and delusion is repugnant to this tradition. It is a willful return to what Kant called our “self-caused immaturity.” It is inhuman and inhumane.

Orwell, Steadfast Partisan of the Left

BY MATTHEW HERBERT

Sometime during the Cold War, right-leaning ideologues got the idea that George Orwell had switched sides shortly before his death in January 1950, and that his legacy was somehow on their side of the aisle.

An essay by the neo-conservative writer Norman Podhoretz in the January 1983 Harper’s is typical. In it, Podhoretz argues that Orwell underwent several “major political transformations,” and if you trace their arc, you see it bending clearly toward Reaganite neo-conservatism.

Well, as Orwell said of Charles Dickens, some writers are well worth stealing, and Orwell himself proved attractive to thieves. The right’s bid to appropriate his legacy is just that: attempted robbery. Orwell was a steadfast partisan of the left and remained so to the end of his life.

Why does (re)establishing this fact matter? I’ll get to that, because it really does matter, but first let’s consider the evidence for Orwell’s enduring loyalty to the left.

  1. Actions always speak louder than words. Orwell was shot in the throat by a fascist sniper while fighting with the Workers’ Party of Marxist Unification (POUM) in the Spanish Civil War in 1937. He temporarily lost his voice and nearly died. As he was recovering in a Spanish hospital, he wrote to one of his best friends that he could “at last really believe in Socialism, which [he] never did before.”
  2. Orwell made explicit commitments to the left’s political agenda, and he never reversed them. The leftist spirit was alive but only vaguely so in Orwell’s earliest writings, which addressed the injustices of colonialism and the structural nature of poverty in what was then the world’s biggest, richest empire. But in June 1938, he put his cards plainly on the table in his essay, “Why I Joined the Independent Labour Party.” The gathering threat of fascism, he said, was forcing passive leftists to adopt a concrete, organizing dimension for their sympathies, even though that meant they would have to make unwelcome political compromises with the establishment. “One has got to be actively Socialist,” he wrote, “not merely sympathetic to Socialism.”
  3. A thinker as forthright as Orwell would have publicly and explicitly withdrawn his support for the left had he privately abandoned it. He never did. Indeed, he used his dying breath to express his enduring loyalty to socialism. A representative of the United Automobile Workers had written Orwell sometime in mid-1949 asking if 1984, with its direct indictment of collectivism, had not signaled Orwell’s abandonment of the left. Weak, feverish, and unable even to walk from his hospital bed to the radiology lab for a needed X-ray, Orwell wrote back on the 16th of June, explaining clearly and forcefully, “My recent novel is NOT intended as an attack on Socialism or on the British Labour Party (of which I am a supporter) but as a show-up of the perversions to which a centralised economy is liable and which have already been partly realised in Communism and Fascism.” He would only write eight more letters in what remained of his short life, but that letter contained his last political statement; he remained a man of the left.
  4. Wait, what’s that about supporting the British Labor Party in 1949? Hadn’t Orwell joined the ILP in 1938? In the essay about that decision, Orwell said that despite his membership in the more ideological ILP, he hadn’t lost faith in the more mainstream Labour Party and that his “most earnest hope is that [they] will win a clear majority in the next General Election.” He knew where the winning votes would come from in a battle against the right. From the time he was a declared socialist to the end of his life, Orwell was a pragmatist who believed that the leftist movement would only advance its cause if it was part of a larger, viable coalition against the established monied interests. Politics, Orwell observed over and over, is always a choice between bad options and worse ones. His willingness to cooperate with non-socialist parties should not be interpreted as a rejection of his declared ideological loyalties. Winning is always ugly, and Orwell wanted to win.
  5. Podhoretz, in his 1983 essay laying claim to Orwell on behalf of the right, observes that Orwell was forever criticizing the left, with vigor. Of course he was. Orwell loved the left and did not want to see it commit suicide by bowing to rigid orthodoxies. He was always trying to keep the left honest and to make sense of his own experience as an apologist for a movement that could confound, embarrass and disappoint him in thousands of ways. On the other hand, Orwell’s career-long rejection of the right was as plain as the nose on his face.  From his earliest, unpublished writings about poverty and homelessness, Orwell was always against a state that was set up to steal the workers’ labor value and arrogate it to the one percent. (Orwell may have actually coined that phrase, by the way, in a diary entry in 1941.)

Indeed, despite Orwell’s frequent critiques of leftist foibles, there is nothing you can recover from his writing that teaches you how to be a better rightist. Orwell went to Spain in 1937 with an expressed desire to put a bullet into a real, existing fascist, and he never lost his antipathy toward the more abstract powers ranged behind the right–money worship, predatory corporations, religious authority, bought-off media, politicized courts, and of course, the great populist enabler of it all, Yahoo nationalism. All Orwell’s writings that conservatives might construe as rejections of leftism actually can, and should, be understood as instructions for how to be a better leftist.

An offhand remark by Orwell in a letter to the Partisan Review in 1944 is typical. He was tossing around the idea with his editors that Europe’s constitutional monarchies (in Britain, the Low Countries and Scandinavia) had done a better job resisting Nazism than Europe’s republics, possibly because time-worn royal pageantry stirred popular patriotic sentiments and gave them a harmless, domestic outlet. France, though, an exemplary republic that had killed its kings as any “correct” leftist movement would, had no repository for its patriotic feelings outside the state’s real power structures, and these largely strove for survival by adapting to fascism. If you tell this kind of thing to “the average left-winger,” Orwell noted, “he gets very angry, but only because he has not examined the nature of his own feelings toward Stalin.”

Today’s right-winger trying to put a neo-conservative construction on Orwell generally has an easy time cherry-picking items like this one. There was a shameful number of European and American socialists who stayed true to Stalin, and Orwell repeatedly called them out for this arch sin, along with many lesser ones. Gather a few of these indictments together and, voila, you have an Orwell who was struggling to break free of his leftist dogmas and who would have grown in time to love Margaret Thatcher.

Bullshit. Orwell’s self-criticism never rose to the level of an embrace of the right, nor did it even point in that direction. Indeed, if anything systematic can be recovered from Orwell’s writings as a whole–and he seems to have hated systems–it is a multi-layered critique of the things that threatened to sink socialism.

The body of Orwell’s work weaves together three levels on which he constantly battled against leftist pieties–as an artist, as a political operative, and as a cultural conservative. Orwell believed that declaring a party loyalty was artistic suicide for a writer, whose job was to tell the truth. Writing requires complete freedom of expression, and party membership requires hamfisted modifications of this freedom. He knew he was maiming himself as a writer when he joined the ILP, but he joined anyway, because, he thought, the times demanded political responsibility even of artists. “Group loyalties are necessary,” he wrote in ‘Writers and Leviathan,’ “and yet they are poisonous to literature, so long as literature is the product of individuals.”

(Interestingly, Orwell was remarkably charitable to writers who stayed true to their art and kept out of politics. On his way to Barcelona in 1937, Orwell visited Henry Miller in Paris and praised him frankly and profusely for writing Tropic of Cancer, a book widely censored and generally seen at the time as a scandal of sacrilege and hedonism. Orwell was only slightly perplexed, possibly even charmed, by Miller’s naive indifference to what was happening in Spain. Miller exhorted Orwell to stay in Paris and drink, asking him why he would go down and throw his life away.)

As a political operative–or, by extension, as an ordinary voter–Orwell thought that backing certain desirable leftist causes would inevitably bring to light other, unarticulated commitments to less desirable, even repugnant outcomes. If you want the emancipation of the working class,  for example, you are going to need more, not less, industrialization, which is hateful on aesthetic and environmental grounds. Furthermore, there was no resolving such basic inconsistencies for Orwell: you just had to live with them. Political responsibility, he wrote, demands that we “recognise that a willingness to do certain distasteful but necessary things does not carry with it an obligation to swallow the beliefs that usually go with them.”

This is, of course, a liability of any political orthodoxy, not just the leftist one. But when Orwell indicated the best way out of this thicket, he was clearly speaking from and for the left. The first thing progressives (yes, he used that term too) must do is reject two assumptions forced on them by the established right. One is that the left is in search of a laughably unachievable utopia, and the other is that any political choice is a moralistic one “between good and evil, and that if a thing is necessary it is also right.” Both these assumptions spring from a common myth, one whose popular acceptance Orwell thought the right had enjoyed for free for a long time.

This myth is the quasi-religious belief that man is fallen and essentially incorrigible. There’s simply no use trying to improve his lot. For centuries, the right (and its progenitors) has placidly asserted the dogma that humans are either candidates for heaven or hell, with no ground in between. Right in front of our noses, though, Orwell was constantly observing signs that humans were capable of making incremental progress, through politics that were often tortured, dishonest, even corrupt, but oriented nonetheless toward the reduction of human misery. In a 1943 book review, Orwell notes that the London slums of Dickens’s day teemed with poor people so deprived of decent conditions that it was objectively true to say they led subhuman lives. They were so far outside the pale, they could not even orient their existence on any kind of program to help civilize them. Sitting in his cold, dark flat during the Blitz, Orwell measured the progress achieved since the 1870s:

Gone are the days when a single room used to be inhabited by four families, one in each corner, and when incest and infanticide were almost taken for granted. Above all, gone are the days when it seemed natural to write off a whole stratum of the population as irredeemable savages.

The conservative belief that we cannot and must not take even the first step toward heaven as long as we are earth-bound, Orwell said, “belonged to the stone age.” Clearly there was no need to invoke a utopia if your real political aim was merely to reduce the worst, most tractable injustices occurring right here, right now. “Otherworldliness,” Orwell writes, “is the best alibi a rich man can have” for doing and sacrificing nothing to reduce the suffering of the poor.

The metaphysical pessimism behind the rich man’s alibi, Orwell believed, led directly to defeatism. And this defeatism made it an urgent matter for the left to reject the straw-man accusation that they were trying to build a utopia of unachievable dimensions. “The real answer,” he wrote, in a 1943 ‘As I Please’ column, “is to dissociate Socialism from Utopianism.” He would write this over and over again, in other words, in other places, until he died.

A right-winger looking to steal Orwell’s legacy can perhaps find the most aid and comfort in the third level of Orwell’s critique of his leftist fellow-travelers, his scorn for their bad taste and what we would today call the performative aspect of their politics. Despite faithfully bearing the leftist banner of liberté, égalité, fraternité, Orwell retained many of the biases and preferences of a garden-variety cultural conservative. He obviously believed it was important not to hide these things, but to wear them on his sleeve.

Although Orwell was horrified by war and believed that socialism would help pave the way to less of it, he was more horrified by pacifists who not only held to Chamberlain’s line of appeasement in 1939 but touted staying out of all wars on philosophical grounds. Orwell called this one-eyed pacifism. He didn’t stop there, though. He openly despised the posturing of the pacifists and other cultural progressives of his day, calling them “juice-drinking sandal-wearers” and “creeping Jesus” types. Emotionally, Orwell was closer to Archie Bunker on some things than he was to a by-the-book leftist.

Orwell was also free with some epithets that he might think twice about today. In letters to friends, he called homosexuals fags, often in connection with boys’ public school life. (He mentions in his 1948 essay “Such, Such Were the Joys” that the younger boys at school mooned over and sometimes had crushes on the older boys.) Although he took pains in one “As I Please” column in 1943 to observe that black American G.I.s in London were more polite than white ones, he used the N word without compunction. (It should also be pointed out, though, that he used the same word with political acumen when unmasking the racist hypocrisies of liberal democracies, as in his 1939 essay “Not Counting Niggers.”)

He also viewed the racial situation in Burma, where he served as a colonial policeman, stereoscopically. With one eye he saw the Burmese as “little beasts,” but with both eyes open, he was “all for the Burmese and all against their oppressors, the British.” Again, even while propagandizing for an oppressed people, Orwell believed it important to wear his reactionary racism in full view. He would always believe humans to be a tangle of contradictions, and he did not wish to have his own hidden.

In many ways, Orwell simply deplored the bad material taste of his time. He pined, as any conservative does, for the good old days, when beer was better and fishing streams cleaner. But he clearly reserved a special contempt for the aesthetic depths to which collectivists would plunge out of loyalty to their politics. The low-level, everyday miseries of Londoners in 1984 represent not just a shudder against ugliness and poor taste in general, but a particular warning against accepting material shabbiness as a condition of political progress. The opening chapter of 1984, set in Winston Smith’s apartment building, Victory Mansions, uses the aromas of chronic poverty to animate this idea. “The hallway smelt of boiled cabbage and old rag mats.” Smith’s Victory Gin “gave off a sickly, oily smell . . . ” In a later chapter, Smith visits a proletarian pub, where the ale smells and tastes sour. (See this wonderful 2016 article from the Guardian on “George Orwell and the stench of socialism” for further discussion of this theme.)

If your socialist leader promises material progress–which they all do as a matter of course–they damn well better deliver. From East Germany to North Korea, collectivist dictators have been forced to make whole careers of denying the material poverty of their subjects. Had Orwell lived to see the Kitchen Debate of 1959 between Nixon and Khrushchev, he would have called it political schlock and free propaganda for American corporations, but I think he would have also called it an important victory for liberal democracy. It showed what working people ought to expect as a return on their labor value.

Orwell lived a great deal of his life near the functional poverty line, and his tastes were never sumptuous–how could they have been? But he did believe that the ordinary person’s attraction to nice things was a politically useful force. The realistic desire for a “nice cup of tea,” a good glass of beer, or a decent dinner out with one’s partner was a handy yardstick for measuring the success or failure of a government. The whole undertone of Orwell’s 1939 novel Coming Up for Air is how unnecessarily hard it was for an ordinary young person to fulfill even the shabbiest of proletarian desires in the world’s richest empire.

Does all this matter? Does it matter that the right cannot justifiably lay claim to Orwell’s legacy? I believe it does. Because I believe it is precisely Orwell’s stereoscopic vision of socialism that makes him true to and valuable for the left. “In a prosperous country,” he wrote in 1939, “left-wing politics are always partly humbug.” Progressives made their livings, Orwell continued, self-righteously “demanding something they don’t genuinely want”–a measurable reduction in the elite’s standard of living. That would make waves. Safer to stay in opposition.

Orwell spent his energies, though, in pursuit of taking and holding political power for the left. Real political responsibility would come at a cost, as he knew, and it would court contradictions, compromises, even corruption. But that was also true of the political processes that lifted the lives of 1870s slum dwellers out of subhuman misery. Yes, socialism, as Orwell understood it, is partly humbug, but corporate capitalism is wholly and completely humbug. You cannot just cheer on the rich and wait for them to voluntarily return you some dividends on your labor value. It will never happen. This is not to say, though–and Orwell never would–that one system is right and the other wrong. But the system in which the worker makes his claim to a bare minimum of security is clearly a less bad system than one where the rich reserve the power to ignore the poor. Good politics is always choosing the less bad option over the worse one. This is the leftist cause, and Orwell is one of its leading champions.

Good Cheap Books

BY MATTHEW HERBERT

One of the things I love most about my e-reader is the access it gives me to inexpensive books. Some of the best bargains out there are great books or collections of great authors that you can get for mere pennies.

Now that many of us have more time for reading, you might consider savoring–or in some cases, tackling–some of these:

Germinal by Emile Zola. If you can hack the French, it’s free on Amazon. We anglophones can read it for 99 cents. This is a great book on its own, of course, but it’s particularly relevant right now while the whole nation is basically on a de facto general strike. When you consider what lengths Zola’s miners had to go to merely to organize a strike in one isolated mining district in Germinal, you start to think that our working class, with a general strike more or less materializing out of nowhere, might wake up and demand some nice things too before going back to work. Probably not, though, because, . . .

. . . our ambient levels of passivity and conformism run pretty high. On this theme, read Sinclair Lewis’s 1922 classic Babbitt. It gives the American middle class its first honest look in the mirror. While our economic life in the Roaring Twenties told us we were masters of our fate, sitting on top of the world, our interior lives said we were chumps, self-righteous fools, and slaves to convention. Babbitt–the character and the novel–asks what lies beneath the masks we wear.

Speaking of broad human themes, you cannot miss Cervantes’s Don Quixote. It is possibly the first novel ever written and in any case a great literary wellspring of European enlightenment. From cover to cover it speaks unsparingly but with great comic warmth of a world no longer enchanted by religion. Milan Kundera reflects on it, “When Don Quixote went out into the world, that world turned into a mystery before his eyes. That is the legacy of the first European novel to the entire subsequent history of the novel. The novel teaches us to comprehend the world as a question. There is wisdom and tolerance in that attitude.” For 99 cents you can join the communion of the faithful who ask themselves this question over and over.

Don’t let the imposing-sounding title of Epictetus’s Enchiridion scare you off. It’s a highly accessible introduction to Stoicism, which basically says that unplanned, unjust, and chaotic as the world may be, you should still do your best to live an orderly, dignified life. When I read the plain moral brilliance of Epictetus, I have to wonder how we Americans–many of us cheerful, decent people of solid good sense–let ourselves be saddled with the farcical beastliness of Christianity and other Bronze Age Levantine sky god cults. Read Epictetus alongside your chosen “sacred” text and ask yourself which one really and truly commands your conscience.

If you enjoy unpacking surprise gifts, try any of the Stoic Six Pack series on Amazon, especially volume two, which includes Seneca’s essential On the Shortness of Life (De Brevitate Vitae). They cost 99 cents. Had you been a curious student living in ancient times somewhere on the Mediterranean rim, you would have risked your life traveling through war zones and plague hotspots to go study these masters.

For years I put off reading Proust’s In Search of Lost Time, because (a) it’s seven books long, and (b) it struck me in my youth as too fancy pants for my tastes, which ran more toward Dostoevsky’s direct line of questioning God Himself. My mind was changed, though, when I read that it was a stylistic inspiration for Shelby Foote’s massive, novelistic history of the Civil War (also a wonderful read but not one that meets my price criteria for this post). You will be richly rewarded even if you only make it through Swann’s Way, the first book. In it, Proust reveals how hard it is to be human. We think of our lives as more intelligible in retrospect than they are as they happen in real time, but Proust gives us pause, page after page, asking if that’s really true. The thing we think we know best–our self–is a collection of rounded-off impressions and outright illusions that require exertion to be held together. When critics talk about the modernist movement as one that unmasked the incoherence of the individual self–the notion that not only is there no essential me in the present, but that I cannot even construct one from the past–they invariably have Proust’s masterpiece first in mind. $1.99.

I have several good collections that range in price from free to $1.99. Used to be these beasts were hard to navigate because each book was marked as a “chapter,” so all you could do was move from one book to the start of another. Some were even worse (like a collection of Bertrand Russell I picked up and then abandoned in frustration). They contained thousands of pages, and sometimes all you could do was plow through from the start of the collection toward your desired book. These days, though, many large digital collections are organized better, with internal chapter markings. I am currently reading my way through the complete novels of H.G. Wells, and it is very nicely organized so that you can access individual sections or even chapters in each novel.

In 2015 or so–I can’t quite remember–I decided to read all of Charles Dickens. I was inspired by Orwell’s justly famous essay about Dickens, which convinced me that I would rather be a failed literary critic than a successful anything else, even if I came across as pretentious or ridiculous. Plus I didn’t have to quit my day job, which was nice. Anyway, picking up all of Dickens was the easy part. You can get his complete works for a buck.

It’s pretty much the same for Mark Twain, Jane Austen, Balzac, Nietzsche and Goethe. The E.M. Forster collection is good but lacks A Passage to India. I’m sure there are many other great collections out there; these are the ones I’ve spent my time on. Oh, yes, one more I can’t neglect. George Eliot’s Middlemarch is often called the greatest novel in the English language. You can pick it up in Eliot’s collected works and judge for yourself.

Review of “Joe Gould’s Teeth” by Jill Lepore

BY MATTHEW HERBERT

Have you heard of Joe Gould, the mad, drunken, chain-smoking bohemian who called himself the greatest historian in the world–who flunked out of Harvard twice, communicated with seagulls, and wrote the world’s longest unpublished book, an oral history of everyone, but then again possibly didn’t?

Neither had I, until I read Jill Lepore’s completely absorbing 2016 biography of the man himself, Joe Gould’s Teeth.

Gould came from a family with a queer streak, Lepore tells us.

The Goulds had come to New England in the 1630s, and they’d been strange for as long as anyone could remember. [Joe] was born in Massachusetts in 1889, . . . . [His father,] Dr. Gould was known to fly into rages, and so was Joseph. There was something terribly wrong with the boy. In his bedroom, he wrote all over the walls and all over the floor. His sister, Hilda, found him so embarrassing, she pretended he didn’t exist. He kept seagulls as pets, or at least he said he had, and that he spoke their language: he would flap his wings, and skip, and caw. He did all his life. That’s how he got the nickname “Professor Sea Gull.”

There is plenty to start with here if you are trying to unravel the mystery of Joe Gould. The guy had weirdness in buckets. He was probably misunderstood by everyone around him, from his childhood on. Lepore ventures that he was autistic before it was a diagnosable condition. But there must have been something especially formative about having his sister deny his existence–in effect, trying to erase him.

Later, when Gould would attract the serious attention of literary critics, it was because of the revolutionary scope of his work. He wanted to democratize the recounting of history so that everyone had a voice. No one would be erased. He wrote:

What we used to think was history–kings and queens, treaties, conventions, big battles, beheadings, Caesar, Napoleon, Pontius Pilate, Columbus, William Jennings Bryan–is only formal history and largely false. I’ll put down the informal history of the shirt-sleeved multitude–what they had to say about their jobs, love affairs, vittles, sprees, scrapes, and sorrows–or I’ll perish in the attempt.

To a friend, he summed it all up rather beautifully, saying, “I am trying to present lyrical episodes of everyday life. I would like to widen the sphere of history as Walt Whitman did that of poetry.” In the end, though, he did perish in the attempt. It was all too much for him.

Gould wrote the oral history in hundreds, maybe thousands, of dime-store composition books, over the course of decades. But he could never find them when it came time to publish. He thought he left some on a chicken farm. Or he would have to re-write them. Or he had sent them to correspondents who kept them in trunks. Critics, and even friends, eventually came to suspect there was no Oral History. Could it all have been a dream? Maybe.

“He was forever,” as Lepore puts it, “falling down, disintegrating, descending.” He once fell and cracked his skull on a curb. He woke up with his head bleeding and he recited parts of his history to a policeman. Writing held him together, but never for long, Lepore tells us. Gould just couldn’t get along with people, and in the end society could not accommodate him. He harassed women, including Harlem’s most famous sculptress. He turned on friends and generous benefactors. He died in America’s largest mental institution, unmourned, unnoticed by anyone on the outside. (In an early stage of his “treatment,” his teeth were pulled. Hence the book title. It was just one of those things doctors did in those days to render the insane more pliant. Lepore hypothesizes that Gould was probably also lobotomized near the end. This was another thing that doctors just did in those days, and there is a record of a man of Gould’s age and description undergoing the procedure.)

Lepore is an irresistible writer and magnificent historian. She tells the story of Gould in a way that sweeps you along with it. But it is the complexity of her subject that compels us. Beneath the greatness of Lepore’s writing is a paradox about Gould himself that yawns wide and takes us in. One of the ways Gould kept his internal balance–early on, when he still could–was to remind himself that individuals are unknowable at their core:

The fallacy of dividing people into sane and insane lies in the assumption that we really do touch other lives. Hence I would judge the sanest man to be him who most firmly realizes the tragic isolation of humanity and pursues his essential purposes calmly.

Lepore’s book is an adventure story in a way.  She set out on it, she says, to try to find the products of Gould’s “essential purposes,” the legendary stacks of dime-store composition books that contained the Oral History. But she didn’t find them. Instead, she found this: the man who gave his whole life to writing the history of the shirt-sleeved multitudes didn’t even believe in it. He couldn’t have. He thought people were unknowable at their core. You might get at the epiphenomena of their lives, but you could never access the individuals themselves. How could you write a history of all of them if you couldn’t even know one of them?

But he kept on, as we all must. “My impulse to express life in terms of my own observation and reflection is so strong,” Gould once wrote, “that I would continue to write, if I were the sole survivor of the human race, and believed that my material would be seen by no other eyes than mine.” This is an expression of courage worthy of Joseph Conrad. When you find that one life-sustaining thing you would do even with no one to witness it, you have arrived. It doesn’t matter that you are possibly as insane as Joe Gould. Because who’s to say what insanity is. Keep calm and pursue your essential purposes.

It’s the End of the World as We Know It

BY MATTHEW HERBERT

We humans are forever predicting the end of the world. My own guess is roughly five billion years from now, when the sun will explode and burn out. This eventuality probably wouldn’t come up much as a topic of conversation, but my seven-year-old routinely asks about it.

Like me, he is made vaguely sad by the idea that everything everywhere has an expiration date, even if that date is unimaginably far in the future. Not only will we not be around to worry about the demise of the solar system, but even if there are any descendants of Homo sapiens alive to contemplate the last sunset, they will be as different from us as we are from bacteria, their faculties for grasping and representing reality utterly alien to our modes of cognition and perception.

Of course what is astronomically more likely is that all the earth’s life forms will have run their course eons before our tiny corner of the Milky Way unwinds according to the laws of thermodynamics.

But there’s something definitive about the end of the world that brings it within the scope of our imagination nonetheless. It doesn’t matter how far in the indeterminate future it may lie; it still looms as a finality. We can’t let it go.

Why not, though? I mean, five billion years on a human scale is eternity. There’s a strong case for just calling the world never-ending, especially in conversations with seven-year-olds.

But we don’t.

The prospect of the world’s end doesn’t just haunt us vaguely, fluttering in the backs of our minds, Ian McEwan writes; it grips us and actively shapes far too much of our public lives:

Thirty years ago, we might have been able to convince ourselves that contemporary religious apocalyptic thought was a harmless remnant of a more credulous, superstitious, pre-scientific age, now safely behind us. But today, prophecy belief, particularly within the Christian and Islamic traditions, is a force in our contemporary history, a medieval engine driving our modern moral, geopolitical, and military concerns. The various jealous sky gods–and they are certainly not one and the same god–who in the past directly addressed Abraham, Paul, or Mohammed, among others, now indirectly address us through the daily television news. These different gods have wound themselves around our politics and our political differences.

Our secular and scientific culture has not replaced or even challenged these mutually incompatible, supernatural thought systems. Scientific method, skepticism, or rationality in general, has yet to find an overarching narrative of sufficient power, simplicity, and wide appeal to compete with the old stories that give meaning to people’s lives.

This passage is from McEwan’s 2007 essay, “End of the World Blues,” one of the best essays of the 2000s in my opinion. In it, McEwan takes a cool, dissecting look at our tendency to create and believe in stories about the way(s) we think the world will end. Most of these stories–lurid, violent, and deeply unintelligent–are clothed as religious prophecies. They tend to involve plagues, fiery demons, scarlet whores, sometimes mass suicides, almost always a culling of the unrighteous.

The most distressing thing about apocalypse stories, McEwan writes, is not (just) their power to make people believe them, but  their power to make people wish for them to come true. It was not just Hitler, gun in his hand, catatonic in his bunker, who cursed the world as worthy of extinction once it had shown itself undeserving of his gift. It’s a  thought that crosses many people’s minds. Christopher Hitchens called it “the wretched death wish that lurks horribly beneath all subservience to faith.”

Even at the core of the apparently consoling belief that life is a mere vale of tears and that its tribulations, too, shall pass, lies a fetid and dangerous corruption of the human spirit. Anyone who compensates for the hardships of life by contemplating the pulling down of the earthly scenery and the unmasking of the whole world as fraudulent or second rate is vulnerable, perhaps even prone, to an all-encompassing death wish. What are we to make of the 907 followers of Jim Jones killed by cyanide poisoning in 1978, who gave the poison first to children, then drank it themselves? They had arrived at the end of the world; they were pulling down its scenery to expose it as a fake. If you accept the article of faith that this life is not the “real” one, take care; your consolation differs only in degree, not kind, from the ghastly nihilism of the Jonestowners.

It is natural to understand our lives as narratives, with beginnings, middles and ends. But the story’s subject is so inconsequential against the backdrop of all of history, the telling so short! Seen sub specie aeternitatis, each of us is a mere speck of consciousness, animated by accident and gone again in a microsecond. We are, as Kurt Vonnegut put it in Deadeye Dick, “undifferentiated wisps of nothing.”

“What could grant us more meaning against the abyss of time,” McEwan proposes, “than to identify our own personal demise with the purifying annihilation of all that is.” This is a powerful alternative to accepting our status as candles in the wind. Longing for the apocalypse, McEwan is saying, is simply narcissism amped up to the max: If I have to check out, so does everyone and everything else. And merely believing in the apocalypse, as more than half of all Americans do, is the prelude to this totalitarian fantasy.

While we may think we are past the point where another Jim Jones could arise to command the imaginations of a group of benighted, prophecy-obsessed zealots, we are not. The apocalyptic personality is still alive and even walks among our elites. Retired Army General William “Jerry” Boykin, who once commanded Delta Force and the Army Special Operations Command, famously identified the United States’ enemy in the War on Terrorism in 2003 as “a guy named Satan.” Boykin also boasted that his pursuit of a Somali warlord in 1993 was fueled by the knowledge that “my God was bigger than his. I knew that my God was a real God, and his was an idol.”

As the Special Operations Commander, Boykin sought to host a group of Baptist pastors at a prayer meeting followed by live-fire demonstrations of urban warfare. The holy shoot-em-up was meant to inspire the invited Christian shepherds to show more “guts” in the defense of the faith.

Today Boykin teaches at a private college in Virginia and leads a think tank identified by the Southern Poverty Law Center as a hate group for its activism against the LGBTQ community. He believes the United States has a mission from God to defend Christendom, and in 2018 he said that the election of Donald Trump as president bore “God’s imprint.” Boykin’s professional success raises a serious question about the enduring power of religious apocalyptic prophecies. If Boykin had to give an earnest account of his faith to his political leashholders (when he was a general), it would clearly come across to that polite and educated class as slightly bonkers. How, then, does someone like Boykin rise to the position he did? Nursing a rapturous death wish and a longing for spiritual warfare is no disqualifier for high official success, it seems, as long as such mental disturbances bear the imprint of sacred scripture.

When Boykin was tasked in 1993 to advise the Justice Department on how to remove the Branch Davidians from their compound, he would have confronted in his opponent across the Waco plain a kindred spirit–a fellow scripture-quoting, God-and-guns Christian demonologist who saw the world as a Manichean battlefield.  All Americans should be disquieted by the fact that Boykin was closer in worldview to the armed, dangerous, and deranged David Koresh than he was to most of his fellow Army generals. His type is more likely to bring on the end of the world than to prevent it.

A second essay that, for me, helps define the distinct unease with humanity’s destiny that took shape in the 2000s is Bill Joy’s dystopian “Why the Future Doesn’t Need Us.” It serves as a reminder that it is not enough for enlightened societies simply to repudiate the lunatic fantasies of religion that titillated the minds of Jim Jones, David Koresh, Jerry Boykin, and so forth. We must also contend with the societal changes that will be wrought by our secular commitment to knowledge, science and reason.

The foundation of a rational society consists in what the philosopher Immanuel Kant called emancipation. Emancipation is the idea that humans are essentially alone, unaided by supernatural beings. We have only our own, fallible minds with which to try to understand the world and to order our relations with one another.

The American founders believed strongly in emancipation. Many of them were deists, which meant they believed that although God had set the universe in motion, he no longer supervised or intervened in his creation. So it came naturally to the founders to think of themselves as not being under the discipline of a heavenly parent. Many of England’s scientists in the 18th century had come to Philadelphia, in particular, to escape the oppressive “parenting” of the church back home and to follow scientific discovery wherever it led. It was a great leap forward for humankind.

The thing about emancipation, though, is that it does not guarantee that free-thinking humans will choose wisely or act in a way that shapes their societies for the best. All it says is that we are unburdened by the dead hand of the past. Our future is yet to be created.

In 1987 Bill Joy, who would go on to invent much of the technological architecture of the internet, attended a conference at which luminaries of computer science made persuasive and, to him, unsettling arguments for the power of artificial intelligence to augment and even replace human cognition. It was a disturbing, formative moment for Joy. It crystallized a dilemma that he thought was rapidly taking shape for the whole of humankind. Our vaunted intelligence and talent for automation were setting in motion a new kind of creation, and it was not clear at all to Joy that humans would have a place in it.

In his essay he proposes:

First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.

Even if you disagree with Joy’s postulation as strictly stated, it is futile to deny the progress we’ve made since 2000 in what he’s getting at–having our work done for us by organized systems of increasingly intelligent machines. Even if we never reach the “utopia” of not doing any of our own work at all, we will, it seems, approach that limit asymptotically, and the difference between the real world and machine utopia will become practically insignificant.

Which could mean this, according to Joy:

If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions.

Again, the trends of our knowledge-driven society indicate that Joy is describing a highly plausible future, not a science fiction scenario. Already, algorithms, not doctors, identify which strains of seasonal flu should be immunized against each year. Search engines, not lawyers, collate the case law necessary for constructing legal briefs and going to trial. On German roads, speed trap cameras detect your speed, scan your license plate, and use networked databases to generate a citation and mail it to you. And don’t even start about Alexa locking and unlocking your doors, adjusting your thermostat, and playing lullabies for your kids on cue. Our lives today are filled with anecdotal evidence that reliance on technology is rendering our human grasp of the world increasingly obsolete.

But wait a minute. All this technology would be utterly inert and meaningless without a pre-established connection to human activity, right? German officials had to set up the system for enforcing speed limits: the technology is just the spiffy means for implementing it. The internet, to take another example, is as powerful as it is because it was designed to serve human purposes. Its proper functioning still requires the imaginative work of millions of computer scientists; its power is shaped and harnessed by millions of knowledge managers; its downstream systems require the oversight and active intervention of a phalanx of help desk workers and network engineers.

Fine, point taken. Let’s say humans will always have to man the controls of technology, no matter how “intelligent” machines become. In Joy’s view, though, this more promising-looking scenario still doesn’t get us out of the woods. It’s the other horn of the dilemma about our ultimate destiny:

On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite—just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone’s physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes “treatment” to cure his “problem.” Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or make them “sublimate” their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they will most certainly not be free. They will have been reduced to the status of domestic animals.

If anything, Joy seems to have been even more prescient about this set of trends. There are clearly still human power centers managing technology, and they are just as clearly pursuing the broad purposes Joy indicates they would. To take just one example, it is abundantly evident that a pro-Trump campaign meme in 2016 would have been designed by algorithm to micro-target the aimless poor and attract their support for policies meant to speed up their extinction (such as “guns everywhere” laws, the repeal of the ACA, and the defunding of public schools). If Trump’s supporters felt increasingly voiceless in 2016, it was not for lack of willing spokesmen. It was more likely because technology-enabled chronic underemployment had drained their lives of any purpose that might be given a voice.

This anomie is coming for us all, by the way, not just the (former) laboring class. The growing trend of “bullshit jobs,” as described by David Graeber in his book of the same name, is a disorienting leading indicator of the near future of office work. Increasingly, knowledge workers will have to wring their paychecks from a fast-shrinking set of whatever meaningful tasks automation leaves behind. We will mostly be left, though, with what Graeber calls “the useless jobs that no one wants to talk about.”

Humans are good at struggling. What we are not good at is feeling useless. The working poor who used to make up the middle class are now confronted by a future whose contours are literally unimaginable to them. They cannot place themselves in its landscape. Every activity of life that used to absorb human energy and endow it with purpose is increasingly under the orchestration of complex, opaque systems created by elites and implemented through layers of specialized technology. Farmers, to take one example, are killing themselves in despair of this system. They cannot compete with agribusinesses scaled for international markets and underwritten by equity instruments so complex they are unintelligible to virtually everyone but their creators and which are traded by artificial intelligence agents at machine speed, around the clock.

This world that never shuts off and never stops innovating was supposed to bring prosperity and, with it, human flourishing. To a marvelous extent it has. It would be redundant to review the main benefits that technological advances have brought to human life.

But the thing about technological advances is they just keep extending themselves, and as they create ever more complex systems, it becomes harder to anticipate whether they will help or harm us in the long run.

For Joy, the advent of genetics, nanotechnology, and robotic sciences at the turn of the century was a sea change in terms of risk. It turned scientific innovation into a non-linear phenomenon. He writes:

What was different in the 20th century? Certainly, the technologies underlying the weapons of mass destruction (WMD)—nuclear, biological, and chemical (NBC)—were powerful, and the weapons an enormous threat. But building nuclear weapons required, at least for a time, access to both rare—indeed, effectively unavailable—raw materials and highly protected information; biological and chemical weapons programs also tended to require large-scale activities.

The 21st-century technologies—genetics, nanotechnology, and robotics (GNR)—are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them.

This is the ecology within which technological threats to humanity’s future will evolve. Armed with massively powerful computers that churn terabytes of data derived from exquisitely accurate genetic maps and then hand it to robots as small as human cells (molecular-level “assemblers”) to go out into the biosphere and do things with, humans increasingly have the capacity to redesign the world. “The replicating and evolving processes that have been confined to the natural world are about to become realms of human endeavor,” Joy writes.

What might this lead to? Well, hundreds of nightmare scenarios that we can imagine, and an indefinite number that we can’t. Joy quotes from the physicist Eric Drexler, author of Unbounding the Future: The Nanotechnology Revolution: “‘Plants’ with ‘leaves’ no more efficient than today’s solar cells could out-compete real plants, crowding the biosphere with an inedible foliage. Tough omnivorous ‘bacteria’ could out-compete real bacteria: They could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, small, and rapidly spreading to stop—at least if we make no preparation.”

In other words, just like our relatively dumb personal computers, GNR technology will do exactly what we tell it to do, regardless of the depth of our ignorance of the potential consequences. And then it will do its own thing, because it will re-design itself. Until the end of the world, amen.

So pick your poison, as offered up by McEwan or Joy. It may be that we are too stupid to think seriously about our ultimate destiny, or it may be that we are too smart to settle for a future that is safe and humanly meaningful. Or, as seems most dismally likely, there is room in our world for both types.

Review of “The Plague” by Albert Camus

BY MATTHEW HERBERT

Albert Camus’s 1947 novel The Plague is almost always read as an allegory. It is said to be about the spread of Nazism among the French during World War Two. For Christopher Hitchens, it is a warning about the underlying malice of religion. The desire to burn heretics only goes dormant under the civilizing forces of science, politics and common sense, Hitchens believed. The Plague shows us that tyranny can always break out anew under the right conditions.

But today it is instructive to read Camus’s novel as simply about what it says it is about–an epidemic. We need no deeper symbols to give it meaning.

The focus of the story is on the progression of people’s responses to the sudden onset of a lethal, contagious disease. One day in nineteen forty-something, the denizens of the Francophile city of Oran, Algeria, were going about their lives, “with blind faith in the immediate future,” as Camus puts it. With unthinking certainty, they expected every day to be followed by another one, differing in no important respect from the last. Love, ambition, work–everything that requires the positing of a future for its fulfillment–unfolds in glorious normalcy.

Then the rats start to die. First in ones or twos, soon after in large groups. Building supervisors and trash haulers have to gather them up and carry them away. People step on them unawares, feeling something soft underfoot, then kicking them away in disgust.

Soon, people start dying too. Two Oran doctors evaluate the evidence and hypothesize that a plague is underway. Their first reaction when they speak the word to themselves is to anticipate what will happen next–a large-scale official denial of the threat even as it unfolds before everyone’s eyes. “You know,” one of the doctors says, “what they’re going to tell us? That it vanished from temperate countries long ago.”

It’s funny to think that we European-Americans believe this kind of thing too, but we do. We lived in such close quarters with our animals for so long that we contracted a whole range of ravaging diseases, then became immune to many of them, then conquered the New World through a global campaign of germ warfare we didn’t even know we were waging. So it goes. Millions died.

Late in The Plague, as Oran is dying, the protagonist, Dr. Rieux, reflects on how little human agency matters once a brutal, unthinking pandemic is unleashed. Rieux has been working 20-hour days, and he is beginning to realize he will eventually lose the fight against the plague’s exponential spread. Amidst the stench of the dead and dying people of Oran, he achieves a kind of clarity:

Had he been less tired, his senses more alert, that all-pervading odor of death might have made him sentimental. But when a man has had only four hours’ sleep, he isn’t sentimental. He sees things as they are; that is to say, he sees them in the garish light of justice–hideous, witless justice.

I have read all of Camus’s books, and I am confident that this is a statement of record: it is Camus speaking directly to the reader. Camus believes each person lives alone beneath a “vast, indifferent sky,” and must confront an ultimate absurdity–that life, the one thing we humans are encoded and conditioned to seek with all our energy, is precisely the thing the universe will deny us. We are guaranteed not to get it. Some justice, right? You can see why Camus calls it hideous and witless.

The plague comes for us all.

But this is not the whole of the human situation for Camus. Faced with desperate absurdity, we invent things. One of these is society. We create all kinds of groups whose overarching purposes endow our individual lives with meaning. We subordinate our selfish desires to higher ends. They give us a reason, as Marcus Aurelius put it long before Camus, to rise each morning and do the work of a human. Having a society is what enables us to be fully human.

But the thing about society is that it does not come for free. It is not just there, like the elements of the periodic table. We create it, and we are responsible for sustaining it. And this is actually what The Plague is about, whether you take the plotline straight or as an allegory–it is about moral responsibility.

About one-third of the way through The Plague, the people of Oran start to understand that the epidemic ravaging their home will soon re-shape their lives. With the shit getting real and minds suddenly focused, a prominent priest decides to give a straight-talk sermon. It is, ahem, a come-to-Jesus moment:

If today the plague is in your midst, that is because the hour has struck for taking thought. The just man need have no fear, but the evildoer has good cause to tremble. For plague is the flail of God and the world his threshing floor, and implacably he will thresh out his harvest until the wheat is separated from the chaff. There will be more chaff than wheat, few chosen of the many called. Yet this calamity was not willed by God. Too long this world of ours has connived at evil, too long has it counted on the divine mercy, on God’s forgiveness.

Camus was not a religious man. Quite the opposite. The last part of the priest’s sermon, though–about a world that has too long connived at evil–is something Camus believed in, in a way, with great passion. So do I. It is really about society, responsibility and solidarity.

For too long we have connived at evil by pretending that society gets by on its own, or, as Margaret Thatcher thought, that it simply doesn’t exist. Americans tend to take this rugged pose in various forms: by pretending that we’re all atomized individualists; or that the market will solve all problems; or that government itself is the problem, not the solution; or that if we all had enough guns everything would work itself out; or that if we just wait for the super-rich to sprinkle a few dollars down on the poor through gig work and McJobs, they will get by. Or my personal favorite: as long as I have a big enough pile of money, everything else is as good as it needs to be.

These are all variations on the same kind of moral illiteracy.

For many years I had the privilege to live among adults who did not believe any of these childish fantasies–or at least they did not act in accordance with them. They knew that society was a human invention. If you wanted a decent society, you would have to pay for it.

And I don’t just mean money. Money is just a start. You would have to pay by believing that you really are responsible to your neighbors. You really do have to help set up good schools for everyone, even if you think your kids are more deserving than theirs. You have to build clinics and hospitals on the same model. Libraries, roads, tramlines. It all has to be good, and it has to be good for everyone.

We need these things all the time if we are to indulge our blind faith in the immediate future–the assumption that tomorrow will bless us with the same certainty today did.

What we are discovering through our current plague is how fragile our society is. It is fragile because we have allowed the rich and greedy to set its priorities. And so we inhabit a system designed only for the best of times–the only thing the rich can envision. Our healthcare system is set up to function well for the rich, just barely for the middle class, and not at all for the poor. Under “normal” circumstances, this is tolerable. Well, it is tolerable in the sense that it does not incite a general insurrection.

Same goes for labor and wages. An economy run on something like the Iron Law of Wages–pay driven down toward bare subsistence–is viable, but only under the best conditions. Our country is constantly running an experiment designed to discover how low wages can be driven for the maximum number of people. Sure, we can have a country where hundreds of thousands of people use payday loans to survive and never send their kids to a dentist, but only as long as widespread disaster does not strike. We need feel no responsibility for those people. It’s part of the American story to watch them struggle, alone, for survival. It’s interesting.

But when the plague landed on our shores, all our lives suddenly threatened to become more interesting. The moral corruption of our system was exposed. Suddenly it has become an urgent matter to supply people with money, goods and services they haven’t, strictly speaking, earned. But what if we had already had a system in place in which we collectivized our responsibility for one another–a system that normalized the impulse to take care of each other?

Writing in the Atlantic Monthly this week, Anne Applebaum counts the cost we are now paying for letting our society believe the lie of rugged individualism. That lie has led to institutional rot and a decline of not just governmental, but civilizational capacity:

The United States, long accustomed to thinking of itself as the best, most efficient, and most technologically advanced society in the world, is about to be proved an unclothed emperor. When human life is in peril, we are not as good as Singapore, as South Korea, as Germany. And the problem is not that we are behind technologically, as the Japanese were in 1853. The problem is that American bureaucracies, and the antiquated, hidebound, unloved federal government of which they are part, are no longer up to the job of coping with the kinds of challenges that face us in the 21st century. Global pandemics, cyberwarfare, information warfare—these are threats that require highly motivated, highly educated bureaucrats; a national health-care system that covers the entire population; public schools that train students to think both deeply and flexibly; and much more.

The plague comes for us all. That is undeniable. But we need not pretend we are up against it alone. We need a system that takes care of everyone all the time, before emergencies happen. That’s what society is for. And, yes, Maggie, society does exist. It’s been one of our best inventions.