Orwell, Steadfast Partisan of the Left

BY MATTHEW HERBERT

Sometime during the Cold War, right-leaning ideologues got the idea that George Orwell had switched sides shortly before his death in January 1950 and that his legacy was somehow on their side of the aisle.

An essay by the neo-conservative writer Norman Podhoretz in the January 1983 Harper’s is typical. In it, Podhoretz argues that Orwell underwent several “major political transformations,” and that if you trace their arc, you see it bending clearly toward Reaganite neo-conservatism.

Well, as Orwell said of Charles Dickens, some writers are well worth stealing, and Orwell himself proved attractive to thieves. The right’s bid to appropriate his legacy is attempted robbery. Orwell was a steadfast partisan of the left and remained so to the end of his life.

Why does (re)establishing this fact matter? I’ll get to that, because it really does matter, but first let’s consider the evidence for Orwell’s enduring loyalty to the left.

  1. Actions always speak louder than words. Orwell was shot in the throat by a fascist sniper while fighting with the Workers’ Party of Marxist Unification (POUM) in the Spanish Civil War in 1937. He temporarily lost his voice and nearly died. As he was recovering in a Spanish hospital, he wrote to one of his best friends that he could “at last really believe in Socialism, which [he] never did before.”
  2. Orwell made explicit commitments to the left’s political agenda, and he never reversed them. The leftist spirit was alive but only vaguely so in Orwell’s earliest writings, which addressed the injustices of colonialism and the structural nature of poverty in what was then the world’s biggest, richest empire. But in June 1938, he put his cards plainly on the table in his essay, “Why I Joined the Independent Labour Party.” The gathering threat of fascism, he said, was forcing passive leftists to adopt a concrete, organizing dimension for their sympathies, even though that meant they would have to make unwelcome political compromises with the establishment. “One has got to be actively Socialist,” he wrote, “not merely sympathetic to Socialism.”
  3. A thinker as forthright as Orwell would have publicly and explicitly withdrawn his support for the left had he privately abandoned it. He never did. Indeed, he used his dying breath to express his enduring loyalty to socialism. A representative of the United Automobile Workers had written Orwell sometime in mid-1949 asking if 1984, with its direct indictment of collectivism, had not signaled Orwell’s abandonment of the left. Weak, feverish, and unable even to walk from his hospital bed to the radiology lab for a needed X-ray, Orwell wrote back on the 16th of June, explaining clearly and forcefully, “My recent novel is NOT intended as an attack on Socialism or on the British Labor Party (of which I am a supporter) but as a show-up of the perversions to which a centralised economy is liable and which have already been partly realised in Communism and Fascism.” He would only write eight more letters in what remained of his short life, but that letter contained his last political statement; he remained a man of the left.
  4. Wait, what’s that about supporting the British Labor Party in 1949? Hadn’t Orwell joined the ILP in 1938? In the essay about that decision, Orwell said that despite his membership in the more ideological ILP, he hadn’t lost faith in the more mainstream Labour Party and that his “most earnest hope is that [they] will win a clear majority in the next General Election.” He knew where the winning votes would come from in a battle against the right. From the time he was a declared socialist to the end of his life, Orwell was a pragmatist who believed that the leftist movement would only advance its cause if it was part of a larger, viable coalition against the established monied interests. Politics, Orwell observed over and over, is always a choice between bad options and worse ones. His willingness to cooperate with non-socialist parties should not be interpreted as a rejection of his declared ideological loyalties. Winning is always ugly, and Orwell wanted to win.
  5. Podhoretz, in his 1983 essay laying claim to Orwell on behalf of the right, observes that Orwell was forever criticizing the left, with vigor. Of course he was. Orwell loved the left and did not want to see it commit suicide by bowing to rigid orthodoxies. He was always trying to keep the left honest and to make sense of his own experience as an apologist for a movement that could confound, embarrass and disappoint him in thousands of ways. On the other hand, Orwell’s career-long rejection of the right was as plain as the nose on his face.  From his earliest, unpublished writings about poverty and homelessness, Orwell was always against a state that was set up to steal the workers’ labor value and arrogate it to the one percent. (Orwell may have actually coined that phrase, by the way, in a diary entry in 1941.)

Indeed, despite Orwell’s frequent critiques of leftist foibles, there is nothing you can recover from his writing that teaches you how to be a better rightist. Orwell went to Spain in 1937 with an expressed desire to put a bullet into a real, existing fascist, and he never lost his antipathy toward the more abstract powers ranged behind the right–money worship, predatory corporations, religious authority, bought-off media, politicized courts, and of course, the great populist enabler of it all, Yahoo nationalism. All Orwell’s writings that conservatives might construe as rejections of leftism actually can, and should, be understood as instructions for how to be a better leftist.

An offhand remark by Orwell in a letter to the Partisan Review in 1944 is typical. He was tossing around the idea with his editors that Europe’s constitutional monarchies (in Britain, the Low Countries and Scandinavia) had done a better job resisting Nazism than Europe’s republics, possibly because time-worn royal pageantry stirred and provided a harmless, domestic outlet for popular patriotic sentiments. France, though, an exemplary republic that had killed its kings as any “correct” leftist movement would, had no repository for its patriotic feelings outside the state’s real power structures, and these largely strove for survival by adapting to fascism. If you tell this kind of thing to “the average left-winger,” Orwell noted, “he gets very angry, but only because he has not examined the nature of his own feelings toward Stalin.”


Today’s right-winger trying to put a neo-conservative construction on Orwell generally has an easy time cherry-picking items like this one. There was a shameful number of European and American socialists who stayed true to Stalin, and Orwell repeatedly called them out for this arch sin, along with many lesser ones. Gather a few of these indictments together and, voilà, you have an Orwell who was struggling to break free of his leftist dogmas and who would have grown in time to love Margaret Thatcher.

Bullshit. Orwell’s self-criticism never rose to the level of an embrace of the right, nor did it even point in that direction. Indeed, if anything systematic can be recovered from Orwell’s writings as a whole–and he seems to have hated systems–it is a multi-layered critique of the things that threatened to sink socialism.

The body of Orwell’s work weaves together three levels on which he constantly battled against leftist pieties–as an artist, as a political operative, and as a cultural conservative. Orwell believed that declaring a party loyalty was artistic suicide for a writer, whose job was to tell the truth. Writing requires complete freedom of expression, and party membership requires hamfisted modifications of this freedom. He knew he was maiming himself as a writer when he joined the ILP, but he joined anyway, because, he thought, the times demanded political responsibility even of artists. “Group loyalties are necessary,” he wrote in ‘Writers and Leviathan,’ “and yet they are poisonous to literature, so long as literature is the product of individuals.”

(Interestingly, Orwell was remarkably charitable to writers who stayed true to their art and kept out of politics. On his way to Barcelona in 1937, Orwell visited Henry Miller in Paris and praised him frankly and profusely for writing Tropic of Cancer, a book widely censored and generally seen at the time as a scandal of sacrilege and hedonism. Orwell was only slightly perplexed, possibly even charmed, by Miller’s naive indifference to what was happening in Spain. Miller exhorted Orwell to stay in Paris and drink, asking him why he would go down and throw his life away.)

As a political operative–or, by extension, as an ordinary voter–Orwell thought that backing certain desirable leftist causes would inevitably bring to light other, unarticulated commitments to less desirable, even repugnant outcomes. If you want the emancipation of the working class,  for example, you are going to need more, not less, industrialization, which is hateful on aesthetic and environmental grounds. Furthermore, there was no resolving such basic inconsistencies for Orwell: you just had to live with them. Political responsibility, he wrote, demands that we “recognise that a willingness to do certain distasteful but necessary things does not carry with it an obligation to swallow the beliefs that usually go with them.”

This is, of course, a liability of any political orthodoxy, not just the leftist one. But when Orwell indicated the best way out of this thicket, he was clearly speaking from and for the left. The first thing progressives (yes, he used that term too) must do is reject two assumptions forced on them by the established right. One is that the left is in search of a laughably unachievable utopia, and two is that any political choice is a moralistic one “between good and evil, and that if a thing is necessary it is also right.” Both these assumptions spring from a common myth, one whose popular acceptance Orwell thought the right had enjoyed for free for a long time.

This myth is the quasi-religious belief that man is fallen and essentially incorrigible. There’s simply no use trying to improve his lot. For centuries, the right (and its progenitors) have placidly asserted the dogma that humans are either candidates for heaven or hell, with no ground in between. Right in front of our nose, though, Orwell was constantly observing signs that humans were capable of making incremental progress, through politics that were often tortured, dishonest, even corrupt, but oriented nonetheless toward the reduction of human misery. In a 1943 book review, Orwell notes that the London slums of Dickens’s day teemed with poor people so deprived of decent conditions that it was objectively true to say they led subhuman lives. They were so far outside the pale, they could not even orient their existence on any kind of program to help civilize them. Sitting in his cold, dark flat during the Blitz, Orwell measured the progress achieved since the 1870s:

Gone are the days when a single room used to be inhabited by four families, one in each corner, and when incest and infanticide were almost taken for granted. Above all, gone are the days when it seemed natural to write off a whole stratum of the population as irredeemable savages.

The conservative belief that we cannot and must not take even the first step toward heaven as long as we are earth-bound, Orwell said, “belonged to the stone age.” Clearly there was no need to invoke a utopia if your real political aim was merely to reduce the worst, most tractable injustices occurring right here, right now. “Otherworldliness,” Orwell writes, “is the best alibi a rich man can have” for doing and sacrificing nothing to reduce the suffering of the poor.

The metaphysical pessimism behind the rich man’s alibi, Orwell believed, led directly to defeatism. And this defeatism made it an urgent matter for the left to reject the straw-man accusation that they were trying to build a utopia of unachievable dimensions. “The real answer,” he wrote, in a 1943 ‘As I Please’ column, “is to dissociate Socialism from Utopianism.” He would write this over and over again, in other words, in other places, until he died.

A right-winger looking to steal Orwell’s legacy can perhaps find the most aid and comfort in the third level of Orwell’s critique of his leftist fellow-travelers, his scorn for their bad taste and what we would today call the performative aspect of their politics. Despite faithfully bearing the leftist banner of liberté, égalité, fraternité, Orwell retained many of the biases and preferences of a garden-variety cultural conservative. He obviously believed it was important not to hide these things, but to wear them on his sleeve.

Although Orwell was horrified by war and believed that socialism would help pave the way to less of it, he was more horrified by pacifists who not only held to Chamberlain’s line of appeasement in 1939 but touted staying out of all wars on philosophical grounds. Orwell called this one-eyed pacifism. He didn’t stop there, though. He openly despised the posturing of the pacifists and other cultural progressives of his day, calling them “juice-drinking sandal-wearers” and “creeping Jesus” types. Emotionally, Orwell was closer to Archie Bunker on some things than he was to a by-the-book leftist.

Orwell was also free with some epithets that he might think twice about today. In letters to friends, he called homosexuals fags, often in connection with boys’ public school life. (He mentions in his 1948 essay “Such, Such Were the Joys” that the younger boys at school mooned over and sometimes had crushes on the older boys.) Although he took pains in one “As I Please” column in 1943 to observe that black American G.I.s in London were more polite than white ones, he used the N word without compunction. (It should also be pointed out, though, that he used the same word with political acumen when unmasking the racist hypocrisies of liberal democracies, as in his 1939 essay “Not Counting Niggers.”)

He also viewed the racial situation in Burma, where he served as a colonial policeman, stereoscopically. With one eye he saw the Burmese as “little beasts,” but with both eyes open, he was “all for the Burmese and all against their oppressors, the British.” Again, even while propagandizing for an oppressed people, Orwell believed it important to wear his reactionary racism in full view. He would always believe humans to be a tangle of contradictions, and he did not wish to have his own hidden.

In many ways, Orwell simply deplored the bad material taste of his time. He pined, as any conservative does, for the good old days, when beer was better and fishing streams cleaner. But he clearly reserved a special contempt for the aesthetic depths to which collectivists would plunge out of loyalty to their politics. The low-level, everyday miseries of Londoners in 1984 represent not just a shudder against ugliness and poor taste in general, but a particular warning against accepting material shabbiness as a condition of political progress. The opening chapter of 1984, set in Winston Smith’s apartment building, Victory Mansions, uses the aromas of chronic poverty to animate this idea. “The hallway smelt of boiled cabbage and old rag mats.” Smith’s Victory Gin “gave off a sickly, oily smell . . . ” In a later chapter, Smith visits a proletarian pub, where the ale smells and tastes sour. (See this wonderful 2016 article from the Guardian on “George Orwell and the stench of socialism” for further discussion of this theme.)

If your socialist leader promises material progress–which they all do as a matter of course–they had damn well better deliver. From East Germany to North Korea, collectivist dictators have been forced to make whole careers of denying the material poverty of their subjects. Had Orwell lived to see the Kitchen Debate of 1959 between Nixon and Khrushchev, he would have called it political schlock and free propaganda for American corporations, but I think he would have also called it an important victory for liberal democracy. It showed what working people ought to expect as a return on their labor value.

Orwell lived a great deal of his life near the functional poverty line, and his tastes were never sumptuous–how could they have been? But he did believe that the ordinary person’s attraction to nice things was a politically useful force. The realistic desire for a “nice cup of tea,” a good glass of beer, or a decent dinner out with one’s partner was a handy yardstick for measuring the success or failure of a government. The whole undertone of Orwell’s 1939 novel Coming Up for Air is how unnecessarily hard it was for an ordinary young person to fulfill even the shabbiest of proletarian desires in the world’s richest empire.

Does all this matter? Does it matter that the right cannot justifiably lay claim to Orwell’s legacy? I believe it does. Because I believe it is precisely Orwell’s stereoscopic vision of socialism that makes him true to and valuable for the left. “In a prosperous country,” he wrote in 1939, “left-wing politics are always partly humbug.” Progressivists made their livings, Orwell continued, self-righteously “demanding something they don’t genuinely want”–a measurable reduction in the elite’s standard of living. That would make waves. Safer to stay in opposition.

Orwell spent his energies, though, in pursuit of taking and holding political power for the left. Real political responsibility would come at a cost, as he knew, and it would court contradictions, compromises, even corruption. But that was also true of the political processes that lifted the lives of 1870s slum dwellers out of subhuman misery. Yes, socialism, as Orwell understood it, is partly humbug, but corporate capitalism is wholly and completely humbug. You cannot just cheer on the rich and wait for them to voluntarily return you some dividends on your labor value. It will never happen. This is not to say, though–and Orwell never would–that one system is right and the other wrong. But the system in which the worker makes his claim to a bare minimum of security is clearly a less bad system than one where the rich reserve the power to ignore the poor. Good politics always means choosing the less bad option over the worse one. This is the leftist cause, and Orwell is one of its leading champions.

Good Cheap Books

BY MATTHEW HERBERT

One of the things I love most about my e-reader is the access it gives me to inexpensive books. Some of the best bargains out there are great books or collections of great authors that you can get for mere pennies.

Now that many of us have more time for reading, you might consider savoring–or in some cases, tackling–some of these:

Germinal by Emile Zola. If you can hack the French, it’s free on Amazon. We anglophones can read it for 99 cents. This is a great book on its own, of course, but it’s particularly relevant right now while the whole nation is basically on a de facto general strike. When you consider what lengths Zola’s miners had to go to merely to organize a strike in one isolated mining district in Germinal, you start to think that our working class, with a general strike more or less materializing out of nowhere, might wake up and demand some nice things too before going back to work. Probably not, though, because, . . .

. . . our ambient levels of passivity and conformism run pretty high. On this theme, read Sinclair Lewis’s 1922 classic Babbitt. It gives the American middle class’s first honest look in the mirror. While our economic life in the Roaring Twenties told us we were masters of our fate, sitting on top of the world, our interior lives said we were chumps, self-righteous fools, and slaves to convention. Babbitt–the character and the novel–asks what lies beneath the masks we wear.

Speaking of broad human themes, you cannot miss Cervantes’s Don Quixote. It is arguably the first modern novel and in any case a great literary wellspring of the European enlightenment. From cover to cover it speaks unsparingly but with great comic warmth of a world no longer enchanted by religion. Milan Kundera reflects on it: “When Don Quixote went out into the world, that world turned into a mystery before his eyes. That is the legacy of the first European novel to the entire subsequent history of the novel. The novel teaches us to comprehend the world as a question. There is wisdom and tolerance in that attitude.” For 99 cents you can join the communion of the faithful who ask themselves this question over and over.

Don’t let the imposing-sounding title of Epictetus’s Enchiridion scare you off. It’s a highly accessible introduction to Stoicism, which basically says that unplanned, unjust, and chaotic as the world may be, you should still do your best to live an orderly, dignified life. When I read the plain moral brilliance of Epictetus, I have to wonder how we Americans–many of us cheerful, decent people of solid good sense–let ourselves be saddled with the farcical beastliness of Christianity and other Bronze Age Levantine sky god cults. Read Epictetus alongside your chosen “sacred” text and ask yourself which one really and truly commands your conscience.

If you enjoy unpacking surprise gifts, try any of the Stoics Six Packs series on Amazon, especially volume two, which includes Seneca’s essential On the Shortness of Life (De Brevitate Vitae). They cost 99 cents. Had you been a curious student living in ancient times somewhere on the Mediterranean rim, you would have risked your life traveling through war zones and plague hotspots to go study these masters.

For years I put off reading Proust’s In Search of Lost Time, because (a) it’s seven books long, and (b) it struck me in my youth as too fancy pants for my tastes, which ran more toward Dostoevsky’s direct line of questioning God Himself. My mind was changed, though, when I read that it was a stylistic inspiration for Shelby Foote’s massive, novelistic The Civil War: A Narrative (also a wonderful read but not one that meets my price criteria for this post). You will be richly rewarded even if you only make it through Swann’s Way, the first book. In it, Proust reveals how hard it is to be human. We think of our lives as more intelligible in retrospect than they are as they happen in real time, but Proust gives us pause, page after page, asking if that’s really true. The thing we think we know best–our self–is a collection of rounded-off impressions and outright illusions that require exertion to be held together. When critics talk about the modernist movement as one that unmasked the incoherence of the individual self–the notion that not only is there no essential me in the present, but that I cannot even construct one from the past–they invariably have Proust’s masterpiece first in mind. $1.99.


I have several good collections that range in price from free to $1.99. Used to be these beasts were hard to navigate because each book was marked as a “chapter,” so all you could do was move from one book to the start of another. Some were even worse (like a collection of Bertrand Russell I picked up and then abandoned in frustration). They contained thousands of pages, and sometimes all you could do was plow through from the start of the collection toward your desired book. These days, though, many large digital collections are organized better, with internal chapter markings. I am currently reading my way through the complete novels of H.G. Wells, and it is very nicely organized so that you can access individual sections or even chapters in each novel.

In 2015 or so–I can’t quite remember–I decided to read all of Charles Dickens. I was inspired by Orwell’s justly famous essay about Dickens, which convinced me that I would rather be a failed literary critic than a successful anything else, even if I came across as pretentious or ridiculous. Plus I didn’t have to quit my day job, which was nice. Anyway, picking up all of Dickens was the easy part. You can get his complete works for a buck.

It’s pretty much the same for Mark Twain, Jane Austen, Balzac, Nietzsche and Goethe. The E.M. Forster collection is good but lacks A Passage to India. I’m sure there are many other great collections out there; these are the ones I’ve spent my time on. Oh, yes, one more I can’t neglect. George Eliot’s Middlemarch is often called the greatest novel in the English language. You can pick it up in Eliot’s collected works and judge for yourself.

Review of “Joe Gould’s Teeth” by Jill Lepore

BY MATTHEW HERBERT

Have you heard of Joe Gould, the mad, drunken, chain-smoking bohemian who called himself the greatest historian in the world–who flunked out of Harvard, twice, communicated with seagulls, and wrote the world’s longest unpublished book, an oral history of everyone, but then again possibly didn’t?

Neither had I, until I read Jill Lepore’s completely absorbing 2016 biography of the man himself, Joe Gould’s Teeth.

Gould came from a family with a queer streak, Lepore tells us.

The Goulds had come to New England in the 1630s, and they’d been strange for as long as anyone could remember. [Joe] was born in Massachusetts in 1889, . . . . [His father,] Dr. Gould was known to fly into rages, and so was Joseph. There was something terribly wrong with the boy. In his bedroom, he wrote all over the walls and all over the floor. His sister, Hilda, found him so embarrassing, she pretended he didn’t exist. He kept seagulls as pets, or at least he said he had, and that he spoke their language: he would flap his wings, and skip, and caw. He did all his life. That’s how he got the nickname “Professor Sea Gull.”

There is plenty to start with here if you are trying to unravel the mystery of Joe Gould. The guy had weirdness in buckets. He was probably misunderstood by everyone around him, from his childhood on. Lepore ventures that he was autistic before it was a diagnosable condition. But there must have been something especially formative about having his sister deny his existence–in effect, trying to erase him.


Later, when Gould would attract the serious attention of literary critics, it was because of the revolutionary scope of his work. He wanted to democratize the recounting of history so that everyone had a voice. No one would be erased. He wrote:

What we used to think was history–kings and queens, treaties, conventions, big battles, beheadings, Caesar, Napoleon, Pontius Pilate, Columbus, William Jennings Bryan–is only formal history and largely false. I’ll put down the informal history of the shirt-sleeved multitude–what they had to say about their jobs, love affairs, vittles, sprees, scrapes, and sorrows–or I’ll perish in the attempt.

To a friend, he summed it all up rather beautifully, saying, “I am trying to present lyrical episodes of everyday life. I would like to widen the sphere of history as Walt Whitman did that of poetry.” In the end, though, he did perish in the attempt. It was all too much for him.

Gould wrote the oral history in hundreds, maybe thousands, of dime-store composition books, over the course of decades. But he could never find them when it came time to publish. He thought he left some on a chicken farm. Or he would have to re-write them. Or he had sent them to correspondents who kept them in trunks. Critics, and even friends, eventually came to suspect there was no Oral History. Could it all have been a dream? Maybe.

“He was forever,” as Lepore puts it, “falling down, disintegrating, descending.” He once fell and cracked his skull on a curb. He woke up with his head bleeding and recited parts of his history to a policeman. Writing held him together, but never for long, Lepore tells us. Gould just couldn’t get along with people, and in the end society could not accommodate him. He harassed women, including Harlem’s most famous sculptress. He turned on friends and generous benefactors. He died in America’s largest mental institution, unmourned, unnoticed by anyone on the outside. (In an early stage of his “treatment,” his teeth were pulled. Hence the book title. It was just one of those things doctors did in those days to render the insane more pliant. Lepore hypothesizes that Gould was probably also lobotomized near the end. This was another thing that doctors just did in those days, and there is a record of a man of Gould’s age and description undergoing the procedure.)

Lepore is an irresistible writer and magnificent historian. She tells the story of Gould in a way that sweeps you along with it. But it is the complexity of her subject that compels us. Beneath the greatness of Lepore’s writing is a paradox about Gould himself that yawns wide and takes us in. One of the ways Gould kept his internal balance–early on, when he still could–was to remind himself that individuals are unknowable at their core:

The fallacy of dividing people into sane and insane lies in the assumption that we really do touch other lives. Hence I would judge the sanest man to be him who most firmly realizes the tragic isolation of humanity and pursues his essential purposes calmly.

Lepore’s book is an adventure story in a way.  She set out on it, she says, to try to find the products of Gould’s “essential purposes,” the legendary stacks of dime-store composition books that contained the Oral History. But she didn’t find them. Instead, she found this: the man who gave his whole life to writing the history of the shirt-sleeved multitudes didn’t even believe in it. He couldn’t have. He thought people were unknowable at their core. You might get at the epiphenomena of their lives, but you could never access the individuals themselves. How could you write a history of all of them if you couldn’t even know one of them?

But he kept on, as we all must. “My impulse to express life in terms of my own observation and reflection is so strong,” Gould once wrote, “that I would continue to write, if I were the sole survivor of the human race, and believed that my material would be seen by no other eyes than mine.” This is an expression of courage worthy of Joseph Conrad. When you find that one life-sustaining thing you would do even with no one to witness it, you have arrived. It doesn’t matter that you are possibly as insane as Joe Gould. Because who’s to say what insanity is. Keep calm and pursue your essential purposes.

It’s the End of the World as We Know It

BY MATTHEW HERBERT

We humans are forever predicting the end of the world. My own guess is roughly five billion years from now, when the sun will swell into a red giant and burn out. This eventuality probably wouldn’t come up much as a topic of conversation, but my seven-year-old routinely asks about it.

Like me, he is made vaguely sad by the idea that everything everywhere has an expiration date, even if that date is unimaginably far in the future. Not only will we not be around to worry about the demise of the solar system, but even if there are any descendants of Homo sapiens alive to contemplate the last sunset, they will be as different from us as we are from bacteria, their faculties for grasping and representing reality utterly alien to our modes of cognition and perception.

Of course what is astronomically more likely is that all the earth’s life forms will have run their course eons before our tiny corner of the Milky Way unwinds according to the laws of thermodynamics.

But there’s something definitive about the end of the world that brings it within the scope of our imagination nonetheless. It doesn’t matter how far in the indeterminate future it may lie; it still looms as a finality. We can’t let it go.

Why not, though? I mean, five billion years on a human scale is eternity. There’s a strong case for just calling the world neverending, especially in conversations with seven-year-olds.

But we don’t.

The prospect of the world’s end doesn’t just haunt us vaguely, fluttering in the backs of our minds, Ian McEwan writes; it grips us and actively shapes far too much of our public lives:

Thirty years ago, we might have been able to convince ourselves that contemporary religious apocalyptic thought was a harmless remnant of a more credulous, superstitious, pre-scientific age, now safely behind us. But today, prophecy belief, particularly within the Christian and Islamic traditions, is a force in our contemporary history, a medieval engine driving our modern moral, geopolitical, and military concerns. The various jealous sky gods–and they are certainly not one and the same god–who in the past directly addressed Abraham, Paul, or Mohammed, among others, now indirectly address us through the daily television news. These different gods have wound themselves around our politics and our political differences.

Our secular and scientific culture has not replaced or even challenged these mutually incompatible, supernatural thought systems. Scientific method, skepticism, or rationality in general, has yet to find an overarching narrative of sufficient power, simplicity, and wide appeal to compete with the old stories that give meaning to people’s lives.

This passage is from McEwan’s 2007 essay, “End of the World Blues,” one of the best essays of the 2000s in my opinion. In it, McEwan takes a cool, dissecting look at our tendency to create and believe in stories about the way(s) we think the world will end. Most of these stories–lurid, violent, and deeply unintelligent–are clothed as religious prophecies. They tend to involve plagues, fiery demons, scarlet whores, sometimes mass suicides, almost always a culling of the unrighteous.

The most distressing thing about apocalypse stories, McEwan writes, is not (just) their power to make people believe them, but  their power to make people wish for them to come true. It was not just Hitler, gun in his hand, catatonic in his bunker, who cursed the world as worthy of extinction once it had shown itself undeserving of his gift. It’s a  thought that crosses many people’s minds. Christopher Hitchens called it “the wretched death wish that lurks horribly beneath all subservience to faith.”

Even at the core of the apparently consoling belief that life is a mere vale of tears and its tribulations, too, shall pass lies a fetid and dangerous corruption of the human spirit. Anyone who compensates for the hardships of life by contemplating the pulling down of the earthly scenery and the unmasking of the whole world as fraudulent or second rate is vulnerable, perhaps even prone, to an all-encompassing death wish. What are we to make of the 907 followers of Jim Jones killed by cyanide poisoning in 1978, who gave the poison first to children, then drank it themselves? They had arrived at the end of the world; they were pulling down its scenery to expose it as a fake. If you accept the article of faith that this life is not the “real” one, take care; your consolation differs only in degree, not kind, from the ghastly nihilism of the Jonestowners.

It is natural to understand our lives as narratives, with beginnings, middles and ends. But the story’s subject is so inconsequential against the backdrop of all of history, the telling so short! Seen sub specie aeternitatis, each of us is a mere speck of consciousness, animated by accident and gone again in a microsecond. We are, as Kurt Vonnegut put it in Deadeye Dick, “undifferentiated wisps of nothing.”

“What could grant us more meaning against the abyss of time,” McEwan proposes, “than to identify our own personal demise with the purifying annihilation of all that is.” This is a powerful alternative to accepting our status as candles in the wind. Longing for the apocalypse, McEwan is saying, is simply narcissism amped up to the max: If I have to check out, so does everyone and everything else. And merely believing in the apocalypse, as more than half of all Americans do, is the prelude to this totalitarian fantasy.

While we may think we are past the point where another Jim Jones could arise to command the imaginations of a group of benighted, prophecy-obsessed zealots, we are not. The apocalyptic personality is still alive and even walks among our elites. Retired Army General William “Jerry” Boykin, who once commanded Delta Force and the Army Special Operations Command, famously identified the United States’ enemy in the War on Terrorism in 2003 as “a guy named Satan.” Boykin also boasted that his pursuit of a Somali warlord in 1993 was fueled by the knowledge that “my God was bigger than his. I knew that my God was a real God, and his was an idol.”

As the Special Operations Commander, Boykin sought to host a group of Baptist pastors for a prayer meeting followed by live-fire demonstrations of urban warfare. The holy shoot-em-up was meant to inspire the invited Christian shepherds to show more “guts” in the defense of the faith.

Today Boykin teaches at a private college in Virginia and leads a think tank identified by the Southern Poverty Law Center as a hate group for its activism against the LGBTQ community. He believes the United States has a mission from God to defend Christendom, and in 2018 he said that the election of Donald Trump as president bore “God’s imprint.” Boykin’s professional success raises a serious question about the enduring power of religious apocalyptic prophecies. If Boykin had to give an earnest account of his faith to his political leashholders (when he was a general), it would clearly come across to that polite and educated class as slightly bonkers. How, then, does someone like Boykin rise to the position he did? Nursing a rapturous death wish and a longing for spiritual warfare is no disqualifier for high official success, it seems, as long as such mental disturbances bear the imprint of sacred scripture.

When Boykin was tasked in 1993 to advise the Justice Department on how to remove the Branch Davidians from their compound, he would have confronted in his opponent across the Waco plain a kindred spirit–a fellow scripture-quoting, God-and-guns Christian demonologist who saw the world as a Manichean battlefield.  All Americans should be disquieted by the fact that Boykin was closer in worldview to the armed, dangerous, and deranged David Koresh than he was to most of his fellow Army generals. His type is more likely to bring on the end of the world than to prevent it.


A second essay that, for me, helps define the distinct unease with humanity’s destiny that took shape in the 2000s is Bill Joy’s dystopian “Why the Future Doesn’t Need Us.” It serves as a reminder that it is not enough for enlightened societies simply to repudiate the lunatic fantasies of religion that titillated the minds of Jim Jones, David Koresh, Jerry Boykin, and so forth. We must also contend with the societal changes that will be wrought by our secular commitment to knowledge, science and reason.

The foundation of a rational society consists in what the philosopher Immanuel Kant called emancipation. Emancipation is the idea that humans are essentially alone, unaided by supernatural beings. We have only our own, fallible minds with which to try to understand the world and to order our relations with one another.

The American founders believed strongly in emancipation. They were deists, which meant they believed that although God had set the universe in motion, he no longer supervised or intervened in his creation. So it came naturally to the founders to think of themselves as not being under the discipline of a heavenly parent. Many of England’s scientists in the 18th century had come to Philadelphia, in particular, to escape the oppressive “parenting” of the church back home and to follow scientific discovery wherever it led. It was a great leap forward for humankind.

The thing about emancipation, though, is that it does not guarantee that free-thinking humans will choose wisely or act in a way that shapes their societies for the best. All it says is that we are unburdened by the dead hand of the past. Our future is yet to be created.

In 1987 Bill Joy, who would go on to invent much of the technological architecture of the internet, attended a conference at which luminaries of computer science made persuasive and, to him, unsettling arguments for the power of artificial intelligence to augment and even replace human cognition. It was a disturbing, formative moment for Joy. It crystallized a dilemma that he thought was rapidly taking shape for the whole of humankind. Our vaunted intelligence and talent for automation were setting in motion a new kind of creation, and it was not clear at all to Joy that humans would have a place in it.

In his essay, Joy quotes a long passage (drawn, as he notes, from the Unabomber’s manifesto) that proposes:

First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained.

Even if you disagree with Joy’s postulation as strictly stated, it is futile to deny the progress we’ve made since 2000 in what he’s getting at–having our work done for us by organized systems of increasingly intelligent machines. Even if we never reach the “utopia” of not doing any of our own work at all, we will, it seems, approach that limit asymptotically, and the difference between the real world and machine utopia will become practically insignificant.

Which could mean this, according to Joy:

If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions.

Again, the trends of our knowledge-driven society indicate that Joy is describing a highly plausible future, not a science fiction scenario. Already, algorithms, not doctors, identify which strains of seasonal flu should be immunized against each year. Search engines, not lawyers, collate the case law necessary for constructing legal briefs and going to trial. On German roads, speed trap cameras detect your speed, scan your license plate, and use networked databases to generate a citation and mail it to you. And don’t even start about Alexa locking and unlocking your doors, adjusting your thermostat, and playing lullabies for your kids on cue. Our lives today are filled with anecdotal evidence that reliance on technology is rendering our human grasp of the world increasingly obsolete.

But wait a minute. All this technology would be utterly inert and meaningless without a pre-established connection to human activity, right? German officials had to set up the system for enforcing speed limits: the technology is just the spiffy means for implementing it. The internet, to take another example, is as powerful as it is because it was designed to serve human purposes. Its proper functioning still requires the imaginative work of millions of computer scientists; its power is shaped and harnessed by millions of knowledge managers; its downstream systems require the oversight and active intervention of a phalanx of help desk workers and network engineers.

Fine, point taken. Let’s say humans will always have to man the controls of technology, no matter how “intelligent” machines become. In Joy’s view, though, this more promising-looking scenario still doesn’t get us out of the woods. It’s the other horn of the dilemma about our ultimate destiny:

On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite—just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone’s physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes “treatment” to cure his “problem.” Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or make them “sublimate” their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they will most certainly not be free. They will have been reduced to the status of domestic animals.

If anything, Joy seems to have been even more prescient about this set of trends. There are clearly still human power centers managing technology, and they are just as clearly pursuing the broad purposes Joy indicates they would. To take just one example, it is abundantly evident that a pro-Trump campaign meme in 2016 would have been designed by algorithm to micro-target the aimless poor and attract their support for policies meant to speed up their extinction (such as “guns everywhere” laws, the repeal of ACA, and the defunding of public schools). If Trump’s supporters felt increasingly voiceless in 2016, it was not for lack of willing spokesmen. It was more likely because technology-enabled chronic underemployment had drained their lives of any purpose that might be given a voice.

This anomie is coming for us all, by the way, not just the (former) laboring class. The growing trend of “bullshit jobs,” as described by David Graeber in his book of that name, gives a disorienting preview of the near future of office work. Increasingly, knowledge workers will have to wring their paychecks from a fast-shrinking set of whatever meaningful tasks automation leaves for us to do. We will mostly be left, though, with what Graeber calls “the useless jobs that no one wants to talk about.”

Humans are good at struggling. What we are not good at is feeling useless. The working poor that used to make up the middle class are now confronted by a future whose contours are literally unimaginable to them. They cannot place themselves in its landscape. Every activity of life that used to absorb human energy and endow it with purpose is increasingly under the orchestration of complex, opaque systems created by elites and implemented through layers of specialized technology. Farmers, to take one example, are killing themselves in despair of this system. They cannot compete with agribusinesses scaled for international markets and underwritten by equity instruments so complex they are unintelligible to virtually everyone but their creators and which are traded by artificial intelligence agents at machine speed, around the clock.

This world that never shuts off and never stops innovating was supposed to bring prosperity and, with it, human flourishing. To a marvelous extent it has. It would be redundant to review the main benefits that technological advances have brought to human life.

But the thing about technological advances is they just keep extending themselves, and as they create ever more complex systems, it becomes harder to anticipate whether they will help or harm us in the long run.

For Joy, the advent of genetics, nanotechnology, and robotic sciences at the turn of the century was a sea change in terms of risk. It turned scientific innovation into a non-linear phenomenon. He writes:

What was different in the 20th century? Certainly, the technologies underlying the weapons of mass destruction (WMD)—nuclear, biological, and chemical (NBC)—were powerful, and the weapons an enormous threat. But building nuclear weapons required, at least for a time, access to both rare—indeed, effectively unavailable—raw materials and highly protected information; biological and chemical weapons programs also tended to require large-scale activities.

The 21st-century technologies—genetics, nanotechnology, and robotics (GNR)—are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them.

This is the ecology within which technological threats to humanity’s future will evolve. Armed with massively powerful computers that churn terabytes of data derived from exquisitely accurate genetic maps and then hand the results to robots as small as human cells (molecular-level “assemblers”) to go out into the biosphere and do things with, humans increasingly have the capacity to redesign the world. “The replicating and evolving processes that have been confined to the natural world are about to become realms of human endeavor,” Joy writes.

What might this lead to? Well, hundreds of nightmare scenarios that we can imagine, and an indefinite number that we can’t. Joy quotes the nanotechnology theorist Eric Drexler, author of Unbounding the Future: The Nanotechnology Revolution: “‘Plants’ with ‘leaves’ no more efficient than today’s solar cells could out-compete real plants, crowding the biosphere with an inedible foliage. Tough omnivorous ‘bacteria’ could out-compete real bacteria: They could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, small, and rapidly spreading to stop—at least if we make no preparation.”

In other words, just like our relatively dumb personal computers, GNR technology will do exactly what we tell it to do, regardless of the depth of our ignorance of the potential consequences. And then it will do its own thing, because it will re-design itself. Until the end of the world, amen.

So pick your poison, as offered up by McEwan or Joy. It may be that we are too stupid to think seriously about our ultimate destiny, or it may be that we are too smart to settle for a future that is safe and humanly meaningful. Or, as seems most dismally likely, there is room in our world for both types.

Review of “The Plague” by Albert Camus

BY MATTHEW HERBERT

Albert Camus’s 1947 novel The Plague is almost always read as an allegory. It is said to be about the spread of Nazism among the French during World War Two. For Christopher Hitchens, it is a warning about the underlying malice of religion. The desire to burn heretics only goes dormant under the civilizing forces of science, politics and common sense, Hitchens believed. The Plague shows us that tyranny can always break out anew under the right conditions.

But today it is instructive to read Camus’s novel as simply about what it says it is about–an epidemic. We need no deeper symbols to give it meaning.


The focus of the story is on the progression of people’s responses to the sudden onset of a lethal, contagious disease. One day in nineteen forty-something, the denizens of the French colonial city of Oran, Algeria were going about their lives, “with blind faith in the immediate future,” as Camus puts it. With unthinking certainty, they expected every day to be followed by another one, differing in no important respect from the last. Love, ambition, work–everything that requires the positing of a future for its fulfillment–unfolds in glorious normalcy.

Then the rats start to die. First in ones or twos, soon after in large groups. Building supervisors and trash haulers have to gather them up and carry them away. People step on them unawares, feeling something soft underfoot, then kicking them away in disgust.

Soon, people start dying too. Two Oran doctors evaluate the evidence and hypothesize that a plague is underway. Their first reaction when they speak the word to themselves is to anticipate what will happen next–a large-scale official denial of the threat even as it unfolds before everyone’s eyes. “You know,” one of the doctors says, “what they’re going to tell us? That it vanished from temperate countries long ago.”

It’s funny, but we European-Americans think this kind of thing too. We lived in such close quarters with our animals for so long that we contracted a whole range of ravaging diseases, then became immune to many of them, then conquered the New World through a global campaign of germ warfare we didn’t even know we were waging. So it goes. Millions died.

Late in The Plague, as Oran is dying, the protagonist, Dr. Rieux, reflects on how little human agency matters once a brutal, unthinking pandemic is unleashed. Rieux has been working 20-hour days, and he is beginning to realize he will eventually lose the fight against the plague’s exponential spread. Amidst the stench of the dead and dying people of Oran, he achieves a kind of clarity:

Had he been less tired, his senses more alert, that all-pervading odor of death might have made him sentimental. But when a man has had only four hours’ sleep, he isn’t sentimental. He sees things as they are; that is to say, he sees them in the garish light of justice–hideous, witless justice.

I have read all of Camus’s books, and I am confident that this is a statement of record: it is Camus speaking directly to the reader. Camus believes each person lives alone beneath a “vast, indifferent sky,” and must confront an ultimate absurdity–that life, the one thing we humans are encoded and conditioned to seek with all our energy, is precisely the thing the universe will deny us. We are guaranteed not to get it. Some justice, right? You can see why Camus calls it hideous and witless.

The plague comes for us all.

But this is not the whole of the human situation for Camus. Faced with desperate absurdity, we invent things. One of these is society. We create all kinds of groups whose overarching purposes endow our individual lives with meaning. We subordinate our selfish desires to higher ends. They give us a reason, as Marcus Aurelius put it long before Camus, to rise each morning and do the work of a human. Having a society is what enables us to be fully human.

But the thing about society is that it does not come for free. It is not just there, like the elements of the periodic table. We create it, and we are responsible for sustaining it. And this is actually what The Plague is about, whether you take the plotline straight or as an allegory–it is about moral responsibility.

About one-third of the way through The Plague, the people of Oran start to understand that the epidemic ravaging their home will soon re-shape their lives. With the shit getting real and minds suddenly focused, a prominent priest decides to give a straight-talk sermon. It is, ahem, a come-to-Jesus moment:

If today the plague is in your midst, that is because the hour has struck for taking thought. The just man need have no fear, but the evildoer has good cause to tremble. For plague is the flail of God and the world his threshing floor, and implacably he will thresh out his harvest until the wheat is separated from the chaff. There will be more chaff than wheat, few chosen of the many called. Yet this calamity was not willed by God. Too long this world of ours has connived at evil, too long has it counted on the divine mercy, on God’s forgiveness.

Camus was not a religious man. Quite the opposite. But the closing lines of the priest’s sermon–about a world that has too long connived at evil–express something Camus believed in, in a way, with great passion. So do I. It is really about society, responsibility and solidarity.

For too long we have connived in evil by pretending that society gets by on its own, or, as Margaret Thatcher thought, that it simply doesn’t exist. Americans tend to take this rugged pose in various forms: pretending that we’re all atomized individualists; or that the market will solve all problems; or that government itself is the problem, not the solution; or that if we all had enough guns everything would work itself out; or that if we just wait for the super-rich to sprinkle a few dollars down on the poor through gig work and mcjobs, they will get by. Or my personal favorite: As long as I have a big enough pile of money, everything else is as good as it needs to be.

These are all variations on the same kind of moral illiteracy.

For many years I had the privilege to live among adults who did not believe any of these childish fantasies–or at least they did not act in accordance with them. They knew that society was a human invention. If you wanted a decent society, you would have to pay for it.

And I don’t just mean money. Money is just a start. You would have to pay by believing that you really are responsible to your neighbors. You really do have to help set up good schools for everyone, even if you think your kids are more deserving than theirs. You have to build clinics and hospitals on the same model. Libraries, roads, tramlines. It all has to be good, and it has to be good for everyone.

We need these things all the time if we are to indulge our blind faith in the immediate future–the assumption that tomorrow will bless us with the same certainty that today did.

What we are discovering through our current plague is how fragile our society is. It is fragile because we have allowed the rich and greedy to set its priorities. And so we inhabit a system designed only for the best of times–the only thing the rich can envision. Our healthcare system is set up to function well for the rich, just barely for the middle class, and not at all for the poor. Under “normal” circumstances, this is tolerable. Well, it is tolerable in the sense that it does not incite a general insurrection.

Same goes for labor and wages. The so-called Iron Law of Wages is viable, but only under the best conditions. Our country is constantly running an experiment designed to discover how low wages can be driven for the maximum number of people. Sure, we can have a country where hundreds of thousands of people use payday loans to survive and never send their kids to a dentist, but only as long as widespread disaster does not strike. We need feel no responsibility for those people. It’s part of the American story to watch them struggle, alone, for survival. It’s interesting.

But when the plague landed on our shores, all our lives suddenly threatened to become more interesting. The moral corruption of our system was exposed. Suddenly it has become an urgent matter to supply people with money, goods and services they haven’t, strictly speaking, earned. But what if we had already had a system in place in which we collectivized our responsibility for one another–a system that normalized the impulse to take care of each other?

Writing in the Atlantic Monthly this week, Anne Applebaum counts the cost we are now paying for letting our society believe the lie of rugged individualism. That lie has led to institutional rot and a decline not just of governmental but of civilizational capacity:

The United States, long accustomed to thinking of itself as the best, most efficient, and most technologically advanced society in the world, is about to be proved an unclothed emperor. When human life is in peril, we are not as good as Singapore, as South Korea, as Germany. And the problem is not that we are behind technologically, as the Japanese were in 1853. The problem is that American bureaucracies, and the antiquated, hidebound, unloved federal government of which they are part, are no longer up to the job of coping with the kinds of challenges that face us in the 21st century. Global pandemics, cyberwarfare, information warfare—these are threats that require highly motivated, highly educated bureaucrats; a national health-care system that covers the entire population; public schools that train students to think both deeply and flexibly; and much more.

The plague comes for us all. That is undeniable. But we need not pretend we are up against it alone. We need a system that takes care of everyone all the time, before emergencies happen. That’s what society is for. And, yes, Maggie, society does exist. It’s been one of our best inventions.

 

Review of “The Whites of Their Eyes: The Tea Party’s Revolution and the Battle over American History” by Jill Lepore

BY MATTHEW HERBERT

I wonder how many Americans know about the first suicide terrorist attack by airplane after 9/11.

It happened on February 18th, 2010. Fifty-three-year-old Andrew Stack III flew his single-engine Piper Dakota into an office building housing a local branch of the IRS. In addition to killing himself, Stack killed a civil servant, a father of six.

Stack was mad at the IRS over a long drawn-out tax dispute. But he was mad about lots of other things too. In a suicide note he wrote shortly before setting fire to his home and launching his attack, he railed against corporate greed, the government’s bailout of the financial sector, health insurance, and the Catholic Church.

At bottom, though, what angered Stack most was having been defrauded by his own country, as he saw it. Its institutions had indoctrinated him with phony beliefs about the interconnectedness of freedom, hard work, and prosperity. The founding fathers, he wrote in his suicide note, had fought against taxation without representation, but the country today was throwing that legacy in the trash. President Obama, with his tax-and-spend healthcare plan, was not just leading us to ruin but was abandoning what it means to be American.

Austin Hess, a young Boston engineer, could relate. At a rally against Obamacare the month after Stack’s suicide attack, Hess protested, “All the government does is take my money and give it to other people.” (We later come to learn the delicious fact that Hess’s paycheck comes from the Departments of Defense and Homeland Security. As the world’s largest employer, the federal government gives as well as it takes.)

Andrew Stack’s strange, sad terrorist attack caught the same reactionary Zeitgeist that gripped Hess and other members of the Tea Party movement then spreading across the country. Less than one month into the Obama administration, a cacophony of conservative voices was buffeting the new president with accusations of socialism, godlessness, and crypto-Islamism, among other things.

Fox News commentators, business leaders, right-wing think tanks, and Christian Fundamentalists all accused Obama of betraying the American Revolution. They gathered in Boston on April 15th, tax day, to protest the new regime of taxation without representation. They could feel the Founding Fathers returned to life and standing shoulder-to-shoulder with them, seething in anger, sensing some kind of alien menace they couldn’t, or wouldn’t, quite define. Well, one Tea Party sign dared to define it: “Spell-Check says that OBAMA is OSAMA,” it read.

In the formative days of the Tea Party, Fox News commentator Glenn Beck set up his studio to look like a schoolroom, the better to instruct his viewers on the real meaning of the American Revolution. Unburdened by the slightest sense of irony, Beck said he was fighting the forces of “indoctrination,” the same thing that had gotten under Andrew Stack’s skin before he crashed his plane into the IRS building. Beck implored his viewers to “hold your kids close to you” and teach them about the revolution that George Washington had led–a revolution rooted in “God and the Bible.”

Americans, rich and poor, dumb and smart, high- and low-born, are forever invoking the Revolution, the Founding Fathers, and the Spirit of 1776 to sanctify their political claims about the present. When it comes to having political arguments, “[n]othing trumps the Revolution,” writes Jill Lepore in her wonderful 2012 book The Whites of Their Eyes: The Tea Party’s Revolution and the Battle over American History.


As is the case with all of Lepore’s books, The Whites of Their Eyes is a wise, humane, intricately argued work of history. There is nothing reductive about it. I risk betraying Lepore’s generous intelligence, then, by beginning on a slightly reductive note: a list Lepore compiles of things that have been thrown into Boston Harbor as acts of political theater. They have included:

A fake container of crack cocaine

The 2007 federal tax code

Cans of (non-union-produced) beer

Annual HMO reports

No doubt there have been other things flung into Boston Harbor as well. The point Lepore wants to make is that it is reflexive for us Americans to escalate our political protests to heaven, always beseeching our gods, so to speak. The objects thrown into Boston Harbor are meant to symbolize not just wrongs in need of remedy but fundamental betrayals of one or another of our founding principles. From crack to non-union beer, they are the kinds of thing that should cause our Founders to roll over in their graves, we are told.

But here is a paradox: anyone with an axe to grind can play this game. The conflicted political discourse that produced the American Revolution was so capacious and so contentious, it can accommodate almost any post-Enlightenment political idea. Lepore writes:

The remarkable debate about sovereignty and liberty that took place between 1761, when James Otis argued the writs of assistance case [about British laws that basically established police powers in the Colonies], and 1791, when the Bill of Rights was ratified, contains an ocean of ideas. You can fish almost anything out of it.

And fish we do. Almost any set of opposing causes can be found and seized upon in the body of historical writings comprising the record of the American Revolution. Perhaps the most (in)famous is the matter of religion and theism. Cast your line in the shallowest waters of the revolutionary texts and you find Christian theism plain and simple–rights being endowed by God and all that. Fish a little deeper and you’ll find founders who betray not one whit of genuine theism–even outright rejections of it. Thomas Paine, with whom Glenn Beck likes to compare himself, used his dying breath to repudiate Christianity, telling his doctor he had “no wish to believe.” Benjamin Franklin, an expert bookbinder, inserted a lampoon of Biblical-sounding nonsense into his Bible to prank anyone who would listen. Faith of our fathers?

History, the making of it and the writing of it, is an argument, Lepore says over and over. Its outcomes and processes were never fixed in the stars and cannot be chiseled into stone tablets. But the problem is, we treat our history, especially of the Revolution, as if it were so fixed. Using history to make political arguments requires creativity, empathy and reason, but our attitude is all too often self-assured idolatry, writes Lepore:

People who ask what the founders would do quite commonly declare that they know, they know, they just know, what the founders would do and, mostly, it comes to this: if only they could see us now, they would be rolling over in their graves. They might even rise from the dead and walk among us. We have failed to obey their sacred texts, holy writ. They suffered for us, and we have forsaken them. Come the Day of Judgment, they will damn us.

This is not an appeal to history, says Lepore. It’s fundamentalism, or what she sometimes terms “anti-history.”

What she means is that history is not simply a transcribing of facts established in the firmament. Any historical approach that posits its subject is a fossil record of this kind is bound to fail; it goes against the spirit of studying and writing history. As Orwell once wrote, good history should “make the past not only intelligible but alive.”

The facts and the character of the American Revolution were never fixed. They were contested from the outset and continued to be contested immediately after independence. John Adams and Thomas Jefferson disagreed entirely about what the revolution was: Adams maintained it consisted in the legal and political actions leading up to the Declaration of Independence; Jefferson said it was the war for independence itself. For Benjamin Franklin, the revolution began with and consisted primarily in a crusade against established religion.

It did not take long for fights over the meaning of the Revolution to escalate into terms we would recognize today. Jefferson praised Shays’ Rebellion in 1787 as a sign of patriotic vigor. Adams said the rebels should be violently suppressed, as the constitution demanded. And the ink was not even dry on the document that was to ground this authority.

More fundamental disagreements soon followed.

When Jefferson was elected as the third president, succeeding Adams, a Boston newspaper declared that Jefferson had ridden “into the temple of liberty on the shoulders of slaves.” This because his win had been made possible by electoral votes created by the three-fifths clause. Had there been any founding fathers in their graves at that early date, surely Americans would have been alerted to their rolling over in them.

And what did Jefferson himself think of the power and meaning of the constitution? Toward the end of his life Jefferson wrote, “Some men look at constitutions with sanctimonious reverence and deem them like the ark of the covenant, too sacred to be touched. They ascribe to the men of the preceding age a wisdom more than human.”

Jefferson was better positioned than any other framer to recognize how deeply disputation ran through our founding documents, what human creations they were. A lifelong slaveholder, Jefferson was so profoundly conflicted over slavery that his first draft of the Declaration of Independence included what could fairly be called an argument with himself over the institution. In “a breathless paragraph, his longest and angriest grievance against the king, Jefferson blamed George III for slavery,” Lepore writes, specifically for not abolishing the slave trade with the colonies. Jefferson’s contemporaries disagreed with his anti-slavery passage, some saying it went too far, others not far enough. In the end, the passage was left out as a tactical expedient: the other framers thought it would open the colonists up to charges of hypocrisy, given how thoroughly slavery was embedded in their economy and culture.

In 2010 the Tea Party’s sanctimony would sometimes become more than a little bathetic, as when its members insisted on Congressmen and other adults reciting the Pledge of Allegiance. The Pledge’s author, Francis Bellamy, was a socialist who was once chased from the pulpit for demanding the rich be taxed heavily and their wealth given to the poor. Bellamy wrote the Pledge as part of an ad campaign to promote something his boss’s company invented called the “flag movement.” They wanted to sell flags to every school in America. The Pledge helped their business. It was meant to indoctrinate children.

Lepore urges us to understand there are no political conclusions that can be lifted directly from the founders’ principles or the history of the American Revolution. History is alive, but not in the sense the Tea Party says it is. If we wish to understand anything of the founders’ principles, we have to go back and examine them in the tension from which they arose, and the tension they never escaped. The idea that the wisdom of an earlier generation could resolve political contests with sacred revelations was precisely what the founders rejected. And we know this because of the disputes they had with one another, and which never stopped. “They believed that to defer without examination to what your forefathers believed,” writes Lepore, “is to become a slave to the tyranny of the past.”

 

Orwell’s Review of “The Soul of Man under Socialism” by Oscar Wilde

BY MATTHEW HERBERT

Crack open the first volume of Prejudices, H.L. Mencken’s career-spanning collection of essays, and what’s the first chapter you see? “Criticism of Criticism of Criticism.”

It’s really good. You can read it here.

About that title. Is it Mencken being self-deprecatingly funny? Yes. Is it Mencken being earnest and passionate? Also yes.

The part of humanity I feel closest to is the part that, like Mencken, gets worked up over words, and I mean worked up to the point of life and death. But Mencken was also lighthearted. He played in a brass band, wrote nonsense poems and drank a lot of beer. He knew all those words, earnest and passionate as they were, might just be leading us in circles.

So here’s a bit of circling around and around–some criticism of criticism of criticism.

In May 1948 George Orwell wrote a review of Oscar Wilde’s essay “The Soul of Man under Socialism,” a beauteous vision of the future in which life’s necessities would be so plentiful as to obviate the need to own things or even to work. The thing that prompted Orwell to write the review was the essay’s surprising durability. “Although [Wilde’s] prophecies have not been fulfilled,” Orwell wrote, “neither have they been made irrelevant by the passage of time.”

And this was saying a lot. Wilde had written the essay in 1891 at the peak of Europe’s Gilded Age. Wilde was no economist, and as Orwell points out, not really a socialist, just an admirer of the cause. The rich, old Victorian world Wilde lived in would have been unrecognizable to Orwell’s peers in 1948–many of them survivors of the two most destructive wars in world history, the most lethal pandemic since figures had been kept, and the worst economic depression since the dawn of the industrial age. Britain was on food rations in 1948, despite winning the war. If anything in “The Soul of Man under Socialism” still rang true after such extensive trauma, Orwell thought, it deserved another look.


Actually, it might be more accurate to say Wilde’s essay was just starting to ring true in 1948. It had a long latency. When Wilde wrote about socialism taking over the rich world, the prospect was clearly a pipe dream. But in 1948 Orwell sat up and took notice of the rise of communism in China and much of what would soon be called the Third World: “Socialism,” he wrote, “in the sense of economic collectivism, is conquering the earth at a speed that would hardly have seemed possible sixty years ago.”

The broad, gathering march of communism was what made Wilde’s essay relevant in a general way, but it was two of Wilde’s particular observations that really grabbed Orwell’s attention. One was that Wilde correctly perceived socialism’s inborn tendency toward authoritarianism. Any government given the power to control industry, markets and wages would be tempted to rule over all of society. Wilde admitted this, tangentially.

But he largely dismissed this threat, saying, “I hardly think that any Socialist, nowadays, would seriously propose that an inspector should call every morning at each house to see that each citizen rose up and did manual labor for eight hours.” And of course communist regimes did this kind of thing and much, much worse. Wilde’s error about socialism was basically the same one Americans have been making about democracy for the last 30 years. He assumed the system would work because the elites who implemented it would be rational and benevolent. What Orwell knew was that socialism’s ruling class–any ruling class–would entrench itself as an authoritarian regime once it accrued enough power to dictate to the masses. This is the basic plot line of Animal Farm.

In Oregon today, the state legislature has recently been proving that even a highly developed democratic system can break down if it is not implemented by benevolent elites. Bad faith is not a special problem of socialism. According to reporting by Vox, in Oregon the members of the Republican party, minorities in both houses of the legislature, have been walking out of their jobs every time a bill they oppose comes to a vote. A rule written long ago on the assumption that lawmakers would be good stewards of democracy requires two-thirds of the members to be present for a quorum in the legislature. The Democratic party holds a big enough majority to pass a law but not a big enough one to convene a quorum. So each time the legislature comes to the cusp of passing a law that the people of Oregon elected it to pass, the Republicans desert their posts.
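
The arithmetic of the walkout is simple, and a minimal Python sketch makes it concrete. The seat counts below are hypothetical placeholders, not the actual Oregon numbers; the only point is the gap between a passing majority and a two-thirds quorum.

# Passing a bill takes a simple majority of members present; doing any
# business at all takes two-thirds of the members present for a quorum.
# Seat counts here are illustrative, not actual Oregon figures.
TOTAL_SEATS = 30
MAJORITY_PARTY_SEATS = 18                    # hypothetical majority party
QUORUM = -(-2 * TOTAL_SEATS // 3)            # ceiling of two-thirds: 20

can_pass_if_all_present = MAJORITY_PARTY_SEATS > TOTAL_SEATS / 2    # True
can_convene_without_minority = MAJORITY_PARTY_SEATS >= QUORUM       # False

print(can_pass_if_all_present, can_convene_without_minority)

A party in that position can pass anything it likes, but only if the minority shows up to be outvoted, which is exactly the leverage the walkouts exploit.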

There may be less to the much-vaunted political culture of democracy than meets the eye. In the triumphalist mood at the end of the Cold War, we thought of liberal democracy as intrinsically superior to other ideologies. You could just see that collectivism of any kind was bad: look how corrupt its elites were and what failures of governance it produced. Well, it seems that socialists don’t have a lock on bad-faith failures of governance. Small-d democrats, too, may falter in, or even openly reject, their commitments to reason, decency, and fair play. We too are capable of wrecking a good system.

Another area where Wilde was sort of right but wrong in an interesting way was in his thinking about technology and leisure. He thought machines would relieve humans of drudge work and, hence, the “sordid need to live for others.” Freed of the need to work for wages, humans would seek something like Maslow’s self-actualization. “In effect,” Orwell writes, “the world [would] be populated by artists, each striving after perfection in the way that seems best to him.”

As Orwell points out, Wilde had tunnel vision on this matter. The utopia he envisioned blanketing the whole world was really only conceivable in the most developed economies, such as the one he lived in. Africa and Asia in 1948, Orwell pointed out, were far behind this level of development. A political ideology based on the equality of all humans that left most humans out of its equations would badly miss the mark.

Furthermore, Orwell saw that the minute technical challenges of machine work would have to be tackled before robots could do our jobs for us. Wilde glossed over this problem as trivial. Orwell wrote that machines lacked human “flexibility,” possibly referring to their lack of fine motor skills, or possibly to their inability to think. “In practice, even in the most highly mechanised countries,” Orwell wrote, “an enormous amount of dull and exhausting work has to be done by unwilling human muscles.” And today? I think any Amazon Prime delivery worker would still give Orwell an amen.

So Wilde put too much faith in robots, too soon. But things are changing now. The quixotic presidential campaign of Andrew Yang hinted in a fascinating way at a step change in the march of the machines. You know that weird idea Yang had of a universal basic income–where we just sit home and draw a check each month? Well, it’s coming, and it’s coming because thinking machines really are overtaking our jobs. The artificial intelligence revolution looks set to change the relationship between humans and labor forever.

If a computer can do a better job of, say, balancing a complex set of business accounts, why pay a roomful of sweaty, fallible humans to do the same job less well? And, if machines can think creatively (which they can, possibly beyond our ability to grasp), they will eventually reach a tipping point where they will exceed the human ability to design algorithms and apply them to real-world problems. Machines will design even better machines. Economically, what this means is that machines will create value.

Let me say that again: machines will create value. This development is unprecedented. For the 20,000-odd years we’ve had civilization, it has been up to humans, and only humans, to add our labor to nature, as John Locke phrased it, and create something of sufficient value that we feel a claim of ownership toward it. Plant and hoe a garden, and it’s yours. Eat your vegetables or sell them, but by god they are yours to do with as you please. This conception of property is a bedrock assumption of economics.

And, as Yang understands, it’s about to fall out from under us. If machines create the value that drives GDP, we will, to an uncertain but large extent, live off the proceeds and taxes derived from their creations. Just like that, the most improbable of Wilde’s prophecies will effectively come true: we will be freed of the “sordid necessity to live for others.” What then?

One of the recipients of Yang’s “Freedom Dividend,” the $1,000-a-month prototype of universal basic income that Yang passed out during his campaign, said he had bought a guitar with his thousand bucks. I thought it was kind of lame when I heard it, but slowly its meaning began to sink in. Guitar Man was basically an example of Orwell’s interpretation of Wilde: without the need to work for our wages, we are all artists waiting to happen. We will still be as busy as we have been the last 20,000 years creating ourselves, but not in response to the biological imperatives that have driven us thus far and the social structures that have evolved to organize those imperatives.

Thinking about what to do with your Freedom Dividend is vertigo-inducing. This is not just because that money fell from the sky, produced by a being that can feel no claim of ownership and does not know the meaning of the phrase “by the sweat of one’s brow.” Spending your Freedom Dividend is unsettling because what you are really doing is choosing what you want to be in a world that is no longer regimented by the conventions of work.

 

Review of “Secondhand Time: The Last of the Soviets” by Svetlana Alexievich

BY MATTHEW HERBERT

At a recent conference of far-right luminaries in Rome, Roberto de Mattei, a conservative Catholic intellectual, opined that a global leftist elite had “banned” good, honest patriots from publishing history books about communism.

The reporter of these words, Anne Applebaum, was better positioned than most of de Mattei’s audience to appreciate how wrong he was. Among the 16 awards Applebaum has won for writing histories of communism, two were National Book Awards and one was a Pulitzer Prize. Cataloging Soviet depredations is basically her career, and she’s still going strong.

But these days, if you choose your audience carefully and you make your claims in a certain tone of voice, it doesn’t really matter if there are great heaps of facts that contradict your position. Heads will be nodded in concerned sympathy, brows furrowed in resolve. But what kind of person, you might still wonder, would make such an easily falsifiable claim as the one de Mattei did about a global elite’s censoring of history?

I’ll come back to that in a moment.

In the meantime, I just finished a beautiful, wide-ranging oral history of late-20th century communism by the Belarusian journalist Svetlana Alexievich, Secondhand Time: The Last of the Soviets, published in English translation in 2016. It is one of four epoch-spanning oral histories Alexievich has published about communism in eastern Europe.


And, like Applebaum, Alexievich has won her share of recognition. In 2015 she won the Nobel Prize in Literature, which, I’ve always been told, is a pretty high mark to hit. One wonders what kind of intellectual de Mattei could be, since he clearly knows so little about the world of letters. Maybe he didn’t know we have girl historians who write books these days?

If I had to guess, I’d say the handful of Americans who read Alexievich probably start with the 2006 translation of her book Voices From Chernobyl, because it’s pretty clearly going to stroke the Schadenfreude we expect to feel about the USSR’s mendacity, brutishness and incompetence. Well, if that’s what gets you to read Alexievich, fine, but political porn is not what she delivers. She won the Nobel for a good reason. She writes about real people, mostly the things they’ve lost.

What Alexievich delivers in Secondhand Time is a haunting collection of often bleak but deeply human stories about how Soviet people experienced the death and denouement of the system they had built their lives around and thought of as permanent.

The impression that many westerners have of Soviets is that their lives were thoroughly dictated to them: they hated and feared their political masters and  never authentically believed in the ideology the Kremlin forced down their throats. The most important message of Secondhand Time is that many Soviets really did believe what they were taught, even if they knew their teachers were brutes. It turns out that real people found real reasons for believing in communism despite the horrors, large and small, that propped it up–the gulags, the informants, the secret police, the cult of Stalin, the show trials, the bread lines, the work camps, the mass relocations.

Outside the space created by these horrors, many Soviets managed to thrive in their mental and communal lives. They lived for the rewards that austerity tends to inspire in an educated people–ideas, discussions, small freedoms, camaraderie. Books and writers were the focal point of this life, which brimmed with a shared sense of struggle. A former Soviet school teacher finds a notebook that belonged to her daughter during the last days of the USSR. An essay in it called “What is Life?” proclaims, “The purpose of life is whatever makes you rise above.” Another former Soviet observes, “Russians don’t want to just live, they want to live for something. They want to participate in some great undertaking.”

Most of Alexievich’s interlocutors show they have undergone a personal transformation that typifies this kind of great-society desire. Many recall the disciplined subordination of their individual interests to a collective goal as a sacred, even exhilarating experience. Remember Don DeLillo’s oh-so-90s observation that “the future belongs to crowds”? Well, it’s not entirely cynical. Masses can sometimes yearn for justice, not just demagoguery. Alexievich gets to know the crowd joiners and, again, reveals them as real people who were not crazy for believing what they believed.

The Soviet everyman saw himself as a worker, and not just in a factory. He (or in many cases, she, as Alexievich illustrates) saw himself as working hard to build a political system the world had never seen–a state that would guarantee economic justice. Of course we all know how precipitously the revolution collapsed, but we tend to see this event from the top down and the outside in. We are impressed by its structural features, its ethos as it affected the rest of the world. But what Alexievich draws out is that ordinary Soviet people suffered real loss at the failure of communism. Contrary to a western literature on the USSR going back to Isaiah Berlin’s 1949 The Soviet Mind, Soviet people didn’t just have cutout identities thrust on them by a faceless authority. They actively built lives that embodied communism’s guiding values of economic justice. They were proud of the fact that each was entitled to basic goods and services and there was no room for the predatory rich. And, despite the overwhelming corruption of the Soviet system that first undermined and later obliterated those values, the lives of many ordinary Soviets were full of dignity.

Alexievich, if you’re wondering, does not bait her hook and go fishing for stereotypes of such dignity. She casts her net wide and gathers whatever stories come up. Most of the personas are damaged, and a few are repulsive, such as the former gulag guard who “was just following orders” and says he would torture and kill again if another Stalin would arise.

But mostly, the stories are of loss, experienced by people we can relate to. Very few of Alexievich’s interlocutors want the whole Soviet edifice back (although some are nostalgic for empire), but most of them want a return to the time when their ideals meant something and had official backing. One subject recalls how for decades, she and her friends and family had been content to discuss books and debate communism all the time, usually in the kitchen. When those decades came to a sudden close, the ideas behind the discussions simply vanished, she recalls:

With perestroika, everything came crashing down. Capitalism descended. . . .90 rubles became 10 dollars. It wasn’t enough to live on anymore. We stepped out of our kitchens and onto the streets, where we soon discovered that we hadn’t any ideas after all–the whole time, we’d just been talking. . . . We [had been] like houseplants. We made everything up, and as it later turned out, everything we thought we knew was nothing but figments of our imaginations: the West. Capitalism. The Russian people. We lived in a world of mirages.

In his 1908 book The Philosophy of Loyalty, the philosopher Josiah Royce argues that the key ingredient to a meaningful individual life is the same as the key ingredient to a decent community–loyalty. It’s the thing that enables a dying person to say, “Okay, I can let go. The cause I’ve lived for is still intact. It will absorb the contribution I made during my life and keep going.” If we can die still feeling such loyalty, we can be fundamentally content.

The devastation that so many of Alexievich’s interlocutors evince is caused, I think, by the loss of the only object of loyalty they ever knew. They had worked, suffered and bled for the revolution. And many killed for it. But since the 1990s, they face the prospect of death with their life-sustaining narratives swept away. There is no recognizable future for them to be loyal to, let alone struggle for. One interlocutor told Alexievich, “For us, suffering is a personal struggle, the path to salvation.” Now, though, they’ve been told to stop struggling; there was never any point. They should just feel good and buy stuff instead.

For me, this is where the former Soviets’ pain gets slightly personal. By a series of accidents that shaped my values, and despite my utterly different lived experience, I arrived at a communist-style contempt for the crass life of the stomach and wallet, the very thing that many of Alexievich’s interlocutors scorn. I feel the same contempt for consumerism that the ex-Soviets experienced so dramatically when they were simply told that money would be their new god. Class war was still on, they were told, but the goal was different. It was now a race to the top of the exploiting class.

Many of Alexievich’s interlocutors were appalled at the economic injustices that cascaded all around them in the 1990s even as promises continued to be made that everything would be fine. One observed that the Russians in transition “were sure [in the 1990s] that a new future awaited them. Now [in the 2000s] it’s a different story. Today’s students have truly seen and felt capitalism: the inequality, the poverty, the shameless wealth. They’ve witnessed the lives of their parents, who never got anything out of the plundering of our country.”

Homo sovieticus is dead and gone. He cannot and should not be brought back. But again and again, Alexievich’s interlocutors seem to mourn certain parts of their lived communist experience that were undeniably decent and good. A kitchen talk about political novels is an objectively better thing–for the individual and society–than a coked-up ride on an oligarch’s yacht. There’s no denying that, unless you are a supreme asshole. Alexievich’s ex-Soviets rebel at the idea that the default alternative to communism is surrender to an even lower, more crass form of life. This presentation of “the problem” is both stupid and harmful. You can be a collectivist without going Stalinist.

Many of the subjects in Secondhand Time still feel loyal to those kitchen discussions and the underlying idea that people have a higher purpose than exploiting other humans and consuming as much as they possibly can. I do too. Yachts are for jackasses and ignoramuses. Give me the kitchen and a good book any day.

(You can also read reviews of Secondhand Time in the NYT or the Independent. As always, I wrote mine first to keep my thoughts fresh.)

 

Active Measures

BY MATTHEW HERBERT

We can often be forgiven our sins of omission. We let small duties slip, under-perform just slightly on challenges to our integrity. If the harm done is not too great, and we buck up and pledge to do better, fine. We can’t be expected to attend to every little thing all the time.

Take Hi C Fruit Punch. I was raised in a time when adults believed it was good for kids, because, read the label, it has vitamin C. Back then, there were no science-based warnings like this one that say, for the love of god, DO NOT give this diabetes bomb to innocent children. It was a sin of omission.

But not anymore. Now we know. In just a few minutes you can inform yourself, using reliable sources, about what Hi C is–a dyed mixture of water and corn syrup containing more sugar than Coke. And, so informed, you need never again be implicated in the peculiar form of child abuse for which this product was designed.

This past week I was reminded that, in some cases we are not merely guilty of passive, Hi-C-in-the-1970s-style ignorance, but we sometimes take active measures to achieve the levels of stupidity our corporate masters desire of us.

If you don’t believe in the Devil, you can go ahead and believe in this, which is far worse: we humans will actively harm our intellects to keep from knowing things that threaten to constrain our other appetites.


For example, it is no secret that lobby groups have for decades suppressed any kind of scientific research into gun violence. The CDC wants to do this analysis, since gun violence is a leading cause of death, but it can’t get the funding. For anyone with eyes to see, this is clearly because the gun industry does not want to have a Hi C moment. Almost everyone who looks at the available data knows, for example, that if you keep guns in the house to defend against “home invasion,” you are much more likely to end up killing or wounding yourself, your spouse, or someone else in your family than an intruder. To put the point slightly technically, gun possession in the home makes self-harm a more likely outcome than successful home defense.

But here’s the rub. The point cannot be put more than slightly technically, because the necessary science has been prevented. (Orwell wrote with bitter disdain about the “prevention of literature” in an authoritarian political culture. He would have loved the idea that the authorities could also prevent science. Not.)

Suppose for a second that you are a rational actor seeking to make an informed decision about whether to keep guns in your home. (I’m not sure there are rational actors, since the work of Tversky and Kahneman, but the concept is at least a useful fiction. It makes possible a certain kind of liberal politics on which our laws are based, so let’s go with it.)

What you want is a sound cost-benefit analysis, a comparison of the most likely risks and rewards presented in your dilemma. The perceived rewards in this case are generated inside your head. They are derived from horrific, or possibly heroic scenarios of you confronting psychopathic home invaders bent on harming you and your family. Movies and TV help supply these images. But drawing on one’s cinematic intuitions of home invasion as a data source does not really give us a running start at making a rational choice. However, we have to start somewhere. Let’s table this side of the analysis and take a look at the other side.

The data on the other side, about the potential costs of keeping guns in the house, comes from . . . nowhere. That’s because, as I noted, it has been forbidden to do the required research. Since the 1996 Dickey Amendment, which drastically cut funding for gun violence research, no major studies have been done on the potential linkage between gun ownership and various kinds of self-harm. And the spirit of that law has been vigorously reinforced over the decades by the energetic lobbying of the NRA. Anytime some crusading university or think tank starts thinking about trying to reinvigorate gun violence research, the NRA meets with the senator(s) who are capable of shutting it down. And shut down it is. It’s money well spent, if your objective is never to know anything substantive about linkages between guns and gun death.

The last, probably only rigorous study of the risks and rewards of keeping guns in the home was done in 1993. It said if you kept a gun at home you tripled the chances that someone would be shot there, and that someone was rarely an intruder. Specifically: “The researchers found that a majority of victims, 76.7 percent, were killed by a spouse, family member or someone they knew, and that in 85 percent of cases there was no forced entry into the home.”

Research, if it is to be effective, must be done and re-done. Its results must be challenged, validated, put into new contexts. Much has changed since 1993. Or has it? When it comes to gun deaths, we really don’t know. Many gun advocates with even a rudimentary understanding of statistics are probably looking at that figure from 1993 that says keeping a gun at home triples the risk and saying to themselves, “Well, okay, that’s a change in risk, but what’s the absolute risk level? If it’s small enough, I might still make a rational choice to keep a gun in my home. That tripling figure might not be decisive for me if it means the risk goes up from .01 percent to .03. Going from one percent to three, though? That might be a different story.”
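
To see why that distinction matters, here is a minimal Python sketch of relative versus absolute risk. The baseline figures are the hypothetical ones from the paragraph above, not findings from the 1993 study or any other source.

# Relative risk gives the multiplier; absolute risk gives the stakes.
# Baseline risks below are hypothetical illustrations, not real data.
def absolute_risk(baseline, relative_risk):
    """Absolute risk implied by a baseline risk and a relative-risk multiplier."""
    return baseline * relative_risk

for baseline in (0.0001, 0.01):                 # 0.01 percent vs. 1 percent
    tripled = absolute_risk(baseline, 3.0)
    print(f"baseline {baseline:.2%} -> tripled {tripled:.2%} "
          f"(added absolute risk {tripled - baseline:.2%})")

The same multiplier yields a trivial added risk in the first case and a substantial one in the second, and deciding which case we are actually in is exactly what the suppressed research would settle.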

But we’re on course to never know the real figures. Active measures are being taken to prevent the pertinent knowledge. Ambient levels of ignorance among the public are not good enough for the NRA, so they are funding active resistance to scientific research.

Here is the nub of the problem: corporations and their lobbyists (and therefore our government) are doing whatever it takes to deprive us of the science necessary to ground a rational choice. No matter which side of the debate our instincts lie on, it is impossible for us to discover enough to upgrade our emotive instincts to reasoned arguments. We’re allowed to know about the effects of Hi C on blood sugar (thank goodness), but the effects of guns on the home and family are officially off limits.

You know that bumper sticker, Guns don’t kill people. People kill people? The NRA pays our government millions of dollars to make sure scientists never turn that quip into a testable hypothesis. It works because it remains a bumper sticker. Ignoring the potential link between guns and gun violence is a (forgivable) sin of omission but only as long as the relevant scientific knowledge is never generated.

This topic came up last week because the latest federal budget included a provision that said a teensy weensy bit of research might happen sometime. As weak as the clause is, it should be child’s play for the lobbyists to kill it.

So stopping science before it starts is one way to promote mass ignorance. Another one is to take a hammer and wreck scientific results after they have been produced.

An article in the Atlantic Monthly last week brought this kind of active measure into the light. Basically, this is what has happened. A federal agency tightly under the control of the current regime (yes, it is a regime), the National Highway Traffic Safety Administration (NHTSA), got ahold of research produced by the Environmental Protection Agency (EPA) that had been used to underwrite emissions restrictions put in place by the former administration.

The EPA had rolled out this research in the usual way, by publishing a detailed explanation of the science behind it and a transparent record of how it got incorporated into policy recommendations. This is the kind of boring, thankless job that civil servants do each day for the citizens and leaders of the country. It gets done not for glory or money, but because individual experts are committed to public service and professional standards.

This kind of boring, deliberate process, by the way, is necessary in a republic because it allows citizens and lawmakers to judge for themselves whether the laws and rules that regulate our lives comport with reason and reality. It’s the kind of thing you would read with interest if you suspect big government ever tends to get too big. This kind of transparency is one thing that makes the difference between a law and a decree. When a government just says whatever it wants, without accountability or reason, that’s a decree.

And the NHTSA did such a shoddy job of showing its work on its latest emissions study that its conclusion looked an awful lot like a decree.

For decades, the EPA and NHTSA had coordinated closely on their work on tailpipe emissions, and their conclusions and policy recommendations were more or less issued jointly. But in 2018 the NHTSA took the most recent joint report on emissions and, without the EPA’s knowledge, re-jiggered its most important assumptions and re-did much of its math. The conclusion they came up with–and I am not making this up; if you don’t trust the Atlantic you can read the scientific paper on which its story is based–the conclusion they came up with was that increasing the weight and carbon emissions of American cars would save American lives.

This conclusion came out, unsurprisingly, without the long, tedious explanation that is required to accompany a change in federal rules. In the 7th grade, this is what you get a D for: writing a thesis sentence without any supporting evidence. It turned out, though, that there was something behind the study that was supposed to look like evidence . . . kind of.

I had no idea what a turducken was before I read this article, but hats off to its author for describing the NHTSA’s reasoning behind this conclusion as a turducken of errors.

So instead of just coming out and making the baseless claim that we’d all be better off in bigger cars that emit more greenhouse gases, some apparatchiks in the NHTSA used a crayon–who knows, maybe it was a Sharpie–to falsify the science behind a painstaking analysis to the contrary. The peer-reviewed science paper that rebuts NHTSA’s work says it “cited incorrect data and made calculation errors, on top of bungling the basics of supply and demand.”

I use the term apparatchik with due consideration. When Soviet regime loyalists needed to pull the wool over the people’s eyes, which was pretty much all the time, they would go back and change the written record to do so. They would simply make events or people disappear from books and newspapers, or insert new persons and events. And as Orwell illustrated in 1984, changing the documentary record with sufficient force and assiduousness is as good as changing the underlying reality.

Give an editor enough power, and s/he literally controls the truth. The NHTSA is trying to exercise this kind of power. It is trying to take a rule based in science and replace it with a decree based in the regime’s say-so.

The active suppression of facts–about guns, cars, what have you–flies in the face of what it means to be American. We are not supposed to be scared of information, or of the intellectual disciplines that test information and put it in theoretical context. As Jefferson wrote in the Declaration of Independence, “[L]et facts be submitted to a candid world,” and the world can evaluate them. The current regime, which has already outraged so many other American values, thinks we are unworthy of knowing facts and prefers that we put our faith in memes and bumper stickers instead. This active maiming of our own intellectual faculties is what makes decrees possible.

Our country was created for the bold exercise of intellectual courage. It’s part of our national purpose. Alexander Hamilton wrote about this in Federalist No. 1, the very first essay that made a case for what our country would be for. First and foremost, he said, the United States would be an experiment:

It seems to have been reserved to the people of this country, by their conduct and example, to decide the important question whether societies of men are really capable or not of establishing good government from reflection and choice, or whether they are forever destined to depend for their political constitutions on accident and force.

Sins of omission happen by the hundreds. Have I read every rule explanation that deals with, say, implementing the Clean Water Act? Nope. I’ve let my intellectual responsibilities slide. But I have a certain amount of justified faith that the pertinent explanations are available and have been designed by reflection and choice. I have been able, so far, to get by on a certain amount of trust that my government makes laws, not decrees.

But the government’s active measures to induce intellectual sloth and cowardice compromise this trust. Greater vigilance is called for. When we let a government cancel science and shout down reason, we make way for a life ruled by accident and force.