The Four Things I Keep Coming Back To | Part One: Sheer Egoism

BY MATTHEW HERBERT

I am stuck with writing as a way of life, in much the same way I am stuck with running as a way of life. I have to do it pretty routinely to feel fully alive, even though it doesn’t come easily to me and I’m not very good at it.

My natural place in the field of writers, as with runners, is somewhere toward the back of the pack. Every time I write one of these posts, I learn my limitations anew: I sweat out a first draft that I feel okay about. I start knocking the rough edges off it and almost invariably end up changing its direction–without intending to–shifting tone, getting lost in asides and so forth. But usually I’ve put enough work into the second draft that I just accept its shortcomings. I polish it once, twice, three times, hit the publish button, and then I still read things back to myself that sound embarrassingly bad. I even manage to choose quotations from great writers that were trenchant and arresting in their original settings but seem off target in the places I try to use them.

And I ask myself: Since the human mind is more or less a representation machine, shouldn’t the act of writing down sentences that describe the world or express our thoughts come naturally to us? Why is it so hard?

Humans are, to my knowledge, the only species capable of being ridiculous, and this because we are uniquely capable of getting representations of simple, basic things so wrong. If cats could write, one gets the impression they would not produce ugly, awkward, or jarringly stupid sentences about the cat world–how or why would they? They would just, without bother, represent cat-facts as they are, right? Why, poor human that I am, with all my talent for error and solecism, do I insist on multiplying my opportunities for humiliation by trying to write things down? Wouldn’t it be better to take the advice of Wittgenstein and simply pass over in silence the things that resist our ability to express them? And isn’t that set of things pretty much all of life?

But I can’t drop it, this compulsion to write. There is a broad motive that underlies the whole attitude of a writer, one I find best expressed in the life of James Baldwin, and I may write about it in the coming weeks; but today I want to focus on a subject that I keep coming back to year after year–the four specific reasons Orwell gave for writing in his 1946 essay “Why I Write.” They are all deceptively simple. And for anyone like me, who feels that writing is an organic part of living, they are much more than answers to the question, Why do I write? They are answers to the question, Why do I try to live the way I do?

When Orwell was asked why he wrote, in 1946, he had published scores of book reviews, dozens of essays, his own regular op-ed column in a national newspaper, and eight books, but he was still poor. Animal Farm, the book that would finally make him an international literary star, had come out in late 1945, but it would be several years before it would earn him much money. (In fact, as with 1984, almost all the royalties would come after his death.) Orwell cared so little about the proceeds that while he was still struggling to find a publisher for Animal Farm, one of the things he did was to make sure a Ukrainian translation was issued, for free. Some Ukrainian dissidents, after reading their free copies, would tell Orwell that his grasp of Soviet repression and intellectual corruption was literally incredible: they could not believe Orwell had not lived in the USSR himself.

Hearing this kind of thing was, it turns out, the first reason Orwell wrote. He didn’t care about the money; he wanted to be heard. Known. Admired. All writers do. This is how he put it:

Putting aside the need to earn a living, I think there are four great motives for writing, at any rate for writing prose. They exist in different degrees in every writer, and in any one writer the proportions will vary from time to time, according to the atmosphere in which he is living. They are:

1. Sheer egoism. Desire to seem clever, to be talked about, to be remembered after death, to get your own back on grownups who snubbed you in childhood, etc etc. It is humbug to pretend that this is not a motive, and a strong one.

The key is in that last sentence. It is humbug–the Victorian word for bullshit–to pretend you are not in the writing game for the ego strokes. This crucial piece of self-knowledge is something every writer must attain if s/he is to move on to serious writing, Orwell believes. All writers are “vain and self-centered,” he writes in the same passage, and they’re better off admitting it. It clears the accounts and puts the writer in an honest frame of mind.

This is one of the things I keep coming back to. Orwell’s admission of egoism is a secular version of the religious consciousness that all humans have sinned and fallen short of the Kingdom of Heaven. Orwell seems to be saying (to me, at any rate) that even writers, who set themselves up in a kind of omniscient position, interrogating politics, art, society, history, and so forth and passing exalted judgments on them, remain ego-driven children at their foundations.

I think it was in this same vein that the novelist Orhan Pamuk described with disarming frankness in 2006 why it was so great to win the Nobel Prize. The Nobel is at the top rank of human achievement. Only our most vaunted, learned thinkers earn it. They are imbued, we believe, with cold, Olympian virtues of aloofness and detachment–virtually a race apart from us. But Pamuk punctures this myth delightfully (and delightedly), saying in his Nobel Banquet speech that he once again felt like a teacher-pleasing child:

Actually the question I’ve heard most often since the news of this prize reached me is: How does it feel to get the Nobel Prize? I say, oh! It feels good. All the grown ups are constantly smiling at me. Suddenly everybody is again gentle, polite and tender. In fact, I almost feel like a prince. I feel like a child. . . . In fact now … come to think of it … That is why I write and why I will continue to write.

You can’t get much more egoistic than a prince, right? Everyone around you thinks you are young, smart, handsome, fit to rule the realm. Orwell was right about this state of the writer’s mind, and he was right to put egoism first in his list. Whatever writers tell you their purpose is, you can be sure their primary motive is to be heard, acknowledged, and valued.

And since writing is a way of life for me, I take Orwell’s words as a caution: In everything I do, even if it seems selfless or noble, there must be a part of me that is putting me first, calling out for praise and recognition. I have come to believe that some people think the point of growing up is to deny the existence of, overcome, or possibly even eliminate, this childish, selfish part of oneself. But this is humbug. The child never goes away, and we should not pretend that virtue demands we negate it.

Two Cheers for “Blab” Books: How Corny Quotation Collections Shaped Two of the Greatest Minds in American History

BY MATTHEW HERBERT

Two of the best books I’ve read over the last year have been Abe: Abraham Lincoln in His Times, by David S. Reynolds, and Frederick Douglass: Prophet of Freedom, by David W. Blight.

Both men, Lincoln and Douglass, show that anyone can learn wisdom, gracefulness of expression, and moral courage from salutary books, no matter how humble or, indeed, corny. Even the most plebeian texts imaginable can help us furnish our minds beautifully, as the lives of these two giants show us.

One reason we hail Lincoln as a great democrat is his reputation as a rail-splitter–a frontier working man. But this image had only the tiniest grain of truth to it; he put up one rail fence, in 1830. The idea was seized on by Republican senators for Lincoln’s presidential campaign in 1860, and it took on a life of its own. Lincoln actually spent nearly all his pre-political working life behind a desk or arguing in front of a court. His roots in frontier culture, however, are of course real. As a boy, he attended country schools for a total of one year, on and off. Frontier schools were dodgy affairs back then; teacher certification was not a thing and many “teachers” were outright frauds. There were few books to be had and no standardized curriculum. Mostly the children just listened and repeated back what they heard.

“Much of the school day,” Reynolds reports, “was devoted to individual and group recitation. The idea behind these ‘blab’ or ‘vocal’ schools was that information could best be imprinted on the memory if spoken aloud–a habit that stuck with Lincoln, who later irritated colleagues in his Springfield law office by constantly reading aloud from newspapers or books.”

It is in Lincoln’s early reading habits, not in his over-hyped reputation for manual labor, that his real roots as a democrat begin to reveal themselves. Not only did he form his mind from the crudest intellectual clay, but even more importantly, he became completely receptive to cultural elements in his environment that were as eclectic as the content of his blab books. “[H]is mind was fed early on by all kinds of sources, high and low, sacred and secular,” Reynolds writes. As an adult Abe would speak one moment like a preacher, the next like a barroom raconteur, full of earthy jokes, then quote Shakespeare.

Among the most formative of Lincoln’s school books were “William Scott’s Lessons in Elocution, The Kentucky Preceptor, Noah Webster’s The American Speller, . . . and Lindley Murray’s The English Reader.” We might be tempted to dismiss these eclectic, archly didactic books as mere stage-setters that opened Lincoln’s mind to finer literature later in life, but they actually stayed with him. Lincoln carried ideas and passages from these odd, humble books all his life. He developed a great capacity for memorizing texts from them and invoking them later.

Reynolds writes, “Lessons in Elocution included literary passages such as the soliloquy of Hamlet’s uncle on the murder of his brother (‘Oh! My offense is rank; it smells to heaven’), which Lincoln would spontaneously recite during his presidency.” From Aesop’s Fables, Lincoln took with him the image of bundled sticks, the strength of which he invoked “in a political circular . . . encouraging his fellow Whigs to act in unison rather than separately.”

These are just two examples of the scattershot collection of texts that shaped Lincoln’s mind. What mattered about the passages he memorized was not always their inherent greatness. Some were homely and modest, some scandalous, some preachy, and many–about spelling or grammar–destined to be outdated. But in all they reflected an amalgam of American impulses and ideas, a bounty of differing viewpoints that seemed in a way to embody the “multitudes” that captivated Walt Whitman.

The lessons Lincoln took from his school books were simple but powerful. The first was the importance of clarity. Though we recall the language of the 19th century as florid and meandering (just try getting through the longueurs of Melville or Hawthorne), Lincoln led Americans into a new linguistic paradigm of brevity and precision. Say exactly what you mean, was the new injunction. But Lincoln also managed to cultivate a sense of style that gave his words literary power and moral weight. Lincoln gave the most important speech in American history, the Gettysburg Address, in only three minutes–a mere ten sentences that defined a whole new model of political language. To this day we still believe that anyone with something to say should be as clear and brief as possible, but without sacrificing beauty or style.

Lincoln also learned from the blab schools and quotation books that texts are intrinsically worth committing to memory. He didn’t know he would become president when he started memorizing all those lines; they just spoke to him. We can learn from Lincoln that this remains a habit worth emulating. If a passage of a poem, essay, play, or novel speaks to us, we can and should carry that passage with us. Words anchor us to the world, with all its wonders and trials. When we have nothing else, they are there to guide us, as they did Lincoln, during the gravest tests of human wisdom and courage.

Ringing literary allusions do not merely reflect our inner selves, though; they connect us to others. This was a third lesson Lincoln learned from his school books. A good communicator must know his audience if he wants to relate to them. It is a lesson tailor-made for a politician, but it applies to the rest of us too. One of the things we say about experts and academics when they talk is that they are “off in their own world.” How true! They only seem to relate to their own kind. Lincoln’s school book readings taught him there are all kinds of people in the world, and to be fully human–especially in a democratic society–one must understand them and empathize with them. This starts with speaking their languages.

It was in the winter of 1830 that young Abraham Lincoln discovered a school book called The Columbian Orator. Like other school books already mentioned, it was a hodgepodge, a collection of texts laid out in no particular order but with the ring of nobility to them. As its name implied, The Columbian Orator was designed to teach effective public speaking. The winter after Lincoln began reading his copy of the book on the Illinois prairie, the young Frederick Bailey, 900 miles to the east, in Baltimore, would acquire his.

We know Bailey today by the name he took after escaping slavery–Frederick Douglass. In a way that mirrors the young Abraham Lincoln’s personal history, Bailey-Douglass was also the product, pedagogically speaking, of whatever school books were to be found in his immediate environment. David W. Blight recalls in Frederick Douglass: Prophet of Freedom that young Bailey acquired his copy of The Columbian Orator from Irish friends who carried their school books with them when they saw him on the streets of Baltimore.

As it turned out, The Columbian Orator did have a guiding theme, chosen by its editor, Caleb Bingham, but it would have been hard to tease it out of its haphazard contents. Blight writes of Bingham’s book,

[Its] eighty-four entries were organized without regard for chronology or topic; such a lack of system was a pedagogical theory of the time designed to hold student interest. It held Frederick Bailey in rapt attention. The selections included prose, verse, plays, and especially political speeches by famous orators from antiquity and the Enlightenment. Cato, Cicero, Demosthenes, Socrates, John Milton, George Washington, Benjamin Franklin, William Pitt, Napoleon, Charles James Fox, and Daniel O’Connell . . . all appear at least once, and some several times. Most of the pieces address themes of nationalism, individual liberty, religious faith, or the value of education.

(Image: Open Library)

Taken as a whole, the book was, if the term is not too disparaging, a ragbag. I mean this word in the same way Orwell did when he used it to describe no less a genius than Shakespeare. What Orwell meant, and what I mean, is that a compelling voice–like the one in The Columbian Orator–can impart great wisdom even if it fails to evoke systematic understanding, minute design, or even erudition. It is in the powerful expression of an idea that the reader (or hearer) can see a life-changing truth as in a flash of lightning or hear a higher call to duty as in a clarion note. It’s the voice that matters.

As Frederick Bailey recited the words of The Columbian Orator to himself, essentially undergoing the same rote exercise in elocution and memorization that Abraham Lincoln did in the frontier schools of Illinois, those words took shape and caught fire. This sort of awakening was exactly what the passages in The Orator were meant to produce. Bingham, the collection’s editor, was a Dartmouth-educated abolitionist. He had chosen the texts for the Orator to showcase the central, founding idea of America–that each individual is sovereign and may not be owned or ruled over by others. Without saying the words “slavery” or “abolition,” Bingham assembled The Columbian Orator to teach the reader that slavery was un-American and indeed was at war with the liberal arc of history. In its pages, American school children, Blight tells us, “would have repeatedly encountered irresistible words such as ‘freedom,’ ‘liberty,’ ‘tyranny,’ and the ‘rights of man.'” It was “a vocabulary of liberation.”

All throughout his life, Douglass would refer to his copy of The Columbian Orator as his “rich treasure” and “noble acquisition.” He carried it with him when he escaped slavery. The Orator’s promotion of American ideas poured “floods of light,” Douglass recalled, “on the nature and character of slavery, . . . penetrat[ing] the secret of all slavery and oppression.” Put simply, America would not have in its cultural possession one of its greatest books, Douglass’s epochal autobiography, the Narrative of the Life of Frederick Douglass, without young Frederick Bailey’s chance acquisition of The Columbian Orator, that stiff, eclectic, grandstanding collection of liberal ideas. One of our great prophets might not have found his voice. And the chorus that eventually called for America to hold true to its ideal of freedom would have lacked its most plangent, powerful tones.

Review of “Black Earth: The Holocaust as History and Warning” by Timothy Snyder

BY MATTHEW HERBERT

I was going to open by saying what a tragedy it is that we still need books about the Holocaust. It was the quintessential crime against humanity; we are supposed to be past it now. But genocidal war fueled by the Big Lie has made something of a comeback with Putin’s invasion of Ukraine. “Sane,” “rational” people with an “elected” government are waging a brutal, murderous war to destroy, not just a country, but a nation, in Europe. Here we are again.

Still, we must deal narrowly with the events of today, right? How much can a liberal democrat like myself gain by reading a new history of the Holocaust? All it can serve to do, seemingly, is to highlight again what has been clearly and repeatedly established as humanity’s worst, most monstrous failure. Isn’t rehashing irremediable atrocities a kind of political pornography? By that logic, Lublin, Treblinka, and Auschwitz cannot really tell us much about Bucha, Mariupol, and Kramatorsk.

But maybe the recurring justification for revisiting the Holocaust lies in the audience, not the subject matter. I come from a country, America, where the people think they are naturally too virtuous to commit genocide, and I live in a country, Germany, where atonement for Nazi crimes has become so routine and ubiquitous that it can feel like a hollow ritual.

I am, it turns out, precisely the kind of person for whom Timothy Snyder wrote his 2015 book Black Earth: The Holocaust as History and Warning. As someone who thinks we understand Nazism and its crimes, I tend to believe that unformed sorrow and a stark pledge of never again are all we can offer the wounded human race after the Holocaust. It was all so much pure evil. But Snyder insists we still get the political facts of the Holocaust wrong, and thus we risk repeating it.

The actuating force of the Holocaust was not pure, unmotivated evil, according to Snyder. It was the intersection of humiliated nationalism, economic crisis, and conspiratorial racism. At the crossroads where these elements come together, Snyder argues, “few of us would behave well. There is little reason to think that we are ethically superior to the Europeans of the 1930s and 1940s, or for that matter less vulnerable to the kind of ideas that Hitler so successfully promulgated and realized.”

Before I offer my views on how history is repeating itself today, let me lay out Snyder’s main thesis and arguments. The effectiveness of Hitler’s genocide, Snyder writes, can be measured geographically, on the map of Europe. Although westerners typically call to mind the Third Reich’s deportation of Jews from Western Europe, it was in the East where the mass killing was most effective. More than 90 percent of the Jews in Poland, the Baltic countries, and Nazi-occupied parts of the USSR were murdered, whereas half or more of the Jews in Western Europe survived. Ironically (if the word can be used), Germany’s Jews had one of the highest rates of survival.

The East became Ground Zero for Nazi genocide, Snyder argues, because of how thoroughly Germany destroyed the state institutions there. When government authority was destroyed by the Wehrmacht’s Blitzkrieg, so was the link to the rule of law. Even Hitler’s re-introduction of the law of the jungle, though, was only an enabling condition for the Holocaust. Had the situation been so simple as Einsatzgruppen rampaging without any laws to constrain them, Snyder argues, far fewer Jews would have died. The Nazis acting alone simply didn’t have the capacity to kill by the millions. What they needed, Snyder argues, was a large cohort of highly motivated local collaborators.

And they got them. This is the heart of Snyder’s argument: the unique tragedy of Eastern Europe during World War Two was the fact of double occupation–military conquest first by the Nazis then by the Red Army. To signal loyalty to each occupying power in turn, or often just to survive, thousands of eastern and central Europeans actively contributed, albeit in different ways, to the wholesale killing of Jews. When the Soviets occupied eastern Poland in 1939, they divested Jews of their property and businesses, private ownership being anathema to communism. Non-Jewish Poles moved in and occupied the stolen property. When the Nazis invaded, they co-opted the new Polish property owners into a scheme to kill the former owners, “racializing” what had been a purely political oppression by the Soviets.

In some areas Jews formed partisan groups that proved successful at killing Nazi occupiers. When the Red Army began liberating areas in 1943 that had been defended by partisans, Stalin ordered the partisans killed so they couldn’t claim credit for helping win the war. These patterns repeated themselves in dozens of variations across the East, and the Jews got it coming and going, from both the Nazis and the Red Army.

The historian’s first task, of course, is to present the facts faithfully. Snyder obviously would not have written Black Earth had he not believed he was unearthing some new, objective evidence about what the Holocaust really was. But it is the moral of his book–the warning–that gives it urgency. If we persist in seeing the Holocaust as an instance of utterly unintelligible evil, he writes, we could blind ourselves to its central mystery–how ordinary people carried it out. How we might carry it out again.

“I am a normal man with normal needs,” says Paul Doll, an imaginary SS death camp commandant in Martin Amis’s 2015 novel The Zone of Interest; “I am completely normal. That is what nobody seems to understand.” The novel’s action is set in 1942 and 1943, as it is becoming evident that Germany is losing the war. But even as the Wehrmacht’s military conquests slow and then go into reverse after Stalingrad, the genocide in the death camps picks up pace. Doll has to clear newly arriving trains every day. “We cannot cope with the numbers,” he complains. It dawns on Doll that the death camps have become Hitler’s main effort. The war for Lebensraum is being lost. So the genocide must be sped up. Amis’s unflinching theme in The Zone of Interest is the examination of each character as someone who is, or once was, normal but is now caught up in the reality-bending circumstances of Hitler’s doomed killing frenzy.

“Under National Socialism,” reflects Amis’s protagonist, “you looked into the mirror and saw yourself. You found yourself out. . . . We all discovered, or helplessly revealed, who we were. Who somebody really was. That was the zone of interest.” And in a way, this is the zone of interest for Snyder as well, to insist on seeing the actors in the Holocaust, major and minor, as normal people. They believed a Big Lie when it was credible, in the 1930s, and then became part of the Big Lie’s monstrous bloody reality even after it passed beyond belief in the 1940s.

In Ukraine we are witnessing a hinge moment in history, where just such a transition is happening in real time. Vladimir Putin’s idea of Russia as victimized, surrounded, and unfairly constrained has been fermenting into a mass Russian movement for decades, and it is now exploding into a justification for genocidal murder. Just as Hitler drew an organic link (where there was none) between a real strategic adversary–Soviet Communism–and a helpless, demonizable people–the Jews–Putin has pulled off the same diabolical maneuver. Germany was by natural right a strong, forthright nation, Hitler said, deserving of whatever wealth, land, and power it could grab. That’s just Realpolitik in its purest form. But the Jews devised a global conspiracy of liberalism that kept Germany in check. It wasn’t fundamentally the Jews who were an obstacle to Germany’s greatness; it was the alliance they created. The alliance was too big to conquer, so Hitler went after its putative source, in Jewishness.

While some of the details differ, this view of geopolitics is far too close to Putin’s to be ignored. The Ukrainians must be subdued, he says, not because they themselves are a threat to Russia’s greatness (in his telling they are nothing but homosexuals, leftover Nazis, and drug addicts), but because these weaklings have tricked NATO and the EU into constraining Russia, and in doing so have committed a fatal sin against Russian greatness. Tragically, I think it is plausible that Putin will turn to mass killing as the only achievable war aim left to him once it becomes clear that he, like Hitler, is losing the war.

Notes on “There There” by Tommy Orange

BY MATTHEW HERBERT

In my last post, I made a gesture of détente toward postmodernism, the philosophy that says we create our own reality, including things that seem to exist entirely on their own, such as rocks or numbers or people.

Why would I do such a thing? The idea that the world would be a mere void without our thoughts to substantiate its content seems plainly, irrefutably wrong. The world is just there, right?

I still don’t believe that the claims of postmodernism are literally true, and I still have an old-fashioned attachment to literal truth as the most important kind of truth.

But over the years I have come to believe that the allure of postmodernist claims ought to be taken seriously even if the claims themselves make little sense in their hyped-up, academic form. They sometimes have an important kind of figurative truth to them.

I began my encounter with postmodernism in the field of philosophy–the very realm where hyped-up, academic claims take center stage and fight it out with one another–but it is literature that has softened my objections to postmodernist ideas and given them a playground rather than a field of battle. Certain novels have persuaded me that there are levels of reality that we do in fact create, and that it is sometimes hard to distinguish those levels–socially constructed reality–from the already-there world.

But let me back up.

In the 1990s I belonged to a school of philosophers called analytic philosophers, who believe that the best arguments for or against a given position are expressed as numbered sequences of clear, precise sentences, or even better, symbolic logic. To us, really excellent proofs were more like math than conversation. Here’s an example of one:

(∀C)(((∃S) csg(C, S, “A”)) → ((∃T) csg(C, T, “B”)))
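In case that string of symbols is opaque, here is a hedged gloss. The predicate csg is never defined above; I am assuming it abbreviates course-student-grade, so that csg(C, S, G) means “student S received grade G in course C.” On that assumption, the formula can be typeset and read as follows:

% A hedged reading, assuming csg(C, S, G) = "student S received grade G in course C"
\[
  \forall C \,\Bigl( \bigl(\exists S\; \mathit{csg}(C, S, \text{``A''})\bigr)
    \rightarrow \bigl(\exists T\; \mathit{csg}(C, T, \text{``B''})\bigr) \Bigr)
\]
% In words: for every course C, if some student S got an A in C,
% then some (possibly different) student T got a B in C.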

It became a sort of mean-spirited game to show how ridiculous postmodernism was by translating it into our preferred forms of precise, logical arguments and then spotlighting the consequent profusion of nonsense. (Example: I recall a professor of mine smirking at a line from Being and Time in which Heidegger says that when he enters a room, he “pervades” it. At what rate of motion, the professor wondered, would Heidegger’s pervasion have progressed? Could one outrun it? And so on.)

It was all great fun. But one does read other books than analytic philosophy, eventually. And the best books change you, or reveal parts of yourself you had misunderstood all your life, or didn’t even know were there. For me, the pivot happened through novels. Over the years I have found that many of the most searching novels address ideas and questions that draw on postmodernist concerns, and that I in fact share many of those concerns. Read White Noise by Don DeLillo or The Black Book by Orhan Pamuk and see if they do not affect your sense of reality.

Not everything worth saying in life is translatable to a syllogism.

I have just finished the stunningly good There There by Tommy Orange, which is certainly not translatable into a syllogism. It is a novel about twelve present-day Native Americans who are converging on a powwow in Oakland, California. It is a multi-sided tale told through twelve individual voices. Although the first responsibility of the critic is to focus on the work of art itself, I cannot avoid saying at the outset just how good this book is, especially considering it is Orange’s first, and that he didn’t even start reading literature until he was an adult. His sensitivity to the craft of novel writing is simply astonishing. I hope he lives a long time and writes a lot of books.

Orange’s choice of a collection of narrators rather than a single protagonist is an acknowledgment of how all of our identities are increasingly embedded in virtual networks–of social media, virtual schooling, home-office work, e-commerce, and so forth. It also illustrates Maurice Merleau-Ponty’s observation that individual humans are essentially situated in social settings from which their identities are derived. Over and against the Enlightenment ideal of hermetic individuality (God: “I am that I am.” Thoreau: “I will breathe after my own fashion.”), the postmodern self acknowledges how deeply it is interpenetrated by, and dependent on, others. We are other people, as I often remind myself.

One of Orange’s best-drawn characters is Edwin Black, an overweight, socially awkward aspiring writer whose extensive online research has made him something of a computer nerd as well. While searching for the identity of his long-lost father–a query that is definite, concrete, and personally compelling–he finds something else, more elusive and disorienting: his own intellectual agency is merging with the algorithms and networks of the online world. Almost everyone has experienced something like the weird annexation of self by the internet that Edwin describes as “the open window my mind has become since the internet got inside it, made me a part of it.” Lest we think this kind of encounter is just a static intermingling of online “content” with one’s private thoughts, Black reminds us that it is a relationship that can exert force and direction on us, changing who we are: “Sometimes the internet can think with you, even for you, lead you in mysterious ways to information you need and would never have thought to think of or research on your own.”

We are not just other people; these days we are also other virtual networks.

Take this kind of reflection to its extreme, and you can see how it became the inspiration for the Matrix movies. If, as Jean Baudrillard argues, our selves, the objects of our knowledge, and our very worlds are essentially mediated, we live with the menace of being unable to reliably distinguish our “true” experiences from our mediated ones, our core selves from what has been shaped by others. That’s The Matrix in a nutshell. But as Orange reminds us through Black’s reflections, it is the thin end of this wedge–the feeling of the internet, or any virtual network, making us a part of it–that is most unsettling, because unlike the science fiction movies it is really happening. You need not believe that The Matrix could be literally true to fear the encroachment of the mediated, virtual world onto the unmediated, “real” one. Most of us are not “really” at work, for example, until we log into the hive mind that runs, monitors, in fact is our organization. We are living in a Baudrillardian twilight zone, and Orange is finely, expertly attuned to it. This sensitivity alone makes There There wonderful.

In a luminous, ironic prologue worthy of Kurt Vonnegut, Orange executes a deft act of misdirection to set the stage for There There, surprising the reader with the news that many 21st century Natives are, like Orange himself, urban, and call Oakland, California home. Orange writes:

Getting us to the cities was supposed to be the final, necessary step in our assimilation, absorption, erasure, the completion of a five-hundred-year-old genocidal campaign. But the city made us new, and we made it ours. . . . We made art and we made babies and we made a way for our people to go back and forth between reservation and city. We did not move to cities to die.

He later says that Natives living in Oakland draw as much inspiration from their built-up, urban environment as they do from “any sacred mountain range.” Some of his characters speak West Coast gangster, weaving words and attitude from Tupac into their Native-inflected patois. Who would have thought? The city is in them, and vice versa.

One thing about cities, though, is that they are constantly changing. Can they really help us frame a solid identity if their personality is ever in flux? Augie March would say yes without question, based on his announcement at the beginning of Saul Bellow’s masterpiece, “I am an American–Chicago born.” But in real life, the essence of a city is unstable and unlocatable, a fact that Orange embraces in the very title of his novel. “There there” is snipped from Gertrude Stein’s famous remark on Oakland, her childhood home. Returning as an adult in search of her roots, Stein once observed there “was no there there” in her hometown.

(Stein would not be the only California native to find the geography of her past transmuted and slipping out of her reach. Joan Didion’s 2003 Where I Was From is a deeply felt 240-page rumination on the same subject.)

Like a good novelist, Orange does not simply disclose his worldview to the reader. But through the dialogue of the characters and the convergence of the plot strands on the climax at the big Oakland Powwow, we come to discover how thoroughgoing a postmodernist Orange is. His suggestion that there is no “there” in Oakland despite his characters’ creation of meaning there, in that very place, despite the powwow’s gravitational pull, drawing them there, to that very place, reflects Baudrillard’s idea that a referent–the thing that a word refers to–can evaporate into nothing through a “normal” sequence of shifts in the structure of representations. But can a whole city just disappear?

Maybe. Trace the referent of a place, a society, or an institution–anything–far enough back and you eventually fall through a trapdoor of indeterminacy. What is “home,” for instance? For Natives, Orange probes the idea that there is something stereotypically “original” about unspoiled nature as the Indian’s proper home. The grasslands, the desert, the forest is somehow where he is supposed to be. But why would this be such a fixed notion, Orange wonders:

Nothing is original, everything comes from something that came before, which was once nothing. Everything is new and doomed.

Orange is also attuned to Baudrillard’s concept of hyperreality, or the process of overdetermining the meaning of a representation through mediation and mimesis. Orange lodges this idea throughout his story, in elements large and small. In a line that looks like a throwaway, he has one of his characters muse that she dislikes poinsettias “because of how even the real ones look fake.” They are too brilliantly red, she thinks. In a world where representation is always overdone (think of how fake food is manipulated to look like real food, in fact too good to eat, in advertisements) poinsettias look too much like poinsettias.

There There is filled with these Alice-in-Wonderland gems, scenes where characters’ fears, habits, desires, or actions destabilize the line between appearance and reality. Edwin Black tries to convince his mom that he is (finally) eating better by mediating his own virtuous performance. “‘See?’ I almost shout, holding up the apple for her to see. ‘I’m trying. Here’s a live update for you. I’m live streaming it to you right now, look, I’m trying to eat better. I just spit out some Pepsi in the sink. This is a glass of water.'”

“Virtually everything” 14-year-old Orvil Redfeather “learned about being Indian, he’d learned virtually. From watching hours and hours of Powwow footage, documentaries on YouTube, by reading all that there was to read on sites like Wikipedia, PowWows.com and Indian Country Today.” Orvil plans to translate these virtual representations into “real” Indian ritual when he dances at the Oakland Powwow. On the big day, he boards the bus, wearing his regalia; he muses on being an Indian dressed up as an Indian. Does the double layer of Indianness make him more or less Indian? And what should we make of Orvil’s fellow BART passengers, so bathed in a mediated world that they raise not one curious eye toward him in his getup?

Daniel Gonzales, a small-time drug dealer, uses a 3-D printer to print a 3-D printer. (It will be used, à la Chekhov, to print a gun, which will bring about the climax of There There.) He spends his time online and writing code. To Daniel, it would not come as a shock that there is no longer any there there in Oakland. “I mostly see Oakland from online now,” he says. “That’s where we’re all going to be mostly eventually. Online.” “There” is just a representation of a place, in some ways truer and more useful than an actual, hunk-of-earth place.

The pervasiveness of virtuality might even offer a kind of hope to us mortal humans. Opal Victoria Bear Shield takes comfort in something her mother said right before revealing she was dying of cancer. “She told me the world was made of stories, nothing else, just stories, and stories about stories.” This passage comes about as close as possible to rendering Derrida’s tendentious claim that “there is nothing outside the text” in a beautiful, humane way. In a world like Victoria’s mother’s, where we are only present as parts of a text, we just might live forever. Is that so bad?

Learning to Not Hate Postmodernism

BY MATTHEW HERBERT

I’ve always had a strained relationship with postmodernist philosophy. That’s the idea that we make up reality: there’s truth for you and truth for me, but no Truth with a capital T.

When I read in the 1990s that postmodernism’s leading light, Jacques Derrida, said there is nothing outside the text (Il n’y a pas de hors-texte), I thought, well, anyone sitting on a beach can scoop up a handful of sand and disprove that theory. We don’t need to name sand for it to exist, or any of the rest of reality for that matter.

Of course Derrida meant something more subtle. He didn’t mean there’s absolutely no such thing as the world around us. But without language to shape and categorize our interface with the world, there would only exist an undifferentiated quantum of not-quite-reality. (William James, much more helpfully than Derrida, called this a “blooming, buzzing confusion.” Wittgenstein was more poignant: “The limits of my language mean the limits of my world,” he said.) We write things into existence, Derrida argued, by imposing structures of grammar and meaning on them. (The 1967 book in which Derrida dropped this anti-reality bomb has the forbidding but not especially helpful title Of Grammatology.)

So here’s the first thing to bear in mind about postmodernism: it has a way of overselling its claims. I think the idea is to generate hype; first you say something very radical-sounding and then you hedge and dissimulate till you arrive at a much less exciting truth.

Take this next example: when the postmodernist Jean Baudrillard wrote a book about the Persian Gulf War of 1991, he called it The Gulf War Did Not Take Place. If you read the book you’ll discover what he really meant, which was: (a) the Gulf War did take place, and (b) it was so thoroughly mediated by propaganda and flashy TV images that you could imagine the media spectacle was the whole point and in some way more “real” than the war itself. It’s a much less radical claim than the title promises.

By the way, I fought in the Gulf War and can vouch for (a). I got on a real plane in August of 1990, dodged a (real) SCUD or two, and then flew home in April of 1991.

I think it’s important that I sketch just how sharply I opposed postmodernism even before I knew what it was. When I got around to finishing college in the mid 1990s, I discovered my own worldview, and its philosophical foundations. It was pretty much the opposite of postmodernism. According to my newly discovered hero, Socrates, the world is made up of a set of hard-edged objects defined by knowable concepts and set into relation with each other by immutable laws. Think of gravity, or Mount Everest, or math. They would exist without humans, or language.

I believed this with the fervor of a tent show revival convert. Of course we do not invent the truth that 2+2=4 any more than we hallucinate the fact that there is a physical world all around us. To think otherwise seemed not just factually wrong to me but irreverent in some quasi-religious way. Being postmodernist was a comprehensively wrong attitude about life, a sneer impersonating a theory, as far as I was concerned.

So when I decided to go to grad school to get my PhD in philosophy, it was basically a decision to go to war against the edifice of postmodernism. My hatred for it gave my life meaning. The way wised-up twenty-year-olds could start spouting about science being a “matrix of patriarchal power projections” (or other nonsense) before even trying to understand, say, the periodic table, convinced me that a postmodernist was anyone trying to look cool while avoiding serious intellectual work.

Here’s the second thing about postmodernism: it is mostly written in gibberish; furthermore, this is, I believe, essential to its character. Its lifeblood is the creative obfuscation of ordinary claims and arguments with opaque jargon.

Flip to a random page of any postmodernist tract, and you will encounter deeply unintelligible passages whose only job seems to be generating new, transgressive terminology.

The point, or, rather, pointlessness, of postmodernist writing was exposed by the physicist Alan Sokal in 1996 when he published the now-infamous paper “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity” in the journal Social Text. Sokal filled his article with such compelling nonsense that the journal’s editors mistook it for a razor-sharp rapier stroke against the “objectivity” of science. The construction they put on “Transgressing the Boundaries” was that the epistemological ambiguity at the heart of quantum phenomena had significant implications for science, society, and the whole notion of truth. Sokal, in fact, believed nothing of the sort. A pocket-protector-and-pen kind of guy, he had made the whole paper up just to see what he could get away with. It was a huge embarrassment for postmodernists.

My club of philosophers thought this was pretty cool. There is absolutely nothing sweeter than watching your enemies self-immolate in public. Or at least that’s how I felt in 1996.

But I didn’t finish my doctorate: I didn’t dismantle postmodernism. It turned out that believing other people’s worldview to be incredibly dumb was not enough oomph to get me through a PhD. And so I moved on. Although I never forgot about the fight between objectivity and whatever postmodernists believed in, my ability to join in that combat faded in importance over the years. I read novels instead of philosophy.

But something has changed over the last 27 years. Even though I remain fundamentally Socratic in my worldview, I have not been able to avoid some of the fears and suspicions that originally motivated postmodernism. And so I have found the need to re-digest certain postmodernist claims in ways that make sense to me and help me cope with a reality that is not quite as stable as I believed it to be in 1996.

The biggest development that has brought on this rethink is the empowerment of a global anti-enlightenment movement that thrives on the abolition of the distinction between truth and falsehood.

The idea that we might not be able to discover certain facts is not new. Science is hard, history has gaps, and so forth. The scope of human knowledge must have limits. But the attitude that results from a wholesale skepticism about everything has taken on new, dangerous dimensions. I blame reality TV.

In his groundbreaking 2014 book, Nothing Is True And Everything Is Possible: The Surreal Heart of the New Russia, Peter Pomerantsev reports on the reality-bending twists and turns at the foundation of the Kremlin’s control of information in Russia. Very briefly: When Putin took power in 2000, he cultivated a growing national taste in Russia for reality TV. Reality shows took off, rewarding Russians for suspending their ability to tell fact from fiction by providing them novel, nonstop entertainment.

At the same time, Putin oversaw the construction of a political machine that created, not just his own party’s image, but all of what would pass for civil society, including the political opposition. Let me underline that: every time an opposition party has appeared in Russia since 2000, it has been created by the Kremlin. This means thousands and thousands of people who genuinely believed they were part of an opposition–indeed subjectively were part of an opposition–were really just actors in a play written by Putin. People are not stupid, of course, and many Russians sensed that their politics was not what it seemed; ringmasters somewhere were secretly orchestrating Russia’s political circus. No matter, Pomerantsev reports; hip skepticism became the new normal. Russia’s elites and growing middle class came to the delirious conclusion that, not only politics, but “everything” was just PR.

Do you see where this is going? If you regard information as primarily a commodity to be served up to empower the ruling class and titillate the masses, you can drop certain bothersome concerns about how true the information is. You can also get people to ignore questions about what information is for in the first place (admittedly an easy task; most of us stop worrying about this sort of thing after high school or college).

The delusion-forming nature of this trend could become dangerous. At its endpoint, you could even get millions of people to believe that their army was surgically dismantling a Nazi junta in a neighboring country actually governed by a freely elected Jew and that the operation was going to plan despite the wholesale, daily destruction by bombardment of residential areas and the sudden flight of three million refugees. Remember?–Language stopped having to correspond with reality back in the 2000s. And if you, as an individual, were confronted with something that looked like a troubling fact–say, the firebombing of a maternity hospital–you could simply brush it off as fake news. In the digital information ecology of 2022, everything is fakeable! You need never believe anything you don’t “like.”

Kharkiv, Ukraine (Image: Sergey BOBOK/AFP)

How about America? Our denial of reality is just zany infotainment, not a murderous pathology, right?

The odious spectacle of Alex Jones compounding the devastation of Sandy Hook survivors by floating his conspiracy theories and then retreating to the defense that no one can prove anything one way or another stands front and center. Everything is just PR, right? This pose of hip skepticism demands a reconsideration of Foucault’s claim that knowledge is nothing more than ideology imposed by force. While this claim seemed like rank nonsense to me in 1996, it has resurfaced as a terrifyingly plausible reality: What you can get people to know pales in comparison to what you can get them to act on as if they know. Sandy Hook “truthers” have physically assaulted surviving parents for allegedly perpetrating an elaborate fiction in the service of an all-powerful gun control conspiracy. Gather a large enough force of anti-knowledge troglodytes (such as Jones’s followers), and you can submerge the very idea of truth. This is the defeat of objective reality by mob rule.

And so I have had to reconsider some of the tenets of postmodernism. The very existence of reality TV tells us there is a human appetite for destabilizing the relationship between facts and representations, mind and world. Whether you think those relationships are inherently unstable (à la Foucault and Derrida) or have been deliberately degraded by the powers that be (à la Baudrillard and sometimes Foucault) is, with World War Three looming, rather beside the point. We should not blow the world up in murderous defense of each of our rights to believe whatever we want to believe.

My sermon is over. Here’s the recessional:

I’ve been searching for the reasons I’ve all but stopped using social media. Most of them are ordinary reasons. I got bored. I got tired of meaningless flame wars. I was embarrassed by almost everything I said, either for being too earnest or too obscure. But the deepest reason was resoundingly postmodern. Trying to have any kind of discourse on social media only ends up empowering the enemies of discourse. The information ecology of social media is actuated in a way that kills discourse. It submerges the possibility of truth in a new skepticism now available to all. Just like the “oppositionists” in Putin’s Russia who didn’t know they were the Kremlin’s pawns because their motives were pure, you might be subjectively sure you are having a sincere, well-informed argument on social media, but you’re actually perpetuating an information ecology that ruins argument and is helping ruin the evidentiary boundaries of normal discourse–facts, truth, and other nice things we used to have.

We are, to close with a hat-tip to Baudrillard, participating in a simulation of discourse, not real discourse. I guess that’s what I got tired of in social media. But the cat videos are still great. I stay on for those.

Abraham Lincoln: Self-Help Guru

BY MATTHEW HERBERT

For Christmas, my mom got me the book I’d been waiting more than a year to buy, Abe: Abraham Lincoln in His Times, by David S. Reynolds. It came out in late 2020, but it’s a 1,000-page tome, and, like all of Reynolds’s books, I wanted to take time to absorb it. For most of 2020 and 2021 I was just coping with, you know, life, so I kept putting Abe off.

Reynolds writes long, detailed books about the cultural forces that shaped major developments in U.S. history, focusing on the 19th century. Even before our current struggles with the blight of nativist authoritarianism, I had long thought the 19th century was pivotal for understanding our country’s self-image and destiny. (How many articles have you read in the last five years that say we are re-living the 1850s? Yeah, me too. But they’re not wrong.) So, Reynolds, I have discovered, validates many of my assumptions about how to read U.S. history.

Anyway, the “fierce rush of life” (Wodehouse) still keeps me from writing much of anything serious here, which is probably just as well. One thing this blog has taught me is that I’m still in the process of purging all the tedious, precious, and otherwise horrible writing that is still inside me and which must be gotten rid of before I can do better. Onward ho.

But yesterday I had a bit of the old feeling again–where I read something trenchant and profound, go on a trail run, and discover a bunch of connections between what I just read and real life. So here it is.

We all love formulas. They simplify our problems. Or they clarify something annoyingly vague about reality. (A friend of mine wrote this wonderful post about trail running that opens with a discussion along these lines of the Hegelian dialectic. Sound intimidating? Well, check out my friend’s post and discover that you already know what the Hegelian dialectic is, and it’s everywhere. Then go show off to your friends.)

But I digress.

As Abraham Lincoln was trying to figure out how to turn his opposition to slavery from a fuzzy moral intuition into an actionable political platform in the mid-1850s, he ultimately settled on the following formula. The challenge to America must be posed as (1) the “naked question” and (2) the “central idea.”

These were–and are–deceptively simple terms. For Lincoln, the naked question was, Can America remain true to its founding principle of human equality while extending slavery into its expanding territory? The central idea was equality. Lincoln did not just mean that the idea of equality was central to resolving America’s conflict. He meant that the idea of equality must be made to occupy the country’s political center, so that abolitionism could be a centripetal force rather than a centrifugal one.

I haven’t finished Abe yet, so I won’t dwell on all the ways Reynolds argues this was an ingenious formulation of America’s basic political identity crisis. Read the book, if you have time. It’s awesome.

What I wanted to say today is how well I think Abe’s formula applies to almost any challenge in everyday life. Not just a political genius, Abe is also a self-help guru, waiting to be discovered. As countless bumper stickers and inspirational posters tell us, almost everyone is struggling with a hard problem. On many days, the problem might seem irresolvable because it is intractable. We can’t get our hands around it because we can’t get our mind around it.

About two-thirds of the way through my trail run yesterday–where inspiration often hits me–it occurred to me that the things I struggle with the hardest in plain old life are susceptible to Abe’s political analysis.

Think about it: almost anything big that stands in the way of your happiness or well-being can probably be put into Abe’s formula. What is the naked question? And what is the central idea? Not some zany, extreme idea, but something you can make sense of, something around which you can organize your life’s energy and priorities.

Like Hegel’s dialectic, the naked-question/central-idea formula, once you grasp it, can be applied usefully almost anywhere. Last week, days before I read about Abe’s formula, I read this insightful analysis by The Atlantic’s Derek Thompson about what’s wrong with America. And when I thought about it yesterday out on the trail, voilà!–it hit me that Thompson was putting the problem precisely in Abe’s terms. The naked question: Why is America failing so clearly at the same core national priorities we used to be good at? The central idea: Abundance. Thompson argues that if we put abundance at the center of a new national agenda, we can rebuild our national greatness and decency.

Even though Lincoln felt an extreme moral repugnance about slavery, he calculated that making his outrage the centerpiece of his political agenda would fail. It would fail the whole nation. Abolition, the actual goal, could only be brought about by putting something politically viable at the center.

I won’t go into the depths of Lincoln’s genius, except to say, read Reynolds’s book. Again, it’s awesome. Today’s idea is how useful Lincoln’s political analysis can be to us ordinary folk. Try it: Think of a big, tangled problem that holds you back and see if you can break it down into a naked question and a central idea. It works.

Orwell on the Sanctity of Life

BY MATTHEW HERBERT

George Orwell could be cold eyed and occasionally cold hearted. His strangely affectless reaction to his wife’s death during a routine surgery in 1945 was even more stoic than the English code of stiff upper lip called for. He mentioned Eileen’s death briefly in perfunctory letters to his closest correspondents and then went back to writing an essay on nationalism, one of his best.

As a lifelong socialist, Orwell knew that the struggle to end all forms of man’s domination over man would require sacrifice, and, in one of his off-script moments, he seemed to relish the prospect of London’s streets “running red with blood” (of aristocrats, presumably) when the revolution came.

One can take this kind of exercise too far, but it is possible to imagine Orwell exploring his own cold-bloodedness when in 1984 he has Winston Smith agree, a little too eagerly, that he is willing to kill and commit all manner of cruelties for the cause of bringing down Big Brother. (This is the scene in which Winston is trying to join the underground Goldstein rebellion, which turns out to be a setup.)

However idiosyncratic Orwell’s subjective experience of grief may have been, and however zealously he accepted a commitment to fight and die for liberalism, he was, I believe, more deeply devoted to a counterpoised set of ideas and feelings which today we call the sanctity of life.

Orwell was always most in his element when writing about the rights of the powerless. Look at the situation of any nameless, faceless suffering person, he often wrote, and you will see that the rights they lack have been expropriated by others who have the power to restore them. Human suffering is not inevitable, or at least it should not be accepted as such. Orwell was constantly reminding us citizens of the rich, developed, liberal world that we are not mere witnesses to the suffering of the downtrodden, but that we often share responsibility for stripping them of the very rights that could protect them from deprivation.

So far I am not telling you anything you do not know about Orwell. Even if you’ve never read a word he wrote, you know he was one of the 20th century’s greatest advocates of human freedom. He believed in his heart and defended till his death what Franklin Roosevelt famously called the four freedoms–freedom of speech and worship and freedom from fear and want.

But, somewhat behind the scenes, Orwell was also a moral entrepreneur. He observed certain human sins and wrote about them in new ways, which would eventually germinate into whole new movements.

Presciently, Orwell wrote about moral outrages against three groups that had never received focused political attention before: the child, the medical patient, and the criminally convicted. His brief, subjective observations posited the unusual idea that these groups were possessed of rights, which could and should be enshrined in law. In each case, I believe, Orwell’s germ of a concept is rooted in a deeper idea that there are no second-class humans–that every life is sacred.

Let’s look at each one, starting with the convicted criminal.

In one of Orwell’s first widely read essays, “A Hanging,” he recounts how in 1926 he was detailed as a colonial policeman in Burma to assist in hanging a man convicted of an unnamed crime. As the police march the condemned man toward the gallows, and the convict reflexively sidesteps a puddle, Orwell’s narrator (probably fictionalized–we believe Eric Blair, colonial policeman, merely watched the hanging and did not participate in it) has an epiphany:

It is curious, but till that moment I had never realized what it means to destroy a healthy, conscious man. When I saw the prisoner step aside to avoid the puddle, I saw the mystery, the unspeakable wrongness, of cutting a life short when it is in full tide. This man was not dying, he was alive just as we were alive. All the organs of his body were working–bowels digesting food, skin renewing itself, nails growing, tissues forming–all toiling away in solemn foolery. His nails would still be growing when he stood on the drop, when he was falling through the air with a tenth of a second to live. His eyes saw the yellow gravel and the grey walls, and his brain still remembered, foresaw, reasoned–reasoned even about puddles. He and we were a party of men walking together, seeing, hearing, feeling, understanding the same world; and in two minutes, with a sudden snap, one of us would be gone–one mind less, one world less.

Albert Camus would eventually turn this sentiment into a philosophical position (with the publication in 1957 of Reflections on the Guillotine), which, in turn, became a political movement against the death penalty. Orwell said it first. The “solemn foolery” of an ongoing human life is beyond the authority of the state to deliberately cut short. Over the years, several ancillary arguments would be added to the case against capital punishment, including the salience of wrongful convictions, but they would all build on Orwell’s original idea that the reasoned, deliberate decision to kill an already incapacitated human is always immoral. It is unspeakably wrong.

Image: Amnesty International, the world’s leading human rights advocate

In 1929, the 25-year-old Eric Blair lies in a bed in one of Paris’s worst, poorest hospitals, ostensibly being treated for pneumonia. The man in the bed next to him has died, and the color has drained from his face: an eerie prefigurement of Orwell’s own death from tuberculosis in 1950.

Earlier, the hospital staff have “processed” Blair with the offhanded sadism of prison guards. They interrogate him, take his clothes, and dispatch him across an icy courtyard to search for his assigned ward in the dark of a February night. Blair’s medical treatment, when it comes, is clinical and medieval. The doctor regards him as a repository of “procedures” about which he (Blair) is uninformed and over which he has no say. The nurses and orderlies mechanically call out the numbers of the patients dying around Blair. The man who would become Orwell is literally waiting for his number to come up. The “medical professionals” charged with his care are plainly indifferent to his fate and, if anything, seem inconvenienced by his presence.

Blair escapes the hospital’s horrors as soon as he has the strength, not waiting for a discharge.

The writer George Orwell would live to see the beginning of a sea change in medicine, which he wrote about in “How the Poor Die,” the 1946 essay in which he recalls his experience in the Paris hospital. The name of his essay says it all. The poor die in miserable conditions, often with no healthcare. What most of us regard as the normal, humane mode of medical care is actually a privilege which we pay for in money. It is not a universal human right.

But today we are closer. The World Health Organization declared in its 1946 constitution, “The enjoyment of the highest attainable standard of health is one of the fundamental rights of every human being without distinction of race, religion, political belief, economic or social condition.” More significant than these fine words, most developed countries have decided to act to protect healthcare as a right. In Canada, Europe, Australia, and much of Asia, people have access to the highest attainable standard of healthcare, funded by their governments, often through efficient, single-payer insurance schemes.

As Orwell lay dying of tuberculosis in the long winter of 1949, he recalled a conversation he’d had with the journalist Malcolm Muggeridge. Muggeridge had remarked to Orwell that “anyone” who’d lived in Asia had stepped over the bodies of dead children in the street and had in effect grown callous to such horrors. It led Orwell to observe:

I read recently in the newspaper that in Shanghai (now full of refugees) abandoned children are becoming so common on the pavement that one no longer notices them. In the end, I suppose, the body of a dying child becomes simply a piece of refuse to be stepped over. Yet all these children started out with the expectation of being loved and protected and with the conviction which one can see even in a very young child that the world is a splendid place and there are plenty of good times ahead.

Orwell then reflects that his own society was not so different, not so long ago. Londoners might not have unthinkingly stepped over the corpses of children in the street, but they very clearly valued children’s lives less than their own. “One of the differences between Victorians and ourselves,” Orwell reflects, “was that they looked on the adult as more important than the child. In a family of ten or twelve it was almost inevitable that one or two should die in infancy and though these deaths were sad, of course, they were soon forgotten, as there were always more children coming along.” It was simple math.

But with that simple math came a tortured truth, or at least something regarded as the truth. Adults, as fully developed persons, were accorded full protection against life’s hazards and full support for their onward movement. Children were treated as second-class humans because they could not be relied on to survive and return the adults’ investment in them.

Today we have turned this formula on its head. It is the potential of children, more than the already-realized value of adults, that we exert ourselves to protect. Even in Orwell’s day, he recorded, “[T]he death of a child is the worst thing that most people are able to imagine.”

But it was not math, not the changing actuarials of the postwar 20th century, that made Orwell a champion of the child. It was his own brief but intense experience as a father. Orwell loved and, yes, doted on his son Richard (it’s in his letters). It is fair to say that Richard was the last love of Orwell’s life. Eileen had already been four years gone when Orwell soliloquized in a 1949 essay that the one redemptive experience of being human was “fastening one’s love upon other human individuals.” It must have been the five-year-old Richard he had in mind when he wrote this. The heartbreak of leaving Richard behind was the inevitable price of being human, but it was worth paying.

In the last two years of his life, what Orwell most deeply regretted was his declining ability to play with Richard. The pair had worked together on their patch of farm on the island of Jura, and they had gotten into adventures and misadventures together.

Every parent, of course, grieves the prospect of leaving their children to fare without them, so it is worth putting Orwell’s more particular regret in the context of his writing. Orwell knew he was not long for this world even when Richard was as young as five. But being bedridden not only limited Orwell’s ability to protect and express his love to Richard while he was still alive; it threatened Richard’s “conviction that the world is a splendid place with plenty of good times ahead.” Orwell once piercingly wrote that he didn’t want Richard to think of him as someone who is always sick and unable to play. He wanted Richard to have everything a child deserves, even if one of those things was an optimistic delusion of wellbeing that waxes during the high tide of life and then wanes with age. Orwell thought children deserved to see the world as a good place, and they should be helped to find their deepest fulfillment in it. They are not mere replacement workers, soldiers, or handmaidens.

Children have rights, then. The United Nations says so. I believe Orwell was prophetic in intuiting those rights based on his own experiences of war, poverty, politics and, most importantly, fatherhood.

Indeed it is not giving Orwell too much credit to recognize that many of the rights movements that would mature at the end of the 20th century–including the rights of convicts, patients, and children–are foreshadowed in his writings 50 years beforehand.

The Meaning of Haymarket Square: How Marxists Won the Eight-Hour Day for Working Americans

BY MATTHEW HERBERT

There is no “straight” history of the United States. The standard version is the one told from the perspective of organized money; it is not just a plain, objective recounting of facts.

In his 1980 masterpiece, A People’s History of the United States, Howard Zinn informs the reader from the get-go that his perspective will be different. He will tell the story of America from the view of Americans who were on the receiving end of oligarchy–the poor, the oppressed, the disenfranchised, the silenced, excluded and ignored. The ninety-nine percent, if you will.

(And by the way, in 1980, assuming that America was an oligarchy turned out to be a pretty good working hypothesis. In 2014, two political scientists–stout guardians of the status quo if anyone is–concluded that Zinn’s assumption was sound: our country was run by a small group of economic elites with exclusionary access to power.)

I have been heartily enjoying re-reading Zinn these last couple of weeks. To me it’s a puzzle that so many “ordinary” Americans seem allergic to Zinn, or to any kind of critical retelling of our history. (1619 Project wars, anybody?) If we are indeed the home of the brave, why should we fear hearing voices from the past that have been forgotten or marginalized or blotted out? Can our faith in our founding ideals not withstand the testimony of ordinary, powerless people?

I don’t do a lot of flag-waving, but I would only want to belong to a country that keeps digging up its past and trying out new versions of its history. Any other kind of country is not free–and is also deeply uninteresting, which is another kind of problem.

If the authorities get the people to forget all the “irrelevant” facts that have been winnowed away to create the official version of history, eventually the unofficial version will simply die away. Memory and the public record are the only things that enable us to think honestly about who we used to be and how we have changed. If the authorities can manipulate those two things sufficiently, they can create a history of themselves that is impervious to examination. Future political scientists will not get to call them oligarchs. We will be forced to accept their version as the only one. It will be as if the state’s antagonists never existed. And weren’t we born from an antagonistic movement? Didn’t Jefferson himself say that a democracy needs rebels?

So with that thought in mind, today I want to showcase Zinn’s recounting of the history of Haymarket Square. As an American living abroad for most of the last 30 years, I’ve been quizzed more than once about this event. Almost every developed country east of the Azores celebrates May Day, a date renowned as a victory of labor over capital.

Furthermore, almost every educated European also knows two slightly incongruous things about Haymarket. They know that the events in Chicago are venerated almost exclusively by the political left, and they know that America today is curiously devoid of memories of Haymarket. It’s not just that we have forgotten about it. We don’t seem to want it in our history. We even moved our version of Labor Day to a whole new month to keep it free of socialist taint.

Generally speaking, my interlocutors are not setting me up for a gotcha moment when they ask about Haymarket. They genuinely want to know how such basic information about it could have been purged from the public consciousness in the very country where it happened. It’s puzzling.

Zinn introduces his retelling of Haymarket by recalling a poem of the day, “My Boy.” It goes:

I have a little boy at home,
A pretty little son;
I think sometimes the world is mine
In him, my only one . . .

Ere dawn my labor drives me forth;
’Tis night when I am free;
A stranger am I to my child;
And stranger my child to me.

When we think of the labor movement, we think of strikers demanding two things–higher pay and better conditions. But the subject of this poem doesn’t just want a job that’s better remunerated or safer or easier to do. He wants a life. He wants his days not to be dictated to him in a way that forecloses privacy, agency, and the normal human bonds of love.

By 1886, labor movements across the country were gaining momentum. The workers had nothing to lose but their chains, to paraphrase a Certain Someone. Zinn recounts that, from the days of Revolutionary America onward, laborers had worked 12- to 16-hour days, and many considered a mere 9-hour shift on Saturdays a godsend. They were paid poverty-level wages across almost all industries.

As the Industrial Revolution gathered force, producers’ need for labor skyrocketed, and by the Civil War, cities across America (mostly in the North) had become huge slums of the working poor. Contrary to Horatio Alger myths, there was no way up and out of the slum, and this was by design. The system needed those masses of the powerless, immiserated poor to stay where they were and spend their every waking hour working.

When the American Federation of Labor called for nationwide strikes on 1 May 1886, its explicit goal was to end the working person’s entrapment in a workday that permitted no private life, no time to be anything other than a factory hand. One group in Chicago that answered the AFL’s call–indeed, anticipated it–was the Central Labor Union. Led by two Marxists, Albert Parsons and August Spies, the CLU had published a manifesto the previous year. Here is the main part of it:

Be it Resolved, That we urgently call upon the wage-earning class to arm itself in order to be able to put forth against their exploiters such an argument which alone can be effective: Violence, and further, Be it Resolved, that notwithstanding that we expect very little from the introduction of the eight-hour day, we firmly promise to assist our more backward brethren in this class struggle with all means and power at our disposal, so long as they will continue to show an open and resolute front to our common oppressors, the aristocratic vagabonds and exploiters. Our war cry is “Death to the foes of the human race.”

Even without today’s advanced sleep science, common sense and normal bodily imperatives tell us we need about eight hours of sleep each night. The oligarchs of 1886 America said that was all we needed, period: eight hours of sleep and 16 hours of work. You would have had no use for any life outside the factory and your meager bed; you were a mere extension of the machine you attended.

When the CLU had the temerity to assemble thousands of strikers in Chicago on 1 May against this idea, the authorities sent out the police, as usual. Many strikers quit under fire; many others were arrested. Spies wrote a fiery pamphlet calling for stiffer resistance, and on 4 May a smaller group of protesters gathered at Haymarket Square. What they didn’t know was that an agent provocateur was among them, and at the end of the gathering he threw a bomb at the police, killing seven of them.

Image: History.com

With no physical evidence to identify who threw the bomb, the public prosecutor went after Parsons, Spies and six other CLU leaders. The lack of evidence was no barrier to achieving justice. Zinn recalls, “The evidence against the eight anarchists was their ideas, their literature; none had been at Haymarket that day except Fielden, who was speaking when the bomb exploded. A jury found them guilty, and they were sentenced to death.”

As Christopher Hitchens reminds us of the Catholic Church, it is worth remembering what it was like when it was strong, even though it seems docile today. The Inquisition and the Church’s other atrocities cannot simply be tossed down the memory hole.

I take a similar lesson from Zinn’s history of the contest between labor and capital in 19th-century America. Give capital enough power, and it will deny that you are even a human being. It will find a way to deprive you of a life of your own, and it will pay for “respectable” courts to convict you of thought crime if you demand more. For that is indeed what Parsons and Spies were convicted of. They were hanged in 1887. They were executed for thinking that workers deserved to have their own lives–a third of their day in which they could love, loaf, read, garden, or do whatever made them them. It is vital that we be able to recall a time in our history when this idea was deemed so dangerous that the state dispassionately killed its authors.

Parsons and Spies did not win the eight-hour workday alone, but they did spearhead its victory and made the ultimate sacrifice for it. I quoted their manifesto at length on purpose. It’s a discomfiting document. Its authors were Marxists, and their prose shows it–turgid, militant, straddling a line between peaceful protest and violent rebellion. It is also an all-American document. If you feel entitled to your eight-hour workday, as I readily admit I do, take a moment to remember that the bravest, most committed partisans of this privilege–all-American Marxists–were hanged for bringing it to you. That is the meaning of Haymarket Square for me.

Howard Zinn and the Legacy of Orwell

BY MATTHEW HERBERT

For students of Orwell, Howard Zinn’s 1980 masterpiece, A People’s History of the United States, provides an endless source of inspiration and reflection. All history writing, Zinn argues, is a recounting of facts, but the choice of which facts to recount is shaped by ideology. So when you look back at the stories that historians tell–or when you want to tell a new story of your own–you must try to be critical and candid about which ideology is at play.

Orwell wrote that his only talent was a “power of facing unpleasant facts.” And this talent was not merely an idle form of pessimism. Orwell’s most militant attacks on the status quo were calculated, deliberate and, yes, optimistic. He saw around him a society of fundamentally decent people who were nonetheless blind to the mass thievery, subjugation, and brutality of the colonialism that provided their income. Orwell wrote over and over again that the English people’s comfort and gentleness–national virtues he sincerely admired–were based in a widespread ability not to see what was right in front of their noses. He wanted them to do better.

Orwell was also devastatingly frank about his own ideology. Near the end of his life he wrote in an essay that since 1936, when he had gone to Spain to fight for the republic, everything he published had been “propaganda” for democratic socialism. Of course he believed he was writing the truth, but he didn’t believe that any writer could work completely objectively: it was impossible to write about anything of importance, Orwell believed, from an ideologically neutral perspective.

From the moment you crack open A People’s History of the United States, you see Zinn exuding a kind of intellectual courage that would do Orwell proud. The standard view of American history is not factually wrong, Zinn writes, but it dissembles, overshadows, or understates so many unpleasant facts that it ends up telling a deeply warped version of how our country came to be. Every single-volume history of the United States on offer before 1980 invited its readers to avoid seeing what was right in front of their noses.

All honest historians know, Zinn writes, of Columbus’s cruelty and greed in “dealing” with the native Arawak people of the West Indies. Columbus himself records that one of his first acts was to “take by force” several Arawaks to interrogate them about the location of gold. He also records how he enslaved thousands of Arawaks and sent them back to Spain. Tens of thousands more were to follow. Many were simply massacred. Once Columbus landed on their shores, the Arawaks’ history became one of subjugation, enslavement and death.

The standard histories do not lie about such devastation; one historian, Samuel Eliot Morison, even calls it genocide. Yet the same historian concludes that Columbus “had his faults” but must be remembered for his seamanship. What the historian accomplishes by this kind of subterfuge is, Zinn writes, worse than lying. The loyal historian

refuses to lie about Columbus. He does not omit the story of mass murder; indeed he describes it with the harshest word one can use: genocide.

But he does something else–he mentions the truth quickly and goes on to other things more important to him. Outright lying or quiet omission takes the risk of discovery which, when made, might arouse the reader to rebel against the writer. To state the facts, however, and then to bury them in a mass of other information is to say to the reader with a certain infectious calm: yes, mass murder took place, but it’s not that important–it should weigh very little in our final judgments; it should affect very little what we do in the world. . . . To emphasize the heroism of Columbus and his successors as navigators and discoverers, and to de-emphasize their genocide, is not a technical necessity but an ideological choice.

To weigh and arrange the facts of history in a certain way is an ideological choice. And just as Orwell tells us plainly that his writings are all “propaganda,” Zinn tells us what his ideological choice is: he will tell the history of America focusing on the lives of the forgotten, subjugated, murdered, silenced and oppressed. Such a history may not give us a perfect picture of who we are or how our country came to be, but it would–and does–go a long way toward correcting a historical narrative shaped only by the ideology of conquest, money, and property.