The Best Words

BY MATTHEW HERBERT

Do the words we use really matter that much?

Of course they do. We couldn’t navigate the world without them. But since the “right” words are rarely cut and dried, what matters supremely is finding the best ones.

One challenge to finding the best words, though, is that they are constantly changing. In the 1970s you could have sincerely denigrated someone’s cool by calling them a jive turkey. To use the term today, though, could only be a joke about the corniness of times gone by.


Orwell said that for our words to gain real purchase on the world, we must constantly reinvent them. This is the main point in his justly famous essay “Politics and the English Language.”

As it is still such an important essay, I’ll quote Orwell at length on the need to refresh our words. He alerts us to a handful of habits that promote imprecise language, including:

DYING METAPHORS. A newly invented metaphor assists thought by evoking a visual image, while on the other hand a metaphor which is technically ‘dead’ (e. g. iron resolution) has in effect reverted to being an ordinary word and can generally be used without loss of vividness.

But in between these two classes there is a huge dump of worn-out metaphors which have lost all evocative power and are merely used because they save people the trouble of inventing phrases for themselves. Examples are: Ring the changes on, take up the cudgel for, toe the line, ride roughshod over, stand shoulder to shoulder with, play into the hands of, no axe to grind, grist to the mill, fishing in troubled waters, on the order of the day, Achilles’ heel, swan song, hotbed.

Many of these are used without knowledge of their meaning (what is a ‘rift’, for instance?), and incompatible metaphors are frequently mixed, a sure sign that the writer is not interested in what he is saying. Some metaphors now current have been twisted out of their original meaning without those who use them even being aware of the fact. For example, toe the line is sometimes written as tow the line. Another example is the hammer and the anvil, now always used with the implication that the anvil gets the worst of it. In real life it is always the anvil that breaks the hammer, never the other way about: a writer who stopped to think what he was saying would avoid perverting the original phrase. . . . 

By using stale metaphors, similes, and idioms, you save much mental effort, at the cost of leaving your meaning vague, not only for your reader but for yourself. This is the significance of mixed metaphors. The sole aim of a metaphor is to call up a visual image. When these images clash — as in The Fascist octopus has sung its swan song, the jackboot is thrown into the melting pot — it can be taken as certain that the writer is not seeing a mental image of the objects he is naming; in other words he is not really thinking.

Using worn-out mental imagery is a sign that we are not thinking clearly. Instead of pressing ourselves for vivid new words, we just grab from the shelves whatever stock phrases fill out our thoughts most conveniently.

In his recent book On Tyranny: Twenty Lessons from the Twentieth Century, the historian Timothy Snyder argues that politicians enlist our lexical laziness to inculcate mental habits that support their agendas. We can be hypnotized by the repeated use of certain phrases into accepting implicit judgments, ideas or values we have not thought through.

If you find yourself repeating a stock phrase (and they abound in political slogans), chances are a notable person has inserted it into the public discourse to save you the effort of thinking about something he wants you to accept at face value. I have to monitor, for example, my use of “reactionary right,” or it will come out automatically.

Echoing Orwell, Snyder exhorts us, “Avoid pronouncing the phrases everyone else does. Think up your own way of speaking, even if only to convey that thing you think everyone else is saying.”

The novelist Martin Amis thinks the freshness of a writer’s style signals the clarity of his insight. Bad style, he says, is not just clumsiness, dreariness, or vagueness of expression; it is a failure to grasp the thing one is trying to write about. In fact, for Amis, to write well is to wage a “war against cliché.” (Check out his excellent book of the same title.)

It is instructive, and sometimes just fun, to dig up phrases that have become clichés but were once useful, creative expressions that even Orwell or Amis could have been proud of.

For example, Orwell kept a diary during the first two years of World War Two. Like many in Great Britain, he sensed his country was gearing up to fight a millennial war, one that would abolish the social privilege of the wealthy and prioritize the basic needs of the masses. Hundreds of thousands of mobilized Britons were saying to themselves, and increasingly to one another, that this was the last time they would toil and bleed for the aristocracy. But the aristocracy, Orwell noted, didn’t see the change coming. They continued blindly to be chauffeured to London’s fashionable spots in mink scarves and bespoke suits, fretting about when the best restaurants’ menus would go back to normal, “as if,” Orwell said, “the other 99 percent of the population did not exist.”

Today, “the other 99 percent” has become such a tired phrase that it shocked me to see it (first?) written down in 1940! Of course Orwell was the very man to come up with it.

Sometimes writers pass beyond mere analytic clarity and achieve a magical, prophetic grasp of words at an inflection point: phrases they might not even see coming but which irrupt into history and embed themselves in our consciousness.

To wit:

In his sprawling novel Vineland, Thomas Pynchon mixes elements of gonzo, Conrad and Kafka with his own madly inventive magical realism to tap into America’s dark political subconscious. The book is set in the 1980s, when Reaganism is orchestrating a corporatized reaction, of vague, triumphalizing menace, to Vietnam and Jimmy Carter. Private prisons are on the rise, and so are the two-bit drug convictions that fill them with profitable, expendable bodies. The state gives free rein to the police, and they use it. Strikes are broken, cults are suppressed, order imposed. In one passage, Pynchon needs a phrase for the small-time fascist’s view of the status of women under the new authoritarian order. How will pure power license the brave new men to treat the fair sex? Like this:

Troopers evicted the members of a commune in Texas, beating the boys with slapjacks, grabbing handcuffed girls by the pussy.

And there it is. Written in 1990. “Grab them by the pussy.” There’s no way Pynchon could have anticipated the phrase’s radioactivity. But then the small-time fascism of sexual predation disclosed itself in precisely this phrase in 2016 and won its way to national power.

A good country preacher knows when to cut his homily short–in the quiet after a revivifying piece of exegesis, where the congregation can make its own sense of the lesson. And so I leave you with Pynchon’s words of prophecy, his omen of how casually we can give away our own dignity if we just stop thinking.


Our Winston Smith Moment

BY MATTHEW HERBERT

The appalling climax of 1984 is when Winston Smith simultaneously (1) accepts Big Brother’s proposition that two plus two equals five and (2) comes to love Big Brother.

Philosophically, we could understand (1) in one of several ways. It could be that Smith accepts the “truth” of the proposition in some way that is not quite accessible to us in our normal frame of mind: he really does believe the statement is factual. This triumph of naked will seems like a long shot, though.

Given Smith’s career at the Ministry of Truth, which he spent eroding the very conditions for discovering facts, it is more plausible to believe Smith has done something slightly different from accepting the literal truth of 2+2=5: he has accepted the abolition of any standard for assessing whether it is true at all. He is essentially saying, there’s no way we can check this, so let’s just call it whatever its author wishes. If that’s “true,” so be it.

John Hurt as Winston Smith. His own personal sadness helped him

This is what happens to prisoners who are tortured. They are worn down by the bright lights, the sleep deprivation, the electric shocks, the threats to their families, and they abandon their commitment to objective truth. They simply let go of the idea that the truth might make a difference anymore. And they confess. The acute need to return to a normal, pain-free life erases the distinction between truth and falsity.

But Smith goes a step further than confessing. He not only accepts the abolition of facticity; he feels love for Big Brother. Why? In a word, power. Smith is overawed by Big Brother’s power to perpetrate the ultimate outrage on human dignity–to make humans stop believing in truth, our capacity to discover it, or even our interest in ascertaining it. (If you wish to explore this attitude in action, see Peter Pomerantsev’s 2014 book Nothing Is True and Everything Is Possible: The Surreal Heart of the New Russia.)

This attitude is the basest, most abject way a person can say of life, “Ah, fuck it. Nothing really matters.”

I presume you see where I am going with this. 1984 was supposed to be a novel about something that would never really happen to us. And luckily, our situation does not exactly mirror the one Orwell depicts in bad old Oceania. We can still have private relations; we haven’t accepted the government’s legitimacy as auditor of all our thoughts and invigilator of all our decisions. We don’t have the police crashing through the door to arrest us for thought crime. Yet.

Netflix, televangelists, energy drinks, and barely legal snack foods, though, are all doing their part to herd us toward this golden future. But I digress.

What I want to point out is that, in terms of political attitude, we have skipped to the end of 1984 and accepted, through casual assaults on our dignity, what Winston Smith only accepts after torture–that our dear leader is licensed to abolish facts. This ability is, in a way, the ultimate power, and we admire him for attaining it, just as Winston Smith loved Big Brother.

But we don’t have to. We don’t have to join the shameful consensus that says, “Ah, fuck it. There probably aren’t any observable facts connecting Trump and Russia–just partisan spin and fake news. Why bother looking?” Instead, you could write your political representatives, as I have, and ask them to demand the independent investigation of Trump that has not yet happened and seems unlikely to.

Wittgenstein and Me

BY MATTHEW HERBERT

Well, that’s a cheeky way to start.


Wittgenstein: Towering figure in logic and mathematics, member of an old Viennese family of eccentric aristocrats gone mad with genius, possibly the greatest philosopher of the 20th century.

Me: Hillbilly turned bureaucrat, blogger, broken-down trail runner.

Where’s the connection?

Actually, the thing that makes my link with Wittgenstein improbable is not my humble background or middling life accomplishments. It’s my devotion to the ideas of other philosophers that differ so much from Wittgenstein’s. From Plato I inherited a love for objective, eternal truths. I may not be very good at math or geometry, the main disciplines that discover such truths, but I believe, as Plato did, that the facts of mathematics are special. They are on a radically different footing from “ordinary” facts we observe with our senses.

The ordinary fact that grass is green, for example, is subject to a multitude of caveats indicating that its being true depends on one’s perspective. The chlorophyll in grass is only “green” at the structural level of mid-sized objects; at the molecular level, it is not. Shrink yourself down to the microscopic level, and you would not, in fact, see grass as green. Furthermore, green things (like grass) are only green in the kinds of atmospheres that transmit the visible spectrum of light as we know it here on earth. In different environments, different parts of the spectrum could be visible, and green might not be.

And so it goes. Grass is green, but only in the particular embodied, earthbound circumstances in which we find ourselves.

Mathematical facts are not like this. They do not depend on one’s perspective or physical configuration. They are eternally and objectively true. You can imagine the universe exploding–everything and everyone disappearing–and a new universe being reborn billions of years hence: two plus two will still equal four. Grass may or may not exist in this new reality, and it may or may not be green, but math will endure as is.

Wittgenstein was not convinced of this proposition. For most of his career he was a “nominalist,” someone who thinks the truths of mathematics are mere products of the rules regulating its symbols. The system does not necessarily correspond to any facts outside itself. In other words, you could devise your own system of symbols, and as long as its internal rules of meaning and syntax were consistent, you could produce statements that were “true” under all circumstances. Big deal.

The fact that lots of people subscribe to mathematics as a useful, coherent system does not mean its facts have any more weight than those produced by other made-up systems, say the nominalists. Mathematical truths are merely the outcome of the way we use the language of mathematics; they are hardly the eternal, crystalline truths revered by Plato.

Later in his career, Wittgenstein would modify his nominalism slightly, but he would never go all the way to where my intuitions led me, into the camp of the realists. Realists believe the facts of mathematics correspond with an external reality and that they are always and everywhere true, not mere constructions of symbolic coherence or the consensus of those who use them.

I won’t bore you with details, but there are several other instances where I find myself disagreeing with what Wittgenstein thought. The brochures of university philosophy departments, though, will tell you that philosophy’s value is in teaching you how to think, not what to think. Despite this sentence’s awful triteness, it is true. Philosophy teaches us how to think. And it is in that endeavor that Wittgenstein won me over.

For me, Wittgenstein’s biggest contribution to philosophy is his development of the idea of levels of analysis, a very powerful insight about how we think. Any informed body of discourse–on, say, politics, art, physics, or whatever–proceeds on concepts and terminology determined by a particular level of analysis.

This is not quite an original thought. Two thousand three hundred years before Wittgenstein, Aristotle taught that any and all analysis should be conducted at the right level of granularity:

It is the mark of an educated man to look for precision in each class of things just so far as the nature of the subject admits; it is evidently equally foolish to accept probable reasoning from a mathematician and to demand from a rhetorician scientific proofs.

But this breezy observation is more or less where Aristotle left his idea. It’s a good idea, but hard to act on. Finding the right level of precision is almost always the hardest part of tackling a problem analytically. It’s like telling a baseball player to hit the ball smack-dab in the middle. Great advice, but those who can follow it are probably already good batters.

I contend, though, that Wittgenstein birthed an idea that can actually help us calibrate our sense of precision to the problem at hand–the idea of the language game.

First, a word about what the language game is not. Because many people first encounter this idea as undergraduates in one of the “critical” disciplines that try to subvert what the other disciplines are saying–the humanities or softer sciences–they tend to think Wittgenstein is using the term subversively himself. He must be trying to warn us that elites are using fancy words to perpetrate a wily intellectual fraud on us. Please banish this thought. If you want to pursue it, read Michel Foucault’s gloss on the Marxist idea of the mystification of terminology. It has nothing to do with Wittgenstein.

So what did he really mean? Wittgenstein’s language game is any specialized discourse designed to deal with a particular task or thematic area. What characterizes the language game is that (1) it defines the acceptable usage of terms and grammar for anyone who needs to address its task, and (2) it works. In Philosophical Investigations, the second of Wittgenstein’s landmark books, he outlines what a very basic language game could look like:

Let us imagine a language for which the description given by Augustine is right. The language is meant to serve for communication between a builder A and an assistant B. A is building with building-stones: there are blocks, pillars, slabs and beams. B has to pass the stones, and that in the order in which A needs them. For this purpose they use a language consisting of the words “block”, “pillar”, “slab”, “beam”. A calls them out;—B brings the stone which he has learnt to bring at such-and-such a call.——Conceive this as a complete primitive language.

For the builders, the best level of analysis is the one that individuates blocks, pillars, slabs and beams, and nothing else. They have no need for terms expressing broader composites (say, wall, buttress, etc.) or smaller parts, like molecules. Their language gets the job done.

Why does Wittgenstein start with such obvious observations about language? At the time he was writing the Philosophical Investigations, philosophy was gripped by the idea that language could and should be made ever more precise. Science led the way: however finely science divided up the world, all our language should follow suit. (If you are interested in this idea, see A.J. Ayer’s Language, Truth and Logic, one of the finest introductions to what came to be known as analytic philosophy.)

By the mid-20th century, many philosophers believed their prime directive was to translate ordinary-language sentences into a special form of symbolic logic (at which Wittgenstein was highly adept). This exercise, they believed, would reveal a sentence’s correspondence with the pristine structure of reality and help distinguish between terms that signified something real and those that were erroneous or merely metaphorical.

It was a good idea, but it went too far. Science was constantly revealing new, more precise levels of the world’s structure, and it was a fool’s game to try to adapt ordinary language to mirror its progress. Wittgenstein himself had helped bring this folly about in his first landmark book, the imposingly named Tractatus Logico-Philosophicus. There, Wittgenstein argued that language created a picture of reality. Its job was to depict the structure and dynamics of reality as accurately as possible. If science or logical analysis could reveal it, humans could talk about it in ever greater detail and accuracy.

By the time he wrote the Philosophical Investigations, Wittgenstein had come to view his picture theory of meaning as too doctrinaire. Real people used language for a “motley” of reasons, and the utility of their usage did not necessarily depend on ever-sharpening precision. He even came to mock the analytic mindset he had helped inspire. Precision is a fine thing, Wittgenstein came to believe, but like most fine things, it is not always better in greater quantities. He wrote:

When I say: “My broom is in the corner”,—is this really a statement about the broomstick and the brush? Well, it could at any rate be replaced by a statement giving the position of the stick and the position of the brush. And this statement is surely a further analysed form of the first one.—But why do I call it “further analysed”?—Well, if the broom is there, that surely means that the stick and brush must be there, and in a particular relation to one another; and this was as it were hidden in the sense of the first sentence, and is expressed in the analysed sentence. Then does someone who says that the broom is in the corner really mean: the broomstick is there, and so is the brush, and the broomstick is fixed in the brush?—If we were to ask anyone if he meant this he would probably say that he had not thought specially of the broomstick or specially of the brush at all. And that would be the right answer, for he meant to speak neither of the stick nor of the brush in particular. Suppose that, instead of saying “Bring me the broom”, you said “Bring me the broomstick and the brush which is fitted on to it.”!—Isn’t the answer: “Do you want the broom? Why do you put it so oddly?”

I quote this passage at length because it is one of the most revelatory and useful ones I have ever read. I have probably read tens of thousands of pages of philosophy; this one stands out for speaking vividly to philosophy’s main task–clarifying the methods of analysis so that we can think more clearly about real problems. Only 2,300 years after Aristotle (!), Wittgenstein’s idea of a language game actually affords some purchase on this maxim.

But then again, isn’t Wittgenstein just recommending we optimize precision according to the purpose of our chosen language game? If your language game is “particle physics,” you’d better be prepared to re-analyze sentences about atoms into sentences about protons, neutrons and electrons, and so forth. But if your game is “housecleaning,” you’ll just sound silly if you refine terms to account for finer structures and more recondite dynamics than those relevant to the task at hand.

Believe it or not, though, one of the 20th century’s most respected philosophers of science believed science worked more or less like a Wittgensteinian language game. Although scientists like to think of themselves as discovering the hidden structures of reality, in ever finer detail, Thomas Kuhn believed what they did in practice was to carry out a constant negotiation about the acceptability of professional terminology.

In the long stretches when scientists agreed on a general outlook on the world–say Newtonian physics, which nicely explains the behavior of mid-sized objects–Kuhn says they wrote their literature using a common lexicon of accepted terms. He called this “normal” or “puzzle-solving” science. But when a major discovery, like quantum physics, upended the general outlook, scientists had to begin a radical renegotiation of terms. Those who stuck with the old paradigm were gradually “read out” of the literature, as Kuhn phrased it in his standout 1962 book The Structure of Scientific Revolutions.

By the way, “paradigm” is not just a neat term for this way of thinking about science. It is the term Kuhn himself chose to describe what are essentially Wittgensteinian language games “played” by scientists. To say biology is in a post-Darwinian paradigm is simply a nifty way of saying biologists now share a language game that enshrines and proceeds on the assumption of Darwin’s principles. I don’t mean to sound overbearing, but if you consider yourself in any way an intellectual and you have not read Kuhn’s The Structure of Scientific Revolutions, go read it now. It is the book that gave us the phrase “paradigm shift.” Discovering what the term signified when it was fresh is a pleasure.

Kuhn’s book caused a stir because it seemed to say scientists changed paradigms not because they were factually wrong about the world, but because they were overwhelmed by the partisans of a new, winning set of terminology. If we are going to give Kuhn credit where he earned it, for enlightening us on the way science proceeds in practice, we might as well also blame him for the darker things he wrought, and this seems a good place to do it.

Kuhn’s idea of a professional paradigm shift, laid at the altar of the radical intellectual left, emboldened “critical” and “literary” theorists to conclude that science was “really” just a discipline for imposing certain dogmas on those too powerless to defend themselves. You know–the ideological narrative of the power elite.

Kuhn would have said we can be agnostic about whether facts really change the minds of scientists without sounding the death knell for truth, but the critical and literary theorists were already off to the races. Science, like the police and colonialism, was just one more tool for oppressing the masses, they said.

It was a dark time. In 1996 the physicist Alan Sokal nicely parodied critical theory when he submitted a junk paper arguing that quantum gravity was just a social and linguistic construct, not a “fact” about “reality.” Sokal got his paper published in Social Text, a leading journal of critical theory, by mimicking the style of its favorite authors–essentially emulating their language game.

So what, if anything, did Sokal prove? That academic language games really are just frivolous exercises in generating new terminology? Without wading through all the arguments pro and con, my own view of language games is that they offer a provisional perspective on the world that lets one conduct the educated guesswork involved in optimizing precision.

If that sounds garbled, it’s because I am not a brilliant philosopher. Daniel Dennett, however, is, and he is also an heir to Wittgenstein, so I will let him do my work for me. As we try to make sense of the world, Dennett says, we adopt a provisional stance. If the stance is useful, we acknowledge (sometimes explicitly but usually implicitly) that its basic axioms, principles and terminology are conducive to discovering the truth, and the stance becomes less provisional. It starts to harden into something like a paradigm.

Adopting the right paradigm, though, is no mere matter of negotiating terms. The terms have to work, and as they work, they tend to group themselves into different levels of precision.

Joshua Rothman beautifully captures Dennett’s idea of stances, levels and precision in a recent profile in the New Yorker. He writes:

Some objects are mere assemblages of atoms to us, and have only a physical dimension; when we think of them, [Dennett] says, we adopt a “physicalist stance”—the stance we inhabit when, using equations, we predict the direction of a tropical storm. When it comes to more sophisticated objects, which have purposes and functions, we typically adopt a “design stance.” We say that a leaf’s “purpose” is to capture energy from sunlight, and that a nut and bolt are designed to fit together. Finally, there are objects that seem to have beliefs and desires, toward which we take the “intentional stance.”

If you’re playing chess with a chess computer, you don’t scrutinize the conductive properties of its circuits or contemplate the inner workings of its operating system (the physicalist and design stances, respectively); you ask how the program is thinking, what it’s planning, what it “wants” to do. These different stances capture different levels of reality, and our language reveals which one we’ve adopted. We say that proteins fold (the physicalist stance), but that eyes see (the design stance). We say that the chess computer “anticipated” our move, that the driverless car “decided” to swerve when the deer leaped into the road.

Life, like science, is an experiment. My experiment has led me to analyze the world in almost exactly the same style as Dennett, if several notches below him in terms of insight achieved. There are certain stances that seem indispensable for understanding the world. They are not written in stone. We discover them and, yes, in a certain sense, create them as we probe the world around us. That does not mean we make it all up, but it does mean that our language and thought take an active role in dividing the world up into its constituent parts and describing their structure and dynamics.

Wittgenstein was right when he said language creates a picture of the world. But that picture will never be finished, and we can never step outside it to check whether our stance (or paradigm, or language game–you pick) is the right one. But, in the words of Wodehouse, Wittgenstein’s favorite comedic writer, we do not repine; we stagger on.

Review of 36 Arguments for the Existence of God

BY MATTHEW HERBERT

In the early 20th century, the philosopher Bertrand Russell upended the niche of mathematics known as set theory when he came up with the idea of “non-normal sets.” Non-normal sets include themselves as members. Normal sets do not. For example, the set of all integers does not include itself as a member; it only includes integers. It’s a normal set.

But what about the set of all sets that do not include themselves? Or, more picturesquely, a barber who shaves only those men who do not shave themselves? Such a barber shaves himself only if he does not shave himself, and does not shave himself only if he does: a contradiction either way. You can probably think up your own versions of this idea, which has come to be known as the Russell Paradox.
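For readers who like to see the bones of it, here is the paradox in standard set-builder notation (textbook shorthand, not Goldstein’s or Russell’s exact formulation):

```latex
% Let R be the set of all sets that are not members of themselves.
R = \{\, x \mid x \notin x \,\}
% Asking whether R belongs to itself yields a contradiction either way:
R \in R \iff R \notin R
```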

In 36 Arguments for the Existence of God, an intellectually lively novel by Rebecca Newberger Goldstein, one of the characters is a mathematical child prodigy who, at the age of six, reconstructs Euclid’s proof that there is no largest prime number. He is also plagued by a Russell Paradox in real life.

Azarya is a Hasidic Jew set to inherit the mantle of Rabbi from his father. His sect of a few thousand mystics in upstate New York is blindly and utterly counting on him to be their leader when he reaches adulthood.

But then something happens to the boy as he hits adolescence. With trepidation, he leaves the confines of his closed-off community one week to visit MIT. A professor there with connections to his family has noticed the boy’s genius and wants to show him the wondrous, Platonic realm of pure math that awaits his contemplation if he leaves his sect, which studies only Torah and Talmud.

During the visit to MIT, the boy becomes aware of his own genius and begins to long for the unchained life of the mind he could lead if he would move to Boston. He doesn’t even believe the doctrines that send his kinsmen into swaying, chanting religious reveries are true. His intellect tells him to leave it all behind.

But there is a catch. The people of his sect draw their entire sense of identity from the spark of the divine they believe their Rabbi shines forth. Without the boy as their next guide, the community will collapse. Azarya’s grandfather led this sect, Moses-like, to escape the Holocaust by the skin of their teeth. Can Azarya callously commit them to oblivion by choosing MIT and the pure truths of mathematics?

So what happens when an intellectual is convinced, by his own genius, that he must abandon a life of individual genius? There’s the rub. I won’t tell you how Goldstein resolves it, but I will say this problem spells out the ethos of her novel–the idea that religious doctrines, although plagued by illogic and outright falsehoods, remain a fraught and sometimes even beautiful part of who we are.


Azarya, the troubled genius, is actually only a secondary character in the novel. Goldstein’s protagonist is Cass Seltzer, a professor of psychology who has penned a surprise runaway bestseller, The Varieties of Religious Illusion, and is about to claim his role as an intellectual celebrity by accepting a position at Harvard.

Seltzer is known, even among his critics, as the atheist with a soul. This is because, despite his official atheism (he does not buy any of the traditional “evidence” for God’s existence), he understands quite well humankind’s impulse for supernatural thinking and the cognitive illusions that make its artifacts seem plausible. As he befriends Azarya, Goldstein lets us believe Cass may even come to see religious illusion as worthy of trumping the truth in certain circumstances.

On balance, 36 Arguments is a worthy novel. It is not an excellent novel, though. Goldstein gives Cass two paper-thin love interests who, the reader will see from miles away, are mere props, destined to give way to a third, a finely developed character who owns Cass’s heart with earthiness, bravado and gusto.

Cass’s towering intellectual mentor is also a novelistic failure. A buffoon-scholar who absurdly forces together Schopenhauer’s weight and Beethoven’s Sturm und Drang, he is a cutout figure who confirms all the vulgar stereotypes of an overwrought, risibly self-important nutty professor. Goldstein, who has spent her life among real academics, could have done much better than this. Instead she plays to the crowd.

One area where Goldstein does not play to the crowd is in actually taking a real-life position on the main argument that trundles through her book. Like her protagonist Cass with his book, Goldstein allows herself to append an annex that delivers her real message in short form. It’s all very humane to allow people their religious convictions on liberal grounds, she seems to say in the course of her story, but the cold, hard limits of logic say the arguments adduced for the existence of God are all flawed. Goldstein’s annex is a summary knock-down of the traditional (and some not so traditional) arguments for theism.

Goldstein’s device lets her have her cake and eat it too. That does not much bother me, because I take her side in the debate about God, but I suspect her critics will not like it. Overall, 36 Arguments is a good novel for those already predisposed to enjoy its subject matter, but I doubt it would engage those who stand to benefit most from its message.

 

Review of Night Work

BY MATTHEW HERBERT

In The Meditations, Marcus Aurelius repeatedly invites the reader to contemplate a kind of super extinction–a time when not only the reader has died, but also everyone who might possibly mourn him or recall his memory; a time when his very home has collapsed and turned to dust. Why?

In his 2006 novel Night Work, Thomas Glavinic offers an astonishing meditation of his own on what it would be like to cope with the sudden onset of such a super extinction. Like Kafka’s Gregor Samsa, who wakes up one morning to find he has metamorphosed into a beetle, Glavinic’s protagonist Jonas wakes up one morning in Vienna to discover there is no one else left on earth.

All the objects ever made by man are still there, but every last person has disappeared, stranding Jonas under a cloudless summer sky. Radio and TV emit only static; the Internet is down. Cell phones don’t work. Yesterday’s newspaper is the only indication that the ordinary world of people and events had kept going as normal up till the day before.

At first, Jonas’s exploits across an empty Vienna have the feel of a lark. Taking whatever car he likes, he drives the wrong way down one-way streets. He climbs to a rotating cafe atop a TV tower and speeds it up like a whirligig. But of course it is a lark increasingly tinged with madness. As the horror of humankind’s disappearance quietly sets in, Jonas becomes obsessed with surveilling his empty hometown on film while he sleeps, to find signs of whatever sinister force took everyone else away. He also films himself asleep at night and discovers that his id routinely slips free to cavort with the forces of darkness.

We all know a person would go mad in Jonas’s situation. Why bother writing (or reading) a novel reflecting on such desperation?

But a novel, Milan Kundera reminds us, is an experiment in a very literal sense of the word. The novelist manipulates an independent variable and observes the effects on a possible self. So, as we are watching Jonas’s madness, grief and fear come to occupy his whole being like swabs of virus spreading across a petri dish, we remind ourselves we are not wallowing gratuitously in someone else’s sickness any more than the laboratory technician is so doing when she observes the bright splotch of deadly virus on the agar. We are ascertaining with Glavinic, in clinical detail, what it means to be sane, whole and normal.


Anyone who has ever received a coffee cup as a gift from a loved one and then used it for several years can relate to this reflection: our world is made up of triads–objects, persons, and ourselves, linked together in a matrix of meaning. Every object that is dear to us is suffused with perceptions and memories that serve to link the object with a person who, jointly with the object, reflects our self back onto us. Plato memorably pointed out that this is why we cherish our lover’s everyday belongings, treating them nearly as dearly as the lover herself.

As the truth of Jonas’s abandonment sets in, he begins to collect things from his past. He breaks into his old family home and refurnishes it with the chairs, tables and sofa of his childhood, taken from the basement of his father’s apartment. He finds old photographs, toys, his musical teddy bear. Childhood, he recalls, was the best time of all, when he and his cousins strapped on water wings while the adults drank wine and watched the World Cup on TV.

Physical objects, Jonas discovers, are the silent witnesses to the decades and even centuries of human flourishing. But are they really, completely silent? No.

Jonas makes a desperate attempt to find signs of his girlfriend Marie, who had left for Scotland on the last day people existed. Battling chronic fatigue and spiraling fugues of madness, Jonas makes it to Scotland and retrieves Marie’s suitcase. He then steels himself for the return to Vienna, which he knows will be a final homecoming. On the way back to Europe, he encounters a constriction in the Chunnel caused by two trains, and he is forced to abandon his moped; forever, it hits him. He reflects as he crosses through the narrow gap:

And now the moped was standing at the other end of the train. It would continue to stand there for a long time. Until it rusted away and disintegrated, or until the roof of the tunnel collapsed. For many years. All alone in the dark. 

Objects are not mere silent witnesses to human lives: they hum with our intentions, perceptions and memories. Consider this: a derelict moped left on a junk pile elicits no melancholy, because life goes on around it, replete with intentions, perceptions and memories. People will make new things to be used and thrown away. Maybe someone will even pick up the old moped, smash it, and use it to make something new.

But not Jonas’s abandoned moped. It will just sit there, for an achingly long stretch of eternity, unable to reflect anything radiated by the world of human persons. It was not such a silent witness to humanity after all.

What do we learn at this far end of Marcus Aurelius’s super extinction, at Glavinic’s deletion of all human persons? That the world is sacred. That our triads of things, persons and ourselves are all we have, and that they are worthy of our love. Before madness gripped him, Jonas hoped he would recall the supremacy of love at the moment of his death, and he does. His journey through loss ends in victory, as can all of ours.

I am reminded very vividly of the prayer-like words Joseph Conrad offers up in The Shadow Line about this world that forms us, props us up, makes us who we are:

The world of the living contains enough marvels and mysteries . . . acting upon our emotions and intelligence in ways so inexplicable that it would almost justify the conception of life as an enchanted state. No, I am too firm in my consciousness of the marvellous to be ever fascinated by the mere supernatural.

Night Work is a wondrous novel that ingeniously reveals this “world of the living” by capturing a stark negative image of it.

Can We Be Serious for a Moment?

BY MATTHEW HERBERT

In The Magic of Reality, the biologist Richard Dawkins asks the reader to imagine placing postcard-thick photographs of a single line of her ancestors one on top of another in a stack going back 185 million generations. (It’s a thought experiment; it doesn’t matter that we’ve only had photography for the last six or seven generations.)

The stack of photos, laid on its side for easy reference, will be 40 miles long. To find your first ancestor who was appreciably of a different species (Homo erectus), you would have to thumb your way back to about your 50,000-greats grandfather.

At the far end of the stack, all the way at mile 40, you would find one of your earliest vertebrate ancestors, which Dawkins points out, with relish, is–a fish.

There are two important things to notice about the series of photos going back to your fishy ancestor. First, it is almost unimaginably long. You find your Homo erectus ancestor well before the one-mile mark, which means you still have more than 39 miles of pictures to go through before reaching the fish who gave you your backbone. For anyone who has trouble imagining the pace at which species-producing mutational change happens, this ought to help illustrate just how much time we are talking about.
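The arithmetic, by the way, hangs together. A quick back-of-envelope check (assuming a print thickness of about 0.35 mm, a figure I am supplying for illustration, not one Dawkins specifies) recovers both the 40-mile stack and the fact that your Homo erectus ancestor shows up well inside the first mile:

```python
# Back-of-envelope check of Dawkins's photo-stack thought experiment.
# Assumption (mine, not Dawkins's): each postcard-thick print is ~0.35 mm.

MM_PER_MILE = 1_609_344  # millimetres in one mile

def stack_length_miles(generations, print_thickness_mm=0.35):
    """Length, in miles, of a stack with one print per generation."""
    return generations * print_thickness_mm / MM_PER_MILE

print(round(stack_length_miles(185_000_000), 1))  # ~40.2 miles back to the fish
print(round(stack_length_miles(50_000), 3))       # ~0.011 miles back to Homo erectus
```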

The second thing to notice is that your ancestors go on for tens or even hundreds of thousands of generations with no noticeable changes. Dawkins estimates you would have to go back 4,000 generations to find the first Homo sapiens who showed the slightest family resemblance to his Homo erectus forebears–“a slight thickening of the skull, especially under the eyebrows.” So enveloped within the nearly unimaginably long spans of time required for speciation, there are shorter but still vast spans of time during which no changes occur to the gross structure of organisms.

Put these two observations together, and you have the outline of what I think of as the biological conception of time–the picture of time’s passage that accounts for the fantastically long series of recombinations of the four bases of DNA that have put us humanoids here. Evolution is the outcome of combinatorics and time–almost more time than we can possibly imagine.

And that’s just to account for the existence of living things. If the biological conception of time stumps you, chances are you are, like me, hopeless when it comes to the cosmological conception of time. I “know” that the universe is just shy of 14 billion years old, and I “understand” how physicists have reached this conclusion–by reverse modeling the expansion of the universe back along the original trajectories of its component parts, a calculation that leads back to the theoretical starting point (and starting time).
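To give a flavor of that reverse modeling at its very crudest, here is the classic first approximation, offered as my own illustration rather than anyone’s official method: take today’s expansion rate (the Hubble constant, assumed here to be roughly 70 km/s per megaparsec) and run the expansion backwards as if it had never changed. You land within shouting distance of 14 billion years:

```python
# Naive "Hubble time" estimate: age ~ 1 / H0, pretending the expansion
# rate has always been what it is today. H0 = 70 km/s/Mpc is an assumed
# round number for illustration, not a reported measurement.

KM_PER_MPC = 3.086e19        # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

H0 = 70.0                    # km/s per Mpc
age_seconds = KM_PER_MPC / H0
age_years = age_seconds / SECONDS_PER_YEAR

print(f"{age_years / 1e9:.1f} billion years")  # ~14.0
```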

None of this abstract knowledge does much good, though, at least for me, when it comes to understanding how the billions of galaxies in existence formed their proper parts of planets and moons, many of which likely resemble our own arrangement here in our solar system. We are left to satisfy ourselves with the information that masses of gas and dust naturally spin, and when they do so, the dust tends to accumulate into bodies that become planets. This started happening in our neighborhood after the formation of our sun, some nine billion years into the universe’s history.

Everything, or at least a part of it

Why this brief history of time? The better to frame the third, and in my mind most important, way we are utterly unable to take the passage of time seriously. I think of this notion as the moral conception of time. You may want to call it the religious conception, but I prefer my term, for reasons I’ll come to.

In his short story “The Immortal,” Jorge Luis Borges imagines a group of men who have achieved the Kingdom of Heaven but are subsequently discovered by a cosmic wanderer to have abandoned it. Why? To put Borges’s answer with violent brevity, the blessed men realize, a few generations into their experience, that Heaven is a nightmare because of its moral pointlessness. We spend less than 100 years on earth trying to appease a cosmic moral judge, and if we pass his test, we find ourselves, forever, in a realm utterly divested of moral meaning. Sickened at the thought that this nightmare is programmed to last forever–an absurdly long time given the shortness of the span during which our actions actually mattered–Borges’s immortals return to their mortal existence, where life at least had value based on its fleetingness and irrevocability.

Dawkins completes a fascinating trio of thoughts for me. Most of us are congenitally unable to take the passage of time seriously, whether on a biological, cosmological, or moral scale.

To come back to the reason I prefer the term moral conception of time (as opposed to the “religious” one): for all of our extraordinarily brief lives, our actions have moral consequences. Everything we do matters. Living by this principle is one of the things that marks us off as human. Borges’ immortals abandon Heaven not just because they didn’t want to live forever, but also because they could not imagine living lives completely devoid of moral significance. Nothing they would do in Heaven for the rest of their eternal lives would matter.

Mark Twain voiced his objections to this nightmare in a vulgar fashion. Hardly anyone on earth, he observed, enjoys singing Hosannas for more than 20 minutes, if that long. Yet we profess to wish to do so forever. Twain has his doubts, and so do I.

Borges makes Twain’s point in a more devastating form. The timespan during which we picture being in Heaven would be unrecognizably human, and therefore unrecognizable as life. It would be missing the one thing that inspires all the triumph and tragedy, the blood, sweat and tears of our mortal existence–meaning.

Science builds a scaffold from which we can picture vast timespans that are just barely imaginable. Billions of years. But even these would be a twinkling of the eye when measured against eternity. And we would propose to live out the nightmare of a meaningless Heaven over a timespan that would dwarf these eons? We can’t be serious.

Karl Popper for President

BY MATTHEW HERBERT

This is going to be tricky. First, Professor Popper is dead, which more or less disqualifies him from running for president. Second, he was born in Austria, which certainly and explicitly disqualifies him.

Still, Karl Popper had lots of something that is missing from public dialogue these days, something so important as to justify an immodest exception to the rules and traditions of running for president, I believe. This something is the will to seek knowledge through the elimination of non-knowledge. The current president and his advisors would not recognize this idea as a virtuous one if God Himself seared it directly into their brains with a holy epistemological laser gun.

So in the time I have today, I’d like to illustrate why Popper’s biggest idea is an important one and mount a brief critique of the current president in light of Popper’s idea.

Karl Popper

When Popper was a young man at the University of Vienna, a certain strain of philosophy borrowed from science was all the rage, even invading the softer academic disciplines like history and sociology. Philosophers called it verificationism. It was the idea that any truth-aimed statement a person might make is meaningful only to the extent that one can verify the fact(s) that would make it true.

(The clunky phrase “truth-aimed” is meant to distinguish statements we make to describe potential facts, such as “Grass is green,” from the great variety of other utterances we make that are not meant to describe facts, for example, “Lord almighty!” or “Are you sure this is cheese?”)

In a nutshell, verificationism embodied the idea that even we non-scientists, carrying on in our fuzzy day-to-day lives, ought to aim for scientific rigor when it came to making factual claims. After all, we are trying to describe the same world scientists are, right?

But Popper noticed something a little off about verificationism. It did not actually reflect the way scientists went about their work of turning hypotheses into knowledge. The way this actually worked was by a process of elimination. A scientist posits a hypothesis and designs an experiment to test for it. If her results are positive, she adds those results to a stack of facts that weigh in favor of her claim. But here’s the thing, and it’s a thing all scientists acknowledge: you can acquire corroborating evidence for your hypothesis till the cows come home, but you will never achieve certainty. That’s because, in the world of empirical fact-checking, there is always the possibility you will happen upon a negative result, something that disconfirms your hypothesis.

The flip side of this unsettling observation offers better news: disconfirming evidence is decisive. If you design an experiment to test for x and the result after a few tries is not x, you may state “x is not true” with a margin of confidence so comfortable it approaches certainty.
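Put in textbook logical form (my gloss, not Popper’s own notation), the asymmetry is just the difference between a valid inference and an invalid one:

```latex
% Falsification is modus tollens, which is valid:
(H \rightarrow O) \wedge \neg O \;\vdash\; \neg H
% Confirmation is affirming the consequent, which is not:
(H \rightarrow O) \wedge O \;\nvdash\; H
```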

Popper called this idea scientific falsificationism, and it made him famous.

But the thing about falsificationism is that it runs contrary to human nature. As naturally as cheetahs sprint across the veldt in pursuit of zebras and do not poke about abstractly probing for non-zebras, we are likewise designed to seek evidence that confirms our hypothetical beliefs, and we race toward it in unreflective abandon, tongues lolling out the sides of our mouths.

There is a famous class of experiments that offers a glimpse into this side of ourselves. The details change slightly from case to case, but the basic outline is the same. Children are given the task of solving a puzzle by choosing from a group of candidate answers, usually presented on flashcards. Some of the candidate answers lead to a confirmatory chain of reasoning (preliminary “right” answers) and some open up disconfirming chains of reasoning (preliminary “wrong” answers). What the kids don’t know is that the experimenters are actually measuring their emotional responses to the preliminary answers.

When the kids get confirmatory evidence, they beam with pride and happiness. When they get disconfirming evidence, they frown. This despite the fact that, the way the experiment is designed, the disconfirming evidence is equally valuable for solving the puzzle and arriving at the correct final answer.

If you think this is mere kids’ stuff, it is not. One of the most robust conclusions of cognitive psychology is the demonstration of what is now commonly known as the confirmation bias. Once we are on to a hypothesis, we are programmed to seek and pile on the evidence in its favor rather than looking for equally valuable disconfirming evidence. Yes, even in our 30s, 40s, or 50s, we are still that kid who wants to glow in the warmth of a “right” answer when the flashcard turns. No one is naturally good at the long game of rationality. It takes practice and determination.

My dad, however, was good at emulating Popper, as I suppose many mechanics are. When one of our cars had a problem, he invariably set about ruling out possible causes before homing in on what needed to be fixed. I did not understand this approach at the time and usually did not find out until after the repair what his working hypothesis was. I just stood by in a state of suspense as he ruled out false hypotheses, false candidates for the truth.
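Here is a minimal sketch of that ruling-out procedure in code. The faults, tests and observations are all invented for illustration; the point is only that each result is used to eliminate candidate causes rather than to flatter a favorite one:

```python
# Diagnosis by elimination, in the spirit of Popper (and of my dad).
# Every name below is hypothetical; only the ruling-out logic matters.

candidate_causes = {
    "dead battery":    {"headlights_work": False},
    "bad starter":     {"headlights_work": True, "clicks_on_turn": True},
    "empty fuel tank": {"headlights_work": True, "engine_cranks": True},
}

observations = {"headlights_work": True, "clicks_on_turn": False}

def survives(predictions, observed):
    """A hypothesis survives only if no observation contradicts its predictions."""
    return all(observed.get(test, expected) == expected
               for test, expected in predictions.items())

not_yet_ruled_out = [cause for cause, preds in candidate_causes.items()
                     if survives(preds, observations)]
print(not_yet_ruled_out)  # ['empty fuel tank']
```

Notice that the surviving hypothesis is not thereby proven; it has merely not been falsified yet, which is exactly Popper’s point.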

So, speaking of false candidates, do you know who is spectacularly lacking in Popperian falsificationism and is in fact imploding in a black hole of its opposite principle, confirmationism? You guessed it. By designating as “fake news” any information that might disconfirm its factual claims, the Trump administration has set itself up as the most powerful enemy of Popper’s idea in the history of, well, Popper’s idea. This is no mean feat if you believe, as I do, that Popper’s idea is the epistemological engine of scientific enquiry. All the miraculous discoveries that make our lives a paradise of ever-advancing technology depend on the existence of an elite bunch of intellectual adventurers brave enough to seek out evidence that would prove them wrong. That’s how science is done.

Trump, on the other hand, insists there is nothing that could, even in principle, prove him wrong when he makes factual claims. All evidence is confirmatory. If it doesn’t buttress ordinary facts of the kind that might interest you or me, it points to alternate facts, which are in some sense better than the “real” thing (where “real” means fake).

The problem is, in the world of policy, it is crucial to be able to assess one’s results and determine if one’s working hypothesis is more likely right or wrong. Take Trump’s idea of bombing the shit out of ISIS. Dressed up a bit, his working hypothesis is: applying greater military force will defeat ISIS. This, unfortunately, is an unfalsifiable hypothesis. Any feedback Trump gets from the field, even if apparently negative, will be interpreted to mean not enough military force has yet been applied.

Here is a sobering thought. Almost all of us raise our kids to be like Popper, not Trump. There’s a good reason for this. We want our kids to be able to process failure and incorporate feedback into a repertoire of beliefs that is continuously refined and enriched with better hypotheses. It’s called learning. When was the last time you saw a parent coaching their kid that he should commence shouting insults and bluster at the source of any inconvenient facts he encountered? No one–or, I hope no one–raises their kids to despise learning.

As usual, I find myself veering off in a new direction and should therefore sign off for today, but please consider this in closing: there is probably a connection between Trump’s supporters’ disdain for science and their tendency to interpret Trump’s smirking evasions of inconvenient facts as wily victories over the purveyors of “fake news.” If you admire a man who patently cannot face facts, and if you take comfort in his ability to smear their authors, chances are you don’t care much for facts themselves.

Facebook Works Because of Hegel

BY MATTHEW HERBERT

Well, it’s not exactly like that. Facebook works because of something Hegel discovered about human nature: we need others to recognize us and esteem what we do. The feedback we get from others plays an essential role in defining who we are. Oddly, much of what we do appears to be pointless. Or is it?

In Hegel’s day, Napoleon was rampaging across Europe, conquering Venice one afternoon, taking Sardinia the next. Why? It’s not because France needed the extra Lebensraum. No, Napoleon was in a fight to the death for glory, or, as Hegel would put it in the Phenomenology of Spirit, recognition.

Georg Wilhelm Friedrich Hegel, probably thinking about someone thinking about him

Strange as it sounds, Hegel realized we are all in the same fight Napoleon was, although we lead much quieter lives. Hegel believed we become conscious of ourselves as human persons only through a series of relationships defined by mutual recognition: first as a recipient (and eventual giver) of familial love, then as a member of a society free to effect contractual relationships, and ultimately as a citizen of a liberal democratic state.

For Hegel, we do not become conscious of ourselves as human persons unless we have other humans around to project images back onto us that tell us how good (or bad) we are. We are, to a mysterious extent, constituted by what other people think about us. This is Hegel’s theory of self-consciousness. (It’s also one of the pillars of what would become postmodernism, the idea that there is no “I” at the center of the self. But that’s another story.)

Nietzsche painted a fascinating picture of Hegel’s idea, calling humans “the beast with red cheeks.” What he meant was that humans, alone among animals, are capable of aspiring to excellence and feeling pride when we achieve it, shame when we fall short. But why the red cheeks? Can’t we just keep our composure as we court admiration, keeping score internally? No, because people are watching, and we need them to be. The people around us help define the standards of excellence to which we aspire and, to an even greater extent, attribute virtue or vice to us when we succeed or fail, as the case may be. According to Hegel, we can’t do this scorekeeping all for ourselves. There’s no emotional payoff to it.

But here’s the really shocking thing Hegel said about our need for recognition: it is not just a nice-to-have; we are actually in a fight to the death for it. Now this sounds crazy until you reflect that most of us, relaxing, sipping a drink, and browsing the Internet, have been bathing in recognition since day one. We probably zipped right through Hegel’s three levels of mutuality and don’t even know what it is like to go without the kind of recognition that creates persons.

It is only when you look at cases of stunted personhood that you start to appreciate that Hegel was right about the fight-to-the-death thing. Who hasn’t heard of someone committing suicide over being jilted by a lover? How many gang murders happen because one gang member is dissed by another? Road rage homicide is basically a very brief but thermonuclear demand for recognition. Aristocratic gentlemen no longer duel, but when they did, it was to attain “satisfaction” that they were honorable and–this is crucial–that they were seen to be honorable.

So what does all this have to do with the wild success of Facebook? I think you probably know by now, but I’ll spell it out the way it occurred to me yesterday as I was out trail running.

I was toodling along, outlining a blog post I want to write about nominating a dead Austrian philosopher for President of the United States, when the remainder of my run started to unfold before my mind’s eye with remarkable clarity. I was about 20km into a 30km run, feeling good, and estimating how much longer I would be at it. Within about five minutes’ accuracy, as it turned out, I pictured reaching my car, fumbling with my phone, and posting my performance to Facebook.

Why? Every Sunday I post more or less the same squiggle on a map of the Odenwald, announcing to my Facebook friends that, once again, I heaved and clambered my way up and over my favorite hill. Why go through the motions of this petty appeal for praise? It’s just the same damn thing every week.

I wouldn’t say the answer came in a flash, but it sort of developed as the prize I won for three hours of trail-running toil. I make those posts because I am in a fight to the death for recognition, and because this recognition is so important–if Hegel is right, it is part of me–I am greedy for it even in the smallest quantities. It doesn’t even matter if it is shamelessly elicited by a Facebook signpost that says, “Look at me!”

I think this must be why we are all in it. Facebook has democratized our elicitations of recognition. How easy it has become just to like someone’s crochet project, bike ride, new porch, kids’ photos, and so forth. Now, more than ever, there is someone out there who will recognize you, who will raise you up and make you fractionally more of a person than you were before they hit “like.”

It is a thought for another day to consider what this democratization of recognition might be doing to us. All economists will tell you a commodity must be scarce to be valuable. What happens once we can all find recognition all the time? I’m sure Zuckerberg would be the first to admit he doesn’t really know what he has stumbled upon by creating Facebook, but he should read Hegel to get an idea of where it is going, and to discharge some of his philosophical debt.