Big Ideas:
A Question of Timing
The Most Human Human, by Brian Christian
Wednesday, 27 April 2011

Is it age? Every book that I read these days seems to be utterly remarkable, unprecedented, world-changingly important. At my age, it ought, one would think, to be the other way round. Nothing new under the sun and all that. But no: I appear to have lived just long enough to glimpse the sun for the first time.

The subtitle of Brian Christian’s The Most Human Human has a banal and fatuous ring, just like most other subtitles these days. “What Talking with Computers Teaches Us About What It Means to Be Alive.” That’s comprehensive! “Talking with computers”! “Being alive”! The only way to make it sound even more portentous would be to throw in something about changing the world forever. But that’s exactly what’s already implicit: the suggestion that we wouldn’t know much about being alive if we couldn’t talk to computers — something that almost everyone knows was impossible until quite recently. The world has been changed, probably forever. To demonstrate the point, you have only to turn Christian’s subtitle into a question, and interrogate the recent past. What, in 1960, say, did talking with computers have to tell us about what it meant to be alive?

Exactly nothing. For one thing, there was no talking with computers in those days. ELIZA, the first AI program capable of dialogue, was still a few years in the future. More to the point, nobody (beyond a tiny handful of visionaries like Claude Shannon) had any idea that a machine of any kind had anything to teach about being human (which, for our purposes, is what “being alive” means). Machines were tools, and they were also threats, just as they had been since the early days of the Industrial Revolution, when French workers, as the legend goes, invented sabotage by throwing their wooden sabots into looms, and Mary Shelley envisioned Dr Frankenstein’s monster. The computer was simply the latest in a string of inventions whose unanticipated powers might, many feared, bring down Promethean punishment on mankind. Computers were what made annihilation by ballistic missiles possible. They told us what it meant not to be alive.

So, if you were a bright young person in 1960, an interest in computers would probably carry you away from the pursuit of humanist wisdom that motivates the prodigious Brian Christian. (Can he really have been born in 1984?) And if you didn’t choose cold blue steely science, you would make your mark in the wilds of the Peace Corps, then in the first flower of its Rousseauvian idealism.

In 1960, computers were gigantesque, exorbitantly expensive, and not very powerful. As that changed, so did the world of business, which went from humdrum to hot in the same years that computers shrank to PC proportions. In 1985, asking a computer what it meant to be alive might very well take the form of a Lotus spreadsheet, splaying out competing mortgage options. Not very transcendent stuff, but something: a computer might help you live a (marginally) better life. By the late Nineties, talking with a computer meant talking with the entire world. Email displayed a dark side of bad manners that the sheer clunkiness of snail mail had helped to conceal: not only did hitting “Send” prematurely expose your id in unflattering ways, but it also tempted your correspondent to “share” the spectacle of your bad behavior with, ultimately, everyone else with a computer. That something genuinely new was going on is clear from the fact that the era’s countless flame wars, far from killing anyone, provided a global learning experience. Not only did we all learn something about self-restraint, but we learned it together.

If you want to know what a computer has to tell you about being alive today, you face the daunting threshold problem of defining “computer.” Is your iPhone a computer? Your iPod? What about all the diagnostic tools, such as fMRI, that depend upon computational wizardry for their effectiveness? Physical gadgets will be with us for a long time, but “computers” seem to be dissolving into “computing,” something performed by many types of device. Unlike the computing of 1960, modern computing is ubiquitous and intimate; we wouldn’t want to live without it.

When I read the excerpt from The Most Human Human that appeared in The Atlantic a few months ago, I pegged Brian Christian for a forty-something journalist specializing in science and ethics, a field that ordinarily leaves me cold. I thought that, before his piece came to an end, Christian would be sounding hair-tearing alarms about the growing power of AI chatbots, which, he pointed out early on, nearly passed the Turing Test in 2008. All we need say about the Turing Test at this point is that it posits a threshold at which computers might be said to be capable of thought: the ability to pass, before a jury of human beings, as a human conversational partner. The Turing Test became a bulwark of humanity, the breach of which by human-seeming computers would signify a Something Awful that I expected Christian to spell out in hectoring detail. What toppled instead, however, was my own expectation.

Christian turned out to be an extraordinarily well-educated young man — how right he is to dedicate his book to his teachers! — without a moping bone in his body. Far from being a passive journalist (or disgusted observer), Christian had the computer-science chops that would enable him to enter the contest himself. It’s at this point that we need to look back at his subtitle. What he does not say is that he beat the computers in the Turing Test. There are no computers, really, in The Most Human Human. There are only people “being themselves,” and other people trying to make machines simulate people being themselves.

“Being himself” is the very first thing that Christian decides not to do, and herein lies the glory of his undertaking. Like legions of high-school students facing aptitude tests, he is advised by the Test’s manager that there is no special training that will enhance his performance. Piffle, says Christian.

So, I must say, my intention from the start was to be as thoroughly disobedient to the organizers’ advice to “just show up at Brighton in September and ‘be myself’” as possible — spending the months leading up to the test gathering as much information, preparation, and experience as possible and coming to Brighton ready to give it everything I had.

Ordinarily, there wouldn’t be very much odd about this notion at all, of course — we train and prepare for tennis competitions, spelling bees, standardized tests, and the like. But given that the Turing test is meant to evaluate how human I am, the implication seems to be that being human (and being oneself) is about more than simply showing up. I contend that it is.

Unlike Christian, I lived through the late Sixties, and I’ve been contending for a long time now that just showing up is not enough. It has been an unfashionable thing to say. But unlike Christian, I came of age long before learning from AI (“computers”) was an option. It’s very hard not to be jealous of a bright young man whose timing appears to have been excellent. He can do much better than mouth plausible platitudes about contemplation and great books. He can “do the math,” and he does it in a way that any intelligent reader will grasp at once and with pleasure. If I weren’t so grateful, and if I didn’t so much admire Christian for making the most of his superlative opportunity, I’d be eaten alive by the green-eyed monster.