Vinge's Law

Of course, I never wrote the “important” story, the sequel about the first amplified human. Once I tried something similar. John Campbell’s letter of rejection began: “Sorry—you can’t write this story. Neither can anyone else.” The moral: Keep your supermen offstage, or deal with them when they are children (Wilmar Shiras’s Children of the Atom), or when they are in disguise (Campbell’s own story “The Idealists”). (There is another possibility, one that John never mentioned to me: You can deal with the superman when s/he’s senile. This option was used to very amusing effect in one episode of the Quark television series.)

“Bookworm, Run!” and its lesson were important to me. Here I had tried a straightforward extrapolation of technology, and found myself precipitated over an abyss. It’s a problem writers face every time we consider the creation of intelligences greater than our own. When this happens, human history will have reached a kind of singularity—a place where extrapolation breaks down and new models must be applied—and the world will pass beyond our understanding. In one form or another, this Technological Singularity haunts many science-fiction writers: A bright fellow like Mark Twain could predict television, but such extrapolation is forever beyond, say, a dog. The best we writers can do is creep up on the Singularity, and hang ten at its edge.

-- Vernor Vinge, True Names and Other Dangers, p. 47.

Vinge’s Law (as rephrased by Yudkowsky) states: Characters cannot be significantly smarter than their authors. You can’t write a realistic character who is much smarter than you are, because to really know how such a character would think, you would have to be that smart yourself.

(In nonfictional form we call this Vinge’s Principle and consider it under the heading of Vingean uncertainty: You cannot exactly predict the actions of agents smarter than you, though you may be able to predict that they will successfully achieve their goals. If you could predict exactly where Deep Blue would play on a chessboard, you could play chess just as well yourself, simply by making whatever move you predicted Deep Blue would make.)
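
As a toy sketch of that argument (assuming a purely hypothetical predict_deep_blue_move oracle; none of the names below belong to any real chess-engine API), a player who simply copies exact predictions of a stronger player would be, move for move, exactly as strong as the player being predicted:

    # Toy sketch of the Deep Blue argument. `predict_deep_blue_move` is a purely
    # hypothetical oracle, not a real API: it is assumed to return exactly the
    # move Deep Blue would make in a given position.

    def predict_deep_blue_move(board):
        """Hypothetically return the exact move Deep Blue would play on `board`."""
        raise NotImplementedError("Vingean uncertainty: no such exact predictor exists.")

    class MimicPlayer:
        """Plays whatever move the oracle predicts Deep Blue would play.

        If the predictions were exact, this player's games would be move-for-move
        identical to Deep Blue's, so its chess strength would equal Deep Blue's.
        That is the point: exactly predicting a stronger player already requires
        being that strong yourself.
        """

        def choose_move(self, board):
            return predict_deep_blue_move(board)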

As a matter of literary form, Vinge suggests keeping the transhuman intelligences offstage, or dealing with them only when they are diminished or hidden: as children, in disguise, or senile.

Yudkowsky’s Guide to Intelligent Characters suggests a few ways an author can cheat Vinge’s Law, but notes that they only go so far:

  • An author can deliberately implant solution-assisting resources into earlier chapters, while the character must solve problems on the fly.

  • An author can choose only puzzles their character can solve, while the character seems to be handling whatever reality throws at them.

  • An author can declare that what seems like a really good idea will actually work, while in real life the gap between “seems like a great idea” and “actually works” is much greater.

Yudkowsky further remarks: “All three sneaky artifices allow for a limited violation of Vinge’s Law… You can sometimes get out more character intelligence, in-universe, than you put in as labor. You cannot get something for nothing… Everything Hollywood does wrong with their stereotype of genius can be interpreted as a form of absolute laziness: they try to depict genius in a way that requires literally zero cognitive work.”

Similarly, Larry Niven has observed that a puzzle which takes the author months to compose (or solve) can be solved by a character in seconds, but that writing superhumanly intelligent characters remains very hard. In other words, speed superhumanity is easier to depict than qualitative superhumanity.

Parents:

  • Vingean uncertainty

    You can’t predict the exact actions of an agent smarter than you—so is there anything you can say about them?