Vinge's Law

Of course, I never wrote the "important" story, the sequel about the first amplified human. Once I tried something similar. John Campbell's letter of rejection began: "Sorry—you can't write this story. Neither can anyone else." The moral: Keep your supermen offstage, or deal with them when they are children (Wilmar Shiras's Children of the Atom), or when they are in disguise (Campbell's own story "The Idealists"). (There is another possibility, one that John never mentioned to me: You can deal with the superman when s/he's senile. This option was used to very amusing effect in one episode of the Quark television series.)

"Bookworm, Run!" and its lesson were important to me. Here I had tried a straightforward extrapolation of technology, and found myself precipitated over an abyss. It's a problem writers face every time we consider the creation of intelligences greater than our own. When this happens, human history will have reached a kind of singularity—a place where extrapolation breaks down and new models must be applied—and the world will pass beyond our understanding. In one form or another, this Technological Singularity haunts many science-fiction writers: A bright fellow like Mark Twain could predict television, but such extrapolation is forever beyond, say, a dog. The best we writers can do is creep up on the Singularity, and hang ten at its edge.

-- Vernor Vinge, True Names and Other Dangers, p. 47.

Vinge's Law (as rephrased by Yudkowsky) states: Characters cannot be significantly smarter than their authors. You can't have a realistic character who is much smarter than the author, because to really know how such a character would think, you'd have to be that smart yourself.

(In nonfictional form we call this Vinge's Principle and consider it under the heading of Vingean uncertainty: You cannot exactly predict the actions of agents smarter than you, though you may be able to predict that they'll successfully achieve their goals. If you could predict exactly where Deep Blue would play on a chessboard, you could play equally good chess yourself by moving where you predicted Deep Blue would.)
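The Deep Blue argument above can be made concrete in a toy sketch (hypothetical agents, not actual chess engines): if you can predict a stronger player's choice exactly, then simply playing your prediction makes you exactly as strong as that player. The agent names and the toy "position" representation here are illustrative assumptions, not anything from the original text.

```python
import random

def strong_agent(position):
    """The 'smarter' agent: picks the highest-payoff move."""
    return max(position, key=lambda move: move[1])

def predict_strong_agent(position):
    """A perfect predictor of the strong agent's choice.

    Exact prediction requires doing the same computation the strong
    agent does, which is the crux of the argument.
    """
    return strong_agent(position)

def mimic_agent(position):
    """An agent that just plays its prediction of the strong agent."""
    return predict_strong_agent(position)

# A toy 'position' is a list of (move, payoff) pairs.
random.seed(0)
position = [(m, random.random()) for m in "abcde"]

# The mimic necessarily plays as well as the agent it predicts.
assert mimic_agent(position) == strong_agent(position)
print("Mimic matches the strong agent's move:", mimic_agent(position)[0])
```

The contrapositive is the interesting direction: since you are *not* as strong a player as Deep Blue, you cannot be computing exact predictions of its moves; at best you can predict the outcome (it wins) without predicting the path.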

As a matter of literary form, Vinge suggests keeping the transhuman intelligences out of sight, or only dealing with them after some catastrophe has reduced them to a mortal level.

Yudkowsky's Guide to Intelligent Characters suggests that there are limited ways for an author to cheat Vinge's Law, but that they only go so far:

  • An author can deliberately implant solution-assisting resources into earlier chapters, while the character must solve problems on the fly.

  • An author can choose only puzzles their character can solve, while the character seems to be handling whatever reality throws at them.

  • An author can declare that what seems like a really good idea will actually work, while in real life the gap between "seems like a great idea" and "actually works" is much greater.

Yudkowsky further remarks: "All three sneaky artifices allow for a limited violation of Vinge's Law… You can sometimes get out more character intelligence, in-universe, than you put in as labor. You cannot get something for nothing… Everything Hollywood does wrong with their stereotype of genius can be interpreted as a form of absolute laziness: they try to depict genius in a way that requires literally zero cognitive work."

Similarly, Larry Niven has observed that puzzles that take the author months to solve (or compose) can be solved by a character in seconds, but that writing superhumanly intelligent characters is still very hard. In other words, speed superhumanity is easier to depict than cognitive superhumanity.


See also:

  • Vingean uncertainty: You can't predict the exact actions of an agent smarter than you, so is there anything you can say about them?