Vinge's Principle

Vinge’s Principle says that, in domains complicated enough that perfect play is not possible, less intelligent agents will not be able to predict the exact moves made by more intelligent agents.

For example, if you knew exactly where Deep Blue would play on a chessboard, you’d be able to play chess at least as well as Deep Blue by making whatever moves you predicted Deep Blue would make. So if you want to write an algorithm that plays superhuman chess, you necessarily sacrifice your own ability to (without machine aid) predict the algorithm’s exact chess moves.
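The argument above can be made concrete with a toy sketch (mine, not from the original text; the game and all function names are illustrative). In the simple "21 game," players alternately remove 1 to 3 stones and whoever takes the last stone wins. An agent that can perfectly predict an expert's moves can play exactly as well as the expert, simply by copying the predictions:

```python
def expert_move(stones):
    """Optimal play for the 21 game: leave the opponent a multiple of 4."""
    take = stones % 4
    return take if take != 0 else 1  # losing position: any legal move

def make_copycat(predict):
    """An agent that just plays whatever it predicts the expert will play."""
    return lambda stones: predict(stones)

def play(first, second, stones=21):
    """Returns 1 if the first player takes the last stone, else 2."""
    movers = (first, second)
    turn = 0
    while True:
        stones -= movers[turn](stones)
        if stones == 0:
            return turn + 1
        turn = 1 - turn

# A copycat armed with a perfect predictor of the expert is
# indistinguishable from the expert itself.
copycat = make_copycat(expert_move)
assert play(copycat, expert_move) == play(expert_move, expert_move)
```

Contrapositively, if the copycat could not play this well, its predictor could not have been perfect, which is the principle's point: superhuman play and exact human predictability cannot coexist.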

This is true even though, as we become more confident of a chess algorithm’s power, we become more confident that it will eventually win the chess game. We become more sure of the game’s final outcome, even as we become less sure of the chess algorithm’s next move. This is Vingean uncertainty.

Now consider agents that build other agents (or build their own successors, or modify their own code). Vinge’s Principle implies that the choice to approve the successor agent’s design must be made without knowing the successor’s exact sensory information, exact internal state, or exact motor outputs. In the theory of tiling agents, this appears as the principle that the successor’s sensory information, cognitive state, and action outputs should only appear inside quantifiers. This is Vingean reflection.
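Schematically (the notation here is illustrative, not the formalism from the tiling agents literature), "only appear inside quantifiers" means the predecessor never checks any particular action of the successor; it instead approves a universally quantified statement over all of them:

```latex
\forall b \; : \; \mathrm{Outputs}(A', b) \;\rightarrow\; \mathrm{Goal}(b)
```

That is, the predecessor can know "whatever action $b$ the successor $A'$ outputs, $b$ serves the goal" without being able to derive which specific $b$ will be output, mirroring how we can be confident Deep Blue wins without predicting its moves.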

For the rule about fictional characters not being smarter than the author, see Vinge’s Law.


  • Vingean reflection

    The problem of thinking about your future self when it’s smarter than you.