Superintelligent

Machine performance inside a domain (class of problems) can potentially be (see the sketch after this list):

  • Optimal (impossible to do better)

  • Strongly superhuman (better than all humans by a significant margin)

  • Weakly superhuman (better than all the humans most of the time and most of the humans all of the time)

  • Par-human (performs about as well as most humans, better in some places and worse in others)

  • Subhuman or infrahuman (performs worse than most humans)
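
As an illustration only, here is one way these tiers could be operationalized from per-trial scores for one machine and a panel of humans on the same problems. The names (`Tier`, `classify`), the 50% cutoffs standing in for “most of the time” and “most of the humans”, and the `margin` parameter are my own assumptions, not part of any standard definition.

```python
from enum import Enum

class Tier(Enum):
    OPTIMAL = "optimal"                          # impossible to do better
    STRONGLY_SUPERHUMAN = "strongly superhuman"  # beats every human by a margin
    WEAKLY_SUPERHUMAN = "weakly superhuman"
    PAR_HUMAN = "par-human"
    INFRAHUMAN = "subhuman / infrahuman"

def classify(machine, humans, optimum=None, margin=0.0):
    """machine: one score per trial (higher is better); humans: one score list
    per human, aligned with `machine` by trial; optimum: best achievable score
    per trial, if known; margin: what counts as a 'significant' margin."""
    trials = range(len(machine))

    if optimum is not None and all(machine[t] >= optimum[t] for t in trials):
        return Tier.OPTIMAL

    # "Better than all humans by a significant margin", here on every trial.
    if all(machine[t] > max(h[t] for h in humans) + margin for t in trials):
        return Tier.STRONGLY_SUPERHUMAN

    # Fraction of trials on which the machine beats *all* the humans ...
    beats_everyone = sum(all(machine[t] > h[t] for h in humans)
                         for t in trials) / len(machine)
    # ... and fraction of humans whom the machine beats on *every* trial.
    beaten_always = sum(all(machine[t] > h[t] for t in trials)
                        for h in humans) / len(humans)
    if beats_everyone > 0.5 and beaten_always > 0.5:
        return Tier.WEAKLY_SUPERHUMAN

    # Worse than most humans on most trials: infrahuman; otherwise par-human.
    worse_than_most = sum(sum(h[t] > machine[t] for h in humans) > len(humans) / 2
                          for t in trials) / len(machine)
    return Tier.INFRAHUMAN if worse_than_most > 0.5 else Tier.PAR_HUMAN
```

Under these assumptions, `classify([10, 10, 10], [[1, 2, 3], [2, 3, 4]])` returns `Tier.STRONGLY_SUPERHUMAN`, and the weakly superhuman test directly mirrors “better than all the humans most of the time and most of the humans all of the time”.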

A superintelligence is either ‘strongly superhuman’, or else at least ‘optimal’, across all cognitive domains. It can’t win against a human who plays well at logical tic-tac-toe, since optimal play on both sides forces a draw; but it does play optimally there. In a real-world game of tic-tac-toe that it strongly wanted to win, it might sabotage the opposing player, deploying superhuman strategies on the richer “real world” gameboard.

I. J. Good originally used ‘ultraintelligence’ to denote the same concept: “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever.”

To say that a hypothetical agent or process is “superintelligent” will usually imply that it has all the advanced-agent properties.

Superintelligences are still bounded (if the character of physical law at all resembles the Standard Model of physics). They are (presumably) not infinitely smart, infinitely fast, all-knowing, or able to achieve every describable outcome using their available resources and options. However:

  • A supernova isn’t infinitely hot, but it’s still pretty warm. “Bounded” does not imply “small”. You should not try to walk into a supernova using a standard flame-retardant jumpsuit after reasoning, correctly but unhelpfully, that it is only boundedly hot.

  • A superintelligence doesn’t know everything and can’t perfectly estimate every quantity. However, to say that something is “superintelligent” or superhuman/optimal in every cognitive domain should almost always imply that its estimates are epistemically efficient relative to every human and human group. Even a superintelligence may not be able to exactly estimate the number of hydrogen atoms in the Sun, but a human shouldn’t be able to say, “Oh, it will probably underestimate the number by 10% because hydrogen atoms are pretty light”; the superintelligence knows that too. For us to know better than the superintelligence is at least as implausible as our being able to predict a 20% price increase in Microsoft’s stock six months in advance without any private information. (A rough formalization of this efficiency claim follows the list.)

  • A superintelligence is not omnipotent and can’t obtain every describable outcome. But to say that it is “superintelligent” should suppose at least that it is instrumentally efficient relative to humans: We should not suppose that a superintelligence carries out any policy \(\pi_0\) such that a human can think of a policy \(\pi_1\) which would get more of the agent’s utility. To put it another way, the assertion that a superintelligence optimizing for utility function \(U\) would pursue a policy \(\pi_0\) is by default refuted if we observe some \(\pi_1\) such that, so far as we can see, \(\mathbb E[U | \pi_0] < \mathbb E[U | \pi_1].\) We’re not sure the efficient agent will do \(\pi_1\); there might be an even better alternative we haven’t foreseen. But we should regard it as very likely that it won’t do \(\pi_0.\) (A minimal code sketch of this refutation rule follows the list.)
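
A rough formalization of the epistemic-efficiency claim above, in notation of my own rather than the original text’s: let \(\hat{X}\) be the superintelligence’s estimate of some quantity \(X\), and let \(I_H\) be everything a human or human group \(H\) knows, including \(\hat{X}\) itself. Epistemic efficiency relative to \(H\) then says, roughly, that no correction to the estimate is predictable from \(I_H\):

\[ \mathbb E[X \mid \hat{X}, I_H] = \hat{X}. \]

Any adjustment the human could foresee (“it will probably underestimate by 10%”) would already be folded into \(\hat{X}\), just as a predictable 20% rise would already be folded into Microsoft’s stock price.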
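
And a minimal code sketch, using hypothetical names of my own, of the default-refutation rule for instrumental efficiency in the last bullet; `expected_utility` stands in for our best estimate of \(\mathbb E[U | \cdot]\) under the agent’s utility function \(U\):

```python
def default_refuted(predicted_policy, expected_utility, human_proposals):
    """Is the claim 'the efficient agent will pursue predicted_policy' refuted
    by default?  It is, if some human-proposed alternative looks strictly
    better under the agent's own (estimated) utility function."""
    return any(expected_utility(alt) > expected_utility(predicted_policy)
               for alt in human_proposals)
```

Note the asymmetry: surviving this test does not show the agent will pursue the predicted policy, since there may be a still better option nobody thought of, but failing it is strong evidence against the prediction.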

If we’re talking about a hypothetical superintelligence, probably we’re either supposing that an intelligence explosion happened, or we’re talking about a limit state approached by a long period of progress.

Many/most problems in AI alignment seem like they ought to first appear at a point short of full superintelligence. As part of the project of making discourse about advanced agents precise, we should try to identify the key advanced-agent property more precisely than saying “this problem would appear on approaching superintelligence”; supposing superintelligence is usually sufficient, but it will rarely be necessary.

For the book, see Nick Bostrom’s Superintelligence.

Parents:

  • Advanced agent properties

    How smart does a machine intelligence need to be, for its niceness to become an issue? “Advanced” is a broad term to cover cognitive abilities such that we’d need to start considering AI alignment.