Epistemic and instrumental efficiency

An agent that is “efficient”, relative to you, within a domain, is one that never makes a real error that you can systematically predict in advance.

  • Epistemic efficiency (relative to you): You cannot predict directional biases in the agent’s estimates (within a domain).

  • Instrumental efficiency (relative to you): The agent’s strategy (within a domain) always achieves at least as much utility or expected utility, under its own preferences, as the best strategy you can think of for obtaining that utility (while staying within the same domain).

If an agent is epistemically and instrumentally efficient relative to all of humanity across all domains, we can just say that it is “efficient” (and almost surely superintelligent).
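
One way to state these two conditions a little more formally (the notation here is ours, added for illustration, and not part of the original definitions): write $\hat{x}_A$ for the agent’s estimate of some quantity $x$ within the domain, $H$ for everything you know, $U$ for the agent’s own utility function, $s_A$ for the agent’s selected strategy, and $S_{\text{you}}$ for the set of strategies you can think of within the domain. Then, roughly:

$$\mathbb{E}[\,\hat{x}_A - x \mid H\,] \approx 0 \qquad \text{(epistemic efficiency: no directional bias you can predict)}$$

$$\mathbb{E}[\,U \mid s_A\,] \;\ge\; \max_{s \in S_{\text{you}}} \mathbb{E}[\,U \mid s\,] \qquad \text{(instrumental efficiency: no better strategy you can find)}$$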

Epistemic efficiency

A superintelligence cannot be assumed to know the exact number of hydrogen atoms in a star; but we should not find ourselves believing that we ourselves can predict in advance that a superintelligence will overestimate the number of hydrogen atoms by 10%. Any thought process we can use to predict this overestimate should also be accessible to the superintelligence, and it can apply the same corrective factor itself.
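
To spell this out with a toy calculation (our illustration, not part of the original argument): if the true count is $n$ and we could somehow predict from our own reasoning that the superintelligence’s estimate $\hat{n}$ would come out around $1.1\,n$, then that same reasoning is available to the superintelligence, which can report $\hat{n}/1.1 \approx n$ instead; so any 10% bias we could predict in advance would not actually persist.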

The main analogy from present human experience would be the Efficient Markets Hypothesis as applied to short-term asset prices in highly-traded markets. Anyone who thinks they have a reliable, repeatable ability to predict 10% changes in the price of S&P 500 companies over one-month time periods is mistaken. If someone has a story to tell about how the economy works that requires advance-predictable 10% changes in the asset prices of highly liquid markets, we infer that the story is wrong. There can be sharp corrections in stock prices (the markets can be ‘wrong’), but not humans who can reliably predict those corrections (over one-month timescales). If e.g. somebody is consistently making money by selling options using some straightforward-seeming strategy, we suspect that such options will sometimes blow up and lose all the money gained (“picking up pennies in front of a steamroller”).

An ‘efficient agent’ is epistemically strong enough that we apply at least as much skepticism to a human proposing to outdo its estimates as, e.g., an experienced proponent of the Efficient Markets Hypothesis would apply to your uncle boasting about how he made a lot of money by predicting that General Motors’s stock would rise.

Epistemic efficiency implicitly requires that an advanced agent can always learn a model of the world at least as predictively accurate as that used by any human or human institution. If our hypothesis space were usefully wider than that of an advanced agent, such that the truth sometimes lay in our hypothesis space while being outside the agent’s hypothesis space, then we would be able to produce better predictions than the agent.

Instrumental efficiency

This is the analogue of epistemic efficiency for instrumental strategizing: by definition, humans cannot expect to imagine a strategy that improves on an efficient agent’s selected strategy (relative to the agent’s preferences, and given the options the agent has available).

If someone argues that a cognitively advanced paperclip maximizer would do X yielding M expected paperclips, and we can think of an alternative strategy Y that yields N expected paperclips, N > M, then while we cannot be confident that a paperclip maximizer will use strategy Y, we strongly predict that:

  • (1) a paperclip maximizer will not use strategy X, or

  • (2a) if it does use X, strategy Y was unexpectedly flawed, or

  • (2b) if it does use X, strategy X will yield unexpectedly high value

…where, to avoid privileging the hypothesis or fighting a rearguard action, we should usually just say, “No, a paperclip maximizer wouldn’t do X, because Y would produce more paperclips.” In saying this, we’re implicitly appealing to a version of instrumental efficiency; we’re supposing the paperclip maximizer isn’t stupid enough to miss something that seems obvious to a human thinking about the problem for five minutes.
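
As a rough formalization of this reasoning (our notation, added for illustration): write $U$ for the number of future paperclips, and suppose our own analysis says strategy $Y$ yields $N$ expected paperclips and strategy $X$ yields $M$, with $N > M$. Treating the paperclip maximizer as instrumentally efficient over this domain, observing it choose $X$ anyway should mostly move our credence toward one of our two estimates being wrong, rather than toward the agent having blundered:

$$\text{agent chooses } X \;\Longrightarrow\; \text{(with high probability)}\;\; \mathbb{E}[U \mid Y] < N \;\text{ or }\; \mathbb{E}[U \mid X] > M$$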

Instrumental efficiency implicitly requires that the agent is always able to conceptualize any useful strategy that humans can conceptualize; it must be able to search at least as wide a space of possible strategies as humans could.

Instrumentally efficient agents are presently unknown

From the standpoint of present human experience, instrumentally efficient agents are unknown outside of very limited domains. There are perfect tic-tac-toe players; but even modern chess-playing programs, with ability far in advance of any human player, are not yet so advanced that every move that looks to us like a mistake must therefore be secretly clever. We don’t dismiss out of hand the notion that a human has thought of a better move than the chess-playing algorithm, the way we dismiss out of hand a supposed secret to the stock market that predicts 10% price changes of S&P 500 companies using public information.

There is no analogue of ‘instrumental efficiency’ in asset markets, since market prices do not directly select among strategic options. Nobody has yet formulated a use of the EMH such that we could spend a hundred million dollars to guarantee liquidity, and get a well-traded asset market to directly design a liquid fluoride thorium nuclear plant, such that if anyone said before the start of trading, “Here is a design X that achieves expected value M”, we would feel confident that either the asset market’s final selected design would achieve at least expected value M or that the original assertion about X’s expected value was wrong.

By restricting the meaning even further, we get a valid metaphor in chess: an ordinary person such as you, if you’re not an International Grandmaster with hours to think about the game, should regard a modern chess program as instrumentally efficient relative to you. The chess program will not make any mistake that you can understand as a mistake. You should expect the reason why the chess program moves anywhere to be understandable only as ‘because that move had the greatest probability of winning the game’ and not in any other terms, like ‘it likes to move its pawn’. If you see the chess program move somewhere unexpected, you conclude that it is about to do exceptionally well, or that the move you expected was surprisingly bad. There’s no way for you to find any better path to the chess program’s goals by thinking about the board yourself. An instrumentally efficient agent would have this property for humans in general and the real world in general, not just you and a chess game.

Corporations are not superintelligences

For any reasonable attempt to define a corporation’s utility function (e.g. discounted future cash flows), it is not the case that we can confidently dismiss any assertion by a human that the corporation could achieve 10% more utility under that utility function by doing something differently. It is common for a corporation’s stock price to rise immediately after it fires a CEO or renounces some other mistake that many market actors had long recognized as a mistake; the market actors are not able to make a profit by correcting that error, so the error persists.

Standard economic theory does not predict that any currently known economic actor, corporations included, will be instrumentally efficient under any particular utility function. If it did, we could get the best humanly imaginable solution to any other strategic problem by making that actor’s utility function conditional on it; e.g., we could reliably obtain the best humanly imaginable nuclear plant design by paying a corporation for it via a sufficiently well-designed contract.

We have sometimes seen people try to label corporations as superintelligences, with the implication that corporations are the real threat and are as severe a threat as machine superintelligences. But the epistemic or instrumental decision-making efficiency of individual corporations is just not predicted by standard economic theory. Most corporations do not even use internal prediction markets, or try to run conditional stock-price markets to select among known courses of action. Standard economic history includes many accounts of corporations making ‘obvious mistakes’, and these accounts are not questioned in the way that, e.g., a persistent large predictable error in short-run asset prices would be questioned.

Since corporations are not instrumentally efficient (or epistemically efficient), they are not superintelligences.

Children:

  • Time-machine metaphor for efficient agents

    Don’t imagine a paperclip maximizer as a mind. Imagine it as a time machine that always spits out the output leading to the greatest number of future paperclips.

Parents:

  • Advanced agent properties

    How smart does a machine intelligence need to be, for its niceness to become an issue? “Advanced” is a broad term to cover cognitive abilities such that we’d need to start considering AI alignment.