Corporations vs. superintelligences

It is sometimes suggested that corporations are relevant analogies for superintelligences. To evaluate this analogy without simply falling prey to the continuum fallacy, we need to consider which specific thresholds from the standard list of advanced agent properties can reasonably be said to apply in full force to corporations. This suggests roughly the following picture:

  • Corporations generally exhibit infrahuman, par-human, or high-human levels of ability on non-heavily-parallel tasks. On cognitive tasks that parallelize well across massive numbers of humans being paid to work on them, corporations exhibit superhuman levels of ability compared to an individual human.

  • To try to grasp the overall performance boost from organizing into a corporation, consider a Microsoft-sized corporation trying to play Go in 2010. The corporation could potentially pick out its strongest player and so gain high-human performance, but would probably not play very far above that individual level, and so would not be able to defeat the individual world champion. Consider also the famous chess game of Kasparov vs. The World, which Kasparov ultimately won.

  • On massively parallel cognitive tasks, corporations exhibit strongly superhuman performance; the best passenger aircraft designable by Boeing seems likely to be far superior to the best passenger aircraft that could be designed by a single engineer at Boeing.

  • In virtue of being composed of humans, corporations have most of the advanced-agent properties that humans themselves do:

    • They can deploy general intelligence and cross-domain consequentialism.

    • They possess big-picture strategic awareness and operate in the real-world domain.

    • They can deploy realistic psychological models of humans and try to deceive them.

  • Also in virtue of being composed of humans, corporations are not in general Vingean-unpredictable, hence not systematically cognitively uncontainable. Unless their constituent researchers know secret phenomena of a domain, corporations are not strongly cognitively uncontainable.

  • Corporations are not epistemically efficient relative to humans, except perhaps in limited domains for the extremely few that have deployed internal prediction markets with sufficiently high participation and subsidy. (The stock prices of large corporations are efficient, but the corporations aren’t; often the stock price tanks after the corporation does something stupid.)

  • Corporations are not instrumentally efficient. No currently known method exists for aggregating human strategic acumen into an instrumentally efficient conglomerate the way that prediction markets try to do for epistemic predictions about near-term testable events. It is often possible for a human to see a better strategy for accomplishing the corporation’s pseudo-goals than the one the corporation is pursuing.

  • Corporations generally exhibit little interest in fundamental cognitive self-improvement; e.g., extremely few of them have deployed internal prediction markets (perhaps because the predictions of such markets are often embarrassing to overconfident managers). Since corporate intelligence is almost entirely composed of humans, most of the basic algorithms running a corporation are not subject to improvement by the corporation. Attempts at crude analogues of such self-improvement tend to, e.g., bog down the entire corporation in bureaucracy and internal regulations, rather than resulting in genetic engineering of better executives or an intelligence explosion.

  • Corporations have no basic speed advantage over their constituent humans, since speed does not parallelize.
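
A minimal sketch of the arithmetic behind this last point, using Amdahl's law with purely illustrative numbers (the task fractions and worker counts below are assumptions, not figures from this article): hiring more humans shortens only the parallelizable portion of a task, so the serial portion, and hence the corporation's serial speed of thought, still runs at the pace of a single human.

```python
# Toy illustration of why "speed does not parallelize": with Amdahl's-law
# arithmetic, extra workers shorten only the parallelizable part of a task.
# All numbers here are illustrative assumptions, not figures from the article.

def overall_speedup(parallel_fraction: float, workers: int) -> float:
    """Speedup on a task where `parallel_fraction` of the work splits
    perfectly across `workers` and the rest must be done serially."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / workers)

# A heavily parallel task (e.g. detailed engineering of many subsystems):
print(overall_speedup(parallel_fraction=0.95, workers=10_000))  # ~20x

# A mostly serial task (e.g. one player's running chain of Go judgments):
print(overall_speedup(parallel_fraction=0.10, workers=10_000))  # ~1.1x

# However many humans are hired, the serial part still runs at one human's
# speed; the speedup is capped at 1 / serial_fraction.
```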

Sometimes discussion of analogies between corporations and hostile superintelligences focuses on a purported misalignment with human values.

As mentioned above, corporations are composed of consequentialist agents, and to that extent can often deploy consequentialist reasoning. The humans inside the corporation are not always all pulling in the same direction, and this can lead to non-consequentialist behavior by the corporation considered as a whole; e.g., an executive may not maximize financial gain for the company out of fear of personal legal liability, or simply because of other life concerns.

On many occasions some corporations have acted psychopathically with respect to the outside world, e.g. tobacco companies. However, even tobacco companies are still composed entirely of humans who might balk at being, e.g., turned into paperclips. It is possible to imagine circumstances under which a Board of Directors might wedge itself into pressing a button that turned everything, including themselves, into paperclips. However, acting in a unified way to pursue an interest of the corporation that is contrary to the non-financial personal interests of all executives and directors and employees and shareholders does not well characterize the behavior of most corporations under most circumstances.

The conditions for the coherence theorems implying consistent expected utility maximization are not met in corporations, as they are not met in the constituent humans. On the whole, big-picture corporate strategy seems to behave more like Go than like airplane design, and indeed corporations are usually strategically dumber than their smartest employee and often seem to be strategically dumber than their CEOs. Running down the list of Convergent instrumental strategies suggests that corporations exhibit some such behaviors sometimes, but not all of them, nor all of the time. Corporations sometimes act like they wish to survive, but sometimes act like their executives are lazy in the face of competition. The directors and employees of the company will not go to literally any lengths to ensure the corporation’s survival, or protect the corporation’s (nonexistent) representation of its utility function, or converge their decision processes toward optimality (again consider the lack of internal prediction markets to aggregate epistemic capabilities on near-term resolvable events, and the lack of any known method for agglomerating human instrumental strategies into an efficient whole).

Corporations exist in a strongly multipolar world; they operate in a context that includes other corporations of equal size, alliances of corporations of greater size, governments, an opinionated public, and many necessary trade partners, all of whom are composed of humans running at equal speed and of equal or greater intelligence and strategic acumen. Furthermore, many of the resulting compliance pressures are applied directly to the individual personal interests of the directors and managers of the corporation; i.e., the decision-making CEO might face individual legal sanction or public-opinion sanction independently of the corporation’s expected average earnings. Even if the corporation did, e.g., successfully assassinate a rival’s CEO, not all of the resulting benefits to the corporation would accrue to the individuals who had taken the greatest legal risks to run the project.

Potential strong disanalogies to a paperclip maximizer include the following:

  • A paperclip maximizer can get much stronger returns on cognitive investment and reinvestment, owing to being able to optimize its own algorithms at a lower level of organization.

  • A paperclip maximizer can operate in much faster serial time.

  • A paperclip maximizer can scale single-brain algorithms (rather than hiring more humans to try to communicate with each other across verbal barriers, a paperclip maximizer can potentially solve problems that require one BIG brain using high internal bandwidth).

  • A paperclip maximizer can continuously scale up perfectly cooperative and coordinated copies of itself as more computational power becomes available.

  • Depending on the returns on cognitive investment, and the timescale on which it occurs, a paperclip maximizer undergoing an intelligence explosion can end up with a strong short-term intelligence lead over the nearest rival AI projects (e.g. because the times separating the different AI projects were measured on a human scale, with the second-leading project 2 months behind the leading project, and this time difference was amplified by many orders of magnitude by fast serial cognition once the leading AI became capable of it). A toy calculation illustrating this amplification appears after this list.

  • Strongly superhuman cognition potentially leads the paperclip maximizer to rapidly overcome initial material disadvantages.

    • E.g. a paperclip maximizer that can crack protein folding to develop its own biological organisms or bootstrap nanotechnology, or that develops superhuman psychological manipulation of humans, potentially acquires a strong positional advantage over all other players in the system and can ignore game-theoretic considerations (you don’t have to play the Iterated Prisoner’s Dilemma if you can simply disassemble the other agent and use their atoms for something else).

  • Strongly superhuman strategic acumen means the paperclip maximizer can potentially deploy tactics that literally no human has ever imagined.

  • Serially fast thinking and serially fast actions can take place faster than humans (or corporations) can react.

  • A paperclip maximizer is actually motivated to literally kill all opposition, including all humans, and turn everything within reach into paperclips.
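
Here is the toy calculation referred to in the lead-amplification bullet above. The serial speedup factor and the two-month gap are illustrative assumptions rather than figures from this article; the point is only that a modest calendar-time lead, multiplied by a large serial speedup, becomes an enormous lead in subjective thinking time.

```python
# Toy calculation for the "lead amplification" bullet above. The serial
# speedup factor and the two-month calendar gap are illustrative assumptions.

HOURS_PER_RESEARCHER_YEAR = 2_000     # rough working hours in a human year
SERIAL_SPEEDUP = 1_000_000            # assumed serial speed advantage over a human
LEAD_IN_MONTHS = 2                    # assumed calendar lead over the second project

lead_hours = LEAD_IN_MONTHS * 30 * 24              # calendar lead expressed in hours
subjective_hours = lead_hours * SERIAL_SPEEDUP     # thinking time gained at speed
subjective_researcher_years = subjective_hours / HOURS_PER_RESEARCHER_YEAR

print(f"{subjective_researcher_years:,.0f} subjective researcher-years of lead")
# With these assumed numbers, a 2-month calendar lead corresponds to roughly
# 720,000 subjective researcher-years of thinking before the rival project even
# reaches the point where the leader started running fast.
```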

To the extent one credits the dissimilarities above as relevant to whatever empirical question is at hand, arguing by analogy from corporations to superintelligences (especially under the banner of “corporations are superintelligences!”) would be an instance of the noncentral fallacy or reference class tennis. Using the analogy to argue that “superintelligences are no more dangerous than corporations” would be the “precedented therefore harmless” variation of the harmless supernova fallacy. Using the analogy to argue that “corporations are the real danger,” without having previously argued out that superintelligences are harmless or that superintelligences are sufficiently improbable, would be derailing.

Parents:

  • Advanced agent properties

    How smart does a machine intelligence need to be, for its niceness to become an issue? “Advanced” is a broad term to cover cognitive abilities such that we’d need to start considering AI alignment.