Almost all real-world domains are rich

The proposition that almost all real-world problems occupy rich domains, or could occupy rich domains so far as we know, because most things in the real world are entangled with many other real things.

When playing a real-world game of chess, it’s possible to:

  • make a move that is especially likely to fool the opponent, given their cognitive psychology

  • annoy the opponent

  • try to cause a memory error in the opponent

  • bribe the opponent with an offer to let them win future games

  • bribe the opponent with candy

  • drug the opponent

  • shoot the opponent

  • switch pieces on the game board when the opponent isn’t looking

  • bribe the referees with money

  • sabotage the cameras to make it look like the opponent cheated

  • force some poorly designed circuits to behave as a radio so that you can break onto a nearby wireless Internet connection and build a smarter agent on the Internet who will create molecular nanotechnology and optimize the universe to make it look just like you won the chess game

  • or accomplish whatever was meant to be accomplished by ‘winning the game’ via some entirely different path.

Since ‘almost all’ and ‘might be’ are not precise, for operational purposes this page’s assertion will be taken to be: “Every superintelligence with options more complicated than those of a Zermelo-Fraenkel provability Oracle should, from our subjective perspective, be assigned at least a 1/3 probability of being cognitively uncontainable.”
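One way to render this operational claim in symbols (a sketch only, reading ‘more complicated options’ as a strictly larger option set; ‘Options’ and ‘Uncontainable’ are informal predicates, not quantities defined on this page):

$$\forall A:\ \mathrm{Options}(A) \supsetneq \mathrm{Options}(O_{\mathrm{ZF}}) \;\Rightarrow\; P\big(\mathrm{Uncontainable}(A)\big) \ge \tfrac{1}{3},$$

where $A$ ranges over superintelligences, $O_{\mathrm{ZF}}$ is the Zermelo-Fraenkel provability Oracle, and $P$ is our subjective probability.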

Comment: (Work in progress; the notes below still need to be filled out.)

A central difficulty of one approach to Oracle research is to constrain the Oracle’s options so drastically that the domain becomes strategically narrow from its perspective, and for us to know this fact well enough to proceed.
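For concreteness, here is a minimal sketch of what such a drastically narrowed option set might look like. All names are hypothetical and the proof check is a stand-in table, not a real ZF verifier; the point is the interface, in which the Oracle’s only output channel is a single bit.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ProofQuery:
    """A candidate theorem together with a purported ZF proof."""
    theorem: str
    proof: str


# Stand-in for a mechanical proof verifier: a tiny table of accepted pairs.
_ACCEPTED: set[tuple[str, str]] = {
    ("0 = 0", "reflexivity of equality"),
}


def zf_oracle(query: ProofQuery) -> bool:
    """The Oracle's entire option set: emit True or False, nothing else.

    Whatever reasoning happens inside, this single bit is the only channel
    through which the Oracle touches the world. Contrast with an agent
    playing chess in the real world, whose options include everything on
    the list above (bribery, sabotage, improvised radios, ...).
    """
    return (query.theorem, query.proof) in _ACCEPTED


print(zf_oracle(ProofQuery("0 = 0", "reflexivity of equality")))  # True
print(zf_oracle(ProofQuery("P = NP", "trust me")))                # False
```

The hard part, which this sketch glosses over, is knowing that the physical implementation really is this narrow, e.g. ruling out side channels like the circuits-behaving-as-a-radio example above.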

Consider the gravitational influence on the Moon of a pebble thrown on Earth: the entanglement is physically real, but it does not seem usefully controllable, because we think the AI cannot possibly isolate any controllable effect of it.
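For a sense of scale (assuming, purely for illustration, a 100 g pebble and the mean Earth–Moon distance of about $3.8 \times 10^8$ m):

$$a_{\text{Moon}} \approx \frac{G\, m_{\text{pebble}}}{d^{2}} \approx \frac{(6.7 \times 10^{-11})(0.1)}{(3.8 \times 10^{8})^{2}} \approx 5 \times 10^{-29}\ \text{m/s}^2,$$

an acceleration far below anything the AI could plausibly isolate or steer.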

When we build an agent based on our belief that we’ve found an exception to this general rule, we are violating the Omni Test.

Central examples: That Alien Message; the Zermelo-Fraenkel provability Oracle.

