Mind design space is wide

Imagine an enormous space of possible mind designs, within which all humans who’ve ever lived are a single tiny dot. We all have the same cerebral cortex, cerebellum, thalamus, etcetera. There’s an instinct to imagine “Artificial Intelligences” as a kind of weird tribe that lives across the river, and to ask what peculiar customs this foreign tribe might have. Really, the term “Artificial Intelligence” just refers to the entire space of possibilities outside the tiny human dot. So to most questions about AI, the answer may be, “It depends on the exact mind design of the AI.”

By similar reasoning, a universal claim over all possible AIs is much more dubious than a claim about at least one AI. If you imagine that in the vast space of all mind designs there are at least a billion binary design choices that can be made, then there are at least \(2^{1,000,000,000}\) distinct mind designs. We might say that any claim of the form “Every possible mind design has property \(P\)” has \(2^{1,000,000,000}\) chances to be false, while any claim of the form “There exists at least one mind design with property \(P\)” has \(2^{1,000,000,000}\) chances to be true. This doesn’t preclude us from thinking about properties that most mind designs might have. But it does suggest that if we dislike some property \(P\) that seems likely to hold in most designs, we may be able to find some special mind design for which \(P\) is false.
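The counting argument above can be made concrete with a toy sketch. Here \(n\) is tiny (the text's \(n\) is a billion) so the whole design space can be enumerated, and the property \(P\) is an arbitrary made-up example:

```python
from itertools import product

# Toy model: a "mind design" is a tuple of n binary design choices,
# so there are 2**n distinct designs. We use a tiny n so the whole
# space can be enumerated.
n = 10
designs = list(product([0, 1], repeat=n))
assert len(designs) == 2 ** n  # 1024 designs for n = 10

# An arbitrary property P of a design (illustrative only):
# "the first design choice is set to 1".
def has_P(design):
    return design[0] == 1

# A universal claim ("every design has P") is refuted by a single
# counterexample, so it has 2**n chances to be false.
universal = all(has_P(d) for d in designs)

# An existential claim ("some design has P") is confirmed by a single
# witness, so it has 2**n chances to be true.
existential = any(has_P(d) for d in designs)
```

Here `universal` comes out `False` (half the designs lack \(P\)) while `existential` comes out `True`, mirroring the asymmetry between the two kinds of claim.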


  • Orthogonality Thesis

    Will smart AIs automatically become benevolent, or automatically become hostile? Or do different AI designs imply different goals?