Epistemic exclusion

An “epistemic exclusion” would be a hypothetical form of AI limitation that made the AI not model (and, if reflectively stable, not want to model) some particular part of physical or mathematical reality, or model it only using some restricted model class that didn’t allow for the maximum possible predictive accuracy. For example, a behaviorist genie would not want to model human minds (except using a tightly restricted model class) in order to avoid Mindcrime, programmer manipulation, and other possible problems.

At present, nobody has investigated how to do this (in any reflectively stable way), and there are all sorts of obvious problems stemming from the fact that, in reality, most facts are linked to a significant number of other facts. How would you make an AI that was really good at predicting everything else in the world but didn’t know or want to know what was inside your basement? Intuitively, it seems likely that a lot of naive solutions would, e.g., just cause the AI to de facto end up constructing something that wasn’t technically a model of your basement, but played the same role as a model of your basement, in order to maximize predictive accuracy about everything that wasn’t your basement. We could similarly ask how it would be possible to build a really good mathematician that never knew or cared whether 333 was a prime number, and whether this would require it to also ignore the ‘casting out nines’ procedure whenever it saw 333 as a decimal number, or what would happen if we asked it to multiply 3 by (100 + 10 + 1), and so on.
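
To make the entanglement concrete, here is a minimal sketch (in Python, purely illustrative and not part of the original text) of how two ordinary arithmetic procedures already pin down the supposedly excluded fact: casting out nines shows that 333 is divisible by 9, and the requested multiplication exhibits 333 as the product 3 × 111, so either route reveals that 333 is composite unless the exclusion also covers them.

```python
# Two ordinary arithmetic procedures that "leak" the excluded fact
# (whether 333 is prime) without ever running a primality test.

def digit_sum_mod_9(n: int) -> int:
    """Casting out nines: the digit sum of n is congruent to n mod 9."""
    return sum(int(d) for d in str(n)) % 9

# Leak 1: casting out nines shows 333 is divisible by 9, hence composite.
assert digit_sum_mod_9(333) == 0            # 3 + 3 + 3 = 9, so 9 divides 333
print("333 divisible by 9:", 333 % 9 == 0)  # True -> 333 cannot be prime

# Leak 2: an unrelated multiplication request reconstructs the same number
# together with an explicit factorization.
product = 3 * (100 + 10 + 1)                # = 3 * 111 = 333
print("3 * (100 + 10 + 1) =", product)      # exhibits 333 as a nontrivial product
```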

That said, most practical reasons to create an epistemic exclusion (e.g., against modeling humans in too much detail, or against modeling distant alien civilizations and superintelligences) would come with a practical purpose the exclusion serves, and hence some level of in-practice exclusion that was good enough, which might not require, e.g., maximum predictive accuracy about everything else combined with zero predictive accuracy about the excluded subject.

Parents:

  • Task-directed AGI

    An advanced AI that’s meant to pursue a series of limited-scope goals given it by the user. In Bostrom’s terminology, a Genie.