Artificial General Intelligence

An “Artificial General Intelligence” is a machine intelligence possessed of some form of the same “significantly more generally applicable” intelligence that distinguishes humans from our nearest chimpanzee relatives. A bee builds hives, a beaver builds dams, a human looks at both and imagines a dam with honeycomb structure. We can drive cars or do algebra, even though no similar problems were found in our environment of evolutionary adaptedness, and we haven’t had time to evolve further to match cars or algebra. The brain’s algorithms are sufficiently general, sufficiently cross-domain, that we can learn a tremendous variety of new domains within our lifetimes. We are not perfectly general—we have an easier time learning to walk than learning to do abstract calculus, even though the latter is much easier in an objective sense. But we’re sufficiently general that we can figure out Special Relativity and engineer skyscrapers despite our not having those abilities at “compile time”. An Artificial General Intelligence has the same property; it can learn a tremendous variety of domains, including domains it had no inkling of when it was switched on.


  • Advanced agent properties

    How smart does a machine intelligence need to be for its niceness to become an issue? “Advanced” is a broad term covering cognitive abilities strong enough that we would need to start considering AI alignment.