Artificial General Intelligence

An “Artificial General Intelligence” is a machine intelligence possessing some form of the same “significantly more generally applicable” intelligence that distinguishes humans from our nearest chimpanzee relatives. A bee builds hives and a beaver builds dams; a human looks at both and imagines a dam with a honeycomb structure. We can drive cars and do algebra, even though no similar problems appeared in our environment of evolutionary adaptedness, and we haven’t had time to evolve further to match them. The brain’s algorithms are sufficiently general, sufficiently cross-domain, that we can learn a tremendous variety of new domains within our lifetimes. We are not perfectly general: we have an easier time learning to walk than learning abstract calculus, even though the latter is much easier in an objective sense. But we are general enough to figure out Special Relativity and engineer skyscrapers, despite having neither ability at “compile time”. An Artificial General Intelligence has the same property: it can learn a tremendous variety of domains, including domains it had no inkling of when it was switched on.

Parents:

  • Advanced agent properties

    How smart does a machine intelligence need to be for its niceness to become an issue? “Advanced” is a broad term covering whatever cognitive abilities are strong enough that we’d need to start considering AI alignment.