Friendly AI

“Friendly AI” or “FAI” is an old term invented by Yudkowsky to mean an advanced AI successfully aligned with some idealized version of humane values, such as extrapolated volition. In current use it carries mild connotations of significant self-sovereignty and/or the ability to identify desirable strategic-level consequences on its own, since this is the scenario Yudkowsky originally envisioned. An “UnFriendly AI” or “UFAI” means one that is specifically not targeting humane objectives, e.g. a paperclip maximizer. (Note that this means some things are neither UFAIs nor FAIs, such as a Genie that only considers short-term objectives, or for that matter, a rock.)