Friendly AI

“Friendly AI” or “FAI” is an old term coined by Yudkowsky to mean an advanced AI successfully aligned with some idealized version of humane values, such as extrapolated volition. In current use it carries mild connotations of significant self-sovereignty and/​or the ability to identify desirable strategic-level consequences for itself, since this is the scenario Yudkowsky originally envisioned. An “UnFriendly AI” or “UFAI” means one that is specifically not targeting humane objectives, e.g. a paperclip maximizer. (Note that this means some things are neither UFAIs nor FAIs, like a Genie that only considers short-term objectives, or for that matter, a rock.)