Standard agent properties

Boundedly rational agents

  • Have probabilistic models of the world.

  • Update those models in response to sensory information.

  • The ideal algorithm for updating is Bayesian inference, but this requires too much computing power, so a bounded agent must use some bounded alternative.

  • Implicitly, we assume the agent has some equivalent of a complexity-penalizing prior or Occam's Razor; without this, specifying Bayesian inference does not much constrain the end results of epistemic reasoning. (A minimal sketch of such an update appears after this list.)

  • Have preferences over events or states of the world, quantifiable by a utility function that maps those events or states onto scalar values.

  • These preferences must be quantitative, not just ordered, in order to combine with epistemic states of uncertainty (probabilities).

  • Are consequentialist: they evaluate the expected consequences of actions and choose among actions based on preference among their expected consequences.

  • Bounded agents cannot evaluate all possible actions, and hence cannot obtain literal maxima of expected utility except in very simple cases. (The second sketch after this list shows a bounded agent scoring only a sampled subset of actions.)

  • Act in real time in a noisy, uncertain environment.
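To make the updating bullets concrete, here is a minimal Python sketch (not from the original text; the hypothesis space, description lengths, and likelihood numbers are all invented) of exact Bayesian updating over a tiny discrete hypothesis space, with an Occam-style prior that assigns probability proportional to 2^-(description length):

```python
# Toy sketch: exact Bayesian updating with a complexity-penalizing prior.
# All hypothesis names, description lengths, and likelihoods are made up.

# Each hypothesis: (description_length_in_bits, likelihood_function),
# where likelihood(obs) returns P(obs | hypothesis).
hypotheses = {
    "always_heads": (2, lambda obs: 1.0 if obs == "H" else 0.0),
    "fair_coin":    (4, lambda obs: 0.5),
    "biased_0.9":   (8, lambda obs: 0.9 if obs == "H" else 0.1),
}

# Occam's Razor as a prior: P(h) proportional to 2^-(description length).
prior = {h: 2.0 ** -length for h, (length, _) in hypotheses.items()}
z = sum(prior.values())
posterior = {h: p / z for h, p in prior.items()}

def bayes_update(posterior, obs):
    """One step of Bayes' rule: P(h | obs) proportional to P(obs | h) * P(h)."""
    unnormalized = {h: hypotheses[h][1](obs) * p for h, p in posterior.items()}
    z = sum(unnormalized.values())
    return {h: p / z for h, p in unnormalized.items()}

for obs in ["H", "H", "T"]:
    posterior = bayes_update(posterior, obs)

print(posterior)  # "always_heads" is eliminated by the "T" observation
```

A bounded agent would replace the exhaustive sum over hypotheses with some tractable approximation; the point of the Occam prior is that without it, arbitrarily gerrymandered hypotheses fit the data equally well.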
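And a companion sketch for the preference and consequentialism bullets, with equally invented numbers: a utility function maps outcomes to scalars, expected utility weights those scalars by the agent's probabilities, and a bounded agent approximates "choose the best action" by scoring only a sampled subset of a large action space:

```python
import random

random.seed(0)

def utility(outcome):
    # Quantitative preferences: outcomes map to scalars, so they can be
    # weighted by probabilities. These particular values are arbitrary.
    return {"win": 10.0, "draw": 1.0, "lose": -5.0}[outcome]

def outcome_distribution(action):
    # The agent's probabilistic model, P(outcome | action). Invented form.
    p_win = min(0.9, 0.1 + 0.02 * action)
    return {"win": p_win, "draw": 0.05, "lose": 0.95 - p_win}

def expected_utility(action):
    return sum(p * utility(o) for o, p in outcome_distribution(action).items())

full_action_space = range(10_000)                   # too large to enumerate
candidates = random.sample(full_action_space, 50)   # bounded search instead
best = max(candidates, key=expected_utility)
print(best, expected_utility(best))
```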

For the arguments that sufficiently intelligent agents will appear to us as boundedly rational agents in some sense, see:

Economic agents

  • Achieve their goals by efficiently allocating limited resources, including, e.g., time, money, or negentropy (see the sketch after this list);

  • Try to find new paths that route around obstacles to goal achievement;

  • Predict the actions of other agents;

  • Try to coordinate with, manipulate, or hinder other agents (in accordance with the agent's own goals or utilities);

  • Respond to both negative incentives (penalties) and positive incentives (rewards) by planning accordingly, and may also consider strategies to avoid penalties or gain rewards that were unforeseen by the creators of the incentive framework.
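As a toy illustration of the first bullet only (the projects, costs, and "goal progress" numbers are invented), here is a greedy allocation of a fixed budget by goal progress per unit cost:

```python
# Hypothetical sketch: spend a limited budget on whatever buys the most
# goal progress per unit cost (a greedy, fractional-knapsack allocation).
projects = [
    # (name, cost, expected goal progress) -- invented numbers
    ("build_tool",  4.0, 10.0),
    ("gather_data", 2.0,  7.0),
    ("hire_help",   5.0,  6.0),
]

budget = 6.0
allocation = {}
# Fund projects in order of progress-per-cost until the budget runs out.
for name, cost, progress in sorted(projects, key=lambda p: p[2] / p[1], reverse=True):
    spend = min(cost, budget)
    if spend <= 0:
        break
    allocation[name] = spend
    budget -= spend

print(allocation)  # gather_data and build_tool are funded; hire_help is not
```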

Naturalistic agents

  • Naturalistic agents are embedded in a larger universe and are made of the same material as other things in the universe (wavefunction, on our current beliefs about physics).

  • A naturalistic agent's uncertainty about the environment is uncertainty about which natural universe embeds them (what material structure underlies their available sensory and introspective data).

  • Some of the actions available to naturalistic agents potentially alter their sensors, actuators, or computing substrate (see the sketch after this list).

  • Sufficiently powerful naturalistic agents may construct other agents out of resources available to them internally or in their environment, or extend their intelligence into outside computing resources.

  • A naturalistic agent's sensing, cognitive, and decision/action capabilities may be distributed over space, time, and multiple substrates; the applicability of the 'agent' concept does not require a small local robot body.
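A deliberately tiny illustration of the self-modification bullet, under the assumption (mine, not the text's) that agent and environment are fields of one shared world state: some actions change ordinary environmental facts, others change facts about the agent's own hardware.

```python
import random

# Toy sketch: the agent is part of the world it acts on, so "sensor_noise"
# (a fact about the agent's own substrate) sits in the same state dict as
# "temperature" (a fact about the environment). All names are invented.
world = {
    "temperature": 20.0,   # environmental fact
    "sensor_noise": 2.0,   # fact about the agent's own sensors
}

def act(world, action):
    if action == "heat_room":
        world["temperature"] += 5.0    # alters the environment
    elif action == "upgrade_sensor":
        world["sensor_noise"] *= 0.5   # alters the agent itself
    return world

def sense(world):
    # Observations are generated by the same physics the agent can modify.
    return world["temperature"] + random.gauss(0.0, world["sensor_noise"])

act(world, "upgrade_sensor")   # self-modification: later observations sharpen
print(sense(world))
```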

Children:

  • Bounded agent

    An agent that operates in the real world, uses realistic amounts of computing power, is uncertain of its environment, etcetera.

Parents:

  • Advanced agent properties

    How smart does a machine intelligence need to be, for its niceness to become an issue? “Advanced” is a broad term to cover cognitive abilities such that we’d need to start considering AI alignment.