Standard agent properties
Boundedly rational agents
Have probabilistic models of the world.
Update those models in response to sensory information.
The ideal algorithm for updating is Bayesian inference, but this requires more computing power than a bounded agent has available, so a bounded agent must use some computationally tractable approximation.
Implicitly, we assume the agent has some equivalent of a complexity-penalizing prior, i.e., Occam’s Razor. Without this, specifying Bayesian inference does not much constrain the end results of epistemic reasoning: Bayes’ rule faithfully updates whatever prior it is handed, so a prior concentrated on complex, gerrymandered hypotheses would yield correspondingly strange posteriors. (A toy sketch of complexity-penalized updating appears after this list.)
Have preferences over events or states of the world, quantifiable by a utility function that maps those events or states onto scalar values (e.g., real numbers).
These preferences must be quantitative, not merely ordinal, in order to combine with epistemic states of uncertainty (probabilities) into expected utilities.
Are consequentialist: they evaluate the expected consequences of actions and choose among actions based on preference among their expected consequences.
Bounded agents cannot evaluate all possible actions, and hence cannot attain literal maxima of expected utility except in very simple cases; they choose the best action found within a limited candidate set. (A toy sketch of this also appears after the list.)
Act in real time in a noisy, uncertain environment.
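As a toy illustration of the points about updating and Occam priors (a minimal sketch, not code from this page; the hypotheses, likelihoods, and description lengths are all invented), here is a Bayesian update over a small finite hypothesis set, with a Solomonoff-style prior that weights each hypothesis by 2^-K for a stipulated description length of K bits:

```python
# Toy hypotheses about a coin. Likelihoods and description lengths are
# invented; a real Occam prior would derive K from the length of the
# shortest program computing the hypothesis.
HYPOTHESES = {
    # name: (P(heads | hypothesis), description length K in bits)
    "fair coin":         (0.5, 3),
    "heads-biased coin": (0.9, 5),
    "gerrymandered":     (0.8, 40),  # fits the data, but very complex
}

def occam_prior():
    """Complexity-penalizing prior: weight each hypothesis by 2^-K."""
    weights = {h: 2.0 ** -k for h, (_, k) in HYPOTHESES.items()}
    z = sum(weights.values())
    return {h: w / z for h, w in weights.items()}

def update(belief, saw_heads):
    """One step of Bayes' rule: P(h | obs) is proportional to P(obs | h) * P(h)."""
    posterior = {}
    for h, p in belief.items():
        p_heads = HYPOTHESES[h][0]
        posterior[h] = (p_heads if saw_heads else 1.0 - p_heads) * p
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()}

belief = occam_prior()
for saw_heads in [True, True, True, True, False, True, True, True]:
    belief = update(belief, saw_heads)
print(belief)  # the simple heads-biased hypothesis ends up dominant
```

The complexity penalty is doing real work here: the “gerrymandered” hypothesis fits the observations about as well as the heads-biased one, and with a uniform prior over enough such hypotheses the posterior could be steered almost anywhere; the 2^-K weighting is what lets simple, predictive hypotheses win.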
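In the same hedged spirit, here is a minimal sketch of bounded consequentialist choice (the actions, states, beliefs, and utilities are again invented): expected utilities are computed against a probabilistic belief state, but only over a small candidate set rather than the whole action space:

```python
import random

# Invented belief state over world-states, e.g. the output of an update
# like the one sketched above.
BELIEF = {"sunny": 0.7, "rainy": 0.3}

# Quantitative utilities: scalar values, not just a preference ordering,
# so that they can be weighted by probabilities.
UTILITY = {
    ("picnic",    "sunny"): 10.0, ("picnic",    "rainy"): -5.0,
    ("museum",    "sunny"):  4.0, ("museum",    "rainy"):  6.0,
    ("stay home", "sunny"):  1.0, ("stay home", "rainy"):  2.0,
}

def expected_utility(action):
    return sum(p * UTILITY[(action, state)] for state, p in BELIEF.items())

# A bounded agent evaluates only the candidates it manages to generate,
# so it gets "best action found", not a guaranteed global maximum.
ACTIONS = ["picnic", "museum", "stay home"]
candidates = random.sample(ACTIONS, k=2)
choice = max(candidates, key=expected_utility)
print(choice, expected_utility(choice))
```

Note that an ordinal ranking of outcomes would not be enough on its own; multiplying scalar utilities by probabilities is what makes the comparison under uncertainty well-defined.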
For the arguments that sufficiently intelligent agents will appear to us as boundedly rational agents in some sense, see:
Economic agents
Achieve their goals by efficiently allocating limited resources, such as time, money, or negentropy (see the toy allocation sketch after this list);
Try to find new paths that route around obstacles to goal achievement;
Predict the actions of other agents;
Try to coordinate with, manipulate, or hinder other agents (in accordance with the agent’s own goals or utilities);
Respond to both negative incentives (penalties) and positive incentives (rewards) by planning accordingly, and may also seek out strategies for avoiding penalties or gaining rewards that the creators of the incentive framework did not foresee.
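To make the first point concrete (a toy model with invented tasks and numbers, not a claim about any real market): when tasks are divisible, allocating a limited time budget in order of payoff per hour is the optimal greedy solution to the fractional-knapsack version of the allocation problem:

```python
# Hypothetical opportunities: (name, payoff, hours required).
OPPORTUNITIES = [
    ("write report", 8.0, 4.0),
    ("answer email", 1.5, 0.5),
    ("long meeting", 3.0, 3.0),
]

def allocate(budget_hours):
    """Greedy fractional allocation: spend time where payoff per hour is
    highest first. Optimal only for this divisible-task toy model."""
    plan = []
    for name, payoff, hours in sorted(
            OPPORTUNITIES, key=lambda o: o[1] / o[2], reverse=True):
        take = min(hours, budget_hours)
        if take <= 0:
            break
        plan.append((name, take, payoff * take / hours))
        budget_hours -= take
    return plan

print(allocate(budget_hours=5.0))  # prioritizes high payoff-per-hour tasks
```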
Naturalistic agents
Naturalistic agents are embedded in a larger universe and are made of the same material as other things in that universe (the wavefunction, on our current beliefs about physics).
A naturalistic agent’s uncertainty about the environment is uncertainty about which natural universe embeds them, i.e., about what material structure underlies their available sensory and introspective data.
Some of the actions available to naturalistic agents potentially alter their sensors, actuators, or computing substrate.
Sufficiently powerful naturalistic agents may construct other agents out of resources available to them internally or in their environment, or extend their intelligence into outside computing resources.
A naturalistic agent’s sensing, cognitive, and decision/action capabilities may be distributed over space, time, and multiple substrates; the applicability of the ‘agent’ concept does not require a small local robot body.
Children:
- Bounded agent
An agent that operates in the real world, using realistic amounts of computing power, that is uncertain of its environment, etcetera.
Parents:
- Advanced agent properties
How smart does a machine intelligence need to be for its niceness to become an issue? “Advanced” is a broad term covering the cognitive abilities at which we’d need to start considering AI alignment.
I didn’t know that Bayesian-inference-ish updating bakes in an Occam-ish prior. Does the prior need to be complexity-penalizing, or would any consistent prior-choosing rule work? I assume the former from the phrasing.
Why is that? “Does not much constrain the end results” could just mean that unless we assume the agent is Occam-ish, we can’t tell from its posteriors whether it did Bayesian inference or something else. But I don’t see why that couldn’t also be true of some non-Occam-ish prior-picking rule, as long as we knew what that rule was.
I think this definition includes agents that only care about their sensory inputs, since sensory inputs are a subset of states of the world.
This makes me think that the definition of economic agent that I googled isn’t what was meant, since this one seems to be primarily making a claim about efficiency, rather than about impacting markets (“an agent who is part of the economy”). Something more like homo economicus?
“Naturalistic agents” seems to be primarily a claim about the situation the agent finds itself in, rather than a claim about the agent’s models (e.g., a Cartesian dualist that was in fact embedded in a universe made of atoms, and was itself made of atoms, would still be a “naturalistic agent”, I think).
The last point reminds me of Dawkins-style extended phenotypes; not sure how analogous/comparable that concept is. I guess it makes me want to go back and figure out whether we defined what “an agent” is. So, like, does a beehive count as “an agent”? (I believe that, conditional on it being an agent at all, it would be a naturalistic agent.)
…does Arbital have search functionality right now? Maybe not :-/