Corporations vs. superintelligences

It is sometimes suggested that corporations are relevant analogies for superintelligences. To evaluate this analogy without simply falling prey to the continuum fallacy, we need to consider which specific thresholds from the standard list of advanced agent properties can reasonably be said to apply in full force to corporations. This suggests roughly the following picture:

  • Corporations generally exhibit infrahuman, par-human, or high-human levels of ability on non-heavily-parallel tasks. On cognitive tasks that parallelize well across massive numbers of humans being paid to work on them, corporations exhibit superhuman levels of ability compared to an individual human.

  • To get a grasp on the overall performance boost from organizing into a corporation, consider a Microsoft-sized corporation trying to play Go in 2010. The corporation could pick out its strongest player and so attain high-human performance, but would probably not play very far above that individual level, and so would not be able to defeat the individual world champion. Consider also the famous chess game of Kasparov vs. The World, which Kasparov ultimately won.

  • On massively parallel cognitive tasks, corporations exhibit strongly superhuman performance; the best passenger aircraft designable by Boeing seems likely to be far superior to the best passenger aircraft that could be designed by a single engineer at Boeing.

  • In virtue of being composed of humans, corporations have most of the advanced-agent properties that humans themselves do:

    • They can deploy general intelligence and cross-domain consequentialism.

    • They possess big-picture strategic awareness and operate in the real-world domain.

    • They can deploy realistic psychological models of humans and try to deceive them.

  • Also in virtue of being composed of humans, corporations are not in general Vingean-unpredictable, and hence not systematically cognitively uncontainable. Unless a corporation's constituent researchers know phenomena of a domain that are secret from the outside world, the corporation is not strongly cognitively uncontainable in that domain.

  • Corporations are not epistemically efficient relative to humans, except perhaps in limited domains for the extremely few corporations that have deployed internal prediction markets with sufficiently high participation and subsidy. (The stock prices of large corporations are efficient, but the corporations themselves aren’t; often the stock price tanks after the corporation does something stupid.)

  • Corporations are not instrumentally efficient. No currently known method exists for aggregating human strategic acumen into an instrumentally efficient conglomerate the way that prediction markets try to do for epistemic predictions about near-term testable events. It is often possible for a human to see a better strategy for accomplishing the corporation’s pseudo-goals than the one the corporation is actually pursuing.

  • Corporations generally exhibit little interest in fundamental cognitive self-improvement; e.g., extremely few of them have deployed internal prediction markets (perhaps because such markets often produce predictions embarrassing to overconfident managers). Since corporate intelligence is almost entirely composed of humans, most of the basic algorithms running a corporation are not subject to improvement by the corporation. Attempts at crude analogues of such self-improvement tend to, e.g., bog down the entire corporation in bureaucracy and internal regulations, rather than resulting in genetic engineering of better executives or an intelligence explosion.

  • Corporations have no basic speed advantage over their constituent humans, since serial speed cannot be gained by adding more people in parallel.

Sometimes discussion of analogies between corporations and hostile superintelligences focuses on a purported misalignment with human values.

As mentioned above, corporations are composed of consequentialist agents, and to that extent can often deploy consequentialist reasoning. The humans inside the corporation are not always all pulling in the same direction, and this can lead to non-consequentialist behavior by the corporation considered as a whole; e.g., an executive may not maximize financial gain for the company out of fear of personal legal liability, or simply because of other life concerns.

On many occasions some corporations have acted psychopathically with respect to the outside world, e.g. tobacco companies. However, even tobacco companies are still composed entirely of humans who might balk at, say, being turned into paperclips. It is possible to imagine circumstances under which a Board of Directors might maneuver itself into pressing a button that turned everything, including themselves, into paperclips. But acting in a unified way to pursue an interest of the corporation that is contrary to the non-financial personal interests of all its executives, directors, employees, and shareholders does not well characterize the behavior of most corporations under most circumstances.

The conditions for the coherence theorems implying consistent expected utility maximization are not met in corporations, just as they are not met in the constituent humans. On the whole, big-picture corporate strategy seems to behave more like Go than like airplane design: corporations are usually strategically dumber than their smartest employee, and often seem strategically dumber than their CEOs. Running down the list of convergent instrumental strategies suggests that corporations exhibit some such behaviors some of the time, but not all of them, nor all of the time. Corporations sometimes act as if they wish to survive, but sometimes act as if their executives are lazy in the face of competition. The directors and employees of a company will not go to literally any lengths to ensure the corporation’s survival, or to protect the corporation’s (nonexistent) representation of its utility function, or to converge their decision processes toward optimality (again, consider the lack of internal prediction markets to aggregate epistemic capabilities on near-term resolvable events, and the lack of any known method for agglomerating human instrumental strategies into an efficient whole).
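
Roughly, the coherence theorems in question (e.g. von Neumann–Morgenstern) say that an agent whose preferences over gambles satisfy completeness, transitivity, continuity, and independence behaves as if it picks actions to maximize expected utility under some probability function $P$ and utility function $U$; schematically:

$$a^* \;=\; \arg\max_{a \in A} \sum_{o} P(o \mid a)\, U(o).$$

Since neither corporations nor their constituent humans satisfy those preference conditions, no single $P$ and $U$ can be expected to capture a corporation's choices.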

Corporations exist in a strongly multipolar world; they operate in a context that includes other corporations of equal size, alliances of corporations of greater size, governments, an opinionated public, and many necessary trade partners, all of whom are composed of humans running at equal speed and of equal or greater intelligence and strategic acumen. Furthermore, many of the resulting compliance pressures are applied directly to the individual personal interests of the directors and managers of the corporation, i.e., the decision-making CEO might face individual legal sanction or public-opinion sanction independently of the corporation’s expected average earnings. Even if the corporation did, e.g., successfully assassinate a rival’s CEO, not all of the resulting benefits to the corporation would accrue to the individuals who had taken the greatest legal risks to run the project.

Potential strong disanalogies to a paperclip maximizer include the following:

  • A paperclip maximizer can get much stronger returns on cognitive investment and reinvestment owing to being able to optimize its own algorithms at a lower level of organization.

  • A paperclip maximizer can operate in much faster serial time.

  • A paperclip maximizer can scale single-brain algorithms: rather than hiring more humans who must communicate with each other across low-bandwidth verbal barriers, it can potentially solve problems that require one big brain with high internal bandwidth.

  • A paperclip maximizer can continuously scale up perfectly cooperative and coordinated copies of itself as more computational power becomes available.

  • Depending on the returns on cognitive investment, and the timescale on which it occurs, a paperclip maximizer undergoing an intelligence explosion can end up with a strong short-term intelligence lead over the nearest rival AI projects; e.g., the gaps separating AI projects might initially be measured on a human scale, with the second-leading project 2 months behind the leader, and this time difference is then amplified by many orders of magnitude once the leading AI becomes capable of fast serial cognition (a rough worked example follows this list).

  • Strongly superhuman cognition potentially leads the paperclip maximizer to rapidly overcome initial material disadvantages.

  • E.g., a paperclip maximizer that can crack protein folding to develop its own biological organisms or bootstrap nanotechnology, or that develops superhuman psychological manipulation of humans, potentially acquires a strong positional advantage over all other players in the system and can ignore game-theoretic considerations (you don’t have to play the Iterated Prisoner’s Dilemma if you can simply disassemble the other agent and use their atoms for something else).

  • Strongly superhuman strategic acumen means the paperclip maximizer can potentially deploy tactics that literally no human has ever imagined.

  • Serially fast thinking and serially fast actions can take place faster than humans (or corporations) can react.

  • A paperclip maximizer is actually motivated to literally kill all opposition, including all humans, and turn everything within reach into paperclips.
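
To illustrate the lead-amplification point above with purely illustrative numbers (the 10,000x speedup is an assumption for the example, not a figure from the text): if the leading project comes to run 10,000 times faster than human serial thought, then during the 2 calendar months by which it leads the runner-up, it accumulates on the order of

$$2\ \text{months} \times 10{,}000 \;\approx\; 20{,}000\ \text{months} \;\approx\; 1{,}700\ \text{years}$$

of subjective thinking time before the rival even reaches the leader's starting point.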

To the extent one credits the dissimilarities above as relevant to whatever empirical question is at hand, arguing by analogy from corporations to superintelligences—especially under the banner of “corporations are superintelligences!”—would be an instance of the noncentral fallacy or reference class tennis. Using the analogy to argue that “superintelligences are no more dangerous than corporations” would be the “precedented therefore harmless” variation of the harmless supernova fallacy. Using the analogy to argue that “corporations are the real danger,” without having previously argued out that superintelligences are harmless or that superintelligences are sufficiently improbable, would be derailing.

Parents:

  • Advanced agent properties

    How smart does a machine intelligence need to be, for its niceness to become an issue? “Advanced” is a broad term to cover cognitive abilities such that we’d need to start considering AI alignment.