A reply to Francois Chollet on intelligence explosion

This is a reply to Francois Chollet, the inventor of the Keras wrapper for the Tensorflow and Theano deep learning systems, on his essay “The impossibility of intelligence explosion.”

In response to critics of his essay, Chollet tweeted:

If you post an argument online, and the only opposition you get is braindead arguments and insults, does it confirm you were right? Or is it just self-selection of those who argue online?

And he earlier tweeted:

Don’t be overly attached to your views; some of them are probably incorrect. An intellectual superpower is the ability to consider every new idea as if it might be true, rather than merely checking whether it confirms/contradicts your current views.

Chollet’s essay seemed mostly on-point and kept to the object-level arguments. I am led to hope that Chollet is perhaps somebody who believes in abiding by the rules of a debate process, a fan of what I’d consider Civilization; and if his entry into this conversation has been met only with braindead arguments and insults, he deserves a better reply. I’ve tried here to walk through some of what I’d consider the standard arguments in this debate as they bear on Chollet’s statements.

As a meta-level point, I hope everyone agrees that an invalid argument for a true conclusion is still a bad argument. To arrive at the correct belief state we want to sum all the valid support, and only the valid support. To tally up that support, we need to have a notion of judging arguments on their own terms, based on their local structure and validity, and not excusing fallacies if they support a side we agree with for other reasons.

My reply to Chollet doesn’t try to carry the entire case for the intelligence explosion as such. I am only going to discuss my take on the validity of Chollet’s particular arguments. Even if the statement “an intelligence explosion is impossible” happens to be true, we still don’t want to accept any invalid arguments in favor of that conclusion.

Without further ado, here are my thoughts in response to Chollet.

The basic premise is that, in the near future, a first “seed AI” will be created, with general problem-solving abilities slightly surpassing that of humans. This seed AI would start designing better AIs, initiating a recursive self-improvement loop that would immediately leave human intelligence in the dust, overtaking it by orders of magnitude in a short time.

I agree this is more or less what I meant by “seed AI” when I coined the term back in 1998. Today, nineteen years later, I would talk about a general question of “capability gain” or how the power of a cognitive system scales with increased resources and further optimization. The idea of recursive self-improvement is only one input into the general questions of capability gain; for example, we recently saw some impressively fast scaling of Go-playing ability without anything I’d remotely consider as seed AI being involved. That said, I think that a lot of the questions Chollet raises about “self-improvement” are relevant to capability-gain theses more generally, so I won’t object to the subject of conversation.

Proponents of this theory also regard intelligence as a kind of superpower, conferring its holders with almost supernatural capabilities to shape their environment 

A good description of a human from the perspective of a chimpanzee.

From a certain standpoint, the civilization of the year 2017 could be said to have “magic” from the perspective of 1517. We can more precisely characterize this gap by saying that we in 2017 can solve problems using strategies that 1517 couldn’t recognize as a “solution” if described in advance, because our strategies depend on laws and generalizations not known in 1517. E.g., I could show somebody in 1517 a design for a compressor-based air conditioner, and they would not be able to recognize this as “a good strategy for cooling your house” in advance of observing the outcome, because they don’t yet know about the temperature-pressure relation. A fancy term for this would be “strong cognitive uncontainability”; a metaphorical term would be “magic” although of course we did not do anything actually supernatural. A similar but much larger gap exists between a human and a smaller brain running the previous generation of software (aka a chimpanzee).

It’s not exactly unprecedented to suggest that big gaps in cognitive ability correspond to big gaps in pragmatic capability to shape the environment. I think a lot of people would agree in characterizing intelligence as the Human Superpower, independently of what they thought about the intelligence explosion hypothesis.

— as seen in the science-fiction movie Transcendence (2014), for instance.

I agree that public impressions of things are things that someone ought to be concerned about. If I take a ride-share and I mention that I do anything involving AI, half the time the driver says, “Oh, like Skynet!” This is an understandable reason to be annoyed. But if we’re trying to figure out the sheerly factual question of whether an intelligence explosion is possible and probable, it’s important to consider the best arguments on all sides of all relevant points, not the popular arguments. For that purpose it doesn’t matter if Deepak Chopra’s writing on quantum mechanics has a larger readership than any actual physicist.

Thankfully Chollet doesn’t spend the rest of the essay attacking Kurzweil in particular, so I’ll leave this at that.

The intelligence explosion narrative equates intelligence with the general problem-solving ability displayed by individual intelligent agents — by current human brains, or future electronic brains.

I don’t see what work the word “individual” is doing within this sentence. From our perspective, it matters little whether a computing fabric is imagined to be a hundred agents or a single agency, if it seems to behave in a coherent goal-directed way as seen from outside. The pragmatic consequences are the same. I do think it’s fair to say that I think about “agencies” which from our outside perspective seem to behave in a coherent goal-directed way.

The first issue I see with the intelligence explosion theory is a failure to recognize that intelligence is necessarily part of a broader system — a vision of intelligence as a “brain in jar” that can be made arbitrarily intelligent independently of its situation.

I’m not aware of myself or Nick Bostrom or another major technical voice in this field claiming that problem-solving can go on independently of the situation/environment.

That said, some systems function very well in a broad variety of structured low-entropy environments. E.g. the human brain functions much better than other primate brains in an extremely broad set of environments, including many that natural selection did not explicitly optimize for. We remain functional on the Moon, because the Moon has enough in common with the Earth on a sufficiently deep meta-level that, for example, induction on past experience goes on functioning there. Now if you tossed us into a universe where the future bore no compactly describable relation to the past, we would indeed not do very well in that “situation”—but this is not pragmatically relevant to the impact of AI on our own real world, where the future does bear a relation to the past.

In particular, there is no such thing as “general” intelligence. On an abstract level, we know this for a fact via the “no free lunch” theorem — stating that no problem-solving algorithm can outperform random chance across all possible problems.

Scott Aaronson’s reaction: “Citing the ‘No Free Lunch Theorem’—i.e., the (trivial) statement that you can’t outperform brute-force search on random instances of an optimization problem—to claim anything useful about the limits of AI, is not a promising sign.”

It seems worth spelling out an as-simple-as-possible special case of this point in mathy detail, since it looked to me like a central issue given the rest of Chollet’s essay. I expect this math isn’t new to Chollet, but I reprise it here to establish common language and for the benefit of everyone else reading along.

Laplace’s Rule of Succession, as invented by Thomas Bayes, gives us one simple rule for predicting future elements of a binary sequence based on previously observed elements. Let’s take this binary sequence to be a series of “heads” and “tails” generated by some sequence generator called a “coin”, not assumed to be fair. In the standard problem setup yielding the Rule of Succession, our state of prior ignorance is that we think there is some frequency \(\theta\) with which the coin comes up heads, and for all we know \(\theta\) is equally likely to take on any real value between \(0\) and \(1.\) We can do some Bayesian inference and conclude that after seeing \(M\) heads and \(N\) tails, we should predict that the odds for heads : tails on the next coinflip are:

$$\frac{M + 1}{M + N + 2} : \frac{N + 1}{M + N + 2}$$

(See the Arbital page on Laplace’s Rule of Succession for the proof.)

This rule yields advice like: “If you haven’t yet observed any coinflips, assign 50-50 to heads and tails” or “If you’ve seen four heads and no tails, assign 1/6 probability rather than 0 probability to the next flip being tails” or “If you’ve seen the coin come up heads 150 times and tails 75 times, assign around 2/3 probability to the coin coming up heads next time.”
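For concreteness, here is a minimal Python sketch (mine, not anything from Chollet’s essay; the function name is my own) that reproduces those three numbers directly from the formula above:

```python
def laplace_p_heads(heads_seen: int, tails_seen: int) -> float:
    """Laplace's Rule of Succession: P(next flip is heads) after M heads, N tails."""
    return (heads_seen + 1) / (heads_seen + tails_seen + 2)

print(laplace_p_heads(0, 0))      # 0.5   -- no flips observed yet
print(1 - laplace_p_heads(4, 0))  # ~0.167 -- 1/6 chance of tails after four heads
print(laplace_p_heads(150, 75))   # ~0.665 -- roughly 2/3 after 150 heads, 75 tails
```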

Now this rule does not do super-well in every possible kind of environment. In particular, it doesn’t do any better than the maximum-entropy prediction “the next flip has a 50% probability of being heads, or tails, regardless of what we have observed previously” if the environment is in fact a fair coin. In general, there is “no free lunch” on predicting arbitrary binary sequences; if you assign greater probability mass or probability density to one binary sequence or class of sequences, you must have done so by draining probability from other binary sequences. If you begin with the prior that every binary sequence is equally likely, then you never expect any algorithm to do better on average than maximum entropy, even if that algorithm luckily does better in one particular random draw.

On the other hand, if you start from the prior that every binary sequence is equally likely, you never notice anything a human would consider an obvious pattern. If you start from the maxentropy prior, then after observing a coin come up heads a thousand times, and tails never, you still predict 50-50 on the next draw; because on the maxentropy prior, the sequence “one thousand heads followed by tails” is exactly as likely as “one thousand heads followed by heads”.

The inference rule instantiated by Laplace’s Rule of Succession does better in a generic low-entropy universe of coinflips. It doesn’t start from specific knowledge; it doesn’t begin from the assumption that the coin is biased heads, or biased tails. If the coin is biased heads, Laplace’s Rule learns that; if the coin is biased tails, Laplace’s Rule will soon learn that from observation as well. If the coin is actually fair, then Laplace’s Rule will rapidly converge to assigning probabilities in the region of 50-50 and not do much worse per coinflip than if we had started with the max-entropy prior.

Can you do better than Laplace’s Rule of Succession? Sure; if the environment’s probability of generating heads is equal to 0.73 and you start out knowing that, then you can guess on the very first round that the probability of seeing heads is 73%. But even with this non-generic and highly specific knowledge built in, you do not do very much better than Laplace’s Rule of Succession unless the first coinflips are very important to your future survival. Laplace’s Rule will probably figure out the answer is somewhere around 3/4 in the first dozen rounds, and get to the answer being somewhere around 73% after a couple of hundred rounds, and if the answer isn’t 0.73 it can handle that case too.
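To illustrate how small the gap is, here is a rough simulation sketch (my own; the seed, horizon, and predictor names are arbitrary choices) comparing the average log loss per flip of an oracle that always predicts 0.73, Laplace’s Rule, and the max-entropy 50-50 predictor, on a coin whose true heads probability is 0.73:

```python
import math
import random

random.seed(0)
TRUE_P_HEADS = 0.73
N_FLIPS = 1000

def log_loss(p_heads: float, outcome_is_heads: bool) -> float:
    """Negative log probability assigned to the observed outcome."""
    p = p_heads if outcome_is_heads else 1.0 - p_heads
    return -math.log(p)

losses = {"oracle(0.73)": 0.0, "laplace": 0.0, "max-entropy": 0.0}
heads_seen = tails_seen = 0

for _ in range(N_FLIPS):
    flip_is_heads = random.random() < TRUE_P_HEADS
    # Each predictor states P(heads) *before* seeing the flip.
    predictions = {
        "oracle(0.73)": TRUE_P_HEADS,
        "laplace": (heads_seen + 1) / (heads_seen + tails_seen + 2),
        "max-entropy": 0.5,
    }
    for name, p in predictions.items():
        losses[name] += log_loss(p, flip_is_heads)
    if flip_is_heads:
        heads_seen += 1
    else:
        tails_seen += 1

for name, total in losses.items():
    print(f"{name:>12}: average log loss per flip = {total / N_FLIPS:.4f}")
# Expected pattern: Laplace ends up very close to the oracle, both beat the
# 50-50 predictor, and the oracle's edge over Laplace comes almost entirely
# from the earliest flips.
```

On a run like this, Laplace’s Rule typically lands within a few thousandths of a nat per flip of the knows-the-answer oracle, with essentially all of the oracle’s advantage concentrated in the first handful of flips.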

Is Laplace’s Rule the most general possible rule for inferring binary sequences? Obviously not; for example, if you saw the initial sequence…

$$HTHTHTHTHTHTHTHT...$$

…then you would probably guess with high though not infinite probability that the next element generated would be \(H.\) This is because you have the ability to recognize a kind of pattern which Laplace’s Rule does not, i.e., alternating heads and tails. Of course, your ability to recognize this pattern only helps you in environments that sometimes generate a pattern like that—which the real universe sometimes does. If we tossed you into a universe which, after a thousand perfect alternating pairs, presented you with ‘tails’ just as frequently as ‘heads’, then your pattern-recognition ability would be useless. Of course, a max-entropy universe like that will usually not present you with a thousand perfect alternations in the initial sequence to begin with!
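As a toy illustration of that extra pattern-recognition ability (my own construction, not a model anyone has proposed of human cognition), consider a predictor that simply adds two “perfect alternation” hypotheses alongside the unknown-bias hypothesis underlying Laplace’s Rule. On the sequence above it becomes nearly certain the next flip is \(H,\) while Laplace’s Rule stays at 50-50:

```python
from math import factorial

seq = "HT" * 8   # sixteen observed flips, perfectly alternating

# Laplace's Rule sees only the counts: 8 heads, 8 tails -> predicts 0.5.
m, n = seq.count("H"), seq.count("T")
laplace_p_next_heads = (m + 1) / (m + n + 2)

# Toy Bayesian mixture over three hypotheses, each with equal prior:
#   "unknown-bias": coin with unknown bias theta ~ Uniform(0, 1)
#   "alt-from-H" / "alt-from-T": deterministic alternation starting from H or T
def unknown_bias_likelihood(m: int, n: int) -> float:
    # Marginal probability of one specific sequence with m heads and n tails
    # under a uniform prior on theta: m! n! / (m + n + 1)!
    return factorial(m) * factorial(n) / factorial(m + n + 1)

def alternation_likelihood(seq: str, start: str) -> float:
    other = "T" if start == "H" else "H"
    return 1.0 if seq == ((start + other) * len(seq))[:len(seq)] else 0.0

likelihoods = {
    "unknown-bias": unknown_bias_likelihood(m, n),
    "alt-from-H": alternation_likelihood(seq, "H"),
    "alt-from-T": alternation_likelihood(seq, "T"),
}
total = sum(likelihoods.values())          # equal priors cancel in the posterior
posterior = {h: L / total for h, L in likelihoods.items()}

# Each hypothesis's prediction for the next flip being heads:
next_heads = {
    "unknown-bias": laplace_p_next_heads,
    "alt-from-H": 1.0 if seq[-1] == "T" else 0.0,
    "alt-from-T": 1.0 if seq[-1] == "T" else 0.0,
}
mixture_p_next_heads = sum(posterior[h] * next_heads[h] for h in posterior)

print(f"Laplace's Rule: P(next = H) = {laplace_p_next_heads:.3f}")   # 0.500
print(f"Toy mixture:    P(next = H) = {mixture_p_next_heads:.6f}")   # ~0.999998
```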

One extremely general but utterly intractable inference rule is Solomonoff induction, a universal prior which assigns probabilities to every computable sequence (or computable probability distribution over sequences) proportional to algorithmic simplicity, that is, in inverse proportion to the exponential of the size of the program required to specify the computation. Solomonoff induction can learn from observation any sequence that can be generated by a compact program, relative to a choice of universal computer which has at most a bounded effect on the amount of evidence required or the number of mistakes made. Of course a Solomonoff inductor will do slightly-though-not-much-worse than the max-entropy prior in a hypothetical structure-avoiding universe in which algorithmically compressible sequences are less likely; thankfully we don’t live in a universe like that.

It would then seem perverse not to recognize that, at least across large enough gaps, we can see an informal ordering from less general inference rules to more general inference rules, those that do well in an increasingly broad and complicated variety of environments of the sort that the real world is liable to generate:

The rule that always assigns probability 0.73 to heads on each round, performs optimally within the environment where each flip has independently a 0.73 probability of coming up heads.

Laplace’s Rule of Succession will start to do equally well as this, given a couple of hundred initial coinflips to see the pattern; and Laplace’s Rule also does well in many other low-entropy universes besides, such as those where each flip has 0.07 probability of coming up heads.

A human is more general and can also spot patterns like \(HTTHTTHTTHTT\) where Laplace’s Rule would merely converge to assigning probability 1/3 to each flip coming up heads, while the human becomes increasingly certain that a simple temporal process is at work which allows each succeeding flip to be predicted with near-certainty.

If anyone ever happened across a hypercomputational device and built a Solomonoff inductor out of it, the Solomonoff inductor would be more general than the human and do well in any environment with a programmatic description substantially smaller than the amount of data the Solomonoff inductor could observe.

None of these predictors need do very much worse than the max-entropy prediction in the case that the environment is actually max-entropy. It may not be a free lunch, but it’s not all that expensive even by the standards of hypothetical randomized universes; not that this matters for anything, since we don’t live in a max-entropy universe and therefore we don’t care how much worse we’d do in one.

Some earlier informal discussion of this point can be found in No-Free-Lunch theorems are often irrelevant.

If intelligence is a problem-solving algorithm, then it can only be understood with respect to a specific problem.

Some problems are more general than other problems—not relative to a maxentropy prior, which treats all problem subclasses on an equal footing, but relative to the low-entropy universe we actually live in, where a sequence of a million observed heads is on the next round more liable to generate H than T. Similarly, relative to the problem classes tossed around in our low-entropy universe, “figure out what simple computation generates this sequence” is more general than a human which is more general than “figure out what is the frequency of heads or tails within this sequence.”

Human intelligence is a problem-solving algorithm that can be understood with respect to a specific problem class that is potentially very, very broad in a pragmatic sense.

In a more concrete way, we can observe this empirically in that all intelligent systems we know are highly specialized. The intelligence of the AIs we build today is hyper specialized in extremely narrow tasks — like playing Go, or classifying images into 10,000 known categories. The intelligence of an octopus is specialized in the problem of being an octopus. The intelligence of a human is specialized in the problem of being human.

The problem that a human solves is much more general than the problem an octopus solves, which is why we can walk on the Moon and the octopus can’t. We aren’t absolutely general—the Moon still has a certain something in common with the Earth. Scientific induction still works on the Moon. It is not the case that when you get to the Moon, the next observed charge of an electron has nothing to do with its previously observed charge; and if you throw a human into an alternate universe like that one, the human stops working. But the problem a human solves is general enough to pass from oxygen environments to the vacuum.

What would happen if we were to put a freshly-created human brain in the body of an octopus, and let it live at the bottom of the ocean? Would it even learn to use its eight-legged body? Would it survive past a few days? … The brain has hardcoded conceptions of having a body with hands that can grab, a mouth that can suck, eyes mounted on a moving head that can be used to visually follow objects (the vestibulo-ocular reflex), and these preconceptions are required for human intelligence to start taking control of the human body.

It could be the case that in this sense a human’s motor cortex is analogous to an inference rule that always predicts heads with 0.73 probability on each round, and cannot learn to predict 0.07 instead. It could also be that our motor cortex is more like a Laplace inductor that starts out with 72 heads and 26 tails pre-observed, biased toward that particular ratio, but which can eventually learn 0.07 after another thousand rounds of observation.
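(To spell out the arithmetic of that second case: with 72 heads and 26 tails pre-observed, the Rule of Succession gives an initial prediction of

$$\frac{72 + 1}{72 + 26 + 2} = \frac{73}{100} = 0.73,$$

while additional observations can still drag the estimate toward 0.07 over time.)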

It’s an empirical question, but I’m not sure why it’s a very relevant one. It’s possible that human motor cortex is hyperspecialized—not just jumpstarted with prior knowledge, but inductively narrow and incapable of learning better—since in the ancestral environment, we never got randomly plopped into octopus bodies. But what of it? If you put some humans at a console and gave them a weird octopus-like robot to learn to control, I’d expect their full deliberate learning ability to do better than raw motor cortex in this regard. Humans using their whole intelligence, plus some simple controls, can learn to drive cars and fly airplanes even though those weren’t in our ancestral environment.

We also have no reason to believe human motor cortex is the limit of what’s possible. If we sometimes got plopped into randomly generated bodies, I expect we’d already have motor cortex that could adapt to octopodes. Maybe MotorCortex Zero could do three days of self-play on controlling randomly generated bodies and emerge rapidly able to learn any body in that class. Or, humans who are allowed to use Keras could figure out how to control octopus arms using ML. The last case would be most closely analogous to that of a hypothetical seed AI.

Empirical evidence is relatively scarce, but from what we know, children that grow up outside of the nurturing environment of human culture don’t develop any human intelligence. Feral children raised in the wild from their earliest years become effectively animals, and can no longer acquire human behaviors or language when returning to civilization.

Human visual cortex doesn’t develop well without visual inputs. This doesn’t imply that our visual cortex is a simple blank slate that merely adapts to whatever visual information the environment supplies; if that were true, we’d expect it to easily take control of octopus eyes. The visual cortex requires visual input because of the logic of evolutionary biology: if you make X an environmental constant, the species is liable to acquire genes that assume the presence of X. It has no reason not to. The expected result would be that the visual cortex contains a large amount of genetic complexity that makes it better than generic cerebral cortex at doing vision, but some of this complexity requires visual input during childhood to unfold correctly.

But if in the ancestral environment children had grown up in total darkness 10% of the time, before seeing light for the first time on adulthood, it seems extremely likely that we could have evolved to not require visual input in order for the visual cortex to wire itself up correctly. E.g., the retina could have evolved to send in simple hallucinatory shapes that would cause the rest of the system to wire itself up to detect those shapes, or something like that.

Human children reliably grow up around other humans, so it wouldn’t be very surprising if humans evolved to build their basic intellectual control processes in a way that assumes the environment contains this info to be acquired. We cannot thereby infer how much information is being “stored” in the environment or that an intellectual control process would be too much information to store genetically; that is not a problem evolution had reason to try to solve, so we cannot infer from the lack of an evolved solution that such a solution was impossible.

And even if there’s no evolved solution, this doesn’t mean you can’t intelligently design a solution. Natural selection never built animals with steel bones or wheels for limbs, because there’s no easy incremental pathway there through a series of smaller changes, so those designs aren’t very evolvable; but human engineers still build skyscrapers and cars, etcetera.

Among humans, the art of Go is stored in a vast repository of historical games and other humans, and future Go masters among us grow up playing Go as children against superior human masters rather than inventing the whole art from scratch. You would not expect even the most talented human, reinventing the gameplay all on their own, to be able to win a competition match with a first-dan pro.

But AlphaGo was initialized on this vast repository of played games in stored form, rather than needing to actually play human masters.

And then less than two years later, AlphaGo Zero taught itself to play at a vastly human-superior level, in three days, by self-play, from scratch, using a much simpler architecture with no ‘instinct’ in the form of precomputed features.

Now one may perhaps postulate that there is some sharp and utter distinction between the problem that AlphaGo Zero solves, and the much more general problem that humans solve, whereby our vast edifice of Go knowledge can be surpassed by a self-contained system that teaches itself, but our general cognitive problem-solving abilities can neither be compressed into a database for initialization, nor taught by self-play. But why suppose that? Human civilization taught itself by a certain sort of self-play; we didn’t learn from aliens. More to the point, I don’t see a sharp and utter distinction between Laplace’s Rule, AlphaGo Zero, a human, and a Solomonoff inductor; they just learn successively more general problem classes. If AlphaGo Zero can waltz past all human knowledge of Go, I don’t see a strong reason why AGI Zero can’t waltz past the human grasp of how to reason well, or how to perform scientific investigations, or how to learn from the data in online papers and databases.

This point could perhaps be counterargued, but it hasn’t yet been counterargued to my knowledge, and it certainly isn’t settled by any theorem of computer science known to me.

If intelligence is fundamentally linked to specific sensorimotor modalities, a specific environment, a specific upbringing, and a specific problem to solve, then you cannot hope to arbitrarily increase the intelligence of an agent merely by tuning its brain — no more than you can increase the throughput of a factory line by speeding up the conveyor belt. Intelligence expansion can only come from a co-evolution of the mind, its sensorimotor modalities, and its environment.

It’s not obvious to me why any of this matters. Say an AI takes three days to learn to use an octopus body. So what?

That is: We agree that it’s a mathematical truth that you need “some amount” of experience to go from a broadly general prior to a specific problem. That doesn’t mean that the required amount of experience is large for pragmatically important problems, or that it takes three decades instead of three days. We cannot casually pass from “proven: some amount of X is required” to “therefore: a large amount of X is required” or “therefore: so much X is required that it slows things down a lot”. (See also: Harmless supernova fallacy: bounded, therefore harmless.)

If the gears of your brain were the defining factor of your problem-solving ability, then those rare humans with IQs far outside the normal range of human intelligence would live lives far outside the scope of normal lives, would solve problems previously thought unsolvable, and would take over the world — just as some people fear smarter-than-human AI will do.

“von Neumann? Newton? Einstein?” —Scott Aaronson

More importantly: Einstein et al. didn’t have brains that were 100 times larger than a human brain, or 10,000 times faster. By the logic of sexual recombination within a sexually reproducing species, Einstein et al. could not have had a large amount of de novo software that isn’t present in a standard human brain. (That is: An adaptation with 10 necessary parts, each of which is only 50% prevalent in the species, will only fully assemble 1 out of 1000 times, which isn’t often enough to present a sharp selection gradient on the component genes; complex interdependent machinery is necessarily universal within a sexually reproducing species, except that it may sometimes fail to fully assemble. You don’t get “mutants” with whole new complex abilities a la the X-Men.)
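The arithmetic behind that parenthetical: ten necessary parts, each present in 50% of the population, all co-occur in a given individual with probability

$$0.5^{10} = \frac{1}{1024} \approx \frac{1}{1000}.$$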

Humans are metaphorically all compressed into one tiny little dot in the vastness of mind design space. We’re all the same make and model of car running the same engine under the hood, in slightly different sizes and with slightly different ornaments, and sometimes bits and pieces are missing. Even with respect to other primates, from whom we presumably differ by whole complex adaptations, we have 95% shared genetic material with chimpanzees. Variance between humans is not something that thereby establishes bounds on possible variation in intelligence, unless you import some further assumption not described here.

The standard reply to anyone who deploys e.g. the Argument from Gödel to claim the impossibility of AGI is to ask, “Why doesn’t your argument rule out humans?”

Similarly, a standard question that needs to be answered by anyone who deploys an argument against the possibility of superhuman general intelligence is, “Why doesn’t your argument rule out humans exhibiting pragmatically much greater intellectual performance than chimpanzees?”

Specialized to this case, we’d ask, “Why doesn’t the fact that the smartest chimpanzees aren’t building rockets let us infer that no human can walk on the Moon?”

No human, not even John von Neumann, could have reinvented the gameplay of Go on their own and gone on to stomp the world’s greatest Masters. AlphaGo Zero did so in three days. It’s clear that in general, “We can infer the bounds of cognitive power from the bounds of human variation” is false. If there’s supposed to be some special case of this rule which is true rather than false, and forbids superhuman AGI, that special case needs to be spelled out.

Intelligence is not a superpower; exceptional intelligence does not, on its own, confer you with proportionally exceptional power over your circumstances.

…said the Homo sapiens, surrounded by countless powerful artifacts whose abilities, let alone mechanisms, would be utterly incomprehensible to the organisms of any less intelligent Earthly species.

A high-potential human 10,000 years ago would have been raised in a low-complexity environment, likely speaking a single language with fewer than 5,000 words, would never have been taught to read or write, would have been exposed to a limited amount of knowledge and to few cognitive challenges. The situation is a bit better for most contemporary humans, but there is no indication that our environmental opportunities currently outpace our cognitive potential.

Does this imply that technology should be no more advanced 100 years from today, than it is today? If not, in what sense have we already taken every opportunity our environment offers?

Is the idea that opportunities can only be taken in sequence, one after another, so that today’s technology only offers the possibilities of today’s advances? Then why couldn’t a more powerful intelligence run through them much faster, and rapidly build up those opportunities?

A smart human raised in the jungle is but a hairless ape. Similarly, an AI with a superhuman brain, dropped into a human body in our modern world, would likely not develop greater capabilities than a smart contemporary human. If it could, then exceptionally high-IQ humans would already be displaying proportionally exceptional levels of personal attainment; they would achieve exceptional levels of control over their environment, and solve major outstanding problems — which they don’t in practice.

It can’t eat the Internet? It can’t eat the stock market? It can’t crack the protein folding problem and deploy arbitrary biological systems? It can’t get anything done by thinking a million times faster than we do? All this is to be inferred from observing that the smartest human was no more impressive than John von Neumann?

I don’t see the strong Bayesian evidence here. It seems easy to imagine worlds such that you can get a lot of pragmatically important stuff done if you have a brain 100 times the size of John von Neumann’s, think a million times faster, and have maxed out and transcended every human cognitive talent and not just the mathy parts, and yet have the version of John von Neumann inside that world be no more impressive than we saw. How then do we infer from observing John von Neumann that we are not in such worlds?

We know that the rule of inferring bounds on cognition by looking at human maximums doesn’t work on AlphaGo Zero. Why does it work to infer that “An AGI can’t eat the stock market because no human has eaten the stock market”?

However, these billions of brains, accumulating knowledge and developing external intelligent processes over thousands of years, implement a system — civilization — which may eventually lead to artificial brains with greater intelligence than that of a single human. It is civilization as a whole that will create superhuman AI, not you, nor me, nor any individual. A process involving countless humans, over timescales we can barely comprehend. A process involving far more externalized intelligence — books, computers, mathematics, science, the internet — than biological intelligence…

Will the superhuman AIs of the future, developed collectively over centuries, have the capability to develop AI greater than themselves? No, no more than any of us can.

The premise is that brains of a particular size and composition that are running a particular kind of software (human brains) can only solve a problem X (which in this case is equal to “build an AGI”) if they cooperate in a certain group size N and run for a certain amount of time and build Z amount of external cognitive prostheses. Okay. Humans were not especially specialized on the AI-building problem by natural selection. Why wouldn’t an AGI with larger brains, running faster, using less insane software, containing its own high-speed programmable cognitive hardware to which it could interface directly in a high-bandwidth way, and perhaps specialized on computer programming in exactly the way that human brains aren’t, get more done on net than human civilization? Human civilization tackling Go devoted a lot of thinking time, parallel search, and cognitive prostheses in the form of playbooks, and then AlphaGo Zero blew past it in three days, etcetera.

To sharpen this argument:

We may begin from the premise, “For all problems X, if human civilization puts a lot of effort into X and gets as far as W, no single agency can get significantly further than W on its own,” and from this premise deduce that no single AGI will be able to build a new AGI shortly after the first AGI is built.

However, this premise is obviously false, as even Deep Blue bore witness. Is there supposed to be some special case of this generalization which is true rather than false, and says something about the ‘build an AGI’ problem which it does not say about the ‘win a chess game’ problem? Then what is that special case and why should we believe it?

Also relevant: In the game of Kasparov vs. The World, the world’s best player Garry Kasparov played a single game against thousands of other players coordinated in an online forum, led by four chess masters. Garry Kasparov’s brain eventually won, against thousands of times as much brain matter. This tells us something about the inefficiency of human scaling with simple parallelism of the nodes, presumably due to the inefficiency and low bandwidth of human speech separating the would-be arrayed brains. It says that you do not need a thousand times as much processing power as one human brain to defeat the parallel work of a thousand human brains. It is the sort of thing that can be done even by one human who is a little more talented and practiced than the components of that parallel array. Humans often just don’t agglomerate very efficiently.

However, future AIs, much like humans and the other intelligent systems we’ve produced so far, will contribute to our civilization, and our civilization, in turn, will use them to keep expanding the capabilities of the AIs it produces.

This takes in the premise “AIs can only output a small amount of cognitive improvement in AI abilities” and reaches the conclusion “increase in AI capability will be a civilizationally diffuse process.” I’m not sure that the conclusion follows, but would mostly dispute that the premise has been established by previous arguments. To put it another way, this particular argument does not contribute anything new to support “AI cannot output much AI”, it just tries to reason further from that as a premise.

Our problem-solving abilities (in particular, our ability to design AI) are already constantly improving, because these abilities do not reside primarily in our biological brains, but in our external, collective tools. The recursive loop has been in action for a long time, and the rise of “better brains” will not qualitatively affect it — no more than any previous intelligence-enhancing technology.

From Arbital’s Harmless supernova fallacy page:

  • Precedented, therefore harmless: “Really, we’ve already had supernovas around for a while: there are already devices that produce ‘super’ amounts of heat by fusing elements low in the periodic table, and they’re called thermonuclear weapons. Society has proven well able to regulate existing thermonuclear weapons and prevent them from being acquired by terrorists; there’s no reason the same shouldn’t be true of supernovas.” (Noncentral fallacy / continuum fallacy: putting supernovas on a continuum with hydrogen bombs doesn’t make them able to be handled by similar strategies, nor does finding a category such that it contains both supernovas and hydrogen bombs.)

Our brains themselves were never a significant bottleneck in the AI-design process.

A startling assertion. Let’s say we could speed up AI-researcher brains by a factor of 1000 within some virtual uploaded environment, not permitting them to do new physics or biology experiments, but still giving them access to computers within the virtual world. Are we to suppose that AI development would take the same amount of sidereal time? I for one would expect the next version of Tensorflow to come out much sooner, even taking into account that most individual AI experiments would be less grandiose because the sped-up researchers would need those experiments to complete faster and use less computing power. The scaling loss would be less than total, just like adding CPUs a thousand times as fast to the current research environment would probably speed up progress by at most a factor of 5, not a factor of 1000. Similarly, with all those sped-up brains we might see progress increase only by a factor of 50 instead of 1000, but I’d still expect it to go a lot faster.

Then in what sense are we not bottlenecked on the speed of human brains in order to build up our understanding of AI?

Crucially, the civilization-level intelligence-improving loop has only resulted in measurably linear progress in our problem-solving abilities over time.

I obviously don’t consider myself a Kurzweilian, but even I have to object that this seems like an odd assertion to make about the past 10,000 years.

Wouldn’t recursively improving X mathematically result in X growing exponentially? No — in short, because no complex real-world system can be modeled as X(t + 1) = X(t) * a, a > 1.

This seems like a really odd assertion, refuted by a single glance at world GDP. Note that this can’t be an isolated observation, because it also implies that every necessary input into world GDP is managing to keep up, and that every input which isn’t managing to keep up has been economically bypassed at least with respect to recent history.

We don’t have to speculate about whether an “explosion” would happen the moment an intelligent system starts optimizing its own intelligence. As it happens, most systems are recursively self-improving. We’re surrounded with them… Mechatronics is recursively self-improving — better manufacturing robots can manufacture better manufacturing robots. Military empires are recursively self-expanding — the larger your empire, the greater your military means to expand it further. Personal investing is recursively self-improving — the more money you have, the more money you can make.

If we define “recursive self-improvement” to mean merely “causal process containing at least one positive loop” then the world abounds with such, that is true. It could still be worth distinguishing some feedback loops as going much faster than others: e.g., the cascade of neutrons in a nuclear weapon, or the cascade of information inside the transistors of a hypothetical seed AI. This seems like another instance of “precedented therefore harmless” within the harmless supernova fallacy.

Software is just one cog in a bigger process — our economies, our lives — just like your brain is just one cog in a bigger process — human culture. This context puts a hard limit on the maximum potential usefulness of software, much like our environment puts a hard limit on how intelligent any individual can be — even if gifted with a superhuman brain.

“A chimpanzee is just one cog in a bigger process—the ecology. Why postulate some kind of weird superchimp that can expand its superchimp economy at vastly greater rates than the amount of chimp-food produced by the current ecology?”

Concretely, suppose an agent is smart enough to crack inverse protein structure prediction, i.e., it can build its own biology and whatever amount of post-biological molecular machinery is permitted by the laws of physics. In what sense is it still dependent on most of the economic outputs of the rest of human culture? Why wouldn’t it just start building von Neumann machines?

Beyond contextual hard limits, even if one part of a system has the ability to recursively self-improve, other parts of the system will inevitably start acting as bottlenecks. Antagonistic processes will arise in response to recursive self-improvement and squash it.

Smart agents will try to deliberately bypass these bottlenecks and often succeed, which is why the world economy continues to grow at an exponential pace instead of having run out of wheat in 1200 CE. It continues to grow at an exponential pace despite even the antagonistic processes of… but I’d rather not divert this conversation into politics.

Now to be sure, the smartest mind can’t expand faster than light, and its exponential growth will bottleneck on running out of atoms and negentropy if we’re remotely correct about the character of physical law. But to say that this is therefore no reason to worry would be the “bounded, therefore harmless” variant of the Harmless Supernova fallacy. A supernova isn’t infinitely hot, but it’s pretty darned hot and you can’t survive one just by wearing a Nomex jumpsuit.

When it comes to intelligence, inter-system communication arises as a brake on any improvement of underlying modules — a brain with smarter parts will have more trouble coordinating them;

Why doesn’t this prove that humans can’t be much smarter than chimps?

What we can infer from the evolutionary record about the scaling laws governing human brains is a complicated topic. On this particular point I’d refer you to section 3.1, “Returns on brain size”, pp. 35-39, in my semitechnical discussion of returns on cognitive investment. The conclusion there is that we can infer from the increase in equilibrium brain size over the last few million years of hominid history, plus the basic logic of population genetics, that over this time period there were increasing marginal returns to brain size with increasing time and presumably increasingly sophisticated neural ‘software’. I also remark that human brains are not the only possible cognitive computing fabrics.

It is perhaps not a coincidence that very high-IQ people are more likely to suffer from certain mental illnesses.

I’d expect very-high-IQ chimps to be more likely to suffer from some neurological disorders than typical chimps. This doesn’t tell us that chimps are approaching the ultimate hard limit of intelligence, beyond which you can’t scale without going insane. It tells us that if you take any biological system and try to operate under conditions outside the typical ancestral case, it is more likely to break down. Very-high-IQ humans are not the typical humans that natural selection has selected-for as normal operating conditions.

Yet, modern scientific progress is measurably linear. I wrote about this phenomenon at length in a 2012 essay titled “The Singularity is not coming”. We didn’t make greater progress in physics over the 1950–2000 period than we did over 1900–1950 — we did, arguably, about as well. Mathematics is not advancing significantly faster today than it did in 1920. Medical science has been making linear progress on essentially all of its metrics, for decades.

I broadly agree with respect to recent history. I tend to see this as an artifact of human bureaucracies shooting themselves in the foot in a way that I would not expect to apply within a single unified agent.

It’s possible we’re reaching the end of available fruit in our finite supply of physics. This doesn’t mean our present material technology could compete with the limits of possible material technology, which would at the very least include whatever biology-machine hybrid systems could be rapidly manufactured given the limits of mastery of biochemistry.

As scientific knowledge expands, the time and effort that have to be invested in education and training grows, and the field of inquiry of individual researchers gets increasingly narrow.

Our brains don’t scale to hold it all, and every time a new human is born you have to start over from scratch instead of copying and pasting the knowledge. It does not seem to me like a slam-dunk to generalize from the squishy little brains yelling at each other to infer the scaling laws of arbitrary cognitive computing fabrics.

Intelligence is situational — there is no such thing as general intelligence. Your brain is one piece in a broader system which includes your body, your environment, other humans, and culture as a whole.

True of chimps; didn’t stop humans from being much smarter than chimps.

No system exists in a vacuum; any individual intelligence will always be both defined and limited by the context of its existence, by its environment.

True of mice; didn’t stop humans from being much smarter than mice.

Part of the argument above was, as I would perhaps unfairly summarize it, “There is no sense in which a human is absolutely smarter than an octopus.” Okay, but pragmatically speaking, we have nuclear weapons and octopodes don’t. A similar pragmatic capability gap between humans and unaligned AGIs seems like a matter of legitimate concern. If you don’t want to call that an intelligence gap then call it what you like.

Currently, our environment, not our brain, is acting as the bottleneck to our intelligence.

I don’t see what observation about our present world licenses the conclusion that speeding up brains tenfold would produce no change in the rate of technological advancement.

Human intelligence is largely externalized, contained not in our brain but in our civilization. We are our tools — our brains are modules in a cognitive system much larger than ourselves.

What about this fact is supposed to imply slower progress by an AGI that has a continuous, high-bandwidth interaction with its own onboard cognitive tools?

A system that is already self-improving, and has been for a long time.

True if we redefine “self-improving” as “any positive feedback loop whatsoever”. A nuclear fission weapon is also a positive feedback loop in neutrons triggering the release of more neutrons. The elements of this system interact on a much faster timescale than human neurons fire, and thus the overall process goes pretty fast on our own subjective timescale. I don’t recommend standing next to one when it goes off.

Recursively self-improving systems, because of contingent bottlenecks, diminishing returns, and counter-reactions arising from the broader context in which they exist, cannot achieve exponential progress in practice. Empirically, they tend to display linear or sigmoidal improvement.

Falsified by a graph of world GDP on almost any timescale.

In particular, this is the case for scientific progress — science being possibly the closest system to a recursively self-improving AI that we can observe.

I think we’re mostly just doing science wrong, but that would be a much longer discussion.

Fits-on-a-T-Shirt rejoinders would include “Why think we’re at the upper bound of being-good-at-science any more than chimps were?”

Recursive intelligence expansion is already happening — at the level of our civilization. It will keep happening in the age of AI, and it progresses at a roughly linear pace.

If this were to be true, I don’t think it would be established by the arguments given.

Much of this debate has previously been reprised by myself and Robin Hanson in the “AI Foom Debate.” I expect that even Robin Hanson, who was broadly opposing my side of this debate, would have a coughing fit over the idea that progress within all systems is confined to a roughly linear pace.

For more reading I recommend my own semitechnical essay on what our current observations can tell us about the scaling of cognitive systems with increasing resources and increasing optimization, “Intelligence Explosion Microeconomics.”