Distances between cognitive domains

In the context of AI alignment, we may care a lot about the degree to which competence in two different cognitive domains is separable, or alternatively highly tangled, relative to the class of algorithms reasoning about them.

  • Calling X and Y ‘separate domains’ asserts at least one of: “It’s possible to learn to reason well about X without needing to know how to reason well about Y” or “It’s possible to learn to reason well about Y without needing to know how to reason well about X”.

  • Calling X a distinct domain within a set of domains Z, relative to a background domain W, asserts that: taking for granted the background algorithms and knowledge W that the agent can use to reason about any domain in Z, it’s possible to reason well about X using ideas, methods, and knowledge that are mostly related to each other and not tangled up with ideas from non-X domains within Z.
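One hedged way to gloss the second definition, offered here only as an illustrative rendering and not taken from the original article, is in description-length terms: write $K(X \mid W)$ for the size of the smallest body of ideas and knowledge that, added to the background W, suffices to reason well about X. Then X being a distinct domain within Z, relative to W, corresponds roughly to

$$K(X \mid W) \;\approx\; K\bigl(X \mid W,\; Z \setminus \{X\}\bigr),$$

i.e., already knowing how to reason about the other domains in Z buys little compression when learning to reason about X. On this rendering, the blue-car/red-car example below is the degenerate case where $K(\text{red cars} \mid W, \text{blue cars}) \approx 0$.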

For example: If the domains X and Y are ‘blue cars’ and ‘red cars’, then it seems unlikely that X and Y would be well-separated domains, because an agent that knows how to reason well about blue cars is almost surely extremely close to being an agent that can reason well about red cars, in the sense that:

  • For almost everything we want to do or predict about blue cars, the simplest or fastest or easiest-to-discover way of doing that manipulation or prediction will also work for manipulating or predicting red cars (see the toy sketch after this list). This is the sense in which the blue-car and red-car domains are ‘naturally’ very close.

  • For most natural agent designs, the state or specification of an agent that can reason about blue cars is probably extremely close to the state or specification of an agent that can reason about red cars.

  • The only reason why an agent that reasons well about blue cars would be hard to convert into an agent that reasons well about red cars would be if specific extra elements had been added to the agent’s design to prevent it from reasoning well about red cars. In that case, the design distance is increased by whatever further modifications are required to untangle and delete the anti-red-car-learning inhibitions; but beyond that, the ‘blue car’ and ‘red car’ domains are naturally close.

  • An agent that has already learned how to reason well about blue cars probably requires only a tiny amount of extra knowledge or learning, if any, to reason well about red cars as well. (Again, unless the agent contains specific added design elements to make it reason poorly about red cars.)
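As a concrete (and deliberately trivial) illustration of the first bullet above, here is a minimal sketch, not taken from the article, in which a predictor fit only on ‘blue car’ data works unchanged on ‘red car’ data, because the regularity being learned never mentions color. The physics, feature choices, and numbers are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_cars(n, color):
    """Generate synthetic car data; 'color' is deliberately unused --
    it has no effect on the physics being predicted."""
    speed = rng.uniform(10, 40, n)            # m/s
    braking = rng.uniform(4, 8, n)            # m/s^2
    stopping = speed**2 / (2 * braking) + rng.normal(0, 1, n)   # meters, with noise
    features = np.column_stack([speed**2 / braking, np.ones(n)])
    return features, stopping

# "Learn to reason about blue cars": fit a linear predictor on blue-car data only.
X_blue, y_blue = make_cars(500, "blue")
weights, *_ = np.linalg.lstsq(X_blue, y_blue, rcond=None)

# Zero extra learning is needed for red cars: the same weights predict them too.
X_red, y_red = make_cars(500, "red")
blue_err = np.mean(np.abs(X_blue @ weights - y_blue))
red_err = np.mean(np.abs(X_red @ weights - y_red))
print(f"mean error on blue cars: {blue_err:.2f} m")
print(f"mean error on red cars:  {red_err:.2f} m")   # ~the same: the domains are 'close'
```

Nothing in this fitted predictor would transfer to, say, proving theorems; whether realistic engineering and theorem-proving competences are similarly entangled or separable is exactly the kind of question the disagreements listed below are about.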

In more complicated cases, which domains are truly close or far from each other, or can be compactly separated out, is a theory-laden assertion. Few people are likely to disagree that blue cars and red cars are very close domains (if they’re not specifically trying to be disagreeable). Researchers are more likely to disagree in their predictions about:

  • Whether (by default and ceteris paribus and assuming designs not containing extra elements to make them behave differently etcetera) an AI that is good at designing cars is also likely to be very close to learning how to design airplanes.

  • Whether (assuming straightforward designs) the first AGI to obtain superhuman engineering ability for designing cars, including their software, would probably be at least par-human in the domain of inventing new mathematical proofs.

  • Whether (assuming straightforward designs) an AGI that has superhuman engineering ability for designing cars, including their software, necessarily needs to think about most of the facts and ideas that would be required to understand and manipulate human psychology.

Relation to ‘general intelligence’

A key parameter in some such disagreements may be how much credit the speaker gives to the notion of general intelligence: specifically, to what extent the natural or most straightforward approach to getting par-human or superhuman performance in critical domains is to take relatively general learning algorithms and deploy them on learning the domain as a special case.

If you think that it would take a weird or twisted design to build a mind that was superhumanly good at designing cars, including writing their software, without using general algorithms and methods that could with little adaptation stare at mathematical proof problems and figure them out, then you think ‘design cars’ and ‘prove theorems’ and many other domains are in some sense naturally not all that separated. Which (arguendo) is why humans are so much better than chimpanzees at so many apparently different cognitive domains: the same competency, general intelligence, solves all of them.

If on the other hand you are more inspired by the way that superhuman chess AIs can’t play Go and AlphaGo can’t drive a car, you may think that humans using general intelligence on everything is just an instance of us having a single hammer and trying to treat everything as a nail; and predict that specialized mind designs that were superhuman engineers, but very far in mind design space from being the kind of mind that could prove Fermat’s Last Theorem, would be a more natural or efficient way to create a superhuman engineer.

See the entry on General intelligence for further discussion.

Parents:

  • Cognitive domain

    An allegedly compact unit of knowledge, such that ideas inside the unit interact mainly with each other and less with ideas in other domains.