Most cognitive systems learn subsets of the same abstractions

Created time: Mar 19, 2023 12:47 PM
Tags: AI Alignment, Natural Abstractions

In theory, a cognitive system could learn any abstraction it wants to: no law dictates which information it must throw away. An abstraction still has to retain a certain amount of predictive power about the low-level system while being a high-level summary of it, but there may be several possible abstractions of a given low-level system that fulfil this purpose.
The claim of the Natural Abstractions Hypothesis in the title is, roughly, that of all the information a cognitive system could use to form abstractions and make predictions, only a small subset of abstractions is actually plausible. Or, as Chan et al. (2023) put it: “the vast majority of information is not represented in any of these cognitive systems.”
They give the example of a rotating gear: no cognitive system plausibly tracks the gear's exact thermal motion in order to make predictions about it.
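To make the gear example concrete, here is a minimal toy sketch of my own (not from Chan et al.), assuming a made-up low-level state of many noisy per-tooth angles: a high-level summary that keeps only the mean angle and the angular velocity predicts the macroscopic future essentially as well as the full state, while the discarded thermal jitter contributes almost nothing.

```python
# Toy illustration (assumption, not from the source): a "gear" whose low-level
# state is the angle of each of its N teeth, each displaced by thermal jitter.
import numpy as np

rng = np.random.default_rng(0)

N = 10_000            # number of low-level degrees of freedom ("teeth"/atoms)
omega = 0.1           # macroscopic angular velocity (radians per step)
sigma_thermal = 1e-3  # scale of the thermal jitter

def low_level_state(t):
    # Every tooth sits near the common angle omega*t, plus its own noise.
    return omega * t + sigma_thermal * rng.standard_normal(N)

def abstraction(state):
    # High-level summary: throw away everything except the mean angle.
    return state.mean()

state_now = low_level_state(t=0)
state_later = low_level_state(t=100)

# Predict the future macrostate using only the summary plus the known dynamics.
predicted_mean = abstraction(state_now) + omega * 100
actual_mean = abstraction(state_later)

print(f"predicted mean angle: {predicted_mean:.6f}")
print(f"actual mean angle:    {actual_mean:.6f}")
# The two agree to within roughly sigma_thermal / sqrt(N); the N discarded
# per-tooth noise values carry essentially no additional predictive power.
```

The point of the sketch is only that a tiny summary of a huge low-level state can retain nearly all the predictive power a cognitive system cares about, which is the intuition behind the hypothesis.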

Source: Chan, L., Lang, L. and Jenner, E. (2023) ‘Natural Abstractions: Key Claims, Theorems, and Critiques’. AI Alignment Forum. Available at: https://www.alignmentforum.org/posts/gvzW46Z3BsaZsLc25/natural-abstractions-key-claims-theorems-and-critiques-1 (Accessed: 19 March 2023).