Sophisticated AI systems will learn natural abstractions that we don’t understand

Most general cognitive systems can learn the same abstractions poses the scenario in which an AI learns an abstraction that we don't understand.
Abstractions exist because we need to simplify low-level systems. Since abstractions can lead us astray, it would be preferable to avoid them where possible. By the same logic, a late-game superintelligent AI might have no need to simplify a low-level system at all: it could make accurate predictions directly from real-world data. We, however, still depend on abstractions, so we still need some way to understand the information such an AI is handling.
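
As a toy sketch of this idea (not from the source, and only illustrative): an abstraction can be viewed as a lossy summary of low-level state. Here a hypothetical gas is represented by 100,000 particle speeds, and the abstraction keeps only a single temperature-like number while discarding everything else; a system with enough capacity could, in principle, work on the full low-level state and skip the summary entirely.

```python
import random

# Low-level system: a hypothetical gas of many particles, each with its own speed.
particle_speeds = [random.gauss(500.0, 50.0) for _ in range(100_000)]

# Abstraction: summarize the full microstate with one number
# (mean squared speed, proportional to temperature); all other detail is discarded.
def temperature_like_summary(speeds):
    return sum(v * v for v in speeds) / len(speeds)

summary = temperature_like_summary(particle_speeds)
print(f"low-level state: {len(particle_speeds)} numbers")
print(f"abstraction:     1 number ({summary:.1f})")

# A predictor with enough capacity could operate on all 100,000 speeds directly;
# the summary exists only because we cannot.
```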

Source:
Links: