Fulfilling a metric cannot be equated with being an optimizer

Tags
AI Alignment
Inner Misalignment
Outer Misalignment
EA
Philosophy
Reference Box
Date created
Sep 25, 2022 10:35 AM
Related Main Box
A common criticism of behaviorism looks like this:
Dispositions to act cannot be equated with certain mental states.
Thus, just because an agent takes an action A that leads to goal G, we cannot infer a particular mental state from its disposition to act in a certain way.
Let’s apply this argument to optimizers.
AI-behaviorism would claim that an agent that fulfills a metric or goal has the internal property of being an optimizer, provided we can find an appropriate action A that leads to that fulfillment.
E.g., we would say that an agent B is optimizing towards goal G if it consistently takes actions that lead to G. By this criterion, a bottle cap would be an optimizer, optimizing for the amount of water kept in the bottle while it is closed. But this implies that the bottle cap is searching for a solution to the given problem; it implies that the bottle cap knows that the given action leads to goal G. Since a bottle cap plainly does nothing of the sort, consistently fulfilling a metric cannot be sufficient for being an optimizer.
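To make the contrast concrete, here is a minimal Python sketch; the toy metric, class names, and action strings are illustrative assumptions of mine, not anything from the alignment literature. Both agents fulfill the metric equally well, but only one of them contains a search:

```python
# Toy setup: "goal G" is keeping water in a bottle. We score each agent
# by a metric measuring how much water is retained. All names here are
# hypothetical, chosen only to illustrate the argument.

def water_retained(action: str) -> float:
    """Metric for goal G: fraction of water kept in the bottle."""
    return 1.0 if action == "stay_sealed" else 0.0

class BottleCap:
    """Fulfills the metric without optimizing: its behavior is fixed.
    It represents neither G nor any alternative actions."""
    def act(self) -> str:
        return "stay_sealed"  # hardcoded disposition, not a chosen solution

class ExplicitOptimizer:
    """Actually optimizes: it enumerates candidate actions and selects
    the one that scores highest on the metric for G."""
    def __init__(self, candidate_actions: list[str]):
        self.candidate_actions = candidate_actions

    def act(self) -> str:
        # Internal search over alternatives, evaluated against the metric.
        return max(self.candidate_actions, key=water_retained)

cap = BottleCap()
opt = ExplicitOptimizer(["open", "leak", "stay_sealed"])

# Behaviorally indistinguishable on this metric...
assert water_retained(cap.act()) == water_retained(opt.act()) == 1.0
# ...yet only ExplicitOptimizer performs a search over actions toward G.
```

By the behaviorist criterion the two agents are indistinguishable; the difference between them is exactly the internal property that “being an optimizer” is supposed to pick out.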