Hubinger states that whether an AI is optimizing for something depends on the internal structure of the agent. Thus, we cannot judge whether something is truly an optimizer based solely on whether it satisfies a certain metric.
He gives the example of a water bottle cap, which holds the water in perfectly well but isn't actually an optimizer, because it does not internally search a solution space for candidates that score well on a given metric.
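To make the distinction concrete, here is a minimal Python sketch (the names and the toy leakage metric are entirely hypothetical, not from Hubinger): both objects satisfy the metric equally well, but only one of them contains an internal search over a solution space.

```python
def leakage(cap_diameter: float, bottle_diameter: float = 2.0) -> float:
    """Toy metric: how badly a cap of this diameter leaks (0.0 = perfect seal)."""
    return abs(cap_diameter - bottle_diameter)

# A "bottle cap": a fixed artifact. It scores perfectly on the metric,
# but no objective or search process is represented anywhere inside it.
FIXED_CAP = 2.0

def optimizer_cap(candidates: list[float]) -> float:
    """An optimizer in the relevant sense: it explicitly searches the
    candidate space for the element that minimizes the leakage objective."""
    return min(candidates, key=leakage)

if __name__ == "__main__":
    found = optimizer_cap([1.5, 1.9, 2.0, 2.3])
    # Both achieve leakage 0.0, yet only one did so by internal search.
    print(leakage(FIXED_CAP), leakage(found))
```

Judging by the metric alone, the two are indistinguishable; the difference lives entirely in their internal structure, which is exactly Hubinger's point.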
This argument sounds strikingly like the common criticism of behaviorism: that outwardly identical behavior can be produced by very different internal processes.