As seen in Values are not just sense data and Values are a function of things I know, don’t know and will never know, I care not only about things I can sense, but also about things I don’t know.
Similarly, an AI has to acknowledge that there are things I don’t know but might value in the future.
E.g., I might come to care about a distant relative I didn’t know of before, or I might care about something after changing my mind about it.
Thus, the AI has to optimize not only for the things I truly value, and not merely for my current estimate of what I value; it also has to account for the things I might potentially come to value.