There is something implicit in my prior post that I want to make clearer.
My theory presumes that agents are defined by their respective utility functions. The information specifying those functions must be embodied in some physical substrate in order for an agent to operate within a physical domain. A suitably programmed Turing machine, given the states of the agent's constitutive variables, should therefore be able to ascertain the agent's utility function. Once the utility function is derived from that information, it should be possible to predict the agent's preferences over the various possible states of the universe. Solomonoff induction provides at least one means of converting that information into a probability distribution.
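To make the inference step concrete, here is a minimal sketch in Python. True Solomonoff induction is uncomputable and ranges over all programs, so this toy stands in a small finite hypothesis class of candidate utility functions, with a 2^-(description length) prior echoing the universal prior. Everything here (the `State` type, the `CANDIDATES` list, `infer_utility`, and the logistic choice model) is a hypothetical illustration of the idea, not an established method or API.

```python
# A computable toy version of the inference described above: a length-weighted
# prior over candidate utility functions, updated on observed choices.
from math import exp

# A "state of the universe" is reduced to a tuple of the agent's
# constitutive variables -- here, just two numbers.
State = tuple[float, float]

# Candidate utility functions, each paired with a crude "description length"
# (characters of source) standing in for program length.
CANDIDATES = [
    ("u(x,y)=x",   lambda s: s[0]),
    ("u(x,y)=y",   lambda s: s[1]),
    ("u(x,y)=x+y", lambda s: s[0] + s[1]),
    ("u(x,y)=x-y", lambda s: s[0] - s[1]),
    ("u(x,y)=x*y", lambda s: s[0] * s[1]),
]

def infer_utility(choices: list[tuple[State, State]]) -> list[tuple[str, float]]:
    """Posterior over candidate utility functions, given observed choices.

    Each observation (a, b) means the agent chose state a over state b.
    Prior weight is 2^-(description length); the likelihood is a soft
    (logistic) preference model, so one noisy choice never eliminates
    a hypothesis outright.
    """
    posterior = []
    for name, u in CANDIDATES:
        weight = 2.0 ** -len(name)  # length-weighted prior
        for a, b in choices:
            # P(agent picks a over b | u) under a logistic choice model
            weight *= 1.0 / (1.0 + exp(u(b) - u(a)))
        posterior.append((name, weight))
    total = sum(w for _, w in posterior)
    return [(name, w / total) for name, w in posterior]

# Example: an agent that consistently prefers states with higher x+y.
observations = [
    ((2.0, 3.0), (1.0, 1.0)),
    ((0.0, 5.0), (2.0, 1.0)),
    ((5.0, 0.0), (1.0, 3.0)),
]
for name, p in sorted(infer_utility(observations), key=lambda t: -t[1]):
    print(f"{p:.3f}  {name}")
```

With these three observations the posterior concentrates on `u(x,y)=x+y`: the shorter hypotheses `u(x,y)=x` and `u(x,y)=y` each get a larger prior but are contradicted by one of the choices, which is exactly the simplicity-versus-fit tradeoff the Solomonoff prior formalizes.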