> The problem with the "rational agent" model is that it's a tautology. Yes, sure, everyone wants more utility, great, but everyone's utility function is slightly different. As you say, some are risk takers pursuing insane rewards/yields/profits with low probability, while others are super risk-averse and conservative in their choices, etc.
There's nothing in the rational agent model that assumes everyone has the same utility function. Different people want different things.
Yes, of course, but that's what I mean by problem. Saying people are (boundedly rational) utility function maximizers doesn't give us a predictive theory; it just says, in a fancy way, that people do what they do for "reasons", and everyone usually has a different set of reasons.
Of course, it's fine as a very general foundational theory, if you then want to study how people's revealed and non-revealed (older terms for implicit and explicit) preferences aggregate into a utility function. (There's a whole bunch of math about pairwise comparison matrices. Lately there's some movement in that space toward using perturbation to model inconsistencies in preferences, etc.)
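To make the pairwise-comparison point concrete, here is a minimal sketch of the classic consistency check on a pairwise comparison matrix (Saaty's consistency index from the analytic hierarchy process). The matrix values and the single perturbed judgment are made-up illustrations, not data from the discussion:

```python
# Sketch: consistency of a pairwise comparison matrix via the
# principal eigenvalue (Saaty's consistency index). Pure stdlib.

def principal_eigenvalue(A, iters=500):
    """Power iteration; for a positive matrix this converges to the
    dominant (Perron) eigenvalue."""
    n = len(A)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)              # entries stay positive, so max-norm works
        v = [x / lam for x in w]
    return lam

def consistency_index(A):
    """CI = (lambda_max - n) / (n - 1). Zero exactly when the judgments
    are transitive: A[i][j] * A[j][k] == A[i][k] for all i, j, k."""
    n = len(A)
    return (principal_eigenvalue(A) - n) / (n - 1)

# Perfectly consistent matrix built from weights w = (1, 2, 4),
# with A[i][j] = w[i] / w[j]:
consistent = [
    [1.0, 0.5, 0.25],
    [2.0, 1.0, 0.5],
    [4.0, 2.0, 1.0],
]

# Same matrix with one judgment perturbed (0.25 -> 0.5, and its
# reciprocal), breaking transitivity -- the kind of inconsistency
# the perturbation approaches try to model:
perturbed = [
    [1.0, 0.5, 0.5],
    [2.0, 1.0, 0.5],
    [2.0, 2.0, 1.0],
]

print(consistency_index(consistent))  # ~0.0
print(consistency_index(perturbed))   # > 0
```

The index is zero only for perfectly consistent judgments; any perturbation of a single pairwise ratio pushes the dominant eigenvalue above n and the index above zero, which is one standard way to quantify how "irrational" a set of stated preferences is.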