Garrett and I probably have compatible ontologies (or so it seems to me), but we are using different definitions of "free will" and related terms. Take, for instance, the sentence "If a person acts with a reason, they were not in control of their actions, which would have been different in different circumstances." My definition of "free will" does not require that an agent's actions be *random*, or that they be *uncaused*. I have no problem whatsoever with attributing free will to agents living in a completely deterministic universe, like the one in Greg Egan's "Permutation City" (a work I heartily recommend).
Free will, for me, must be compatible with the common understanding of free will – though not necessarily with "folk theories" of what the existence of free will entails (such as the ability to act independently of any external causative factors). Any definition I adopt, whatever its degree of logical formality, should be a *refinement* of that common understanding.
So how do I define free will? Roughly speaking, it is what an intelligent being exercises when it takes in information from external and internal sources, processes it, and acts on the results. Moreover, the greater the degree of self-reflection – that is, the more that being thinks about its own thought processes, and about how they relate to its environment and its actions – the more likely an outside observer would be to attribute "free will" to it.