Do folks think that consciousness matters for moral responsibility?
In this post, I describe the results of an experiment, run a while ago, on the relationship between consciousness and moral responsibility.
Almost two months ago, Neil Levy posted this fascinating question on Flickers of Freedom: does consciousness matter (for free will / moral responsibility)? Very soon after, some comments raised a different (but related) question: are people's intuitions about free will / moral responsibility influenced by whether the psychological states an action has its source in are conscious or not? (This question is in fact different from the one in the title of this post: it could be that people explicitly believe that consciousness is a necessary condition for moral responsibility, while this belief is not coherent with their intuitions about particular cases. Such dissociations are not uncommon. For example, Freiman and Nichols have a paper showing that people explicitly believe that one cannot deserve resources acquired through "brute luck", but that their intuitions about particular cases are not consistent with this belief.)
Now, a year ago, I ran an experiment on this very question. Its results have since been sleeping in a drawer but, since some people seemed interested in running similar experiments, I think they might be interested in them.
Let me first describe the motivations for this experiment. The primary question was: why do so many people seem to believe that Libet's famous experiment is a threat to the existence of moral responsibility (not to mention free will)? Let's grant that Libet's results and his interpretation are right (I am not saying they are; I will just proceed "as if" in what follows). What do they show? That our voluntary actions are preceded by unconscious brain states that are already "programming" the action, brain states that come before we become aware of having the intention / will / urge to act. So what? It is possible to accept these results and still believe in moral responsibility, for example if you consider that these very same brain states are also psychological states that exist for some seconds before becoming conscious.
And here comes the psychological puzzle: why do so many people consider this experiment a threat to moral responsibility? A first option, the most obvious, is the following:
- CS: People's intuitions follow this rule: "an agent A is morally responsible for an action B only if A was conscious of the psychological states B has its source in."
But there is another possibility. Nahmias and his colleagues have observed the following phenomenon in their experiments: people are prone to answer that an agent in a deterministic universe is morally responsible as long as determinism is described in psychological terms. As soon as determinism is described in neurological terms, the great majority answer that this agent is not morally responsible. How are we to interpret these results? Nahmias offers the following hypothesis: maybe people reading a neurological description come to believe that the agent does not act upon his own mental states. Rather, his mental states are "bypassed" and he is forced to act by external forces (i.e. physical forces).
It turns out that, when it comes to folk intuitions about moral responsibility, I tend to think that people's intuitions are driven by a set of rules close to Watson's "True Self" hypothesis or, more recently, Sripada's Deep Self model. In a nutshell: when people act on the basis of psychological states, we make a distinction between mental states that truly belong to the agent and "shallow" mental states, which play a role in the agent's action but are not truly his own. This distinction, for example, allows us to understand sci-fi stories involving hypnotic suggestion or alien possession, in which we do not blame the agent for what he did because he was acting on mental states that did not really belong to him. So, my theory is that folk intuitions dismiss moral responsibility when an agent is not acting on the basis of his "True Self", that is, when he is acting on the basis of mental states that are not really his own.
What about Libet's experiment? My hypothesis is that, in this case, people's "intuitive dualism" (i.e. the fact that physical events and mental states are computed by different cognitive systems) leads them to consider that a neural event is not a genuine mental state but some external force ("my brain and not I"), and then that, if an agent acts on the basis of this external force, he is not morally responsible (the famous "my brain made me do it"). So we have another option:
- TS: People's intuitions follow this rule: "an agent A is morally responsible for an action B only if B has its source in mental states that "truly belong" to A (whether they are conscious or not)."
How are we to decide between CS and TS? By running more experiments, of course. So I decided to test CS and created a set of four scenarios that allowed me to vary two factors (motives and consciousness). All scenarios described a man living in a building who meets his new neighbor (a very beautiful woman) and decides to help her carry her furniture up the stairs (there is no elevator). The agent could have good or bad motives. In the good motives condition, people were told that he helped her only because he cared about her and wanted to help her, without any ulterior motive. In the bad motives condition, he helped her only because he hoped to have sex with her (okaaay! these are not really bad motives, but they are not good ones either). Then, the agent could be conscious or unconscious of his motives. In the conscious condition, the action and the motives were described without further detail. In the unconscious condition, it was said that the agent suffered from very low [high] self-esteem and (falsely) believed that he had helped his neighbor because he wanted to have sex with her [just wanted to help her].
Each participant read only one of the four scenarios, then had to rate whether the agent deserved praise for having helped his neighbor, on a scale from 0 ("not at all") to 7 ("absolutely"). The results are shown in the figure.
An ANOVA reveals a significant effect of motives: unsurprisingly, people tend to attribute more praise in the good motives condition than in the bad motives condition. More surprisingly, there was no significant effect of consciousness. But there was a significant interaction between the two factors, since the effect of consciousness on ratings changed according to the valence of the motives.
Indeed, we can observe that not being conscious of one's motives led to higher ratings in the bad motives condition but to lower ratings in the good motives condition. These results cannot be explained by CS, which would predict that lack of consciousness leads to a decrease in moral responsibility in both cases. But my guess is that they can be explained by TS if we add the auxiliary hypothesis that people use consciousness, and the agent's beliefs about his motives, as a "clue" to the agent's "true self". So, in the bad motives condition, some subjects came to think that, if the agent believed he had good motives, he must really have had good motives, and gave higher ratings (and the opposite in the good motives condition).
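For readers who want to run a similar analysis, here is a minimal sketch of the 2x2 between-subjects ANOVA in pure Python. The ratings below are entirely made up (the actual data are not reproduced in this post); they are chosen only to mimic the qualitative pattern reported above: a motives main effect, no consciousness main effect, and a crossover interaction.

```python
# Sketch of a 2x2 between-subjects ANOVA on made-up praise ratings.
# All numbers are hypothetical, chosen to illustrate the crossover pattern.
from itertools import product

# Hypothetical praise ratings (0-7 scale), n = 4 per cell (balanced design).
data = {
    ("good", "conscious"):   [6, 7, 6, 5],
    ("good", "unconscious"): [4, 5, 4, 3],
    ("bad",  "conscious"):   [1, 2, 1, 2],
    ("bad",  "unconscious"): [3, 4, 3, 4],
}

def mean(xs):
    return sum(xs) / len(xs)

n = 4                                  # per-cell sample size
a_levels = ["good", "bad"]             # factor A: motives
b_levels = ["conscious", "unconscious"]  # factor B: consciousness

grand = mean([x for cell in data.values() for x in cell])
a_means = {a: mean([x for b in b_levels for x in data[(a, b)]]) for a in a_levels}
b_means = {b: mean([x for a in a_levels for x in data[(a, b)]]) for b in b_levels}
cell_means = {k: mean(v) for k, v in data.items()}

# Sums of squares for a balanced two-way design.
ss_a = n * len(b_levels) * sum((a_means[a] - grand) ** 2 for a in a_levels)
ss_b = n * len(a_levels) * sum((b_means[b] - grand) ** 2 for b in b_levels)
ss_ab = n * sum(
    (cell_means[(a, b)] - a_means[a] - b_means[b] + grand) ** 2
    for a, b in product(a_levels, b_levels)
)
ss_err = sum((x - cell_means[cell]) ** 2 for cell, xs in data.items() for x in xs)

df_err = sum(len(v) for v in data.values()) - len(data)  # N - number of cells
mse = ss_err / df_err

# Each effect has 1 degree of freedom in a 2x2 design.
f_a, f_b, f_ab = ss_a / mse, ss_b / mse, ss_ab / mse

print(f"F(motives) = {f_a:.2f}, F(consciousness) = {f_b:.2f}, F(interaction) = {f_ab:.2f}")
```

With these fabricated numbers, the motives and interaction F ratios are large while the consciousness F ratio is zero, matching the qualitative pattern of the results (in practice one would compare each F against the F(1, df_err) distribution for a p-value, e.g. with scipy.stats).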
So, these results spell trouble for CS and for the idea that people's intuitions treat consciousness of one's motives as a condition for moral responsibility. Nevertheless, the link between consciousness and moral responsibility could rest on different rules. For example, we could entertain the following hypothesis:
- CS_2: People's intuitions follow this rule: "an agent A is morally responsible for an action B only if A was able to become conscious of the psychological states B has its source in."
To test this possibility, it would be necessary to run a new version of the experiment with, for example, an agent suffering from a mental disorder that makes him incapable of becoming aware of his true motives. Nevertheless, I think there is a good chance that consciousness is not a necessary condition for moral responsibility (as far as our intuitions are concerned).
Participants in the unconscious conditions also had to answer two more questions: "Do you think it is possible not to be conscious of one's own motivations for acting?" and "Do you think it is possible to have a desire one is not aware of?". To the first question, 85% of participants answered YES; to the second, 91%. Note that the experiment was run with participants recruited in the Quartier Latin in Paris, where psychoanalysis is still popular.