
Thursday, May 19, 2011

Did I lock the door? The cognitive neuroscience of Obsessive-Compulsive Disorder

Hampshire College
Brain and Cognition

Introduction

Obsessive-Compulsive Disorder (OCD) is a common disorder that appears in both children and adults. It consists of obsessions, compulsions, or both. Obsessions are recurring thoughts or impulses that are intrusive and unwarranted by the environment (but still perceived as originating from the patient’s own mind, distinguishing them from “thought-insertion”) and that cause distress or anxiety; compulsions are repetitive behaviors the obsessive-compulsive feels he/she must perform. Compulsions are often related to the obsessions and are meant to neutralize the distress those obsessions cause. Some of the most common obsessions include germaphobia, fear of causing harm to oneself or to others, and worry that important tasks have been left undone (for example, being unsure whether one has locked the door); common compulsions include hand washing, counting, and checking. The patient is usually aware of the absurdity of their thoughts and actions, but nevertheless feels powerless to stop them (American Psychiatric Association, 2000).
In this paper, I will discuss the neurocognitive findings regarding OCD, particularly as they relate to the SEC/OCD model proposed by Huey et al. (2008). I will begin with an overview of some of the brain studies of OCD, then move on to some popular models of the disorder. Finally, I will discuss the SEC/OCD model.

The Brain Regions Implicated in OCD and their Functions

Despite copious amounts of research, the precise neuropsychology of OCD remains uncertain (Markarian et al., 2010), and different studies often provide different and even contradictory results. Despite this, there are some generally agreed-upon neuroanatomical features of OCD. Specifically, it has consistently been found that obsessive-compulsives show anatomical and functional abnormalities in the orbitofrontal cortex and the anterior cingulate cortex, as well as in the basal ganglia, prefrontal cortex, and thalamus (Markarian et al., 2010; Huey et al., 2008).

Several studies have reported reduced bilateral orbitofrontal cortex volumes in obsessive-compulsives, and there seems to be a correlation between greater reductions and worse symptoms (Maia, Cooney, & Peterson, 2008). The orbitofrontal cortex seems to be involved in reward learning, emotion, and the regulation of complex behavior. It has been found that the orbitofrontal cortex responds to reward stimuli, but not once the desire for the stimuli has been satiated. Macaques with orbitofrontal cortex lesions have difficulty learning the reward value of stimuli, and are slow to change their behavior when reward conditions change (Huey et al., 2008).

There is evidence of increased anterior cingulate cortex activation in obsessive-compulsives (Fitzgerald et al., 2005). The anterior cingulate cortex is implicated in decision-making; in particular, it acts as a conflict and error detector, responding when there is a discrepancy between expected and actual events. Activation of the anterior cingulate cortex seems to lead to negative emotional states, such as anxiety (Huey et al., 2008).

It has been found that damage to the basal ganglia through infection can lead to obsessive-compulsive symptoms. The basal ganglia seem to play an important role in the generation and regulation of motor activity. Specifically, it has been suggested that the basal ganglia act as a sort of gate for motor signals, facilitating certain motor actions while suppressing others (Huey et al., 2008).

The thalamus shows higher activation in subjects with OCD compared to controls, and has been reported to be larger in obsessive-compulsives (Maia, Cooney, & Peterson, 2008; Huey et al., 2008). The thalamus seems to be a gateway for interactions between the brain areas involved in OCD (Huey et al., 2008).

Some Models of Obsessive-Compulsive Disorder

Based on behavioral and neuroanatomical evidence, there have been numerous models proposed for OCD.

The Cognitive-Behavioral Model: According to this model, OCD arises from dysfunctional beliefs regarding the importance of thoughts. Almost everyone has had intrusive thoughts that are perceived as inappropriate: for example, the fleeting, unwarranted, and unwanted mental image of stabbing a loved one with a knife. Healthy subjects recognize this as just meaningless junk in the stream of consciousness. However, obsessive-compulsives, according to the cognitive-behavioral model, incorrectly view these thoughts as highly significant—for example, as evidence that one will, in fact, lose control and stab someone. Because of the importance attached to these thoughts, they develop into obsessions, and compulsions arise as a way to attempt to get rid of these unwanted thoughts and/or neutralize the danger associated with such thoughts. These compulsions are then reinforced by the temporary reduction in anxiety they cause and the fact that they prevent the obsessive-compulsive from learning that harmful consequences will not arise as the result of their thoughts (McKay & Abramowitz, 2010). This model has gathered a lot of support (with some exceptions), but does not provide a neuropsychological explanation.

The Standard Model: The standard neuroanatomical model of OCD proposes that the disorder arises from a dysfunction of elements of a prefrontal cortex-basal ganglia-thalamic loop. This model is consistent with most of the data collected on the neuroanatomy of OCD and forms the basis of many subsequent models. However, it does not explain the psychological mechanisms of OCD (Huey et al., 2008).

The “Feeling of Knowing” Model: Szechtman & Woody (2004) propose a model wherein OCD is caused by an inability to create the “feeling of knowing” that a task has been completed. Specifically, they argue that the common symptoms of OCD—washing, checking, fear of causing harm, etc.—are those that evolutionarily would have been related to the security of the organism and its fellows. This need for security leads to the evolution of a security-seeking system. Because there are no external stimuli that indicate the completion of a security-seeking task (for example, there could always be a predator that the animal has missed), the completion of such tasks is indicated by an internally generated “feeling of knowing”. Obsessive-compulsives have a deficit in generating this subjective sensation, leading to the odd phenomenon in which the obsessive-compulsive is perfectly aware, objectively, that, for example, their hands are perfectly clean, yet they do not feel clean, leading to further washing.

The SEC/OCD Model

Huey et al.’s (2008) model of OCD expands on Szechtman & Woody’s model, and also on their own earlier work in which they propose that complex behaviors with beginnings, middles, and ends are stored in the prefrontal cortex in the form of Structured Event Complexes, or SECs. For example, the knowledge of how to correctly eat dinner at a restaurant—finding a seat, ordering, eating, paying the bill, leaving—would be an SEC. SECs are usually implicitly recalled, and in this respect are similar to procedural memory. SECs are stored when a complex sequence of behavior leads to a reward, so that such a sequence may be repeated. As evidence for SECs, it is noted that patients with damage to the prefrontal cortex often report difficulty ordering and sequencing events and actions.

Just as the completion of an SEC can be rewarding, so too can the inability to complete an SEC feel punishing. Furthermore, there are SECs that are themselves punishing but which bring about the removal of punishment: for example, few people feel good about doing their taxes, but most are relieved when their taxes are finally done.

In the SEC/OCD model, it is proposed that the initiation of an SEC is accompanied by a motivational signal, experienced as anxiety, which encourages the animal to complete the SEC. In healthy subjects, the completion of the SEC is accompanied by a reward signal. According to the model, obsessive-compulsives have a deficiency in this latter process: although the SEC is complete, the obsessive-compulsive does not have the sensation that it is done. This leads the anterior cingulate cortex to produce an error signal. The orbitofrontal cortex responds to this error as punishment, leading to a feeling of anxiety. The source of this feeling is not consciously accessible, leading the obsessive-compulsive to attempt to assign an explicit cause to it. This interpretation forms the basis of an obsession. The compulsion arises because the completion of an SEC, for example hand washing, gives obsessive-compulsives only partial relief, so that they feel they must repeat the SEC.

In regard to the basal ganglia, Huey et al. suggest that, just as they facilitate some motor actions while suppressing others, so too do they gate SECs. It is proposed that the basal ganglia set thresholds for the activation of SECs; when this threshold is lowered, for example by damage, SECs can become overactive, producing symptoms of OCD.
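
To make the logic of the model easier to follow, here is a minimal toy simulation of that loop. It is only a sketch of my own reading of Huey et al. (2008): the numbers, the parameter names, and the idea of modeling relief as a simple subtraction are illustrative assumptions, not anything taken from the paper.

# A minimal toy simulation of the SEC/OCD loop described above. All values
# are illustrative assumptions, not parameters from Huey et al. (2008).
def perform_sec(completion_signal=1.0, activation_threshold=0.5,
                relief_per_completion=0.4, max_repetitions=20):
    """Repeat one SEC (e.g. hand washing) until anxiety falls below the
    basal-ganglia-style activation threshold."""
    anxiety = 1.0      # motivational signal accompanying initiation of the SEC
    repetitions = 0
    while anxiety > activation_threshold and repetitions < max_repetitions:
        repetitions += 1
        # A weak completion ("done") signal means the anterior cingulate still
        # registers an error, so each repetition brings only partial relief.
        anxiety -= completion_signal * relief_per_completion
    return repetitions

print(perform_sec(completion_signal=1.0))       # healthy: stops after a couple of repetitions
print(perform_sec(completion_signal=0.2))       # deficient "done" signal: many repetitions
print(perform_sec(activation_threshold=0.05))   # lowered basal ganglia threshold: overactivation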

Conclusion

The cognitive neuroscience of OCD is still in its infancy, and much work remains to be done. However, Huey et al.’s model provides a useful paradigm for further work. It shares and integrates elements from many of the previous models: along with expanding on Szechtman & Woody, it provides an explanation of why undue importance would be attached to fleeting thoughts as per the cognitive-behavioral model (the brain is looking for an explicit source of anxiety), and it explains the possible psychological mechanisms of OCD that the standard model leaves out.


References

American Psychiatric Association (2000). Diagnostic and statistical manual of mental disorders (4th ed., text rev.). Washington, DC: American Psychiatric Association.

Fitzgerald, K., Welsh, R. C., Gehring, W. J., Abelson, J. L., Himle, J. A., Liberzon, I., & Taylor, S. F. (2005). Error-Related Hyperactivity of the Anterior Cingulate Cortex in Obsessive-Compulsive Disorder. Biological Psychiatry, 57(3), 287-294.

Huey, E. D., Zahn, R., Krueger, F., Moll, J., Kapogiannis, D., Wassermann, E. M., & Grafman, J. (2008). A psychological and neuroanatomical model of obsessive-compulsive disorder. The Journal of Neuropsychiatry and Clinical Neurosciences, 20(4), 390-408.

Maia, T. V., Cooney, R. E., & Peterson, B. S. (2008). The neural bases of obsessive-compulsive disorder in children and adults. Development and Psychopathology, 20(4), 1251-1283.

Markarian, Y., Larson, M. J., Aldea, M. A., Baldwin, S. A., Good, D., Berkeljon, A., ... McKay, D. (2010). Multiple pathways to functional impairment in obsessive-compulsive disorder. Clinical Psychology Review, 30(1), 78-88.

McKay, D., Taylor, S., & Abramowitz, J. S. (2010). Obsessive-compulsive disorder. In D. McKay, J. S. Abramowitz, & S. Taylor (Eds.), Cognitive-behavioral therapy for refractory cases: Turning failure into success (pp. 89-109). Washington, DC: American Psychological Association.

Szechtman, H., & Woody, E. (2004). Obsessive-Compulsive Disorder as a Disturbance of Security Motivation. Psychological Review, 111(1), 111-127.

Epiphenomena: can't live with 'em, can't live without 'em

Hampshire College

Philosophy of Mind

Introduction

Frank Jackson, in his paper “Epiphenomenal Qualia”, argues that we could know all the physical facts about the world yet never know of qualia, and that therefore qualia are not contained in the physical world. Furthermore, he claims that qualia are epiphenomenal: that is, they are caused by physical processes, but do not have any effect on the physical world. In this paper, I will argue that the claim that qualia are epiphenomenal destroys Jackson’s first argument, and yet that this same argument does not work without epiphenomenalism either. I will conclude by discussing the reasons I think we should continue to believe a physicalist thesis regarding qualia.

Epiphenomenalism

In the second half of his paper, Jackson argues that there is no good reason one should not accept that qualia might be epiphenomena—caused by physical processes, yet with no causal power whatsoever in the physical world. I do not believe that he establishes epiphenomenalism as a sound hypothesis.

Jackson lists three major objections that he feels philosophers often have against epiphenomenalism:

  1. “It is supposed to be just obvious that the hurtfulness of pain is partly responsible for the subject seeking to avoid pain, saying ‘it hurts’ and so on.”
  2. “According to natural selection the traits that evolve over time are those conducive to physical survival. We may assume that qualia evolved over time—we have them, the earliest forms of life do not—and so we should expect qualia to be conducive to survival. The objection is that they could hardly help us to survive if they do nothing to the physical world.”
  3. “…how can a person’s behavior provide any reason for believing he had qualia like mine, or indeed any qualia at all, unless this behavior can be regarded as the outcome of the qualia…And an epiphenomenalist cannot regard behavior, or indeed anything physical, as an outcome of qualia.”

The first objection, as Jackson phrases it, is silly, as all arguments resting on “obvious”-ness are, and so his reply to it is not very interesting for our purposes. The second objection betrays a substantial misunderstanding of evolutionary theory, and I think Jackson’s reply to it is correct[1]. The third objection, however, is very interesting, and it is in his reply to this objection that Jackson makes the argument that I now wish to deconstruct.

Jackson’s reply to this third objection is in the form of an analogy, which I will paraphrase. Suppose I read in the New York Times that the mayor of NYC is cutting funding for police. I can reasonably infer that the New York Post has also reported on this fact, even though the New York Times and the New York Post may not have any influence on each other. I can do this because I know that both papers report on issues of interest to New Yorkers, that the New York Times reporting that the mayor is cutting funding for police is a good indication that the mayor is, in fact, doing such a thing, that this is an issue of interest to New Yorkers, and that therefore the New York Post has also probably reported on it. The analogous case for qualia (which Jackson never explicitly lays out) is that, given that my nervous system produces both qualia and certain types of behaviors (analogous to how the mayor’s actions “produce” the stories in both the New York Times and the New York Post), and that your behaviors are similar to mine, I can infer that you have the sort of nervous system that would also produce qualia.

The difficulty with this argument, as Daniel Dennett (“Epiphenomenal Qualia?”) points out, comes from the fact that, according to Jackson’s account, qualia have no causal powers in the physical world. Because of this, qualia cannot influence my behavior in any way[2]: otherwise what we have is not epiphenomenalism, but interactionist dualism. For this reason, it is impossible for me to know that I myself have qualia, as their existence or non-existence would make no difference to the workings of my brain.

“Ah,” it may be replied, “but even though qualia do not have any effect on the workings of your brain, they may still have an effect on the workings of your mind, say, on your belief that you have qualia.” This is, strictly speaking, a valid move. However, it has troubling consequences: for example (referencing Dennett), if I lost my qualia, I would presumably no longer believe that I had them, but I would still act exactly as if I did. Mental states could no longer be said to influence behavior, as those mental states are potentially influenced by qualia and qualia cannot influence behavior[3]. It seems, then, that the epiphenomenalist is left with two options. One: qualia influence mental states, in which case mental states cannot influence behavior and my mind is completely severed from the physical world. Two: if we want to preserve the influence of mental states on behavior, qualia do not influence mental states, in which case it is impossible for me to know whether I myself have qualia. The first option is unpalatable (as I will discuss below), yet if the epiphenomenalist takes the second position, he/she must admit that there is no way for me to infer the existence of qualia in others, since I cannot even use myself as an example!

The Knowledge Argument

“Alright,” says the epiphenomenalist. “Let us assume your analysis is correct. If qualia have no effect on the physical world, then it is impossible for us to know whether we ourselves have qualia. However, there may be many things that it is impossible for us to know about, but which nevertheless exist. For example, there might be creatures that live beyond—and will always live beyond—the area of the universe that we can observe. It would be impossible for us to ever know about them, yet that does not change their existence.”

Against this, many would argue that, in the absence of any sort of evidence, it is useless to speak of some thing’s “existence”. There is a much larger philosophical issue at stake here. Happily, we do not need to discuss this issue now. For if qualia are truly epiphenomenal, then Jackson’s Knowledge Argument—the central part of his paper—completely falls apart.

This argument is as follows. Imagine that there is a neuroscientist, Mary, who for some reason has been forced to live her whole life inside a black-and-white room[4]. Despite this, Mary becomes a specialist in color vision and, eventually, learns everything that it is possible to know about how color vision works: complete descriptions of the physical mechanisms, the functional roles that color perception and various colors play, etc. One day, Mary’s captors decide to let her out of the room. She gets handed a tomato—the fruit of choice for released prisoners—and finds that, despite her omniscience regarding color vision, the color of the tomato is a completely new experience for her. “Aha,” she exclaims, “so this is what ‘red’, the color I know is associated with tomatoes, actually looks like!” She has learned a new fact about color that was not amongst any of the physical facts: what such color is like. This would seem to imply, then, that this fact about what red is like—the quale associated with red—is not part of the physical world.

However, this story does not make any sense if we accept that qualia are epiphenomenal. Firstly, if qualia have no influence on the physical world, it cannot be the new quale that caused Mary to make any sort of exclamation. She would have said the same thing, quale or not. This objection does not mean much, though—it can easily be countered that even though we cannot know from her actions whether or not Mary has learned a new fact, it is still the case that her subjective experience obviously now includes that fact. Perhaps we can imagine that Mary merely thought about the fact that she now had this new quale, so we don’t have to worry about its effect on her actions.

This does not help, though, as we must now ask whether Mary’s thoughts are reflected in the workings of her brain. I.e., could some sort of neuroscientific Laplace’s Demon, who knows everything about the physical structure of Mary’s brain at any time, be able to tell us what she is thinking? I think Jackson would want the answer to this to be yes. Yet, if it is true that qualia can influence Mary’s thoughts, and that thoughts are reflected by brain states, then one of two options must be chosen: either qualia can influence brain states, and are therefore not epiphenomenal, or thoughts—and this would seem to include all thoughts, not just those specifically about qualia, as any given thought could always have qualia as part of its causal history—are entirely distinct from those brain states, even though they are reflected by them. If we choose this latter option, then the causal relationship between thoughts and brain states, if one exists, must be a one-way street: thoughts are caused by brain states, but thoughts cannot influence brain states, and are, in fact, epiphenomenal. However, if thoughts are epiphenomenal, then anything that can be influenced by thoughts would have to be epiphenomenal too (otherwise, thoughts could exert some sort of influence on the physical world). This would likely include any mental state Mary might have (thoughts are, after all, quite influential in the mental world). In fact, if the entity called Mary can learn a new fact from experiencing a new quale—if Mary can be influenced by qualia—then Mary herself must be an epiphenomenon: something caused by the lump of matter that has just been released from its black-and-white room, but with no ability to influence that lump of matter in any way.

Does the Knowledge Argument work without Epiphenomenal Qualia?

We have seen that if qualia are truly epiphenomenal, and yet I can learn a new fact from experiencing new qualia, then I myself must be epiphenomenal—otherwise, qualia can influence the physical world through me. While there is no immediately apparent logical reason why this view could not, possibly, be true, I think very few would be willing to accept it. If I am epiphenomenal, then, by definition, I have no power over the physical world—not even my own body. I am merely experiencing the effects of processes over which I have no control. In my mind this consequence is a sort of informal reductio ad absurdum, and I think most philosophers would agree[5].

However, perhaps we have been too concerned with epiphenomena. Perhaps Jackson’s Knowledge Argument still works—perhaps he just made a mistake by attaching epiphenomenal qualia to it. Of course, there are numerous objections to the Knowledge Argument not related to epiphenomenalism: perhaps what Mary learns is not a new fact, but a new mode of knowing some fact—not a “what”, but a “how”, or perhaps, as Dennett points out, our reaction to the Mary story merely reflects that we cannot conceive of what it would mean for Mary to know everything physical that there is to know—perhaps, with that knowledge, she would be able to reconstruct the subjective experience of seeing red. According to Dennett’s objection, Jackson is merely begging the question by pre-supposing that knowing all the physical facts will not allow Mary to have the experience. However, I want to take a different tack. I want to argue that, without epiphenomenalism, the Knowledge Argument does not work. Of course, we also just saw that with epiphenomenalism the Knowledge Argument doesn’t work. This means that if I’m successful, the Knowledge Argument will have nowhere to go.

If qualia do exist but are not epiphenomenal (that is, they do have some sort of influence on the physical world), how does this affect the story of Mary? Let’s go back to before Mary left the room, when she knew everything physical there is to know about color vision. If the qualia associated with seeing color have a physical effect, then this effect would have to be included in Mary’s knowledge. This leads to one of two options: either qualia are non-physical, in which case Mary’s description of the physical mechanism of color vision would have to have some sort of causal hole in it—that is, she could say only that A causes B, and then B causes something or other, and that something or other causes D—or qualia are included in the physical description, in which case Mary would already know the subjective sensation of seeing red before leaving the room. Obviously, the second option leads to physicalism (although it is possible that the physical description might include things we have not yet incorporated into our neuroscience), so the first option is the one that anyone who wants to say that qualia are non-physical must choose. This option implies that the physical world is not causally closed—that is, that there are non-physical things that can nevertheless have physical effects. Now, there is no prima facie reason why this isn’t a viable option. However, there is also no prima facie reason to choose one of the two options over the other (at least, not until one considers the arguments of those such as Dennett). Once again, we are faced with the problem of begging the question: Jackson must pre-suppose that qualia would not be included in a complete physical description. Of course, those who want non-physical qualia could say that the physicalist is also guilty of question begging. The question is empirical.

Qualia of the Gaps

The question is empirical, but we must ask: can it, in fact, be answered? We move now to the realm of personal opinion. It seems likely to me that there is no possible empirical evidence that could allow us to choose between the two options until such time as we do find a physicalist description of qualia. That is, there is no way that we could determine whether the causal gap we have encountered—the place where we throw up our hands and say “well, something happens”—is, in principle, impossible to bridge. The question now becomes: what assumption shall we make when we encounter a seeming explanatory gap? Do we assume that it is intrinsically inexplicable, or do we assume that we just don’t know enough, or may not even be smart enough? I think the latter is a much more satisfactory option, as it allows and motivates us to continue searching for an answer. Furthermore, the history of science includes numerous examples of the solution of seemingly insoluble problems, problems that were declared to be permanently insoluble. Physicalism has triumphed in the past, and I believe that we can reasonably assume that it will continue to do so.


[1] Namely, that the fact that some trait exists does not mean that it was evolutionarily selected for. It may, instead, be a necessary byproduct of some other trait that was selected for.

[2] To clarify, in all following discussion I assume that epiphenomenalism defines the inability to influence the physical world as a necessary property of qualia instead of a contingent one. I.e., I take it to mean not only that qualia do not happen to influence the physical world, but that they cannot influence the physical world (at least not in our reality).

[3] At this point a particularly legalistic reader might have begun to object to my characterization of “influence”. “Qualia influence mental states, and mental states influence behavior, but you have failed to show that qualia influence behavior,” that reader may say. To this I would reply: imagine that qualia were taken out of the causal chain. Would this not change the behavior of things further down in the chain? At least one property of influence/causal power is that if in order to give a complete causal account of the behavior of B one must describe the behavior of A, then A influences B.

[4] We may suppose also that, for whatever reason, she cannot see the colors of her own body.

[5] It may be objected that I am pre-supposing the existence of free will here, which would be ironic, as I do not believe in free will. However, I think the objection is unfounded. Even though we may not have control over our volitions, it still seems very hard to hold that those volitions do not have the causal powers that we feel they do.

Extended Cognition

Hampshire College
Philosophy of Mind

Introduction

In their paper “The extended mind”, Clark & Chalmers argue that the traditional view of cognition and the mind as being “inside the skull” is incorrect. Instead, they argue, cognitive processes—and, indeed, the mind itself—are partially constituted by the surrounding environment. Clark & Chalmers’ position is more than simple externalism—the position that a consideration of external, environmental factors is important in understanding the workings of the mind—it is radical, active externalism, arguing that those external, environmental factors are part of the mind. Clark & Chalmers focus in particular on the ways that belief can be extended. In this paper, I argue that belief is not something that can, as of now[1], be extended; that memory can much more plausibly be considered extended; and I propose a criterion by which we could consider something part of the mind. The mind is extended, just not in the particular way Clark & Chalmers argue for.

Clark & Chalmers’ Argument

In their paper, Clark & Chalmers focus mostly on beliefs as their example of extended cognition. Specifically, they introduce us to Otto, a man with Alzheimer’s who has developed a system for keeping track of the various information that he would like to recall: he writes it all down in a notebook, a notebook that is reliably available to him whenever he needs it. When Otto hears that the Museum of Modern Art is having an exhibition he would like to see, and that the MoMA is on 53rd St., he writes this information down in his trusty notebook. For all intents and purposes, Clark & Chalmers argue, the content of Otto’s notebook constitutes his standing beliefs about the world. Because beliefs are part of the mind, Otto’s mind extends outside his skull; similarly, our own minds can be considered extended, since we quite often store information outside ourselves, and there is no important difference between our writing things down in a notebook and Otto doing the same (except, of course, that we do not have Alzheimer’s).

Beliefs, and Why Otto’s Notebook Doesn’t Contain Them

Clark & Chalmers make a distinction between standing and occurrent beliefs, a distinction that is useful to go over before we continue. Occurrent beliefs are the beliefs that one is conscious of at any given moment: for example, the belief that is now brought to my mind that this essay is due today. Standing beliefs are those beliefs we have that we may not be consciously aware of at this moment, but which influence our actions and may, at any time, become occurrent. It is not even necessary that we be conscious in order to have standing beliefs: my standing belief that Mars is red exists even when I am in non-REM sleep.

It is my (occurrent and standing) belief that the contents of Otto’s notebook do not satisfy the definition of standing beliefs, and that they also lack some important properties of such beliefs that are not contained in the definition (though perhaps they should be). Imagine some unfortunate person, Alicia, who has sustained brain trauma that has made it very difficult for her to access semantic information about cars. Perhaps she can’t access it at all without some form of rehabilitation[2]. Furthermore, she has no implicit memories regarding cars: she behaves exactly as one who knows nothing about them. However, she does not lack the information: with appropriate help she can access it without having to relearn it. It is simply impossible for her to access it without significant effort. Does she have any beliefs regarding cars before she is able to access the information? It is apparent that she does not.

What is it about Alicia’s case that makes it apparent that she lacks beliefs about cars? First, standing beliefs not only influence our behavior; they can do so without our being aware of them. So, for example, I do not have to consciously recall my beliefs about gravity to become concerned when some fragile object falls, even though without such a belief that situation would have no reason to concern me (if I have no beliefs regarding gravity, for all I know the object may become safely suspended in midair). Or, for another example, at any particular moment I may act as if there is or isn’t a God without consciously referring to my beliefs on the matter. (Indeed, it seems that quite often the only times we become aware of many of our standing beliefs is when they are challenged in some way: things aren’t where we expected them to be, someone argues against our basic assumptions, etc.) Alicia’s information about cars cannot do this. And the content of Otto’s notebook cannot do this either: it cannot influence Otto’s behavior without him becoming consciously aware of it. Second, and on a related note, standing beliefs can, and often do, become occurrent beliefs automatically and/or without our conscious control. For example, when someone mentions the Empire State Building, the belief that it is in New York may occur to me, quite without my willing it to. Once again, Alicia’s information about cars cannot do this and, once again, the contents of Otto’s notebook cannot do this: someone mentioning the MoMA would not cause any occurrent beliefs on the matter for Otto, unless he made a willful decision to check.

There are likely many more important differences between the sorts of things that standing beliefs can do and the sorts of things that the contents of Otto’s notebook can do. At least with regard to the two I’ve described, which seem particularly essential, the first by itself justifies rejecting the description of the contents of Otto’s notebook as “beliefs”, as those contents cannot influence behavior directly.

Memories

So while beliefs can’t be plausibly extended, does Clark & Chalmers’ thesis still hold: can the mind be extended? Is there some aspect of our self that reaches out into the world? I believe that memory fits the bill. When Otto writes down information in his notebook, what is contained in the notebook is not beliefs, but memories.

The distinction between memories and beliefs is perhaps subtle, but it is clear that there is a difference. Memories can do many of the things beliefs do that I described above: they can influence our behavior without us being aware of them, they can be recalled automatically, etc. In the case of memories, however, these properties are not in any way essential. Going back to the case of Alicia, we can ask whether she has memories regarding cars. It is apparent that she does: the memories were there, she just could not access them. What would she be re-accessing otherwise? Memories need be nothing more than the storage of information[3]. (Of course, one could ask whether, without access to information regarding cars, that information, and hence those memories, could be said to belong to Alicia. I discuss this question—that of the ownership of memories—later on.)

Otto’s, and Our, Memories

Memories seem to be much more plausibly extended than beliefs[4]. Information stored outside the body seems to serve the same functional role as memory, for the same reasons Clark & Chalmers argued that such information serves the same function as belief. If we follow Otto around, we will see that where we would commit some information to memory, he writes it down, and where we would consult our memory, he consults his notebook. It is perhaps more difficult and effortful for him to access the information in his notebook than it is for us to access our memories while in a normal state; however, it is no more difficult for him than it would be for us to access our memories if we were tired, distracted, or had suffered the same sort of brain trauma as Alicia. In fact, even in a normal state the access of our own memories can be difficult: I often find, for instance, that the name of some person is inaccessible to me (despite the fact that I know it) and that I have to perform some sort of effortful memory search to re-access it. There is perhaps a qualitative difference in our memories: whereas my memories “feel” like information that I knew all along, the contents of the notebook would not. But this only reflects the fact that we are accessing the memories through different modalities.[5]

Whose Memories?

In this discussion, the question may arise: if memories can be stored outside of the body, what makes those memories one person’s instead of another’s? Or, furthermore, what makes some information I’ve written down my memory, instead of just a record of information? I think this is the wrong question to ask. It assumes that my memories are encased in a unitary shell, that one person’s memories cannot be another’s, and that something either is my memory or it isn’t. I propose, instead, as an object of further study, that there is a degree to which a memory can be considered to belong to a particular individual. This degree is determined by a combination of two properties, neither of which is necessary in and of itself, although at least one, or some combination of them, must be present: first, how easily is the information recalled by that particular person, and second, how important is the information to the identity of that person? By the first criterion, anything stored “in the head”, which when recalled has the qualitative sense of being “mine”, is indisputably my memory, regardless of how relevant it may be to my identity; and by the second criterion, a record of my life, sealed away so that even I cannot review it, would also constitute my memory. These are the extreme examples—most memories share both properties to some extent. In Otto’s case, the information in his notebook serves as his memory because of its reliability and ease of access, and also because without his notebook he would likely feel that he had lost an important piece of himself. [6]
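
To make the proposal slightly more concrete, here is a toy sketch of how such a graded notion of ownership might be scored. The scoring scheme is entirely my own illustrative assumption, not a worked-out theory: scores run from 0 to 1, and the combination is chosen so that either property alone can suffice while partial amounts of both can add up.

# A toy sketch of the proposed graded ownership of memories. The 0-to-1
# scores and the "noisy-OR" style combination are purely illustrative choices.
def ownership_degree(ease_of_recall, importance_to_identity):
    # Either property alone can make a memory count as "mine", and partial
    # amounts of both reinforce each other.
    return 1 - (1 - ease_of_recall) * (1 - importance_to_identity)

# Trivially recalled "in the head", however unimportant to my identity:
print(ownership_degree(ease_of_recall=1.0, importance_to_identity=0.1))  # 1.0
# A sealed record of my life I cannot review, but central to who I am:
print(ownership_degree(ease_of_recall=0.0, importance_to_identity=0.9))  # 0.9
# Otto's notebook: reliably accessible and important to his sense of self:
print(ownership_degree(ease_of_recall=0.8, importance_to_identity=0.8))  # 0.96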

But What of the Mind?

Perhaps, at this point, the reader will object that while memories may be extended, they are not aspects of the mind; therefore, the mind is not extended. Perhaps the mind is only those operations we perform on our memories; after all, there are many sources of information that the mind draws from in order to function that are not themselves part of the mind—sensory data, for instance, or, if you feel that the senses are part of the mind, the content of that data—and perhaps memory is such a source. To this, I would reply that in my formulation my “mind” is identical with my “self”, and it is our memories that make us who we are: if you remove some memory that is important to my conception of self, I would no longer be the same person, and, therefore, I am not of the same mind. Therefore, memories are part of the mind.

I think the difficulty lies in thinking of “the mind”, of the self, as one unitary thing that, if extended, is extended in its entirety. Instead, I propose, as a question for further study, that the mind is made up of several different parts. Our conscious awareness is one such part, but our memories are also important. This non-unitary structure explains how outside sources of information can be memories and part of the mind, even though our recall of them is qualitatively different from that of memories stored “in the head”: the conscious aspect of our mind recalls them in different ways. Of course, now we need to ask how we determine what, exactly, can be considered part of the mind. There are many important factors that allow us to think—the beating of our heart, etc. How do we determine which of these should be considered part of the mind? I propose that the factors that are part of our minds are those that contribute functionally to the workings of the mind in such a way that they cannot be replaced without changing the identity of that mind—regardless of whether those factors are inside the head or not.



[1] I don’t consider it at all impossible that beliefs could be extended through the use of neural prosthetics in the future, as long as they satisfy the requirements listed later on in this paper.

[2] That, for the purposes of this thought experiment, does not include relearning the information.

[3] Although one can argue about whether and what certain types of stored information can be considered “memories”. I do not discuss this here.

[4] Furthermore, it seems much more natural to extend them this way: colloquially, we talk about a book of recollections, or a box full of memories, or even of places that remind us of the past as somehow containing the memories themselves (“this house is full of memories”), in a way that we do not for beliefs. While this fact about the natural usage of terms does not show that it is in fact wrong to speak of beliefs as being extended (although, as seen earlier, it seems that it is), it does lend support to that assertion, and it makes it even more likely that memories can be plausibly extended, as the idea does not seem too far from our natural intuitions about the use of our terms.

[5] One objection could be raised related to recent research on memory: it seems that when we recall memories, we re-imagine and rewrite them. Otto does not do this with the contents of his notebook (or, at least, it isn’t necessarily part of the process); therefore the contents are not memories. I would reply by saying that this fact about how memory happens to work in no way describes an essential property: it could have turned out that the recollection of memories did not work this way, yet we would still call it memory.

[6] It is also possible for entities other than individuals to have memories: collective memory is allowed for, if the information is easily accessed by and/or important to the identity of the people as a collective.

Mind-Brain Identity

Hampshire College

Philosophy of Mind


What is identity theory? In this paper, I explain it and explain why it is flawed: namely, it does not account for multiple realization.
Identity theory is the metaphysical doctrine that, while epistemically distinct, brain states and mental states are ontologically identical. By this I mean that mental states and brain states are the same thing: the only difference is in how one perceives, thinks about, and describes this thing. The fact that brain states and mental states are identical is an empirical observation: it is not known a priori. An analogy that will perhaps make this whole idea clearer is the case of lightning, and its identity with electrical discharge. It is known that lightning is electrical discharge; however, the way we perceive, think about, and describe lightning is completely different from the way we perceive, think about, and describe electrical discharge. Furthermore, we would not know that they are the same thing if it had not been experimentally confirmed (à la Benjamin Franklin and his kite). Similarly, evidence from cognitive neuroscience leads to the hypothesis of identity. Identity theory is a reductionist theory; it posits that the description of any higher-level phenomenon can be reduced to a description of lower-level phenomena without a loss of information, i.e., mental states can be reduced to brain states, which, being physical states, are further reducible to the laws of physics.

Identity theory is, furthermore, a theory of type-identity: it claims that any particular mental state type (such as pain) is reducible to a particular brain state type (such as C-fibers firing)—indeed, that it must be that brain state type, just as lightning must be electrical discharge. This is distinguished from token-identity, which claims merely that any instance of some mental state is reducible to some sort of brain state. Type-identities, though not without their difficulties, are much easier to clearly define than token-identities. Take, for example, the claim that “pain is C-fibers firing”. If something is pain, it is C-fibers firing, and if it is C-fibers firing, it is pain—plain and simple. Token identities are fuzzier: for example, what is it about this object in my hand that makes it a mug? If we want strict definitions of our token-identities, we must be able to say that those things, and only those things, that satisfy a certain set of criteria (has a handle, holds liquid, etc.) are tokens of a certain type. We must either posit functional types—i.e., that all tokens that are representative of the type X have the particular combination of functions Y, and that all things that have the particular combination of functions Y are X—or physical types—all tokens that are representative of type X share a particular physical trait Y, and all things that share the physical trait Y are X.

The advantages of a reductionist theory like type-identity theory are apparent: it gives our experience a sort of explanatory, causal, and even ontological coherence and closure. When things are reduced, the method of explanation for any particular phenomenon is no longer different from the method of explanation for any other phenomena: they can both be explained in terms of the same underlying process. Causally, reductionism implies that there are not separate chains of causality for different phenomena, i.e. that there are not mental processes going on according to their own laws of causality separate from the physical processes and their laws of causality. And ontologically, reductionism reduces the number of posited kinds of entities (minds, heat, etc.) down to, potentially, one.

There is, however, a problem, which may be fatal for identity (or, specifically, type-identity) theory, and, indeed, for reductionism in general—multiple realization, i.e., the fact that the same higher-level system can be implemented on entirely different underlying physical systems. For example, consider an algorithm (a set of rules for how to perform some function). Let’s say the rules of this algorithm are to take an input number and add 2 to it. This algorithm can be run on multiple, almost incomparable physical systems. For example, I can do it in my head, I can do it with pen and paper, I can design a contraption out of Tinkertoys to do it, I can create a program on my calculator to do it, it can be done on a Mac, and it can be done on a PC. In fact, there is a potentially infinite number of ways that this one algorithm could be implemented, with nothing physically similar about any of them.
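
To make this concrete, here is a small sketch of the same add-2 algorithm realized in three ways whose inner workings have nothing in common, even within a single programming language. The choice of Python and of these particular implementations is mine, purely for illustration.

# Three very different realizations of the same algorithm: "take an input
# number and add 2 to it". Chosen only to illustrate multiple realization.

def add_two_arithmetic(n):
    return n + 2                     # direct arithmetic

def add_two_by_counting(n):
    result = n
    for _ in range(2):               # repeated incrementing, like counting on
        result = result + 1          # fingers or moving beads on an abacus
    return result

ADD_TWO_TABLE = {i: i + 2 for i in range(1000)}

def add_two_by_lookup(n):
    return ADD_TWO_TABLE[n]          # a precomputed lookup table (works only
                                     # for the inputs it was built for)

# Identical input-output behavior, nothing structurally similar inside:
assert add_two_arithmetic(5) == add_two_by_counting(5) == add_two_by_lookup(5) == 7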

Similarly, there seem to be multiple ways in which a mental event, for example seeing green, can be correlated with brain events—after all, the way that you see green might be quite different from the way I see green. In fact, in order for the description “seeing green” to be useful, it must describe these multiple ways: otherwise, the only thing that could be described as truly seeing green would be me, as everyone’s brain is different and I am the only person with exactly the physical brain state that I have when seeing green! The case of C-fibers being pain would seem to be easier: C-fibers are, after all, a particular type of neuron shared by us all. But if pain is C-fibers firing, does that mean that a petri dish full of C-fibers and nothing else would experience pain? If there is any conclusion we do not want our theory of mind to entail, it is this. The same would be true of any mental event: if one attempts to identify it with some very specific thing (the activities of a type of cell, the presence of a chemical, etc.), one must claim that that thing, all by itself, is the said mental event, a claim which leads to absurdity. Pain, instead, seems to be identified with various events interacting amongst different parts of the brain, the specific organization of which is once again going to be less and less similar between any two people. The problem becomes even more difficult when considering the supposed mental lives of non-humans. Monkeys, dogs, reptiles, etc., have brains that are physically more or less different from ours. Identity theory would seem to suggest that these creatures cannot have the same sort of mental events we do[1]. And these creatures at least are biologically similar to us: what about computers or aliens?

In fact, everything that has mental states would seem to have to have distinct types of mental states! Our ontologically simple theory has failed us: instead of pain, for example, we now have my pain, your pain, Sue’s pain, dog pain, alien pain, computer pain, etc. Instead of simplifying our ontology, identity theory requires us to posit a new type of mental state for each possible case. This is no good—we would like a theory of mentality that not only explains the nature of the mental states we happen to have, but that also has some degree of generalizability, allowing us to say something about the mental states of others in a way that relates such states to our own. Identity theory has very weak explanatory power in this regard—it would be as if we had a theory of biology that didn’t allow us to make comparisons between different species.

Is there a solution wherein we can avoid dualism, keep physicalism, and not run into these problems? I believe so, and I believe the answer lies in the sort of functional type-identity theory that I glossed over earlier. Let’s go back to the algorithm: I said that it was the same algorithm regardless of the physical implementation, but what makes this the case? It is that each implementation performs the same function. Similarly, a mental event is a sort of function, processing information in a certain way. While there may be difficulties with this theory, it seems to me to be one worth pursuing.
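
To gesture at what this functional alternative amounts to, here is a rough sketch: a mental-state type is picked out by its causal role (what inputs it responds to and what outputs it produces), so anything that plays that role counts as being in that state, whatever realizes it. The class names and the toy "role" below are my own illustrative inventions, not anyone's worked-out theory of pain.

# A toy illustration of functional type-identity: "pain" is whatever maps
# tissue damage onto avoidance and report behavior, regardless of what
# physically realizes it. Both realizers below are invented for illustration.

class HumanPain:
    def respond(self, damage_detected):
        # realized by neurons, say
        return {"avoid": damage_detected, "report": "ouch" if damage_detected else None}

class MartianPain:
    def respond(self, damage_detected):
        # realized by hydraulics, say, but playing the same causal role
        return {"avoid": bool(damage_detected), "report": "ouch" if damage_detected else None}

def plays_pain_role(state):
    """Functional criterion: avoid when damaged, do nothing otherwise."""
    return state.respond(True)["avoid"] and not state.respond(False)["avoid"]

print(plays_pain_role(HumanPain()), plays_pain_role(MartianPain()))   # True True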



[1] I must qualify this statement, because identity theory has, occasionally, been used to argue for the exact opposite. If we identify pain with C-fibers firing, and another creature has C-fibers, then it feels pain: we can therefore infer that animals such as, say, cows experience the same kind of pain we do, as the physiology is quite similar. This argument, unfortunately, does not hold under scrutiny since, as said earlier, pain seems to be identified with the interaction of several parts of the brain, which is going to be very different between us and other creatures.

Logic and Mathematics

Hampshire College
Low-Tech Computing

Mathematics and logic are, firstly, systems whose original purpose for existence is the same: to provide a system for more effectively operating within our world. Both, as they began, did so through a codification of common sense. They quantified our experience and defined what operations we could apply to those quantities so that we could systematically find facts of which we were previously unaware. The distinction between the two was in what they attempted to quantify: mathematics was primarily concerned with objects in the world, such as money, land, and later the laws of physics, whereas logic was primarily concerned with concepts in the mind, such as propositions and categories (although those concepts often related to objects).

However, despite their different foci, logic and mathematics were based on a common method: deduction, that is, a system of rules which you can apply in such a way as to arrive at a conclusion which is both new and necessarily true (pg. 99). As such, it was almost inevitable that the two would meet. One of the earliest examples of this meeting is probably Euclid’s “Elements” (pg. 324), but the true synthesis came with the work of Boole and Frege, where it was shown that logic could be dealt with mathematically, and that one could attempt to build mathematics on a logical foundation (pg. 329-330).

The question now raises itself: is mathematics a branch of logic, or logic a branch of mathematics? The logicists, such as Frege and Russell, believed the former; the intuitionists believed the latter (pg. 328). I believe both are wrong. Just as philosophy is not a branch of logic, but instead logic is the method by which we do philosophy, so too is mathematics not a branch of logic—logic is the method by which we do mathematics, in the construction of theorems and proofs. Because of this, though, logic cannot be considered a branch of mathematics: while it is a mathematical system that works in a way particular unto itself and distinct from other areas of mathematics, it nevertheless permeates the entire structure of the enterprise. If mathematics is a tree with branches, logic is how the tree grows.

However, it is here that we run into our difficulty. Because while logic is how the tree grows, logic won’t necessarily make it grow the way we want. Logic and mathematics share the same problem: the way they allow us to systematically analyze the world is by turning that world into symbols, which are then manipulated according to a strict set of rules. This is formalism, which attempts to avoid errors due to flaws in human intuition by making logic and math completely devoid of meaning. But as long as the symbols, statements, and rules aren’t mutually inconsistent, you can come up with whatever rules you like and make whatever statements you like. Yet only some of these rules and statements will give you a system that provides an accurate description of the world—which is, after all, the original purpose of mathematics and logic. The test of a logical or mathematical system’s truth would seem to be, then, concurrence with the actual world, with experience. But if this is the case, then why should we try so hard to logically prove that, for example, 1+1=2, as Russell and Whitehead did over hundreds of pages? To do so is to use a system whose truth is based on experience to prove something that, according to experience, we already know to be true!
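
For a sense of what such a proof looks like, here is the core of the derivation in compressed, modern Peano-style notation (my own condensed sketch, not Russell and Whitehead's actual system). Define 1 = S(0) and 2 = S(S(0)), where S is the successor function, and take as axioms a + 0 = a and a + S(b) = S(a + b). Then:

  1 + 1 = 1 + S(0)     (definition of 1)
        = S(1 + 0)     (axiom: a + S(b) = S(a + b))
        = S(1)         (axiom: a + 0 = a)
        = S(S(0))      (definition of 1)
        = 2            (definition of 2)

Every step is pure symbol manipulation under the stated rules; whether those rules and definitions track the world is exactly the question raised above.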

I must admit that I don’t know the answer, and furthermore that I don’t know enough about mathematics to even know whether I’m asking the right questions. I can say, however, that I have a very strong conviction that we should attempt to prove as much as we can starting from the fewest and simplest, most obvious assumptions. I suppose this really is the maxim of all the work in philosophy, mathematics, and science that has allowed those fields to flourish: to not take anything for granted, unless one absolutely has to.