About Me

19 years old. Homeschooled, then went to a community college instead of high school. Currently at Hampshire College. http://www.facebook.com/NamelessWonderBand http://myspace.com/namelesswondermusic http://youtube.com/namelesswonderband http://twitter.com/NamelessWonder7 http://www.youtube.com/dervine7 http://ted.com/profiles/778985

Monday, September 19, 2011

Some moral propositions

Just a quick attempt to sketch out some moral views:

  • Actions in and of themselves can't be good or bad.
  • What makes them good or bad are their consequences/logical implications.
  • Actions can be "intrinsically wrong" if they necessarily imply or lead to bad consequences. For example, murder is intrinsically wrong, since it necessarily implies the unjustified death of someone.
  • Actions can also be "contingently wrong" if, in practice, they lead to bad consequences, but don't necessarily. For example, incest is only contingently wrong: in most circumstances it would lead to problems, but it is possible for people to engage in incest without any repercussions.
  • A lot of confusion in thinking and discussions about morality comes from not properly distinguishing between the intrinsically wrong and the contingently wrong.
NOTE: I'm seriously rethinking my statements in this post. More to come.

Saturday, July 9, 2011

One statement that never ceases to frustrate me is "I'm fine with so and so believing X, I just wish they wouldn't get in everyone's faces about it." For example, "I'm fine with vegetarians, I just wish they wouldn't get in my face about it." Well, it would certainly seem to me that if one believes that killing animals for food is murder they should be getting in everyone's face about it. To stand idly by while people continue to benefit from murder would be gross neglect of one's moral duty, akin to ignoring the Holocaust when you know it to be deeply wrong.

Take another example. "I'm fine with such and such a religious group, I just wish they would stop trying to convert me." Once again, this ignores the fact that, because of the very nature of someone's beliefs, for them to not try to convert you would be to neglect their moral duty. After all, they believe that if you are not converted you will suffer, which means, in fact, that not attempting to convert you would not only be neglecting their moral duty but, specifically, neglecting their moral duty towards you. To not try to convert you would be akin to letting you drown when they had an opportunity to try to rescue you. Similarly, it is absurd to expect the religious to keep their faith separate from their politics: if one believes, for example, that homosexuality is a sin and dangerous to our nation's spiritual well-being, it would be negligent for them to not try to combat it.

Now, one can argue whether the killing of animals for meat does constitute murder*, or whether such and such is the path to salvation, or whether homosexuality is a sin. But this is exactly the issue that is at stake: whether or not someone's beliefs themselves are sound.


*incidentally, I myself am conflicted on this point and, as such, have not become a vegetarian.

Wednesday, June 29, 2011

Born This Way

So one subject that keeps getting discussed in the gay marriage debate is whether or not homosexuality is innate. Gay advocates argue that if it is (and the evidence seems to be that it is), then homosexuals are being discriminated against for something they did not choose.
However, it has always seemed to me that this question - of whether homosexuality is innate - is a red herring. In fact, while it's an interesting scientific question, it has no bearing on whether or not homosexuals have the right to engage in their preferred lifestyle.
Suppose (as is almost certainly the case) that homosexuality is innate. Why should this give homosexuals the right to engage in their lifestyle? After all, many mental disorders are innate, but that doesn't mean that a psychopath has the right to go out and kill people. Someone who has a mental disorder that causes them to engage in immoral behavior has one of two options: either overcome the disorder, or be removed from society. This is regardless of whether the disorder is curable or not. And one can't say "God made me this way", because God made people with destructive mental disorders too.
And what if it isn't innate? Well, neither are most lifestyles people choose. We aren't born, for example, into a certain religious lifestyle: we choose it (or are forced into it). But as long as a lifestyle isn't hurting others, it does not concern them (obviously, there ARE some cases where a lifestyle might be hurting the person engaged in it in a way that is of concern to society as a whole). This is the important point. The question isn't whether one was born homosexual or chose the lifestyle: it's whether people have the right to behave the way they would like as long as it doesn't hurt others (offense doesn't count). This is why I'm an ally.

* * *

While I'm on the subject of bad arguments for causes I support, let's talk about abortion. I am pro-choice in a fuzzy way: I believe that the abortion of, say, a week-old embryo is completely blameless regardless of circumstances, partial-birth abortions are immoral unless for extreme medical reasons, and the time in between is one big gray fuzzy mess. I discuss my views in detail here.
So, obviously, I do not agree with the pro-lifers. However, I think the argument that a woman has the right to do what she wants with her body completely misses the point. It is the pro-lifers who are actually talking about what matters in this case: namely, whether the fetus is a human life. My belief is that it isn't (until fairly late in its development), and this is why I think abortion is OK. But if it is, then it seems to me that a woman's right to choose becomes questionable at best. It seems intuitively plausible that one's right to choose what to do with one's body stops at the point where that choice entails killing another human being. We could justify abortion on utilitarian grounds, arguing that the psychological and physical distress of carrying and giving birth to the child outweighs the value of its life (which is how I'd look at it): but even in this case the woman's right to choose is only considered relative to the value of the human life to be destroyed.

Friday, June 17, 2011

Blah blah blah

So, first, I totally failed in regards to the whole "keeping up with this blog" thing. Obviously I tried to pretend that I wasn't failing by posting my papers from college, but...that's cheating. I know this now.

Anyway, I've been thinking about the following:
Understanding. I may end up doing my Div III (senior dissertation, for all you non-Hampshire people) on it. Here's the problem: what does it mean to "understand" something?

It is not an intentional state, since it doesn't correspond to anything. ("Intentional state" is a fancy philosophical way of saying "thought," although it specifically refers to thoughts about things in the world: for example, when I think to myself "The Eiffel Tower is in Paris" [as I am wont to do], the thought is about the Eiffel Tower. Beliefs are also intentional states, since if you believe something, you believe something about something: once again, my belief that the Eiffel Tower is in Paris is about the Eiffel Tower.) That is, when I say "I understand the theory of relativity," I'm not making a statement about anything out there in the world.

So maybe "understanding" is a subjective sensation. But no! We can't be wrong about our subjective sensations, yet it seems we can be wrong about whether we do, in fact, understand something: we can think we've understood something without understanding it at all. And is the converse possible? Can we fail to think we understand something when in fact we do?

Why is any of this important? Because it's at the core of almost all arguments: we have to understand something in order for it to mean anything.

Thursday, June 2, 2011

Meat n' Stuff

So I was thinking about all the commercials for dog food where they're like "it's made with fresh ingredients," "your dog knows it's delicious," etc., and they show beautiful images of fresh ingredients. I decided there should be a dog food brand called "Meat n' Stuff", which is exactly that: low-quality meat and various nutrients that your dog needs to survive. Here are some slogans I came up with (NSFW):

Meat n' Stuff: because your dog doesn't give a fuck
Meat n' Stuff: because your dog is eating it, not you
Meat n' Stuff: for crying out loud, your dog likes to eat its own shit

Thursday, May 19, 2011

Did I lock the door? The cognitive neuroscience of Obsessive-Compulsive Disorder

Hampshire College
Brain and Cognition

Introduction

Obsessive-Compulsive Disorder (OCD) is a common disorder that appears in both children and adults. It consists of obsessions, compulsions, or both. Obsessions are recurring thoughts or impulses that are intrusive and unwarranted by the environment (but still perceived as originating from the patient’s own mind, distinguishing them from “thought-insertion”), causing distress or anxiety; compulsions are repetitive behaviors the obsessive-compulsive feels he/she must perform. Compulsions are often related to the obsessions, and are meant to neutralize the distress those obsessions cause. Some of the most common obsessions include germaphobia, fear of causing harm to oneself or to others, and worrying that important tasks have been left undone (for example, being unsure whether one has locked the door); common compulsions include hand washing, counting, and checking. The patient is usually aware of the absurdity of their thoughts and actions, but nevertheless feels powerless to stop them (American Psychiatric Association, 2000).
In this paper, I will discuss the neurocognitive findings regarding OCD, particularly as they relate to the SEC/OCD model proposed by Huey et al (2008). I will begin with an overview of some of the brain studies of OCD, moving on to some popular models of the disorder. Finally, I will discuss the SEC/OCD model.

The Brain Regions Implicated in OCD and their Functions

Despite copious amounts of research, the precise neuropsychology of OCD is uncertain (Markarian et al, 2010). Different studies often provide different and even contradictory results. Despite this, there are some generally agreed upon neuroanatomical features of OCD. Specifically, it has been consistently found that obsessive-compulsives reveal anatomical and functional abnormalities in the orbitofrontal cortex and the anterior cingulate cortex, and also the basal ganglia, prefrontal cortex, and thalamus (Markarian et al, 2010; Huey et al, 2008).

Several studies have reported reduced bilateral orbitofrontal cortex volumes in obsessive-compulsives, and there seems to be a correlation between greater reductions and worse symptoms (Maia, Cooney, & Peterson, 2008). The orbitofrontal cortex seems to be involved in reward learning and emotions, and the regulation of complex behavior. It has been found that the orbitofrontal cortex responds to reward stimuli, but not when the desire for the stimuli has been satiated. Macaques with orbitofrontal damage have difficulty learning the reward value of stimuli, and are slow to change their behavior when reward conditions change (Huey et al, 2008).

There is evidence of increased anterior cingulate cortex activation in obsessive-compulsives (Fitzgerald et al, 2005). The anterior cingulate cortex is implicated in decision-making. In particular, it acts as a conflict and error detector, responding when there is a discrepancy between expected and actual events. Activation of the anterior cingulate cortex seems to lead to negative emotional states, such as anxiety (Huey et al, 2008).

It has been found that damage to the basal ganglia through infection can lead to obsessive-compulsive symptoms. The basal ganglia seem to play an important role in the generation and regulation of motor activities. Specifically, it has been suggested that the basal ganglia acts as a sort of gate for motor signals, facilitating certain motor actions while suppressing others (Huey et al, 2008).

The thalamus shows higher activation in subjects with OCD compared to controls, and has been reported to be larger in obsessive-compulsives (Maia, Cooney, & Peterson, 2008; Huey et al, 2008). The thalamus seems to be a gateway for interactions between brain areas involved in OCD (Huey et al, 2008).

Some Models of Obsessive-Compulsive Disorder

Based on behavioral and neuroanatomical evidence, there have been numerous models proposed for OCD.

The Cognitive-Behavioral Model: According to this model, OCD arises from dysfunctional beliefs regarding the importance of thoughts. Almost everyone has had intrusive thoughts that are perceived as inappropriate: for example, the fleeting, unwarranted, and unwanted mental image of stabbing a loved one with a knife. Healthy subjects recognize this as just meaningless junk in the stream of consciousness. However, obsessive-compulsives, according to the cognitive-behavioral model, incorrectly view these thoughts as highly significant—for example, as evidence that one will, in fact, lose control and stab someone. Because of the importance attached to these thoughts, they develop into obsessions, and compulsions arise as a way to attempt to get rid of these unwanted thoughts and/or neutralize the danger associated with such thoughts. These compulsions are then reinforced by the temporary reduction in anxiety they cause and the fact that they prevent the obsessive-compulsive from learning that harmful consequences will not arise as the result of their thoughts (McKay & Abramowitz, 2010). This model has gathered a lot of support (with some exceptions), but does not provide a neuropsychological explanation.

The Standard Model: The standard neuroanatomical model of OCD proposes that the disorder arises from dysfunction of elements of a prefrontal cortex–basal ganglia–thalamic loop. This model is consistent with most of the data collected on the neuroanatomy of OCD and forms the basis of many subsequent models. However, it does not explain the psychological mechanisms of OCD (Huey et al, 2008).

The “Feeling of Knowing” Model: Szechtman & Woody (2004) propose a model wherein OCD is caused by an inability to create the “feeling of knowing” that a task has been completed. Specifically, they argue that the common symptoms of OCD—washing, checking, fear of causing harm, etc.—are those that evolutionarily would have been related to the security of the organism and its fellows. This need for security leads to the evolution of a security-seeking system. Because there are no external stimuli that indicate the completion of a security-seeking task (for example, there could always be a predator that the animal has missed), the completion of such tasks is indicated by an internally generated “feeling of knowing”. Obsessive-compulsives have a deficit in generating this subjective sensation, leading to the odd phenomenon in which the obsessive-compulsive is perfectly aware, objectively, that, for example, their hands are perfectly clean, but they do not feel clean, leading to further washing.

The SEC/OCD Model

Huey et al’s (2008) model of OCD expands on Szechtman & Woody’s model and also on their own earlier work, where they propose that complex behaviors with beginnings, middles, and ends are stored in the prefrontal cortex in the form of Structured Event Complexes, or SECs. For example, the knowledge of how to correctly eat dinner at a restaurant—finding a seat, ordering, eating, paying the bill, leaving—would be an SEC. SECs are usually implicitly recalled, and in this respect are similar to procedural memory. SECs are stored when a complex sequence of behavior leads to a reward, in order that such a sequence may be repeated. As evidence of these SECs, patients with damage to the prefrontal cortex have often reported difficulty with ordering and sequencing events and actions.

Just as the completion of an SEC can be rewarding, so too can the inability to complete an SEC feel punishing. Furthermore, there are SECs that are themselves punishing but which bring about the removal of punishment: for example, few people feel good about doing their taxes, but most are relieved when their taxes are finally done.

In the SEC/OCD model, it is proposed that the initiation of an SEC is accompanied by a motivational signal, experienced as anxiety, encouraging the animal to complete the SEC. In healthy subjects, the completion of the SEC is accompanied by a reward signal. According to the model, obsessive-compulsives have a deficiency in this latter process. Although the SEC is complete, the obsessive-compulsive does not have the sensation that it is done. This leads the anterior cingulate cortex to produce an error signal. The orbitofrontal cortex responds to this error as punishment, leading to a feeling of anxiety. This feeling is unconscious, leading the obsessive-compulsive to attempt to assign an explicit cause to it. This interpretation forms the basis of an obsession. The compulsion is caused by the fact that the completion of an SEC, for example hand washing, gives the obsessive-compulsive only partial relief, so that they feel they must repeat the SEC.

In regards to the basal ganglia, Huey et al suggest that just as it facilitates some motor actions while suppressing others, so too does it gate SECs. It is proposed that the basal ganglia sets thresholds for the activation of SECs, and when this threshold is lowered, for example by damage, it can lead to the overactivation of SECs, causing symptoms of OCD.
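The "partial relief" loop at the heart of this model can be illustrated with a toy simulation. To be clear, this is my own cartoon of the idea, not anything proposed by Huey et al, and the numbers are made up: the agent repeats an SEC (say, hand washing) until accumulated relief crosses a threshold, and a deficient per-completion signal is all it takes to produce repetitive behavior.

```python
# Toy illustration (mine, not from Huey et al): an SEC normally ends with a
# completion/reward signal that clears the anxiety which motivated it. If
# that signal is deficient, the agent repeats the SEC compulsively.

def run_sec(completion_signal, relief_threshold=1.0, max_repeats=10):
    """Repeat an SEC until accumulated relief crosses the threshold,
    or give up after max_repeats. Returns the number of repetitions."""
    relief = 0.0
    repeats = 0
    while relief < relief_threshold and repeats < max_repeats:
        repeats += 1
        relief += completion_signal  # each completion gives partial relief
    return repeats

print(run_sec(1.0))   # healthy: a single completion is enough -> 1
print(run_sec(0.25))  # deficient completion signal -> 4 repetitions
```

This is only a sketch of the partial-relief loop described above; it has no empirical content, but it makes vivid how a single deficient signal can, by itself, generate repetition.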

Conclusion

The cognitive neuroscience of OCD is still in its infancy. Much work is still to be done. However, Huey et al’s model provides a useful paradigm for further work. It shares and integrates elements from many of the previous models: along with expanding on Szechtman & Woody, it provides an explanation of why undue importance would be attached to fleeting thoughts as per the cognitive-behavioral model (the brain is looking for an explicit source of anxiety), and explains the possible psychological mechanisms of OCD that the standard model leaves out.


References

American Psychiatric Association (2000). Diagnostic and Statistical Manual of Mental Disorders (4th ed., text rev.). Washington, DC: American Psychiatric Association.

Fitzgerald, K., Welsh, R. C., Gehring, W. J., Abelson, J. L., Himle, J. A., Liberzon, I., & Taylor, S. F. (2005). Error-Related Hyperactivity of the Anterior Cingulate Cortex in Obsessive-Compulsive Disorder. Biological Psychiatry, 57(3), 287-294.

Huey, E. D., Zahn, R., Krueger, F., Moll, J., Kapogiannis, D., Wassermann, E. M., & Grafman, J. (2008). A psychological and neuroanatomical model of obsessive-compulsive disorder. The Journal of Neuropsychiatry and Clinical Neurosciences, 20(4), 390-408.

Maia, T. V., Cooney, R. E., & Peterson, B. S. (2008). The neural bases of obsessive-compulsive disorder in children and adults. Development and Psychopathology, 20(4), 1251-1283.

Markarian, Y., Larson, M. J., Aldea, M. A., Baldwin, S. A., Good, D., Berkeljon, A., ... McKay, D. (2010). Multiple pathways to functional impairment in obsessive–compulsive disorder. Clinical Psychology Review, 30(1), 78-88.

McKay, D., Taylor, S., & Abramowitz, J. S. (2010). Obsessive-compulsive disorder. In D. McKay, J. S. Abramowitz, & S. Taylor (Eds.), Cognitive-behavioral therapy for refractory cases: Turning failure into success (pp. 89-109). Washington, DC: American Psychological Association.

Szechtman, H., & Woody, E. (2004). Obsessive-Compulsive Disorder as a Disturbance of Security Motivation. Psychological Review, 111(1), 111-127.

Epiphenomena: can't live with 'em, can't live without 'em

Hampshire College

Philosophy of Mind

Introduction

Frank Jackson, in his paper “Epiphenomenal Qualia”, argues that we could know all the physical facts about the world and yet never know of qualia, and that therefore qualia are not contained in the physical world. Furthermore, he claims that qualia are epiphenomenal: that is, they are caused by physical processes, but do not have any effect on the physical world. In this paper, I will argue that this claim that qualia are epiphenomenal destroys Jackson’s first argument, and yet that this same argument does not work without epiphenomenalism. I will conclude by discussing the reasons I think that we should continue to believe a physicalist thesis regarding qualia.

Epiphenomenalism

In the second half of his paper, Jackson argues that there is no good reason that one should not accept that qualia might be epiphenomena—caused by physical processes, yet with no causal power whatsoever in the physical world. I do not believe that he establishes epiphenomenalism as a sound hypothesis.

Jackson lists three major objections that he feels philosophers often have against epiphenomenalism:

  1. “It is supposed to be just obvious that the hurtfulness of pain is partly responsible for the subject seeking to avoid pain, saying ‘it hurts’ and so on.”
  2. “According to natural selection the traits that evolve over time are those conducive to physical survival. We may assume that qualia evolved over time—we have them, the earliest forms of life do not—and so we should expect qualia to be conducive to survival. The objection is that they could hardly help us to survive if they do nothing to the physical world.”
  3. “…how can a person’s behavior provide any reason for believing he had qualia like mine, or indeed any qualia at all, unless this behavior can be regarded as the outcome of the qualia…And an epiphenomenalist cannot regard behavior, or indeed anything physical, as an outcome of qualia.”

The first objection, as Jackson phrases it, is silly, as all arguments resting on “obvious”-ness are, and so his reply to it is not very interesting for our purposes. The second objection betrays a substantial misunderstanding of evolutionary theory, and I think Jackson’s reply to it is correct[1]. The third objection, however, is very interesting, and it is in his reply to this objection that Jackson makes the argument that I now wish to deconstruct.

Jackson’s reply to this third objection is in the form of an analogy, which I will paraphrase. Suppose I read in the New York Times that the mayor of NYC is cutting funding for police. I can reasonably infer that the New York Post also reports on this fact, even though the New York Times and the New York Post may not have any influence on each other. I can do this because I know that the New York Times and the New York Post both report on issues of interest to New Yorkers, that the New York Times reporting that the mayor is cutting funding for police is a good indication that the mayor is, in fact, doing such a thing, that this is an issue of interest to New Yorkers, and that therefore the New York Post has also probably reported on this fact. The analogous case for qualia (which Jackson never explicitly lays out) is that given that my nervous system produces both qualia and certain types of behaviors (analogous to how the mayor’s actions “produce” the stories in both the New York Times and the New York Post), and that your behaviors are similar to mine, I can infer that you have the sort of nervous system that would also produce qualia.

The difficulty with this argument, as Daniel Dennett (“‘Epiphenomenal’ Qualia?”) points out, comes from the fact that, according to Jackson’s account, qualia have no causal powers in the physical world. Because of this, qualia cannot influence my behavior in any way[2]: otherwise what we have is not epiphenomenalism, but interactionist dualism. For this reason, it is impossible for me to know that I myself have qualia, as their existence or non-existence would make no difference to the workings of my brain.

“Ah,” it may be replied, “but just because qualia do not have any effect on the workings of your brain, they may still have an effect on the workings of your mind, say, your belief that you have qualia.” This is, strictly speaking, a valid move. However, it has troubling consequences: for example (referencing Dennett), if I lost my qualia, I would presumably no longer believe that I had them, but I would still act exactly as if I did. Mental states could no longer be said to influence behavior, as those mental states are potentially influenced by qualia and qualia cannot influence behavior[3]. It seems, then, that the epiphenomenalist is left with two options. One, they could say that qualia influence mental states, in which case mental states cannot influence behavior and my mind is completely severed from the physical world, or two—if we want to preserve the influence of mental states on behavior—that qualia do not influence mental states, in which case it is impossible for me to know whether I myself have qualia. The first option is unpalatable (as I will discuss below), yet if the epiphenomenalist takes the second position, he/she must admit that there is no way for me to infer the existence of qualia in others, since I cannot even use myself as an example!

The Knowledge Argument

“Alright,” says the epiphenomenalist. “Let us assume your analysis is correct. If qualia have no effect on the physical world, then it is impossible for us to know whether we ourselves have qualia. However, there may be many things that it is impossible for us to know about, but which nevertheless exist. For example, there might be creatures that live beyond—and will always live beyond—the area of the universe that we can observe. It would be impossible for us to ever know about them, yet that does not change their existence.”

Against this, many would argue that, in the absence of any sort of evidence, it is useless to speak of some thing’s “existence”. There is a much larger philosophical issue at stake here. Happily, we do not need to discuss this issue now. For if qualia are truly epiphenomenal, then Jackson’s Knowledge Argument—the central part of his paper—completely falls apart.

This argument is as follows. Imagine that there is a neuroscientist, Mary, who for some reason has been forced to live her whole life inside a black-and-white room[4]. Despite this, Mary becomes a specialist in color vision and, eventually, learns everything that it is possible to know about how color vision works: complete descriptions of the physical mechanisms, the functional roles that color perception and various colors play, etc. One day, Mary’s captors decide to let her out of the room. She gets handed a tomato—the fruit of choice for released prisoners—and finds that, despite her omniscience regarding color vision, the color of the tomato is a completely new experience for her. “Aha,” she exclaims, “so this is what ‘red’, the color I know is associated with tomatoes, actually looks like!” She has learned a new fact about color that was not amongst any of the physical facts: what such color is like. This would seem to imply, then, that this fact about what red is like—the quale associated with red—is not part of the physical world.

However, this story does not make any sense if we accept that qualia are epiphenomenal. Firstly, if qualia have no influence on the physical world, it cannot be the new quale that caused Mary to make any sort of exclamation. She would have said the same thing, quale or not. This objection does not mean much, though—it can easily be countered that even though we cannot know from her actions whether or not Mary has learned a new fact, it is still the case that her subjective experience obviously now includes that fact. Perhaps we can imagine that Mary merely thought about the fact that she now had this new quale, so we don’t have to worry about its effect on her actions. This does not help, though, as we must now ask whether Mary’s thoughts are reflected in the workings of her brain. I.e., could some sort of neuroscientific Laplace’s Demon, who knows everything about the physical structure of Mary’s brain at any time, be able to tell us what she is thinking? I think Jackson would want the answer to this to be yes. Yet, if it is true that qualia can influence Mary’s thoughts, and that thoughts are reflected by brain states, then one of two options must be chosen: either qualia can influence brain states, and are therefore not epiphenomenal, or thoughts—and this would seem to include all thoughts, not just those specifically about qualia, as any given thought could always have qualia as part of its causal history—are entirely distinct from those brain states, even though they are reflected by them. If we choose this latter option, then the causal relationship between thoughts and brain states, if one exists, must be a one-way street: thoughts are caused by brain states, but thoughts cannot influence brain states, and are, in fact, epiphenomenal. However, if thoughts are epiphenomenal, then anything that can be influenced by thoughts would have to be epiphenomenal too (otherwise, thoughts could exert some sort of influence on the physical world).
This would likely include any mental state Mary might have (thoughts are, after all, quite influential in the mental world). In fact, if the entity called Mary can learn a new fact from experiencing a new quale—Mary can be influenced by qualia—then Mary herself must be an epiphenomenon: something caused by the lump of matter that has just been released from its black-and-white room, but with no ability to influence that lump of matter in any way.

Does the Knowledge Argument work without Epiphenomenal Qualia?

We have seen that if qualia are truly epiphenomenal, and yet I can learn a new fact from experiencing new qualia, then I myself must be epiphenomenal—otherwise, qualia can influence the physical world through me. While there is no immediately apparent logical reason why this view could not, possibly, be true, I think very few would be willing to accept it. If I am epiphenomenal, then, by definition, I have no power over the physical world—not even my own body. I am merely experiencing the effects of processes over which I have no control. In my mind this consequence is a sort of informal reductio ad absurdum, and I think most philosophers would agree[5].

However, perhaps we have been too concerned with epiphenomena. Perhaps Jackson’s Knowledge Argument still works—perhaps he just made a mistake by attaching epiphenomenal qualia to it. Of course, there are numerous objections to the Knowledge Argument not related to epiphenomenalism: perhaps what Mary learns is not a new fact, but a new mode of knowing some fact—not a “what”, but a “how”, or perhaps, as Dennett points out, our reaction to the Mary story merely reflects that we cannot conceive of what it would mean for Mary to know everything physical that there is to know—perhaps, with that knowledge, she would be able to reconstruct the subjective experience of seeing red. According to Dennett’s objection, Jackson is merely begging the question by pre-supposing that knowing all the physical facts will not allow Mary to have the experience. However, I want to take a different tack. I want to argue that, without epiphenomenalism, the Knowledge Argument does not work. Of course, we also just saw that with epiphenomenalism the Knowledge Argument doesn’t work. This means that if I’m successful, the Knowledge Argument will have nowhere to go.

If qualia do exist, but are not epiphenomenal—that is, they do have some sort of influence on the physical world—how does this affect the story of Mary? Let’s go back to before Mary left the room, when she knew everything physical there is to know about color vision. If the qualia associated with seeing color have a physical effect, then this effect would have to be included in Mary’s knowledge. This leads to one of two options: either qualia are non-physical, in which case Mary’s description of the physical mechanism of color vision would have to have some sort of causal hole in it—that is, she could say only that A causes B, and then B causes something or other, and that something or other causes D—or qualia are included in the physical description, in which case Mary would already know the subjective sensation of seeing red before leaving the room. Obviously, the second option leads to physicalism (although it is possible that the physical description might include stuff that we have not yet incorporated into our neuroscience), so it is the first option that anyone who wants to say that qualia are non-physical must choose. This option implies that the physical world is not closed under physical causation—that is, that there are non-physical things that can nevertheless have physical effects. Now, there is no prima facie reason why this isn’t a viable option. However, there is also no prima facie reason to choose one of the two options over the other (at least, not until one considers the arguments of those such as Dennett). Once again, we are faced with the problem of begging the question: Jackson must pre-suppose that qualia would not be included in a complete physical description. Of course, those who want non-physical qualia could say that the physicalist is also guilty of question begging. The question is empirical.

Qualia of the Gaps

The question is empirical, but we must ask: can it, in fact, be answered? We move now to the realm of personal opinion. It seems likely to me that there is no possible empirical evidence that could allow us to choose between the two options until such time as we do find a physicalist description of qualia. That is, there is no way that we could determine whether the causal gap we have encountered—the place where we throw up our hands and say “well, something happens”—is, in principle, impossible to bridge. The question now becomes: what assumption shall we make when we encounter a seeming explanatory gap? Do we assume that it is intrinsically inexplicable, or do we assume that we just don’t know enough, or may not even be smart enough? I think the latter is a much more satisfactory option, as it allows and motivates us to continue searching for an answer. Furthermore, the history of science includes numerous examples of the solution of seemingly insoluble problems, problems that were declared to be permanently insoluble. Physicalism has triumphed in the past, and I believe that we can reasonably assume that it will continue to do so.


[1] Namely, that the fact that some trait exists does not mean that it was evolutionarily selected for. It may, instead, be a necessary byproduct of some other trait that was selected for.

[2] To clarify, in all following discussion I assume that epiphenomenalism defines the inability to influence the physical world as a necessary property of qualia instead of a contingent one. I.e., I take it to mean not only that qualia do not happen to influence the physical world, but that they cannot influence the physical world (at least not in our reality).

[3] At this point a particularly legalistic reader might have begun to object to my characterization of “influence”. “Qualia influence mental states, and mental states influence behavior, but you have failed to show that qualia influence behavior,” that reader may say. To this I would reply: imagine that qualia were taken out of the causal chain. Would this not change the behavior of things further down in the chain? At least one property of influence/causal power is that if in order to give a complete causal account of the behavior of B one must describe the behavior of A, then A influences B.

[4] We may suppose also that, for whatever reason, she cannot see the colors of her own body.

[5] It may be objected that I am presupposing the existence of free will here, which would be ironic, as I do not believe in free will. However, I think the objection is unfounded. Even though we may not have control over our volitions, it still seems very hard to hold that those volitions do not have the causal powers that we feel they do.

Extended Cognition

Hampshire College
Philosophy of Mind

Introduction

In their paper “The extended mind”, Clark & Chalmers argue that the traditional view of cognition and the mind as being “inside the skull” is incorrect. Instead, they argue, cognitive processes—and, indeed, the mind itself—are partially constituted by the surrounding environment. Clark & Chalmers’ position is more than simple externalism—the position that a consideration of external, environmental factors is important in understanding the workings of the mind—it is radical, active externalism, which holds that those external, environmental factors are part of the mind. Clark & Chalmers focus in particular on the ways that belief can be extended. In this paper, I argue that belief is not something that can, as of now[1], be extended; that memory can much more plausibly be considered extended; and I propose a criterion by which we could consider something part of the mind. The mind is extended, just not in the particular way Clark & Chalmers argue for.

Clark & Chalmers’ Argument

In their paper, Clark & Chalmers devote most of their attention to beliefs as an example of extended cognition. Specifically, they introduce us to Otto, a man with Alzheimer’s who has developed a system for keeping track of the various pieces of information that he would like to recall: he writes them all down in a notebook, a notebook that is reliably available to him whenever he needs it. When Otto hears that the Museum of Modern Art is having an exhibition he would like to see, and that the MoMA is on 53rd St., he writes this information down in his trusty notebook. For all intents and purposes, Clark & Chalmers argue, the contents of Otto’s notebook constitute his standing beliefs about the world. Because beliefs are part of the mind, Otto’s mind extends outside his skull. Similarly, our own minds can be considered extended: we quite often store information outside ourselves, and there is no important difference between our writing things down in a notebook and Otto doing the same (except, of course, that we do not have Alzheimer’s).

Beliefs, and Why Otto’s Notebook Doesn’t Contain Them

Clark & Chalmers make a distinction between standing and occurrent beliefs, a distinction that is useful to go over before we continue. Occurrent beliefs are the beliefs that one is conscious of at any given moment: for example, the belief that is now brought to my mind that this essay is due today. Standing beliefs are those beliefs we have that we may not be consciously aware of at this moment, but which influence our actions and may, at any time, become occurrent. It is not even necessary that we be conscious in order to have standing beliefs: my standing belief that Mars is red exists even when I am in non-REM sleep.

It is my (occurrent and standing) belief that the contents of Otto’s notebook do not satisfy the definition of standing beliefs and also lack some important properties of such beliefs that are not contained in the definition (though perhaps they should be). Imagine some unfortunate person Alicia, who has sustained brain trauma that has made it very difficult for her to access semantic information about cars. Perhaps she can’t access it at all without some form of rehabilitation[2]. Furthermore, she has no implicit memories regarding cars: she behaves exactly as one who knows nothing about them. However, she does not lack the information: with appropriate help she can access it without having to relearn it. It is simply impossible for her to access it without significant effort. Does she have any beliefs regarding cars before she is able to access the information? It is apparent that she does not.

What is it about Alicia’s case that makes it apparent that she lacks beliefs about cars? First, that standing beliefs not only influence our behavior, but that they can do so without us being aware of them. So, for example, I do not have to consciously recall my beliefs about gravity to become concerned when some fragile object falls, even though without such belief that situation would have no reason to concern me (if I have no beliefs regarding gravity, for all I know the object may become safely suspended in midair). Or, for another example, at any particular moment I may act as if there is or isn’t a God without consciously referring to my beliefs on the matter. (Indeed, it seems that quite often the only times we become aware of many of our standing beliefs is when they are challenged in some way: things aren’t where we expected them to be, someone argues against our basic assumptions, etc.) Alicia’s information about cars cannot do this. And the content of Otto’s notebook cannot do this either: it cannot influence Otto’s behavior without him becoming consciously aware of it. Secondly, and on a related note, standing beliefs can, and often do, become occurrent beliefs automatically and/or without our conscious control. For example, when someone mentions the Empire State Building the belief that it is in New York may occur to me, quite without my willing it to. Once again, Alicia’s information about cars cannot do this and, once again, the contents of Otto’s notebook cannot do this: someone mentioning the MoMA would not cause any occurrent beliefs on the matter for Otto, unless he made a willful decision to check.

There are likely many more important differences between the sorts of things that standing beliefs can do and the sort of things that the contents of Otto’s notebook can do. At least in regards to the two I’ve described, which seem particularly essential, the first by itself is justification for rejecting the description of the contents of Otto’s notebook as “beliefs”, as those contents cannot influence behavior directly.

Memories

So while beliefs can’t be plausibly extended, does Clark & Chalmers’ thesis still hold: can the mind be extended? Is there some aspect of our self that reaches out into the world? I believe that memory fits the bill. When Otto writes down information in his notebook, what is contained in the notebook is not beliefs, but memories.

The distinction between memories and beliefs is perhaps subtle, but it is clear that there is a difference. Memories can do many of the things beliefs do that I described above: they can influence our behavior without us being aware of them, they can be recalled automatically, etc. In the case of memories, however, these properties are not in any way essential. Going back to the case of Alicia, we can ask whether she has memories regarding cars. It is apparent that she does: the memories were there, she just could not access them. What would she be re-accessing otherwise? Memories need be nothing more than the storage of information[3]. (Of course, one could ask whether, without access to information regarding cars, that information, and hence those memories, could be said to belong to Alicia. I discuss this question—that of the ownership of memories—later on.)

Otto’s, and Our, Memories

Memories seem to be much more plausibly extended than beliefs[4]. Information stored outside the body seems to serve the same functional role as memory, for the same reasons Clark & Chalmers argued that such information serves the same function as belief. If we follow Otto around, we will see that where we would commit some information to memory, he writes it down, and where we would consult our memory, he consults his notebook. It is more difficult and effortful, perhaps, for him to access the information in his notebook than it normally is for us to access our memories while in a normal state; however, it is no more difficult for him than it would be for us to access our memories if we were tired, distracted, or had suffered the same sort of brain trauma as Alicia. In fact, even in a normal state the access of our own memories can be difficult: I often find, for instance, that the name of some person is inaccessible to me (despite the fact that I know it) and that I have to perform some sort of effortful memory search to re-access it. There is perhaps a qualitative difference in our memories: whereas my memories “feel” like information that I knew all along, the contents of the notebook would not. But this only reflects the fact that we are accessing the memories through different modalities.[5]

Whose Memories?

In this discussion, the question may arise: if memories can be stored outside of the body, what makes those memories one person’s instead of another’s? Or, furthermore, what makes some information I’ve written down my memory, instead of just a record of information? I think this is the wrong question to ask. It assumes that my memories are encased in a unitary shell, that one person’s memories cannot be another’s, and that something either is my memory or it isn’t. I propose, instead, as an object of further study, that there is a degree to which a memory can be considered to belong to a particular individual. This degree is determined by a combination of two properties, neither of which is necessary in and of itself, although at least one (or some combination of them) must be present: first, how easily the information is recalled by that particular person, and second, how important the information is to the identity of that person. By the first criterion, anything stored “in the head”, which when recalled has the qualitative sense of being “mine”, is indisputably my memory, regardless of how relevant it may be to my identity; and by the second criterion, a record of my life, sealed away so that even I cannot review it, would also constitute my memory. These are the extreme examples—most memories share both properties to some extent. In Otto’s case, the information in his notebook serves as his memory because of its reliability and ease of access, and also because without his notebook he would likely feel that he had lost an important piece of himself. [6]

But What of the Mind?

Perhaps, at this point, the reader will object that while memories may be extended, they are not aspects of the mind; therefore, the mind is not extended. Perhaps the mind is only those operations we perform on our memories; after all, there are many sources of information that the mind draws from in order to function that are not themselves part of the mind—sensory data, for instance, or, if you feel that the senses are part of the mind, the content of that data—and perhaps memory is such a source. To this, I would reply that in my formulation my “mind” is identical with my “self”, and it is our memories that make us who we are: if you remove some memory that is important to my conception of self, I would no longer be the same person, and, therefore, I am not of the same mind. Therefore, memories are part of the mind.

I think the difficulty lies in thinking of “the mind”—of the self—as one unitary thing that, if extended, is extended in its entirety. Instead, I propose as a question for further study that the mind is made up of several different parts. Our conscious awareness is one such part, but our memories are also important. This non-unitary structure explains how outside sources of information can be memories and part of the mind, even though our recall of them is qualitatively different than that of memories stored “in the head”: the conscious aspect of our mind recalls them in different ways. Of course, now we need to ask how we determine what, exactly, can be considered part of the mind. There are many important factors that allow us to think—the beating of our heart, etc. How do we determine which of these should be considered part of the mind? I propose that the factors that are part of our minds are those that contribute functionally to the workings of the mind in such a way that they cannot be replaced without changing the identity of that mind—regardless of whether these factors are inside the head or not.



[1] I don’t consider it at all impossible that beliefs could be extended through the use of neural prosthetics in the future, as long as they satisfy the requirements listed later on in this paper.

[2] That, for the purposes of this thought experiment, does not include relearning the information.

[3] Although one can argue about whether and what certain types of stored information can be considered “memories”. I do not discuss this here.

[4] Furthermore, it seems much more natural to extend them this way: colloquially, we talk about a book of recollections, or a box full of memories, or even of places that remind us of the past as somehow containing the memories themselves (“this house is full of memories”), in a way that we do not for beliefs. While this fact about the natural usage of terms does not show that it is in fact wrong to speak of beliefs as being extended (although, as seen earlier, it seems that it is), it does lend support to that assertion, and it makes it even more likely that memories can be plausibly extended, as the idea does not seem too far from our natural intuitions about the use of our terms.

[5] One objection could be raised related to recent research on memory: it seems that when we recall memories, we re-imagine and rewrite them. Otto does not do this with the contents of his notebook (or, at least, it isn’t necessarily part of the process), therefore the contents are not memories. I would reply by saying that this fact about memory as it happens to work in no way describes an essential property: it could have turned out that the recollection of memories did not work this way, yet we would still call it memory.

[6] It is also possible for entities other than individuals to have memories: collective memory is allowed for, if the information is easily accessed by and/or important to the identity of the people as a collective.

Mind-Brain Identity

Hampshire College

Philosophy of Mind


What is identity theory? In this paper, I explain it and why it is flawed: namely, it does not account for multiple realization.
Identity theory is the metaphysical doctrine that, while epistemically distinct, brain states and mental states are ontologically identical. By this I mean that mental states and brain states are the same thing: the only difference is in how one perceives, thinks about, and describes this thing. The fact that brain states and mental states are identical is an empirical observation: it is not known a priori. An analogy that will perhaps make this whole idea clearer is the case of lightning, and its identity with electrical discharge. It is known that lightning is electrical discharge; however, the way we perceive, think about, and describe lightning is completely different from the way we perceive, think about, and describe electrical discharge. Furthermore, we would not know that they are the same thing if it had not been experimentally confirmed (à la Benjamin Franklin and his kite). Similarly, evidence from cognitive neuroscience leads to the hypothesis of identity. Identity theory is a reductionist theory; it posits that the description of any higher-level phenomenon can be reduced to a description of lower-level phenomena without a loss of information, i.e., mental states can be reduced to brain states, which, being physical states, are further reducible to the laws of physics.

Identity theory is, furthermore, a theory of type-identity: it claims that any particular mental state type (such as pain) is reducible to a particular brain state type (such as C-fibers firing)—indeed, that it must be that brain state type, just as lightning must be electrical discharge. This is distinguished from token-identity, which claims merely that any instance of some mental state is reducible to some sort of brain state. Type-identities, though not without their difficulties, are much easier to clearly define than token-identities. Take, for example, the claim that “pain is C-fibers firing”. If something is pain, it is C-fibers firing, and if it is C-fibers firing, it is pain—plain and simple. Token identities are fuzzier: for example, what is it about this object in my hand that makes it a mug? If we want strict definitions of our token-identities, we must be able to say that those things, and only those things, that satisfy a certain set of criteria (has a handle, holds liquid, etc.) are tokens of a certain type. We must either posit functional types—i.e., that all tokens that are representative of the type X have the particular combination of functions Y, and that all things that have the particular combination of functions Y are X—or physical types—all tokens that are representative of type X share a particular physical trait Y, and all things that share the physical trait Y are X.

The advantages of a reductionist theory like type-identity theory are apparent: it gives our experience a sort of explanatory, causal, and even ontological coherence and closure. When things are reduced, the method of explanation for any particular phenomenon is no longer different from the method of explanation for any other phenomena: they can both be explained in terms of the same underlying process. Causally, reductionism implies that there are not separate chains of causality for different phenomena, i.e. that there are not mental processes going on according to their own laws of causality separate from the physical processes and their laws of causality. And ontologically, reductionism reduces the number of posited kinds of entities (minds, heat, etc.) down to, potentially, one.

There is, however, a problem, which may be fatal for identity (or, specifically, type-identity) theory, and, indeed, reductionism in general—multiple realization, i.e., the fact that the same higher-level system can be implemented on entirely different underlying physical systems. For an example, consider an algorithm (a set of rules for how to perform some function). Let’s say the rules of this algorithm are to take an input number and add 2 to it. This algorithm can be run on multiple, almost incomparable physical systems. For example, I can do it in my head, I can do it with pen and paper, I can design a contraption out of Tinkertoys to do it, I can create a program on my calculator to do it, it can be done on a Mac, and it can be done on a PC. In fact, there are a potentially infinite number of ways that this one algorithm could be implemented, with nothing physically similar about any of them.
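The point can be made concrete even within a single programming language. Below is a minimal sketch of my own (not from any source I’m discussing): three structurally different implementations that all realize the same “add 2” algorithm, sharing nothing at the implementation level.

```python
# Three structurally different realizations of the same "add 2" algorithm
# (an illustrative sketch of multiple realization, not from any cited source).

def add_two_arithmetic(n: int) -> int:
    # Realization 1: direct arithmetic.
    return n + 2

def add_two_successor(n: int) -> int:
    # Realization 2: apply a successor operation twice.
    result = n
    for _ in range(2):
        result += 1
    return result

def add_two_lookup(n: int) -> int:
    # Realization 3: a precomputed lookup table (covers only 0..99).
    table = {i: i + 2 for i in range(100)}
    return table[n]

# Despite their different mechanisms, all three realize the same
# algorithmic type: for every input they agree on the output.
for n in range(100):
    assert add_two_arithmetic(n) == add_two_successor(n) == add_two_lookup(n) == n + 2
```

Nothing about the loop, the dictionary, or the `+` operator is shared between the three functions; what makes them “the same algorithm” is only the input–output function they compute.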

Similarly, there seem to be multiple ways in which a mental event, for example seeing green, can be correlated with brain events—after all, the way that you see green might be quite different from the way I see green. In fact, in order for the description “seeing green” to be useful, it must describe these multiple ways: otherwise, the only thing that could be described as truly seeing green would be me, as everyone’s brain is different and I am the only person with exactly the physical brain state that I have when seeing green! The case of C-fibers being pain would seem to be easier: C-fibers are, after all, a particular type of neuron shared by us all. But if pain is C-fibers firing, does that mean that a petri dish full of C-fibers and nothing else would experience pain? If there is any conclusion we do not want our theory of mind to entail, it is this. This would be true of any mental event: if one attempts to identify it with some very specific thing (the activities of a type of cell, the presence of a chemical, etc.) one must claim that that thing, all by itself, is the said mental event, a claim which leads to absurdity. Pain, instead, seems to identify with various events interacting amongst different parts of the brain, the specific organization of which is once again going to be less and less similar between any two people. The problem becomes even more difficult when considering the supposed mental lives of non-humans. Monkeys, dogs, reptiles, etc., have brains that are physically more or less different from ours. Identity theory would seem to suggest that these creatures cannot have the same sort of mental events we do[1]. And these creatures at least are biologically similar to us: what about computers or aliens?

In fact, everything that has mental states would seem to have to have distinct types of mental states! Our ontologically simple theory has failed us: instead of pain, for example, we now have my pain, your pain, Sue’s pain, dog pain, alien pain, computer pain, etc. Instead of simplifying our ontology, identity theory requires us to posit a new type of mental state for each possible case. This is no good—we would like a theory of mentality that not only explains the nature of mental states we happen to have, but that also has some degree of generalizability, allowing us to say something about the mental states of others in a way that such states can be related to our own. Identity theory has very weak explanatory power in this regard—it would be as if we had a theory of biology that didn’t allow us to make comparisons between different species.

Is there a solution, wherein we can avoid dualism, keep physicalism, and not run into these problems? I believe so, and I believe the answer lies in a sort of functional type-identity theory that I glossed over earlier. Let’s go back to the algorithm: I said that it was the same algorithm regardless of the physical implementation: but what makes this the case? It is that the algorithm is performing the same function. Similarly, a mental event is a sort of function processing information in a certain way. While there may be difficulties with this theory, it seems to me to be one worth pursuing.



[1] I must qualify this statement, because identity theory has, occasionally, been used to argue the exact opposite. If we identify pain with C-fibers firing, and another creature has C-fibers, then it feels pain: we can therefore infer that animals such as, say, cows experience the same kind of pain we do, as the physiology is quite similar. This argument, unfortunately, does not hold under scrutiny, since, as said earlier, pain is identified with the interaction of several parts of the brain, which is going to be very different between us and other creatures.

Logic and Mathematics

Hampshire College
Low-Tech Computing

Mathematics and logic are, firstly, systems whose original purpose for existence is the same: to provide a system for more effectively operating within our world. Both, as they began, did so through a codification of common sense. They quantified our experience and defined what operations we could apply to those quantities so that we could systematically find facts of which we were previously unaware. The distinction between the two was in what they attempted to quantify: mathematics was primarily concerned with objects in the world, such as money, land, and later the laws of physics, whereas logic was primarily concerned with concepts in the mind, such as propositions and categories (although those concepts often related to objects).

However, despite their different foci, logic and mathematics were based on a common method: deduction, that is, a system of rules which you can apply in such a way as to arrive at a conclusion which is both new and necessarily true (pg. 99). As such, it was almost inevitable that the two would meet. One of the earliest examples of this meeting is probably Euclid’s “Elements” (pg. 324), but the true synthesis came with the work of Boole and Frege, where it was shown that logic could be dealt with mathematically, and that one could attempt to build mathematics on a logical foundation (pg. 329-330).

The question now raises itself: is mathematics a branch of logic, or logic a branch of mathematics? The logicists, such as Frege and Russell, believed the former; the intuitionists believed the latter (pg. 328). I believe both are wrong. Just as philosophy is not a branch of logic, but instead logic is the method by which we do philosophy, so too is mathematics not a branch of logic—logic is the method by which we do mathematics, in the construction of theorems and proofs. Nor, though, can logic be considered a branch of mathematics: while it is a mathematical system that works in a way particular unto itself and distinct from other areas of mathematics, it nevertheless permeates the entire structure of the enterprise. If mathematics is a tree with branches, logic is how the tree grows.

However, it is here that we run into our difficulty. Because while logic is how the tree grows, logic won’t necessarily make it grow the way we want. Logic and mathematics share the same weakness, rooted in the way they allow us to systematically analyze the world: by turning that world into symbols, which are then manipulated according to a strict set of rules. This is formalism, which attempts to avoid errors due to flaws in human intuition by making logic and math completely devoid of meaning. But as long as the symbols, statements, and rules aren’t mutually inconsistent, you can come up with whatever rules you like and make whatever statements you like. Yet only some of these rules and statements will give you a system that provides an accurate description of the world—which is, after all, the original purpose of mathematics and logic. The test of a logical or mathematical system’s truth would seem to be, then, concurrence with the actual world, with experience. But if this is the case, then why should we try so hard to logically prove that, for example, 1+1=2, as was done in some 300 pages by Russell and Whitehead? To do so is to use a system whose truth is based on experience to prove something that according to experience we already know to be true!
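For a sense of what such a derivation looks like today: in a modern proof assistant the statement can be checked mechanically, because both sides reduce to the same numeral by the definition of addition on the natural numbers. (A sketch of my own in Lean 4 syntax; this is of course not Russell and Whitehead’s derivation, which worked in a very different logical system.)

```lean
-- '1 + 1 = 2' holds by definitional reduction of '+' on the naturals:
-- both sides compute to 'Nat.succ (Nat.succ Nat.zero)'.
example : 1 + 1 = 2 := rfl
```

The brevity here is somewhat deceptive: the hard foundational work has simply been moved into the definitions of the numerals and of addition, which is arguably the very point at issue.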

I must admit that I don’t know the answer, and furthermore don’t know enough about mathematics to even know whether I’m asking the right questions. I can say, however, that I have a very strong conviction that we should attempt to prove as much as we can by starting with the least and most simple and obvious assumptions. I suppose this really is the maxim of all work in philosophy, mathematics, and science, which has allowed those fields to flourish: to not take anything for granted, unless one absolutely has to.

Sunday, April 3, 2011

Agnosticism does not exist

OK, so this argument makes some people angry, so follow me:

  1. In the literature, there are usually two varieties of Atheism that are distinguished from each other: weak (negative) atheism and strong (positive) atheism. Weak atheism is the lack of belief in the existence of God: weak atheists feel that those who believe in God carry the burden of evidence. Strong atheism is the belief that God does not exist: strong atheists feel that the evidence points to God not existing. (Note that a lack of belief in X does not necessarily entail a belief in Not X.)
  2. One cannot both believe that God exists and not believe that God exists. They can, however, believe neither.
  3. Now, an agnostic is, according to Merriam Webster, "one who is not committed to believing in either the existence or the nonexistence of God or a god". Agnostics do not believe that God exists, nor do they believe that God does not exist.
  4. Therefore, by 2 and the definition of agnosticism, agnostics do not believe that God exists.
  5. Therefore, by 4 and the definition of weak atheism, agnostics are weak atheists.
Now, there are people who would probably consider themselves religious agnostics: they believe in God but think there's room for doubt. But in this case they ARE "committed to believing in...the existence of...God or a god."

So in summary: one can either believe that God exists, or not believe that God exists (although the latter does not entail believing that God does NOT exist). Because agnostics do not hold any particular attitude about God's existence, they do not believe that God exists. Therefore they are weak atheists.
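The structure of the argument can be made explicit with a toy classifier (an illustration of my own, not a standard formalization): model a doxastic state as two yes/no attitudes and apply the definitions above.

```python
# Toy formalization of the argument (illustrative only).
# A doxastic state is modeled by two booleans: does the person believe
# God exists, and does the person believe God does not exist?

def classify(believes_exists: bool, believes_not_exists: bool) -> str:
    # Premise 2: one cannot hold both beliefs at once.
    if believes_exists and believes_not_exists:
        raise ValueError("incoherent doxastic state")
    if believes_exists:
        return "theist"
    # From here on the person lacks belief that God exists,
    # which is the definition of weak atheism.
    if believes_not_exists:
        return "strong atheist (hence also a weak atheist)"
    return "weak atheist"

# The agnostic, per the Merriam-Webster definition, holds neither belief:
print(classify(False, False))  # prints: weak atheist
```

The classifier has no output labeled "agnostic": anyone who holds neither belief falls under weak atheism by definition, which is exactly the conclusion of steps 4 and 5.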

Note on "belief" and "knowledge": "agnosticism" strictly means "not knowing". So strictly speaking, religious people and atheists could be agnostic, even though they have beliefs on the matter, if they do not feel that they know whether god exists. However, the term has come to mean not believing anything one way or the other (as can be seen in the definition). With this meaning, my argument holds.

Tuesday, February 22, 2011

Why did I think this post was a good idea?

So I decided it would be worthwhile to check out the stats on my blog.

Search Keywords that got people here:

All Time
  1. dervine7.blogspot.com
  2. peter benzi
  3. sandra pettinico
  4. taliesin nyala
  5. "richard wayne lee" or "lee, richard wayne" strained bedfellows
  6. nvcc homeschool pdf connecticut
  7. patricia pallis
  8. peter benzi nvcc
  9. petr benzi
  10. "douglas hofstadter" utilitarianism
All pretty predictable. Stuff I've mentioned in my posts (especially the teachers I mentioned in my commencement speech). It's mildly interesting that 2 people apparently got to my blog looking for the article by Richard Wayne Lee that I cited. But who is this Taliesin Nyala? Turns out that she's an alum of Hampshire, and she shows up on my page because I follow the Culture, Brain, and Development blog. Which makes these next keyword stats very confusing...

Past Month
  1. dervine7.blogspot.com
  2. "taliesin nyala"
  3. agent hujinikabolokov
  4. religious humanism strained bedfellows pagans
  5. taliesin nyala how pleasure works
  6. taliesin nyala naked
  7. taliesin nyala sex
Your guess is as good as mine. Especially since, if you search those last two, the only website you get that has ALL those words in it is mine! ("naked" appears in my blog post about the sayings of Jesus, and "sex" appears in my favorite quotations)

(By the way, Agent Hujinikabolokov is from Sleep Talkin' Man.)

Referring Sites

Actually, never mind. This wasn't a terribly interesting blog post to begin with and now it's 2:24AM. Except for uupdates.net and Facebook, the referring sites are just a bunch of referral spam (I'm guessing porn), which is mildly depressing. From Russia, it seems. Although I have been getting a lot more views from Facebook recently, which means my friends are looking at this blog, which is cool! Speaking of Russia, that spam makes Russia one of the top countries to view this blog. So there you go.

Thursday, February 17, 2011

EXCITING DEVELOPMENTS!

I'm going to start trying to post regularly! I'm sure that for all of you guys who check this blog regularly only to be disappointed by the lack of, well, blogging, this will once again fill your lives with meaning...

OK, so the title of this post is hyperbolic. Which leads to the second development, which is that I realized my blog was distinctly serious and formal. Which is odd, because I have a distinct lack of seriousness and formality. So, more silliness.



So......yeah. That's the story.

Wednesday, February 16, 2011

Doodled this during a class...

The Turing Test

Hampshire College
Philosophy of Mind

The Turing Test involves the following procedure: a person, the interrogator, is talking to both a machine, designed to imitate a human, and to an actual human. (The communication is entirely through text.) The interrogator’s job is to determine which is the human and which is the machine. If no interrogator can determine which is which, the machine is judged to be thinking. Is this accurate? In this paper, I will argue that it is, because we can only judge thought based on behavior, and if the behavior of the machine is identical to a human’s but the machine is not thinking, we must doubt the existence of human thought.
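The procedure can be sketched as a toy protocol (a minimal illustration only; the interrogator object and responder functions here are hypothetical stand-ins, not an actual implementation of the test):

```python
import random

def run_turing_test(interrogator, human, machine, rounds=5):
    """Toy Turing Test: the interrogator questions two hidden
    parties over text, then guesses which one is the machine."""
    # Hide identities behind randomly assigned labels "A" and "B".
    parties = {"A": human, "B": machine}
    if random.random() < 0.5:
        parties = {"A": machine, "B": human}

    transcript = []
    for _ in range(rounds):
        question = interrogator.ask(transcript)
        for label, respond in parties.items():
            # All communication is text: (who answered, question, answer).
            transcript.append((label, question, respond(question)))

    guess = interrogator.guess(transcript)  # label believed to be the machine
    # True if the machine was unmasked; False if it passed this interrogator.
    return parties[guess] is machine
```

The point the sketch makes is structural: the interrogator only ever sees the transcript, so any verdict must rest on behavior alone.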

When discussing the Turing Test, it is important to make a distinction between what we can measure and the “inner nature”, so to speak, of that which we are measuring. The discussion then involves two questions: by what criteria are we to judge something as thinking, and is that thing, in fact, thinking?

In regard to the first question, let’s begin with considering the method that we, in fact, employ, before moving on to the method we ought to employ. I.e., how is it that we naturally determine that some particular thing in our environment is thinking? A first answer to this question might be that we ascribe thought to other members of the human race, and nothing else. This will not do, however, as we often say that certain humans lack thought: definitely if they are brain dead, for instance, and more controversially if they are mentally handicapped or a child. And although there are no non-controversial examples of non-humans thinking in our experience, we nevertheless have no problem imagining that non-human beings, such as extraterrestrials—that have no genetic or physiological resemblance to us (or, in some cases, no physiology at all)—could nevertheless think. Also, it seems that we often ascribe thought to non-human animals of which we do have experience, e.g., “the dog is thinking about where to hide its bone”. Perhaps in this latter case we use “thinking” as a figure of speech, a sort of shortcut for “acting as if they are thinking”. But why then do we not say that when we talk of other humans “thinking” we are using the same shortcut? After all, we do not perceive thoughts (except for our own)—we perceive their manifestations, the acting-as-ifs. For those who need to be convinced of this fact, consider the following thought experiment.

Suppose that someone had been rendered completely incapable of moving any part of his/her body, either through nerve damage or some sort of outside force: furthermore, the parts of his/her nervous system that we believe are responsible for thought have been hidden from us in some way (perhaps encased in some material which is opaque to any sort of scan), so that we cannot determine whether they are active or damaged. (We could also suppose that there has been no damage to the nerves carrying signals to their brain, if this supposition is deemed necessary.) It is undeniable that this person could be thinking: victims of temporary paralysis can describe the experiences and thoughts they had while they were paralyzed. But it would be completely impossible for us to determine whether this person is, in fact, thinking.

So how do we naturally determine whether other things in our environment are thinking? The examples given of brain-dead individuals and aliens make it apparent that this judgment is not ultimately made on the basis of belonging to a certain species, human. It may be initially made on that basis—we probably have an instinctual tendency to ascribe thought to any other human we meet and ascribe a lack of thought to any non-human (for example, on encountering a brain-dead individual we might assume he/she is aware until we find out their condition, or in meeting a sufficiently strange alien we might assume it’s just a non-thinking creature until we learn more about it)—but it isn’t ultimately made on it. Instead, we make this judgment based on whether the thing behaves in a way that appears to indicate thought.[1] Furthermore, the example in the thought experiment above makes it clear that this is the only way that we can make this judgment; and, pragmatically, it is the way that we ought to make this judgment. This means that we should judge a machine that passes the Turing Test to be thinking: it is behaving exactly like a human, humans think, therefore it is behaving in the sort of way that indicates thought—and if from these facts we do not infer that the machine is thinking, then we are holding it to a different standard than the other things about which we make such judgments.

Now we move on to the second question. Does our judgment, made for pragmatic reasons, reflect actual reality? Is the machine, in fact, thinking?

It is useful now to define exactly what we mean by “thinking”. Turing’s definition[2] is that the sort of thing that thinks is the sort of thing that passes the Turing Test, a definition which is useful for him as a computer scientist interested in what computers can do but not very useful for philosophers of mind, as it makes it tautological that a machine which passes the Turing Test is thinking. For our purposes, I propose the following definition: something is thinking when the part of it responsible for thought is manipulating models of the world (which are not physical models), and when it has a subjective, qualitative awareness of the models and the manipulations that it is performing on them. The first part of the definition comes from the fact that one of the things that distinguishes thought from non-thought is that while the latter consists of observably deterministic responses to force and/or stimulus and of solving problems purely by trial-and-error behavior (randomly producing behavior until something works), the former consists of considering the best course of behavior before doing anything, and this consideration is made by the thinking thing modeling the situation and then running through the possible solutions, noting what those different solutions do within the model—observationally, this means that something that acts as if it thinks can look at a puzzle and then proceed to quickly (relative to trial-and-error) perform the solution. This is how we, who are thinking things, behave, and it is a major part of the acting-as-if by which we judge that other things are thinking. However, the second part of this definition is important, as there are many things that manipulate models of the world that may not be thinking: any computer would fall under this category.
The second part of the definition is more essential to our question, as we are making the distinction between what we can measure and the “inner nature” of that which we are measuring. Because of this, it is important to specify that the subjective awareness be qualitative, i.e., that there is something that “it is like”, so to speak, for the thing to be aware of what it’s doing—as computers can monitor and analyze their own internal processes and still not be considered to be thinking.
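The behavioral contrast drawn above—blind trial and error versus running candidate solutions through an internal model before acting—can be illustrated with a toy sketch (the solvers and the state-space puzzle are hypothetical illustrations of the distinction, nothing more):

```python
import random

def trial_and_error(goal, actions, apply_action, start, max_tries=10000):
    """Non-thought-like solving: randomly produce behavior in the
    world itself until something works (or we give up)."""
    state = start
    for _ in range(max_tries):
        if state == goal:
            return state
        state = apply_action(state, random.choice(actions))
    return state

def model_based(goal, actions, model, start, depth=10):
    """Thought-like solving: run action sequences through an internal
    model of the world first, and act only once a plan succeeds."""
    frontier = [(start, [])]  # (modeled state, plan that reaches it)
    for _ in range(depth):
        next_frontier = []
        for state, plan in frontier:
            if state == goal:
                return plan  # perform the solution only after modeling it
            for a in actions:
                next_frontier.append((model(state, a), plan + [a]))
        frontier = next_frontier
    return None
```

Observationally, the difference is the one the paper points to: the model-based solver appears to "look at" the puzzle and then quickly perform a working solution, while the trial-and-error solver flails in the world until it stumbles on one.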

However, the fact that computers can do this might mean that there is something “it is like” and we just do not realize it. For this reason I will not ask the general question about what sorts of computers could think, but the specific question of whether the sort of computer that can pass the Turing Test thinks; I’m concerned with whether passing the Turing Test is sufficient for allowing us to infer thought, not whether it is necessary.[3]

We are now ready to answer the question: is the machine that passes the Turing Test thinking? We have shown that if we are to judge whether it is thinking the way that we judge whether other things are thinking, we must judge it to be thinking. And based on the facts considered so far, I believe that our judgment would be accurate. The machine that passes the Turing Test is the machine that perfectly imitates human behavior. If it is not, in fact, thinking, then humans that pass the Turing Test would not have to be thinking either. Thinking would not have any explanatory force or necessary connection to behavior, and we therefore would have no reason to assume its existence. Furthermore, it could be argued that if the machine is not thinking, humans must not be thinking, as we have a case of two things that are behaving in an identical manner and therefore it would seem that any phenomenon produced by one must be produced by the other (even if the phenomenon looks different: for example, the same program that plays music when run on my Mac would not when run on a mechanical computer). If the machine that passes the Turing Test is not thinking, then solipsism becomes a truly viable option: and this, I think, is a conclusion no reasonable person wants to accept.





[1] One who objects to this assertion might bring up the case of the paralyzed patient who, while not exhibiting any behavior, nevertheless exhibits certain brain activity from which we infer that he/she is thinking. However, the only reason that we can make this inference is that that sort of activity is normally associated with thought-exhibiting behavior: if we had not observed such a correlation, we would not know what, if anything, the brain activity indicated. We can also imagine that a neurological (or whatever word we’d use for the study of the part of the alien involved in cognition) examination of a thinking alien might reveal completely different sorts of activity. Therefore, my assertion can be easily extended to say that, in cases where we cannot make judgments based on behavior, we can infer thought if the thing we’re dealing with exhibits some observable phenomenon that is normally correlated with thought-exhibiting behavior amongst examples of that thing (if examples of that thing engage in thought-exhibiting behavior).

[2] “Computing Machinery and Intelligence”, pg. 3

[3] Indeed, it is not necessary: our paralyzed patient from the earlier thought experiment would not pass it, although he/she is thinking. It is also possible that beings with a higher level of thought than our own would fail it: the sorts of things they might say might appear to us to be total gibberish.