We Are Human

A very good essay here.  This is indeed what happens when we forget the “lifeworld.”  The materialist can tell us all day long that there is no ultimate purpose or meaning to existence, that their love for a significant other is nothing more than neurons firing in the brain and the electro-chemical processing of “information,” but we know this same person goes home and kisses that significant other, eats dinner, laughs, listens to music, reads, and lives knowing their existence is so much more than its reduction (at the level of theory) to matter-in-motion. 
Talk about cognitive dissonance.  Talk about a philosophy and way of perceiving and interpreting the world that is utterly divorced from actually living in that world.  We normally call that fantasy.
Imagine carrying around within ourselves (our minds, emotions, dreams, fears, hopes, loves) the very confirmations that destroy our theory or belief about how the world is supposed to be.  You can’t get outside yourself.  Tough spot, that.  But the materialist must constantly convince himself that things like free will, and all the things we mark out as making us human and distinctly different from animals or machines, are illusory and subjective imaginings.  In other words, all those things that make life wondrous, mysterious, and beautiful at the “human” level are false returns, ghosts, phantoms.  But no one can or does live that way in those quiet moments of reflective solitude.
Or in those moments over quiet candlelit dinners with the person they love, as a full moon reflects off the water, music plays softly, and nothing need be said between the two.  In those moments we are caught off-guard, as it were, and we know we are alive and fully human and utterly different from animal or machine.
 
Imagine a worldview where the holder of that view must constantly convince himself that his most powerful moments of being fully alive and human are ultimately false and meaningless.

8 Responses to We Are Human

  1. Burk Braun says:

    Hi, Darrell-

    It is hard to believe that you let yourself in for such ignorance and Luddism. Straw men are strewn about, fallacies litter every sentence. Alas.

    I understand the emotion. But the philosophy of this anti-naturalism makes no sense. If you combine high computer capacity with emotions, you get a being with meaning. Implanting emotions into our computers would be easy, if not as refined as what evolution has brought to such a high pitch. But who wants a computer with PMS? Not me.

    In the end, we can all agree that we are human.. that is the biggest straw man of all. The question is what being human is and means. Does it mean believing in vaporous fantasies, or facing reality, whatever it contains? What do our brains do for us? Where is the self? We have been over all this in the vitalism debate.. how did that end up?

  2. Darrell says:

    Burk,

    Please point out one straw-man argument or fallacy in the essay. And what in the world is “luddite” about the essay? He is saying nothing against machinery or technology??? Did you read the essay?

    And, how in the world are we expected to read something like this, “If you combine high computer capacity with emotions, you get a being with meaning. Implanting emotions into our computers would be easy…” without breaking out in loud, abject laughter, as I did when I read it? And you think belief in God is bizarre! I would love for you to put a paper out there or blog on how “easy” it would be to implant emotions into a computer! Too many sci-fi movies, anyone? You do know that R2D2 is not real, right?

    Either you have no idea what emotions are or you’ve missed the entire point of anything in the essay or my post.

  3. Burk Braun says:

    Darrell-

    Starting from the top.. “I’d like to suggest why I stubbornly continue to believe that I’m a human being — something more than other animals, and essentially more than any computer.”

    But the argument was not that he or any other human was not a human. The argument is that he exists at the end of an evolutionary process by virtue of which he shares many sub-systems with his fellow creatures, physical and mental. The argument was not that he is a termite, but that the pressures on termites and the solutions they have developed have some small relationship to those in human societies. For instance, the ability to band together into larger cultures of organisms creates an entirely new ecological entity. Ditto for humans.

    Etc and so forth. The more I hear from philosophers, the more it seems that their training reaches some kind of plateau at rhetoric.

    And the whole sloppy brain stuff- it is truer for him than he knows!

    Turning to emotions, you seem to find it hard to fathom, but emotions are easy to program. That is, expressing them, not necessarily experiencing them.. the whole issue of consciousness remains a bit hazy. But we could easily have a computer shout “Ouch” when its CPU gets too hot, say (a toy sketch of that rule appears at the end of this comment). Would that satisfy you? Or it could have headaches, and it could be attracted to other computers. It is all a matter of programming. Your cell phone probably seems to have a life of its own these days, with all the automatic updates, ads, notifications, etc. It is all oriented to being a tool for us, so it has not focused on exhibiting its own emotions, but clearly the potential would be there. Or remember the Furbie? Many people become attached to them.

    Then the question becomes- where is the divide between feeling emotions and just expressing them? Personally, I don't think that is such a chasm. The programming to express them accurately becomes far easier if the organism as a whole is reacting on a consistent basis- that is, if it is feeling the emotion completely. So the self-monitoring becomes dominated by emotional states, and there we are, as humans, more or less.

    At any rate, this will all become clearer as computers advance, and you can keep right on complaining about the non-feeling-ness and inhumanity of sentient robots.

    Coming at this from the other perspective, what do you think is going on in the brain? When Mr. Polt has a stroke and can no longer remember things or read language, what do you think has happened? Rage all you want, but the writing is on the wall.
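
    To be concrete about the “Ouch” example above, here is a toy sketch in Python- nothing anyone has actually built, the sensor read is simulated, and the function and threshold names are made up for illustration- just to show what a bare stimulus-response rule looks like:

    ```python
    import random
    import time

    # Stand-in for a real temperature sensor; an actual system might query
    # /sys/class/thermal or a hardware monitoring API, but here it is simulated.
    def read_cpu_temperature():
        return random.uniform(40.0, 100.0)  # degrees Celsius, made up

    PAIN_THRESHOLD = 85.0  # "too hot" - an arbitrary illustrative cutoff

    def express_emotion(temp):
        # A bare stimulus-response rule: cross the threshold, emit the complaint.
        if temp > PAIN_THRESHOLD:
            print(f"Ouch! CPU at {temp:.1f} C")
        else:
            print(f"Feeling fine at {temp:.1f} C")

    if __name__ == "__main__":
        for _ in range(5):
            express_emotion(read_cpu_temperature())
            time.sleep(1)
    ```

    The point of the sketch is how little it contains- a threshold and a print statement- which is exactly the gap between exhibiting an emotion and feeling one discussed above.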

  4. Burk Braun says:

    Let me try another one, to go slightly deeper into his arguments: “But since the human race has evolved to be capable of a wide range of both selfish and altruistic behavior, there is no reason to say that altruism is superior to selfishness in any biological sense.”

    He seems to misunderstand evolution here and its relation to ethics. The point of these evolutionary studies is to clarify why we have certain capabilities- why we are capable of altruism, for instance. The evolutionary argument is not that altruism is always good, or that it is always bad. Indeed both are losing strategies. The argument is that, being social and socially dependent, we have keen consciousness of who is in our group and who is not, and have an interest in altruism towards the former, not the latter. And we have had this for a very long time.

    But these capabilities are only roughly encoded. They can help explain why tribalism is such a strong propensity, to the point of genocide. But it is only a tendency, submerged in a highly complex learning machine that deals with the world day in and day out, solving the many problems we need to survive and thrive. We are also very diverse, which itself has an evolutionary explanation, incidentally. As I mentioned in my solution to free will, we can learn, and that is the key to being morally accountable, and also to much else in ethics, where we mix our motivations and desires with our wisdom about what would satisfy us most in the long run .. to come up with morality, etiquette, laws, etc… the apparatus of our social existence.

    In the end, he seems so extremely obtuse that it is impossible to believe that he is arguing seriously. He is either shockingly ignorant (unlikely) or is so intent on scoring rhetorical points that he has given up posing a reasonable argument. Not a great advertisement for Xavier, incidentally.

  5. Darrell says:

    Burk,

    Wow. Where to begin. His point had nothing, whatsoever, to do with addressing some assertion that he was not human. His point was he is different from animals and machines. He is something “more.” He is a “human” being in the very sense of demarcation. If your other examples of straw-men or fallacies are like this one, then you have nothing…as I suspected.

    The more I hear from “scientists” the more it seems their training didn’t include basic reading comprehension.

    “It is all a matter of programming. Your cell phone probably seems to have a life of its own these days, with all the automatic updates, ads, notifications, etc. It is all oriented to being a tool for us, so it has not focused on exhibiting its own emotions, but clearly the potential would be there. Or remember the Furbie? Many people become attached to them.”

    Burk, even for you this is ridiculous. A cell phone? Are you saying a cell phone has emotions? If so, I have a bridge in Brooklyn I would love to sell you.

    As to the rest, are you joking? You think a computer could actually be “attracted” to another computer if “programmed?” You think that would be similar to what happens when people are attracted to each other? That is the silliest thing I have heard in a long time. Complete rubbish. You might as well tell us you think time travel and teleportation are right around the corner. “Furbie?” Again, are you joking? You think that is an example of a robot exhibiting emotion on par with humans or even close?

    It is so obvious the writer was talking about “experiencing” emotion and not artificially “expressing” such. Why not address the real issue? Talk about straw-man arguments. He is talking about the difference between the very thing you are noting as examples and the “real” thing that we experience as humans. That difference and gulf is as wide as the universe.

    “the whole issue of consciousness remains a bit hazy.” Really? That might be the understatement of the century. It is beyond “hazy.” “Science” is no closer to solving that problem (or rather “fact”) than NASA is closer to sending a man to the farthest known planet.

    This absolute nonsense reminds one of a quote from Chesterton that describes perfectly what is going on here:

    “It is often supposed that when people stop believing in God, they believe in nothing. Alas, it is worse than that. When they stop believing in God, they believe in anything.”

    To follow your logic we might as well believe the narrative laid down for us in Star Wars. Some people will indeed believe in anything.

  6. Burk Braun says:

    Darrell-

    Sorry to get into such an unproductive mode here. We clearly are coming from vastly different directions, specifically in terms of our understanding of emotion. While it is easy to valorize emotions as amazing, savor-of-life, earthshaking, etc., there is also another side, where they can be viewed as very primitive- the most basic components of any life form. Worms have emotions. The divide between good and bad, tasty and bitter, attractive and repulsive is universal in animals and lower forms of life as well.

    So in the first place, it is mistaken to hang our hats as humans on some qualitative emotional difference from other species. We have certainly made things very complex for ourselves, but others feel very deeply as well.

    Secondly, you are sort of right to mock my argument about exhibiting emotion versus feeling emotion.. they are somewhat different things. But not as much as you may think. Our emotions are sometimes turned off, for instance in extremely dangerous situations.. you often hear of someone saying that everything started to go in slow motion, their head cleared, and they just did what was necessary.

    This kind of thing is a sign that emotions are a module, not a compositional universal, in our mental makeup. Much of our emotional experience is also not directly felt but judged from how our body is responding. Blushing is an example. As you know, I would bet that even the most direct kind of consciousness and feeling of these kinds of things is going to be analyzed and reproduced in the not too distant future as well.

    Generally, it seems to me bizarre for a philosopher especially to be fronting these kinds of demarcation arguments about how humans are somehow totally different from the rest of the animal and computational world, at this late date. Yes, we are more than our neurons.. we are the mind founded on billions of neurons, on enormous social history, and many other bequests. It is all amazing and wonderful. But that does not alter the utility of reductionism to drill down into the inner workings and figure out how the machinery really works. To think otherwise is to both deny the huge amount that has already been learned about how our minds work, and to deny future progress on a luddite program of “can't do that, so shouldn't try”.

  7. Darrell says:

    Burk,

    “Worms have emotions.”

    You have no idea whatsoever whether worms feel and experience emotions like humans do, no matter what behavioral or outward stimulus is being noted or tracked.

    “The divide between good and bad, tasty and bitter, attractive and repulsive is universal in animals and lower forms of life as well.”

    While reactions such as “tasty” and “bitter” might be able to be detected in animals and lower forms of life, you have no way of knowing if “good” “bad” “attractive” or “repulsive” even come close to what humans mean when they use those terms. Instinct and behavior do not translate into or capture what humans mean by those terms. This is so obvious a truth it staggers the mind that one would have to defend it. I feel like I’m in a conversation with someone who thinks zombies are possible.

    “So in the first place, it is mistaken to hang our hats as humans on some qualitative emotional difference from other species. We have certainly made things very complex for ourselves, but others feel very deeply as well.”

    What “others?” Unless these “others” are human, how would you know what they are feeling whether “deeply” or not? What does “deep” even mean in a materialist reductionist philosophy?

    “It is all amazing and wonderful.”

    Not for any reasons that would come from a reductionist materialist philosophy. In fact, the very purpose seems to be to reduce any amazement or wonderment. Part of the writer’s point was to resist this drive to “reduce” what it is that makes humans different from animals and machines.

    “But that does not alter the utility of reductionism to drill down into the inner workings and figure out how the machinery really works. To think otherwise is to both deny the huge amount that has already been learned about how our minds work, and to deny future progress on a luddite program of “can't do that, so shouldn't try”.”

    Here is a perfect example (while you could show none) of a straw-man argument. Nowhere was the writer or I trying to say that scientific work in these areas should stop or be demeaned. No one was saying that work in the areas of neurology or the behavioral sciences has been without benefit. Burk, you are so sure you know what people are “really” saying that you can’t even read or “hear” what they might truly be saying.

    His point was that none of the work in those areas presently, or for the foreseeable future, gives us any reason to believe that animals or machines can experience emotion or what it means to be “human” in the very same way we do. And he is absolutely correct.

  8. Burk Braun says:

    Well, this is the issue. Your writer says “I have no beef with entomology or evolution, but I refuse to admit that they teach me much about ethics.”

    At the same time, those he rails against say, as Darwin .. “He who understands baboon would do more towards metaphysics than Locke.”

    The writer goes on..
    “Knowing how my selfish and altruistic feelings evolved doesn’t help me decide at all.”

    “In fact, the very idea of an “ought” is foreign to evolutionary theory. It makes no sense for a biologist to say that some particular animal should be more cooperative, much less to claim that an entire species ought to aim for some degree of altruism.”

    “In short, a purely evolutionary ethics makes ethical discourse meaningless.”

    The hostility is all over the place. So your claim doesn't really hold water.

    But anyhow, the issue is not that scientists are calling humans ants, or telling us to learn to be more like mole rats. Those are straw men the writer freely strews about. The point that scientists make is simply that we are explicable. We can learn where our tribalism came from, and how it is encoded. We can learn where our sex drive came from and how it is encoded. We can learn where our altruism came from and how it is encoded.

    I know that you are hostile to that position. And that hostility has a very direct influence on what hypotheses you are willing to entertain and what kind of science you would support. And what is the point? A philosopher sure would understand the dictum, know thyself. That is what this is all about- the narrative and knowledge surrounding our self-understanding. That which the arts all aim at, as do some of the sciences. And religion as well. Who will tell us the most about ourselves? Hmmm

    “Show me the computer that can feel the slightest twinge of pain or burst of pleasure; only then will I believe that our machines have started down the long road to thought.”

    That is an odd definition of thought indeed. Genocidal maniacs have thought, but computers don't? If you love emotion, just say so. Don't call it thought. And if you value humans because of their emotional capacity, then you should value dogs and cats as well, right? How far do you go towards animal rights?

    Clearly, the issue is consciousness, which is indeed missing from computers. As I noted, emotions wouldn't be hard to encode, but consciously experiencing them in some human-like way.. that is something we don't yet understand, and have not yet reproduced. But the question again is whether there is in principle some reason to think it cannot be done, or should not be done.
