Straw Dogs Chapter 1: The Human

In this first chapter, John Gray presents the main theses that will be developed in the book and offers some arguments to support them.

Here is my own rendering of those main theses:

1) Humans are not special. They are not masters of their own destiny, but just another animal, with the same problems and irrational behavior that other animals show. “The upshot of scientific inquiry is that humans cannot be other than irrational.”

2) Humanism is not a rational view of the world, but a superstition based on Christian faith.

3) When Darwinism is properly understood, it becomes clear that there is no hope for human salvation. As the author states in the preface to the paperback edition: “A truly naturalistic view of the world leaves no room for secular hope.”

4) Science serves two main human interests: hope in a better future and the power to censor those who do not think like the majority. In today’s humanistic world it is the only unquestioned source of such beliefs.

5) Technology is not a source of good, but a way for humans to pursue whatever is most pressing now, without caring about moral issues or long-term consequences. “Pogroms are as old as Christendom, but without railroads, telegraphs and poison gas there could have been no Holocaust. […] humanity’s worst crimes were made possible only by modern technologies.”

So far, the book doesn’t sound much different from others nostalgic for an ancient regime of shared beliefs and social norms. Still, this chapter gives us powerful insights. Let me point out the ones I found most relevant:

– It is very difficult to establish beforehand the effect that a technology is going to have on humans.

– Technology is not something humans do. It has its own separate existence and inner laws. We can find technology in insects as well.

– The distinction between artificial and natural doesn’t make ultimate sense: “Cities are no more artificial than hives of bees”.

– Green humanism, despite its Luddite pretension that technology is harmful, shares with technological humanism the Christian superstition that humans are something special, with free will and the ultimate capacity to save themselves.

– Religious fundamentalism is not the answer to the problems that technological humanism gives us. “Religious fundamentalists see themselves as having remedies for the maladies of the modern world. In reality they are symptoms of the disease they want to cure.”

– Any fantasies about returning to a former time in which we shared common views of existence, a common faith and so on are doomed. “We cannot believe as we please. Our beliefs are traces left by our unchosen lives.”

And then there are some other parts I don’t find that good:

Arguments based on philosophy of science are not that impressive. When he quotes Feyerabend as the ultimate reference on Galileo, he is forgetting decades of further research in the history and philosophy of science showing that there is more to the debate on heliocentrism than being good at rhetoric and writing in Italian. Similarly, when he says that Popper’s philosophy of science can’t explain real developments in science, he chooses Einstein, which is a very bad choice indeed. One of the main case studies in favor of Popper’s falsifiability is the famous crucial experiment during the solar eclipse of 1919, which showed, in a very Popperian fashion, that Einstein’s General Theory of Relativity was, provisionally, right.

Plus, even if Gray is right that Popper got how science works wrong (a statement most philosophers of science would agree with nowadays), that would only show that Popper was wrong, not that science is irrational.

Also, throwing quantum mechanics into the discussion to show that the world is irrational is quite cheap. Unfortunately, it seems to be a common resource for people who want to show that science is irrational. I am planning to gather signatures for a law requiring anyone who uses quantum mechanics in a book in an oblique way related to some metaphysical debate to give 10% of their royalties to the “Leave Quantum Mechanics Alone Foundation”.

A great deal of the argumentation in this chapter is based on Lovelock’s Gaia hypothesis, the Earth as a sort of superorganism, used to show that humans, instead of being the kings of creation and masters of their own destiny, are just a disease on the planet, one that the Earth will sooner or later wipe out completely. The tone is dark and pessimistic. This sentence from the preface to the paperback edition says it all:

Happily, humans will never live in a world of their own making.


8 thoughts on “Straw Dogs Chapter 1: The Human”

  1. I second your motion to approve a law banning the use of quantum mechanics as an argument against rationality. If passed, it should be applied retroactively to many postmodern French writers as well.

    Back to Gray: another idea I find very interesting (you do mention it tangentially) is the destructive potential of modern technology. Gray posits that the Gulag or the Holocaust would not have been possible without modern means of communication and transport. This is only partially true, of course. You do not need modern technologies in order to produce genocide; the Rwandan genocide, for instance, simply required low-tech machetes, and it occurred at least as fast as the Shoah. On the other hand, modern technology (not only transport and communications but also operations and management technology!) was indeed indispensable for the careful planning and execution of the Gulag and the Holocaust. It is, I reckon, not about the numbers or the scale, but about the way it happened.

    Another idea I find very interesting is the notion that “the uses of knowledge will always be as shifting and crooked as humans are themselves”. Humans use what they know to meet their needs, even if it results in ruin. BTW, I’d say that this is the kernel of one of his arguments against the singularity, but that’s in a different book.

  2. Certainly. It should be applied retroactively. There you have the misconceptions of Baudrillard or Virilio, and also all the garbage from the New Age “Thinkers”.

    And yes, management and information theory aren’t as neutral as we like to think. The book “IBM and the Holocaust” explains in great detail how the Nazis made extensive use of databases in developing the “Final Solution”.

  3. I have a lot of issues with the book, some of them formal and some content-related.

    On the formal side, I cannot stop feeling as if the author were shouting at me. He writes a lot of things in an aphorism-like style, without further explanation, and relies on a kind of self-assurance to try to impress you as a replacement for missing arguments. Like door-to-door booksellers do. I don’t buy it.

    Second, I read a lot of pages without understanding anything. The “human species is not master of its destiny”? And someone is? And what does it mean to be “master of its own destiny” at all? It’s only after reading it again and again that I noticed that the book is written exclusively to “épater l’humaniste”; that is, it is written using humanistic categories in order to insult and infuriate humanists. But it is difficult to follow if you are not part of the sect. It is like a Buddhist reading a treatise denying the virginity of Mother Mary: he doesn’t see the point.

    Third, there is a lot of crap in it. Since the book is so aphoristic, it can pack idiotic statement after idiotic statement, and nobody will ever take the huge effort of trying to rebut them. And in the middle of it all, surprisingly, there appear correct ideas, interesting insights and nice metaphors. It seems to me as if John Gray were a pseudonym for a pair of authors working together: a smart, lazy one who writes only a few sentences, and a less brilliant one who fills pages and pages until the book is thick enough to be published.

    The book is so dense that, in order to be able to discuss this first chapter properly, we would have had to divide it into its 12 sections (one per week, I mean). Otherwise you don’t really know where to begin.

    But I am enjoying it. Sometimes I think it is a kind of black-humor literature. And I don’t rule out that it can sow interesting ideas into our brains. Much more thought-provoking than Diamond’s autobiography.

    • I agree that his aphoristic style can become annoying. My intuition, after reading only the first chapter, is that he is only laying out the general premises and the general tone (like you said, a frontal attack on humanists) and that those subjects will be developed and better argued in the following chapters.
      Nevertheless, even if he continues with the aphoristic style, as long as he gives interesting ideas to discuss, I’ll be content.
      We can also reconsider the way we comment on the book. If he is just throwing ideas at us without proper argumentation, then instead of revisiting every idea he presents, the person in charge of commenting on the chapter can choose the couple of ideas they find most relevant and discuss just those.

      • Well, in fact, I have always assumed that there is no established “way we comment on the book” and that we are free to do it as we please. Your idea of discussing a few selected ideas seems to me a promising approach for this book. Anyone is free to add new ideas in the comments section if they feel that something is missing.

  4. This guy has one central idea: technological progress is going to end very badly, because our ugly human nature doesn’t change and we are going to use technological power for dirty things.
    I think that he is even too optimistic. Technological progress is dangerous EVEN in the hands of angels, because the power it produces multiplies the effect of random noise in nature to the point that it MUST (in my opinion) sooner or later destroy everything. You don’t need to mishandle very powerful technologies. Sooner or later, something horrible is going to happen.

    • I totally agree with you about technological progress multiplying the intrinsic random risks present in nature. On the other hand, I believe he doesn’t only mean to alert us to the “wrong” use of technology or to the myth of progress (the “good” use of technology). As I understand Gray, his point is that the very use of technology (or maybe technology itself) inherits the ethics of humans, who are intrinsically flawed.

  5. Pingback: Straw Dogs: 2. The Deception | El Pla Subtil
