Straw Dogs by John Gray

Straw Dogs is John Gray’s assault on humanism. Gray, a British philosopher, doesn’t do optimism. He challenges the belief in human progress and our supposed uniqueness in nature.

The title comes from an ancient Chinese ritual: straw dogs were treated as sacred during ceremonies, then unceremoniously discarded afterward. For Gray, humanity itself is such a straw dog. Temporarily elevated by our own narratives, but ultimately disposable in nature’s indifferent scheme.

Against Humanism: The Religion of Progress

Humanism, Gray argues, is a post-Christian religion masquerading as secular rationality. The assumption that humans can improve the world through reason and moral action is, in his view, dangerous folly inherited from Christianity’s teleological worldview.

Where Christianity promised salvation through Christ, humanism promises salvation through science, technology, and moral progress.

But Gray sees no evidence for this optimism. Humans became the dominant species through evolutionary luck as much as anything else. Climate change may be the mechanism through which the planet strikes back. Like other animals under stress, humans respond to environmental pressure with reduced reproduction, increased infections, and war. Not with enlightened cooperation, but with the same brutal mechanisms that govern all of nature.

Human (Non-)Exceptionalism

Gray’s most provocative claim: human consciousness does not make us special.

He draws on Schopenhauer’s dismissal of Kant’s rational individual. Humans are not autonomous conscious agents but, like all animals, embodiments of a universal Will. Our self-awareness is neither unique nor elevating.

This connects to Douglas Hofstadter’s “strange loop” theory in Gödel, Escher, Bach. Consciousness emerges from lower-level neural activity, much as an ant colony’s intelligence emerges from individual ants.

Where Hofstadter finds beauty in this emergent complexity, Gray sees only further evidence that our consciousness is nothing special. Just another natural phenomenon. Nothing that elevates us above other animals or grants us cosmic significance.

Free will? A trick of the mind. A post-hoc rationalization we use to justify our actions. We tell ourselves stories about our choices, but these narratives are illusions.

Unconsciousness is just as powerful as consciousness, which is why meditation and similar practices aim to quiet the chattering mind. Gray doesn’t criticize these practices. He frames them as a correct understanding of the human condition and a solution to the problem of the burdensome conscious self.

Technology: Master or Plaything?

We cannot control technology, Gray insists. Humankind will misuse it despite our benign intentions. Science cannot bring reason to an irrational world. This contrasts with our current techno-optimism.

Gray’s vision of humans being replaced by their technical creations parallels Yuval Noah Harari’s warnings about AI and biotechnology. But Harari’s view is humanistic, concerned with preserving Homo sapiens as we know them. For Gray, human obsolescence is simply another turn in nature’s wheel. His question, “Would these machine replacements be more destructive than humans? Would it be worse?” betrays his anti-humanist stance. There is no cosmic scorecard. No inherent value in human survival.

In the future Gray envisions, digital technology will create a new wilderness, incomprehensible to humans in its entirety, extending the real world. Machines will have souls, spirits. Animism will extend to technology.

This is not science fiction dystopia but natural evolution. Consciousness was never exclusively human, so why shouldn’t it manifest in our mechanical offspring?

Language, Media, and the Manufactured Self

We use language to look back and forward, to create stories about ourselves. Christianity and humanism both destroy tragedy as a concept because they insist that there is always a better life possible. Either in this world through progress or in an afterlife.

But tragedy requires accepting that some suffering is meaningless, some losses irredeemable.

Gray observes that consciousness emerged as a side effect of language. Today, it has become a byproduct of the media. This connects directly to Neil Postman’s argument in Amusing Ourselves to Death about how media shapes consciousness.

Postman warned in his book that our obsession with entertainment and visual media would create what Huxley feared: a trivial culture “preoccupied with some equivalent of the feelies, the orgy porgy, and the centrifugal bumblepuppy.”

Gray’s observation that consciousness itself has become a media byproduct represents the ultimate fulfillment of Postman’s prophecy. We no longer consume media; media constitutes our inner lives. The self is manufactured, edited, and curated. A performance staged for an audience of ourselves and others, mediated through screens and feeds.

This connects to Marshall McLuhan’s famous dictum: “the medium is the message.” The technology itself, not its content, shapes consciousness and social organization. As Oliver Burkeman argues in Four Thousand Weeks, we’ve become so addicted to our devices and information streams that we’ve lost touch with our finite existence.

Gray would agree. Our media-saturated consciousness is just another distraction from the fundamental fact that we’re animals, not special beings with privileged access to truth or meaning.

Morality as Accident

Gray follows Freud in arguing that a sense of justice depends on the accidents of childhood. Being good is a result of good luck, not moral choice.

Moral intentions have a short history. Equality, the current moral orthodoxy, may well be succeeded by another framework. And so will our concepts of justice.

This relativism extends to the good life itself. Personal autonomy is a fiction. The most essential things in our lives are unchosen. We must improvise. The good life has no principles, no purpose. It simply is. What needs to be done is individual, not bound by universal morality. It comes naturally, or it doesn’t.

Provocatively, Gray notes that pleasure is most intense when mixed with sensations of immorality. (Like humor is best when it has a vile edge.) The good life flourishes not through following moral truths but despite, or because of, immorality.

This isn’t nihilism so much as naturalism. Animals don’t consult ethical frameworks, yet they live and flourish.

Economic Realities and the Obsolescence of the Masses

Industrialization created the working class and will make it obsolete. Gray predicted this before Piketty and Sandel analyzed how meritocracy creates a new aristocracy.

Sandel’s The Tyranny of Merit nails it: our meritocratic system humiliates losers while making winners insufferable. Piketty and Sandel want progressive taxation, greater equality, and what Sandel calls “contributive justice”: ensuring everyone can contribute to the common good and receive recognition for it.

Gray would call this one more humanist delusion. The very belief that we can engineer a more just society through policy reform is the folly he attacks. Moral intentions have a short history. Today’s orthodoxy of equality will be succeeded by another. Justice itself is contingent, not absolute.

Economic life is geared toward satisfaction, manufacturing increasingly exotic needs, goods, and experiences. Drugs, sex, violence: antidotes to boredom. This is consumer capitalism’s truth, stripped of pretense. We’re not building toward anything. We’re distracting ourselves from the void.

Gray wrote during a period when wars were increasingly seen as non-state-driven: Al Qaeda, terrorism. We know better now. Russia operates as a mafia-based anarcho-capitalist state, spreading its model across the Western world. The US, Hungary, elsewhere. (Putin’s kleptocracy as export model—what a time to be alive.)

Future wars will be wars of security, not ideology. War has become a game, an entertainment for consumers in rich countries. Real war remains a habit of the poor, a violent chase for the dream of freedom.

Religion, Atheism, and the Death of God

Atheism, Gray argues, is part of Christianity. Polytheism never produced it.

Christianity was the first religion to claim exclusive truth: one God, one path to salvation. When Europeans stopped believing in God, they didn’t abandon this structure. They simply replaced God with other absolutes: progress, reason, science, humanity.

Technical immortalists believe technology can make humans immortal. (Really, these Silicon Valley types are just monks in hoodies.) They’re engaged not in a scientific project but in a religious one, attempting to free us from fate and mortality.

Suffering, savior, deliverance: constructs designed to attract and retain believers in faiths, including Christianity and humanism. In humanism, miracle, mystery, and authority are embodied by science and technology.

But this is, as the Dutch say, a hersenschim—a phantom, an illusion.

The advance of our knowledge deludes us into thinking we’re different from animals. We’re not.

Gray’s Consolation: The Art of Contemplation

After this relentless demolition, Gray offers an unexpected consolation, a way to deal with the horrific facts we mortal humans face.

Action to create progress is illusory. Contemplation is underrated. Progress implies a destination. Play has no point. We labor like Sisyphus, pushing the boulder up the hill, watching it roll back down.

But can we make labor more playful? Can we approach technology and science not as means of mastering the world but as forms of play? No mastering, no progress. Just play.

Spiritual life, in Gray’s conception, is a release from the search for meaning. The perfection of humankind is a dreary purpose. The idea of progress is like searching for immortality, a denial of what we are.

Contemplation means surrendering to the never-returning moments, turning away from yearnings, and focusing on mortal, transient things. Groundless facts, things that simply are, without justification or purpose, are the proper objects of contemplation.

The aim of life: to see.

Not to improve. Not to progress. Not to perfect. Just to see. Clearly. Without humanistic hope blurring the view.

Conclusion: Debunking as Philosophy

Gray’s Straw Dogs is philosophy as demolition. Not comfort, not guidance. Just stripping away delusions.

Harari warns of AI doom. Piketty and Sandel champion equality. Postman’s media warnings were vindicated and ignored. We still believe in progress, in human perfectibility.

Gray’s voice? Either necessary corrective or intolerable provocation.

Probably both.

Connections

Without a preconceived plan, I have written about Neil Postman’s media critique, about Burkeman’s meditation on mortality in Four Thousand Weeks, about McLuhan’s “the medium is the message.” Gray’s pessimism dialogues with all of them. Also with Hofstadter on consciousness, with Piketty and Sandel on meritocracy, with Harari on technology’s future.

Gray rejects control and mastery, like Taleb in Antifragile. Taleb’s distinction between the fragile (technology, complex systems) and the antifragile (natural processes, ancient wisdom) parallels Gray’s preference for contemplation over action. Both recognize that human attempts to engineer perfect systems inevitably backfire.

Burkeman’s meditation on our four thousand weeks echoes Gray’s call to surrender to finitude. Where humanists seek immortality through progress or technology, both Burkeman and Gray counsel acceptance of mortality as the path to authentic living. The “paradox of limitation” Burkeman describes (that embracing our constraints makes life more meaningful) is fundamentally Gray’s position: stop trying to transcend your animal nature and simply live within it.

AI considered not so harmful

Cal Newport

Computer Science professor, writer, and podcaster Cal Newport debunks the hysterical reactions to the latest AI developments. Much of this hysteria originates from the media’s hunt for attention rather than from research executed with scientific rigor. “We have summoned an alien intelligence,” writes Harari, who is slowly but surely turning into a Luddite and professional technology pessimist.

Cal Newport does what Harari and others should have done. In his Deep Questions podcast Defusing AI panic, he takes the subject apart.

Only by taking the time to investigate how this technology actually works—from its high-level concepts down to its basic digital wiring—can we understand what we’re dealing with.

Cal Newport tells us what ChatGPT does and how intelligent it is. We will see that it is pretty limited.

The result of these efforts might very well be jaw-dropping in its nuance and accuracy, but behind the scenes, its generation lacks majesty. The system’s brilliance turns out to be the result less of a ghost in the machine than of the relentless churning of endless multiplications.

A system like ChatGPT doesn’t create, it imitates.

Consciousness depends on a brain’s ability to maintain a constantly updated conception of itself as a distinct entity interacting with a model of the external world. The layers of neural networks that make up systems like ChatGPT, however, are static…

It’s hard to predict exactly how these large language models will end up integrated into our lives going forward, but we can be assured that they’re incapable of hatching diabolical plans, and are unlikely to undermine our economy.

In the podcast, Cal Newport is more technical in his explanations. From the transcript (with light editing for punctuation by me):

What a large language model does is take an input. This information moves forward through layers; it’s fully feed-forward, and out of the other end comes a token, which is a part of a word. In reality, it’s a probability distribution over tokens, but a part of a word comes out the other end; that’s all a language model can do. Now, how it generates what token to spit out next can have a huge amount of sophistication …

Where it gets interesting, when I talk to people, is when you begin to combine this really sophisticated word generator with control layers: something that sits outside of and works with the language model. That’s really where everything interesting happens. Okay, this is what I want to better understand: the control logic that we place outside of the language models, so that we get a better understanding of the possible capabilities of artificial intelligence. Because it’s the combined system, language model plus control logic, that becomes more interesting. Because what can control logic do?

It can do two things. First, it chooses what to activate the model with, what input to give it. Second, it can actuate in the real world based on what the model says. So it’s the control logic that can put input into the model and then take the output of the model and actuate that: take action, do something on the Internet, move a physical thing.

Something I’ve been doing recently is sort of thinking about the evolution of control logic that can be appended to generative AI systems like large language models…

If you look at the picture I created after Cal Newport’s talk, you can see the different control layers. As Cal Newport points out, that is where the actual work is done. The LLM itself is static; it produces a token, and that’s it. The control logic knows what to do with that output.
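To make Newport’s “language model plus control logic” picture concrete, here is a toy sketch. Everything in it is invented for illustration: the stand-in model just samples tokens from a fixed distribution, and the control logic wraps it exactly as Newport describes (choose the input, call the model token by token, act on the output).

```python
# Toy sketch of "language model plus control logic".
# All names and logic here are illustrative, not any real API.
import random

VOCAB = ["the", "control", "logic", "does", "the", "work", "."]

def language_model(prompt_tokens):
    """Stand-in for an LLM: one feed-forward pass returning a sample
    from a probability distribution over tokens. The model is static;
    it knows nothing about what happens with its output."""
    weights = [len(t) for t in VOCAB]  # arbitrary fixed weights
    return random.choices(VOCAB, weights=weights, k=1)[0]

def actuate(text):
    """Stand-in for acting in the world: call an API, move a thing."""
    return f"ACTION: {text}"

def control_logic(goal, max_tokens=10):
    """The interesting part, per Newport: decide what to feed the
    model, then decide what to do with what comes out."""
    tokens = goal.split()                      # 1) choose the activation
    for _ in range(max_tokens):
        tokens.append(language_model(tokens))  # 2) one token per call
    return actuate(" ".join(tokens))           # 3) act on the output

print(control_logic("Summarize:"))
```

The asymmetry is the point: all the decisions live in `control_logic`, while `language_model` is a dumb, stateless token generator.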

Control layer in contemporary artificial intelligence

Now, the control logic has increased in complexity. We know better what to do with the answers AI gives us.

Newport fantasizes about a third control layer that can interact with several AI models, keep track of intention, perform visual recognition, and execute complex logic. That is where we would approach Artificial General Intelligence.
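Purely as a thought experiment, such a third layer might look like a coordinator that tracks an intention across steps and routes work to whichever model fits. Every name below is hypothetical; no system like this exists in the text, and this sketch only shows the shape of the idea.

```python
# Hypothetical sketch of a "third control layer" coordinating
# several models while tracking intention. All names are invented.

def text_model(prompt):
    """Stand-in for a language model."""
    return f"text({prompt})"

def vision_model(image_id):
    """Stand-in for a visual-recognition model."""
    return f"labels({image_id})"

class Coordinator:
    """Keeps track of the user's intention across steps and decides
    which model to activate next."""
    def __init__(self, intention):
        self.intention = intention
        self.history = []  # remembers every step, unlike a static LLM

    def step(self, task):
        kind, payload = task
        # Route to the right model based on the task type.
        result = vision_model(payload) if kind == "image" else text_model(payload)
        self.history.append((task, result))
        return result

c = Coordinator(intention="describe the photo")
labels = c.step(("image", "IMG_001"))
summary = c.step(("text", "summarize: " + labels))
```

The contrast with today’s systems is that the routing and the memory of intention are themselves hand-written here; nothing learns or generates this control logic.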

But, as Newport points out, nobody is working on this.

Just as important, this control logic is entirely programmed by humans. We are not even close to AI-generated, self-learning control logic, what Newport calls intentional AI (iAI). It is not clear whether this is even possible with our current AI technology.

It’s the control logic where the exciting things happen.

It’s still people doing the control logic.

In 1990, a friend of mine graduated with a thesis on Fuzzy Logic. That was probably the height of the Fuzzy Logic hype, when it was one of the technologies that would turn societies upside down. Nowadays, Fuzzy Logic is just one of many technologies, applied where it fits the purpose and the problem space.

What looks like science fiction today is the mainstream technology of tomorrow. Today’s AI is tomorrow’s plumbing. That is my take on Cal Newport’s explanation of today’s state of the AI art.

Opting out of Instagram AI

As European users, we can opt out of Instagram and Facebook using our posts for AI training. I’ve exercised this control, as I am the product of Facebook and Instagram, but I strive to limit their use of me as such.

Opting out on Instagram looks deliberately cumbersome. However, from Facebook, which is also owned by Meta, I received an email with very simple instructions.

Now, I am curious if they can prove they are not using my data for AI.

The cost of AI and other challenges

I stumbled upon this fascinating article by Stuart Mills on the challenges facing the further development and operation of AI models.

The costs of model development and operation are increasing. Efficiencies in development and operation are challenging but may be addressed in the future. However, model quality remains a significant challenge that is more difficult to solve.

Data is running out. Solutions such as synthetic data also have their limitations.

There is also a severe challenge around chips. There is a supply shortage in the context of geopolitical tensions between China, the US, and the EU. Also, the environmental costs of running large AI models are significant.

Two revenue models may emerge in the AI industry, each with its own take on the cost aspects highlighted above. The first is the ‘foundation model as a platform’ (OpenAI, Microsoft, Google), which demands increasing generality and functionality of foundation models.

The second is the ‘bespoke model’ (IBM), which focuses on developing specific models for corporate clients.

Government action can both support and undermine the AI industry. Investment in semiconductor manufacturing in the US and China may increase the supply of chips, and strategic passivity from governments on regulation, such as copyright, suits the industry. Governments should intervene where the AI industry does social and environmental damage: data centers, copyright infringement, exploitation of laborers, discriminatory practices, and market competition.

AI, duh; make it personal (and analog)

The competition

When Artificial Intelligence-generated images win photo contests, should we oppose that?

I just think the developments in AI are telling us to do things differently, to stand out. AI has become the competition (and maybe just a tool), like all other photographers are. So we have to treat AI as competition, too. You can try to deny this reality, or you can look at how you, as a photographer or artist, can differentiate yourself from this new colleague/competitor.

Ideas:

  • Stories instead of single images. Combine with text.
  • An analog version of your work: a print, a book, wallpaper, toilet paper, t-shirts, quilt covers, printed bags, whatever.
  • Combine your photos into a video.
  • Handmade books.
  • Collages.

Personal and analog distinguish you from the aggregated, statistically generated products of AI.