AI’s Influence on the Meaning of Life

Food for Thought

I suspect I’m not the only one who feels that AI is throwing into question what I want to do with my life. And I don’t just mean what I should do to survive—beyond such practical concerns, AI also raises questions about our desires. Last month, while spending time in Japan, I thought it might be fun to get back into drawing cartoons. But then I had to ask myself: why should I want to draw by hand when AI can generate what I envision? Which aspect of cartooning do I actually desire? It’s time to take a step back and ask some existential questions.

Philosophically, I’m neither for nor against AI. It is simply something happening in the world that I observe. My mind is too limited to predict whether AI will save humanity or bring about its end. Even in the worst-case scenario, as far as the Earth or the universe is concerned, it would be just one more species going extinct among countless others. The role of morality is to govern human behavior; outside of our minds, it is meaningless.

However, I must admit that I find the pursuit of endless productivity silly. Now, we are witnessing an AI arms race, particularly between the US and China. Ultimately, it is a race of productivity—who can outproduce the other—and AI is merely a tool, or a weapon, for that war. As a philosopher, I must ask what the point is, but in the grand scheme of things, nothing we do has a point. We concoct a meaning from what we do and act as if it’s universally meaningful. In that sense, productivity as the meaning of life is no better or worse than any other. What comes across as silly is the “acting as if” part.

For instance, Sam Altman, the CEO of OpenAI, sure acts as if he is saving the world, or at least Americans, even though there is no fundamental need to develop AI. Humanity will be fine without it. It’s merely a solution that turned into a problem. We solved all the real problems a long time ago. All our problems today (like climate change) were created by our “solutions” (like industrialization). If we stopped solving problems, new problems wouldn’t arise, but human minds need problems and obstacles. Without something to strive for or overcome, we would cease to be human, which is why I’m not for or against AI. We are destined to struggle, even if it means we have to create our own struggles artificially.

“Man’s desire is the desire of the Other,” as Jacques Lacan said. As the Other is transformed by AI, our desires are inevitably transformed as well. The web was originally developed as a public repository of knowledge, but large language models like OpenAI’s have already harnessed and distilled much of that knowledge. As people begin turning to AI for answers instead of search engines, the desire to share knowledge publicly on the web will diminish. Information will still be used to train AI models, but few will visit your website to read it—no fun if your goal is to engage with others.

As of today, most of our social engagement—not just on social media, but across platforms of every kind—consists of emailing, chatting, and talking with other humans. AI agents like ChatGPT will soon take over a significant share of these interactions as they become more knowledgeable and intelligent. When it comes to practical information and advice, consulting other humans, with their limited knowledge and intelligence, will begin to feel archaic and inefficient. It would be like paying a hundred dollars for a handmade mug when all you need is something functional for your office. Even if that price fairly compensates for the time, skill, and materials involved, buying it will feel increasingly wasteful.

Often, what we enjoy about our work is the process, not the results. It’s not just about acquiring information but about the process of seeking it from others. It’s not just about the final song but about composing, playing, and recording. Yet, in the name of productivity, AI will make it increasingly difficult to monetize these enjoyable processes. We will have to fight for processes that AI has not yet mastered. There will still be problems for us to solve, but only because our own solutions will artificially create new problems.

I, for one, am optimistic about our ability to generate challenging, unnecessary problems—but I’m less certain whether we can continue to enjoy the process. The speed of technological disruption will only accelerate, and it is already outpacing our ability to adopt and adapt. At some point, we might all throw in the towel. What that world looks like, I have no clue—but I’m curious to see.