What concerns me most about AI is that its development is self-directed by techies and entrepreneurs. It reminds me of the 90s, when engineers designed user interfaces and blamed average users for their inability to use them. The engineers thought other people were dumb because they couldn’t see what the engineers saw, unaware of their own blind spots. When I listen to people like Sam Altman and Bill Gates talk about AI, despite their intelligence, I’m reminded of that philosophical blindness.
For instance, they assume “productivity” is unconditionally positive for humanity. But what if your joy in life is the process of baking bread? From the point of view of productivity, you should let AI-driven machines make your artisanal bread, since they can bake much faster and run 24/7.
AI poses an existential threat to humanity. If techies and entrepreneurs continue to self-direct the development of AI, they will, at the most fundamental level, pass on their unconscious assumptions about the meaning of life. This is an area in which philosophers, who have dedicated their careers to such questions, should be involved.
Unfortunately, everyone thinks they are qualified to have philosophical debates without formal study, much like many users today think they are user-experience experts simply because they use websites and apps daily.
Non-engineers eventually reined in poor user experience, but I’m not sure the same will happen with AI, since it has a life of its own. By the time we realize AI holds one-dimensional assumptions about why we exist, it might be too late. The masses will be hypnotized by those assumptions and never question why they do what they do in life, which is not much different from what the media does today. The media, however, is at least a human product with competing interests. An AI in charge would lead to totalitarianism with a democratic facade, where minority voices have no chance of being heard.
I will email you when I post a new article.