This essay by Thomas Wolf has been generating buzz among AI enthusiasts, and for good reason. I agree that an Einstein-type AI model is not possible on our current trajectory. This is clear to anyone who has experience training machine learning models. The “intelligence” in AI is, at its core, pattern recognition. You feed it thousands of photos of roses, and it detects patterns, eventually recognizing what we mean by “rose.” Even though there is no single definitive feature that categorically defines a flower as a rose, AI, given enough data, begins to recognize a fuzzy, inexplicable pattern. This is precisely what our brains do. We cannot agree on a universal definition of, say, “art,” yet we can recognize a pattern that eludes language. When we speak of “intelligence” in AI, we are referring to this very specific type of pattern-based intelligence. However, it is important to acknowledge its significance rather than dismiss it outright as a limited form of intelligence.
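To make that concrete, here is a minimal sketch of the kind of pattern learning described above: a classifier fit on labeled example photos, with no explicit definition of “rose” anywhere in the code. The folder layout (`photos/rose`, `photos/not_rose`), the image size, and the choice of a simple logistic regression are all illustrative assumptions; real systems use deep networks, but the principle is the same—whatever “rose” means is extracted from the examples, not written down.

```python
# Illustrative sketch only: the folder paths, image size, and model choice are
# assumptions for the example, not a description of any particular system.
from pathlib import Path

import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def load_folder(folder: str, label: int, size=(64, 64)):
    """Load every JPEG in a folder as a flattened pixel vector plus a class label."""
    X, y = [], []
    for path in Path(folder).glob("*.jpg"):
        img = Image.open(path).convert("RGB").resize(size)
        X.append(np.asarray(img, dtype=np.float32).ravel() / 255.0)
        y.append(label)
    return X, y


# Hypothetical folders of labeled photos -- the only "definition" of a rose the model ever sees.
rose_X, rose_y = load_folder("photos/rose", label=1)
other_X, other_y = load_folder("photos/not_rose", label=0)

X = np.array(rose_X + other_X)
y = np.array(rose_y + other_y)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# The model is never given a rule like "a rose has layered petals";
# it only finds statistical regularities in the labeled pixels.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2%}")
```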
Pattern recognition is precisely what A-students excel at. Those with high IQs and top SAT scores tend to have superior abilities to recognize patterns. Wolf argues that this is not the kind of intelligence required to be a paradigm-shifting scientist. “We need a B-student who sees and questions what everyone else missed.” True. When it comes to pattern recognition, AI models are already more intelligent than most of us. They have essentially mastered human knowledge within one standard deviation of the bell curve. If you want to know what the “best practices” of any field are, AI’s answers are hard to beat because it has access to more collective human knowledge than any individual. One caveat, however, is that “best practices” are not necessarily the best solutions—they are merely what most people do. The assumption is that widespread adoption signals superiority, but that is not always the case.
This is, of course, useless if your goal is to be a Copernicus. Imagine if AI had existed in his time. Even if his heliocentric model were included in an AI’s training data, it would have been just one idea among billions. A unique idea cannot form a pattern by itself—yet paradigm shifts depend on precisely such anomalies.
Could AI engineers build a model that recognizes the pattern of paradigm shifts? I don’t know, but it would be relatively easy to test. All we need to do is ask AI to trade stocks. If it can consistently generate profit, then we will have built such a model. Why? Because the stock market is a great example of a pattern-defying pattern. When any pattern is identified—say, in arbitrage trades—machines can exploit it for profit, but once the pattern becomes widely recognized, it disappears. This is akin to the observer effect in science. To succeed, AI would need to grasp not just patterns but the nature of patterns themselves. It would need to understand what a “pattern” is in the same way that we might understand the meaning of “meaning.” I would not say this is impossible, but we do not yet have such an AI. I imagine some scientists are working on this problem as we speak.
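A toy simulation can illustrate the “pattern-defying pattern.” Assume a strategy starts with a small daily edge, and that edge decays as other participants discover and trade the same signal. Every number below (the initial edge, the adoption rate, the noise level) is invented for illustration; this is not a market model, just a sketch of how a pattern’s profitability erodes once it is widely recognized.

```python
import numpy as np

rng = np.random.default_rng(0)

periods = 250            # one simulated trading year
base_edge = 0.002        # daily excess return while the pattern is still obscure (invented number)
adoption_rate = 0.03     # fraction of the remaining edge lost each day as others copy the strategy
noise = 0.002            # daily market noise around the edge

days = np.arange(periods)
edge = base_edge * (1 - adoption_rate) ** days       # the edge decays as the pattern becomes widely known
daily_pnl = edge + rng.normal(0.0, noise, periods)   # realized daily profit = shrinking edge + noise
cumulative = np.cumsum(daily_pnl)

print(f"edge on day 1:     {edge[0]:.4%}")
print(f"edge on day {periods}:   {edge[-1]:.4%}")
print(f"cumulative profit: {cumulative[-1]:.2%}")
```

The specific numbers mean nothing; the shape is the point. As adoption compounds, the edge collapses by roughly three orders of magnitude over the simulated year, and that collapse is exactly what an AI would have to anticipate rather than merely detect after the fact.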
Though this discussion may seem abstract, it has deeply practical implications for all of us. If AI is essentially an infinitely scalable A+ student, then the future for human A+ students looks bleak—because their abilities can now be purchased for $20 a month. So how do we avoid their fate? As teachers and parents, what should we encourage our children to pursue? Here, we run into the very problem we’ve been discussing. Any solution we propose will be a generalized pattern. We cannot communicate an idea unless it can form a pattern. The solution, therefore, will be akin to an algorithmic trading model: profitable for a short time until others detect the same strategy and neutralize it. To be a Copernicus or an Einstein, one must transcend patterns, not simply navigate them.
Institutional learning offers no help because institutions, by definition, rely on patterns. They cannot admit students at random; they must adhere to a philosophy, worldview, or ideology that informs their selection process. In other words, institutional learning is structurally at odds with the nature of true paradigm-shifting thinkers. Institutions, by necessity, attract those with superior pattern recognition skills—individuals who can discern the patterns of admission criteria and master them. This means that, in theory, it is impossible to build an institution that consistently produces Copernicuses or Einsteins.
The only viable approach is to discourage children from focusing too much on pattern recognition, as it has already been commodified. The one remaining form of intelligence that AI has yet to replicate is the inexplicable human ability to question established patterns and make meaningful, transformative departures from them.