Interest Intersection
by Carson Kempf
Risks of Simplifying Complex Systems
Simplifying complex systems can present risks, particularly when the distinction between abstraction and simplification is not clear. Abstraction hides implementation details but preserves essential functionality, while simplification reduces complexity by removing features or nuance, potentially causing a loss of information.
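The distinction is easier to see in code. Here is a minimal Python sketch, with purely hypothetical class names and records, of the difference between hiding detail and discarding it:

```python
# Abstraction: callers no longer see HOW data is stored,
# but every essential capability is still reachable.
class UserStore:
    def __init__(self):
        self._rows = {}                      # implementation detail, hidden

    def save(self, user_id, record):
        self._rows[user_id] = dict(record)   # full record preserved

    def load(self, user_id):
        return self._rows[user_id]

# Simplification: the interface shrinks because information
# is discarded; part of the record is lost for good.
class SimpleUserStore:
    def __init__(self):
        self._rows = {}

    def save(self, user_id, record):
        self._rows[user_id] = {"name": record.get("name")}  # lossy

    def load(self, user_id):
        return self._rows[user_id]

store, simple = UserStore(), SimpleUserStore()
record = {"name": "Ada", "email": "ada@example.com"}
store.save(1, record)
simple.save(1, record)
print(store.load(1))    # {'name': 'Ada', 'email': 'ada@example.com'}
print(simple.load(1))   # {'name': 'Ada'}  (the email is unrecoverable)
```

Both classes present a small interface, but only the first one can give back everything it was given.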
The risks of simplification include the system functioning as a “black box,” which makes debugging nearly impossible. Hiding complexity also rarely eliminates it; more often, it is simply pushed to another part of the system.
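As a minimal sketch of that failure mode (the function and error below are invented for illustration):

```python
def _query_exchange(symbol):
    # Stand-in for a real network call; it fails to show the failure mode.
    raise ConnectionError(f"exchange unreachable while fetching {symbol}")

def fetch_price(symbol):
    try:
        return _query_exchange(symbol)
    except Exception:
        # Swallowing the exception turns this function into a black box:
        # a typo in `symbol` and a network outage now look identical.
        return None

price = fetch_price("AAPL")
if price is None:
    # The complexity was not removed, only relocated: the caller
    # must now diagnose a failure it has no information about.
    print("no price, and no way to know why")
```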
AI and Abstraction
AI can assist with abstraction and create a layered understanding, which differs from mere simplification. Unlike traditional linters that simply flag issues, AI systems can explain the reasoning behind code patterns, suggest contextual improvements, and adapt explanations to a user’s experience level. This approach, known as complexity scaffolding, gradually introduces complexity as the user becomes more comfortable. Successful abstraction preserves the user’s ability to “peek under the hood”.
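A small Python sketch of such an escape hatch, built on the standard sqlite3 module (the KeyValueDB wrapper itself is hypothetical):

```python
import sqlite3

class KeyValueDB:
    """A tiny key-value facade over sqlite3; names are illustrative."""

    def __init__(self, path=":memory:"):
        # Escape hatch: the real connection stays accessible as .raw
        self.raw = sqlite3.connect(path)
        self.raw.execute(
            "CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)"
        )

    def put(self, key, value):
        self.raw.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (key, value))

    def get(self, key):
        row = self.raw.execute("SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
        return row[0] if row else None

db = KeyValueDB()
db.put("lang", "python")
print(db.get("lang"))                                 # the scaffolded interface
print(db.raw.execute("SELECT * FROM kv").fetchall())  # peeking under the hood
```

The wrapper simplifies the common case, but a user who outgrows it can drop down to the underlying connection rather than being locked inside the abstraction.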
The Authenticity of Machine-Mediated Learning
A key question is whether AI-mediated learning constitutes genuine education or merely fosters dependency. Pedagogical research suggests that struggling with complexity is how mental models and problem-solving skills develop, which raises the question of whether removing the struggle also removes the learning.
Another concern is the transparency illusion: a complex system can appear transparent without actually being so. Better maps do not always lead to better navigation, and can sometimes mislead us about the true nature of the terrain.
The Nature of Understanding
Understanding can be defined in multiple ways: being able to successfully modify code, predict its behavior, explain its purpose, or grasp the programmer’s intent. A person might possess only the conceptual knowledge, able to explain a function’s purpose without being able to change it safely, which raises the question of whether they truly understand. This is the classic distinction between procedural knowledge (“knowing how”) and declarative knowledge (“knowing that”).
The Intentionality Problem
Code embodies human intentions, which are often implicit. When an AI explains code, it’s not clear whose intentions it’s interpreting: the original programmer’s or the collective wisdom of its training data. This can result in the AI attributing intentions to a programmer that were not actually there. Programmers themselves sometimes do not understand the implications of their own code.
The Democratization Paradox
The use of AI tools makes programming more accessible, which is a positive development. However, a counter-argument exists that programming has historically served as a “natural filter,” ensuring that those who work with complex systems develop necessary skills like patience and prioritization. This creates a tension between gatekeeping and quality control.
Philosophical Reflections: The Story of Theuth and Thamus
In Plato’s Phaedrus, Socrates tells the story of the Egyptian god Theuth, who invented writing and presented it to King Thamus. Theuth claimed writing would make Egyptians wiser and improve their memories. Thamus, however, rejected the gift, arguing that it would “actually weaken human memory and understanding” by creating “forgetfulness in the learners’ souls” because they would rely on external characters and not their own memories. Thamus was concerned with whether people would gain the “appearance of wisdom” or “true wisdom”.
This story illustrates a recurring tension in the development of human technology. Just as Thamus feared writing would “atrophy human memory,” people now worry that AI will diminish problem-solving and critical thinking skills.
Tools as Cognitive Extensions
According to the “extended mind” thesis, developed by Andy Clark and David Chalmers in 1998, tools don’t just help us think; they become part of our cognitive system. An experienced carpenter’s hammer, for instance, becomes “an extension of her mind”: cognition literally extends beyond the skull and into the tool. John Dewey similarly argued that thinking is not confined to our skulls but involves our entire interaction with the environment, viewing cognition as an “organism-environment coupling” rather than a purely internal mental process.
Conclusion
The debate, spanning from Plato’s time to today, is not about whether we should avoid new tools, but about how to use them while keeping our intentions intact. One way to do that is to distinguish between technologies that simplify by removing features (simplification) and technologies that abstract successfully by preserving essential functionality (abstraction). AI, like other tools, has become a cognitive extension: it strengthens our ability to interact with the environment and to solve simple problems. The risk lies not in using AI, but in letting it leave us with a pseudo-understanding of our real environment. The people who get the most out of AI will be those best at knowing what they don’t know.