On AI, ambition, and who actually controls the new superpower
A few months ago, a teenager in rural Romania used a free AI tool to write a business plan that beat out submissions from MBA students at a Bucharest competition. She had no business education, no mentor, no library card that actually mattered. What she had was a phone, an internet connection, and something that, until very recently, simply did not exist: a machine that could think alongside her.
This story is heartwarming. It is also the opening act of something much more complicated.
We are living through an acceleration in artificial intelligence that does not behave like previous technological shifts. The printing press took decades to reshape literacy. The internet took years to penetrate households. AI is moving week by week, and not metaphorically: in the span of a single month in early 2024, systems crossed thresholds in legal reasoning, medical diagnosis, and creative writing that researchers had projected were still years away. By the time you read this, something else will have changed. The scoreboard updates faster than we can interpret the score.
The questions this raises are not primarily technical. They are human. What does it mean to be educated when education’s core promise, that knowledge grants advantage, is being redistributed by an API call? What happens to motivation, to striving, to the slow and painful act of becoming someone, when you can simply borrow competence on demand? And who, exactly, is holding the keys to all of this?
The Credential in Crisis
To understand what is happening to the value of education right now, it helps to remember what education was actually selling. It was never purely about knowledge transfer. It was about credentialing, yes: signaling to employers that you had the discipline, the baseline intelligence, and the social conformity to sit through four years of structured learning. But it was also about a kind of cognitive scaffolding. A medical degree does not just prove you know pharmacology; it proves you can hold enormous complexity in your head and make decisions under pressure.
The question AI is forcing us to confront is: which parts of that scaffolding are now being outsourced, and which parts remain irreducibly human?
Anthropic, the AI safety company behind the Claude series of models, released research examining the degree to which AI systems are already performing tasks across a wide range of occupations. The findings are not abstract future projections. They map current capabilities against current job functions. Some results are expected: paralegals, data entry clerks, and junior copywriters appear in the high-exposure column. Others are more unsettling. Radiologists, whose decade-long training once represented one of medicine’s more defensible moats, are on the list. So are financial analysts, software developers doing routine code generation, and, in a detail that tends to silence rooms, a meaningful slice of the work done by first- and second-year associates at law firms.
These are not predictions about a distant recession of human relevance. These are descriptions of what is already possible, right now, with tools that are free or cheap and getting cheaper. The Anthropic research is careful not to claim that jobs will simply vanish (the historical record of technology and employment is more complicated than that), but the honest reading is that the economic justification for large swaths of mid-level professional work is under genuine pressure.
A seventeen-year-old watching this unfold is not irrational to wonder: if the job I am training for may not exist in the form I imagine it, why am I taking on the debt, the years, the social and psychological cost of a traditional education to get there?
This is not laziness. It is pattern recognition. And it is arriving at exactly the wrong moment, because the answer, though deeply uncomfortable, is more education, not less. Just a very different kind.
The Sudden Superman Problem
There is a phenomenon that does not yet have a clean name but that anyone who works in education or workforce development is starting to recognize. Call it competence inflation: the experience of people who, through AI tools, can suddenly perform at levels that would have taken years of learning to reach. The feeling is not fraudulent. The output is often genuinely good. The problem is what it does to the person producing it.
Consider someone with no legal training who uses an AI to draft a contract that is, in most respects, legally sound. They have produced something real. But they lack the framework to know what the AI got wrong. They do not know which clauses are jurisdiction-specific, which boilerplate provisions carry hidden liability, which seemingly minor omissions could matter in three years when a dispute arises. They have the output without the judgment, and judgment, unlike output, cannot be borrowed.
For people who have spent their lives feeling shut out of systems that seemed to require expensive credentials, the democratizing feeling of AI is real and should not be dismissed. There is something genuinely significant about a first-generation immigrant using an AI to write a cover letter that actually sounds professional, or a dyslexic student producing written work that reflects their actual intelligence rather than their processing difference. These are not trivial gains. These are moments where a technology catches up with human potential that systems had previously ignored.
But there is a darker edge to this same dynamic. When the distance between knowing something and producing something that looks like you know it collapses, it becomes harder to develop genuine expertise. The friction of learning (the failed attempts, the confusion, the slow accumulation of understanding) is not a bug in the process. It is largely the process. When AI smooths that friction away, what remains is the performance of competence without its substance.
This matters especially for younger people who are forming their identities around what they can do. The satisfaction of solving a hard problem yourself, of writing something that is genuinely yours, of developing a skill through struggle: these experiences are formative in ways that go beyond professional utility. They are how people build the internal architecture of competence, the self-knowledge to know what they are capable of, and the confidence that comes from having earned it.
The worry is not that AI will make young people dumb. It is that AI will make it harder for them to discover what they actually are.
Five Companies and the Shape of Thought
At some point in this conversation, someone will say: but the internet was also concentrated, and we survived. Google controlled search. Facebook controlled social connection. Amazon controlled commerce. The consolidation of digital infrastructure was uncomfortable, but civilization continued.
This analogy is worth taking seriously, and then moving past.
The difference between controlling a search index and controlling a reasoning system is not a matter of degree. It is a difference in kind. When Google shaped what information you could find, you still had to process that information yourself, interpret it, weigh it, decide what to do with it. The cognitive work remained human. When an AI system shapes how you reason about a problem, the intervention is upstream of the output. It is present in the thinking itself.
The companies currently at the frontier of AI (OpenAI, Anthropic, Google DeepMind, Meta AI, and a handful of others) are making decisions about how these systems behave that have consequences extending far beyond product design. They decide what the models will and will not say. They decide which views are represented in training data and which are systematically underweighted. They decide how the models handle contested political, scientific, and ethical questions. They choose the values, however imperfectly, that get baked into systems that millions of people will use to help them think.
There is no neutrality available here. A model that refuses to answer a question about medication dosages has made a value judgment. A model that offers a balanced perspective on a political controversy has made a design choice about what balance means. A model trained predominantly on English-language text carries with it the cultural assumptions embedded in that corpus. These are not hypothetical concerns about future misuse. They are present-tense features of how these systems currently operate.
The economic dimension compounds this. Training a frontier AI model currently costs somewhere in the range of tens to hundreds of millions of dollars. Running one at scale requires significant ongoing compute: data centers, cooling infrastructure, and chip supply chains that only a small number of actors can access. This means the conversation about who controls AI is not, in practice, a very long conversation. It is a short list of entities with massive capital requirements and significant leverage over the rest of the world’s access.
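To make the training figure concrete, here is a rough back-of-envelope sketch in Python. It uses the common approximation that training takes about six floating-point operations per parameter per token; the model size, token count, throughput, and price per GPU-hour are illustrative assumptions, not figures from any particular company.

    # Back-of-envelope estimate of frontier-model training cost.
    # The 6 * params * tokens rule is a standard approximation for
    # training FLOPs; every number below is an illustrative assumption.
    params = 1e12                      # assumed model size: ~1 trillion parameters
    tokens = 15e12                     # assumed training data: ~15 trillion tokens
    train_flops = 6 * params * tokens  # ~9e25 floating-point operations

    sustained_flops_per_gpu = 4e14     # assumed ~400 TFLOP/s effective throughput
    gpu_hours = train_flops / sustained_flops_per_gpu / 3600

    price_per_gpu_hour = 2.50          # assumed cloud rate, in US dollars
    print(f"GPU-hours: {gpu_hours:,.0f}")                             # ~62,500,000
    print(f"Estimated cost: ${gpu_hours * price_per_gpu_hour:,.0f}")  # ~$156,250,000

Even under these rough assumptions, the estimate lands in the low hundreds of millions of dollars for a single training run, which is why the list of actors who can afford to play at this level is so short.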
Governments are beginning to understand this. The EU’s AI Act, the Biden-era executive order on AI (now modified under the subsequent administration), China’s domestic AI regulations: these are early, clumsy attempts to insert democratic accountability into a process that, left to its own devices, will be resolved by whoever can write the largest check to Nvidia. But regulation moves in years. The technology is moving in weeks.
What Education Is Actually For, Reconsidered
Return, for a moment, to the question of why a young person should bother to learn deeply when AI can produce the surface of expertise on demand.
The honest answer is not that AI will leave their chosen field untouched; it almost certainly will not. Nor is it that credentials will retain their current value indefinitely; many will not. The honest answer is that the most durable thing education produces is not a degree. It is a mind capable of evaluating its own thinking.
AI is extraordinarily good at generating plausible-sounding output. It is not good at knowing when its plausible-sounding output is wrong in ways that matter. That gap, between fluency and correctness, between confidence and accuracy, is precisely where human judgment becomes essential. And human judgment, the real thing, does not come from using tools. It comes from having made enough mistakes in a domain to develop a feel for where the errors hide.
The students who will thrive in an AI-abundant world are not the ones who learn to use AI most efficiently. They are the ones who know their subject well enough to catch the AI when it is confidently wrong, and who have developed, through the slow work of actually learning things, the intellectual independence to trust their own judgment over the machine’s.
This is not an argument for preserving education as it currently exists. Universities, in their current form, are doing a poor job of preparing people for a world where content generation is cheap and contextual judgment is scarce. The lecture hall model, the multiple-choice exam, the degree-as-credential, all of these will need to evolve, and probably faster than most institutions would prefer.
But the evolution required is toward more rigorous thinking, not less. More emphasis on epistemology, on how we know what we know and on what it looks like when we are being fooled. More emphasis on ethics, on the history of how powerful technologies have been deployed and misused. More emphasis on genuine expertise in something specific, because the value of knowing a domain deeply is not the content it produces but the judgment it enables.
The teenagers looking at AI and wondering whether to bother are asking a real question. The answer is not: go to university because that is what you do. The answer is: figure out what kind of thinking you want to be capable of, and then pursue that with the same seriousness you would have applied to any other survival skill in any other era. Because what is coming will require more than the ability to prompt an AI effectively. It will require knowing what to do with the answer.
The Bargain We Are All Implicitly Accepting
There is a deal being offered right now, and most of us are taking it without reading the terms.
The offer is: access to unprecedented cognitive capability, at low or no cost, in exchange for dependency on systems you do not control, built by companies whose interests may or may not align with yours, trained on values you had no voice in selecting, running on infrastructure that can be revoked, repriced, or redirected at any time.
For individuals, this bargain is often worth taking: the capability gains are real, and the alternatives are not obvious. But we should be clear-eyed about what we are trading. Every time a professional outsources a judgment call to an AI, they erode a little of the capacity that made them a professional. Every time a student uses AI to complete work they could have struggled through themselves, they forgo a little of the formation that the struggle would have produced. These are not catastrophic individual choices. Aggregated across a generation, they have structural consequences.
For societies, the stakes are higher. A world where the infrastructure of reasoning is controlled by five companies is a world with an unprecedented concentration of epistemic power. The people running those companies are not villains; many of them are genuinely motivated by something closer to public interest than the standard Silicon Valley playbook might suggest. But good intentions are not a substitute for accountability structures. And accountability structures for AI barely exist yet.
The political dimensions of this are just beginning to surface. AI systems can already generate persuasive political content at scale, simulate personalities, produce disinformation at a cost approaching zero, and tailor messaging to psychological profiles with a precision no human political operative could match.
None of this means the technology should be resisted. Resistance is not really on the table; the genie question was settled some time ago. What is on the table is how thoughtfully societies engage with what is happening, how honestly we talk to young people about both the opportunities and the tradeoffs, and how aggressively we pursue the governance frameworks that could make this technology something other than an enormous accumulation of unaccountable power.
The Real Question
The Romanian teenager who won that competition is still out there, presumably, navigating what comes next. She has tasted what is possible. The question for her, and for the millions of young people like her, is not whether to use these tools. It is whether to understand them deeply enough to remain their user rather than become their product.
That distinction, between using a powerful system and being shaped by it, is going to be one of the central lines of differentiation in the decades ahead. It will separate people who can direct AI toward their purposes from people whose purposes are quietly being redirected by AI. It will separate societies that maintain meaningful agency over their technological infrastructure from those that outsource that agency to whoever happened to have the capital and the talent to get there first.
The acceleration is not slowing. The weekly updates to frontier models, the quarterly capability jumps, the expanding reach of these systems into medicine, law, education, creative work, and governance: none of this is going to pause while we figure out how we feel about it. The only option is to engage with it more seriously and more honestly than we have managed so far.
That means telling young people the truth: yes, these tools are genuinely powerful, and yes, the world they are entering will be shaped by AI in ways we cannot fully predict. And also: the thinking you develop through the hard work of actually learning something is not made obsolete by a machine that can mimic its outputs. It becomes more valuable, not less. Because the world does not just need people who can prompt an AI. It needs people who can tell when the AI is wrong, who understand why it matters, and who have thought carefully enough about what they actually believe to not simply outsource that question along with everything else.
The machine is powerful. That is not the problem. The problem, and the opportunity, is deciding what kind of people we want to be while using it.