Grounded Before Takeoff

What the EU’s AI Act Could Learn from the Wright Brothers.
Dayton, Ohio, 1903. Wilbur Wright steps into the workshop holding up a newspaper. The headline reads “Congress Passes MOTOR Act: Regulating Otto-Type Engines.”
A lean man with oil-smeared sleeves glances at the headline, then turns back to a gasoline engine of his own design. Charles Taylor cranks the thing to life. The engine sputters and rattles: after weeks of false starts and busted parts, the first sign of success. Taylor, a self-taught mechanic with a curious mind, smiles. If this motor holds up, it could power the first airplane in human history.
The MOTOR Act moves swiftly to regulate all gasoline engines based on their potential uses. Stationary models for pumps or generators are deemed low-risk. Anything mobile (nobody had yet thought of flying) is classified as high-risk and buried under layers of oversight. Without a single test flight, the Wrights’ aircraft is effectively outlawed before it even gets off the ground.
One Law to Rule Them All
Thankfully, the MOTOR Act never happened. But its spiritual successor did: the European Union’s AI Act. This time, it’s not gasoline engines under scrutiny, but artificial intelligence. The risk categories are back. The red tape too. As you might know, the AI Act corrals all kinds of AI applications into a single framework, the so-called risk-based approach: applications are sorted into risk tiers, each with its own regulatory duties. This risk evaluation applies to every application powered by AI, from banking credit models to language-learning apps.
This misses a crucial point: most of the riskier applications of AI already fall under existing rules. Surveillance? That’s a matter of privacy law, not software architecture. Discrimination in hiring? Covered by labor and anti-discrimination law. Adding an AI to the mix may amplify a problem, but it rarely creates one that wasn’t already legally or ethically defined.
From a citizen’s point of view, it doesn’t matter whether they’re being watched by a neural network or a squad of over-caffeinated interns with binoculars. The concern is being watched. The technology may change the scale or efficiency of an activity, but it doesn’t change what that activity is. And that’s what regulation should focus on.
A Better Way: How We Regulated Motors
When combustion engines arrived on the scene, governments didn’t respond by trying to regulate the internal mechanics of the engine itself. They looked at how it was used. Over time, specific laws emerged for specific domains: emissions limits for cars, safety inspections for trucks, licensing for boats, airworthiness certificates for planes. Nobody tried to draft a single, universal “Engine Act” to cover all possible uses of a piston.
This was common sense. Driving cars and flying planes would both land in a “high-risk” category, but would anybody try to regulate them under one and the same law? We built guardrails around the application, not the engine. Regulation responded to externalities that were visible, measurable, and context-specific.
AI, like the engine, is a general-purpose technology. Its impact depends almost entirely on where it lands. Regulating it in isolation is folly.
What Are We Regulating, Exactly?
The AI Act tries to walk two paths at once and stumbles on both. On one hand, it claims to target AI applications based on their potential impact in the real world. On the other, it slips into regulating AI as a technical artifact, focusing on the mere presence of machine learning components regardless of how they’re used. The result is a law that can’t decide whether it’s regulating use cases or math formulas.
Under its current definition, a modern machine learning model qualifies as AI. So does a decision tree. So does a simple algorithm that ranks products by customer clicks. In fact, some have pointed out that even a spreadsheet with a few nested “if” statements might qualify [1]. If that sounds absurd, it’s because it is. By sweeping nearly everything under the AI umbrella, the Act risks becoming both overreaching and underfocused.
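To see how little it takes, consider a deliberately trivial sketch. The scenario and names are hypothetical, and there is no machine learning anywhere in it; yet read literally, it “infers, from the input it receives, how to generate outputs such as … recommendations”:

```python
# A hand-written rule of thumb, not a learned model. Held against the Act's
# definition, it still takes inputs and "infers" a recommendation from them.

def recommend(clicks_on_shoes: int, clicks_on_books: int) -> str:
    """Recommend a product category from two click counts."""
    if clicks_on_shoes > clicks_on_books:
        if clicks_on_shoes > 10:
            return "premium shoes"   # heavy clicker: upsell
        return "shoes"
    if clicks_on_books > 0:
        return "books"
    return "gift cards"              # no signal: fall back to a default

print(recommend(clicks_on_shoes=12, clicks_on_books=3))  # -> premium shoes
```

Whether a regulator would ever chase such a snippet is beside the point. The point is that the definition, taken at face value, does not rule it out.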
The result is a double failure. As a technology law, the AI Act offers no sharp criteria, no meaningful thresholds, no real understanding of how these systems work. As a risk law, it imposes obligations on sectors it barely understands. It often duplicates domain-specific rules already in place. In trying to regulate everything, it regulates nothing particularly well.
When the Model Is the Application
The AI Act arrived just as large language models were reshaping the landscape. While earlier drafts focused on regulating “high-risk” applications that used AI, later versions scrambled to account for general-purpose systems like GPT. The draft law had been designed for tools. Now it could not keep up with emerging platforms, all the more so as those platforms digested the narrow AI tools around them: if an idea built on top of an LLM proves useful, it’s only a matter of time before the next generation of models absorbs that functionality natively. The boundary between model and product is dissolving.
Faced with this collapse, the AI Act unfortunately doubles down on regulating AI systems in a white-box manner. It would be better to hold on to the risk-based perspective: regulate real-world outcomes instead of inner workings. Granted, it’s easy to see why the white-box approach crept in. Judging outcomes means defining fairness, and fairness is notoriously hard to define. But rather than confront this head-on, the law redirects its focus toward input requirements: high-quality datasets, documentation, transparency obligations. This is unlikely to scale. As systems become smarter and even more complex, judging their fairness by inspecting their inner workings will be fruitless. There is a reason we don’t do that with human employees.
Conclusion: Let It Fly
The AI Act may slow things down. It may tangle promising ideas in paperwork or steer small innovators toward safer ground. But it won’t stop AI innovation. The European Union’s misguided attempt at a catch-all law will soon be outdated.
Today’s foundation models are evolving rapidly, adapting to new tasks, absorbing useful functions, and lowering the barrier to experimentation. The lines between product, model, and platform are blurring. But the momentum is unmistakable: AI won’t wait for perfect regulation. Nor should it.
TL;DR
- AI is a general-purpose technology, like the Otto engine. Regulation should target the field of application, not the inner workings.
- Most fields of application already have proven regulations in place.
- Foundation models blur product boundaries, making white-box obligations brittle.
In real life, the Wright Flyer took off in December 1903, and its longest flight that day lasted almost a minute. The ingenuity of Charles Taylor’s engine changed history. Let’s make sure we don’t forget how flight begins.
[1] Judge for yourself: “‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
