What Engineers Take With Them in the Age of AI
In April 2025, Shopify CEO Tobi Lütke sent an internal memo that shook the tech world. The message was blunt: AI usage is now a "fundamental expectation" at Shopify. Product designers must use AI for all feature prototypes. Teams must prove that AI cannot do the job before asking for more headcount. AI skills will be part of every performance review.
This was not a suggestion. It was a mandate.
A few months later, Matt Shumer's blog post "Something Big Is Happening" went viral with over 50 million views on X. His core message was simple: the people who will thrive are not the ones who master one tool. They are the ones who get comfortable with the pace of change itself.
Of course, not everyone agrees. There is valid pushback against blind AI hype — AI still hallucinates, still writes buggy code, still fails when reliability truly matters. That criticism is fair, and we should take it seriously. But it does not change the direction. Whether AI replaces 30% of your work or 90%, the engineers who know how to work with AI are already pulling ahead. And the gap is compounding fast.
This raises a question that every engineer should think about: when you leave a company, what experience do you actually take with you? And is that experience still worth anything?
The Old Model of Experience
For a long time, the answer was straightforward.
If you were a machine learning engineer, your experience was your intuition. You knew how learning rates behave across different architectures. You knew when a model was overfitting before the metrics told you. You had a mental library of what works and what doesn't, built from years of training models and reading papers.
If you were an infrastructure engineer, your experience was your battle scars. You had debugged cascading failures at 3am. You knew how to balance performance and cost. You understood the tradeoffs between consistency and availability — not from a textbook, but from production incidents that woke you up at night.
This kind of experience was durable. It transferred across companies. It showed up in interviews. It was the thing that separated a senior engineer from a junior one.
That model is not gone. Domain knowledge still matters. But it is no longer enough.
The Divergence
Here is the uncomfortable part.
Two engineers can have the same title, the same years of experience, and work at companies that both call themselves "tech companies." But the experience they are accumulating is fundamentally different.
At an AI-native company — Shopify, OpenAI, Anthropic, or any of the growing number of AI-first startups — engineers use AI as a daily collaborator. They learn to decompose problems in ways that make AI effective. They develop judgment about when to trust AI output and when to override it. They build workflows where AI handles the routine parts so they can focus on the hard parts. This is not just about using ChatGPT. It is a different way of thinking about engineering work.
At a more conservative company — maybe one with strict policies against AI tools, or one that simply hasn't prioritized adoption — engineers are still working the old way. They are not bad engineers. But they are not building the meta-skill that is quickly becoming essential.
The gap between these two engineers compounds like interest. After one year, the difference is small. After three years, it is structural.

This is why I think we need a new concept: AI-native YOE. Not all years of experience are equal anymore. An engineer with three years at an AI-native company has three AI-native YOE. An engineer with three years at a conservative company might have zero. Traditional YOE measures time served. AI-native YOE measures time spent actually working in the new paradigm. And increasingly, it is AI-native YOE that matters.
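The compounding claim can be made concrete with a toy model. The 5% monthly rate below is my own illustrative assumption, not a measured figure; the point is the shape of the curve, not the exact numbers.

```python
# Toy model of a compounding skill gap.
# Assumption (illustrative only): deliberate AI-native practice improves an
# engineer's leverage by ~5% per month, while business-as-usual work stays flat.
def skill_gap(months: int, monthly_rate: float = 0.05) -> float:
    """Ratio of AI-native leverage to the static baseline after `months`."""
    return (1 + monthly_rate) ** months

print(round(skill_gap(12), 2))  # after one year: 1.8x
print(round(skill_gap(36), 2))  # after three years: 5.79x
```

A small per-month difference looks negligible in year one and structural by year three, which is exactly the trajectory described above.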
The AI-native engineer has internalized patterns and workflows that the other engineer has never been exposed to. And here is the key insight: you cannot cram for this in two weeks before an interview. It is embedded in how you think, not just what tools you know.
When these two engineers enter the job market, the divergence becomes visible. The AI-native engineer can talk about how they used agents to automate testing pipelines, how they evaluated and corrected AI-generated system designs, how they used AI to explore solution spaces they wouldn't have considered on their own. The other engineer has strong fundamentals — but fundamentals alone are starting to look like table stakes.
This is not about blame. Many engineers at conservative companies are there for good reasons — stability, compensation, domain interest. But if you are not aware of this divergence, you might not realize that your "experience" is quietly depreciating.
And the pressure is coming from the outside too. We may soon see a new role emerge: the AI adoption engineer — someone hired specifically to help companies transition to AI-native workflows. Think of it as a transformation specialist. They identify the bottlenecks in how a team works, introduce AI tooling, redesign workflows, and measure the results. If this sounds a bit like a "terminator" role — similar to how some HR leads are brought in specifically to restructure and downsize — that's because it might be. Part of identifying AI bottlenecks is identifying where humans are doing work that AI can do better. Not every company will need this role. But the fact that it could exist tells you something about where things are heading.
What Interviews Might Look Like
If the definition of experience is changing, interviews will change too. Let's think about what that could look like.
Agentic engineering replaces LeetCode. The traditional coding interview tests whether you can write an algorithm on a whiteboard under time pressure. In a world where AI writes code faster and more reliably than most humans, this test measures the wrong thing. A more relevant test: give a candidate a complex, ambiguous task and watch how they orchestrate AI agents to solve it. Can they break down the problem? Can they evaluate the AI's output? Can they course-correct when the agent goes off track? This is a fundamentally different skill from writing a binary search from memory.
AI-assisted system design replaces whiteboard architecture. Today's system design interviews are mostly verbal. You talk through how you would build a system, draw some boxes on a whiteboard. In the future, the interview might look more like a real work session: here is a problem, here are your AI tools, design a system. The evaluation shifts from "can you recite the CAP theorem" to "can you use AI to explore tradeoffs, identify edge cases, and arrive at a sound design — faster and more thoroughly than you could alone."
Behavioral questions expand to cover AI collaboration. How do you decide when to trust AI output? How do you handle a situation where AI confidently gives you a wrong answer? How do you use AI to learn something outside your area of expertise?
That last question matters more than it seems. Here is why.
AI is, by default, an echo chamber. It agrees with you. It builds on your assumptions. It answers the questions you ask — but it does not challenge the questions you don't ask. This means your AI usage is bounded by your own knowledge. If you don't know that a better approach exists, you won't ask AI about it. And AI won't volunteer it.
The engineers who stand out will be the ones who actively use AI to push beyond what they already know. They use AI not just as a productivity tool, but as an exploration tool — asking it to challenge their assumptions, suggest approaches from other domains, find the blind spots in their thinking. This is a learnable skill, but it requires intention. And it is something that a good interview process should test for.
What To Do
If you are at an AI-native company, keep pushing. The fact that your environment supports AI usage is an advantage, but don't let it make you passive. Actively experiment with new models, new workflows, new ways of using AI beyond your comfort zone.
If you are at a company that is slow to adopt AI, recognize the gap. This doesn't mean you need to quit tomorrow. But it means you need to invest your own time. Use AI seriously in side projects. Build things with AI agents. Practice the workflow of collaborating with AI on real problems — not just asking it trivia questions. Start accumulating your AI-native YOE now — even if your employer won't help you do it.
And when you evaluate your next role, treat the company's AI culture as a first-class criterion. Ask in the interview: what AI tools do your engineers use? Is AI usage encouraged or restricted? How has AI changed your development workflow in the past year? The answers will tell you a lot about whether that company will help you grow — or quietly let your skills fall behind.
The Real Takeaway
Maybe the experience that truly compounds — the one that never depreciates no matter where you work — has always been the same thing: the ability to learn how to learn.
Before AI, this mattered. You had to keep up with new frameworks, new paradigms, new best practices. But the pace was manageable. You could coast for a while on what you already knew.
AI changed the speed, not the principle. Now, the learning cycle is measured in months, not years. The tools you master today will be obsolete next year. The workflows you build now will need to be rebuilt. The only durable advantage is being someone who adapts — quickly, intentionally, and without fear.
So the next time you think about what experience you are gaining at your current job, ask yourself: am I learning how to learn? Or am I just getting comfortable?
That question has always mattered. AI just made it impossible to ignore.