For more than a decade, I have taught software engineers how to implement testing, React, Remix, MCP, and more.
I built courses around practice. I would simulate a real work environment: a product manager gives you a task, you read the docs, you work in the codebase, you build the feature, and then you compare your solution with mine.
That was valuable because implementation was valuable.
It still is. But it is becoming less scarce.
One step before the end
AI coding agents are slowly eating away at the tasks software engineers have done for decades. That's alarming for many members of our discipline, myself included. A year ago, most of us would not have predicted that agents could be as good at implementation as they are today (I definitely didn't). That experience makes me less confident in my ability to predict how good they will be a year from now.
Eventually, the software development industry is going to end.
Maybe by the heat death of the universe. Maybe by artificial intelligence actually taking over.
But I do not find it useful to plan for the end. It is impossible to put a reliable timeline on the dissolution of our industry. And even if you could, there is little point in planning for a world where software development is no longer a discipline. That world is too different. There is not enough context to reason about it well... if it happens at all.
So I have been asking a different question.
Not: "what do we do after software engineering is gone?"
But: "what remains valuable right before that happens?"
The answer to that gives me the most durable thing I can continue to teach my fellow software development practitioners, which is what I love to do and want to keep doing until I really can't anymore.
So, let's fast-forward a little. Let's go all the way up to the point where AI takes over all the value we currently create as software developers, then take one step back.
What is the last valuable thing the Last Software Engineer has to offer?
It's not typing code.
Not choosing libraries.
Not even designing the implementation.
The last valuable thing is judgment: deciding what is worth making real, what constraints must not be violated, what trade-offs are acceptable, and what damage would make success meaningless.
In other words, the Last Software Engineer does not merely know what to build.
The Last Software Engineer knows what should be built.
The arrows are changing 🏹
For a long time, software engineers (like archers) were judged largely by their aim.
We were handed targets: build this feature, fix this bug, migrate this system, improve this metric. The best engineers were the ones who could draw the bow, account for the wind, and hit the target reliably.
That skill still matters.
But the arrows are changing.
AI agents are turning implementation into something closer to a homing arrow. You still have to point it in the right general direction, and the shot can still go wrong, but the arrow is getting better at finding its way to the target.
That changes the scarce skill.
When arrows are hard to aim, the best archer is the one with the steadiest hand.
When arrows can steer themselves, more targets become reachable and the best archer is the one who knows which target is worth hitting.
And that is the shift from software engineering as implementation to product engineering as judgment.
When the cost of shooting drops, target selection matters more, not less.
The problem is no longer only whether you can hit the target you were given. The problem is whether the target should exist, whether hitting it would damage something else, whether there is a better target nearby, and whether the apparent bullseye is even measuring the right outcome.
I actually experienced this in a conversation just today.
I was talking with my friend Zack Chapple about an early version of the new learning environment I want to build for this. I described the basic idea: put learners in a simulated stakeholder meeting, let them ask questions, uncover constraints, push back, and eventually produce a plan.
The valuable part of the conversation was not when we talked about whether it could be built.
It was when Zack started asking how we would know whether it worked.
He moved immediately from the idea to the evaluation criteria. What should a learner ask before implementing? When should they push back? How would we tell whether they uncovered the right constraints? How would we evaluate the conversation, not just the final plan?
He was also mapping the product consequences in real time: where the current system already fit, where it would need to change, what could be tested quickly, and what would matter later if this became a real learning environment.
That is product engineering.
What makes "should" hard to automate
That is why the word "should" matters.
You could pull up your AI coding agent right now and ask it for MVP ideas. It would give you a long list of things you could build.
You could even come up with your own idea and then ask the agent to grill you with questions about how that idea could be implemented.
The agent would respond with all kinds of things you could do. It would give you options, compare trade-offs, draft requirements, identify risks, and recommend what you could do next.
But what you should do next is harder to delegate, because should means choosing one possible future over another and accepting the consequences of that choice. It requires judgment and balancing a variety of constraints. You have to decide which stakeholder's requirements to weigh more heavily, which user problems matter more than others, which risks are acceptable and which are not, which metrics to follow and optimize for, and what the north star of your product should be. Sometimes a solution is technically elegant and still wrong because it changes the product's promise to the user.
What makes these decisions and trade-offs so difficult to automate is that the relevant information is rarely all inside the prompt, and it is exceedingly difficult to get it there. Some of that context lives in users' expectations, the business model, brand trust, support burden, team capacity, legal exposure, migration risk, accessibility, performance, long-term maintainability, and the thousand small promises a product has made over time without writing them down.
Even if an agent could A/B test its way toward better measurable outcomes, measurement does not remove judgment. Someone still has to decide which outcomes matter, which metrics are proxies, which users are allowed to absorb the experiment, and what costs are unacceptable even if the numbers improve. Agents aren't there yet, but I can't confidently say they'll never get there. The deeper problem is that the desired outcome is not always quantifiable, and someone still has to determine what it actually is.
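To make that dependency concrete, here is a minimal sketch. The function and parameter names are hypothetical; the point is that even a fully automated experiment takes two human judgments as inputs: which metric to optimize, and which costs are unacceptable no matter what the numbers say.

```ts
// Hypothetical sketch: an experiment can rank variants mechanically, but
// the metric to optimize and the costs we refuse to accept are judgments
// supplied by a human owner, not outputs of the system.
type Metrics = Record<string, number>;

interface Variant {
  name: string;
  metrics: Metrics; // measured results from the experiment
}

function pickWinner(
  variants: Variant[],
  metricToOptimize: string, // judgment: which outcome matters
  isUnacceptable: (m: Metrics) => boolean, // judgment: costs we won't accept
): Variant | undefined {
  return variants
    .filter((v) => !isUnacceptable(v.metrics))
    .sort((a, b) => b.metrics[metricToOptimize] - a.metrics[metricToOptimize])[0];
}

// Usage (numbers made up): variant "c" scores highest on engagement, but a
// human decided engagement was the goal and a 2% churn increase was the limit.
const winner = pickWinner(
  [
    { name: "a", metrics: { engagement: 0.21, churnDelta: 0.001 } },
    { name: "b", metrics: { engagement: 0.27, churnDelta: 0.018 } },
    { name: "c", metrics: { engagement: 0.31, churnDelta: 0.034 } },
  ],
  "engagement",
  (m) => m.churnDelta > 0.02,
);
// winner is "b": "c" crossed the churn limit someone chose to enforce.
```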
Why accountability still matters
It's not a question of whether AI can reason about this stuff, because it definitely can (and it's getting better all the time). No, the real question is whether AI can be responsible for deciding which of these trade-offs matter most. There needs to be ownership and accountability.
When you make a product decision to improve one metric, you are often trading off another. A bug fix for one user can become a workflow change for another. A new feature for one client can increase codebase complexity and add noise for everyone else.
A product engineer knows how to weigh those technical and human effects before implementation turns them into reality.
Accountability matters because it changes what counts as a good decision. A system that does not bear the cost of being wrong can recommend a locally optimal choice that no responsible owner should accept.
A coding agent can build the feature.
A stronger agent can critique the feature.
A very strong agent can propose metrics, rollout plans, failure modes, and alternatives.
But someone still has to say:
"This is the future we are going to ship."
That is product engineering.
Product Engineering in practice
Product engineering is the discipline of connecting implementation decisions to product consequences.
That technical grounding is what separates product engineering from product management. A product engineer is not merely deciding what would be useful. They are deciding what would be useful given the shape of the existing system, the cost of the change, the risks of the implementation, and the experience users will actually have once the software exists (and as it changes over time).
In practice, product engineering means asking what user problem the task is really solving, defining success before implementation, naming the constraints that must not be violated, identifying who could be harmed or confused by the change, distinguishing measurable improvement from real improvement, and planning rollout and rollback before shipping.
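As a sketch of what that might look like written down, here is a hypothetical "product decision record" capturing those questions before any code exists. The type and field names are mine, illustrative rather than a prescribed format:

```ts
// Hypothetical shape for capturing product-engineering judgments up front.
// Illustrative only; not a standard or prescribed format.
interface ProductDecisionRecord {
  userProblem: string; // what user problem is this task really solving?
  successCriteria: string[]; // defined before implementation, not after
  hardConstraints: string[]; // what must not be violated (perf budgets, a11y, legal)
  atRiskUsers: string[]; // who could be harmed or confused by the change
  proxyMetrics: string[]; // measurable signals, each an imperfect proxy for real improvement
  rolloutPlan: string; // flags, cohorts, staged release
  rollbackPlan: string; // how to undo it if the trade-off turns out wrong
}

// Example, using the stakeholder-meeting simulator discussed above:
const simulatorDecision: ProductDecisionRecord = {
  userProblem: "Learners rarely practice uncovering constraints before coding",
  successCriteria: ["Learners ask clarifying questions before proposing a plan"],
  hardConstraints: ["Must fit the existing learning environment"],
  atRiskUsers: ["Learners expecting implementation-only exercises"],
  proxyMetrics: ["Clarifying questions asked per session"],
  rolloutPlan: "Pilot with a small cohort before a general release",
  rollbackPlan: "Fall back to the existing exercise format",
};
```

None of those fields contain code, and that is the point: they are the decisions the implementation inherits.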
As AI automates implementation, product engineering becomes the remaining skill.
And if I'm wrong, and AI never successfully automates implementation, product engineering is still the skill that separates great software engineers from merely productive ones.
Product judgment has always been the hallmark of the best software engineers. Turning a task from your issue tracker into working code is valuable. But knowing when to push back, when to simplify, when to consolidate it with other work, when to protect users from a technically correct solution, how to create mutually beneficial feedback loops, and how to join the discussion before the task exists is far more valuable.
For me, as an educator who wants to keep teaching skills worth learning, that gives me confidence in the future of our industry.
Product engineering is durable because it is not about producing software.
It is about deciding whether the software being produced is worth having.
