On the evening of May 1, 2026, I joined Sanjeev Dhanda from Google DeepMind and Abhi Aiyer from Mastra for a panel and happy hour at the SVB Experience Center in San Francisco, hosted by Kristopher Floyd of Frontier Syndicate together with Silicon Valley Bank and Acasia. The framing for the night was simple: as AI changes how software gets built, what does "engineering" actually mean now?
Here's the through-line of my contribution to the conversation.
Construction is being automated. Engineering is not.
We tend to use software development, coding, and engineering interchangeably, but they aren't the same thing — and separating them clarifies what AI is actually doing.
Software construction is the mechanical act of building code: writing functions, wiring APIs, scaffolding systems, moving data from one place to another. It's necessary, it's repetitive, and it's the part AI is getting very good at.
Software engineering is something else. It's responsibility for outcomes — planning, design, testing, debugging, correctness, security, behavior under real-world conditions. It's making sure that when something runs, it does what it's supposed to do, and nothing it's not.
Construction is execution. Engineering is responsibility.
For decades those two were tightly coupled, because if you wanted to engineer something you had to manually construct every part of it. That isn't true anymore. AI is driving the cost of construction toward zero, which means the bottleneck moves somewhere else.
As code becomes abundant, trust becomes scarce.
Where does the bottleneck go when construction is free? It goes to trust.
We're moving from a world where the question was "can we build this?" to one where the question is "can we trust what we've built?" Generating code is about possibility — a model can produce something that compiles, looks correct, even passes a few happy-path tests. Trust is about guarantees. Does the system enforce permissions correctly? Does it protect user data? Does it behave under edge cases? Does it fail safely?
You can't scale trust through inspection. You can't read every line of generated code. You have to scale it through systems.
So the engineer's role changes. You're not just writing code — you're designing the environment that code lives in. You're defining what "correct" means, where it's enforced, and how violations are detected. The question stops being "did I write this correctly?" and starts being "have I built a system where correct code naturally emerges?"
Tight feedback loops are how trust gets created.
Every system has a loop: think, code, save, understand. The faster and clearer that loop is, the better engineers — and agents — perform inside it. When the loop is slow, you lose context, you guess, you drift. When it's tight, you stay in flow and converge on correctness.
This matters even more for agents than for humans, because agents have no intuition. They operate entirely on signals. Vague errors and slow responses produce vague agents. Strong, fast, explicit signals produce effective ones.
I've watched this play out with our own coding agents. The instruction is simple — make the tests pass. What makes it work isn't the agent being clever; it's the test framework being well-designed. The tests don't just say "fail," they tell the agent what was expected and what actually happened. The tests become a guidance system, not just a verification tool. They encode intent in a form a machine can act on.
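To make that concrete, here is a minimal sketch of what an agent-friendly test failure can look like. Everything in it is a hypothetical illustration (the applyDiscount function, the FailureReport shape), not our actual framework; the point is the structure of the signal.

```typescript
// Sketch: a test failure designed as a guidance signal for an agent.
// applyDiscount and FailureReport are hypothetical illustrations.

interface FailureReport {
  test: string;
  expected: unknown;
  actual: unknown;
  hint: string; // actionable context, not just "FAIL"
}

// The (buggy) code under test: discounts above 100% should clamp to free.
function applyDiscount(price: number, percent: number): number {
  return price - price * (percent / 100);
}

function checkEqual(
  test: string,
  expected: unknown,
  actual: unknown,
  hint: string
): FailureReport | null {
  return Object.is(expected, actual) ? null : { test, expected, actual, hint };
}

const failure = checkEqual(
  "applyDiscount clamps at 100%",
  0,
  applyDiscount(80, 120), // returns -16: the bug the test should surface
  "a discount above 100% must clamp the price to 0, never go negative"
);

// Emit a structured, machine-readable report the agent can act on directly.
if (failure) console.log(JSON.stringify(failure, null, 2));
```

A bare "FAIL" tells the agent nothing. A report like this carries the intent, the expectation, and the gap, which is exactly what a machine needs to converge.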
That's the shift: tests, policies, and invariants stop being checkboxes and become the way you express what you want, in a form the system can enforce.
Vibe code the edges. Trust the core.
Not every part of a system carries the same weight. If your UI glitches, you fix it. If your data layer is wrong, you have a serious problem. If your permission system is wrong, you've lost trust entirely.
So trust concentrates. The natural architecture splits into two halves:
- Fluid edges — the frontend, the workflows, the fast-moving parts. Generated, iterated, regenerated. You can move quickly here because nothing irreversible lives at the edge.
- A trusted core — your data layer, your policy layer, your test infrastructure. The place where invariants are defined, permissions are enforced, and correctness is non-negotiable. This is where guarantees live.
Vibe code the edges. Trust the core. Once the core is strong, you can be flexible everywhere else — including letting agents generate large parts of the system — because you know the core will hold.
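As a sketch of what "the core will hold" can mean in code, here is a toy data layer where the permission check is the only path to the data. All the names here (User, Doc, loadDoc) are hypothetical; the design point is that edge code, generated or handwritten, cannot route around the check.

```typescript
// Sketch: an invariant enforced in the trusted core, not at the edges.
// User, Doc, and loadDoc are hypothetical names for illustration.

type Role = "viewer" | "editor" | "admin";

interface User { id: string; role: Role; }
interface Doc { id: string; ownerId: string; body: string; }

class PermissionError extends Error {}

const docs = new Map<string, Doc>([
  ["d1", { id: "d1", ownerId: "u1", body: "quarterly plan" }],
]);

// The only way to read a document. The permission check is not optional,
// so code at the edges inherits the guarantee automatically.
function loadDoc(user: User, docId: string): Doc {
  const doc = docs.get(docId);
  if (!doc) throw new Error(`no such doc: ${docId}`);
  const allowed = user.role === "admin" || doc.ownerId === user.id;
  if (!allowed) throw new PermissionError(`${user.id} may not read ${docId}`);
  return doc;
}

loadDoc({ id: "u1", role: "viewer" }, "d1"); // owner: allowed
// loadDoc({ id: "u2", role: "viewer" }, "d1"); // would throw PermissionError
```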
What agentic engineering actually means
When we say agentic engineering, we're not talking about replacing engineers with agents. We're talking about shifting the engineer's role to where it matters most: defining the rules, designing the systems, and building the core that everything else depends on. Then letting agents operate within that environment — generating, experimenting, iterating, but always within constraints that make the outcome trustworthy.
The system becomes a kind of contract. It says: move fast, generate freely, explore — but these rules will always hold.
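Here is one way that contract can look, sketched with a hypothetical action type and policy gate rather than any real agent framework. The agent can propose anything; every proposal passes through the gate before it runs.

```typescript
// Sketch: agent actions validated against a fixed policy before execution.
// The Action type and the specific rules are hypothetical illustrations.

type Action =
  | { kind: "write_file"; path: string; contents: string }
  | { kind: "run_tests" }
  | { kind: "delete_table"; table: string };

// The rules that always hold, no matter what the agent generates.
function permitted(action: Action): boolean {
  switch (action.kind) {
    case "run_tests":
      return true; // feedback is always allowed
    case "write_file":
      return !action.path.startsWith("core/"); // edges are fluid, the core is not
    case "delete_table":
      return false; // irreversible operations never live at the edge
  }
}

function execute(action: Action): void {
  if (!permitted(action)) {
    throw new Error(`policy violation: ${action.kind} rejected`);
  }
  // ...perform the action...
}

execute({ kind: "run_tests" }); // passes the gate
try {
  execute({ kind: "delete_table", table: "users" });
} catch (e) {
  console.log((e as Error).message); // policy violation: delete_table rejected
}
```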
That's how you reconcile speed and trust, two things that used to be in tension. In the old world, you traded one for the other. In this new model, if the system is designed right, you get both.
What I took away from the night
Talking to Sanjeev about DeepMind's view of agent capability, and to Abhi about how Mastra is shaping the TypeScript-side agent stack, I heard the same message across very different layers of the problem: the real leverage is no longer in generating more code. It's in building the systems that decide whether generated code is trustworthy.
That's the move. From vibe coding, to agentic engineering. From construction, to engineering. From "did I write this right?" to "have I built a system where right is the default?"
Thanks to Kristopher and Frontier Syndicate, Silicon Valley Bank, and Acasia for hosting, and to Sanjeev and Abhi for a genuinely sharp conversation. More of these.