This is Part 2 of our PostgresConf 2026 series. Read Part 1 here.
At PostgresConf 2026, I had the privilege of giving the keynote address. The topic? A fundamental shift in how we must think about software security in the age of AI.
The core thesis of my talk, "When Trust Becomes Infrastructure," is simple: the way we build software is changing, and the database has to evolve with it.

The Death of Human Code Review
As we enter a world where AI agents generate, refactor, and execute code at lightning speed, relying on human code review as your primary security gate is a losing battle. The velocity of AI-written code will simply outpace a human's ability to audit it.
If you have autonomous agents running amok in your backend, you can no longer trust the application layer. Trust has to move down the stack.
Building Agent-Friendly Databases
We urgently need a secure-by-default database model. This is the entire philosophy behind what we are building at Constructive.
We must leverage compiler-driven Row-Level Security (RLS) right at the database layer. By doing this, we ensure that no matter what an agent tries to do, or what rogue query an LLM hallucinates, the database strictly enforces the rules. The database itself becomes the ultimate bouncer.
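Stripped of the compiler, the enforcement itself is plain Postgres. A minimal sketch of what "the database as bouncer" looks like (the table and setting names here are illustrative, not Constructive's generated output):

```sql
-- A table that is closed by default: with RLS enabled and no policy,
-- every query against it returns nothing.
CREATE TABLE invoices (
  id        bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  tenant_id text   NOT NULL,
  amount    numeric NOT NULL
);

ALTER TABLE invoices ENABLE ROW LEVEL SECURITY;
ALTER TABLE invoices FORCE ROW LEVEL SECURITY;  -- applies even to the table owner

-- Open it only for rows matching the caller's tenant, carried in a
-- session setting the application sets per connection.
CREATE POLICY tenant_isolation ON invoices
  USING (tenant_id = current_setting('app.tenant_id'));
```

With this in place, a hallucinated `SELECT * FROM invoices` from an agent returns only the caller's tenant's rows; the filter is applied by the database itself, not by any code the agent could route around.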
During my conversations with Postgres legends Greg Kemnitz and Curt Kolovson, we discussed this exact problem. As Greg noted: "Historically, developers are having a reaction to the word security: 'That sounds scary. That sounds hard. I don’t really want to look at it.'"
The reality is that Postgres powers a huge share of applications, yet most of the developers behind them don't know what RLS is.
At Constructive, we're trying to remove that friction directly. The policies you'd normally have to author by hand and remember to apply are generated from your access model and compiled into the schema at table creation. The defaults are restrictive, not permissive: a new table is closed until you describe who can see it. And the same testing framework we use for application code runs against the database itself, exercising RLS the way an attacker—or an over-eager agent—would. Compiler-driven RLS, generated policies, secure defaults, and tests against the policies themselves are individually old ideas; what's new is making them the path of least resistance, so a developer who'd rather not look at security still ends up shipping a system that holds up.
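To make the testing idea concrete, here is the shape such a policy test can take in plain SQL (the schema and names are hypothetical, and this is a sketch of the approach rather than our actual framework). The whole test lives in one rolled-back transaction, so it can run against a real database without leaving anything behind:

```sql
BEGIN;

-- Minimal fixture: a closed-by-default table with a per-tenant policy.
CREATE TABLE notes (tenant_id text NOT NULL, body text);
ALTER TABLE notes ENABLE ROW LEVEL SECURITY;
ALTER TABLE notes FORCE ROW LEVEL SECURITY;
CREATE POLICY per_tenant ON notes
  USING (tenant_id = current_setting('app.tenant_id'));

-- Seed data as one tenant.
SET LOCAL app.tenant_id = 'tenant_a';
INSERT INTO notes VALUES ('tenant_a', 'secret');

-- Probe as another tenant, the way an attacker or a rogue agent would.
SET LOCAL app.tenant_id = 'tenant_b';
SELECT count(*) = 0 AS isolated FROM notes;  -- must come back true

ROLLBACK;
```

The assertion exercises the policy itself, not the application code in front of it: if someone drops or loosens the policy, this test fails even though every application-level test still passes.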
That, in concrete terms, is what Greg meant by "translation" during our conversation: "Translating that into an understandable framework that developers can relate to without it being burdensome—that is the contribution."
From Passive Storage to Active Trust
The deeper shift behind all of this is in what a database is for. For most of our careers, a database was something you put data into and got data out of—a passive substrate, with security pushed up into the application layer. That arrangement worked while the application was written by a small team, reviewed line by line by humans who shared the same intuitions about what should not happen.
That world is closing. When code is being generated faster than it can be read, the trust boundary can't live in code that nobody actually read. It has to live in the layer that owns the data and is in a position to refuse a request that breaks the rules—regardless of which agent, prompt, or pipeline produced it.
So the database stops being passive. It becomes the part of the stack that knows who you are, what you're allowed to see, and what it won't let happen—by construction, not by review. That isn't a far-future claim; it's the requirement of the next several years, and it's the most honest answer I have to the question of how we ship AI-built systems we can stand behind.
Greg's other line stuck with me through the rest of the conference: "Once your stuff gets widely adopted, the world will be better." I think he's right.
In Part 3, we will explore the mechanics of making Postgres modular and putting your database in package.json.