
Roundtable: Testing and Validation in the Age of Agentic AI
AI can now change your codebase faster than you can understand it.
In Part I, we explored how, when code is abundant, “good software” is defined less by code quality and more by understanding, constraints, and evolution. Part II is about the proof: what kind of validation actually earns trust in that world?
Unit tests, PR reviews, and coverage targets were built for a time when humans wrote the code and change was expensive. In agentic workflows, code is cheap, continuous, and often unread.
So where does confidence come from now?
In this small-group roundtable, engineering leaders will explore:
- What should be validated when AI agents generate and modify code
- How teams are shifting from testing correctness to validating system behavior and intent
- Which testing practices scale and which create a false sense of safety
- How guardrails, constraints, and feedback loops replace manual review
Testing is moving up the stack from verifying code to validating systems.
Join peers for the next session in the Engineering Judgment in the Age of Agentic AI series, and compare how leaders are rebuilding trust, safety, and velocity as AI becomes a core contributor to production software.
Host: Ben Segal, Senior Director of Engineering @ Swift Navigation
Ben leads the engineering team behind the Skylark Cloud GNSS Correction Service—a global, real-time platform enabling precise positioning for autonomous systems, mobile devices, and geospatial applications. He brings deep experience building and scaling cloud services, real-time data systems, and ML-powered platforms, and holds multiple patents in advanced positioning technologies. An entrepreneurial builder at the intersection of deep tech and software, Ben has guided multiple products from concept to launch.
