Andrew McNamara, Director of Applied Machine Learning @ Shopify, joins the ELC podcast to share insights on building agentic platforms at scale, like Sidekick, where reliability for users must stay at the forefront. Andrew describes Shopify's building philosophy and what it means to cultivate a prototype-first culture while prioritizing hiring early-stage talent. We cover Sidekick’s development journey and how user feedback shaped its product vision, why evaluation is so important for establishing ground truth sets, and the benefit of user-driven use cases. Andrew also dissects how they made product design decisions, such as building proactive agents and identifying subagent specializations.
Most engineering teams have adopted AI coding tools. The productivity gains are real. Research across 2,172 developer-weeks shows roughly 25% year-over-year improvement for regular AI users, along with meaningful gains in test coverage and review efficiency.
But adoption was the easy part. As AI becomes embedded in daily workflows, new questions are surfacing: code churn is climbing faster than output, duplication is expanding, and the metrics most teams rely on weren't designed to capture what's changing underneath.
This session digs into what the data actually shows about AI-assisted development: where the gains are durable, where they're fragile, and what engineering leaders should be paying attention to as AI moves from experiment to everyday practice.
Discussion based on findings from the AI Multiplier Effect report: https://www.gitkraken.com/reports/ai-multiplier-effect
# Roundtable
Inbal Shani (CPO and Head of R&D @ Twilio) deconstructs the transformation of the R&D org at Twilio! We explore the shift from a GM-led model to a unified platform strategy and “why structure must always follow strategy.” Inbal shares her framework for moving from output-focused metrics to input goals, prioritizing “time-to-value,” and the nuances of measuring AI products. We discuss using "R&D roadshows" as a strategic company transformation tool and why engineering leaders must master product positioning. We also dive into mental models for future-proofing your business, from "working backwards" to solve customer problems, to embedding systems thinking into the DNA of your engineering team, and critical questions to identify and optimize decisions around your company’s moat.
AI agents are quickly moving beyond proof-of-concepts and into real engineering workflows that can materially change how teams test software. But for engineering leaders, the real question is not whether the technology is interesting — it’s where it actually fits in a modern QA strategy.
This session takes a clear, practical look at what engineering teams should understand about AI testing agents today and how they may reshape testing over the next few years. We’ll explore the strategic signals driving adoption, the types of testing work agents are beginning to handle well, and where human judgment remains essential.
You’ll also see a short, focused demonstration of Autify Aximo as an illustrative example of an autonomous testing agent — how it operates in practice, what kinds of workflows it can validate, and the outcomes teams should realistically expect when adopting this approach.
Key Topics:
- The real capabilities of AI testing agents today — and the misconceptions leaders should avoid
- Where agents create the most value: expanding coverage, reducing manual validation, and handling workflows that are difficult to automate traditionally
- How to design safe, low-risk pilots that prove ROI before scaling adoption
Ready to give it a try? Try Aximo free → https://autify.com/products/aximo
Prefer a personalized walkthrough? Book a 1:1 focused on your use case → https://meetings.hubspot.com/ryo-chikazawa/ai-tester
# Roundtable
Enterprise customers demand 99.9% availability, regardless of how the underlying software is built. In this episode, Murali Swaminathan (CTO @ Freshworks) discusses how enterprises actually win with AI! We explore the “Architecture of Predictability” – proactive architectural safeguards to scale “responsible AI by design” across a global organization serving 75,000 customers. Murali shares his leadership playbook for implementing the technical safeguards and product trust controls that empower hundreds of engineers to build safely. We also dive into the shift from deterministic flowcharts to “workflows with a brain” and why backend systems engineers are the secret bedrock of agentic products. Plus, Murali deconstructs the dual evolution required of modern leaders: mastering strategic thinking at the business level while cultivating systems thinking at the engineering level.
AI is starting to change how teams respond to incidents, assisting with detection, triage, remediation and reporting. This advancement holds enormous potential to reduce on-call burden, but also raises important questions for engineering leaders about safely introducing these capabilities into live, mission-critical environments.
This session brings together a team of experts to explore how teams are experimenting with agentic incident response today. We’ll share tips on getting started, how much on-call work can be safely automated, and setting the appropriate guardrails to avoid introducing risk into production.
Key Topics:
- Where AI agents can improve your on-call process
- Strategies to avoid runaway automation that exposes you to further risk
- Maintaining the right blend of human and agentic decision-making in incident response
If you're interested in modernizing your incident response, you can try xMatters (free): https://www.xmatters.com/
# Roundtable
Jon Hyman (CTO & Co-Founder @ Braze) returns to the podcast to share how he balances a mature, public-company roadmap with visionary AI innovation! We deconstruct Braze’s quantitative "Product Health" framework - a scoring system used to resolve competing prioritizations and mandate technical remediation. We also discuss shifting engineering leaders to think like GMs, and how to realign teams by connecting abstract “vision” to specific releases, goals & outcomes. Plus, Jon’s three-tier mental model for AI products, how to identify AI features that actually drive revenue, and reimagining your product for future channels, teams, and skills.

Jason Meltzer · Apr 1st, 2026
What actually happened when one team adopted AI tools, told through 14 months of git data.
# AI Productivity Metrics
# AI
# Engineering Leadership
With Sebastiano Armeli (Engineering Leadership @ Meta), we discuss what effective leadership looks like across three organizational archetypes: product-led, business-led, and design-led companies. Drawing from his leadership journey at places like Meta, Spotify, Snap, and PayPal, Sebastiano deconstructs the situational leadership frameworks required to thrive in different environments. Plus, we discuss how AI is moving managers from implementation to architecture, why the next bottleneck is managing the overhead of high-velocity experimentation, and the future of team topology, where AI enables a single leader to oversee high-scale teams of 30–50 people. Whether you are scaling a design-driven startup or navigating a complex business-led enterprise, this conversation provides a framework for aligning your leadership style with your organization's core incentives.

