AI in Software Development: Perspectives from the CEO of Jellyfish (2026)
Artificial Intelligence
Dec 16, 2025


AI is transforming software development by enhancing every phase of the SDLC—planning, coding, review, testing, and release. Jellyfish CEO Andrew Lau advises leaders to prioritize adoption and evolve metrics beyond coding to measure value-stream outcomes, ensuring genuine productivity gains rather than just local speed-ups.
Why this matters now
AI is no longer merely an add-on to coding; it’s reshaping how software workflows operate from start to finish—from planning and review to release and operations. In his McKinsey interview, Andrew Lau (CEO, Jellyfish) argues that the impact hinges on adoption and measurement across the entire lifecycle, not just on code generation.
Key points
AI drives transformation in SDLC: The software lifecycle is being redefined as teams incorporate AI assistance in planning, coding, review, testing, and release.
Productivity measurement must evolve: Organizations should move beyond output metrics (e.g., lines of code) to value-stream measures tied to business outcomes.
Smarter reviews and testing: AI speeds up code review and test generation, but end-to-end throughput only improves when surrounding processes are modernized.
What’s new or how it works
From Lau’s perspective, the successful players in 2026 will (1) integrate AI across the SDLC, (2) invest in enablement and change management, and (3) update metrics to track flow efficiency, quality, and customer impact. Jellyfish’s 2025 research shows increasing adoption and a belief that a significant portion of development will shift to AI over time—but real ROI depends on program-level adoption, not sporadic use.
Practical steps (playbook for 2026)
Instrument the entire value stream
Track lead time, review time, deployment frequency, change failure rate, and MTTR alongside AI usage, not just coding speed. Use these to set guardrails and demonstrate real impact; a minimal metrics sketch follows this list.
Redesign code review with AI involved
Standardize prompts and policies for AI-assisted reviews; require human approval for risky changes; measure defect escape rate and rework over time. A simple risk-gate sketch also follows the list.
Move testing left
Use AI to propose test cases from requirements, generate unit tests with coverage targets, and auto-summarize flaky test patterns for remediation. Tie outcomes to escaped defects and incident counts.
Adoption before expansion
Lau emphasizes that adoption drives impact. Start with a few teams, provide training and playbooks, and scale only when value-stream metrics improve.
Update the measurement model
Replace local productivity proxies (PR count, lines of code) with flow and outcome metrics (cycle time by stage, time to user value). Align incentives so teams optimize the whole system.
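To make "instrument the value stream" and "update the measurement model" concrete, here is a minimal sketch of how a team might compute per-stage cycle time, deployment frequency, change failure rate, and MTTR from its own delivery records. The event shapes and field names are assumptions for illustration, not any particular tool's schema.

```python
"""Minimal sketch: value-stream metrics from delivery events (assumed schema)."""
from dataclasses import dataclass
from datetime import datetime
from statistics import mean

@dataclass
class WorkItem:
    # Assumed stage-boundary timestamps for one unit of work.
    started: datetime        # work begins (planning done)
    review_opened: datetime  # PR opened
    merged: datetime         # PR merged
    deployed: datetime       # change reaches production

@dataclass
class Deployment:
    at: datetime
    caused_failure: bool                  # deployment triggered an incident
    restored_at: datetime | None = None   # service restored (if it failed)

def stage_cycle_times(items: list[WorkItem]) -> dict[str, float]:
    """Average hours per stage, so bottlenecks outside coding are visible."""
    def avg_hours(pairs):
        return mean((end - start).total_seconds() / 3600 for start, end in pairs)
    return {
        "coding_h": avg_hours((i.started, i.review_opened) for i in items),
        "review_h": avg_hours((i.review_opened, i.merged) for i in items),
        "release_h": avg_hours((i.merged, i.deployed) for i in items),
    }

def flow_metrics(deploys: list[Deployment], window_days: int) -> dict[str, float]:
    """Deployment frequency, change failure rate, and MTTR over a window."""
    restored = [d for d in deploys if d.caused_failure and d.restored_at]
    return {
        "deploys_per_week": len(deploys) / (window_days / 7),
        "change_failure_rate": (
            sum(d.caused_failure for d in deploys) / len(deploys) if deploys else 0.0
        ),
        "mttr_hours": (
            mean((d.restored_at - d.at).total_seconds() / 3600 for d in restored)
            if restored else 0.0
        ),
    }
```

Run over a rolling window, these numbers provide the guardrails the playbook calls for; the per-stage breakdown shows whether AI is speeding up coding while review or release remains the bottleneck.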
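For the review redesign, the routing rule can start very simple: AI drafts comments on everything, but certain changes always require a human approver. The path prefixes and size threshold below are illustrative placeholders, not recommendations.

```python
"""Minimal sketch: routing risky changes past the AI reviewer to a human."""
from dataclasses import dataclass

# Assumed illustrative policy knobs; tune to your codebase and risk appetite.
RISKY_PATH_PREFIXES = ("migrations/", "auth/", "billing/")
MAX_AI_ONLY_LINES = 200

@dataclass
class ChangeSet:
    files: list[str]
    lines_changed: int
    touches_secrets: bool = False

def requires_human_approval(change: ChangeSet) -> bool:
    """True if the change must get a human approver, whatever the AI says."""
    if change.touches_secrets:
        return True
    if change.lines_changed > MAX_AI_ONLY_LINES:
        return True
    return any(f.startswith(RISKY_PATH_PREFIXES) for f in change.files)

# Example: a small docs change can flow through AI-assisted review alone.
assert not requires_human_approval(ChangeSet(files=["docs/intro.md"], lines_changed=12))
assert requires_human_approval(ChangeSet(files=["auth/session.py"], lines_changed=12))
```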
Reality check: Bain’s analysis (summarized by ITPro) finds that coding accounts for less than 40% of a developer’s day; coding-only enhancements won’t transform outcomes unless planning, review, and release are also streamlined.
Examples you can pilot this quarter
Review accelerator: AI suggests diffs to focus on, flags risky patterns, and drafts comments; maintainers approve/reject. Measure review turnaround and post-merge defects.
Requirements-to-tests: AI converts acceptance criteria into test skeletons; engineers complete edge cases. Track coverage and escaped bugs. A minimal sketch follows this list.
Ops summarizer: AI generates incident timelines and follow-up tasks after postmortems; measure MTTR and action closure rates.
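As a starting point for the requirements-to-tests pilot, the sketch below turns acceptance criteria into pytest skeletons. The draft_test_body function is a hypothetical stand-in for a call to whatever AI assistant you use; everything else is plain templating, and engineers still complete the edge cases.

```python
"""Minimal sketch: acceptance criteria -> pytest skeletons (engineers finish them)."""
import re

def draft_test_body(criterion: str) -> str:
    # Hypothetical stand-in for an AI assistant call; here we emit a stub
    # so the structure works without any external service.
    return (
        "    # TODO: engineer completes edge cases\n"
        f"    raise NotImplementedError({criterion!r})"
    )

def criteria_to_test_skeletons(criteria: list[str]) -> str:
    """Emit one pytest-style test function per acceptance criterion."""
    out = []
    for criterion in criteria:
        name = re.sub(r"[^a-z0-9]+", "_", criterion.lower()).strip("_")
        out.append(f"def test_{name}():")
        out.append(f'    """Acceptance criterion: {criterion}"""')
        out.append(draft_test_body(criterion))
        out.append("")
    return "\n".join(out)

if __name__ == "__main__":
    print(criteria_to_test_skeletons([
        "User can reset password via email",
        "Invalid reset tokens are rejected",
    ]))
```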
FAQs
Q1: How does AI improve developer productivity?
AI automates repetitive tasks and speeds up reviews and tests, but sustainable gains come from measuring and improving the full flow, not just coding speed. (McKinsey & Company)
Q2: What role does AI play in code review?
AI identifies risky changes, drafts comments, and focuses reviewer attention, while humans retain approval. Teams should track review time, defect escape rate, and rework. (McKinsey & Company)
Q3: How is the SDLC affected overall?
According to Jellyfish, the SDLC is being redefined: adoption drives impact, measurement must evolve, and a new wave of tools is emerging, necessitating updated workflows and skills. (LinkedIn)
Sources:
McKinsey & Company: interview with Andrew Lau (Dec 10, 2025).
Jellyfish newsroom: 2025 State of Engineering Management highlights.
Jellyfish (LinkedIn): “SDLC is being redefined; adoption drives impact; measurement must evolve.”
ITPro, on Bain research: coding-only gains are “unremarkable” without lifecycle redesign.
Next Steps
Want help measuring AI’s impact across your entire lifecycle, not just coding? Generation Digital can design a value-stream measurement model, pilot AI in code review and testing, and build the adoption plan.