AI in Software Development: Insights from Jellyfish CEO (2026)
AI
16 Dec 2025


AI is reshaping software development by augmenting every SDLC stage—planning, coding, review, testing and release. Jellyfish CEO Andrew Lau advises leaders to prioritise adoption and evolve measurement beyond coding metrics to value-stream outcomes, ensuring real productivity gains rather than local speed-ups.
Why this matters now
AI is no longer a sidecar to coding; it’s reshaping how software work flows end-to-end—from planning and review to release and operations. In his McKinsey interview, Andrew Lau (CEO, Jellyfish) argues that impact depends on adoption and measurement across the full lifecycle, not just code generation.
Key points
AI drives SDLC transformation: The software lifecycle is being redefined as teams instrument planning, coding, review, testing and release with AI assistance.
Productivity measurement must evolve: Organisations should go beyond output metrics (e.g., lines of code) to value-stream measures tied to business outcomes.
Smarter reviews and testing: AI accelerates code review and test generation, but end-to-end throughput only improves when surrounding processes are modernised.
What’s new or how it works
From Lau’s perspective, the winners in 2026 will (1) embed AI across the SDLC, (2) invest in enablement and change management, and (3) modernise metrics to track flow efficiency, quality and customer impact. Jellyfish’s 2025 research shows rising adoption and belief that a significant share of development will shift to AI over time—but real ROI hinges on programme-level adoption, not pockets of usage.
Practical steps (playbook for 2026)
1. Instrument the entire value stream: Track lead time, review time, deployment frequency, change failure rate, and MTTR alongside AI usage, not just coding speed. Use these to set guardrails and show real impact (a metrics sketch follows this list).
2. Redesign code review with AI in the loop: Standardise prompts and policies for AI-assisted reviews; require human approval for risky changes; measure defect escape rate and rework over time (a review-gate sketch also follows).
3. Shift testing left: Use AI to propose test cases from requirements, generate unit tests with coverage targets, and auto-summarise flaky test patterns for remediation. Tie outcomes to escaped defects and incident counts.
4. Adoption before expansion: Lau stresses that adoption drives impact. Start with a few teams, deliver training and playbooks, and scale only when value-stream metrics improve.
5. Update the measurement model: Replace local productivity proxies (PR count, LoC) with flow and outcome metrics (cycle time by stage, time to user value). Align incentives so teams optimise the whole system.
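To make steps 1 and 5 concrete, here is a minimal sketch of value-stream instrumentation, assuming you can export per-item stage timestamps plus deployment and incident records from your own tracker and CI. The WorkItem fields and stage names are illustrative assumptions, not any particular vendor's schema (Jellyfish included).

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class WorkItem:
    opened: datetime        # accepted into the backlog (planning starts)
    first_commit: datetime  # coding starts
    review_done: datetime   # final approval on the change
    deployed: datetime      # released to users

def avg(deltas: list[timedelta]) -> timedelta:
    # timedelta supports summation and division, so a plain mean works
    return sum(deltas, timedelta()) / len(deltas)

def cycle_time_by_stage(items: list[WorkItem]) -> dict[str, timedelta]:
    """Average time per SDLC stage, so bottlenecks outside coding are visible."""
    return {
        "plan":    avg([i.first_commit - i.opened for i in items]),
        "build":   avg([i.review_done - i.first_commit for i in items]),
        "release": avg([i.deployed - i.review_done for i in items]),
    }

def change_failure_rate(deploys: int, failed_deploys: int) -> float:
    return failed_deploys / deploys

def mttr(incidents: list[tuple[datetime, datetime]]) -> timedelta:
    """Mean time to restore: average of (resolved - opened) per incident."""
    return avg([resolved - opened for opened, resolved in incidents])

if __name__ == "__main__":
    d = datetime(2026, 1, 5, 9, 0)
    items = [
        WorkItem(d, d + timedelta(days=2), d + timedelta(days=5), d + timedelta(days=6)),
        WorkItem(d, d + timedelta(days=1), d + timedelta(days=3), d + timedelta(days=7)),
    ]
    print(cycle_time_by_stage(items))
    print(change_failure_rate(deploys=40, failed_deploys=3))  # 0.075
    print(mttr([(d, d + timedelta(hours=4)), (d, d + timedelta(hours=2))]))  # 3:00:00
```

If coding speeds up but the "plan" and "release" averages stay flat, AI is producing a local speed-up rather than a value-stream gain, which is exactly the trap Lau warns about.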
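For step 2, a sketch of the escalation rule, assuming your CI can expose changed paths and diff size to a merge check. The path prefixes, threshold, and field names are hypothetical placeholders to tune for your own codebase, not a standard policy.

```python
from dataclasses import dataclass

# Illustrative policy inputs; adjust paths and thresholds to your repository.
RISKY_PATH_PREFIXES = ("auth/", "billing/", "migrations/")
MAX_FAST_TRACK_DIFF_LINES = 200

@dataclass
class ChangeRequest:
    paths: list[str]        # files touched by the change
    diff_lines: int         # total added plus removed lines
    ai_review_passed: bool  # the AI reviewer raised no blocking findings

def needs_senior_approval(cr: ChangeRequest) -> bool:
    """AI review never merges alone; risky or large changes escalate further."""
    touches_risky_area = any(p.startswith(RISKY_PATH_PREFIXES) for p in cr.paths)
    return (touches_risky_area
            or cr.diff_lines > MAX_FAST_TRACK_DIFF_LINES
            or not cr.ai_review_passed)

def defect_escape_rate(found_in_prod: int, found_total: int) -> float:
    """Share of defects that slipped past review and testing into production."""
    return found_in_prod / found_total

if __name__ == "__main__":
    cr = ChangeRequest(paths=["billing/invoice.py"], diff_lines=80, ai_review_passed=True)
    print(needs_senior_approval(cr))  # True: touches a risky path
    print(defect_escape_rate(4, 50))  # 0.08
```

Tracking defect escape rate before and after the gate goes live is what tells you whether AI-assisted review is improving quality or merely moving faster.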
Reality check: Bain’s analysis (summarised by ITPro) finds that coding takes up less than 40% of a developer’s day, so coding-only boosts won’t transform outcomes unless planning, review, and release are streamlined as well.
Examples you can pilot this quarter
Review accelerator: AI suggests diffs to focus on, flags risky patterns, and drafts comments; maintainers approve/reject. Measure review turnaround and post-merge defects.
Requirements-to-tests: AI converts acceptance criteria into test skeletons; engineers complete edge cases. Track coverage and escaped bugs (see the sketch after this list).
Ops summariser: AI generates incident timelines and follow-up tasks after postmortems; measure MTTR and action closure rates.
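As a concrete starting point for the requirements-to-tests pilot, here is a minimal sketch that turns acceptance criteria into pytest skeletons for engineers to complete. In a real pilot an AI assistant would draft the test bodies; that generation step is stubbed out here, and the criteria strings are invented examples.

```python
import re

def slugify(criterion: str) -> str:
    """Turn an acceptance criterion into a valid test function name."""
    return re.sub(r"[^a-z0-9]+", "_", criterion.lower()).strip("_")

def test_skeletons(criteria: list[str]) -> str:
    """Emit one pytest skeleton per criterion, traceable back to the requirement."""
    blocks = []
    for c in criteria:
        blocks.append(
            f"def test_{slugify(c)}():\n"
            f'    """Acceptance criterion: {c}"""\n'
            f"    # TODO: AI drafts the happy path; engineer adds edge cases\n"
            f"    raise NotImplementedError\n"
        )
    return "\n".join(blocks)

if __name__ == "__main__":
    print(test_skeletons([
        "User can reset password via email link",
        "Reset link expires after 30 minutes",
    ]))
```

Because each skeleton carries its criterion in the docstring, coverage and escaped-bug reports can be traced back to specific requirements, which is the measurement tie-in the pilot needs.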
FAQs
Q1: How does AI improve developer productivity?
It automates repetitive tasks and accelerates reviews and tests, but sustainable gains come from measuring and improving the full flow, not just coding speed. (McKinsey & Company)
Q2: What role does AI play in code review?
AI surfaces risky changes, drafts comments, and focuses reviewer attention, while humans retain approval. Teams should track review time, defect escape rate, and rework. (McKinsey & Company)
Q3: How is the SDLC affected overall?
Per Jellyfish, the SDLC is being redefined: adoption drives impact, measurement must evolve, and a new wave of tools is arriving, requiring updated workflows and skills. (LinkedIn)
Sources:
McKinsey & Company: interview with Andrew Lau (10 Dec 2025).
Jellyfish newsroom: 2025 State of Engineering Management highlights.
Jellyfish on LinkedIn: “SDLC is being redefined; adoption drives impact; measurement must evolve.”
ITPro, summarising Bain research: coding-only gains are “unremarkable” without lifecycle redesign.
Next Steps
Want help measuring AI’s impact across your entire lifecycle, not just coding? Generation Digital can design a value-stream measurement model, pilot AI in code review and testing, and build the adoption plan.