AI in Software Development: Insights from Jellyfish CEO (2026)

AI

Dec 16, 2025

A team of five collaborates at a whiteboard showing a software development workflow labelled “Plan,” “Code,” “Review,” “Test,” and “Release,” while colleagues work at computers in the background.

AI is reshaping software development by augmenting every SDLC stage—planning, coding, review, testing and release. Jellyfish CEO Andrew Lau advises leaders to prioritise adoption and evolve measurement beyond coding metrics to value-stream outcomes, ensuring real productivity gains rather than local speed-ups.

Why this matters now

AI is no longer a sidecar to coding; it’s reshaping how software work flows end-to-end—from planning and review to release and operations. In his McKinsey interview, Andrew Lau (CEO, Jellyfish) argues that impact depends on adoption and measurement across the full lifecycle, not just code generation.

Key points

  • AI drives SDLC transformation: The software lifecycle is being redefined as teams instrument planning, coding, review, testing and release with AI assistance.

  • Productivity measurement must evolve: Organisations should go beyond output metrics (e.g., lines of code) to value-stream measures tied to business outcomes.

  • Smarter reviews and testing: AI accelerates code review and test generation, but end-to-end throughput only improves when surrounding processes are modernised.

What’s new or how it works

From Lau’s perspective, the winners in 2026 will (1) embed AI across the SDLC, (2) invest in enablement and change management, and (3) modernise metrics to track flow efficiency, quality and customer impact. Jellyfish’s 2025 research shows rising adoption and belief that a significant share of development will shift to AI over time—but real ROI hinges on programme-level adoption, not pockets of usage.

Practical steps (playbook for 2026)

  1. Instrument the entire value stream
    Track lead time, review time, deployment frequency, change failure rate, and MTTR alongside AI usage, not just coding speed. Use these to set guardrails and show real impact; a minimal metrics sketch follows this list.

  2. Redesign code review with AI in the loop
    Standardise prompts and policies for AI-assisted reviews; require human approval for risky changes; measure defect escape rate and rework over time (an approval-gate sketch also follows this list).

  3. Shift testing left
    Use AI to propose test cases from requirements, generate unit tests with coverage targets, and auto-summarise flaky test patterns for remediation. Tie outcomes to escaped defects and incident counts (a requirements-to-tests sketch appears under the pilot examples below).

  4. Adoption before expansion
    Lau stresses that adoption drives impact. Start with a few teams, deliver training and playbooks, and scale only when value-stream metrics improve.

  5. Update the measurement model
    Replace local productivity proxies (PR count, lines of code) with flow and outcome metrics (cycle time by stage, time to user value). Align incentives so teams optimise the whole system; the sketch below breaks cycle time down by stage.
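
A minimal Python sketch of the metrics in steps 1 and 5, assuming your delivery tooling can export per-change timestamps and incident records; the record fields here (opened, review_started, merged, deployed) are illustrative, not any vendor's schema.

from dataclasses import dataclass
from datetime import datetime
from statistics import mean

@dataclass
class Change:
    opened: datetime          # work item or PR opened
    review_started: datetime  # first review activity
    merged: datetime
    deployed: datetime
    caused_failure: bool      # change was linked to a production incident

@dataclass
class Incident:
    started: datetime
    resolved: datetime

def hours(start: datetime, end: datetime) -> float:
    return (end - start).total_seconds() / 3600

def flow_metrics(changes: list[Change], incidents: list[Incident], window_days: int) -> dict:
    """Value-stream metrics: cycle time by stage, deploy frequency, CFR, MTTR.
    Assumes at least one change in the reporting window."""
    return {
        # Cycle time by stage (step 5), as mean hours per change.
        "wait_for_review_h": mean(hours(c.opened, c.review_started) for c in changes),
        "review_h": mean(hours(c.review_started, c.merged) for c in changes),
        "merge_to_deploy_h": mean(hours(c.merged, c.deployed) for c in changes),
        "lead_time_h": mean(hours(c.opened, c.deployed) for c in changes),
        # DORA-style outcome metrics (step 1).
        "deploys_per_day": len(changes) / window_days,
        "change_failure_rate": sum(c.caused_failure for c in changes) / len(changes),
        "mttr_h": mean(hours(i.started, i.resolved) for i in incidents) if incidents else 0.0,
    }

Track these per team and per release window, and compare trends before and after AI rollout rather than single snapshots.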
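And a sketch of the approval gate from step 2, assuming the review bot can see changed file paths and any risk flags the AI raised; the risky-path list and size threshold are illustrative policy choices, not recommendations.

# Illustrative policy inputs; tune to your codebase and risk appetite.
RISKY_PATHS = ("auth/", "payments/", "migrations/")
MAX_AUTO_REVIEW_LINES = 400

def requires_human_approval(files_changed: list[str],
                            lines_changed: int,
                            ai_risk_flags: list[str]) -> bool:
    """AI may draft review comments, but these conditions always pull in a human."""
    touches_risky_area = any(f.startswith(RISKY_PATHS) for f in files_changed)
    return touches_risky_area or lines_changed > MAX_AUTO_REVIEW_LINES or bool(ai_risk_flags)

# Example: even a 12-line change under payments/ needs a human approver.
assert requires_human_approval(["payments/refunds.py"], 12, []) is True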

Reality check: Bain’s analysis (summarised by ITPro) finds that coding takes up less than 40% of a developer’s day; coding-only boosts won’t transform outcomes unless planning, review, and release are also streamlined.

Examples you can pilot this quarter

  • Review accelerator: AI suggests diffs to focus on, flags risky patterns, and drafts comments; maintainers approve/reject. Measure review turnaround and post-merge defects.

  • Requirements-to-tests: AI converts acceptance criteria into test skeletons; engineers complete edge cases. Track coverage and escaped bugs (a skeleton-shape sketch follows this list).

  • Ops summariser: AI generates incident timelines and follow-up tasks after postmortems; measure MTTR and action closure rates (a closure-rate sketch also follows).
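
To make the requirements-to-tests pilot concrete, here is a minimal sketch of the skeleton shape, assuming pytest conventions. In practice an AI assistant drafts the test body from each criterion; this deterministic stub only shows the output format engineers would complete.

import re

def criterion_to_test_stub(criterion: str) -> str:
    """Turn one acceptance criterion into a pytest skeleton for engineers to finish."""
    name = re.sub(r"[^a-z0-9]+", "_", criterion.lower()).strip("_")
    return (
        f"def test_{name}():\n"
        f'    """{criterion}"""\n'
        "    # TODO: arrange/act/assert plus edge cases; the AI draft goes here\n"
        "    raise NotImplementedError\n"
    )

criteria = [
    "Given a valid voucher code, the discount is applied at checkout",
    "Given an expired voucher code, the user sees an error",
]
print("\n".join(criterion_to_test_stub(c) for c in criteria))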
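For the ops summariser, closure rate is simple to compute once follow-up actions carry due dates; the FollowUpAction fields below are assumptions about your incident tracker's export, not a real schema.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class FollowUpAction:
    created: datetime
    due: datetime
    closed: datetime | None  # None while still open

def action_closure_rate(actions: list[FollowUpAction]) -> float:
    """Share of postmortem follow-up actions closed on or before their due date."""
    if not actions:
        return 1.0
    on_time = sum(1 for a in actions if a.closed is not None and a.closed <= a.due)
    return on_time / len(actions)

Pair this with the MTTR figure from the earlier metrics sketch to see whether postmortems actually change behaviour.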

FAQs

Q1: How does AI improve developer productivity?
AI automates repetitive tasks and accelerates reviews and tests, but sustainable gains come from measuring and improving the full flow, not just coding speed. (McKinsey & Company)

Q2: What role does AI play in code review?
AI surfaces risky changes, drafts comments, and focuses reviewer attention, while humans retain final approval. Teams should track review time, defect escape rate, and rework. (McKinsey & Company)

Q3: How is the SDLC affected overall?
Per Jellyfish, the SDLC is being redefined: adoption drives impact, measurement must evolve, and a new wave of tools is arriving, all of which requires updated workflows and skills. (Jellyfish via LinkedIn)

Sources:

  • McKinsey & Company: interview with Andrew Lau, CEO of Jellyfish (Dec 10, 2025).

  • Jellyfish newsroom: highlights from the 2025 State of Engineering Management report.

  • Jellyfish on LinkedIn: “SDLC is being redefined; adoption drives impact; measurement must evolve.”

  • ITPro on Bain research: coding-only gains are “unremarkable” without lifecycle redesign.

Next Steps

Want help measuring AI’s impact across your entire lifecycle, not just coding? Generation Digital can design a value-stream measurement model, pilot AI in code review and testing, and build the adoption plan.
