Introducing EVMbench: benchmarking AI for smart contract security

OpenAI

19 February 2026

EVMbench is a new benchmark from OpenAI and Paradigm that tests whether AI agents can detect, patch and exploit high‑severity smart contract vulnerabilities in Ethereum Virtual Machine (EVM) environments. Built from 120 curated vulnerabilities, it measures performance in economically meaningful scenarios using reproducible, sandboxed deployments and programmatic grading.

Smart contracts routinely secure $100B+ in open-source crypto assets. As AI agents get better at reading, writing and running code, it becomes essential to measure what they can do in environments where mistakes — or misuse — have real economic consequences.

That’s the motivation behind EVMbench, a new benchmark introduced by OpenAI in collaboration with Paradigm. EVMbench evaluates AI agents’ ability to detect, patch, and exploit high‑severity smart contract vulnerabilities in a sandboxed blockchain environment.

Updated as of 19/02/2026: Based on OpenAI’s publication “Introducing EVMbench”.

What is EVMbench?

EVMbench is designed to move beyond “can a model spot a bug in a snippet?” and towards economically meaningful security evaluation.

The benchmark draws on 120 curated vulnerabilities from 40 audits, with many sourced from open audit competitions. It also includes additional scenarios from the security auditing process for the Tempo blockchain (a payments‑oriented L1), to reflect where agentic stablecoin payments and on‑chain financial activity may grow.

How EVMbench works: three capability modes

EVMbench evaluates agents across three modes:

1) Detect

Agents audit a smart contract repository and are scored on recall of ground‑truth vulnerabilities and associated audit rewards.
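A minimal sketch of what recall-style scoring could look like for detect mode. The function name, field shapes, and the reward-weighting option are assumptions for illustration, not the actual EVMbench grading code:

```python
# Sketch of detect-mode scoring (names and weighting scheme are assumed):
# credit the agent for each ground-truth vulnerability it reports, optionally
# weighting each finding by the reward it earned in the original audit.

def detect_score(ground_truth, reported, rewards=None):
    """ground_truth / reported: sets of vulnerability ids; rewards: id -> value."""
    if not ground_truth:
        return 0.0
    found = ground_truth & reported
    if rewards is None:
        # Plain recall: fraction of known vulnerabilities the agent found.
        return len(found) / len(ground_truth)
    # Reward-weighted recall: high-severity, high-payout findings count more.
    total = sum(rewards.get(v, 0) for v in ground_truth)
    return sum(rewards.get(v, 0) for v in found) / total if total else 0.0
```

Reward weighting matters because stopping after one easy finding (the failure mode OpenAI describes) scores far worse than exhaustive auditing under this kind of metric.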

2) Patch

Agents modify vulnerable contracts and must preserve intended functionality while removing exploitability, verified via automated tests and exploit checks.
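To make "remove exploitability while preserving functionality" concrete, here is a toy Python model of a reentrancy-style flaw and its patch. This is an illustrative sketch, not an EVMbench task or Solidity code; the class and method names are invented:

```python
# Illustrative only: a Python model of a reentrancy-style bug. A withdraw that
# performs the external call before updating state can be re-entered by a
# malicious callback; the patch reorders to checks-effects-interactions.

class VulnerableVault:
    def __init__(self):
        self.balances = {}

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount

    def withdraw(self, user, send):
        amount = self.balances.get(user, 0)
        if amount > 0:
            send(user, amount)       # external call first: callback can re-enter...
            self.balances[user] = 0  # ...before the balance is zeroed

class PatchedVault(VulnerableVault):
    def withdraw(self, user, send):
        amount = self.balances.get(user, 0)
        if amount > 0:
            self.balances[user] = 0  # effects before interactions
            send(user, amount)
```

Note the patch changes only the order of two statements: deposits and honest withdrawals behave identically, which is exactly the "preserve intended functionality" constraint the automated tests enforce.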

3) Exploit

Agents execute end‑to‑end fund‑draining attacks against deployed contracts in a sandboxed environment, graded programmatically via transaction replay and on‑chain verification.

To make results reproducible, OpenAI built a Rust-based harness that deploys contracts, replays agent transactions deterministically, and restricts unsafe RPC methods. Exploit tasks run in an isolated local Anvil environment (not live networks), and the included vulnerabilities are historical and publicly documented.
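The replay-and-grade idea can be sketched in miniature. This toy model is entirely my own construction: the real harness is a Rust tool replaying EVM transactions against a local Anvil node, whereas this just applies recorded transfers to a balance map and checks whether the target's funds ended up drained:

```python
# Toy model (assumptions mine) of deterministic replay plus programmatic
# grading: apply a recorded transaction list in order over a balance map,
# reject invalid traces, then grade on the target contract's final balance.

def replay_and_grade(initial_balances, txs, target, drain_threshold=0):
    """txs: list of (sender, recipient, amount) transfers, replayed in order.
    The replay fails if any sender overspends; grading then checks whether
    the target's balance fell to the drain threshold or below."""
    balances = dict(initial_balances)
    for sender, recipient, amount in txs:
        if balances.get(sender, 0) < amount:
            return {"valid": False, "drained": False}
        balances[sender] -= amount
        balances[recipient] = balances.get(recipient, 0) + amount
    return {"valid": True,
            "drained": balances.get(target, 0) <= drain_threshold,
            "final": balances}
```

Because grading only looks at replayed state, the same agent trace always produces the same score, which is the reproducibility property the harness is built for.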

What the early results show

OpenAI reports that, in the exploit mode, GPT‑5.3‑Codex (via Codex CLI) achieved a score of 72.2%, compared with GPT‑5 at 31.9%.

Performance is notably weaker on detect and patch. OpenAI’s interpretation is revealing:

  • In detect, agents sometimes stop after finding one issue instead of auditing exhaustively.

  • In patch, removing subtle vulnerabilities while maintaining full functionality remains hard.

For practitioners, this is the key takeaway: execution‑optimised objectives (drain funds) can be easier for agents than the more ambiguous work of comprehensive auditing and safe remediation.

Why this matters for enterprises (not just crypto teams)

Even if you don’t ship smart contracts, EVMbench matters because it’s a preview of how AI cyber capability is evolving:

  • It measures end-to-end agent behaviour (planning + acting), not just static analysis.

  • It highlights where agents may become more effective for attackers — and where defenders can benefit first.

OpenAI positions EVMbench as both a measurement tool and a call to action: as models improve, developers and security teams should incorporate AI-assisted auditing into workflows.

Practical steps: how to use this insight defensively

If you’re responsible for security, engineering, or risk, here’s how to turn EVMbench into action.

Step 1: Treat agentic security as dual‑use

Agent capabilities can strengthen defence and enable misuse. Start by defining what “defensive use” means in your organisation (scanning, triage, patch suggestions) and where you will not allow automation (exploitation, offensive research without authorisation).

Step 2: Pilot AI-assisted auditing with guardrails

Good pilots have boundaries:

  • read-only access to repos by default

  • explicit human approval for any patch merge

  • logging of prompts, outputs and changes

  • test-driven verification and regression checks

Step 3: Measure outcomes, not novelty

Track:

  • time to identify likely issues

  • false positive rate

  • patch acceptance rate after review

  • time to reproduce and verify exploitability in a safe environment
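The metrics above can be computed from ordinary review logs. A minimal sketch, with hypothetical field names (nothing here is an EVMbench or vendor API):

```python
# Hypothetical pilot metrics: compute false positive rate and patch acceptance
# rate from review outcomes logged during an AI-assisted auditing pilot.
# Field names ('confirmed', 'status') are assumptions for illustration.

def false_positive_rate(findings):
    """findings: dicts with a boolean 'confirmed' flag set during human review."""
    if not findings:
        return 0.0
    return sum(1 for f in findings if not f["confirmed"]) / len(findings)

def patch_acceptance_rate(patches):
    """patches: dicts with 'status' in {'merged', 'rejected', 'pending'};
    pending patches are excluded so the rate reflects reviewed work only."""
    reviewed = [p for p in patches if p["status"] != "pending"]
    if not reviewed:
        return 0.0
    return sum(1 for p in reviewed if p["status"] == "merged") / len(reviewed)
```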

Step 4: Expand your control framework for agents

If you plan to use agents with tools (CI/CD, deployment, ticketing), adopt principles such as least privilege, step-by-step confirmations for risky actions, and separation of duties.
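Those principles can be made concrete with a small gate in front of agent tool calls. A minimal sketch, with all names invented: low-risk actions run directly, risky ones queue for human sign-off, and the approver must differ from the requester (separation of duties):

```python
# Sketch of a step-by-step confirmation gate for agent tool use (all names
# are assumptions): risky actions are held until a distinct human approves.

RISKY_ACTIONS = {"merge_patch", "deploy", "close_ticket"}

class ActionGate:
    def __init__(self):
        self.pending = []  # queued (actor, action, thunk) awaiting approval

    def request(self, actor, action, run):
        """Run low-risk actions immediately; queue risky ones for approval."""
        if action not in RISKY_ACTIONS:
            return run()
        self.pending.append((actor, action, run))
        return "pending approval"

    def approve(self, approver, index):
        """Execute a queued action; the requester may never self-approve."""
        actor, action, run = self.pending[index]
        if approver == actor:  # separation of duties
            raise PermissionError("requester cannot self-approve")
        self.pending.pop(index)
        return run()
```

In a real deployment the queue would live in your ticketing or CI system and every request and approval would be logged, but the control structure is the same.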

How Generation Digital can help

If you’re adopting AI for security workflows (or evaluating agents more broadly), we can help you:

  • design a measurable pilot for AI-assisted auditing

  • set governance and safe tooling controls

  • create enablement materials so teams use AI consistently

Summary

EVMbench is a practical benchmark for a real shift: AI agents are moving from “spot issues” to “execute end‑to‑end”, including exploitation. The best response is not to ignore it, but to adopt defensive AI workflows with strong guardrails — and measure what works.

Next steps

  • Identify one security workflow to pilot with AI assistance.

  • Set permissions and human approval gates.

  • Measure outcomes and iterate.

FAQs

Q1: What is EVMbench?
EVMbench is a benchmark from OpenAI and Paradigm that evaluates AI agents’ ability to detect, patch, and exploit high‑severity vulnerabilities in Ethereum Virtual Machine (EVM) smart contracts.

Q2: What data does EVMbench use?
It draws on 120 curated vulnerabilities from 40 audits, plus additional scenarios from the Tempo blockchain security auditing process.

Q3: What are the three modes in EVMbench?
Detect (find vulnerabilities), Patch (fix while preserving functionality), and Exploit (execute end‑to‑end fund‑draining attacks in a sandboxed environment).

Q4: What performance did OpenAI report?
OpenAI reports GPT‑5.3‑Codex scored 72.2% in exploit mode, compared to GPT‑5 at 31.9%.

Q5: How should organisations use this responsibly?
Use it to inform defensive AI adoption: start read‑only, require human approvals for changes, keep strong logs, and avoid enabling offensive misuse.


Generation Digital

UK Office

Generation Digital Ltd
33 Queen St,
London
EC4R 1AP
United Kingdom

Canada Office

Generation Digital Americas Inc
181 Bay St., Suite 1800
Toronto, ON, M5J 2T9
Canada

US Office

Generation Digital Americas Inc
77 Sands St,
Brooklyn, NY 11201,
United States

EU Office

Generation Digital Software
Elgee Building
Dundalk
A91 X2R3
Ireland

Middle East Office

6994 Alsharq 3890,
An Narjis,
Riyadh 13343,
Saudi Arabia

Company number: 256 9431 77 | Copyright 2026 | Terms and Conditions | Privacy Policy
