1M Context Window: Opus 4.6 & Sonnet 4.6 Pricing
13 Mar 2026

Claude Opus 4.6 and Claude Sonnet 4.6 now support a 1M token context window at standard pricing on the Claude Platform, with no long-context premium. Anthropic also increased media limits to 600 images or PDF pages per request, making it easier to analyse large document sets, repositories and multimodal inputs in one run.
Long-context models are only truly useful when they’re predictable: predictable pricing, predictable limits, and predictable performance when you push them beyond “chat-sized” prompts.
That’s why this update matters. Anthropic has made the full 1M token context window generally available for Claude Opus 4.6 and Claude Sonnet 4.6 at standard pricing, removing the need for special headers or long-context tiers. Alongside that, media limits have expanded to 600 images or PDF pages per request, opening up more realistic document and multimodal workflows.
What “1M context” actually means
A 1M token context window is the amount of input a model can consider in a single request — roughly the equivalent of hundreds of pages of text, sizeable codebases, or large collections of documents.
In practice, it enables workflows that were previously awkward or expensive:
analysing an entire policy library in one pass
comparing multiple contracts or reports without chunking
loading large repositories to reason across files and dependencies
reviewing long conversation or agent traces without losing earlier state
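To decide whether a workload like the above fits in one request, a common rule of thumb (an approximation, not an exact tokenizer) is roughly four characters of English text per token. A minimal sketch:

```python
# Rough heuristic: ~4 characters per token for English text.
# This is an approximation; use a real tokenizer for exact counts.
CHARS_PER_TOKEN = 4
CONTEXT_LIMIT = 1_000_000  # 1M token context window

def fits_in_context(texts, reserve_for_output=8_000):
    """Estimate total tokens for a set of documents and whether
    they fit in a single request, leaving room for the response."""
    est_tokens = sum(len(t) for t in texts) // CHARS_PER_TOKEN
    budget = CONTEXT_LIMIT - reserve_for_output
    return est_tokens, est_tokens <= budget

docs = ["word " * 100_000, "word " * 50_000]  # ~150k words of text
tokens, ok = fits_in_context(docs)
print(tokens, ok)
```

For production use, a provider's token-counting endpoint will give exact figures; the heuristic is only for quick feasibility checks.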
What’s changed in this release
1) Standard pricing across the full 1M window
The key shift is commercial and operational: standard pricing now applies across the entire 1M context window for Opus 4.6 and Sonnet 4.6. That means you’re not penalised with a separate “long-context premium” when your prompt crosses a threshold.
For teams building with long context, this reduces budgeting uncertainty — and it makes it much easier to decide when “just load the whole thing” is the right call.
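The budgeting effect is easy to sketch. With flat per-token pricing, cost is a simple linear function of tokens, with no threshold to model. The rates below are hypothetical placeholders for illustration only; substitute your account's actual pricing:

```python
def request_cost(input_tokens, output_tokens,
                 usd_per_m_input, usd_per_m_output):
    """Flat per-token pricing: the same rate applies whether the
    prompt is 10k tokens or the full 1M window."""
    return (input_tokens / 1_000_000 * usd_per_m_input
            + output_tokens / 1_000_000 * usd_per_m_output)

# Hypothetical rates, for illustration only.
cost = request_cost(800_000, 4_000,
                    usd_per_m_input=3.0, usd_per_m_output=15.0)
print(round(cost, 2))
```

Because there is no long-context tier, the same function covers every prompt size, which is exactly what makes capacity and budget planning simpler.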
2) Standard rate limits now apply
Anthropic also removed the dedicated 1M rate limits. Your normal account rate limits apply across every context length, which simplifies scaling and capacity planning.
3) Media limits increased to 600 images or PDF pages
If your workflows involve PDFs, scans, slide decks or image-heavy documents, the jump from 100 to 600 images or PDF pages per request is a big unlock. It’s especially relevant for:
due diligence packs
technical documentation collections
legal discovery-style bundles
product catalogues and image datasets
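Bundles larger than the per-request limit still need batching. A sketch of splitting a page list into requests of at most 600 pages (the limit named above):

```python
MAX_PAGES_PER_REQUEST = 600  # images or PDF pages per request

def batch_pages(pages, limit=MAX_PAGES_PER_REQUEST):
    """Split a list of pages/images into request-sized batches."""
    return [pages[i:i + limit] for i in range(0, len(pages), limit)]

bundle = list(range(1450))  # e.g. a 1,450-page due diligence pack
batches = batch_pages(bundle)
print([len(b) for b in batches])  # → [600, 600, 250]
```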
Practical examples: where 1M context makes a difference
Example 1: “One-shot” document analysis for governance and compliance
Instead of chunking a 300-page policy document, you can:
load the entire document
ask for a structured compliance gap analysis
generate a traceable summary with references to sections
The benefit is less context loss, fewer stitching errors, and fewer handoffs.
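One lightweight way to keep such summaries traceable (a sketch, assuming you prompt the model to cite sections in a "§3.2"-style format) is to verify that every cited section actually exists in the source:

```python
import re

def check_citations(summary, source_sections):
    """Return section citations in the summary that don't match
    any real section in the source document."""
    cited = set(re.findall(r"§(\d+(?:\.\d+)*)", summary))
    return sorted(cited - set(source_sections))

sections = {"1", "2", "2.1", "3", "3.2"}
summary = "Retention rules (§3.2) conflict with access policy (§2.1); see §4.1."
print(check_citations(summary, sections))  # → ['4.1']
```

A check like this catches hallucinated references cheaply, before a human reviewer spends time chasing them.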
Example 2: Codebase-level reasoning for engineering teams
With long context, teams can:
load repository structure and key files
ask for impact analysis (“what breaks if we change X?”)
generate migration plans or refactors that span multiple modules
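Assembling that repository context still benefits from a budget guard. A sketch (the file paths and the chars-per-token heuristic are assumptions for illustration):

```python
def build_repo_context(files, max_tokens=900_000, chars_per_token=4):
    """Concatenate files into one prompt, stopping before the
    estimated token budget is exceeded. `files` maps path -> contents."""
    parts, used = [], 0
    for path, text in files.items():
        cost = len(text) // chars_per_token
        if used + cost > max_tokens:
            break  # budget reached; remaining files are left out
        parts.append(f"### {path}\n{text}")
        used += cost
    return "\n\n".join(parts), used

files = {"src/app.py": "x = 1\n" * 10, "src/util.py": "def f(): pass\n"}
context, used = build_repo_context(files)
print(used)
```

In practice you would prioritise files by relevance (entry points, files touched by the change) rather than insertion order.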
Example 3: Multi-document comparisons for procurement and strategy
For example: compare 15 vendor proposals, extract requirements coverage, and produce a scored matrix — without repeated chunk-and-merge.
What to watch out for
Long context changes the shape of work — but it also changes cost and risk.
Token usage can spike quickly. Standard pricing doesn’t mean “cheap”; it means “consistent.”
Garbage in, garbage out scales too. If you load 600 pages of unstructured noise, you may get confidently unhelpful answers.
Governance still matters. For regulated sectors, ensure the model’s use aligns with your data handling rules.
Summary
The GA rollout of 1M context for Claude Opus 4.6 and Sonnet 4.6 at standard pricing, plus 600 images/PDF pages per request, makes long-context AI more practical for real enterprise workloads — document analysis, codebase reasoning, and large-scale comparisons.
Next steps
If you want to turn long-context capability into measurable business value, Generation Digital can help you:
identify the workflows that benefit most from 1M context
design prompts and evaluation so outputs stay reliable
integrate long-context AI into your stack and governance model
FAQs
What is the 1M context window?
It’s the ability to include up to one million tokens of input in a single request, letting the model reason across very large document sets, codebases or long multimodal inputs.
How does standard pricing help users?
Standard pricing means the same per-token rates apply across the full 1M context window, reducing budgeting surprises and making long-context workflows easier to adopt.
What are the updated media limits?
For requests using the 1M context window, Anthropic increased the limit to 600 images or PDF pages per request, supporting bigger document and multimodal analysis workloads.
Do rate limits change with 1M context?
Anthropic indicates your standard account limits now apply across every context length, simplifying scaling and capacity planning.
What’s a sensible first use case?
Start with a controlled, high-value document workflow (e.g., policy gap analysis, contract comparison, or a due diligence pack), where outputs can be validated against source sections.