1M Context Window: Opus 4.6 & Sonnet 4.6 Pricing



Mar 13, 2026



Claude Opus 4.6 and Claude Sonnet 4.6 now support a 1M token context window at standard pricing on the Claude Platform, with no long-context premium. Anthropic also increased media limits to 600 images or PDF pages per request, making it easier to analyse large document sets, repositories and multimodal inputs in one run.

Long-context models are only truly useful when they’re predictable: predictable pricing, predictable limits, and predictable performance when you push them beyond “chat-sized” prompts.

That’s why this update matters. Anthropic has made the full 1M token context window generally available for Claude Opus 4.6 and Claude Sonnet 4.6 at standard pricing, removing the need for special headers or long-context tiers. Alongside that, media limits have expanded to 600 images or PDF pages per request, opening up more realistic document and multimodal workflows.

What “1M context” actually means

A 1M token context window is the amount of input a model can consider in a single request — roughly the equivalent of hundreds of pages of text, sizeable codebases, or large collections of documents.

In practice, it enables workflows that were previously awkward or expensive:

  • analysing an entire policy library in one pass

  • comparing multiple contracts or reports without chunking

  • loading large repositories to reason across files and dependencies

  • reviewing long conversation or agent traces without losing earlier state
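Before loading "the whole thing", it helps to sanity-check whether a document set plausibly fits. The sketch below uses the common rule of thumb of roughly four characters per English token; real counts vary by tokenizer, so treat it as an estimate only, and the 2,000-characters-per-page figure is an illustrative assumption.

```python
# Rough feasibility check: will a document set fit in a 1M-token window?
# Uses the ~4-characters-per-token heuristic for English text; actual token
# counts depend on the tokenizer, so this is an estimate, not a guarantee.

CONTEXT_WINDOW = 1_000_000  # tokens
CHARS_PER_TOKEN = 4         # heuristic, not exact

def estimate_tokens(text: str) -> int:
    """Very rough token estimate from character count."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_window(documents: list[str], reserve_for_output: int = 8_000) -> bool:
    """True if the combined documents likely fit, leaving room for the reply."""
    total = sum(estimate_tokens(d) for d in documents)
    return total + reserve_for_output <= CONTEXT_WINDOW

# Example: a 300-page policy document at ~2,000 characters per page
policy = "x" * (300 * 2_000)     # ~600k characters, roughly 150k tokens
print(fits_in_window([policy]))  # → True: well within a 1M-token window
```

At this rate, a 300-page document uses only around 15% of the window, which is why multi-document comparisons become realistic.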

What’s changed in this release

1) Standard pricing across the full 1M window

The key shift is commercial and operational: standard pricing now applies across the entire 1M context window for Opus 4.6 and Sonnet 4.6. That means you’re not penalised with a separate “long-context premium” when your prompt crosses a threshold.

For teams building with long context, this reduces budgeting uncertainty — and it makes it much easier to decide when “just load the whole thing” is the right call.
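The budgeting effect is easy to see in a back-of-the-envelope calculation. The per-million-token prices below are placeholders, not Anthropic's actual rates; substitute current figures from Anthropic's pricing page.

```python
# Budgeting sketch: flat per-token pricing across any context length.
# The prices below are PLACEHOLDERS for illustration only -- replace them
# with the current rates from Anthropic's pricing page.

INPUT_PRICE_PER_MTOK = 15.0   # placeholder USD per million input tokens
OUTPUT_PRICE_PER_MTOK = 75.0  # placeholder USD per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Same per-token rate regardless of how long the prompt is."""
    return (input_tokens / 1e6) * INPUT_PRICE_PER_MTOK \
         + (output_tokens / 1e6) * OUTPUT_PRICE_PER_MTOK

# One full 1M-token request vs. the same input split into 10 chunks:
full = request_cost(1_000_000, 4_000)
chunked = sum(request_cost(100_000, 4_000) for _ in range(10))
# With flat pricing the input cost is identical either way; chunking only
# adds the extra per-chunk output (and orchestration) overhead.
```

The point is not that long context is cheap, but that the trade-off between one big request and many small ones becomes a pure engineering decision rather than a pricing one.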

2) Standard rate limits now apply

Anthropic also removed the dedicated 1M rate limits. Your normal account rate limits apply across every context length, which simplifies scaling and capacity planning.

3) Media limits increased to 600 images or PDF pages

If your workflows involve PDFs, scans, slide decks or image-heavy documents, the jump from 100 to 600 images or PDF pages per request is a big unlock. It’s especially relevant for:

  • due diligence packs

  • technical documentation collections

  • legal discovery-style bundles

  • product catalogues and image datasets
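When a bundle exceeds the per-request cap, the straightforward approach is to batch pages into consecutive groups of at most 600. A minimal sketch:

```python
# Batching sketch: split a large page set into requests that respect the
# 600 images-or-PDF-pages-per-request limit.

MAX_MEDIA_PER_REQUEST = 600

def batch_pages(pages: list, limit: int = MAX_MEDIA_PER_REQUEST) -> list[list]:
    """Group pages into consecutive batches of at most `limit` items."""
    return [pages[i:i + limit] for i in range(0, len(pages), limit)]

batches = batch_pages(list(range(1_450)))
print([len(b) for b in batches])  # → [600, 600, 250]
```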

Practical examples: where 1M context makes a difference

Example 1: “One-shot” document analysis for governance and compliance

Instead of chunking a 300-page policy document, you can:

  • load the entire document

  • ask for a structured compliance gap analysis

  • generate a traceable summary with references to sections

The benefit is less context loss, fewer stitching errors, and fewer handoffs.
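A request for this workflow might look like the sketch below, shown as a plain payload dict so it runs without credentials; with the `anthropic` SDK you would pass the same fields to `client.messages.create(...)`. The model ID is a placeholder, so check Anthropic's model list for the current Opus 4.6 / Sonnet 4.6 identifiers.

```python
# Request-shape sketch for a one-shot compliance gap analysis.
# "claude-opus-4-6" is a PLACEHOLDER model ID, not a confirmed identifier.

def build_gap_analysis_request(policy_text: str,
                               model: str = "claude-opus-4-6") -> dict:
    """Build a Messages-API-style payload that loads one full document."""
    return {
        "model": model,       # placeholder ID -- verify before use
        "max_tokens": 4_000,
        "messages": [{
            "role": "user",
            "content": (
                "Here is our full policy document:\n\n"
                f"{policy_text}\n\n"
                "Produce a structured compliance gap analysis. For every "
                "finding, cite the section number it refers to."
            ),
        }],
    }

payload = build_gap_analysis_request("Section 1: ...")
```

Asking for section citations in the prompt is what makes the summary traceable: each finding can be checked against the source.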

Example 2: Codebase-level reasoning for engineering teams

With long context, teams can:

  • load repository structure and key files

  • ask for impact analysis (“what breaks if we change X?”)

  • generate migration plans or refactors that span multiple modules
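Loading "repository structure and key files" usually means concatenating files into one prompt while staying under a token budget. A sketch, reusing the rough four-characters-per-token heuristic (the suffix filter and budget are illustrative choices):

```python
# Sketch: pack repository files into a single long-context prompt, stopping
# before a token budget is exceeded (~4 chars/token heuristic).
from pathlib import Path

def pack_repo(root: str, budget_tokens: int = 900_000,
              suffixes: tuple = (".py", ".md", ".toml")) -> str:
    """Concatenate matching files, each prefixed with its path, under budget."""
    parts, used = [], 0
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file() or path.suffix not in suffixes:
            continue
        text = path.read_text(errors="ignore")
        cost = len(text) // 4 + 50  # rough tokens plus per-file header overhead
        if used + cost > budget_tokens:
            break
        parts.append(f"=== {path} ===\n{text}")
        used += cost
    return "\n\n".join(parts)
```

In practice you would prioritise files (entry points, changed modules) rather than take them in sorted order, but the budget discipline is the same.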

Example 3: Multi-document comparisons for procurement and strategy

For example: compare 15 vendor proposals, extract requirements coverage, and produce a scored matrix — without repeated chunk-and-merge.
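It helps to specify the output shape you want up front. The sketch below shows one such shape, a requirements-coverage matrix with simple fractional scores, as a plain Python structure; the vendors and requirements are invented for illustration.

```python
# Illustrative target format for a scored requirements-coverage matrix.
# Vendor names and requirements are made up; in practice you would ask the
# model to emit this structure (e.g. as JSON) from the loaded proposals.

requirements = ["SSO support", "EU data residency", "API access"]
coverage = {
    "Vendor A": {"SSO support": True, "EU data residency": True,  "API access": False},
    "Vendor B": {"SSO support": True, "EU data residency": False, "API access": True},
}

def score(vendor: str) -> float:
    """Fraction of requirements the vendor covers."""
    met = sum(coverage[vendor][r] for r in requirements)
    return met / len(requirements)

matrix = {v: round(score(v), 2) for v in coverage}
print(matrix)  # → {'Vendor A': 0.67, 'Vendor B': 0.67}
```

Defining the matrix schema yourself makes the model's output easy to validate and to diff across runs.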

What to watch out for

Long context changes the shape of work — but it also changes cost and risk.

  • Token usage can spike quickly. Standard pricing doesn’t mean “cheap”; it means “consistent.”

  • Garbage in, garbage out scales too. If you load 600 pages of unstructured noise, you may get confidently unhelpful answers.

  • Governance still matters. For regulated sectors, ensure the model’s use aligns with your data handling rules.
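The spike risk is easiest to see with agent loops: if each turn re-sends the full context, input tokens grow linearly with turn count. The figures below are illustrative.

```python
# Why token usage spikes: an agent loop that re-sends the full context on
# every turn multiplies input tokens linearly with the number of turns.

context_tokens = 800_000  # illustrative near-full-window context
turns = 12                # illustrative agent-loop length
total_input = context_tokens * turns
print(f"{total_input:,} input tokens")  # → 9,600,000 input tokens
```

Prompt caching or periodic summarisation can blunt this, but the first step is simply noticing the multiplication.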

Summary

The GA rollout of 1M context for Claude Opus 4.6 and Sonnet 4.6 at standard pricing, plus 600 images/PDF pages per request, makes long-context AI more practical for real enterprise workloads — document analysis, codebase reasoning, and large-scale comparisons.

Next steps

If you want to turn long-context capability into measurable business value, Generation Digital can help you:

  • identify the workflows that benefit most from 1M context

  • design prompts and evaluation so outputs stay reliable

  • integrate long-context AI into your stack and governance model

FAQs

What is the 1M context window?

It’s the ability to include up to one million tokens of input in a single request, letting the model reason across very large document sets, codebases or long multimodal inputs.

How does standard pricing help users?

Standard pricing means the same per-token rates apply across the full 1M context window, reducing budgeting surprises and making long-context workflows easier to adopt.

What are the updated media limits?

When using the 1M context window, Anthropic increased the limit to 600 images or PDF pages per request, supporting bigger document and multimodal analysis workloads.

Do rate limits change with 1M context?

Anthropic indicates your standard account limits now apply across every context length, simplifying scaling and capacity planning.

What’s a sensible first use case?

Start with a controlled, high-value document workflow (e.g., policy gap analysis, contract comparison, or a due diligence pack), where outputs can be validated against source sections.



Generation
Digital

Canadian Office
33 Queen St,
Toronto
M5H 2N2
Canada

Canadian Office
1 University Ave,
Toronto,
ON M5J 1T1,
Canada

NAMER Office
77 Sands St,
Brooklyn,
NY 11201,
USA

Head Office
Charlemont St, Saint Kevin's, Dublin,
D02 VN88,
Ireland

Middle East Office
6994 Alsharq 3890,
An Narjis,
Riyadh 13343,
Saudi Arabia


Business Number: 256 9431 77 | Copyright 2026 | Terms and Conditions | Privacy Policy
