Switch to Claude: Import Memory in Under a Minute
Claude
Mar 4, 2026

Anthropic makes it easier to switch to Claude by letting you import key context (“memory”) from another chatbot. You copy a pre-written prompt into your current assistant, paste the output into Claude’s import tool, then review what’s stored. The smart approach is to import only what’s useful—never sensitive data.
If you’ve used an AI assistant daily for months, you’ve probably trained it in ways you don’t even notice anymore: the tone you like, the projects you’re on, the formats you prefer, the “don’t forget this” details.
That’s exactly what Anthropic is now targeting. Claude has rolled out a smoother “switch to Claude” experience: memory is available on the free plan, and there’s a dedicated import tool that uses a pre-written prompt you run in your existing chatbot, then paste the results into Claude. The pitch is simple: your first conversation should feel like your hundredth. (businessinsider.com)
But there’s a bigger story here than an easy migration. If AI context becomes portable, organisations will treat models more like utilities: swap providers, choose the best model per task, and reduce lock-in. That’s good for innovation — and it raises new questions about privacy, governance, and what should count as “memory”.
Why Anthropic is pushing switching now
Anthropic isn’t being subtle: it’s actively promoting how quickly you can move your history and working style across to Claude. Business Insider reports the experience was improved with an updated interface and a dedicated landing page that highlights an “under a minute” switch process using a copy‑paste prompt.
At the same time, The Verge notes that Claude’s memory feature — previously paid-only — is now available on the free plan, with the import tool sitting alongside it in settings. That’s a classic “remove friction, grow share” move. (theverge.com)
The obvious takeaway is product-led growth. The more interesting one is strategic: as models converge in capability, the differentiator shifts to workflow. Memory, apps, agents, and tool integrations become the reason teams stay.
What Claude “memory” actually means (and what it doesn’t)
Memory features vary by provider, but the principle is similar: instead of relying on one long thread, the assistant can retain small, reusable facts about your preferences and recurring context.
In Claude, you can enable memory by going to Settings → Capabilities, and (as The Verge describes) you can also find the memory importing tool there.
Two important clarifications for teams:
Memory is not the same as full chat history. Import tools typically extract and summarise what’s “worth remembering” rather than copying every message.
Memory is a policy decision as much as a product feature. Your organisation needs to decide what belongs in memory, who owns it, and how it’s reviewed.
How to switch to Claude in practice
Below is a sensible, low-risk way to do it. Even if you’re not moving permanently, this process helps you build a portable “AI profile” you can reuse across tools.
Step 1: Create an “AI profile” first (don’t import blind)
Before you copy anything, write a short profile you want any assistant to know. Keep it factual, non-sensitive, and useful.
Include:
Your role and what you’re responsible for
The industries you work in
Your preferred tone (e.g., “British English, direct, no fluff”)
Output formats you reuse (briefs, agendas, email drafts, SEO rewrites)
Tools you use (e.g., Asana, Slack, Google Workspace)
Boundaries (“Never include personal data; ask before storing anything”)
Avoid:
Client names, personal identifiers, passwords, internal financials
Anything regulated unless you have explicit approval and a safe environment
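One way to keep this profile reusable across assistants is to store it as structured data and render it to plain text on demand. The sketch below illustrates that idea; every field name and value is an example, not a required schema.

```python
# Illustrative sketch: keep your "AI profile" as structured data so you can
# render it into whatever format a given assistant's import tool expects.
# All field names and values here are examples, not a required schema.
profile = {
    "role": "Marketing operations lead",
    "industries": ["SaaS", "professional services"],
    "tone": "British English, direct, no fluff",
    "formats": ["briefs", "agendas", "email drafts"],
    "tools": ["Asana", "Slack", "Google Workspace"],
    "boundaries": ["Never include personal data", "Ask before storing anything"],
}

def render_profile(profile: dict) -> str:
    """Render the profile as plain text suitable for pasting into any assistant."""
    lines = []
    for key, value in profile.items():
        if isinstance(value, list):
            value = "; ".join(value)
        lines.append(f"{key.capitalize()}: {value}")
    return "\n".join(lines)

print(render_profile(profile))
```

Keeping the source of truth as data (rather than one chatbot's memory) is what makes the profile portable: you can regenerate the text version for any new tool in seconds.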
Step 2: Export memory from your current assistant
Anthropic’s method is deliberately simple: you copy a pre-written prompt into your current chatbot, then copy the output into Claude’s import tool. Business Insider highlights this as the main change making switching faster.
If your current assistant supports memory export, use that first. If not, the “prompt-based summary” route is often enough.
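If you want a reusable version of that prompt-based route, something like the following works in most assistants. This is a generic example written for this article; Anthropic's actual pre-written prompt may be worded differently.

```python
# Hypothetical example of a "prompt-based summary" export prompt you could run
# in your current assistant. Anthropic's actual pre-written prompt may differ;
# the point is to request a short, sanitised list rather than raw history.
EXPORT_PROMPT = (
    "Summarise everything you remember about me as a short bulleted list of "
    "preferences and recurring context. Exclude names, company identifiers, "
    "and anything sensitive. Keep each bullet to one line."
)

print(EXPORT_PROMPT)
```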
Step 3: Import into Claude and review line by line
Paste the output into Claude’s import tool (found in settings alongside memory); The Verge describes the flow as simply copying the output from your previous AI into this tool.
Now do the most important part: review what’s being stored.
Treat it like a permissions request:
Is each item necessary?
Is it phrased in a way you’d be comfortable sharing with a new colleague?
Does it create risk if misinterpreted?
If the tool lets you edit, edit. If it doesn’t, re-run the export prompt with stricter instructions (“Exclude names, exclude company identifiers, summarise to generic preferences only”).
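The review step can be partly automated. The sketch below flags memory items that look sensitive before you paste them anywhere; the patterns are illustrative examples, and a human should still read every item.

```python
import re

# Illustrative sketch of the "review before storing" step: flag memory items
# that look sensitive before pasting them into any import tool. The patterns
# are examples only; real review should still be done by a human.
SENSITIVE_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),            # email addresses
    re.compile(r"\b\+?\d[\d\s-]{7,}\d\b"),             # phone-like numbers
    re.compile(r"(?i)\b(password|api key|secret)\b"),  # credential keywords
]

def flag_sensitive(items: list[str]) -> list[str]:
    """Return items that match any sensitive pattern and need manual review."""
    return [i for i in items if any(p.search(i) for p in SENSITIVE_PATTERNS)]

memory_items = [
    "Prefers British English and a direct tone",
    "Contact at jane.doe@example.com for the Q3 brief",
    "Project password is hunter2",
]
print(flag_sensitive(memory_items))
```

Anything flagged gets edited or dropped before import; anything that slips through still gets the "new colleague" sense-check above.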
Step 4: Test with three real tasks
Don’t judge the switch on a hello-world chat. Run three tasks you actually do:
A writing task (e.g., turn notes into a brief)
A reasoning task (e.g., options and trade-offs)
A workflow task (e.g., create a plan with milestones)
You’re testing whether Claude’s imported context is usable and whether it changes outcomes — not whether it feels “friendly”.
A safe checklist for teams (the bit most people skip)
If you’re switching personally, the main risk is oversharing. If you’re switching at work, the risk is creating a shadow knowledge base no one governs.
Here’s a lightweight checklist that scales:
Data minimisation: import preferences and patterns, not sensitive content.
Ownership: decide whether “AI memory” is personal, team-owned, or org-owned.
Review: set a cadence (monthly is fine) to inspect and prune memory.
Separation: keep personal and work contexts separate.
Vendor portability: store your “AI profile” in a plain doc so it works across models.
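The data-minimisation and portability points combine neatly: keep the full profile in your own document, and import only an approved subset into any one model's memory. A minimal sketch, with example field names:

```python
# Sketch of "vendor portability" plus data minimisation: one source-of-truth
# profile, with only policy-approved fields imported into a model's memory.
# Field names are illustrative examples.
profile = {
    "tone": "British English, direct, no fluff",
    "formats": ["briefs", "agendas"],
    "tools": ["Asana", "Slack"],
    "client_notes": "stays in the doc, never imported",
}

IMPORTABLE_FIELDS = {"tone", "formats"}  # whatever your policy allows

def memory_subset(profile: dict) -> dict:
    """Return only the fields approved for import into a model's memory."""
    return {k: v for k, v in profile.items() if k in IMPORTABLE_FIELDS}

print(memory_subset(profile))
```

Changing vendors then means re-running the subset against a different import tool, not rebuilding your context from scratch.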
What this means for AI strategy in 2026
When switching becomes frictionless, multi-model becomes normal.
You’ll see more organisations:
Standardise a portable “AI profile” and prompt library
Route tasks to different models (coding, drafting, analysis)
Treat assistants as part of a governed toolchain, not a single vendor bet
This aligns with the broader shift towards “AI as a utility” thinking: operational models, governance, evaluation, and cost control matter as much as the model choice.
And it connects to where Claude is heading: more agentic workflows (like Claude Cowork on desktop) and deeper integrations via apps and protocols. (gend.co)
Next steps
If you’re evaluating Claude (or any assistant) for real work, don’t start with “which model is best?” Start with:
What workflows you want to accelerate
What data you can safely use
How you’ll measure outcomes
What governance you need so the wins stick
Generation Digital can help you design a multi-model operating model, run evaluations that map to real tasks, and set governance your leadership team can defend.
FAQ
Question: Can I import my ChatGPT or Gemini history into Claude?
Answer: You can import key context by generating a structured summary (via a pre-written prompt) in your current chatbot and pasting it into Claude’s import tool. Review the imported “memory” carefully and remove anything sensitive. (businessinsider.com)
Question: Is Claude memory available on the free plan?
Answer: Yes. Anthropic has made Claude’s memory feature available to free users, with controls in settings under capabilities. (theverge.com)
Question: What should I avoid importing into Claude memory?
Answer: Avoid personal identifiers, confidential client information, passwords, internal financials, or anything regulated unless your organisation has approved a compliant setup.
Question: How do I make my AI context portable across tools?
Answer: Maintain a short “AI profile” document with your preferences, formats, and boundaries. Use it as the source of truth, and import only a minimal subset into any model’s memory.
Question: Why is Anthropic promoting switching now?
Answer: Anthropic has improved the switching interface and is using memory + an import tool to reduce friction for users moving from rival assistants, as Claude’s popularity rises. (businessinsider.com)
Generation
Digital

Business Number: 256 9431 77 | Copyright 2026 | Terms and Conditions | Privacy Policy