Can Super Memory Finally End the AI Goldfish Memory Problem?

Mark Jones · Published Apr 23, 2026

At a Glance

Target Audience
Microsoft 365 Trainers, Power Platform Developers, M365 Admins
Problem Solved
Repeatedly pasting product docs/pricing into AI tools like Claude, causing inefficiency, hallucinations, and time loss in M365 content/customer tasks.
Use Case
AI-assisted content generation, customer support replies, and feature coding for Power Apps/Power Automate training platforms.

Every new AI tool, same grind. Paste the Collab365 Spaces product specs. Copy the product docs. Re-explain how a customer who bought our Power Apps workshop links to the courses and conferences they already have access to.

Claude for a blog post. Antigravity for vibe-coding a new feature. Cloudflare's models for inbox replies. Every one of them, starting from zero. Every single time.

It becomes really tiresome.

Helen and I started Collab365 five years ago as a training library for Microsoft 365 pros. Deep dives on Power Apps, Power Automate, the usual suspects. Then late 2022 hit like a truck. ChatGPT launched, Google traffic fell off a cliff, inboxes got AI-gated, and half our content was obsolete before the quarter closed. We pivoted hard into an AI-native intelligence engine and ripped out Azure's fifteen-service mess for Cloudflare's edge. But none of that solved the actual daily pain: the tools we lived in kept forgetting who we were.

You'd think with context windows pushing 1 million tokens we'd have room to breathe. Nope: models still don't reliably use it all. In some models, that's barely enough for one deep customer thread, never mind our full product stack, pricing history, and the web of upsell paths from a Power Apps buyer into our conferences. Jump into Antigravity? Start over. Cloudflare inbox agent drafting a reply? Paste it all again. Every session resets to zero, like talking to a goldfish with a PhD.

And the cost sneaks up on you. A five-minute job turns into an hour of context-hunting (where are those docs?!), and hallucinations creep in on the small stuff with contradictory, out-of-date information, like a price that moved from $55 to $58 months ago. When you're a small team (Helen on ops, me architecting on the fly), that copy-paste tax is brutal.

So we're trialling a new service that claims to solve it: Super Memory.

It's a RAG database that vectorises our unstructured brain (facts, stats, opinions, weird little product rules, hates/likes, culture) and stitches it into a knowledge graph. That means the Power Apps workshop transcript now links semantically to the sales page, the course bundles, and the conference slots. No babysitting.
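The linking step can be pictured with a toy sketch. To be clear, nothing below is Super Memory's actual implementation: the bag-of-words "embedding" stands in for a real neural embedding model, and every name here is invented. But the shape is the same idea: vectorise snippets, then connect the ones that land close together.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; a real system would use a neural embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_graph(docs: dict, threshold: float = 0.2) -> dict:
    # Add an edge between any two docs whose similarity clears the threshold.
    vecs = {name: embed(text) for name, text in docs.items()}
    graph = {name: [] for name in docs}
    names = list(docs)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if cosine(vecs[a], vecs[b]) >= threshold:
                graph[a].append(b)
                graph[b].append(a)
    return graph

# Hypothetical snippets standing in for our real content:
docs = {
    "workshop_transcript": "power apps workshop building canvas apps step by step",
    "sales_page": "buy the power apps workshop and get conference access",
    "privacy_policy": "we respect your privacy and never sell data",
}
graph = build_graph(docs)
```

The transcript and the sales page end up linked because they share vocabulary; the privacy policy stays an island. Scale that to years of docs, transcripts, and bookmarks and you get the "web of interconnections" I keep going on about.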

Here's the flow we landed on:

  1. Super Memory ingests via an MCP layer from everywhere we actually work: Antigravity sessions, Cloudflare AI inbox, transcripts from our morning walks with Hugo, Google Docs, even X bookmarks.
  2. I ask Claude to write a blog post on accessing Collab365 Spaces.
  3. Super Memory retrieves the chain (pricing, features, the FAQ) and injects it into the prompt via API.
  4. And voila, the model suddenly sounds like it's worked here for years.
  5. Helen or I review, edit, ship.
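Steps 3 and 4 of that flow can be sketched roughly like this. The function names and the naive keyword-overlap ranking are stand-ins I've invented, not Super Memory's API; a real setup would call their endpoint and get vector-ranked results back.

```python
def retrieve(memories: list, query: str, k: int = 2) -> list:
    # Rank stored snippets by keyword overlap with the query.
    # Stand-in for the real retrieval call; an actual system ranks by
    # embedding similarity, not shared words.
    q = set(query.lower().split())
    scored = sorted(
        memories,
        key=lambda m: len(q & set(m["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(memories: list, question: str) -> str:
    # Inject the retrieved context ahead of the actual task.
    context = "\n".join(f"- {m['text']}" for m in memories)
    return (
        "You are writing for Collab365. Relevant company memory:\n"
        f"{context}\n\nTask: {question}"
    )

# Hypothetical memory store:
memories = [
    {"text": "Collab365 Spaces pricing moved from $55 to $58"},
    {"text": "Power Apps workshop buyers get conference access"},
    {"text": "Helen handles ops; Mark handles architecture"},
]
top = retrieve(memories, "write a blog post on accessing Collab365 Spaces pricing")
prompt = build_prompt(top, "Write a blog post on accessing Collab365 Spaces.")
```

The point of the sketch: the model never "remembers" anything. The memory layer picks the right snippets and quietly front-loads them into every prompt, which is why step 4 feels like the model has worked here for years.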

One rule we stuck to: no customer data, no sensitive personal information in there. Privacy first, and structured data just burns tokens and invites bad pulls. We keep it to the unstructured edges, the stuff that actually makes Collab365 sound like Collab365.
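One way to enforce that rule is a gate in front of ingestion. This is a minimal sketch with hand-rolled regexes for emails and phone numbers, all names invented by me; a production pipeline would layer proper PII detection on top rather than trust two patterns.

```python
import re

# Crude patterns for obvious personal contact details.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"\b(?:\+?\d[\d\s().-]{7,}\d)\b")

def safe_to_ingest(snippet: str) -> bool:
    # Reject anything that looks like it contains customer contact details.
    return not (EMAIL.search(snippet) or PHONE.search(snippet))

def scrub(snippets: list) -> list:
    # Keep only the snippets that pass the gate.
    return [s for s in snippets if safe_to_ingest(s)]
```

A blunt instrument, but blunt in the right direction: anything it wrongly rejects costs us one memory, while anything a looser gate wrongly admits sits in the graph forever.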

The thing I didn't see coming is this: memory is going to be the moat, not the model; close to a third of GenAI users already lean on RAG. We'll change models like we change socks. But a knowledge graph of your business, built up over years, tangled with interconnections only you have?

That's the thing nobody can copy, and you can't easily migrate it out. Which is also why I think Google, Anthropic, OpenAI, AWS or Microsoft will fight dirty for this layer. Whoever owns your memory owns you, and just imagine what the likes of Google can do with that data!

I'd love to know, though: what are you using for cross-model memory? Or are you still shoving docs into every chat message?