The Context Graph Narrative is Wrong
Foundation Capital is right about context graphs but wrong about where context comes from.
We are opening up Domain to alpha testing. If you want early access and haven't joined the list yet, sign up here.
AI systems are racing to build “context graphs”: interconnected webs of information that help them reason more effectively. Foundation Capital just published a thesis on this. They’re right that it matters.
But they’re missing a crucial point. Context doesn’t magically exist. It is created, and it cannot be synthetically manufactured.
The most valuable context in any organization doesn’t live in documents or chat logs. It lives in the spaces in between. In conversation. Consider a few examples:
Why a CEO rejected the “rational” option
Which advisor changed the direction of a strategy
What objections almost killed a deal
Which risks were knowingly accepted
Where the team still disagrees, but doesn’t want to say so out loud
Did we actually identify the elephant in the room?
That context is social, not computational. It’s human, not statistical.
Most AI products try to reconstruct it after the fact: scraping docs, summarizing Slack, transcribing meetings. But by then, the signal has already degraded, or worse, been lost completely. This is the difference between a flattened narrative, which we are all becoming better at detecting, and living judgment, which feels so much more real, more human, dare I say it, more authentic.
Recall, while important, and something AI is undeniably great at, is not a replacement for thinking.
So, what is the answer?
We think we know and we’re building it at Domain.
We started from a different premise: Context isn’t something you capture after decisions are made. It’s something you create while decisions are being formed. For three years, through our sister company ON_Discourse, we ran hundreds of closed-door strategy sessions with founders, operators, and executives.
What we learned: the highest-value context emerges between people, not inside tools, and decisions improve when AI orchestrates the flow but humans bring lived, embodied experience. The last mile of thinking, the part that actually changes outcomes, falls apart if you remove disagreement and pushback.
So instead of building another AI assistant that summarizes your past, we’re building a system that structures human discourse in real time and turns it into durable, reusable context, which optimizes towards tangible outcomes.
Here’s what that looks like on the Domain platform:
A founder raises a strategic question
AI reframes it into sharp provocations
The right people are pulled in to contribute and collaborate
Discourse happens (sync and async)
AI tracks tension, agreement, and evolution, not just comments
Contributions are synthesized into decision-grade outputs
The reasoning behind decisions becomes part of a living system
Context lives across every Domain
Every Domain session strengthens the system: who contributes, how they challenge, what changes as a result, which insights persist. Over time, this becomes something most AI platforms can’t produce: a graph of trust, judgment, and applied intelligence.
So why do we think this matters at this moment?
AI is accelerating the speed of work; that feels true, but on its own it is a race to the bottom. Strategy is getting automated before it is properly understood, networks are getting larger and less useful, and context has become a summary of everything but an expression of nothing.
Foundation Capital is right: context graphs are inevitable.
But the winning systems won’t be the ones with the most data. They’ll be the ones that understand how humans actually think together.
Example: In Slack, someone raises a question or shares an idea. Fifteen people react with emojis. Three leave comments. The thread dies. Six months later, you’re searching for “why didn’t we move forward with that idea?” But the weight of the objection, who pushed back hardest, what changed someone’s mind? Gone.
In Domain, that same conversation is structured: AI surfaces the core tension, pulls in the people whose judgment matters, tracks how positions evolve, and preserves not just what was said but why it mattered, who challenged it, and what changed because of it.
That’s the difference between a chat log and a context graph. Not another chat tool. Not another AI copilot. Not another professional feed.
A thinking system, where context is created deliberately, pressure-tested through discourse, and retained structurally. Because the most valuable context isn’t what AI can infer. It’s what humans forge together.



