How to Use the Knowledge Graph
Most AI tools forget everything between sessions. Tendril's Knowledge Graph is the opposite: it accumulates a structured model of your codebase, your decisions, and your patterns, and builds a richer understanding with every task you complete.
What Is the Knowledge Graph
The Knowledge Graph is a persistent, project-scoped memory store that lives alongside your codebase. It's a graph database — nodes represent discrete pieces of knowledge (files, components, architectural decisions, coding patterns, past agent outputs), and edges represent relationships between them.
Unlike a vector-database search or a chat history, the Knowledge Graph is structured. When an agent asks "how does authentication work in this project?", it doesn't search raw text; it traverses the graph to find the auth node, follows edges to middleware files, decision records, and related tests, then uses that subgraph as context. This produces more accurate answers, with fewer hallucinations, than any prompt-stuffing approach.
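As a rough sketch of that traversal idea (not Tendril's actual implementation; the node names and edge labels below are invented for illustration), a breadth-first walk outward from the auth node might look like:

```python
from collections import deque

# Hypothetical knowledge graph: node -> list of (edge_label, neighbor).
# All names here are made up; Tendril's internal schema is not public.
graph = {
    "auth": [("uses", "middleware/session.py"),
             ("decided-in", "ADR-012"),
             ("tested-by", "tests/test_auth.py")],
    "middleware/session.py": [("imports", "lib/tokens.py")],
    "ADR-012": [],
    "tests/test_auth.py": [],
    "lib/tokens.py": [],
}

def context_subgraph(start, max_depth=2):
    """Breadth-first traversal that collects the nodes an agent would
    use as context, instead of searching raw text."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == max_depth:
            continue  # stop expanding past the depth limit
        for _edge, neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, depth + 1))
    return seen

print(sorted(context_subgraph("auth")))
# → ['ADR-012', 'auth', 'lib/tokens.py', 'middleware/session.py', 'tests/test_auth.py']
```

The depth limit is the key design choice in a sketch like this: it keeps the context subgraph small and relevant rather than pulling in the whole repository.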
The Knowledge Graph is a Pro-only feature. Free users still get project context from initial ingestion, but that context doesn't persist or grow between runs.
How the Graph Grows Over Time
Every completed agent run contributes to the graph in three ways:
1. Structural updates
When an agent adds, removes, or renames files, the graph is updated to reflect the new structure. Nodes for deleted files are archived (not deleted) so the graph retains historical awareness — useful when an agent needs to understand why something was removed.
2. Decision records
When you write a "Request changes" note on an agent's diff, that feedback is stored as a decision record node, connected to the affected files and the run that produced the diff. Future agents can query decision records before making similar choices.
3. Pattern extraction
After each approved run, Tendril's graph engine analyzes the completed diffs to extract recurring patterns (naming conventions, file organization habits, testing approaches) and stores them as pattern nodes. Each pattern is summarized in a few sentences and prepended to future runs' prompts as a soft constraint.
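As a mental model for these updates, the node types above could be sketched with a tiny record type. The field names and the `archive_file` helper are invented for this illustration; Tendril's internal schema is not public.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str
    kind: str                  # e.g. "file", "decision", "pattern", "run"
    archived: bool = False     # deleted files are archived, not removed
    edges: list = field(default_factory=list)

def archive_file(nodes, path):
    """Structural update: mark a deleted file's node as archived so the
    graph keeps historical awareness of why it existed."""
    nodes[path].archived = True

nodes = {"old_utils.py": Node("old_utils.py", "file")}
archive_file(nodes, "old_utils.py")
print(nodes["old_utils.py"].archived)  # → True
```

Archiving instead of deleting is what lets a later agent answer "why was this file removed?" by following edges from the archived node.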
You can inspect the current state of your graph at any time in the Knowledge Graph panel in the project sidebar. Nodes are colored by type (blue = files, green = patterns, amber = decisions, purple = past runs).
Cross-Project Edge Connections
On Pro, you can connect multiple repositories to the same Tendril workspace. When you do, the Knowledge Graph is shared across projects — and over time, cross-project edges form between related nodes in different repos.
For example, if your backend API and your frontend client both use the same authentication library, the graph will eventually connect the auth nodes in both projects with a shares-dependency edge. An agent working on the frontend can then traverse that edge to understand how the backend expects auth tokens to be formatted, without you having to explain it.
Practical examples of cross-project edges
- shared-type — a TypeScript interface defined in a shared package, referenced in both a consumer app and a test harness
- derived-from — a generated client that was scaffolded from an OpenAPI spec in a sibling repo
- pattern-match — two repos that use the same folder structure convention, surfaced as a pattern node shared between them
- decision-propagated — a decision made in one project (e.g., "always use named exports") that Tendril inferred should apply to a related project
Cross-project edges are surfaced as suggestions, not hard constraints. Agents can use them or flag them as irrelevant — and that feedback updates the graph.
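To make "suggestion, not hard constraint" concrete, here is a hypothetical sketch of a cross-project edge whose confidence adjusts with agent feedback. The edge fields, the confidence score, and the `apply_feedback` helper are all invented for illustration; only the shares-dependency label comes from the example above.

```python
# A cross-project edge modeled as a weighted suggestion.
edge = {
    "type": "shares-dependency",
    "source": ("backend", "auth"),
    "target": ("frontend", "auth"),
    "confidence": 0.5,   # starts as a suggestion, not a hard constraint
}

def apply_feedback(edge, relevant, step=0.1):
    """Nudge confidence up when an agent used the edge, down when it
    flagged the edge as irrelevant; clamp to [0, 1]."""
    delta = step if relevant else -step
    edge["confidence"] = min(1.0, max(0.0, edge["confidence"] + delta))
    return edge

apply_feedback(edge, relevant=True)
print(edge["confidence"])  # → 0.6
```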
How to Interpret Cost-Per-Task Trends
In the Analytics tab of your workspace, you'll find a cost-per-task chart. This shows the average token cost (in USD) per agent subtask across runs over time.
For a new project, early runs are more expensive — agents need to process more context to orient themselves. As the Knowledge Graph matures, agents can rely on the graph's compact structural representation instead of re-reading raw source files, which reduces the token count per prompt. Over time, you should see a clear downward trend.
What a healthy trend looks like
- Runs 1–5: Highest cost. Agents are orienting, building baseline context.
- Runs 6–20: Gradual decline. Pattern nodes are forming. Decision records begin to reduce revisit work.
- Run 20+: The graph is well-established. Agents spend less time on discovery and more time on the actual task.
If cost-per-task rises after an initial decline, it usually means a large structural refactor happened and the graph needs a few runs to reorient. This is expected — the graph re-establishes itself within 3–5 runs after a major change.
Tendril uses your own API keys (BYOK). Cost-per-task is calculated from your actual provider usage, so the numbers reflect real spend — not estimates.
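As a back-of-the-envelope illustration of the math behind the chart, the sketch below averages per-subtask USD cost from token counts. The prices and token numbers are made up; with BYOK, the real figures come from your provider's usage reporting.

```python
def cost_per_task(subtasks, input_price, output_price):
    """Average USD cost per subtask, given (input_tokens, output_tokens)
    pairs and per-million-token prices."""
    total = sum(inp / 1e6 * input_price + out / 1e6 * output_price
                for inp, out in subtasks)
    return total / len(subtasks)

# Early run: agents read lots of raw source to orient themselves.
early = [(120_000, 4_000), (90_000, 6_000)]
# Mature run: the graph's compact representation shrinks the prompts.
mature = [(25_000, 4_000), (20_000, 6_000)]

print(cost_per_task(early, input_price=3.0, output_price=15.0))
print(cost_per_task(mature, input_price=3.0, output_price=15.0))
```

Note that output tokens barely change between the two runs; the downward trend comes almost entirely from smaller input context, which is exactly what a maturing graph is meant to deliver.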
Knowledge Graph Is a Pro Feature
Free users get initial project context from ingestion, but context does not persist between runs and no graph is built. Every Free run starts from a fresh ingestion of your repo.
Pro includes Knowledge Graph, unlimited concurrent projects (hardware-limited), GitHub repo connection, Tendy AI assistant, Preview & Annotation, Screenshots, and Analytics.
Upgrade to Pro →
Next Steps
- Run Parallel Agents → — every parallel run you complete contributes more data to the graph
- Connect Your First Repo → — the graph starts building from the moment a repo is connected
- Upgrade to Pro → — unlock the Knowledge Graph and start accumulating persistent project memory