3/6/2025 · Noah Ö

From Retrieval to Understanding: How Knowledge Graphs Enable True AI Agents for Consulting

At Kenley, we build AI-native platforms for consultants. A foundational component of this system is a data layer of the firm’s institutional knowledge – its decks, client relationships, market reports, expert call transcripts, etc.

Why RAG?

The natural starting point for this type of system is a Retrieval-Augmented Generation (RAG) pipeline consisting of two main components:

  1. Retrieval: Fetches relevant information through semantic search, typically using vector embeddings stored in a vector database.
  2. Generation: Synthesizes retrieved information into coherent responses tailored to user queries.

This approach goes quite far. Semantic search over data from disparate source systems is powerful, and combining it with the intelligence of frontier language models already provides a great starting point.
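As an illustration, the two stages can be sketched in a few lines of Python. This is a toy stand-in, not a production pipeline: word-count vectors replace a real embedding model, and the `generate` step simply stitches retrieved context together where a real system would call an LLM. All document texts and function names here are hypothetical.

```python
import re
from collections import Counter
from math import sqrt

def embed(text):
    # Toy stand-in for an embedding model: bag-of-words counts.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # 1. Retrieval: rank documents by similarity to the query.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def generate(query, context):
    # 2. Generation: a real system would prompt an LLM with this context.
    return f"Answer to {query!r} based on: " + " | ".join(context)

docs = [
    "Project Y delivered 20% operational efficiency for an oncology client.",
    "Quarterly market report on European logistics.",
    "Expert call transcript on payer dynamics in oncology.",
]
top = retrieve("oncology project outcomes", docs)
print(generate("oncology project outcomes", top))
```

Even this toy version shows the core property of RAG: the generation step only ever sees what retrieval surfaced, which is why retrieval quality dominates system quality.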

Why not just RAG?

However, a naive RAG system treats all information as equal and relies on vector embeddings to capture the complete semantics of the firm’s data. This approach is limited because consulting data comprises structured and unstructured information with different levels of context and authority. Beyond the final deliverable decks, firms rely on internal meeting notes, transcripts of expert interviews, external market reports, and more.

The naive approach of chunking these documents and indexing them in a single flat index fails to capture relationships like:

  • An expert call transcript may be linked to a specific project and deliverable. Without capturing this relational link, the system cannot answer questions like: “What experts did we consult for Project X, and what were the main takeaways?”
  • A CRM or resourcing system might contain metadata about who was staffed for a project or what projects have been undertaken for a given client. A naive document RAG system would be unable to use this information.

This is where a knowledge graph comes in, playing the crucial role of the scaffolding around a vector embedding retrieval system. We’ll discuss what a knowledge graph is, what it looks like in consulting, and how to build one using structured and unstructured source systems.

Knowledge Graphs—and why consulting AI needs them

A knowledge graph is a structured representation of knowledge that models real-world entities—such as clients, projects, or industries—and explicitly captures their relationships. It comprises nodes (entities) and edges (relationships), and each entity type has a schema defining the metadata attached to it. A simplified knowledge graph for a strategy consulting firm’s operations may look like:
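A minimal sketch of such a graph in Python could look like the following. The entity types, relation names, and metadata fields are illustrative assumptions, not a prescribed schema; a real deployment would tailor them to the firm.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str
    type: str                 # e.g. "Client", "Project", "Document"
    metadata: dict = field(default_factory=dict)  # shaped by the type's schema

@dataclass
class Edge:
    source: str
    target: str
    relation: str             # e.g. "FOR_CLIENT", "HAS_DELIVERABLE"

# A tiny hypothetical instantiation of the schema above.
nodes = [
    Node("client-z", "Client", {"industry": "Healthcare"}),
    Node("project-y", "Project", {"topic": "oncology"}),
    Node("deck-1", "Document", {"kind": "deliverable"}),
]
edges = [
    Edge("project-y", "client-z", "FOR_CLIENT"),
    Edge("project-y", "deck-1", "HAS_DELIVERABLE"),
]
```

Because the schema lives in data rather than code, adding a new entity type or relation for a particular firm is a configuration change, not a redesign.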

Two points are worth highlighting. First, the knowledge graph provides a flexible structure that can capture highly complex operational realities. Second, knowledge graphs are inherently adaptable—they often vary between firms and even across business units within the same consultancy.

Once a firm’s data has been transformed into this pre-defined structure, it becomes an instantiation of the knowledge graph. Unlike a naive vector embedding index, this provides a traversable structure that captures a firm's operational reality.

Let's examine a specific example to understand how this knowledge graph significantly improves retrieval performance. Suppose a consultant submits the prompt: “I’m working on a proposal for a healthcare project in oncology with client X. Give me a brief success story from related past projects, and mention the client we worked for.” A naive RAG system relies solely on document chunks and embeddings, making retrieval unpredictable. It would depend entirely on having indexed a chunk explicitly stating something like: “Our oncology project, Project Y, for client Z resulted in 20% operational efficiency and improved patient outcomes.”

The knowledge graph, however, provides a set of tools that an AI agent could use in the following sequence:

  1. Identify relevant projects: Search the knowledge graph for entities (nodes) of type “Project” that are related to the term “oncology”—using semantic search or keyword matching.
  2. Find success stories: Traverse links from each matching project node to associated documents and specifically look for mentions of “wins” or “successes.”
  3. Retrieve related clients: Navigate the link from each relevant project node directly to its connected “Client” entity.
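The three steps above can be sketched against a small in-memory graph. The node IDs, relation names, and the keyword match standing in for semantic search are all illustrative assumptions.

```python
# Hypothetical in-memory graph: node id -> (type, metadata), plus typed edges.
NODES = {
    "project-y": ("Project", {"name": "Project Y", "topic": "oncology"}),
    "project-q": ("Project", {"name": "Project Q", "topic": "logistics"}),
    "client-z":  ("Client",  {"name": "Client Z"}),
    "doc-1":     ("Document", {"text": "Win: 20% operational efficiency "
                                       "and improved patient outcomes."}),
}
EDGES = [
    ("project-y", "FOR_CLIENT", "client-z"),
    ("project-y", "HAS_DOCUMENT", "doc-1"),
]

def neighbors(node_id, relation):
    return [t for s, r, t in EDGES if s == node_id and r == relation]

# Step 1: identify relevant projects (keyword match stands in for semantic search).
def find_projects(term):
    return [nid for nid, (ntype, meta) in NODES.items()
            if ntype == "Project" and term in meta.get("topic", "")]

# Step 2: traverse to linked documents and keep those mentioning wins/successes.
def find_success_stories(project_id):
    docs = [NODES[d][1]["text"] for d in neighbors(project_id, "HAS_DOCUMENT")]
    return [t for t in docs if "win" in t.lower() or "success" in t.lower()]

# Step 3: follow the FOR_CLIENT edge to the connected client entity.
def get_clients(project_id):
    return [NODES[c][1]["name"] for c in neighbors(project_id, "FOR_CLIENT")]

for pid in find_projects("oncology"):
    print(pid, get_clients(pid), find_success_stories(pid))
```

Each step is a deterministic graph operation, so the agent's answer degrades gracefully: if no success story exists, the traversal returns an empty list rather than a hallucinated one.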

Instead of relying on chunking and vector embeddings to magically capture structured information in our firm’s institutional knowledge, the knowledge graph provides a highly reliable and performant way to retrieve the correct information given arbitrary queries.

Constructing the knowledge graph

The next question is how to instantiate this abstract knowledge graph from a firm’s institutional knowledge. To explain this, we must first understand the shape of the input data.

Consulting firms generate vast amounts of disparate data across platforms such as CRMs, professional services automation (PSA) tools, and document stores.

These systems are often partially structured in a way that maps neatly onto the firm’s knowledge graph. For example, a CRM is usually modeled as a relational database, where a single row in one table maps to a project/client engagement, with a many-to-one link to a “Clients” table and a many-to-many link to a “Consultants” table for staffing. File storage systems may have a folder structure or metadata tags that link files to a given project. One can instantiate the knowledge graph from this data with simple transformations.
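A transformation of this kind can be sketched as below. The row shapes, ID conventions, and relation names are hypothetical; a real pipeline would read from the CRM's actual export format.

```python
# Hypothetical CRM extracts, shaped like relational rows.
clients  = [{"id": 1, "name": "Client Z"}]
projects = [{"id": 10, "name": "Project Y", "client_id": 1}]
staffing = [{"project_id": 10, "consultant_id": 7, "name": "A. Smith"}]

nodes, edges = [], []

for c in clients:
    nodes.append({"id": f"client-{c['id']}", "type": "Client", "name": c["name"]})

for p in projects:
    nodes.append({"id": f"project-{p['id']}", "type": "Project", "name": p["name"]})
    # The CRM's foreign key becomes an explicit graph edge.
    edges.append((f"project-{p['id']}", "FOR_CLIENT", f"client-{p['client_id']}"))

for s in staffing:
    nodes.append({"id": f"consultant-{s['consultant_id']}",
                  "type": "Consultant", "name": s["name"]})
    edges.append((f"project-{s['project_id']}", "STAFFED",
                  f"consultant-{s['consultant_id']}"))
```

The key observation is that foreign keys and join tables already encode the relationships the graph needs; the transformation just makes them explicit and queryable alongside the unstructured documents.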

However, given the complex reality of a real consulting firm, there are often gaps in this structured data modeling. For example, some firms may not store “Clients” as separate entities, or folder structures may be inconsistently applied.

Turning this unstructured data into a structured form would be time-consuming for the firm’s consultants. This is where LLMs can come into play. AI can do a first pass at inferring the structure, and a junior analyst can quickly approve or amend the results, giving a significant speed-up. Such a system can help instantiate concepts, metadata, and relationships in the knowledge graph. For instance, looking at a project deliverable, we might extract metadata about the project and a relationship to a client mentioned in the document (if this isn’t already captured, e.g., in a CRM).
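A minimal sketch of this human-in-the-loop pattern is below. The `extract_metadata` function is a stand-in for a structured-extraction LLM call, and the `approve` callback stands in for the analyst's review UI; both, along with the JSON shape, are assumptions for illustration.

```python
# Stand-in for an LLM structured-extraction call: a real system would
# prompt a model to return entities and relations in this JSON shape.
def extract_metadata(document_text):
    proposal = {"entities": [], "relations": []}
    if "Client Z" in document_text:
        proposal["entities"].append({"type": "Client", "name": "Client Z"})
        proposal["relations"].append(("Project", "FOR_CLIENT", "Client Z"))
    return proposal

def review(proposal, approve):
    # Human-in-the-loop: only relations the analyst approves enter the graph.
    proposal["relations"] = [r for r in proposal["relations"] if approve(r)]
    return proposal

doc = "Deliverable deck for Client Z covering the oncology market."
approved = review(extract_metadata(doc), approve=lambda relation: True)
```

Keeping the approval step cheap (approve/amend rather than author from scratch) is what delivers the speed-up: the model does the reading, the analyst only validates.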

Thus, the process of instantiating a knowledge graph becomes one of transforming already structured data into the shape expected by the knowledge graph and using a human-in-the-loop AI system to extract structure from unstructured data quickly and reliably.

Conclusion

In this article, we've introduced the concept of a knowledge graph and its role as a critical complement to traditional vector store-based retrieval systems. The knowledge graph mirrors how consultants think of their business and captures information crucial to retrieving arbitrary insights from a firm’s institutional context. Instantiating it requires transforming data from disparate source systems using a combination of traditional data transformation and AI.

In a future post, we’ll explore how Kenley leverages the knowledge graph at search time to deliver more contextually relevant insights to consultants.

Till next time,

Noah Ö.

Request a demo to learn how leading consulting firms are winning today with Gen AI.
