Commit b69553f

docs: update readme (#139)
1 parent 62ac4ab commit b69553f

File tree

7 files changed: +169 −56 lines


README.md

Lines changed: 146 additions & 31 deletions
@@ -1,8 +1,30 @@
 # MCP Server for the deepset AI platform
 
 The deepset MCP server exposes tools that MCP clients like Claude or Cursor can use to interact with the deepset AI platform.
-Use these tools to develop pipelines, or to get information about components and how they are defined.
 
+Agents can use these tools to:
+
+- develop and iterate on Pipelines or Indexes
+- debug Pipelines and Indexes
+- search the deepset AI platform documentation
+
+## Contents
+
+- [1. Installation](#installation)
+  - [1.1. Claude Desktop](#claude-desktop-app)
+  - [1.2. Other MCP Clients](#other-mcp-clients)
+  - [1.3. Advanced Configuration](#advanced-configuration)
+- [2. Prompts](#prompts)
+- [3. Use Cases](#use-cases)
+  - [3.1. Creating Pipelines](#creating-pipelines)
+  - [3.2. Debugging Pipelines](#debugging-pipelines)
+- [4. CLI](#cli)
+
+![GIF showing CLI interaction with the MCP server](assets/deepset-mcp-3.gif)
 
 
 ## Installation
@@ -12,7 +34,7 @@ Use these tools to develop pipelines, or to get information about components and
 **Prerequisites:**
 - [Claude Desktop App](https://claude.ai/download) needs to be installed
 - You need to be on the Claude Pro, Team, Max, or Enterprise plan
-- You need an installation of [Docker](https://docs.docker.com/desktop/) (scroll down to the `uv` section if you want to use `uv` instead of Docker)
+- You need an installation of [Docker](https://docs.docker.com/desktop/) ([go here](#using-uv-instead-of-docker) if you want to use `uv` instead of Docker)
 - You need an [API key](https://docs.cloud.deepset.ai/docs/generate-api-key) for the deepset platform
 
 **Steps:**
@@ -51,7 +73,7 @@ Use these tools to develop pipelines, or to get information about components and
 
 
-**(Optional) Running the server with uv instead of Docker**
+#### Using uv instead of Docker
 
 Running the server with uv gives you a faster startup time and consumes slightly fewer resources on your system.
 
@@ -85,54 +107,147 @@ Running the server with uv gives you faster startup time and consumes slightly l
 
 ### Other MCP Clients
 
-The repo was not tested with other MCP clients but tools like Cursor or the Haystack MCP package should work out of the box.
+`deepset-mcp` can be used with other MCP clients.
+
+Here is where to configure `deepset-mcp` for each client:
+
+- [Cursor](https://docs.cursor.com/context/mcp#using-mcp-json)
+- [Claude Code](https://docs.anthropic.com/en/docs/claude-code/mcp#configure-mcp-servers)
+- [Gemini CLI](https://cloud.google.com/gemini/docs/codeassist/use-agentic-chat-pair-programmer#configure-mcp-servers)
+
+Generally speaking, depending on your installation, you need to configure the MCP client with one of the following commands:
+
+`uv --directory path/to/deepset-mcp run deepset-mcp --workspace your_workspace --api-key your_api_key`
+
+If you installed the deepset-mcp package globally and added it to your `PATH`, you can just run:
+
+`deepset-mcp --workspace your_workspace --api-key your_api_key`
+
+The server runs locally and uses `stdio` to communicate with the client.
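To illustrate, here is a minimal client configuration of the kind most of these clients accept, assuming `deepset-mcp` is on your `PATH`. The exact config-file location (e.g. `.cursor/mcp.json` for Cursor) depends on the client; treat this as a sketch, not client documentation:

```json
{
  "mcpServers": {
    "deepset": {
      "command": "deepset-mcp",
      "args": ["--workspace", "your_workspace"],
      "env": {
        "DEEPSET_API_KEY": "<DEEPSET_API_KEY>"
      }
    }
  }
}
```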
+### Advanced Configuration
+
+#### Tool Selection
+
+You can customize which tools the MCP server exposes.
+Use the `--tools` option in your config to explicitly specify which tools should be exposed.
+
+You can list the available tools with: `deepset-mcp --list-tools`.
+
+To expose only the `list_pipelines` and `get_pipeline` tools, you would use the following command:
+
+`deepset-mcp --tools list_pipelines get_pipeline`
+
+For smooth operation, you should always expose the `get_from_object_store` and `get_slice_from_object_store` tools.
+
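As a mental model (not the server's actual implementation), the effect of `--tools` can be sketched as an allow-list filter over the registered tool names:

```python
# Hypothetical sketch of the effect of --tools: expose only an allow-list of
# tool names. The tool names are real; the filtering code is illustrative only.
ALL_TOOLS = {
    "list_pipelines": "<tool>",
    "get_pipeline": "<tool>",
    "get_from_object_store": "<tool>",
    "get_slice_from_object_store": "<tool>",
}

def select_tools(requested: list[str]) -> dict[str, str]:
    """Keep only the requested tools, failing loudly on unknown names."""
    unknown = set(requested) - set(ALL_TOOLS)
    if unknown:
        raise ValueError(f"Unknown tools: {sorted(unknown)}")
    return {name: tool for name, tool in ALL_TOOLS.items() if name in requested}

# Roughly equivalent to: deepset-mcp --tools list_pipelines get_pipeline
exposed = select_tools(["list_pipelines", "get_pipeline"])
```

Per the recommendation above, you would normally include `get_from_object_store` and `get_slice_from_object_store` in the list as well.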
+#### Allowing access to multiple workspaces
+
+The basic configuration uses a hardcoded workspace which you pass in via the `DEEPSET_WORKSPACE` environment variable.
+If you want to allow an agent to access resources from multiple workspaces, you can use `--workspace-mode explicit`
+in your config.
+
+For example:
 
+```json
+{
+  "mcpServers": {
+    "deepset": {
+      "command": "/opt/homebrew/bin/uv",
+      "args": [
+        "--directory",
+        "/path/to/your/clone/of/deepset-mcp-server",
+        "run",
+        "deepset-mcp",
+        "--workspace-mode",
+        "explicit"
+      ],
+      "env": {
+        "DEEPSET_API_KEY": "<DEEPSET_API_KEY>"
+      }
+    }
+  }
+}
+```
+
+An agent using the MCP server now has access to all workspaces that the API key has access to. When interacting with most
+resources, you will need to tell the agent which workspace it should use to perform an action. Instead of prompting it
+with "list my pipelines", you would now have to prompt it with "list my pipelines in the staging workspace".
+
+## Prompts
+
+All tools exposed through the MCP server have minimal prompts. Any Agent interacting with these tools benefits from an additional system prompt.
 
-## Usage
+View the **recommended prompt** [here](src/deepset_mcp/prompts/deepset_debugging_agent.md).
 
-_Assuming you are using the MCP server through Claude Desktop and you are part of the deepset organization._
+This prompt is also exposed as the `deepset_recommended_prompt` on the MCP server.
+In Claude Desktop, click `add from deepset` to add the prompt to your context.
+A better way to add system prompts in Claude Desktop is through "Projects".
 
-**Setup:**
-1. Go to "Projects" in Claude Desktop
-2. Select the "Your Team"-tab
-3. Select the "deepset-copilot" project
+You can customize the system prompt to your specific needs.
 
-![Screenshot of the Projects menu in the Claude Desktop App.](assets/claude_desktop_projects.png)
 
-The _deepset-copilot_ project contains system instructions that are optimized for the deepset MCP server.
+## Use Cases
 
-You can also access the system prompt [here](src/deepset_mcp/prompts/deepset_copilot_prompt.md).
+The primary way to use the deepset MCP server is through an LLM that interacts with the deepset MCP tools in an agentic way.
 
-The MCP server also exposes the system prompt as the `deepset_copilot`-prompt.
-In Claude Desktop you can click on the plus-sign below the chat bar and select "Add from deepset" to add the prompt.
-However, this will only load the prompt as text context into your message. It won't set the prompt as system instructions.
-Using it via system instructions in Claude Desktop yields better results.
+### Creating Pipelines
 
-Using these instructions with Claude will help you to create or update pipelines.
-You can also ask questions about pipelines in the workspace or get information about components
-(e.g. What init params do they accept? What inputs and outputs do they have?).
+Tell the LLM about the type of pipeline you want to build. Creating new pipelines will work best if you use terminology
+that is similar to what is used on the deepset AI platform or in Haystack.
 
-You can activate and deactivate specific tools in the "Search and tools"-menu that is available below the chat bar.
+Your prompts should be precise and specific.
 
-Claude will ask for your permission before a tool call is executed. You can opt to "allow once", "allow always" or "deny".
+Examples:
 
+- "Build a RAG pipeline with hybrid retrieval that uses claude-sonnet-4 from Anthropic as the LLM."
+- "Build an Agent that can iteratively search the web (deep research). Use SerperDev for web search and GPT-4o as the LLM."
 
+You can also instruct the LLM to deploy pipelines, and it can issue search requests against pipelines to test them.
 
-**Limitations**
+**Best Practices**
 
-Unfortunately, you need to set the workspace and organization (through the API key) in the `claude_desktop_config.json`.
-There is no way to pass the API key dynamically to Claude in a secure way.
-The workspace could be passed into the tool call, it's an easy enhancement, but I'd like to get feedback first.
+- Be specific in your requests.
+- Point the LLM to examples: if there is already a similar pipeline in your workspace, ask it to look at it first;
+  if you have a template in mind, ask it to look at the template.
+- Instruct the LLM to iterate with you locally before creating the pipeline: have it validate the drafts, then let it
+  create the pipeline once it is up to your standards.
 
+### Debugging Pipelines
 
+The `deepset-mcp` tools allow LLMs to debug pipelines on the deepset AI platform.
+The primary tools used for debugging are:
+
+- `get_logs`
+- `validate_pipeline`
+- `search_pipeline`
+- `search_pipeline_templates`
+- `search_component_definition`
 
-## Further improvements ideas
+You can ask the LLM to check the logs of a specific pipeline if it is already deployed but has errors.
+The LLM will find errors in the logs and devise strategies to fix them.
+If your pipeline is not deployed yet, the LLM can autonomously validate it and fix validation errors.
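The validate-and-fix loop for an undeployed pipeline can be sketched as follows. Both functions are stand-ins: `validate` mimics what the `validate_pipeline` tool returns, and `fix` stands in for the LLM's edit step; neither is a real `deepset-mcp` API:

```python
# Illustrative sketch of an agent's validate-and-fix loop for an undeployed
# pipeline. Both functions below are stubs, not deepset-mcp APIs.
def validate(yaml_config: str) -> list[str]:
    """Stub validator: flags configs that still contain a TODO placeholder."""
    return ["unresolved TODO in config"] if "TODO" in yaml_config else []

def fix(yaml_config: str, errors: list[str]) -> str:
    """Stub fix step: in reality the LLM rewrites the config based on the errors."""
    return yaml_config.replace("TODO", "intfloat/e5-base-v2")

config = "embedder_model: TODO"
for _ in range(5):  # bound the number of repair attempts
    errors = validate(config)
    if not errors:
        break
    config = fix(config, errors)
```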
 
-- expose standard prompts via MCP e.g., for debugging, fixing pipelines, reading logs etc
-- fix the docker run command to clear cache
-- the ability to dump the conversation of improving the copilot
-- test with different clients other than Claude Desktop app
+## CLI
+
+You can use the MCP server as a Haystack Agent through a command-line interface.
+
+Install it with `uv pip install deepset-mcp[cli]`.
+
+Start the interactive CLI with:
+
+`deepset agent chat`
+
+You can set environment variables before starting the Agent via:
+
+```shell
+export DEEPSET_API_KEY=your_key
+export DEEPSET_WORKSPACE=your_workspace
+```
 
+You can also provide a `.env` file using the `--env-file` option:
 
+`deepset agent chat --env-file your/env/.file`
 
+The agent will load environment variables from the file on startup.
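Loading an env file on startup amounts to reading `KEY=VALUE` lines into the process environment. A minimal sketch of that behavior (this illustrates the idea, it is not the CLI's actual parser):

```python
import os

# Minimal sketch of env-file loading: parse KEY=VALUE lines, skipping blank
# lines and comments. Illustrative only; not the CLI's actual parser.
def load_env_file(text: str) -> dict[str, str]:
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values

env = load_env_file("DEEPSET_API_KEY=your_key\nDEEPSET_WORKSPACE=your_workspace\n")
os.environ.update(env)  # make the values visible to the agent process
```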

TODO.md

Lines changed: 0 additions & 21 deletions
This file was deleted.

assets/deepset-mcp-3.gif

23.6 MB

src/deepset_mcp/agents/debugging/debugging_agent.py

Lines changed: 1 addition & 1 deletion
@@ -23,7 +23,7 @@ def get_agent(benchmark_config: BenchmarkConfig, interactive: bool = False) -> A
         },
     )
 
-    tools = MCPToolset(server_info=server_info)
+    tools = MCPToolset(server_info=server_info, invocation_timeout=300.0)
     if interactive:
         tools = wrap_toolset_interactive(tools).toolset
 
src/deepset_mcp/benchmark/runner/cli_agent.py

Lines changed: 4 additions & 1 deletion
@@ -344,7 +344,10 @@ def validate_agent_config(
 
 @agent_app.command("chat")
 def chat_with_agent(
-    agent_config: str = typer.Argument(..., help="Path to agent configuration file (YAML)."),
+    agent_config: str = typer.Argument(
+        default=str(Path(__file__).parent.parent / "agent_configs/debugging_agent.yml"),
+        help="Path to agent configuration file (YAML).",
+    ),
     workspace: str | None = typer.Option(None, "--workspace", "-w", help="Override Deepset workspace."),
     api_key: str | None = typer.Option(None, "--api-key", "-k", help="Override Deepset API key."),
     env_file: str | None = typer.Option(None, "--env-file", "-e", help="Path to environment file."),
src/deepset_mcp/main.py

Lines changed: 8 additions & 0 deletions
@@ -26,6 +26,14 @@ async def deepset_copilot() -> str:
     return prompt_path.read_text()
 
 
+@mcp.prompt()
+async def deepset_recommended_prompt() -> str:
+    """Recommended system prompt for the deepset copilot."""
+    prompt_path = Path(__file__).parent / "prompts/deepset_debugging_agent.md"
+
+    return prompt_path.read_text()
+
+
 def main() -> None:
     """Entrypoint for the deepset MCP server."""
     parser = argparse.ArgumentParser(description="Run the Deepset MCP server.")

src/deepset_mcp/prompts/deepset_debugging_agent.md

Lines changed: 10 additions & 2 deletions
@@ -1,5 +1,3 @@
-# Deepset AI Platform Debugging Agent
-
 You are an expert debugging assistant for the deepset AI platform, specializing in helping users identify and resolve issues with their pipelines and indexes. Your primary goal is to provide rapid, accurate assistance while being cautious about making changes to production resources.
 
 ## Core Capabilities
@@ -178,6 +176,16 @@ To prevent this in the future, consider [preventive measure]."
 - Reference template configurations when suggesting parameter values
 - Always provide context when showing technical output
 
+### Working with the Object Store
+
+Many tools write their output into an object store. You will see an object id (e.g. @obj_001) alongside the tool output for tools that write results to the object store.
+
+Tool output is often truncated. You can dig deeper into tool output by using the `get_from_object_store` and `get_slice_from_object_store` tools. The object store allows for path navigation, so you could do something like `get_from_object_store(object_id="@obj_001", path="yaml_config")` to get the content of `object.yaml_config`.
+
+You can also invoke many tools by reference. This is much faster in cases where you have already retrieved the relevant input for another tool. Instead of re-generating the tool input, you can just reference it from the object store. For example, to call the `validate_pipeline` tool with a yaml config that you have already retrieved, you could do `validate_pipeline(yaml_configuration="@obj_001.yaml_config")`. Make sure to use references whenever possible. They are much more efficient than re-generating the input.
+
 
 ## Error Pattern Recognition
 
 ### Common Errors and Solutions

0 commit comments
