# MCP Server for the deepset AI platform

This is the _official_ MCP server for the [deepset AI platform](https://www.deepset.ai/products-and-services/deepset-ai-platform).
It allows Agents in tools like Cursor or Claude Code to build and debug pipelines on the deepset platform.

The MCP server exposes up to 30 hand-crafted tools that are optimized for Agents interacting with the deepset platform.
Using the server, you benefit from faster creation of pipelines or indexes and speedy issue resolution through agentic debugging.
## Installation

Before configuring MCP clients to work with `deepset-mcp`, you need to install [uv](https://docs.astral.sh/uv/), a modern Python package manager.

If `uv` is not installed on your system, you can install it via:
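The standalone installer from the uv documentation works on macOS and Linux (on Windows, uv provides a PowerShell installer instead):

```shell
# Install uv using the official standalone installer (macOS/Linux).
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Alternatively, `pip install uv` works in any existing Python environment.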
Once you have `uv` installed, you can follow one of the guides below to configure your MCP client.

**Configuration**

Latest instructions on how to set up an MCP server for Cursor are covered in their [documentation](https://docs.cursor.com/context/mcp#using-mcp-json).
You can either configure the MCP server for a single Cursor project or globally across all projects.
To configure the `deepset-mcp` server for a single project:

1. Create a file named `mcp.json` in the `.cursor` directory at the root of the project.
2. Add the following configuration:
```json
{
  "mcpServers": {
    "deepset": {
      "command": "uvx",
      "args": ["deepset-mcp"],
      "env": {
        "DEEPSET_WORKSPACE": "<WORKSPACE_NAME>",
        "DEEPSET_API_KEY": "<YOUR_API_KEY>"
      }
    }
  }
}
```

Replace `<WORKSPACE_NAME>` and `<YOUR_API_KEY>` with your deepset workspace name and API key.
This creates a virtual environment for the `deepset-mcp` package and runs the command to start the server.
The `deepset-mcp` server should appear in the "Tools & Integrations" section of your "Cursor Settings".
The tools on the server are now available to the Cursor Agent.
It is recommended to create a file named `.cursorrules` at the root of your project (if not already there)
### Multiple Workspaces

In the default configuration, the Agent can only interact with resources in a fixed deepset workspace.
You configure this deepset workspace either through the `DEEPSET_WORKSPACE` environment variable
or the `--workspace` option.

The `--workspace-mode` option (default: `static`) determines whether the Agent interacts with a fixed, pre-configured workspace
or has access to resources in multiple workspaces.
If you want to allow an Agent to access resources from multiple workspaces, use `--workspace-mode dynamic`
in your configuration.
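As a sketch, an MCP client configuration could pass the option as an extra argument; the `uvx deepset-mcp` invocation is an assumption based on the installation section, and only the `--workspace-mode dynamic` option is taken from this document:

```json
{
  "mcpServers": {
    "deepset": {
      "command": "uvx",
      "args": ["deepset-mcp", "--workspace-mode", "dynamic"],
      "env": {
        "DEEPSET_API_KEY": "<YOUR_API_KEY>"
      }
    }
  }
}
```

In `dynamic` mode, no single `DEEPSET_WORKSPACE` restricts the Agent, so it can reach resources across your workspaces.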
In this case, it is recommended to deactivate tools that are not needed. Using fewer tools has the following benefits:
- some MCP clients limit the maximum number of tools
- the Agent will be more focused on the task at hand and not call tools that it does not need
- some savings for input tokens (minimal)
If you are working in `static` workspace mode, you can deactivate the following tools:
If you are only working on indexes but not pipelines, you might deactivate all [pipeline tools](#pipelines).

**Tools You Should Keep**

You should **not** deactivate any tools related to the [object store](#object-store). These tools help
lower the input token count for Agents and speed up execution by allowing tools to be called with the outputs of other tools.

This prompt is also exposed as the `deepset_recommended_prompt` on the MCP server.

In Cursor, add the prompt to `.cursorrules`.
In Claude Desktop, create a "Project" and add the prompt as system instructions.

You may find that customizing the prompt for your specific needs yields the best results.
## Use Cases

The primary way to use the deepset MCP server is through an LLM that interacts with its tools.

### Creating Pipelines
Tell the Agent about the type of pipeline you want to build. Creating new pipelines will work best if you use terminology
that is similar to what is used on the deepset AI platform or in Haystack.
Your prompts should be precise and specific.

Examples:
- "Build a RAG pipeline with hybrid retrieval that uses claude-sonnet-4 from Anthropic as the LLM."
339
346
- "Build an Agent that can iteratively search the web (deep research). Use SerperDev for web search and GPT-4o as the LLM."
You can also instruct the Agent to deploy pipelines, and it can issue search requests against pipelines to test them.
**Best Practices**
- be specific in your requests
- point the Agent to examples: if there is already a similar pipeline in your workspace, ask it to look at that pipeline first;
if you have a template in mind, ask it to look at the template
- instruct the Agent to iterate with you locally before creating the pipeline: have it validate the drafts, then let it
create the pipeline once it is up to your standards.
### Debugging Pipelines
The `deepset-mcp` tools allow Agents to debug pipelines on the deepset AI platform.
Primary tools used for debugging are:

- `get_logs`
- `validate_pipeline`
- `search_pipeline`
- `search_pipeline_templates`
- `search_component_definition`
You can ask the Agent to check the logs of a specific pipeline in case it is already deployed but has errors.
The Agent will find errors in the logs and devise strategies to fix them.
If your pipeline is not deployed yet, the Agent can autonomously validate it and fix validation errors.