
Commit 277b4e7

docs: update readme (#158)
1 parent 87530a9 commit 277b4e7

1 file changed: +26 -19 lines

1 file changed

+26
-19
lines changed

README.md

Lines changed: 26 additions & 19 deletions
@@ -1,7 +1,7 @@
# MCP Server for the deepset AI platform

This is the _official_ MCP server for the [deepset AI platform](https://www.deepset.ai/products-and-services/deepset-ai-platform).
-It allows Agents in tools like Cursor or Claude Code to build and debug pipelines on the platform.
+It allows Agents in tools like Cursor or Claude Code to build and debug pipelines on the deepset platform.

The MCP server exposes up to 30 hand-crafted tools that are optimized for Agents interacting with the deepset platform.
Using the server, you benefit from faster creation of pipelines or indexes and speedy issue resolution through agentic debugging.
@@ -39,7 +39,7 @@ Using the server, you benefit from faster creation of pipelines or indexes and s

## Installation

-The recommended way to use the `deepset-mcp`-package is via [uv](https://docs.astral.sh/uv/).
+Before configuring MCP clients to work with `deepset-mcp`, you need to install [uv](https://docs.astral.sh/uv/), a modern Python package manager.

If `uv` is not installed on your system, you can install it via:

@@ -68,11 +68,11 @@ Once you have `uv` installed, you can follow one of the guides below to configur
**Configuration**

Latest instructions on how to set up an MCP server for Cursor are covered in their [documentation](https://docs.cursor.com/context/mcp#using-mcp-json).
-You can either configure the MCP server for a single project or globally across all projects.
+You can either configure the MCP server for a single Cursor project or globally across all projects.

-To configure the `deepset-mcp` server for your project:
+To configure the `deepset-mcp` server for a single project:

-1. create a file with the name `mcp.json` in your `.cursor` directory
+1. create a file with the name `mcp.json` in your `.cursor` directory at the root of the project
2. Add the following configuration

```json
@@ -91,7 +91,7 @@ To configure the `deepset-mcp` server for your project:
```

This creates a virtual environment for the `deepset-mcp` package and runs the command to start the server.
-The `deepset-mcp` server should appear in the "Tools & Integrations"-section of your "Cursor Settings".
+The `deepset-mcp` server should appear in the "Tools & Integrations" section of your "Cursor Settings".
The tools on the server are now available to the Cursor Agent.

It is recommended to create a file named `.cursorrules` at the root of your project (if not already there)
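The JSON configuration that steps 1 and 2 above refer to sits between these two hunks and is not part of this diff. Purely as a hedged sketch of the general shape of a Cursor `mcp.json` entry that launches the server through `uv` — the server key, the `uvx` command, and the `DEEPSET_API_KEY` variable are assumptions for illustration, not taken from this commit — it could look roughly like this:

```json
{
  "mcpServers": {
    "deepset": {
      "command": "uvx",
      "args": ["deepset-mcp"],
      "env": {
        "DEEPSET_WORKSPACE": "<your-workspace>",
        "DEEPSET_API_KEY": "<your-api-key>"
      }
    }
  }
}
```

Of the values above, only `DEEPSET_WORKSPACE` is confirmed by this diff (see the Multiple Workspaces hunk below); consult the full README for the exact entry.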
@@ -192,9 +192,12 @@ If running with Docker, you need to use the following configuration with your MC

### Multiple Workspaces

-The basic configuration uses a `static` workspace which you pass in via the `DEEPSET_WORKSPACE` environment variable
+In the default configuration, the Agent can only interact with resources in a fixed deepset workspace.
+You configure this deepset workspace either through the `DEEPSET_WORKSPACE` environment variable
or the `--workspace` option.
-You can configure this behaviour by using the `--workspace-mode`-option (default: `static`).
+
+The `--workspace-mode`-option (default: `static`) determines if the Agent can interact with a fixed, pre-configured workspace,
+or if it should have access to resources in multiple workspaces.
If you want to allow an Agent to access resources from multiple workspaces, use `--workspace-mode dynamic`
in your configuration.

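To make the two modes concrete, here is a hedged sketch of a `dynamic` configuration, reusing the assumed `uvx`-style entry from the sketch above — only the `--workspace-mode dynamic` flag comes from this diff; the command name and argument placement are assumptions:

```json
{
  "mcpServers": {
    "deepset": {
      "command": "uvx",
      "args": ["deepset-mcp", "--workspace-mode", "dynamic"],
      "env": {
        "DEEPSET_API_KEY": "<your-api-key>"
      }
    }
  }
}
```

The fixed `DEEPSET_WORKSPACE` variable is omitted here on the assumption that, in `dynamic` mode, the Agent selects the workspace itself.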
@@ -267,7 +270,7 @@ You can view documentation for all tools in the [tools section](#tools). For man
In this case, it is recommended to deactivate tools that are not needed. Using fewer tools has the following benefits:

- some MCP clients limit the maximum number of tools
-- the LLM will be more focused on the task at hand and not call tools that it does not need
+- the Agent will be more focused on the task at hand and not call tools that it does not need
- some savings for input tokens (minimal)

If you are working in `static` workspace mode, you can deactivate the following tools:
@@ -299,6 +302,10 @@ all index tools except `get_index` because the Agent does not need to interact w

If you are only working on indexes but not pipelines, you might deactivate all [pipeline tools](#pipelines).

+
+**Tools You Should Keep**
+
You should **not** deactivate any tools related to the [object store](#object-store). These tools are special tools that help
with lowering input token count for Agents and speeding up execution by allowing to call tools with outputs from other tools.

@@ -317,9 +324,9 @@ This prompt is also exposed as the `deepset_recommended_prompt` on the MCP serve

In Cursor, add the prompt to `.cursorrules`.

-In Claude Desktop, create a "Project" and add set the prompt as system instructions.
+In Claude Desktop, create a "Project" and add the prompt as system instructions.

-You may find that customizing the prompt for your specific needs yields best results
+You may find that customizing the prompt for your specific needs yields best results.


## Use Cases
@@ -328,7 +335,7 @@ The primary way to use the deepset MCP server is through an LLM that interacts w

### Creating Pipelines

-Tell the LLM about the type of pipeline you want to build. Creating new pipelines will work best if you use terminology
+Tell the Agent about the type of pipeline you want to build. Creating new pipelines will work best if you use terminology
that is similar to what is used on the deepset AI platform or in Haystack.

Your prompts should be precise and specific.
@@ -338,30 +345,30 @@ Examples:
- "Build a RAG pipeline with hybrid retrieval that uses claude-sonnet-4 from Anthropic as the LLM."
- "Build an Agent that can iteratively search the web (deep research). Use SerperDev for web search and GPT-4o as the LLM."

-You can also instruct the LLM to deploy pipelines, and it can issue search requests against pipelines to test them.
+You can also instruct the Agent to deploy pipelines, and it can issue search requests against pipelines to test them.

**Best Practices**

- be specific in your requests
-- point the LLM to examples, if there is already a similar pipeline in your workspace, then ask it to look at it first,
+- point the Agent to examples, if there is already a similar pipeline in your workspace, then ask it to look at it first,
if you have a template in mind, ask it to look at the template
-- instruct the LLM to iterate with you locally before creating the pipeline, have it validate the drafts and then let it
+- instruct the Agent to iterate with you locally before creating the pipeline, have it validate the drafts and then let it
create it once the pipeline is up to your standards


### Debugging Pipelines

-The `deepset-mcp` tools allow LLMs to debug pipelines on the deepset AI platform.
+The `deepset-mcp` tools allow Agents to debug pipelines on the deepset AI platform.
Primary tools used for debugging are:
- get_logs
- validate_pipeline
- search_pipeline
- search_pipeline_templates
- search_component_definition

-You can ask the LLM to check the logs of a specific pipeline in case it is already deployed but has errors.
-The LLM will find errors in the logs and devise strategies to fix them.
-If your pipeline is not deployed yet, the LLM can autonomously validate it and fix validation errors.
+You can ask the Agent to check the logs of a specific pipeline in case it is already deployed but has errors.
+The Agent will find errors in the logs and devise strategies to fix them.
+If your pipeline is not deployed yet, the Agent can autonomously validate it and fix validation errors.

## Reference