feat(mcp-registry): add MCP Registry metadata and server manifest #584
Conversation
| "source": "github" | ||
| }, | ||
| "version": "0.0.0", | ||
| "packages": [ |
Maybe also include the OCI package type here? https://github.com/modelcontextprotocol/registry/blob/main/docs/modelcontextprotocol-io/package-types.mdx#dockeroci-images
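For reference, an OCI entry would presumably mirror the npm/PyPI entries already in this PR; a rough sketch following that structure (the `registryType`/`registryBaseUrl` values are taken from the linked package-types doc, and the image identifier below is a placeholder, not the project's actual image):

```json
{
  "registryType": "oci",
  "registryBaseUrl": "https://docker.io",
  "identifier": "example/kubernetes-mcp-server",
  "version": "0.0.0",
  "transport": {
    "type": "stdio"
  }
}
```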
Noticing that this will require us to ensure that we label our image correctly: `LABEL io.modelcontextprotocol.server.name="io.github.username/kubernetes-manager-mcp"`
The problem with this one is that you'll need to add the volume mounts to the local kubeconfig or add extra configuration so that the MCP is functional.
I'm not sure how this would work in the scope of the registry and the consumer MCP Hosts.
Yeah that's a good point - maybe we can upstream this question to the registry itself? Seems like a reasonable use case
I added the LABEL to the container file, since this PR mostly covers the prerequisites needed to publish the processed server.json.
The issue will remain open until the publishing is complete.
This will also require removing a v1.0.0 of the MCP server that was published by accident: modelcontextprotocol/registry#104
> Yeah that's a good point - maybe we can upstream this question to the registry itself? Seems like a reasonable use case
Maybe that's something to try.
However, this is a particular case for MCP servers such as ours, which rely on local files to automatically set up their configuration.
The other (and less powerful) Kubernetes MCP server has a hard requirement on a kubeconfig file or on passing its content through an environment variable.
In the Docker MCP hub there's a note at the end on how to use the MCP server:
https://hub.docker.com/mcp/server/kubernetes/overview
What I see is that, by providing the three options, some clients might opt for the container package (which is the safest), which won't work in most cases.
My concern is that these users might be driven away from the MCP server, which essentially defeats the purpose of having it published in a registry.
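For illustration only (the image name and the host configuration format are assumptions based on common MCP clients, not this project's published docs), the container package would force users into something like the following, mounting their kubeconfig by hand:

```json
{
  "mcpServers": {
    "kubernetes": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "-v", "/home/user/.kube/config:/root/.kube/config:ro",
        "example/kubernetes-mcp-server"
      ]
    }
  }
}
```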
> My concern is that these users might be driven away from the MCP server, which essentially defeats the purpose of having it published in a registry.
That's a good point. For now let's stick with not using the OCI package then, but ideally we should find a way (perhaps with the upstream registry folks) to specify this config.
> Noticing that this will require us to ensure that we label our image correctly:
> `LABEL io.modelcontextprotocol.server.name="io.github.username/kubernetes-manager-mcp"`
Yes, this is required for the upstream repo.
@rdimitrov will this also be part of ToolHive registry validations? (e.g. stacklok/toolhive-registry-server#265)
> The problem with this one is that you'll need to add the volume mounts to the local kubeconfig or add extra configuration so that the MCP is functional.
Are you planning to deploy it locally or in-cluster?
In the latter case, I imagine it uses the in-cluster config, right?
For those deploying the local server, you could specify the config either as an env variable:
"environmentVariables": [
{
"name": "KUBECONFIG",
"description": "Optional path to kubeconfig file (used with --kubeconfig flag)",
"format": "string",
"isSecret": false
}or using the --kubeconfig option:
"packageArguments": [
{
"type": "named",
"name": "--kubeconfig",
"description": "Optional path to kubeconfig file inside container",
"format": "filepath",
"isRequired": false,
"default": "/root/.kube/config"
},
{
"type": "named",
"name": "-v",
"description": "Optional volume mount specification",
"format": "string",
"isRequired": false,
"placeholder": "/host/path:/container/path"
}
],In both cases the -v is requested as a packageArguments.
> I'm not sure how this would work in the scope of the registry and the consumer MCP Hosts.
I hope it is clear that the registry only exposes the server metadata, and it's the admin's responsibility to deploy it according to the provided instructions (e.g. start a Docker container or a K8s Deployment).
The Anthropic registry is immutable, so you can't add the "remotes" field to the registered server after deployment. Instead, once deployed with ToolHive as an MCPServer instance, it will be automatically discovered by the ToolHive registry and its endpoint will be published for consumption:
"remotes": [
{
"type": "streamable-http",
"url": "<deployment>/<namespace>.svc.cluster.local",
]
},There was a problem hiding this comment.
Thanks for the detailed information ❤️
> The problem with this one is that you'll need to add the volume mounts to the local kubeconfig or add extra configuration so that the MCP is functional.
> Are you planning to deploy it locally or in-cluster?
Here I'm only considering the local deployment scenario.
For in-cluster, we already provide Helm charts and other docs. In fact, the challenge for in-cluster deployment is mostly about exposing the MCP server in a secure way.
All comments refer to local deployment.
> I hope it is clear that the registry only exposes the server metadata, and it's the admin's responsibility to deploy it according to the provided instructions (e.g. start a Docker container or a K8s Deployment).
Yes, that's why I have the concern with the container image package.
The goal is that users can use the registry to discover the MCP server and run it flawlessly.
With the NPM and Python package wrappers, we know for certain that if the user has a kubeconfig file, it will work straight away.
However, for the container image deployment, the volume and kubeconfig CLI flags will need to be set.
I think that if we add the suggested packageArguments, at least users will be aware that they need some extra config to make things work.
I'm not sure how clients (MCP Hosts) decide on package precedence when setting up an MCP server from the registry for a user. Maybe this is also something to consider registry-wise.
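For contrast with the container sketch earlier, a rough idea of how a typical MCP host would run the npm wrapper (the host configuration format and the npm package name are assumptions; adjust to the actual published package). No volume mounts or flags are needed as long as the user has a kubeconfig in the default location:

```json
{
  "mcpServers": {
    "kubernetes": {
      "command": "npx",
      "args": ["-y", "kubernetes-mcp-server@latest"]
    }
  }
}
```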
Add required metadata for publishing to the official Model Context Protocol (MCP) Registry:
- Add mcpName field to npm package.json generation
- Add mcp-name metadata to README.md for Python package
- Add io.modelcontextprotocol.server.name to container image
- Create server.json manifest with npm and PyPI package definitions

Signed-off-by: Marc Nuri <[email protected]>
b692726 to 47757c3 (Compare)
| "version": "0.0.0", | ||
| "transport": { | ||
| "type": "stdio" | ||
| } | ||
| }, | ||
| { | ||
| "registryType": "pypi", | ||
| "registryBaseUrl": "https://pypi.org", | ||
| "identifier": "kubernetes-mcp-server", | ||
| "version": "0.0.0", |
@manusa should these be the same version as our most recent release? or is the idea to replace that version somewhere else?
This is reflected in the issue:
> The version fields use a placeholder value (0.0.0) that will be replaced by the workflow during publishing.

The idea is that the pipeline will replace the value with the tagged version (similar to what's done for the Python and npm wrapper packages).
This way there's no need to keep updating the version for each release.
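For example (the 1.2.3 tag below is hypothetical), after the workflow substitutes the placeholder, the published PyPI entry would read roughly:

```json
{
  "registryType": "pypi",
  "registryBaseUrl": "https://pypi.org",
  "identifier": "kubernetes-mcp-server",
  "version": "1.2.3"
}
```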
Cali0707 left a comment
LGTM
Up to you @manusa when you want to merge here
Part of #555 (1,2,3)
Add required metadata for publishing to the official Model Context Protocol (MCP) Registry: