62 changes: 41 additions & 21 deletions README.md
@@ -24,7 +24,7 @@ jobs:
 steps:
   - name: Test Local Action
     id: inference
-    uses: actions/ai-inference@v1
+    uses: actions/ai-inference@v2
     with:
       prompt: 'Hello!'

@@ -42,7 +42,7 @@ supports both plain text files and structured `.prompt.yml` files:
 steps:
   - name: Run AI Inference with Text File
     id: inference
-    uses: actions/ai-inference@v1
+    uses: actions/ai-inference@v2
     with:
       prompt-file: './path/to/prompt.txt'
 ```
@@ -56,7 +56,7 @@ support templating, custom models, and JSON schema responses:
 steps:
   - name: Run AI Inference with Prompt YAML
     id: inference
-    uses: actions/ai-inference@v1
+    uses: actions/ai-inference@v2
     with:
       prompt-file: './.github/prompts/sample.prompt.yml'
       input: |
@@ -132,7 +132,7 @@ of an inline system prompt:
 steps:
   - name: Run AI Inference with System Prompt File
     id: inference
-    uses: actions/ai-inference@v1
+    uses: actions/ai-inference@v2
     with:
       prompt: 'Hello!'
       system-prompt-file: './path/to/system-prompt.txt'
@@ -146,7 +146,7 @@ This can be useful when model response exceeds actions output limit
 steps:
   - name: Test Local Action
     id: inference
-    uses: actions/ai-inference@v1
+    uses: actions/ai-inference@v2
     with:
       prompt: 'Hello!'

@@ -169,7 +169,7 @@ repository management, issue tracking, and pull request operations.
 steps:
   - name: AI Inference with GitHub Tools
     id: inference
-    uses: actions/ai-inference@v1.2
+    uses: actions/ai-inference@v2
     with:
       prompt: 'List my open pull requests and create a summary'
       enable-github-mcp: true
@@ -183,14 +183,33 @@ and the GitHub MCP server:
 steps:
   - name: AI Inference with Separate MCP Token
     id: inference
-    uses: actions/ai-inference@v1.2
+    uses: actions/ai-inference@v2
     with:
       prompt: 'List my open pull requests and create a summary'
       enable-github-mcp: true
       token: ${{ secrets.GITHUB_TOKEN }}
       github-mcp-token: ${{ secrets.USER_PAT }}
 ```

+#### Configuring GitHub MCP Toolsets
+
+By default, the GitHub MCP server provides a standard set of tools (`context`, `repos`, `issues`, `pull_requests`, `users`). You can customize which toolsets are available with the `github-mcp-toolsets` parameter:
+
+```yaml
+steps:
+  - name: AI Inference with Custom Toolsets
+    id: inference
+    uses: actions/ai-inference@v2
+    with:
+      prompt: 'Analyze recent workflow runs and check security alerts'
+      enable-github-mcp: true
+      token: ${{ secrets.USER_PAT }}
+      github-mcp-toolsets: 'repos,issues,pull_requests,actions,code_security'
+```
+
+**Available toolsets:** see the [tool configuration section](https://github.com/github/github-mcp-server/blob/main/README.md#tool-configuration) of the github-mcp-server README.
+
 When MCP is enabled, the AI model will have access to GitHub tools and can
 perform actions like searching issues and PRs.

@@ -199,20 +218,21 @@ perform actions like searching issues and PRs.
 Various inputs are defined in [`action.yml`](action.yml) to let you configure
 the action:

-| Name | Description | Default |
-| ---- | ----------- | ------- |
-| `token` | Token to use for inference. Typically the GITHUB_TOKEN secret | `github.token` |
-| `prompt` | The prompt to send to the model | N/A |
-| `prompt-file` | Path to a file containing the prompt (supports .txt and .prompt.yml formats). If both `prompt` and `prompt-file` are provided, `prompt-file` takes precedence | `""` |
-| `input` | Template variables in YAML format for .prompt.yml files (e.g., `var1: value1` on separate lines) | `""` |
-| `file_input` | Template variables in YAML where values are file paths. The file contents are read and used for templating | `""` |
-| `system-prompt` | The system prompt to send to the model | `"You are a helpful assistant"` |
-| `system-prompt-file` | Path to a file containing the system prompt. If both `system-prompt` and `system-prompt-file` are provided, `system-prompt-file` takes precedence | `""` |
-| `model` | The model to use for inference. Must be available in the [GitHub Models](https://github.com/marketplace?type=models) catalog | `openai/gpt-4o` |
-| `endpoint` | The endpoint to use for inference. If you're running this as part of an org, you should probably use the org-specific Models endpoint | `https://models.github.ai/inference` |
-| `max-tokens` | The max number of tokens to generate | 200 |
-| `enable-github-mcp` | Enable Model Context Protocol integration with GitHub tools | `false` |
-| `github-mcp-token` | Token to use for GitHub MCP server (defaults to the main token if not specified). This must be a PAT for MCP to work | `""` |
+| Name | Description | Default |
+| ---- | ----------- | ------- |
+| `token` | Token to use for inference. Typically the GITHUB_TOKEN secret | `github.token` |
+| `prompt` | The prompt to send to the model | N/A |
+| `prompt-file` | Path to a file containing the prompt (supports .txt and .prompt.yml formats). If both `prompt` and `prompt-file` are provided, `prompt-file` takes precedence | `""` |
+| `input` | Template variables in YAML format for .prompt.yml files (e.g., `var1: value1` on separate lines) | `""` |
+| `file_input` | Template variables in YAML where values are file paths. The file contents are read and used for templating | `""` |
+| `system-prompt` | The system prompt to send to the model | `"You are a helpful assistant"` |
+| `system-prompt-file` | Path to a file containing the system prompt. If both `system-prompt` and `system-prompt-file` are provided, `system-prompt-file` takes precedence | `""` |
+| `model` | The model to use for inference. Must be available in the [GitHub Models](https://github.com/marketplace?type=models) catalog | `openai/gpt-4o` |
+| `endpoint` | The endpoint to use for inference. If you're running this as part of an org, you should probably use the org-specific Models endpoint | `https://models.github.ai/inference` |
+| `max-tokens` | The max number of tokens to generate | 200 |
+| `enable-github-mcp` | Enable Model Context Protocol integration with GitHub tools | `false` |
+| `github-mcp-token` | Token to use for GitHub MCP server (defaults to the main token if not specified). This must be a PAT for MCP to work | `""` |
+| `github-mcp-toolsets` | Comma-separated list of toolsets to enable for GitHub MCP (e.g., `repos,issues,pull_requests,actions`). Use `all` for all toolsets or `default` for the default set | `""` |

 ## Outputs

4 changes: 2 additions & 2 deletions __tests__/main.test.ts
@@ -199,7 +199,7 @@ describe('main.ts', () => {

     await run()
 
-    expect(mockConnectToGitHubMCP).toHaveBeenCalledWith('fake-token')
+    expect(mockConnectToGitHubMCP).toHaveBeenCalledWith('fake-token', '')
     expect(mockMcpInference).toHaveBeenCalledWith(
       expect.objectContaining({
         messages: [
@@ -226,7 +226,7 @@

     await run()
 
-    expect(mockConnectToGitHubMCP).toHaveBeenCalledWith('fake-token')
+    expect(mockConnectToGitHubMCP).toHaveBeenCalledWith('fake-token', '')
     expect(mockSimpleInference).toHaveBeenCalled()
     expect(mockMcpInference).not.toHaveBeenCalled()
     expect(core.warning).toHaveBeenCalledWith('MCP connection failed, falling back to simple inference')
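A note on the `''` in these updated expectations: `core.getInput` returns an empty string for any input that is not set, so `run()` now always forwards a second argument, which is `''` when `github-mcp-toolsets` is absent. A minimal standalone sketch of that behavior (illustrative, not code from this PR):

```ts
import * as core from '@actions/core'

// For an input whose default is '' (or outside a workflow run entirely),
// getInput yields the empty string rather than undefined or null.
const toolsets: string = core.getInput('github-mcp-toolsets')

// run() passes this straight to connectToGitHubMCP, which is why the mock
// is now expected to receive ('fake-token', '').
console.log(JSON.stringify(toolsets)) // -> ""
```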
34 changes: 34 additions & 0 deletions __tests__/mcp.test.ts
@@ -113,6 +113,40 @@ describe('mcp.ts', () => {
       expect(result?.tools).toHaveLength(0)
       expect(core.info).toHaveBeenCalledWith('Retrieved 0 tools from GitHub MCP server')
     })

+    it('uses default toolsets when toolsets parameter is not provided', async () => {
+      const token = 'test-token'
+
+      mockConnect.mockResolvedValue(undefined)
+      mockListTools.mockResolvedValue({tools: []})
+
+      await connectToGitHubMCP(token)
+
+      expect(core.info).toHaveBeenCalledWith('Using default GitHub MCP toolsets')
+    })
+
+    it('uses custom toolsets when toolsets parameter is provided', async () => {
+      const token = 'test-token'
+      const toolsets = 'repos,issues,pull_requests,actions'
+
+      mockConnect.mockResolvedValue(undefined)
+      mockListTools.mockResolvedValue({tools: []})
+
+      await connectToGitHubMCP(token, toolsets)
+
+      expect(core.info).toHaveBeenCalledWith('Using GitHub MCP toolsets: repos,issues,pull_requests,actions')
+    })
+
+    it('ignores empty toolsets parameter', async () => {
+      const token = 'test-token'
+
+      mockConnect.mockResolvedValue(undefined)
+      mockListTools.mockResolvedValue({tools: []})
+
+      await connectToGitHubMCP(token, ' ')
+
+      expect(core.info).toHaveBeenCalledWith('Using default GitHub MCP toolsets')
+    })
   })

   describe('executeToolCall', () => {
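The three new cases cover the default, custom, and blank paths. One further case a reviewer might ask for is the documented `all` value; since `connectToGitHubMCP` forwards any non-blank string verbatim, a sketch reusing this file's mocks could look like the following (hypothetical, not part of the PR):

```ts
it("forwards the documented 'all' value verbatim", async () => {
  mockConnect.mockResolvedValue(undefined)
  mockListTools.mockResolvedValue({tools: []})

  await connectToGitHubMCP('test-token', 'all')

  // The implementation does not special-case 'all'; any non-blank string
  // is logged and sent through as the toolset selection.
  expect(core.info).toHaveBeenCalledWith('Using GitHub MCP toolsets: all')
})
```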
4 changes: 4 additions & 0 deletions action.yml
@@ -58,6 +58,10 @@ inputs:
     description: The token to use for GitHub MCP server (defaults to the main token if not specified). This must be a PAT for MCP to work.
     required: false
     default: ''
+  github-mcp-toolsets:
+    description: 'Comma-separated list of toolsets to enable for GitHub MCP (e.g., "repos,issues,pull_requests,actions"). Use "all" for all toolsets or "default" for the default set. If not specified, the default toolsets (context,repos,issues,pull_requests,users) are used.'
+    required: false
+    default: ''

 # Define your outputs here.
 outputs:
22 changes: 16 additions & 6 deletions dist/index.js

(Generated file; diff not rendered.)

2 changes: 1 addition & 1 deletion dist/index.js.map

(Large diff; not rendered.)

3 changes: 2 additions & 1 deletion src/main.ts
@@ -62,6 +62,7 @@ export async function run(): Promise<void> {

   // Get GitHub MCP token (use dedicated token if provided, otherwise fall back to main token)
   const githubMcpToken = core.getInput('github-mcp-token') || token
+  const githubMcpToolsets = core.getInput('github-mcp-toolsets')
 
   const endpoint = core.getInput('endpoint')

@@ -81,7 +82,7 @@ export async function run(): Promise<void> {
   let modelResponse: string | null = null
 
   if (enableMcp) {
-    const mcpClient = await connectToGitHubMCP(githubMcpToken)
+    const mcpClient = await connectToGitHubMCP(githubMcpToken, githubMcpToolsets)
 
     if (mcpClient) {
       modelResponse = await mcpInference(inferenceRequest, mcpClient)
20 changes: 15 additions & 5 deletions src/mcp.ts
@@ -35,17 +35,27 @@ export interface GitHubMCPClient {
 /**
  * Connect to the GitHub MCP server and retrieve available tools
  */
-export async function connectToGitHubMCP(token: string): Promise<GitHubMCPClient | null> {
+export async function connectToGitHubMCP(token: string, toolsets?: string): Promise<GitHubMCPClient | null> {
   const githubMcpUrl = 'https://api.githubcopilot.com/mcp/'
 
   core.info('Connecting to GitHub MCP server...')
 
+  const headers: Record<string, string> = {
+    Authorization: `Bearer ${token}`,
+    'X-MCP-Readonly': 'true',
+  }
+
+  // Add toolsets header if specified
+  if (toolsets && toolsets.trim() !== '') {
+    headers['X-MCP-Toolsets'] = toolsets
+    core.info(`Using GitHub MCP toolsets: ${toolsets}`)
+  } else {
+    core.info('Using default GitHub MCP toolsets')
+  }
+
   const transport = new StreamableHTTPClientTransport(new URL(githubMcpUrl), {
     requestInit: {
-      headers: {
-        Authorization: `Bearer ${token}`,
-        'X-MCP-Readonly': 'true',
-      },
+      headers,
     },
   })

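The behavioral core of this change is small enough to view as a pure function over the two inputs. A standalone sketch (illustrative only; `buildMcpHeaders` does not exist in this codebase):

```ts
// Mirrors the header construction added to connectToGitHubMCP above:
// a fixed Authorization/read-only pair, plus an optional toolsets header.
function buildMcpHeaders(token: string, toolsets?: string): Record<string, string> {
  const headers: Record<string, string> = {
    Authorization: `Bearer ${token}`,
    'X-MCP-Readonly': 'true',
  }
  // Blank or whitespace-only values fall back to the server's default toolsets.
  if (toolsets && toolsets.trim() !== '') {
    headers['X-MCP-Toolsets'] = toolsets
  }
  return headers
}

// buildMcpHeaders('t')                 -> no X-MCP-Toolsets header (server default)
// buildMcpHeaders('t', '   ')          -> no X-MCP-Toolsets header (server default)
// buildMcpHeaders('t', 'repos,issues') -> X-MCP-Toolsets: repos,issues
```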