  • Ensure the client’s background task breaks out of its receive loop once the transport’s AsyncThrowingStream finishes, preventing tight spins on closed connections.

Motivation and Context

When an MCP server process exited, Client.connect kept calling connection.receive() inside a repeat…while true loop even though the underlying stream had already finished. That resulted in a tight loop consuming ~100% CPU per disconnected server (e.g., killing 8 servers spiked to ~800% CPU). Breaking out when the stream finishes stops the runaway task and lets reconnect logic create a fresh transport instead.
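For illustration, here is a minimal sketch of the loop shape described above. Transport, handle(_:), and the Data payload are hypothetical stand-ins rather than the SDK's actual declarations; the only point is where the loop now breaks once the stream finishes.

```swift
import Foundation

// Minimal sketch only: Transport, handle(_:), and the Data payload are
// hypothetical stand-ins, not the SDK's actual declarations.
protocol Transport {
    func receive() async -> AsyncThrowingStream<Data, Error>
}

func handle(_ message: Data) async {
    // Dispatch the decoded message to the client's handlers (omitted).
}

func runReceiveLoop(connection: some Transport) async throws {
    // The transient-error retry path discussed under "Additional context"
    // is elided here to keep the focus on the loop exit.
    repeat {
        let stream = await connection.receive()
        for try await message in stream {
            await handle(message)          // normal message handling
        }
        // The stream finished, so the connection is closed. Previously the
        // loop called receive() again immediately and spun at ~100% CPU;
        // breaking out lets reconnect logic create a fresh transport instead.
        break
    } while true
}
```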

How Has This Been Tested?

Tested in the BoltAI macOS client with this package consumed as a local dependency:

  • Start MCP servers, then kill them repeatedly (manual reloads and process terminations).
  • Verified that CPU usage stays low and that no new “hot” threads show up when inspecting the process in LLDB.
  • Confirmed normal message handling continues when the stream stays open.

Breaking Changes

No breaking API changes. Existing clients pick up the fix automatically and keep the same stream and error-handling behavior as before.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation update

Checklist

  • I have read the MCP Documentation
  • My code follows the repository's style guidelines
  • New and existing tests pass locally
  • I have added appropriate error handling
  • I have added or updated documentation as needed

Additional context

The fix keeps the transient “resource temporarily unavailable” retry path intact, so transports that briefly report EAGAIN will still retry. Only a fully finished receive stream now terminates the loop, which matches MCP transport semantics.
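As a rough sketch of that distinction (assuming the transient error surfaces as a POSIX EAGAIN; the SDK may wrap it differently), the loop's retry decision could be expressed with a predicate like this:

```swift
import Foundation

// Hypothetical helper illustrating the rule above: only a transient EAGAIN
// keeps the receive loop retrying; a finished stream (no error at all) or any
// other failure ends it.
func shouldRetryReceive(after error: Error) -> Bool {
    (error as? POSIXError)?.code == .EAGAIN
}
```

In the receive loop sketched under Motivation and Context, this would guard a continue inside the catch clause, while the unconditional break after the for-await iteration remains the only exit on a cleanly finished stream.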
