Stop MCP client receive loop from spinning after transport closes #171
Motivation and Context
When an MCP server process exited,
When an MCP server process exited, `Client.connect` kept calling `connection.receive()` inside a `repeat … while true` loop even though the underlying stream had already finished. That resulted in a tight loop consuming ~100% CPU per disconnected server (e.g., killing 8 servers spiked to ~800% CPU). Breaking out when the stream finishes stops the runaway task and lets reconnect logic create a fresh transport instead.
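For illustration, here is a minimal sketch of the fixed loop shape, assuming the receive loop iterates an `AsyncThrowingStream` of raw messages; the names `receiveLoop`, `connection`, and `handle` are hypothetical stand-ins, not the SDK's actual API:

```swift
import Foundation

// Hypothetical sketch of the receive loop after the fix; not the SDK's actual code.
func receiveLoop(connection: AsyncThrowingStream<Data, Error>,
                 handle: (Data) async -> Void) async {
    var iterator = connection.makeAsyncIterator()
    repeat {
        do {
            guard let message = try await iterator.next() else {
                // The stream has finished: the transport is gone. Before the fix,
                // control fell back to the top of the loop and next() returned nil
                // immediately on every iteration, spinning at ~100% CPU.
                break
            }
            await handle(message)
        } catch {
            break // A transport error also ends the loop.
        }
    } while true
}
```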
How Has This Been Tested?
Tested in the BoltAI macOS client using this local package:
Breaking Changes
No breaking API changes. Existing clients automatically benefit from the fix and still receive the same stream/error handling behavior.
Types of changes
Checklist
Additional context
The fix keeps the transient “resource temporarily unavailable” retry path intact, so transports that briefly report EAGAIN will still retry. Only a fully finished receive stream now terminates the loop, which matches MCP transport semantics.
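As a sketch of that distinction (the transport protocol and names below are hypothetical, for illustration only): a transient EAGAIN keeps the loop alive with a short pause, while a fully finished stream ends it.

```swift
import Foundation

// Hypothetical transport shape, for illustration only; the SDK's real API differs.
protocol MessageTransport {
    /// Returns the next message, or nil once the receive stream has fully finished.
    func receive() async throws -> Data?
}

func receiveMessages(from transport: MessageTransport,
                     handle: (Data) async -> Void) async {
    while true {
        do {
            guard let message = try await transport.receive() else {
                break // Fully finished stream: terminate and let reconnect logic take over.
            }
            await handle(message)
        } catch let error as POSIXError where error.code == .EAGAIN {
            // Transient "resource temporarily unavailable": keep the retry path,
            // pause briefly and try again.
            try? await Task.sleep(nanoseconds: 10_000_000)
        } catch {
            break // Any other error ends the loop as well.
        }
    }
}
```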