Note left of C: Contains model used,<br/>stop reason, final content
Note over S: Server continues with<br/>AI-assisted result
```
This human-in-the-loop design ensures that users maintain control over what the LLM sees and generates, even when servers initiate the requests.
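As a sketch of how a client might enforce that control, the hypothetical types below model a review step that every server-initiated sampling request passes through before anything reaches the LLM. None of these names (`SamplingRequest`, `ReviewDecision`, `review`) are SDK API; they only illustrate the approve/edit/reject flow described above.

```swift
// Hypothetical client-side review step for server-initiated sampling.
// These types are illustrative and not part of the SDK.
struct SamplingRequest {
    var systemPrompt: String?
    var userMessage: String
}

enum ReviewDecision {
    case approve(SamplingRequest) // possibly edited by the user first
    case reject(reason: String)
}

// The client surfaces each request for user review before forwarding it.
func review(_ request: SamplingRequest, userAllows: Bool) -> ReviewDecision {
    userAllows ? .approve(request) : .reject(reason: "Denied by user")
}
```

A real client would present the request in its UI and let the user edit the prompt before approving, but the control flow is the same: nothing is sent to the model without an explicit decision.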
### Error Handling
Handle common client errors:
}
```
### Sampling
Servers can request LLM completions from clients through sampling. This enables agentic behaviors where servers can ask for AI assistance while maintaining human oversight.
> [!NOTE]
> The current implementation provides the correct API design for sampling, but requires bidirectional communication support in the transport layer. This feature will be fully functional when bidirectional transport support is added.
```swift
// Enable sampling capability in server
let server = Server(
    name: "MyModelServer",
    version: "1.0.0",
    capabilities: .init(
        sampling: .init(), // Enable sampling capability
        tools: .init(listChanged: true)
    )
)

// Request sampling from the client (conceptual - requires bidirectional transport)
do {
    let result = try await server.requestSampling(
        messages: [
            Sampling.Message(role: .user, content: .text("Analyze this data and suggest next steps"))
        ],
        systemPrompt: "You are a helpful data analyst",
        maxTokens: 150,
        temperature: 0.7
    )

    // Use the LLM completion in your server logic
    print("LLM suggested: \(result.content)")
} catch {
    print("Sampling request failed: \(error)")
}
```
Sampling enables powerful agentic workflows:
- **Decision-making**: Ask the LLM to choose between options
- **Content generation**: Request drafts for user approval
- **Data analysis**: Get AI insights on complex data
- **Multi-step reasoning**: Chain AI completions with tool calls
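The decision-making pattern, for instance, usually amounts to parsing the completion's text and branching on it. A minimal sketch, where the hypothetical `chooseBackupStrategy` helper stands in for whatever your server does with `result.content` (the strategy names and parsing rule are illustrative, not SDK API):

```swift
// Hypothetical helper illustrating LLM-driven decision-making: the server
// asks the model to pick between two options, then branches on the reply.
enum BackupStrategy: String {
    case full, incremental
}

func chooseBackupStrategy(from llmReply: String) -> BackupStrategy {
    // Default to the safer full backup when the reply is ambiguous.
    llmReply.lowercased().contains("incremental") ? .incremental : .full
}
```

Keeping the parsing defensive, with a safe default for ambiguous replies, matters because the model's wording is not guaranteed to match your expected options exactly.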
#### Initialize Hook
Control client connections with an initialize hook: