Add /api/generate endpoint for model loading and unloading #266
base: main
Conversation
Reviewer's Guide

This PR introduces a dedicated "load" endpoint in the scheduler to warm up models without running inference, defines corresponding request/response types, integrates a WarmupModel flow into the desktop CLI (invoked via --detach) with tests, updates documentation to reflect the new mode, and includes minor formatting cleanups.

Sequence diagram for model warmup via the new load endpoint:

```mermaid
sequenceDiagram
    actor User
    participant CLI as "Desktop CLI"
    participant Scheduler
    participant Backend
    User->>CLI: Run command with --detach
    CLI->>Scheduler: POST /engines/{backend}/load { model }
    Scheduler->>Backend: Wait for backend installation
    Scheduler->>Backend: Load model into memory
    Backend-->>Scheduler: Model loaded
    Scheduler-->>CLI: { status: "loaded", message }
    CLI-->>User: Success (model warmed up)
```
Class diagram for the new LoadRequest and LoadResponse types:

```mermaid
classDiagram
    class LoadRequest {
        +string Model
    }
    class LoadResponse {
        +string Status
        +string Message
    }
```
Class diagram for the updated Scheduler and Client methods:

```mermaid
classDiagram
    class Scheduler {
        +Load(w http.ResponseWriter, r *http.Request)
    }
    class Client {
        +WarmupModel(ctx context.Context, backend, model string) error
    }
    Scheduler --> LoadRequest
    Scheduler --> LoadResponse
    Client --> Scheduler
```
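Stitching the diagrams together with the review comments further down, the server side of Scheduler.Load is roughly the following sketch. loadModel is a hypothetical stand-in for the scheduler's real wait-for-backend-and-load logic, and the usual encoding/json, io, net/http, and strings imports are assumed:

```go
// Load warms a model up without running inference: it decodes a LoadRequest,
// validates the model name, asks the backend to load it, and reports the result.
func (s *Scheduler) Load(w http.ResponseWriter, r *http.Request) {
	body, err := io.ReadAll(http.MaxBytesReader(w, r.Body, maximumOpenAIInferenceRequestSize))
	if err != nil {
		if _, ok := err.(*http.MaxBytesError); ok {
			http.Error(w, "request too large", http.StatusBadRequest)
		} else {
			http.Error(w, "error reading request body", http.StatusInternalServerError)
		}
		return
	}

	var loadRequest LoadRequest
	if err := json.Unmarshal(body, &loadRequest); err != nil {
		http.Error(w, "invalid request body", http.StatusBadRequest)
		return
	}
	if strings.TrimSpace(loadRequest.Model) == "" {
		http.Error(w, "model name is required", http.StatusBadRequest)
		return
	}

	// loadModel is a stand-in: the real scheduler waits for the backend
	// installation and then brings the model into memory.
	if err := s.loadModel(r.Context(), loadRequest.Model); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	w.Header().Set("Content-Type", "application/json")
	_ = json.NewEncoder(w).Encode(LoadResponse{Status: "loaded", Message: "model loaded into memory"})
}
```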
Summary of Changes

Hello @ericcurtin, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the model management capabilities by introducing a dedicated API endpoint for loading models into memory without requiring an immediate inference request.
Hey there - I've reviewed your changes - here's some feedback:
- Add unit tests for the new Load handler to cover both success and failure branches (e.g., invalid backend, missing model, oversized request body).
- Consider returning structured JSON error payloads in the Load endpoint instead of plain text to keep error responses consistent across your API (see the sketch after this list).
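On the structured-error point, a small shared helper along these lines would keep every failure path returning the same JSON shape. This is a sketch of the idea, not code from the PR, and it assumes net/http and encoding/json are already imported in scheduler.go:

```go
// writeJSONError writes a consistent JSON error body, e.g.
// {"error": "model name is required"}, instead of a plain-text http.Error call.
func writeJSONError(w http.ResponseWriter, status int, msg string) {
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(status)
	_ = json.NewEncoder(w).Encode(map[string]string{"error": msg})
}
```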
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- Add unit tests for the new Load handler to cover both success and failure branches (e.g., invalid backend, missing model, oversized request body).
- Consider returning structured JSON error payloads in the Load endpoint instead of plain text to keep error responses consistent across your API.
## Individual Comments
### Comment 1
<location> `pkg/inference/scheduling/scheduler.go:451` </location>
<code_context>
+ return
+ }
+
+ body, err := io.ReadAll(http.MaxBytesReader(w, r.Body, maximumOpenAIInferenceRequestSize))
+ if err != nil {
+ if _, ok := err.(*http.MaxBytesError); ok {
</code_context>
<issue_to_address>
**suggestion:** Consider logging or surfacing the actual error for unknown read failures.
For non-MaxBytesError failures, logging or returning the specific error will improve troubleshooting and support.
Suggested implementation:
```golang
if _, ok := err.(*http.MaxBytesError); ok {
http.Error(w, "request too large", http.StatusBadRequest)
} else {
// Log the actual error for troubleshooting
log.Printf("error reading request body: %v", err)
http.Error(w, "error reading request body: "+err.Error(), http.StatusInternalServerError)
}
return
```
Make sure the `log` package is imported at the top of your file:
```go
import "log"
```
If you do not want to expose the error message to the client, you can remove `err.Error()` from the `http.Error` call and only log it.
</issue_to_address>
### Comment 2
<location> `pkg/inference/scheduling/scheduler.go:467` </location>
<code_context>
+ return
+ }
+
+ if loadRequest.Model == "" {
+ http.Error(w, "model name is required", http.StatusBadRequest)
+ return
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Check for whitespace-only model names.
Trim whitespace from the model name and ensure it is not only spaces to prevent invalid input from being accepted.
Suggested implementation:
```golang
+ if strings.TrimSpace(loadRequest.Model) == "" {
+ http.Error(w, "model name is required", http.StatusBadRequest)
+ return
```
```golang
import (
"encoding/json"
"io"
"net/http"
"strings"
```
</issue_to_address>
### Comment 3
<location> `cmd/cli/desktop/desktop.go:844-847` </location>
<code_context>
+ }
+ defer resp.Body.Close()
+
+ if resp.StatusCode != http.StatusOK {
+ body, _ := io.ReadAll(resp.Body)
+ return fmt.Errorf("load failed with status %d: %s", resp.StatusCode, body)
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Handle non-JSON error responses gracefully.
Limit the size when reading error response bodies and handle invalid UTF-8 to prevent issues with large or malformed payloads.
```suggestion
if resp.StatusCode != http.StatusOK {
const maxErrBody = 4096
body, _ := io.ReadAll(io.LimitReader(resp.Body, maxErrBody))
// Ensure valid UTF-8, replace invalid bytes
errMsg := string(body)
if !utf8.ValidString(errMsg) {
errMsg = string(bytes.Runes(body))
}
return fmt.Errorf("load failed with status %d: %s", resp.StatusCode, errMsg)
}
```
</issue_to_address>
Pull Request Overview
This PR introduces a new /load endpoint for preloading models into memory without performing inference, and updates the --detach flag to use this dedicated endpoint instead of making a minimal chat request.
Key Changes:
- Added a new /load HTTP endpoint for explicit model loading
- Replaced the --detach implementation to use WarmupModel() instead of Chat()
- Added comprehensive test coverage for the new warmup functionality
Reviewed Changes
Copilot reviewed 6 out of 8 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| pkg/inference/scheduling/scheduler.go | Implements the new Load() handler and registers /load routes |
| pkg/inference/scheduling/api.go | Defines LoadRequest and LoadResponse data structures |
| cmd/cli/desktop/desktop.go | Implements WarmupModel() client method for the load endpoint |
| cmd/cli/desktop/desktop_test.go | Adds unit tests for WarmupModel() success and error cases |
| cmd/cli/commands/run.go | Updates --detach to call WarmupModel() instead of Chat() |
| README.md | Documents the new load endpoint and detach mode usage |
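For the cmd/cli/desktop/desktop.go row above, here is a minimal sketch of what the client half could look like, assuming the desktop Client holds a base URL and an *http.Client (the real client's transport, URL handling, and error wrapping differ):

```go
package desktop

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

// Client is reduced here to the two fields this sketch needs.
type Client struct {
	baseURL    string
	httpClient *http.Client
}

type loadRequest struct {
	Model string `json:"model"`
}

// WarmupModel asks the scheduler to load a model into memory without running
// inference, by POSTing to the per-backend load route from the sequence diagram.
func (c *Client) WarmupModel(ctx context.Context, backend, model string) error {
	payload, err := json.Marshal(loadRequest{Model: model})
	if err != nil {
		return fmt.Errorf("encoding load request: %w", err)
	}

	url := fmt.Sprintf("%s/engines/%s/load", c.baseURL, backend)
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(payload))
	if err != nil {
		return fmt.Errorf("creating load request: %w", err)
	}
	req.Header.Set("Content-Type", "application/json")

	resp, err := c.httpClient.Do(req)
	if err != nil {
		return fmt.Errorf("sending load request: %w", err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		// Cap how much of an error body we echo back to the caller.
		body, _ := io.ReadAll(io.LimitReader(resp.Body, 4096))
		return fmt.Errorf("load failed with status %d: %s", resp.StatusCode, body)
	}
	return nil
}
```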
Code Review
This pull request introduces a new /load endpoint to pre-warm models and updates the run --detach CLI command to use it. The implementation is well-structured, including changes to the API, server-side scheduler, client-side logic, documentation, and tests. My review focuses on improving error handling and code maintainability. I've identified a few medium-severity issues related to an ignored error, duplicated code, and inconsistent error wrapping that would be beneficial to address.
f6278e3 to 167b136
66d0b6a to 858bfb4
858bfb4 to 9c5e958
c671ac4 to c58ceb8
@doringeman @ilopezluna this also includes a vLLM optimization: the model is loaded in the background as soon as you execute "docker model run", while you are still typing the prompt.
So we can load and unload models

Signed-off-by: Eric Curtin <[email protected]>
c58ceb8 to 50f4324
doringeman left a comment
I suggest adding a new endpoint /load to only load the model without also running inference.
```go
const cmdArgs = "MODEL"
c := &cobra.Command{
	Use: "stop " + cmdArgs,
```
Why do we need this instead of just using unload?
I've found myself using docker model stop in the past. I think it's exactly the same behavior as docker model unload; to me, docker model stop is more intuitive.
If the goal of the stop command is to unload a model, maybe we could use an alias, so that docker model stop and docker model unload do the same thing?
Yes, an alias would be the way to go for this IMO.
Agreed, let's alias; we might as well create "docker model load" also.
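A minimal sketch of the alias with cobra's Aliases field; the command name, constructor, and help text here are illustrative rather than the repo's actual wiring:

```go
func newUnloadCmd() *cobra.Command {
	return &cobra.Command{
		Use:     "unload MODEL",
		Aliases: []string{"stop"}, // `docker model stop` behaves exactly like `docker model unload`
		Short:   "Unload a running model",
		Args:    cobra.ExactArgs(1),
		RunE: func(cmd *cobra.Command, args []string) error {
			// The existing unload logic would run here for args[0].
			return nil
		},
	}
}
```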
```go
// handleNotRunningError checks if the error indicates that the model was not running
// and returns a user-friendly message in that case
func handleNotRunningError(err error) error {
	// For now, just return the error as-is
	// This function can be expanded to handle specific "model not running" errors in the future
	return err
}
```
This is already done in handleClientError.
Suggested change: remove handleNotRunningError entirely.
```go
// HandleGenerate handles /api/generate requests
// If prompt is empty, loads the model into memory
// If prompt is empty and keep_alive is 0, unloads the model
func (s *Scheduler) HandleGenerate(w http.ResponseWriter, r *http.Request) {
```
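Going by that doc comment, the load/unload split hinges entirely on the request body. A rough illustration follows; the base URL and model name are placeholders, and only the prompt/keep_alive semantics come from the comment above:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	baseURL := "http://localhost:12434" // placeholder address
	model := "ai/smollm2"               // placeholder model name

	// Empty prompt: the handler treats this as "load the model into memory".
	load := fmt.Sprintf(`{"model": %q}`, model)

	// Empty prompt plus keep_alive of 0: the handler treats this as "unload".
	unload := fmt.Sprintf(`{"model": %q, "keep_alive": 0}`, model)

	for _, body := range []string{load, unload} {
		resp, err := http.Post(baseURL+"/api/generate", "application/json", bytes.NewBufferString(body))
		if err != nil {
			fmt.Println("request failed:", err)
			continue
		}
		resp.Body.Close()
		fmt.Println(body, "->", resp.Status)
	}
}
```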
Why isn't this under inference.InferencePrefix like all the other scheduler endpoints?
```go
// SanitizeModelNameForCommand sanitizes a model name to be safe for use in command arguments.
// This prevents command injection by allowing only safe characters in model names.
func SanitizeModelNameForCommand(modelName string) string {
	if modelName == "" {
		return ""
	}

	// Only allow alphanumeric characters, dots, hyphens, underscores, and forward slashes
	// This covers common model name formats like "user/model-name", "model.v1", etc.
	re := regexp.MustCompile(`^[a-zA-Z0-9._/-]+$`)
	if !re.MatchString(modelName) {
		// If the model name contains unsafe characters, clean it
		// Replace unsafe characters with underscores
		safeModelName := regexp.MustCompile(`[^a-zA-Z0-9._/-]`).ReplaceAllString(modelName, "_")
		// Ensure it doesn't start or end with unsafe sequences like ".." that could be path traversal
		safeModelName = strings.ReplaceAll(safeModelName, "..", "_")
		return safeModelName
	}

	return modelName
}
```
Dead code.
Suggested change: remove SanitizeModelNameForCommand entirely.
```go
	} else {
		// Load the model by requesting a minimal inference
		// This will trigger the loading mechanism in the loader
		backend := s.defaultBackend
```
Why does this work for the default backend only?
```go
	// This makes sure the model is loaded when the user types their first prompt
	go func() {
		_ = desktopClient.Chat(model, "", nil, func(content string) {
			// Silently preload the model - discard output
```
The request/response will still be recorded.
Why don't we just load the model without running inference?
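One way to do that, reusing the WarmupModel signature from the class diagram above; desktopClient and model come from the surrounding run.go code, while cmd.Context(), backend, and the warning output are assumptions:

```go
// Best-effort preload: warm the model up in the background without running
// inference, so it is already resident by the time the first prompt arrives.
go func() {
	if err := desktopClient.WarmupModel(cmd.Context(), backend, model); err != nil {
		fmt.Fprintf(os.Stderr, "warning: failed to preload %s: %v\n", model, err)
	}
}()
```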