
Conversation

@ericcurtin
Contributor

@ericcurtin ericcurtin commented Oct 20, 2025

So we can load and unload models

Copilot AI review requested due to automatic review settings October 20, 2025 09:46
@sourcery-ai
Contributor

sourcery-ai bot commented Oct 20, 2025

Reviewer's Guide

This PR introduces a dedicated "load" endpoint in the scheduler to warm up models without running inference, defines corresponding request/response types, integrates a WarmupModel flow into the desktop CLI (invoked via --detach) with tests, updates documentation to reflect the new mode, and includes minor formatting cleanups.

Sequence diagram for model warmup via new load endpoint

sequenceDiagram
actor User
participant CLI as "Desktop CLI"
participant Scheduler
participant Backend
User->>CLI: Run command with --detach
CLI->>Scheduler: POST /engines/{backend}/load { model }
Scheduler->>Backend: Wait for backend installation
Scheduler->>Backend: Load model into memory
Backend-->>Scheduler: Model loaded
Scheduler-->>CLI: { status: "loaded", message }
CLI-->>User: Success (model warmed up)

Class diagram for new LoadRequest and LoadResponse types

classDiagram
class LoadRequest {
  +string Model
}
class LoadResponse {
  +string Status
  +string Message
}
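For reference, a minimal Go sketch of what these types could look like in pkg/inference/scheduling/api.go; the JSON tags and comments are assumptions based on the diagrams above, not copied from the diff.

```go
package scheduling

// LoadRequest is the body of a load request; the JSON tag is assumed.
type LoadRequest struct {
	// Model is the name or ID of the model to load into memory.
	Model string `json:"model"`
}

// LoadResponse reports the outcome of a load request; JSON tags are assumed.
type LoadResponse struct {
	// Status is expected to be "loaded" on success, per the sequence diagram above.
	Status string `json:"status"`
	// Message carries a human-readable detail string.
	Message string `json:"message"`
}
```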

Class diagram for updated Scheduler and Client methods

classDiagram
class Scheduler {
  +Load(w http.ResponseWriter, r *http.Request)
}
class Client {
  +WarmupModel(ctx context.Context, backend, model string) error
}
Scheduler --> LoadRequest
Scheduler --> LoadResponse
Client --> Scheduler
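To make the client flow concrete, here is a hedged sketch of a WarmupModel-style call. It is written as a free function with an explicit base URL so it is self-contained; the real method hangs off Client, and the URL layout (taken from the sequence diagram), error wording, and use of http.DefaultClient are assumptions rather than the PR's implementation.

```go
package desktop

import (
	"bytes"
	"context"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

// loadRequest mirrors the scheduler's LoadRequest type; the JSON tag is assumed.
type loadRequest struct {
	Model string `json:"model"`
}

// WarmupModel posts a load request for the given backend and model and treats
// any non-200 response as an error.
func WarmupModel(ctx context.Context, baseURL, backend, model string) error {
	payload, err := json.Marshal(loadRequest{Model: model})
	if err != nil {
		return fmt.Errorf("marshal load request: %w", err)
	}
	url := fmt.Sprintf("%s/engines/%s/load", baseURL, backend) // path from the sequence diagram
	req, err := http.NewRequestWithContext(ctx, http.MethodPost, url, bytes.NewReader(payload))
	if err != nil {
		return fmt.Errorf("create load request: %w", err)
	}
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return fmt.Errorf("post load request: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		body, _ := io.ReadAll(io.LimitReader(resp.Body, 4096)) // cap error bodies
		return fmt.Errorf("load failed with status %d: %s", resp.StatusCode, body)
	}
	return nil
}
```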

File-Level Changes

Add Load endpoint to scheduler for warming models
  • Register new POST /{backend}/load and /load routes
  • Implement Load handler: validate backend, parse JSON body, enforce installer readiness
  • Resolve model ID, invoke loader.load, release runner immediately, return JSON status
  Files: pkg/inference/scheduling/scheduler.go

Define API types for load requests and responses
  • Create LoadRequest struct with Model field
  • Create LoadResponse struct with Status and Message
  Files: pkg/inference/scheduling/api.go

Extend desktop CLI to support model warmup (detached mode)
  • Add Client.WarmupModel method to marshal request and POST to /load
  • Update run command to call WarmupModel on --detach instead of Chat
  • Add success and error tests for WarmupModel in desktop_test.go
  Files: cmd/cli/desktop/desktop.go, cmd/cli/desktop/desktop_test.go, cmd/cli/commands/run.go

Update CLI and curl documentation for new load endpoint
  • Add --detach example in run section of README
  • Insert curl usage for POST /load in Docker instructions
  Files: README.md

Miscellaneous formatting and whitespace cleanups
  • Trim trailing spaces and adjust indentation in loader.go
  • Fix indentation in scheduler_test.go
  Files: pkg/inference/scheduling/loader.go, pkg/inference/scheduling/scheduler_test.go


@ericcurtin ericcurtin marked this pull request as draft October 20, 2025 09:46
@gemini-code-assist
Contributor

Summary of Changes

Hello @ericcurtin, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the model management capabilities by introducing a dedicated API endpoint for loading models into memory without requiring an immediate inference request. This new /load endpoint is leveraged by the model-cli's --detach option, streamlining the process of pre-warming models. The changes also include necessary client-side integration, API request/response definitions, and updated documentation to guide users on this new functionality.

Highlights

  • New Load Endpoint: A dedicated API endpoint (/inference/{backend}/load and /inference/load) has been introduced to explicitly load models into memory without initiating an inference request.
  • Updated Detach Mode: The model-cli run --detach command now utilizes this new /load endpoint, streamlining the process of pre-warming models more efficiently.
  • Documentation Updates: The README.md file has been updated to include examples for using the model-cli run --detach command and making direct curl requests to the new /load API endpoint.

Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey there - I've reviewed your changes - here's some feedback:

  • Add unit tests for the new Load handler to cover both success and failure branches (e.g., invalid backend, missing model, oversized request body).
  • Consider returning structured JSON error payloads in the Load endpoint instead of plain text to keep error responses consistent across your API.

## Individual Comments

### Comment 1
<location> `pkg/inference/scheduling/scheduler.go:451` </location>
<code_context>
+		return
+	}
+
+	body, err := io.ReadAll(http.MaxBytesReader(w, r.Body, maximumOpenAIInferenceRequestSize))
+	if err != nil {
+		if _, ok := err.(*http.MaxBytesError); ok {
</code_context>

<issue_to_address>
**suggestion:** Consider logging or surfacing the actual error for unknown read failures.

For non-MaxBytesError failures, logging or returning the specific error will improve troubleshooting and support.

Suggested implementation:

```golang
		if _, ok := err.(*http.MaxBytesError); ok {
			http.Error(w, "request too large", http.StatusBadRequest)
		} else {
			// Log the actual error for troubleshooting
			log.Printf("error reading request body: %v", err)
			http.Error(w, "error reading request body: "+err.Error(), http.StatusInternalServerError)
		}
		return

```

Make sure the `log` package is imported at the top of your file:
```go
import "log"
```
If you do not want to expose the error message to the client, you can remove `err.Error()` from the `http.Error` call and only log it.
</issue_to_address>
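Stepping back from the specific suggestion, the pattern under discussion (a size-limited body read that distinguishes an oversized payload from other read failures and logs the latter) can be sketched generically as below; the function name, limit constant, and response wording are placeholders, not the scheduler's actual code.

```go
package scheduling

import (
	"errors"
	"io"
	"log"
	"net/http"
)

const maxLoadRequestSize = 1 << 20 // placeholder limit (1 MiB)

// readLimitedBody reads the request body up to maxLoadRequestSize. Oversized
// payloads get a 400; any other read failure is logged and returned as a 500.
func readLimitedBody(w http.ResponseWriter, r *http.Request) ([]byte, bool) {
	body, err := io.ReadAll(http.MaxBytesReader(w, r.Body, maxLoadRequestSize))
	if err != nil {
		var maxErr *http.MaxBytesError
		if errors.As(err, &maxErr) {
			http.Error(w, "request too large", http.StatusBadRequest)
		} else {
			log.Printf("error reading request body: %v", err)
			http.Error(w, "error reading request body", http.StatusInternalServerError)
		}
		return nil, false
	}
	return body, true
}
```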

### Comment 2
<location> `pkg/inference/scheduling/scheduler.go:467` </location>
<code_context>
+		return
+	}
+
+	if loadRequest.Model == "" {
+		http.Error(w, "model name is required", http.StatusBadRequest)
+		return
</code_context>

<issue_to_address>
**suggestion (bug_risk):** Check for whitespace-only model names.

Trim whitespace from the model name and ensure it is not only spaces to prevent invalid input from being accepted.

Suggested implementation:

```golang
+	if strings.TrimSpace(loadRequest.Model) == "" {
+		http.Error(w, "model name is required", http.StatusBadRequest)
+		return

```

```golang
import (
	"encoding/json"
	"io"
	"net/http"
	"strings"

```
</issue_to_address>

### Comment 3
<location> `cmd/cli/desktop/desktop.go:844-847` </location>
<code_context>
+	}
+	defer resp.Body.Close()
+
+	if resp.StatusCode != http.StatusOK {
+		body, _ := io.ReadAll(resp.Body)
+		return fmt.Errorf("load failed with status %d: %s", resp.StatusCode, body)
</code_context>

<issue_to_address>
**suggestion (bug_risk):** Handle non-JSON error responses gracefully.

Limit the size when reading error response bodies and handle invalid UTF-8 to prevent issues with large or malformed payloads.

```suggestion
	if resp.StatusCode != http.StatusOK {
		const maxErrBody = 4096
		body, _ := io.ReadAll(io.LimitReader(resp.Body, maxErrBody))
		// Ensure valid UTF-8, replace invalid bytes
		errMsg := string(body)
		if !utf8.ValidString(errMsg) {
			errMsg = string(bytes.Runes(body))
		}
		return fmt.Errorf("load failed with status %d: %s", resp.StatusCode, errMsg)
	}
```
</issue_to_address>


Contributor

Copilot AI left a comment


Pull Request Overview

This PR introduces a new /load endpoint for preloading models into memory without performing inference, and updates the --detach flag to use this dedicated endpoint instead of making a minimal chat request.

Key Changes:

  • Added new /load HTTP endpoint for explicit model loading
  • Replaced --detach implementation to use WarmupModel() instead of Chat()
  • Added comprehensive test coverage for the new warmup functionality

Reviewed Changes

Copilot reviewed 6 out of 8 changed files in this pull request and generated 4 comments.

Summary per file:
  • pkg/inference/scheduling/scheduler.go: Implements the new Load() handler and registers /load routes
  • pkg/inference/scheduling/api.go: Defines LoadRequest and LoadResponse data structures
  • cmd/cli/desktop/desktop.go: Implements WarmupModel() client method for the load endpoint
  • cmd/cli/desktop/desktop_test.go: Adds unit tests for WarmupModel() success and error cases
  • cmd/cli/commands/run.go: Updates --detach to call WarmupModel() instead of Chat()
  • README.md: Documents the new load endpoint and detach mode usage


Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new /load endpoint to pre-warm models and updates the run --detach CLI command to use it. The implementation is well-structured, including changes to the API, server-side scheduler, client-side logic, documentation, and tests. My review focuses on improving error handling and code maintainability. I've identified a few medium-severity issues related to an ignored error, duplicated code, and inconsistent error wrapping that would be beneficial to address.

@ericcurtin ericcurtin force-pushed the load-model-into-memory branch from f6278e3 to 167b136 Compare October 27, 2025 11:03
@ericcurtin ericcurtin changed the title from "Add load endpoint and update --detach to use it" to "Add /api/generate endpoint for model loading and unloading" Oct 27, 2025
@ericcurtin ericcurtin force-pushed the load-model-into-memory branch 2 times, most recently from 66d0b6a to 858bfb4 Compare October 27, 2025 11:22
@ericcurtin ericcurtin marked this pull request as ready for review October 27, 2025 11:25
@ericcurtin ericcurtin force-pushed the load-model-into-memory branch from 858bfb4 to 9c5e958 Compare October 27, 2025 11:25
Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey there - I've reviewed your changes and they look great!



@ericcurtin ericcurtin force-pushed the load-model-into-memory branch 5 times, most recently from c671ac4 to c58ceb8 Compare October 27, 2025 22:05
@ericcurtin
Contributor Author

@doringeman @ilopezluna this also includes a vLLM optimization: the model is loaded in the background as soon as you execute "docker model run", while you are typing the prompt

So we can load and unload models

Signed-off-by: Eric Curtin <[email protected]>
@ericcurtin ericcurtin force-pushed the load-model-into-memory branch from c58ceb8 to 50f4324 Compare October 28, 2025 09:45
Contributor

@doringeman doringeman left a comment


I suggest adding a new endpoint /load to only load the model without also running inference.


const cmdArgs = "MODEL"
c := &cobra.Command{
Use: "stop " + cmdArgs,
Contributor

Why do we need this instead of using unload?

Contributor

I've found myself using docker model stop in the past. I think it's exactly the same behavior as docker model unload; to me, docker model stop is more intuitive.
If the goal of the stop command is to unload a model, maybe we could use an alias, so docker model stop and docker model unload do the same thing?

Contributor

Yes, an alias would be the way to go for this IMO.

Contributor Author

Agree, let's alias; we might as well create "docker model load" also.
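For illustration, a minimal cobra sketch of the alias idea; the Aliases field is the standard cobra mechanism, while the command constructor and the unloadModel helper are placeholders, not this repository's code.

```go
package commands

import (
	"context"
	"fmt"

	"github.com/spf13/cobra"
)

// unloadModel stands in for the real client call that unloads a model.
func unloadModel(ctx context.Context, model string) error {
	fmt.Println("unloading", model)
	return nil
}

// newUnloadCommand exposes "unload" with "stop" as an alias so both commands
// behave identically; "load" could be aliased the same way.
func newUnloadCommand() *cobra.Command {
	return &cobra.Command{
		Use:     "unload MODEL",
		Aliases: []string{"stop"},
		Args:    cobra.ExactArgs(1),
		RunE: func(cmd *cobra.Command, args []string) error {
			return unloadModel(cmd.Context(), args[0])
		},
	}
}
```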

Comment on lines +42 to +49
// handleNotRunningError checks if the error indicates that the model was not running
// and returns a user-friendly message in that case
func handleNotRunningError(err error) error {
// For now, just return the error as-is
// This function can be expanded to handle specific "model not running" errors in the future
return err
}

Contributor

This is already done in handleClientError.

Suggested change
// handleNotRunningError checks if the error indicates that the model was not running
// and returns a user-friendly message in that case
func handleNotRunningError(err error) error {
// For now, just return the error as-is
// This function can be expanded to handle specific "model not running" errors in the future
return err
}

// HandleGenerate handles /api/generate requests
// If prompt is empty, loads the model into memory
// If prompt is empty and keep_alive is 0, unloads the model
func (s *Scheduler) HandleGenerate(w http.ResponseWriter, r *http.Request) {
Contributor

Why isn't this under inference.InferencePrefix like all the other scheduler endpoints?
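To make the described semantics concrete, a small self-contained sketch that exercises both cases follows; the host, port, model name, and JSON field names (model, keep_alive) are assumptions modeled on the comment above, not taken from the diff.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// postGenerate sends a minimal /api/generate request; host and port are placeholders.
func postGenerate(body string) error {
	resp, err := http.Post("http://localhost:12434/api/generate",
		"application/json", bytes.NewBufferString(body))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
	return nil
}

func main() {
	// Empty prompt: the model is loaded into memory.
	_ = postGenerate(`{"model": "ai/smollm2"}`)
	// Empty prompt plus keep_alive of 0: the model is unloaded.
	_ = postGenerate(`{"model": "ai/smollm2", "keep_alive": 0}`)
}
```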

Comment on lines +53 to +73
// SanitizeModelNameForCommand sanitizes a model name to be safe for use in command arguments.
// This prevents command injection by allowing only safe characters in model names.
func SanitizeModelNameForCommand(modelName string) string {
if modelName == "" {
return ""
}

// Only allow alphanumeric characters, dots, hyphens, underscores, and forward slashes
// This covers common model name formats like "user/model-name", "model.v1", etc.
re := regexp.MustCompile(`^[a-zA-Z0-9._/-]+$`)
if !re.MatchString(modelName) {
// If the model name contains unsafe characters, clean it
// Replace unsafe characters with underscores
safeModelName := regexp.MustCompile(`[^a-zA-Z0-9._/-]`).ReplaceAllString(modelName, "_")
// Ensure it doesn't start or end with unsafe sequences like ".." that could be path traversal
safeModelName = strings.ReplaceAll(safeModelName, "..", "_")
return safeModelName
}

return modelName
}
Contributor

Dead code.

Suggested change
// SanitizeModelNameForCommand sanitizes a model name to be safe for use in command arguments.
// This prevents command injection by allowing only safe characters in model names.
func SanitizeModelNameForCommand(modelName string) string {
if modelName == "" {
return ""
}
// Only allow alphanumeric characters, dots, hyphens, underscores, and forward slashes
// This covers common model name formats like "user/model-name", "model.v1", etc.
re := regexp.MustCompile(`^[a-zA-Z0-9._/-]+$`)
if !re.MatchString(modelName) {
// If the model name contains unsafe characters, clean it
// Replace unsafe characters with underscores
safeModelName := regexp.MustCompile(`[^a-zA-Z0-9._/-]`).ReplaceAllString(modelName, "_")
// Ensure it doesn't start or end with unsafe sequences like ".." that could be path traversal
safeModelName = strings.ReplaceAll(safeModelName, "..", "_")
return safeModelName
}
return modelName
}

} else {
// Load the model by requesting a minimal inference
// This will trigger the loading mechanism in the loader
backend := s.defaultBackend
Contributor

Why does this work for the default backend only?

// This makes sure the model is loaded when the user types their first prompt
go func() {
_ = desktopClient.Chat(model, "", nil, func(content string) {
// Silently preload the model - discard output
Contributor

The request/response will still be recorded.
Why don't we just load the model without running inference?
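A minimal sketch of that alternative, assuming a client with the WarmupModel signature shown earlier in this PR; the interface, function name, and logging are illustrative placeholders rather than the actual run.go change.

```go
package commands

import (
	"context"
	"log"
)

// warmuper is a stand-in for the desktop client; only the method shape matters here.
type warmuper interface {
	WarmupModel(ctx context.Context, backend, model string) error
}

// preloadInBackground warms the model without running inference, so it is
// already loading while the user types their first prompt.
func preloadInBackground(ctx context.Context, c warmuper, backend, model string) {
	go func() {
		if err := c.WarmupModel(ctx, backend, model); err != nil {
			log.Printf("background model warmup failed: %v", err)
		}
	}()
}
```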
