feat: add support for OpenAI-compatible third-party provider #342
Conversation
Summary of Changes: Hello @ChinaSiro, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Code Review
This pull request introduces a well-implemented and robust adapter for OpenAI-compatible APIs. The code is comprehensive, covering various features like streaming, function calling, and multimodal inputs. The inclusion of compatibility workarounds for non-standard third-party providers is a thoughtful addition. The unit tests are thorough, and the example usage is clear and helpful. I have one suggestion to improve performance in the content conversion logic.
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
This update introduces two additional environment variables:

The API design intentionally keeps the same style as the existing Gemini adapter:

```go
model, err := gemini.NewModel(ctx, "gemini-2.5-flash", &genai.ClientConfig{
	APIKey: os.Getenv("GOOGLE_API_KEY"),
})
```

When switching to the OpenAI provider, developers only need to replace it with:

```go
model, err := openai.NewModel(ctx, "any-model", &openai.ClientConfig{
	APIKey: os.Getenv("OPENAI_API_KEY"),
})
```

This keeps the usage consistent with the Gemini adapter and minimizes migration cost.
Tested function calling examples.
```go
if config.BaseURL == "" {
```
Maybe make BaseURL optional?
cpunion left a comment:
It would be better to make BaseURL optional.
Thanks for testing this!
Since this implementation is primarily designed to be OpenAI-compatible, using an explicit BaseURL configuration is more appropriate. For that reason, we don't provide a default fallback when the value is missing.
Thanks for your great work! I just tested OpenRouter and OpenAI models, it works!
I understand. BTW, some of the latest models (e.g. gpt-5-codex) only support the Responses API. Would you plan to support both the Chat Completions and Responses APIs, perhaps switched with a config field? There is another PR using the Responses API: #242
Thanks for the suggestion! It’s true that some newer models (e.g., gpt-5-codex) only support the Responses API. But currently only OpenAI fully supports it, while most other providers — including local LLMs and third-party services — still rely on the traditional Chat Completions API. Adding Responses logic directly into this file could introduce extra complexity and potentially affect compatibility with those providers. To keep things clean, I think it’s better handled in a separate follow-up. My plan:
@ChinaSiro I have tested it on a demo, but it stopped at function calling in streaming mode; it may need a richer demo and an integration test. I attach my demo code as an example (AI generated); you can run it with these options:
I just tested it (you need to set OPENROUTER_API_KEY, OPENAI_API_KEY, TAVILY_API_KEY, and GEMINI_API_KEY (or GOOGLE_API_KEY)):

```shell
# Works
$ go run . -prompt "search TESLA stock price, and get weather of New York" -model "openai:gpt-5-mini"
# Doesn't work
$ go run . -prompt "search TESLA stock price, and get weather of New York" -model "openai:gpt-5-mini" -stream
# Works
$ go run . -prompt "search TESLA stock price, and get weather of New York" -model "openrouter:openai/gpt-5-mini"
# Doesn't work
$ go run . -prompt "search TESLA stock price, and get weather of New York" -model "openrouter:openai/gpt-5-mini" -stream
# Works
$ go run . -prompt "search TESLA stock price, and get weather of New York" -model "gemini:gemini-2.5-flash"
# Works
$ go run . -prompt "search TESLA stock price, and get weather of New York" -model "gemini:gemini-2.5-flash" -stream
```

BTW: I can review and do some tests, but I have no write permissions.
Fixes #341
Key requirements: