[Bugfix] lmcache server points to wrong file in entrypoint #730
Conversation
Summary of Changes: Hello @Senne-Mennes, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request resolves a bug in the Helm chart.
Code Review
This pull request addresses a bug where the lmcache server entrypoint in the Helm chart pointed to an incorrect file depending on the image tag. The change simplifies the deployment-cache-server.yaml by removing the conditional logic that selected different server executables. Now, it consistently uses /opt/venv/bin/lmcache_server. This not only fixes the bug but also improves maintainability by removing unnecessary complexity and path inconsistencies from the template. The change is correct and a good improvement.
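For illustration only (the field layout and surrounding values here are assumptions, not the actual chart contents), the simplified container spec in deployment-cache-server.yaml now amounts to something like:

```yaml
# Hypothetical excerpt of deployment-cache-server.yaml after this PR:
# the command points unconditionally at the new console-script location,
# with no branching on the image tag.
containers:
  - name: lmcache-server
    image: lmcache/vllm-openai:latest  # the tag comes from chart values in practice
    command: ["/opt/venv/bin/lmcache_server"]
```

Dropping the tag-based conditional means every supported image is expected to ship the executable at the same path, which is what makes the template simpler to maintain.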
Hi, I don't see how this can stay compatible with versions before v3.0.0. Maybe add an if-else?
I tested which containers available on DockerHub still had the old "lmcache_experimental_server" and wrote an if-else based on that. I assume future container tags will remain vx.x.x or nightly-20xx-xx-xx and use the new "/opt/venv/bin/lmcache_server". The test results show the change happened between these containers:

Testing: lmcache/vllm-openai:2025-05-08-v1
✓ NEW PATH: /opt/venv/bin/lmcache_server EXISTS
✗ OLD CMD: 'lmcache_experimental_server' NOT in PATH

Testing: lmcache/vllm-openai:2025-05-05-v1
✗ NEW PATH: /opt/venv/bin/lmcache_server NOT FOUND
✓ OLD CMD: 'lmcache_experimental_server' found in PATH
Location: /usr/local/bin/lmcache_experimental_server

I've also included edge cases for the tags "test" and "vllm-cpu-env".
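The tag-based selection described above (which this PR ultimately dropped in favor of the single new path) could be sketched roughly like this. The cutoff dates come from the container tests above; the tag patterns and function name are assumptions, not the actual chart logic:

```shell
#!/bin/sh
# Hypothetical sketch of tag-based entrypoint selection.
# Per the tests above: 2025-05-05 images still had the old console script,
# 2025-05-08 images already shipped the new venv path.
lmcache_entrypoint() {
  case "$1" in
    # Date-stamped tags up to 2025-05-05 shipped the old console script.
    2025-0[1-4]-*|2025-05-0[1-5]-*)
      echo "lmcache_experimental_server"
      ;;
    # Everything else (vX.Y.Z, nightly-*, newer dates) uses the new path.
    *)
      echo "/opt/venv/bin/lmcache_server"
      ;;
  esac
}

lmcache_entrypoint "2025-05-05-v1"   # prints lmcache_experimental_server
lmcache_entrypoint "2025-05-08-v1"   # prints /opt/venv/bin/lmcache_server
```

This is the kind of conditional the reviewers decided was not worth keeping once it was confirmed that no one depends on the pre-v3.0.0 images.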
Signed-off-by: Senne-Mennes <[email protected]>
Hi, I asked the team and they said no one is using the previous version now. Can you just use "/opt/venv/bin/lmcache_server"? Sorry for the misleading information earlier.
Signed-off-by: Senne-Mennes <[email protected]>
Hi, it just uses "/opt/venv/bin/lmcache_server" now.
zerofishnoodles left a comment:
LGTM
The bug and its fix are described in the issue below:
FIX #676
Sign off your commits by using -s when doing git commit, and classify PRs with prefixes such as [Bugfix], [Feat], and [CI].

Detailed Checklist (Click to Expand)
Thank you for your contribution to production-stack! Before submitting the pull request, please ensure the PR meets the following criteria. This helps us maintain the code quality and improve the efficiency of the review process.
PR Title and Classification
Please try to classify PRs for easy understanding of the type of changes. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following:

- [Bugfix] for bug fixes.
- [CI/Build] for build or continuous integration improvements.
- [Doc] for documentation fixes and improvements.
- [Feat] for new features in the cluster (e.g., autoscaling, disaggregated prefill, etc.).
- [Router] for changes to the vllm_router (e.g., routing algorithm, router observability, etc.).
- [Misc] for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.
Code Quality
The PR needs to meet the following code quality standards: use pre-commit to format your code (see README.md for installation).

DCO and Signed-off-by
When contributing changes to this project, you must agree to the DCO. Commits must include a Signed-off-by: header which certifies agreement with the terms of the DCO. Using -s with git commit will automatically add this header.

What to Expect for the Reviews
We aim to address all PRs in a timely manner. If no one reviews your PR within 5 days, please @-mention one of YuhanLiu11, Shaoting-Feng, or ApostaC.