Conversation


@b8zhong b8zhong commented Dec 5, 2025

#14510

2025-12-05 21:57:59 | INFO     | lmms_eval.loggers.evaluation_tracker:save_results_samples:287 - Saving samples to logs/__google__gemma-3-4b-it__/20251206_055548_samples_mmmu_val.jsonl
openai_compatible (model_version="google/gemma-3-4b-it",tp=1), gen_kwargs: (), limit: None, num_fewshot: None, batch_size: 64
| Tasks  |Version|Filter|n-shot| Metric |   |Value |   |Stderr|
|--------|------:|------|-----:|--------|---|-----:|---|------|
|mmmu_val|      0|none  |     0|mmmu_acc|↑  |0.3867|±  |   N/A|

Result
: {'results': {'mmmu_val': {'alias': 'mmmu_val', 'mmmu_acc,none': 0.38667, 'mmmu_acc_stderr,none': 'N/A', 'mmmu_acc_pass_at_k,none': [], 'mmmu_acc_pass_at_k_stderr,none': [], 'submission,none': [], 'submission_stderr,none': []}}, 'group_subtasks': {'mmmu_val': []}, 'configs': {'mmmu_val': {'task': 'mmmu_val', 'dataset_path': 'lmms-lab/MMMU', 'test_split': 'validation', 'full_docs': False, 'process_results_use_image': False, 'doc_to_visual': '<function mmmu_doc_to_visual at 0x712672700680>', 'doc_to_text': '<function mmmu_doc_to_text at 0x712672701580>', 'doc_to_target': 'answer', 'doc_to_messages': '<function mmmu_doc_to_messages at 0x712672702660>', 'process_results': '<function mmmu_process_results at 0x712672703880>', 'description': '', 'target_delimiter': ' ', 'fewshot_delimiter': '\n\n', 'num_fewshot': 0, 'metric_list': [{'metric': 'mmmu_acc', 'aggregation': '<function mmmu_aggregate_results at 0x712672720b80>', 'higher_is_better': True}], 'output_type': 'generate_until', 'generation_kwargs': {'max_new_tokens': 16, 'until': ['\n\n']}, 'repeats': 1, 'should_decontaminate': False, 'metadata': {'version': 0.0, 'interleaved_format': False}, 'lmms_eval_specific_kwargs': {'default': {'prompt_type': 'format', 'multiple_choice_prompt': "Answer with the option's letter from the given choices directly.", 'open_ended_prompt': 'Answer the question using a single word or phrase.'}, 'prompt_type': 'format', 'multiple_choice_prompt': "Answer with the option's letter from the given choices directly.", 'open_ended_prompt': 'Answer the question using a single word or phrase.'}}}, 'versions': {'mmmu_val': 0.0}, 'n-shot': {'mmmu_val': 0}, 'higher_is_better': {'mmmu_val': {'mmmu_acc': True}}, 'n-samples': {'mmmu_val': {'original': 900, 'effective': 900}}, 'config': {'model': 'openai_compatible', 'model_args': 'model_version="google/gemma-3-4b-it",tp=1', 'batch_size': '64', 'batch_sizes': [], 'device': None, 'use_cache': None, 'limit': None, 'bootstrap_iters': 100000, 'gen_kwargs': '', 'random_seed': 0, 'numpy_seed': 1234, 'torch_seed': 1234, 'fewshot_seed': 1234}, 'git_hash': 'ec7b2c16d', 'date': '20251206_055548', 'task_hashes': {'mmmu_val': '614600386b06b53646ff0656be64e14e0f33e49b2e8eb7e528d482ff0ba6e7ae'}, 'model_source': 'openai_compatible', 'model_name': '"google/gemma-3-4b-it"', 'model_name_sanitized': '__google__gemma-3-4b-it__', 'system_instruction': None, 'system_instruction_sha': None, 'fewshot_as_multiturn': False, 'chat_template': None, 'chat_template_sha': None, 'start_time': 7431120.422613234, 'end_time': 7431251.862152486, 'total_evaluation_time_seconds': '131.43953925184906'}
Model google/gemma-3-4b-it achieved accuracy: 0.3867
Cleaning up process 3841140
.
----------------------------------------------------------------------
Ran 1 test in 249.013s

OK
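
For reference, here is a rough sketch of how a run like the one above could be reproduced, assuming lmms-eval's lm-evaluation-harness-style CLI (flags inferred from the log header; the exact invocation used by the CI test may differ):

import subprocess

# Hypothetical reproduction of the logged evaluation; the flags mirror the
# log header above (model, model_args, tasks, batch_size) and assume the
# harness-style CLI that lmms-eval exposes.
subprocess.run(
    [
        "python", "-m", "lmms_eval",
        "--model", "openai_compatible",
        "--model_args", 'model_version="google/gemma-3-4b-it",tp=1',
        "--tasks", "mmmu_val",
        "--batch_size", "64",
        "--output_path", "logs",
        "--log_samples",
    ],
    check=True,
)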

@github-actions github-actions bot added the Multi-modal label Dec 5, 2025
@gemini-code-assist gemini-code-assist bot commented

Summary of Changes

Hello @b8zhong, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request speeds up the Continuous Integration (CI) pipeline for Vision-Language Models (VLMs) by increasing the evaluation batch size and swapping the large VLM used for testing for a more lightweight alternative, which in turn required updating the expected test durations.

Highlights

  • Increased Batch Size: The batch size for VLM evaluation tests has been doubled from 32 to 64, which should improve throughput and reduce overall test execution time.
  • Model Swap for Testing: The VLM model used for testing has been switched from 'google/gemma-3-27b-it' to the smaller 'google/gemma-3-4b-it', along with an adjustment to its expected accuracy (roughly sketched after this list).
  • Updated Test Durations: The expected run times for the 'test_vlm_models.py' suite have been updated in the test runner configuration, reflecting the anticipated speed improvements from the other changes.
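
In rough terms, the change amounts to something like the sketch below. The identifiers and the accuracy bound here are hypothetical, chosen for illustration only; the real names and values live in test_vlm_models.py and in the diff further down.

# Hypothetical sketch of the updated VLM CI test parameters; identifiers and
# the threshold value are illustrative, not copied from test_vlm_models.py.
MODEL = "google/gemma-3-4b-it"     # was "google/gemma-3-27b-it"
BATCH_SIZE = 64                    # was 32
ACCURACY_LOWER_BOUND = 0.35        # hypothetical bound below the observed 0.3867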

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request aims to speed up the VLM CI by increasing the batch size for tests and switching to a smaller model. The changes are logical and directly support this goal. I have one suggestion to improve maintainability by replacing a duplicated magic number with a constant.

  tp = 1
  tasks = "mmmu_val"
- batch_size = 32
+ batch_size = 64

Severity: medium

The batch size is now hardcoded to 64 in two places: here as batch_size, and in _run_vlm_mmmu_test for the --cuda-graph-max-bs argument (line 140). This could lead to inconsistencies if one is updated but not the other. To improve maintainability, consider defining this value as a constant at the module level (e.g., VLM_BATCH_SIZE = 64) and reusing it in both locations. This would ensure they always stay in sync.
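
A minimal sketch of the suggested refactor, assuming the surrounding structure of test_vlm_models.py (the helper name _run_vlm_mmmu_test comes from the comment above; the signature and remaining details are illustrative):

# Module-level constant keeps the eval batch size and the server's
# --cuda-graph-max-bs argument in sync, per the review suggestion.
VLM_BATCH_SIZE = 64

batch_size = VLM_BATCH_SIZE  # evaluation batch size used by the test

def _run_vlm_mmmu_test(model: str) -> None:
    # Illustrative: build the server launch flags from the same constant.
    other_args = ["--cuda-graph-max-bs", str(VLM_BATCH_SIZE)]
    ...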

@b8zhong b8zhong commented Dec 6, 2025

/tag-and-rerun-ci

@github-actions github-actions bot added the run-ci label Dec 6, 2025
@Kangyan-Zhou Kangyan-Zhou merged commit 3b47973 into sgl-project:main Dec 7, 2025
84 of 88 checks passed
eternally-z pushed a commit to AniZpZ/sglang that referenced this pull request Dec 8, 2025
chenzongyao200127 pushed a commit to openanolis/sglang that referenced this pull request Dec 8, 2025
@b8zhong b8zhong deleted the tiny-vlm-test-speedup branch December 9, 2025 00:08
Kevin-XiongC pushed a commit to novitalabs/sglang that referenced this pull request Dec 9, 2025
JustinTong0323 pushed a commit to JustinTong0323/sglang that referenced this pull request Dec 9, 2025