
Support for Qwen3-VL and up-to-date MLLM compression documentation #1056

@FDH21

Description


I'm trying to apply auto-round to Qwen3-VL (and similar multimodal LLMs), but the current documentation under auto_round/compressors/mllm/README.md appears outdated or incomplete.

Could you please:

1. Clarify whether Qwen3-VL (or the Qwen-VL series) is supported or planned for support?
2. Update the MLLM README with current instructions, including how to add custom multimodal models?
3. Provide a minimal working example for quantizing a Qwen-VL model, if possible? (A rough sketch of what I have in mind is below.)
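For reference, here is the kind of minimal example I'm hoping the README could document. It is only a sketch adapted from the Qwen2-VL style recipe shown elsewhere in the repo; I'm assuming `AutoRoundMLLM` and the `quantize()`/`save_quantized()` calls still have this interface in the current release, and that a Qwen3-VL model would follow the same pattern once supported. Please correct the class/model names if the actual API differs.

```python
# Sketch only: assumes AutoRoundMLLM exposes this interface and that the
# Qwen-VL checkpoint loads via transformers with trust_remote_code=True.
from auto_round import AutoRoundMLLM
from transformers import AutoModelForCausalLM, AutoProcessor, AutoTokenizer

model_name = "Qwen/Qwen2-VL-2B-Instruct"  # placeholder; ideally a Qwen3-VL checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name, trust_remote_code=True, torch_dtype="auto"
)

# Typical W4 settings; exact keyword names are an assumption on my part.
autoround = AutoRoundMLLM(model, tokenizer, processor, bits=4, group_size=128, sym=True)
autoround.quantize()
autoround.save_quantized("./tmp_autoround", format="auto_round")
```

Having a confirmed version of this (plus the steps needed to register a custom multimodal model) in the MLLM README would resolve this issue for me.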
