Commit 4f7d3ad

Merge branch 'main' into vas-bert-attn-refactors

2 parents f1ef07a + d9d7f6a

File tree: 182 files changed, +5899 −1739 lines

.github/workflows/model_jobs.yml

Lines changed: 15 additions & 8 deletions
```diff
@@ -128,28 +128,35 @@ jobs:
         echo "machine_type=$machine_type" >> $GITHUB_ENV
         echo "machine_type=$machine_type" >> $GITHUB_OUTPUT

+      - name: Create report directory if it doesn't exist
+        shell: bash
+        run: |
+          mkdir -p /transformers/reports/${{ env.machine_type }}_${{ inputs.report_name_prefix }}_${{ env.matrix_folders }}_test_reports
+          echo "dummy" > /transformers/reports/${{ env.machine_type }}_${{ inputs.report_name_prefix }}_${{ env.matrix_folders }}_test_reports/dummy.txt
+          ls -la /transformers/reports/${{ env.machine_type }}_${{ inputs.report_name_prefix }}_${{ env.matrix_folders }}_test_reports
+
       - name: Run all tests on GPU
         working-directory: /transformers
-        run: python3 -m pytest -rsfE -v --make-reports=${{ env.machine_type }}_${{ inputs.report_name_prefix }}_${{ matrix.folders }}_test_reports tests/${{ matrix.folders }}
+        run: |
+          PATCH_TESTING_METHODS_TO_COLLECT_OUTPUTS=yes _PATCHED_TESTING_METHODS_OUTPUT_DIR=/transformers/reports/${{ env.machine_type }}_${{ inputs.report_name_prefix }}_${{ env.matrix_folders }}_test_reports python3 -m pytest -rsfE -v --make-reports=${{ env.machine_type }}_${{ inputs.report_name_prefix }}_${{ env.matrix_folders }}_test_reports tests/${{ matrix.folders }}

       - name: Failure short reports
         if: ${{ failure() }}
         continue-on-error: true
-        run: cat /transformers/reports/${{ env.machine_type }}_${{ inputs.report_name_prefix }}_${{ matrix.folders }}_test_reports/failures_short.txt
+        run: cat /transformers/reports/${{ env.machine_type }}_${{ inputs.report_name_prefix }}_${{ env.matrix_folders }}_test_reports/failures_short.txt

-      - name: Run test
-        shell: bash
+      - name: Captured information
+        if: ${{ failure() }}
+        continue-on-error: true
         run: |
-          mkdir -p /transformers/reports/${{ env.machine_type }}_${{ inputs.report_name_prefix }}_${{ matrix.folders }}_test_reports
-          echo "hello" > /transformers/reports/${{ env.machine_type }}_${{ inputs.report_name_prefix }}_${{ matrix.folders }}_test_reports/hello.txt
-          echo "${{ env.machine_type }}_${{ inputs.report_name_prefix }}_${{ matrix.folders }}_test_reports"
+          cat /transformers/reports/${{ env.machine_type }}_${{ inputs.report_name_prefix }}_${{ env.matrix_folders }}_test_reports/captured_info.txt

       - name: "Test suite reports artifacts: ${{ env.machine_type }}_${{ inputs.report_name_prefix }}_${{ env.matrix_folders }}_test_reports"
         if: ${{ always() }}
         uses: actions/upload-artifact@v4
         with:
           name: ${{ env.machine_type }}_${{ inputs.report_name_prefix }}_${{ env.matrix_folders }}_test_reports
-          path: /transformers/reports/${{ env.machine_type }}_${{ inputs.report_name_prefix }}_${{ matrix.folders }}_test_reports
+          path: /transformers/reports/${{ env.machine_type }}_${{ inputs.report_name_prefix }}_${{ env.matrix_folders }}_test_reports

   collated_reports:
     name: Collated Reports
```

docs/source/en/_toctree.yml

Lines changed: 4 additions & 0 deletions
```diff
@@ -485,6 +485,8 @@
       title: FLAN-UL2
     - local: model_doc/flaubert
       title: FlauBERT
+    - local: model_doc/flex_olmo
+      title: FlexOlmo
     - local: model_doc/fnet
       title: FNet
     - local: model_doc/fsmt
@@ -553,6 +555,8 @@
       title: LED
     - local: model_doc/lfm2
       title: LFM2
+    - local: model_doc/lfm2_vl
+      title: LFM2-VL
     - local: model_doc/llama
       title: LLaMA
     - local: model_doc/llama2
```
docs/source/en/model_doc/flex_olmo.md

Lines changed: 139 additions & 0 deletions (new file; contents shown below)
<!--Copyright 2025 the HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->
*This model was released on 2025-07-09 and added to Hugging Face Transformers on 2025-09-15.*

<div style="float: right;">
    <div class="flex flex-wrap space-x-1">
        <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
        <img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat">
        <img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
    </div>
</div>

# FlexOlmo

[FlexOlmo](https://huggingface.co/papers/2507.07024) is a new class of language models (LMs) that supports (1) distributed training without data sharing, where different model parameters are independently trained on closed datasets, and (2) data-flexible inference, where these parameters and their associated data can be flexibly included in or excluded from model inference with no further training. FlexOlmo employs a mixture-of-experts (MoE) architecture in which each expert is trained independently on a closed dataset and later integrated through a new domain-informed routing, without any joint training. FlexOlmo is trained on FlexMix, a curated corpus comprising publicly available datasets alongside seven domain-specific sets that serve as realistic approximations of closed data.
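
The include-or-exclude behavior comes down to masking experts in the router. The snippet below is a minimal, hypothetical sketch of that idea, not FlexOlmo's actual implementation; `DomainRoutedMoE`, `domain_embeddings`, and `active_experts` are illustrative names.

```py
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainRoutedMoE(nn.Module):
    """Toy MoE layer: router rows are frozen per-domain embeddings rather
    than jointly trained weights, and experts can be opted in or out."""

    def __init__(self, hidden_size, ffn_size, domain_embeddings, top_k=2):
        super().__init__()
        num_experts = domain_embeddings.shape[0]
        # Each expert stands in for an FFN trained independently on one closed dataset.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden_size, ffn_size), nn.GELU(), nn.Linear(ffn_size, hidden_size))
            for _ in range(num_experts)
        )
        # Domain-informed routing: the router is not jointly trained.
        self.router = nn.Parameter(domain_embeddings.clone(), requires_grad=False)
        self.top_k = top_k

    def forward(self, x, active_experts=None):
        # x: (num_tokens, hidden_size); active_experts: optional list of expert indices
        logits = x @ self.router.T  # (num_tokens, num_experts)
        if active_experts is not None:
            # Exclude experts (and thus their training data) at inference time.
            mask = torch.full_like(logits, float("-inf"))
            mask[:, active_experts] = 0.0
            logits = logits + mask
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e in idx[:, k].unique():
                sel = idx[:, k] == e
                out[sel] += weights[sel, k, None] * self.experts[e](x[sel])
        return out
```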

You can find all the original FlexOlmo checkpoints under the [FlexOlmo](https://huggingface.co/collections/allenai/flexolmo-68471177a386b6e20a54c55f) collection.

> [!TIP]
> Click on the FlexOlmo models in the right sidebar for more examples of how to apply FlexOlmo to different language tasks.

The example below demonstrates how to generate text with [`Pipeline`], [`AutoModel`], and from the command line.

<hfoptions id="usage">
<hfoption id="Pipeline">

```py
import torch
from transformers import pipeline

pipe = pipeline(
    task="text-generation",
    model="allenai/FlexOlmo-7x7B-1T",
    dtype=torch.bfloat16,
    device=0,
)

result = pipe("Plants create energy through a process known as")
print(result)
```

</hfoption>
<hfoption id="AutoModel">

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allenai/FlexOlmo-7x7B-1T")

model = AutoModelForCausalLM.from_pretrained(
    "allenai/FlexOlmo-7x7B-1T",
    dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="sdpa",
)
input_ids = tokenizer("Plants create energy through a process known as", return_tensors="pt").to(model.device)

output = model.generate(**input_ids, max_length=50, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

</hfoption>
<hfoption id="transformers CLI">

```bash
echo -e "Plants create energy through a process known as" | transformers-cli run --task text-generation --model allenai/FlexOlmo-7x7B-1T --device 0
```

</hfoption>
</hfoptions>

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.

The example below uses [torchao](../quantization/torchao) to quantize only the weights to 4-bits.

```py
# pip install torchao
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig

torchao_config = TorchAoConfig(
    "int4_weight_only",
    group_size=128,
)

tokenizer = AutoTokenizer.from_pretrained("allenai/FlexOlmo-7x7B-1T")

model = AutoModelForCausalLM.from_pretrained(
    "allenai/FlexOlmo-7x7B-1T",
    quantization_config=torchao_config,
    dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="sdpa",
)
input_ids = tokenizer("Plants create energy through a process known as", return_tensors="pt").to(model.device)

output = model.generate(**input_ids, max_length=50, cache_implementation="static")
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

## FlexOlmoConfig

[[autodoc]] FlexOlmoConfig

## FlexOlmoForCausalLM

[[autodoc]] FlexOlmoForCausalLM

## FlexOlmoModel

[[autodoc]] FlexOlmoModel
    - forward

## FlexOlmoPreTrainedModel

[[autodoc]] FlexOlmoPreTrainedModel
    - forward
docs/source/en/model_doc/lfm2_vl.md

Lines changed: 96 additions & 0 deletions (new file; contents shown below)
<!--Copyright 2025 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

<div class="flex flex-wrap space-x-1">
    <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
</div>

# LFM2-VL

## Overview

[LFM2-VL](https://www.liquid.ai/blog/lfm2-vl-efficient-vision-language-models) is the first series of vision-language foundation models developed by [Liquid AI](https://liquid.ai/). These multimodal models are designed for low-latency, device-aware deployment. LFM2-VL extends the LFM2 family of open-weight Liquid Foundation Models (LFMs) into the vision-language space, supporting both text and image inputs at variable resolutions.

## Architecture

LFM2-VL consists of three main components: a language model backbone, a vision encoder, and a multimodal projector. LFM2-VL builds on the LFM2 backbone, inheriting from either LFM2-1.2B (for LFM2-VL-1.6B) or LFM2-350M (for LFM2-VL-450M). For the vision tower, LFM2-VL uses SigLIP2 NaFlex encoders to convert input images into token sequences. Two variants are implemented:
* Shape-optimized (400M) for more fine-grained vision capabilities, used in LFM2-VL-1.6B
* Base (86M) for fast image processing, used in LFM2-VL-450M

The encoder processes images at their native resolution up to 512×512 pixels, handling smaller images efficiently without upscaling and supporting non-standard aspect ratios without distortion. Larger images are split into non-overlapping 512×512 patches, preserving detail. In LFM2-VL-1.6B, the model also receives a thumbnail (a small, downscaled version of the original image capturing the overall scene) to enhance global context understanding and alignment. Special tokens mark each patch's position and indicate the thumbnail's start. The multimodal connector is a 2-layer MLP with pixel unshuffle to reduce the image token count.
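
To illustrate how pixel unshuffle trades channel depth for token count ahead of the MLP, here is a minimal, hypothetical sketch; the shapes and the `PixelUnshuffleConnector` name are assumptions, not the actual LFM2-VL implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelUnshuffleConnector(nn.Module):
    """Fold each 2x2 spatial neighborhood of vision tokens into the channel
    dimension (4x fewer tokens), then project to the language model width."""

    def __init__(self, vision_dim, text_dim, factor=2):
        super().__init__()
        self.factor = factor
        self.mlp = nn.Sequential(
            nn.Linear(vision_dim * factor * factor, text_dim),
            nn.GELU(),
            nn.Linear(text_dim, text_dim),
        )

    def forward(self, tokens, h, w):
        # tokens: (batch, h*w, vision_dim) from the vision encoder
        b, _, c = tokens.shape
        x = tokens.transpose(1, 2).reshape(b, c, h, w)
        x = F.pixel_unshuffle(x, self.factor)  # (b, c*f*f, h/f, w/f)
        x = x.flatten(2).transpose(1, 2)       # (b, h*w/f^2, c*f*f)
        return self.mlp(x)                     # image tokens at LM width

# Example: 1024 tokens from a 32x32 grid become 256 connector outputs.
connector = PixelUnshuffleConnector(vision_dim=768, text_dim=2048)
print(connector(torch.randn(1, 32 * 32, 768), 32, 32).shape)  # torch.Size([1, 256, 2048])
```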

## Example

The following example shows how to generate an answer using the `AutoModelForImageTextToText` class.

```python
from transformers import AutoProcessor, AutoModelForImageTextToText

# Load model and processor
model_id = "LiquidAI/LFM2-VL-1.6B"
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    device_map="auto",
    dtype="bfloat16",
)
processor = AutoProcessor.from_pretrained(model_id)

# Load image and create conversation
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://www.ilankelman.org/stopsigns/australia.jpg"},
            {"type": "text", "text": "What is in this image?"},
        ],
    },
]

# Generate answer
inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
    tokenize=True,
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```

## Lfm2VlImageProcessorFast

[[autodoc]] Lfm2VlImageProcessorFast

## Lfm2VlProcessor

[[autodoc]] Lfm2VlProcessor

## Lfm2VlConfig

[[autodoc]] Lfm2VlConfig

## Lfm2VlModel

[[autodoc]] Lfm2VlModel
    - forward

## Lfm2VlForConditionalGeneration

[[autodoc]] Lfm2VlForConditionalGeneration
    - forward

docs/source/ko/_toctree.yml

Lines changed: 2 additions & 0 deletions
```diff
@@ -607,6 +607,8 @@
       title: LED
     - local: in_translation
       title: LFM2
+    - local: in_translation
+      title: LFM2-VL
     - local: model_doc/llama
       title: LLaMA
     - local: model_doc/llama2
```

src/transformers/generation/utils.py

Lines changed: 1 addition & 0 deletions
```diff
@@ -1903,6 +1903,7 @@ def _supports_default_dynamic_cache(cls) -> bool:
             "minimax",
             "xlnet",
             "lfm2",
+            "lfm2-vl",
         ]
     )
```