Commit 866347a
DeepSeek MTP model uses the lm_head and embedding from the main model (#2790)
### What this PR does / why we need it?
In the DeepSeek technical report, it is mentioned that the embedding and
lm_head layers of the MTP layer are shared with the main model, but the
current implementation independently loads a complete copy of each. In the
DeepSeek-R1 model, each of these weights has shape 129280 × 7168, which is
about 1.72 GB in fp16 (129280 × 7168 × 2 bytes).
This PR fixes the MTP layer to reuse the lm_head and embedding of the main
model, saving about 3.45 GB of GPU memory (two 1.72 GB copies) in the pure
DP scenario.
The loading process still first creates temporary space for the embedding
and lm_head in the MTP layer, then calls torch.equal to check whether each
matrix is identical to the main model's. If it is, the main model's tensor
is reused and the temporary tensor is released, as sketched below.
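A minimal sketch of that reuse check, assuming plain torch modules; `maybe_share_weight` is a hypothetical helper name for illustration, not the PR's actual code:

```python
import torch
import torch.nn as nn

def maybe_share_weight(mtp_layer: nn.Module, main_layer: nn.Module) -> None:
    # If the weight loaded for the MTP layer is identical to the main
    # model's, drop the duplicate and share the main model's parameter.
    if torch.equal(mtp_layer.weight, main_layer.weight):
        del mtp_layer.weight                  # release the temporary copy
        mtp_layer.weight = main_layer.weight  # both modules now share one tensor
        torch.cuda.empty_cache()              # hand the freed block back to the allocator
```

Because both modules then point at the same storage, the roughly 1.72 GB duplicate per matrix can be returned to the allocator.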
- vLLM version: v0.12.0
- vLLM main: vllm-project/vllm@ad32e3e
Signed-off-by: zzhx1 <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
1 file changed: +10 −0 lines changed (lines 203–212 added)