Hi, thanks for the great work and for releasing the code to reproduce it.
I have a few questions regarding the Kronecker adaptation forward pass through the adapter modules:
(1) The scaling factor you use for the KAdaptation is 5 times the scaling used in standard LoRA (`alpha / r`):

PEViT/vision_benchmark/evaluation/model.py, line 564 in be6fb43:

```python
scale_factor = self.lora_attn_alpha / self.lora_attn_dim * 5
```

Is there a justification for this, or is it simply an empirical magic number?
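For concreteness, here is a minimal sketch of the two scalings side by side (the hyperparameter values below are made-up examples, not taken from the repo; standard LoRA uses `alpha / r` as in the LoRA paper):

```python
lora_attn_alpha, lora_attn_dim = 16, 8  # hypothetical example values, not from the repo

standard_lora_scale = lora_attn_alpha / lora_attn_dim      # alpha / r = 2.0 (standard LoRA)
kadaptation_scale = lora_attn_alpha / lora_attn_dim * 5    # 5 * alpha / r = 10.0 (as in model.py)
```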
(2) In the forward pass through your adapter for the value matrix, you seem to reuse the query weight matrix `Wq` (the matrix A as defined in the paper, if I understand correctly). Is this a typo/bug?
PEViT/vision_benchmark/evaluation/model.py, lines 571 to 580 in be6fb43:

```python
"Perform kronecker adaptation to Q and K matrices"
if matrix == 'q':
    if self.factorized_phm_rule:
        phm_rule1 = torch.bmm(self.phm_rule1_left, self.phm_rule1_right)
    H = kronecker_product_einsum_batched(phm_rule1, Wq).sum(0)
elif matrix == 'v':
    if self.factorized_phm_rule:
        phm_rule2 = torch.bmm(self.phm_rule2_left, self.phm_rule2_right)
    H = kronecker_product_einsum_batched(phm_rule2, Wq).sum(0)
```
Shouldn't line 580 be

```python
H = kronecker_product_einsum_batched(phm_rule2, Wv).sum(0)
```

instead?
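For context, here is a minimal sketch of what I understand `kronecker_product_einsum_batched` to compute — my own reimplementation for illustration, not the repo's actual code — which is why the choice of `Wq` vs. `Wv` as the second factor matters:

```python
import torch

def kronecker_product_einsum_batched(A: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    """Batched Kronecker product: (b, m, n) x (b, p, q) -> (b, m*p, n*q).

    Sketch of the assumed semantics; the repo's implementation may differ in details.
    """
    b, m, n = A.shape
    _, p, q = B.shape
    # Outer product over the matrix indices, then fold (m, p) and (n, q) into the block layout.
    blocks = torch.einsum('bmn,bpq->bmpnq', A, B)
    return blocks.reshape(b, m * p, n * q)
```

Under this reading, the `'v'` branch builds H from `phm_rule2` together with the query weights, so the value projection's adapter would never involve `Wv` at all.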