Hi, thanks for your great work. I noticed that in your scripts you hard-coded the LoRA alpha to 128 and the rank r to 4, which gives a scaling factor of alpha / r = 32:
PEViT/vision_benchmark/evaluation/lora_model.py
Lines 455 to 463 in be6fb43
```python
'''
LoRA setting
'''
self.lora_moe_lambda = 1.0
self.lora_moe_act = 'linear'
self.lora_r_dropout = None
self.lora_attn_dim = 4
self.lora_moe = 0
self.lora_attn_alpha=128
```
Was there a principled justification for these choices? I am wondering whether you did any tuning on these values and, if so, what values you would suggest.
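For reference, my reading of the scaling factor follows the standard LoRA formulation, where the low-rank update is multiplied by alpha / r before being added to the frozen projection. A minimal sketch of that (not your exact code; `LoRALinear` and the init values are just illustrative):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Toy LoRA layer showing how alpha and r combine into the scaling factor."""
    def __init__(self, in_features, out_features, r=4, alpha=128):
        super().__init__()
        # Frozen pretrained weight (random here only for illustration).
        self.weight = nn.Parameter(torch.randn(out_features, in_features), requires_grad=False)
        # Low-rank adapters: B starts at zero so the initial update is zero.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.02)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        # alpha / r = 128 / 4 = 32 with the hard-coded values above.
        self.scaling = alpha / r

    def forward(self, x):
        frozen = x @ self.weight.T
        update = (x @ self.lora_A.T) @ self.lora_B.T
        return frozen + update * self.scaling
```

So with r = 4 and alpha = 128, the adapter output is amplified 32x, which is why I was curious whether these specific values were tuned.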