LoRA-GA Integration #2926
Add LoRA-GA core implementation
b9c368d6
Register and export LoRA-GA across PEFT
8128a564
Add comprehensive test suite for LoRA-GA
c5743ee5
Add LoRA-GA documentation
ec5490ca
Add LoRA-GA example script and README
5074bc89
Add paper reference to lora_ga_utils module docstring
96f0ec2b
Refactor LoraGAConfig to sub-config pattern
0aefeb68
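The sub-config pattern presumably nests a small LoRA-GA options object inside `LoraConfig` instead of defining a standalone PEFT config type. A minimal sketch of that shape, with illustrative field names (`direction`, `scale`) borrowed from the LoRA-GA paper's hyperparameters rather than the PR's exact API:

```python
from dataclasses import dataclass, field

@dataclass
class LoraGAConfig:
    # Hypothetical fields; the PR's actual options may differ.
    direction: str = "ArB2r"   # which SVD factors seed A and B
    scale: str = "stable"      # scaling scheme for the initial factors

@dataclass
class LoraConfigSketch:
    r: int = 8
    lora_alpha: int = 16
    init_lora_weights: str = "lora_ga"
    lora_ga_config: LoraGAConfig = field(default_factory=LoraGAConfig)
```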
Add preprocess_loraga function for gradient estimation
6e369829
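`preprocess_loraga` estimates per-layer weight gradients on a handful of batches before the adapters are injected. A self-contained sketch of that idea (not the PR's implementation); it assumes a Hugging Face-style model whose forward returns an object with a `.loss` attribute:

```python
import torch

def estimate_gradients_sketch(model, batches, target_module_names):
    """Run a few forward/backward passes on the frozen base model and
    average each target layer's weight gradient for later SVD-based init."""
    model.train()
    grads = {name: None for name in target_module_names}
    for batch in batches:
        model.zero_grad(set_to_none=True)
        loss = model(**batch).loss
        loss.backward()
        for name, module in model.named_modules():
            if name in grads and module.weight.grad is not None:
                g = module.weight.grad.detach()
                grads[name] = g.clone() if grads[name] is None else grads[name] + g
    model.zero_grad(set_to_none=True)
    return {name: g / len(batches) for name, g in grads.items() if g is not None}
```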
Refactor lora_ga_init with SVD fix and fallback handling
63611513
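The core of `lora_ga_init` is an SVD of the estimated gradient: A and B are seeded from its singular vectors so the first optimizer step follows the full-fine-tuning direction, and the base weight is offset by the initial BA product so the model's output is unchanged at step zero. A simplified sketch; which singular-vector slices and scaling the PR actually uses, and its fallback for small matrices, are not shown here:

```python
import torch

def lora_ga_init_sketch(weight, grad, r, alpha=16):
    # weight, grad: (out_features, in_features); slice choices are illustrative.
    U, S, Vh = torch.linalg.svd(grad.float(), full_matrices=False)
    B = U[:, :r].contiguous()            # (out_features, r)
    A = Vh[r : 2 * r, :].contiguous()    # (r, in_features)
    scaling = alpha / r
    # Subtract the initial BA product so W' + scaling * B @ A == W at step zero.
    new_weight = weight.float() - scaling * (B @ A)
    return new_weight.to(weight.dtype), A.to(weight.dtype), B.to(weight.dtype)
```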
Update LoRA exports for new LoRA-GA API
a9d52846
Add lora_ga to save_mutated_as_lora pattern
0b10a86a
Remove old LoRA-GA utilities from exports
fd831f77
Update LoRA-GA test suite for new API
db5a0414
Update LoRA-GA example and documentation
6cb8bed1
Remove LoraGAModel from peft package exports
d5f4bb74
Remove LoraGAModel from tuners package exports
840509f4
Remove LoraGAModel class from lora model
d65fa574
Delete old LoRA-GA utilities file
91a80b97
Remove LORAGA from PeftType enum
fa1905b9
Export preprocess_loraga from peft package
31f798ac
Add preprocess_loraga to tuners __all__ list
2f1c2096
Remove cache_file from LoraGAConfig dataclass
78baafd4
Refactor LoRA-GA preprocessing: add cache_file parameter, use _peft_ …
01014c1b
Fix attribute deletion to use _peft_ prefixed names in layer.py
8b9b6b0f
Update tests to use _peft_loraga_grad attribute name
5f70ef42
Update examples: move data_iter inside train_step, add script descrip…
ff9329dd
Update docs: change copyright to 2025, update usage tips, remove unve…
2eeea7c6
Add LoRA-GA to warning about rslora with rank_pattern when modifying …
81d6985d
Add documentation for path_initial_model_for_weight_conversion usage …
747befa9
Pass lora_ga_config as parameter instead of attaching to modules
0156a34b
Merge upstream/main into loraga-integration branch
19d4e0fe
Add Conv1D support and improve gradient estimation efficiency in LoRA-GA
a306f1b2
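`Conv1D` (used by GPT-2-style models in transformers) stores its weight transposed relative to `nn.Linear`, so the estimated gradient has to be brought into `(out_features, in_features)` layout before the SVD-based initialization. A hedged sketch of that handling:

```python
from transformers.pytorch_utils import Conv1D

def get_weight_and_grad_sketch(module):
    w, g = module.weight, module.weight.grad
    if isinstance(module, Conv1D):
        w, g = w.T, g.T  # Conv1D weight is (in_features, out_features)
    return w, g
```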
Update documentation to clarify LoRA-GA does not support quantized mo…
2d5022e6
Remove unnecessary note from README
7abaacae
Improve error message for missing lora_ga_config
e9f101c5
Remove redundant test_gradient_shapes test
75b0bfeb
Remove unnecessary TestLoraGAConfig class
f438cf5f
Remove TestLoraGAInitialization class with low-value tests
5b739f80
Move os import to root level
62128459
Use pytest tmp_path fixture instead of tempfile
8697f43e
Convert get_model_and_train_step to pytest fixtures
c58bde35
Enhance save/load tests with parametrization and fix random direction…
35164820
Add random seed for test reproducibility
c9690c25
Add test for cached gradients
49f0e104
Update copyright year to 2025
c6ccdda9
Raise error when lora_ga_config is missing with init_lora_weights='lo…
a7e22b59
Rename target_modules to get_target_modules
4fecab7e
Add lora_ga_config param to LoraParallelLinear and run style formatting
4b12fba5
Simplify test fixtures
2173498e
Add tests for lower precision dtypes and quantized model rejection
800d5c8c
Add support for mixed models with unsupported layer types in LoRA-GA
5f6135a8
Remove non-existent LoraGAModel from documentation
476bed7f
Make target_modules a CLI argument in example script
8d919ee3
Auto-populate target_modules from defaults in preprocess_loraga
641b2043
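When the caller passes no `target_modules`, `preprocess_loraga` presumably falls back to PEFT's built-in per-architecture defaults. A sketch of that lookup; the exact import path of the mapping constant is an assumption:

```python
from peft.utils.constants import TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING

def resolve_target_modules_sketch(model, target_modules=None):
    if target_modules is not None:
        return target_modules
    # Fall back to PEFT's default LoRA target modules for this architecture.
    return TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING[model.config.model_type]
```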
Use eval_strategy instead of evaluation_strategy
37e57f2b
Make gradient computation memory-efficient by disabling gradients for…
d4db2b60
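The memory saving comes from restricting autograd to what the estimation actually needs: only the target layers' weights require gradients during the preprocessing passes. A sketch of that idea, with the exact set of frozen tensors assumed:

```python
def freeze_non_target_params_sketch(model, target_module_names):
    """Disable grad on everything except the target weights; return a backup
    of the original flags so they can be restored after estimation."""
    requires_grad_backup = {}
    for name, param in model.named_parameters():
        requires_grad_backup[name] = param.requires_grad
        is_target_weight = any(name == f"{m}.weight" for m in target_module_names)
        param.requires_grad_(is_target_weight)
    return requires_grad_backup
```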
Use forward hooks to correctly count gradient computations across mul…
dcc9a2d6
Separate test parametrizations for direction and scale with explicit …
bc402b90
Add helper function for BitsAndBytes quantized LoRA-GA training
5c164249
Move imports to top of file in LoRA-GA example
2304c1c6
Simplify gradient estimation using backward hooks with accumulation
84521a2a
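A sketch of gradient estimation via backward hooks with accumulation: a tensor hook on each target weight adds its gradient into a float32 buffer as backward runs, so per-step `.grad` tensors never need to be kept around. The buffer dtype and running average are illustrative choices, not necessarily the PR's:

```python
import torch

def attach_grad_accumulators_sketch(model, target_module_names, num_batches):
    buffers, handles = {}, []
    for name, module in model.named_modules():
        if name in target_module_names:
            buffers[name] = torch.zeros_like(module.weight, dtype=torch.float32)

            def make_hook(buf):
                def hook(grad):
                    buf += grad.float() / num_batches  # running average in fp32
                return hook

            handles.append(module.weight.register_hook(make_hook(buffers[name])))
    return buffers, handles  # call h.remove() on each handle when done
```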
Simplify gradient accumulation following PyTorch best practices
7570aa24
Remove helper function and document quantization workflow in README
aee93563
Lower default learning rate to 3e-5 for LoRA-GA
3ba1fb22
Fix dtype precision bug in LoRA-GA weight modification
14d59839
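The usual precision bug here is performing the W − scaling·BA offset in half precision. A sketch of the fix pattern, upcasting to float32 for the arithmetic and casting back; whether this matches the PR's exact change is an assumption:

```python
import torch

def apply_loraga_offset_sketch(weight, A, B, scaling):
    orig_dtype = weight.dtype
    # Do the subtraction in float32 to avoid half-precision rounding error.
    new_weight = weight.float() - scaling * (B.float() @ A.float())
    return new_weight.to(orig_dtype)
```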