peft
e7b47ac0 - FIX Init DoRA weights in float32 if float16 used (#1653)

FIX Init DoRA weights in float32 if float16 used (#1653)

When DoRA weights are initialized in float16 on CPU with an older PyTorch version (<2.2), an error is raised because the operation is not supported for float16 on CPU. This commit temporarily converts the LoRA weights to float32 beforehand if they are in float16. Of course, when the user then tries to train or run inference with this model on CPU, they will still encounter errors. However, in certain situations only the initialization happens on CPU and the model is moved to GPU afterwards. This can be framework code that the user has no control over, as in #1597. Therefore, it is good to have this safety hatch. Note that since our CI uses the latest PyTorch version, we cannot run a test for this, as the operation succeeds on the latest PyTorch no matter what.
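
A minimal sketch of the upcast pattern the commit describes, assuming the failing operation is the weight-norm computation used during DoRA initialization; the function name and signature here are hypothetical and do not reflect the actual peft implementation:

```python
import torch

def init_dora_weight_norm(weight: torch.Tensor, lora_weight: torch.Tensor) -> torch.Tensor:
    """Hypothetical helper illustrating the float32 fallback during DoRA init."""
    dtype = weight.dtype
    if dtype == torch.float16:
        # On PyTorch < 2.2, the norm op below is not implemented for
        # float16 on CPU, so temporarily upcast to float32 (assumption:
        # this mirrors the safety hatch described in the commit).
        weight = weight.float()
        lora_weight = lora_weight.float()
    weight_norm = torch.linalg.norm(weight + lora_weight, dim=1)
    # Cast back so downstream modules keep their original dtype.
    return weight_norm.to(dtype)
```

Upcasting only around the unsupported operation keeps the memory overhead transient: the model itself stays in float16, and only the initialization step pays the float32 cost.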