llm-foundry
3c278a92
- Add fp32 to the set of valid inputs to attention layer (#1347)
Commit
1 year ago
Add fp32 to the set of valid inputs to attention layer (#1347)

Add support for Torch attn + FP32 + FSDP Mixed Precision FULL
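The change described by this commit can be illustrated with a minimal sketch of a dtype-validity check. The helper name `check_valid_dtype` and the dtype set below are assumptions for illustration, not llm-foundry's actual API:

```python
# Hypothetical sketch of a dtype-validity check for an attention layer.
# The function name and dtype set are illustrative, not llm-foundry's API.

VALID_ATTN_DTYPES = {"float16", "bfloat16", "float32"}  # fp32 newly allowed

def check_valid_dtype(dtype: str) -> None:
    """Raise if the attention-input dtype is not in the supported set."""
    if dtype not in VALID_ATTN_DTYPES:
        raise TypeError(
            f"attention inputs must be one of {sorted(VALID_ATTN_DTYPES)}, "
            f"got {dtype!r}"
        )

check_valid_dtype("float32")  # would have raised before a change like this one
```

Widening the accepted set this way lets Torch attention run under FSDP's full-precision mixed-precision mode, where parameters and activations stay in fp32.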
References
#1347 - Add fp32 to the set of valid inputs to attention layer
Author
j316chuck
Parents
129bb56c