diffusers
ad935933 - perf: prefer batched matmuls for attention (#1203)
Commit
2 years ago
perf: prefer batched matmuls for attention; added a fast path to the Decoder when num_heads=1 (#1203)
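A minimal sketch of the technique the commit message describes, not the actual diff: heads are folded into the batch dimension so a single torch.baddbmm/torch.bmm pair computes attention for all heads at once, and a num_heads == 1 fast path skips the reshapes entirely. The function name, tensor shapes, and the scale parameter below are illustrative assumptions.

```python
import torch

def batched_attention(query, key, value, num_heads: int, scale: float):
    # Sketch: attention via batched matmuls, with a num_heads == 1 fast path.
    # query/key/value: (batch, seq, dim); names and shapes are assumptions.
    batch, seq, dim = query.shape
    head_dim = dim // num_heads
    if num_heads > 1:
        # Fold heads into the batch dimension so one bmm covers every head.
        def to_batch(t):
            return (t.reshape(batch, seq, num_heads, head_dim)
                     .permute(0, 2, 1, 3)
                     .reshape(batch * num_heads, seq, head_dim))
        query, key, value = map(to_batch, (query, key, value))
    # baddbmm with beta=0 fuses the scaling into the QK^T matmul; the empty
    # tensor only supplies the output shape and is ignored when beta=0.
    scores = torch.baddbmm(
        torch.empty(query.shape[0], seq, seq,
                    dtype=query.dtype, device=query.device),
        query,
        key.transpose(1, 2),
        beta=0,
        alpha=scale,
    )
    out = torch.bmm(scores.softmax(dim=-1), value)
    if num_heads > 1:
        # Unfold heads back out of the batch dimension.
        out = (out.reshape(batch, num_heads, seq, head_dim)
                  .permute(0, 2, 1, 3)
                  .reshape(batch, seq, dim))
    return out

# Example: with num_heads=1 the reshapes are skipped and the matmuls run
# directly on the (batch, seq, dim) tensors.
q = k = v = torch.randn(2, 64, 32)
y = batched_attention(q, k, v, num_heads=4, scale=(32 // 4) ** -0.5)
```

The baddbmm call replaces a separate scale-then-matmul (or einsum) sequence with one fused batched operation, which is the kind of batching the commit title refers to.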
References
#1203 - perf: prefer batched matmuls for attention
Author
Birch-san
Parents
78a6eed2
Files (1)
src/diffusers/models/attention.py