diffusers
ad935933 - perf: prefer batched matmuls for attention (#1203)

Commit
2 years ago
perf: prefer batched matmuls for attention (#1203)

Prefer batched matmuls for attention; added a fast path to Decoder when num_heads=1.
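The idea behind the commit can be sketched as follows: instead of looping over attention heads in Python and issuing one matmul per head, fold the head dimension into the batch dimension and issue a single batched matmul, with a shortcut that skips the head reshapes entirely when there is only one head. This is an illustrative sketch in numpy, not the actual diffusers code; the function name and shapes are assumptions.

```python
import numpy as np

def attention(q, k, v, num_heads):
    """Scaled dot-product attention over (batch, seq, dim) inputs.

    Hypothetical sketch of the batched-matmul approach; not the
    diffusers implementation.
    """
    b, s, d = q.shape
    if num_heads == 1:
        # Fast path: no head split/merge reshapes are needed.
        scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ v

    hd = d // num_heads
    # Split heads: (batch, seq, dim) -> (batch, heads, seq, head_dim).
    q = q.reshape(b, s, num_heads, hd).transpose(0, 2, 1, 3)
    k = k.reshape(b, s, num_heads, hd).transpose(0, 2, 1, 3)
    v = v.reshape(b, s, num_heads, hd).transpose(0, 2, 1, 3)
    # One batched matmul over batch*heads instead of a per-head loop.
    scores = q @ k.transpose(0, 1, 3, 2) / np.sqrt(hd)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    out = weights @ v
    # Merge heads back: (batch, heads, seq, head_dim) -> (batch, seq, dim).
    return out.transpose(0, 2, 1, 3).reshape(b, s, d)
```

In PyTorch the same effect comes from `torch.bmm` (or the `@` operator) on tensors whose leading dimension is batch times heads; the `num_heads=1` branch avoids the reshape/transpose overhead altogether.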
  • src/diffusers/models/attention.py