xla
56ddd5de
- Add int8 per channel weight-only quantized matmul (#7201)
Commit
1 year ago
Add int8 per channel weight-only quantized matmul (#7201)

Co-authored-by: Siyuan Liu <lsiyuan@google.com>
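The commit title names the technique but the diff itself is not shown here. As a rough illustration, the sketch below shows what a per-channel, weight-only int8 quantized matmul does in plain PyTorch; the function names are hypothetical and are not the API added by this commit.

```python
import torch

def quantize_per_channel_int8(w: torch.Tensor):
    # Hypothetical helper, not from the commit: symmetric int8 quantization
    # with one scale per output channel (per column of the weight matrix).
    scale = w.abs().amax(dim=0).clamp(min=1e-8) / 127.0
    w_int8 = torch.round(w / scale).to(torch.int8)
    return w_int8, scale

def weight_only_quantized_matmul(x: torch.Tensor, w_int8: torch.Tensor,
                                 scale: torch.Tensor) -> torch.Tensor:
    # "Weight-only": activations stay in floating point; the int8 weights
    # are upcast at matmul time, then rescaled per output channel.
    return torch.matmul(x, w_int8.to(x.dtype)) * scale

x = torch.randn(4, 64)
w = torch.randn(64, 32)
w_q, s = quantize_per_channel_int8(w)
y = weight_only_quantized_matmul(x, w_q, s)
print((y - x @ w).abs().max())  # quantization error should stay small
```

Per-channel scales (rather than a single per-tensor scale) preserve accuracy when weight magnitudes differ across output channels, and weight-only quantization shrinks weight storage without ever quantizing activations.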
References
#7201 - Add int8 per channel weight-only quantized matmul
Author
lsy323
Parents
4a30ea7d