xla
56ddd5de - Add int8 per channel weight-only quantized matmul (#7201)

Co-authored-by: Siyuan Liu <lsiyuan@google.com>
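
Below is a minimal sketch of what an int8 per-channel weight-only quantized matmul computes: the weight is quantized symmetrically with one scale per output channel, activations stay in float, and the per-channel scale is applied after the matmul. The function names and shapes here are illustrative assumptions, not the API introduced in #7201.

```python
# Illustrative sketch of int8 per-channel weight-only quantized matmul.
# Assumes symmetric quantization (zero-point = 0) with one scale per
# output channel. Hypothetical helper names, not the code from #7201.
import numpy as np

def quantize_weight_per_channel(w: np.ndarray):
    """Quantize a float weight of shape [in_features, out_features] to int8,
    using one scale per output channel (the last axis)."""
    max_abs = np.max(np.abs(w), axis=0)                # [out_features]
    scale = np.maximum(max_abs, 1e-8) / 127.0          # avoid divide-by-zero
    w_int8 = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return w_int8, scale.astype(np.float32)

def weight_only_quantized_matmul(x: np.ndarray, w_int8: np.ndarray, scale: np.ndarray):
    """Weight-only: activations remain float; only the weight is int8.
    The per-channel scale is folded in after the integer-weight matmul."""
    return (x @ w_int8.astype(np.float32)) * scale     # broadcast over out_features

# Usage: the quantized result tracks the float matmul closely.
x = np.random.randn(4, 16).astype(np.float32)
w = np.random.randn(16, 8).astype(np.float32)
w_int8, scale = quantize_weight_per_channel(w)
print(np.max(np.abs(x @ w - weight_only_quantized_matmul(x, w_int8, scale))))
```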