onnxruntime
9406bcb3
- Adding matmul_integer_to_float16 onnx models (#16978)
Committed 2 years ago
Adding matmul_integer_to_float16 onnx models (#16978)

### Description
Adds the float16 ONNX models generated with `matmul_integer_to_float.py`, which were missed in an earlier change.

### Motivation and Context
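These models exercise ONNX Runtime's MatMulIntegerToFloat contrib op, which fuses an integer matmul with dequantization to a floating-point output. As an illustrative sketch only (not the actual ONNX Runtime kernel, and with made-up inputs), the math for the uint8 case with scalar zero points and scales can be written in NumPy:

```python
import numpy as np

def matmul_integer_to_float16(a, b, a_zero_point, b_zero_point, a_scale, b_scale):
    # Sketch of the dequantized integer matmul: widen to int32 before
    # subtracting zero points so the accumulation cannot overflow uint8.
    acc = (a.astype(np.int32) - a_zero_point) @ (b.astype(np.int32) - b_zero_point)
    # Apply the combined scale, then cast the result to float16.
    return (acc.astype(np.float32) * (a_scale * b_scale)).astype(np.float16)

# Hypothetical example inputs, chosen so the result is exact in float16.
a = np.array([[2, 4], [6, 8]], dtype=np.uint8)
b = np.array([[1, 3], [5, 7]], dtype=np.uint8)
out = matmul_integer_to_float16(a, b, a_zero_point=1, b_zero_point=1,
                                a_scale=0.5, b_scale=0.25)
```

The float16 output is what distinguishes these models from the float32 ones already generated by the same script.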
References
#16978 - Adding matmul_integer_to_float16 onnx models
#18530 - Add TryConvertTensorToBroadcastScalar for QAttention and MatMulIntToFloat
Author
AnaghaRaoAMD
Parents
34b6cd6d