onnxruntime
60ad6c64
- Enable float32 model with FP16 precision for QNN HTP backend (#19863)
Committed: 1 year ago
Enable float32 model with FP16 precision for QNN HTP backend (#19863)

### Description
Enable float32 model with FP16 precision for QNN HTP backend
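As a rough sketch of how a user might exercise this feature, the snippet below builds the provider configuration that would be passed to `onnxruntime.InferenceSession` to run a float32 model on the QNN HTP backend at FP16 precision. The provider-option key `enable_htp_fp16_precision` and the `backend_path` value are assumptions inferred from the commit title; consult the onnxruntime QNN execution provider documentation for the exact names.

```python
def qnn_htp_fp16_providers(backend_path="QnnHtp.dll"):
    """Build a providers list for ort.InferenceSession that requests the
    QNN HTP backend with FP16 precision for a float32 model.

    Note: the option key "enable_htp_fp16_precision" is an assumption
    based on the commit title, not a verified API name.
    """
    return [(
        "QNNExecutionProvider",
        {
            "backend_path": backend_path,       # HTP backend library to load
            "enable_htp_fp16_precision": "1",   # assumed flag: run fp32 graph in fp16
        },
    )]

providers = qnn_htp_fp16_providers()
# Usage (requires onnxruntime built with the QNN EP and HTP hardware):
#   import onnxruntime as ort
#   sess = ort.InferenceSession("model_fp32.onnx", providers=providers)
```

Keeping the configuration in a small helper like this makes it easy to toggle FP16 precision per session without touching the rest of the inference code.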
References
#19863 - Enable float32 model with FP16 precision for QNN HTP backend
Author
HectorSVC
Parents
6579f74a