optimum
4d37ed91
- Fix float16 ORT conversion for models > 2GB (#1079)
Commit
2 years ago
Fix float16 ORT conversion for models > 2GB (#1079)

* use ORT symbolic shape inference instead of ONNX shape inference at ONNX export
* hopefully fix
References
#1079 - Fix float16 ORT conversion for models > 2GB
Author
fxmarty
Parents
d53d398d