optimum
4d37ed91 - Fix float16 ORT conversion for models > 2GB (#1079)

Commit · 2 years ago

Fix float16 ORT conversion for models > 2GB (#1079)

* use ORT symbolic shape inference instead of ONNX shape inference at ONNX export
* hopefully fix
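For context, a minimal sketch of the approach the commit message describes: running onnxruntime's symbolic shape inference on the exported model before the float16 conversion, instead of onnx.shape_inference, which is constrained by protobuf's 2 GB message limit. The file names and the onnxconverter-common fp16 helper below are illustrative assumptions, not necessarily the exact calls in optimum's export code.

```python
import onnx
from onnxruntime.tools.symbolic_shape_infer import SymbolicShapeInference
from onnxconverter_common.float16 import convert_float_to_float16

# Load the exported model; onnx.load resolves the external data files
# that models larger than 2GB are split into. Path is illustrative.
model = onnx.load("model.onnx")

# ORT symbolic shape inference works on the in-memory ModelProto and
# handles symbolic (dynamic) dimensions, whereas plain ONNX shape
# inference fails on models above the 2GB protobuf limit.
model = SymbolicShapeInference.infer_shapes(model, auto_merge=True)

# Cast initializers and intermediate tensors to float16, keeping the
# model inputs/outputs in float32. The helper used here is the
# onnxconverter-common one, assumed for illustration.
model_fp16 = convert_float_to_float16(model, keep_io_types=True)

# Models over the 2GB protobuf limit must be saved with external data.
onnx.save(
    model_fp16,
    "model_fp16.onnx",
    save_as_external_data=True,
    all_tensors_to_one_file=True,
    location="model_fp16.onnx_data",
)
```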