onnxruntime
aeaa1d65 - make optimized_model_path be in temp folder instead of source model folder for transformer optimization (#16531)

Commit · 2 years ago
### Motivation

`optimize_model` writes a temporary model into the same folder as the source model. Most of the time this is fine, but it breaks when the function runs against an input model mounted from AzureML, because the mounted folder is read-only: `optimize_model` fails when it tries to create the optimized model there. The previous workaround was to copy the model into a writable temp folder before calling `optimize_model`, which is painful, especially for huge models. This PR exposes `optimized_model_path` at the `optimize_model` level so that the caller can decide where to save the temporary model.
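As a minimal sketch of the intended call pattern, assuming (per this PR) that `onnxruntime.transformers.optimizer.optimize_model` accepts an `optimized_model_path` keyword: build a path in a fresh temp folder and pass it explicitly, so nothing is written next to the read-only source model. The `temp_optimized_path` helper and the `model.onnx` path below are illustrative, not part of the onnxruntime API.

```python
import os
import tempfile

def temp_optimized_path(input_model_path: str) -> str:
    """Build a path in a fresh temp folder for the optimized model,
    leaving the (possibly read-only) source folder untouched."""
    tmp_dir = tempfile.mkdtemp()
    name = os.path.basename(input_model_path)
    return os.path.join(tmp_dir, name.replace(".onnx", "_optimized.onnx"))

# Hypothetical call site (requires onnxruntime and a real model file;
# the optimized_model_path keyword is the one this PR exposes):
# from onnxruntime.transformers.optimizer import optimize_model
# model = optimize_model(
#     "model.onnx",
#     optimized_model_path=temp_optimized_path("model.onnx"),
# )
```

With this, the caller no longer needs to copy a huge model into a writable folder first; only the intermediate optimized model lands in the temp directory.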