pytorch
618be18a - Enable the quantization on XPU devices (#54857)

Enable the quantization on XPU devices (#54857)

Summary: Enable quantization on XPU devices. Keep the model as-is if it is on an XPU device.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/54857
Reviewed By: ailzhang
Differential Revision: D28501381
Pulled By: jerryzh168
fbshipit-source-id: 6d3e9b04075393248b30776c69881f957a1a837c
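A minimal sketch of the behavior the summary describes, not the actual patch: skip eager-mode quantization conversion when the model's parameters live on an XPU device, and fall through to the usual conversion otherwise. The helper name `convert_unless_xpu` is made up for illustration.

```python
import torch
import torch.nn as nn


def convert_unless_xpu(model: nn.Module) -> nn.Module:
    """Return the model unchanged if it sits on an XPU device;
    otherwise run the standard eager-mode quantization conversion."""
    param = next(model.parameters(), None)
    if param is not None and param.device.type == "xpu":
        # Keep the model as-is on XPU devices, per the commit summary.
        return model
    # On other devices, use the regular eager-mode convert flow.
    return torch.quantization.convert(model)
```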