pytorch/benchmark
d56a6259 - attention issue fix for fp16 inference (#794)

Commit
attention issue fix for fp16 inference (#794)

Summary: The default min value overflows when the model runs under fp16, so I set it to the fp16 minimum value instead.

Pull Request resolved: https://github.com/pytorch/benchmark/pull/794
Reviewed By: xuzhao9
Differential Revision: D35020048
Pulled By: frank-wei
fbshipit-source-id: d5b11ae944dd4edc4eb8541a4f1780fa44fe28ab
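The underlying problem is a common fp16 pitfall: a mask fill value chosen for float32 (such as torch.finfo(torch.float32).min, roughly -3.4e38, or a hard-coded -1e9) overflows to -inf once the tensor is in fp16, whose smallest value is about -65504, which can then produce NaNs in softmax. Below is a minimal sketch of the pattern the commit message describes; the function name, shapes, and helper are illustrative and not taken from the repo's actual code.

```python
import torch

def masked_scores(scores: torch.Tensor, pad_mask: torch.Tensor) -> torch.Tensor:
    """Mask out attention scores at padded key positions (illustrative helper)."""
    # Take the min of the scores' own dtype so the fill value stays finite:
    # for fp16 this is ~-65504 instead of float32's ~-3.4e38, which would
    # overflow to -inf under fp16 inference.
    min_val = torch.finfo(scores.dtype).min
    return scores.masked_fill(pad_mask == 0, min_val)

# Example: fp16 inference with the last key position padded out.
scores = torch.randn(1, 4, 4, dtype=torch.float16)  # (batch, query, key)
pad_mask = torch.tensor([[1, 1, 1, 0]])             # (batch, key)
attn = torch.softmax(masked_scores(scores, pad_mask.unsqueeze(1)), dim=-1)
```

After softmax, the masked positions contribute effectively zero weight, and the result stays free of inf/NaN in fp16 because the fill value is representable in that dtype.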
Author
Wei Wei