pytorch
7e20d389 - New generated conv_bn folding should use same weight and bias dtype as original conv module (#77042)

When doing conv_bn folding in `torch.jit.freeze`, the newly calculated `weight` and `bias` for the fused conv are promoted to a higher precision such as `float32`, even when the original conv's `weight` and `bias` are in a lower precision such as `bfloat16`. This PR records the original dtype of the conv's `weight` and `bias` and converts the folded parameters back to that dtype after the `conv_bn` folding.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/77042
Approved by: https://github.com/eellison
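
The idea behind the fix can be sketched as follows: a minimal, hypothetical conv-bn folding helper (the function name `fold_conv_bn` and its structure are illustrative, not the actual `torch.jit.freeze` internals) that does the folding math in `float32` for accuracy, but records the conv's original dtype up front and casts the fused parameters back at the end, so a `bfloat16` conv stays `bfloat16`:

```python
import torch
import torch.nn as nn

def fold_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold an eval-mode BatchNorm into the preceding Conv2d (illustrative sketch)."""
    # Record the original dtype (e.g. torch.bfloat16) before any promotion.
    orig_dtype = conv.weight.dtype

    # Do the folding arithmetic in float32 for numerical accuracy.
    w = conv.weight.float()
    b = conv.bias.float() if conv.bias is not None else torch.zeros(conv.out_channels)
    scale = bn.weight.float() / torch.sqrt(bn.running_var.float() + bn.eps)
    fused_w = w * scale.reshape(-1, 1, 1, 1)
    fused_b = (b - bn.running_mean.float()) * scale + bn.bias.float()

    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      conv.stride, conv.padding, conv.dilation, conv.groups,
                      bias=True)
    # The fix: cast the folded parameters back to the recorded dtype instead
    # of leaving them promoted to float32.
    fused.weight = nn.Parameter(fused_w.to(orig_dtype))
    fused.bias = nn.Parameter(fused_b.to(orig_dtype))
    return fused
```

With this cast in place, folding a `bfloat16` conv yields a fused conv whose `weight.dtype` is still `torch.bfloat16`; without it, the fused parameters would silently become `float32`.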