pytorch
b058a027 - TorchDynamo: enable convolution bn folding for functional bn (#89746)

Committed 1 year ago
TorchDynamo: enable convolution bn folding for functional bn (#89746)

Motivation: Timm models commonly use a custom-defined BN layer that calls F.batch_norm directly: https://github.com/rwightman/pytorch-image-models/blob/main/timm/models/layers/norm_act.py#L26. The traced fx graph then looks like:

```
opcode         name                    target                                   args                                                                                                       kwargs
-------------  ----------------------  ---------------------------------------  ---------------------------------------------------------------------------------------------------------  --------
placeholder    x                       x                                        ()                                                                                                         {}
call_module    self_conv               self_conv                                (x,)                                                                                                       {}
get_attr       self_bn_running_mean_1  self_bn_running_mean                     ()                                                                                                         {}
get_attr       self_bn_running_var     self_bn_running_var                      ()                                                                                                         {}
get_attr       self_bn_weight          self_bn_weight                           ()                                                                                                         {}
get_attr       self_bn_bias            self_bn_bias                             ()                                                                                                         {}
call_function  batch_norm              <function batch_norm at 0x7f07196cdf70>  (self_conv, self_bn_running_mean_1, self_bn_running_var, self_bn_weight, self_bn_bias, False, 0.1, 1e-05)  {}
call_module    self_bn_drop            self_bn_drop                             (batch_norm,)                                                                                              {}
```

The original conv+bn folding path doesn't handle **F.batch_norm**. However, when the parameters passed to **F.batch_norm** are constant (attributes of the module that will not be updated), the same constant-folding optimization can be applied. This PR enables it, which improves the performance of Timm models.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89746
Approved by: https://github.com/jgong5, https://github.com/jansel
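For context, a minimal sketch of the folding idea the commit enables: when a conv is followed by F.batch_norm whose stats and affine parameters are constant, the BN can be folded into the conv's weight and bias, so a single conv replaces the conv+BN pair. All names below (`ConvFunctionalBN`, `fold_bn_into_conv`) are illustrative, not the actual TorchDynamo implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvFunctionalBN(nn.Module):
    """Toy model: conv followed by a custom BN that calls F.batch_norm,
    with the BN stats/affine parameters held as constant buffers."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, bias=False)
        # Constant BN parameters (never updated; inference only).
        self.register_buffer("running_mean", torch.zeros(8))
        self.register_buffer("running_var", torch.ones(8))
        self.register_buffer("weight", torch.rand(8))
        self.register_buffer("bias", torch.rand(8))

    def forward(self, x):
        y = self.conv(x)
        # Same call shape as in the fx graph above (training=False).
        return F.batch_norm(y, self.running_mean, self.running_var,
                            self.weight, self.bias, False, 0.1, 1e-05)

def fold_bn_into_conv(conv, mean, var, gamma, beta, eps=1e-05):
    """Fold constant BN parameters into the conv:
    w' = w * gamma / sqrt(var + eps)
    b' = beta + (b - mean) * gamma / sqrt(var + eps)"""
    scale = gamma / torch.sqrt(var + eps)
    folded = nn.Conv2d(conv.in_channels, conv.out_channels,
                       conv.kernel_size, conv.stride, conv.padding,
                       conv.dilation, conv.groups, bias=True)
    with torch.no_grad():
        folded.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
        base_bias = conv.bias if conv.bias is not None else torch.zeros_like(mean)
        folded.bias.copy_(beta + (base_bias - mean) * scale)
    return folded

m = ConvFunctionalBN().eval()
x = torch.randn(1, 3, 16, 16)
folded = fold_bn_into_conv(m.conv, m.running_mean, m.running_var,
                           m.weight, m.bias)
# The folded conv alone reproduces conv + F.batch_norm.
assert torch.allclose(m(x), folded(x), atol=1e-4)
```

The key precondition, as the commit message notes, is that the F.batch_norm arguments are constants (module attributes), so the fold is a pure compile-time rewrite with no runtime semantics change.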