pytorch
1b75a2e4 - Add PReLU to MKLDNN convertible Ops (#79011)

Although an MKLDNN variant of PReLU is now available, it isn't used from the CPU path in `optimize_for_inference` because it is left off the list of allowable ops. This leads to graphs that look like this:

```Python
%1770 : Tensor = aten::to_dense(%1804, %29)
%1769 : Tensor = aten::to_dense(%shortcut.15, %29)
%250 : Tensor = aten::prelu(%1770, %self.body.5.res_layer.2.weight) # /home/sacha/.local/lib/python3.10/site-packages/torch/nn/modules/activation.py:1226:0
%1806 : Tensor = aten::to_mkldnn(%250, %29)
%1807 : Tensor = aten::to_mkldnn(%1769, %29)
```

Note: the odd `to_dense` and `to_mkldnn` of `%shortcut.15` appear to be an artifact caused by the prelu needing conversion. `%shortcut.15` could have been left as MKLDNN (both lines can actually be deleted).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79011
Approved by: https://github.com/eellison
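For context, a minimal sketch of how the effect can be observed from user code. The `ConvPReLU` module below is a hypothetical example (not from the PR); it is scripted and passed through `torch.jit.optimize_for_inference`, and printing the resulting graph shows whether `aten::prelu` stays inside the MKLDNN region or is bracketed by `aten::to_dense` / `aten::to_mkldnn` conversions. Whether the MKLDNN rewrite fires at all depends on the build having MKLDNN support.

```Python
import torch
import torch.nn as nn

# Hypothetical module mixing a conv and PReLU -- the pattern affected by this change.
class ConvPReLU(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.act = nn.PReLU(8)

    def forward(self, x):
        return self.act(self.conv(x))

model = ConvPReLU().eval()
scripted = torch.jit.script(model)

# optimize_for_inference freezes the module and, on CPU builds with MKLDNN,
# rewrites eligible ops to their MKLDNN variants. Inspecting the graph shows
# whether the prelu is kept in the MKLDNN section or surrounded by
# to_dense/to_mkldnn conversions as in the commit message above.
optimized = torch.jit.optimize_for_inference(scripted)
print(optimized.graph)
```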