cf735d89 - [quant][core][bug fix][gpu] Added kReluFused to quantized cudnn conv operator's caching

Summary:
The previous implementation of CacheKey neglected kReluFused, but the cache must distinguish entries by whether a ReLU is fused into the quantized conv. Otherwise we run into situations in which a uid is defined in the VariantPack but not in the operator graph.

Test plan:
```
python test/test_quantization.py -k test_qconv2d_cudnn
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75711
Approved by: https://github.com/jerryzh168
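
To illustrate the failure mode, here is a minimal C++ sketch of a plan cache whose key includes the fused-ReLU flag. All names (ConvCacheKey, kReluFused, KeyHash, plan_cache) are hypothetical and do not reflect PyTorch's actual types; the point is that any field that changes the cuDNN operator graph must also be part of the cache key, or a conv and a conv+relu will collide on one cached plan.

```
// Hypothetical sketch, not PyTorch's actual implementation.
#include <cstdint>
#include <cstring>
#include <unordered_map>

struct ConvCacheKey {
  int64_t input_shape[4];
  int64_t weight_shape[4];
  int8_t  output_dtype;
  bool    kReluFused;  // the bug: omitting this field let conv and
                       // conv+relu share one cached execution plan
};

// Byte-wise hash/equality, assuming keys are zero-initialized
// (e.g. via memset) so padding bytes compare equal.
struct KeyHash {
  size_t operator()(const ConvCacheKey& k) const {
    const auto* p = reinterpret_cast<const unsigned char*>(&k);
    size_t h = 14695981039346656037ull;  // FNV-1a
    for (size_t i = 0; i < sizeof(ConvCacheKey); ++i) {
      h = (h ^ p[i]) * 1099511628211ull;
    }
    return h;
  }
};

struct KeyEq {
  bool operator()(const ConvCacheKey& a, const ConvCacheKey& b) const {
    return std::memcmp(&a, &b, sizeof(ConvCacheKey)) == 0;
  }
};

// Cache from key to an (opaque) execution-plan id. With kReluFused in
// the key, conv and conv+relu map to distinct plans, so the uids in the
// VariantPack always match the operator graph the plan was built from.
std::unordered_map<ConvCacheKey, int, KeyHash, KeyEq> plan_cache;
```

Under this sketch, a lookup for a fused conv+relu can never return the plan built for the plain conv, which is the class of mismatch the commit fixes.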