pytorch
a6fea03a - Skip codegen checks for `dequantize_self`, `lu_unpack`, `_cudnn_rnn`, and `.*conv.*_backward.*` (#61139)

Summary: Temporary fix for fb-internal tests. This and similar failures are being discussed here: https://github.com/pytorch/pytorch/issues/60426

Applies the changes below:
- This may seem counterintuitive because the storage check comes before the tensor check, but if the TensorImpl use count is not enforced, we should also not enforce the StorageImpl use count. If an op returns one of its inputs as-is, that input may already be aliased with another tensor, and hence would have a StorageImpl use count greater than one.
- Also clarify in the description that use_count is not necessarily > 1: an op may, but does not necessarily, return one of its inputs as-is.
- Allow usage of regex in the skip list.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61139
Reviewed By: malfet, Varal7
Differential Revision: D29564917
Pulled By: soulitzer
fbshipit-source-id: 806b7177117a573dd12f161cc80dcadac892f9d0
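The "regex in the skip list" change can be sketched as follows. This is a minimal illustrative sketch, not the actual codegen code: the skip-list contents mirror the ops named in the commit title, and `should_skip_checks` is a hypothetical helper.

```python
import re

# Hypothetical skip list mirroring the commit title: exact op names plus a
# regex covering convolution backward variants.
SKIP_LIST = [
    "dequantize_self",
    "lu_unpack",
    "_cudnn_rnn",
    r".*conv.*_backward.*",
]

def should_skip_checks(op_name: str) -> bool:
    """Return True if op_name matches any skip-list entry.

    Treating every entry as a regex and using fullmatch lets a plain name
    like "lu_unpack" act as an exact match, while patterns such as
    ".*conv.*_backward.*" match whole families of ops.
    """
    return any(re.fullmatch(pattern, op_name) for pattern in SKIP_LIST)
```

For example, `should_skip_checks("slow_conv3d_backward")` is true via the regex entry, while an unrelated op like `add` is still checked.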