pytorch
fc21cc82 - Enable sparse_dim() and dense_dim() methods for Strided tensors (#86203)

Enable sparse_dim() and dense_dim() methods for Strided tensors (#86203)

The reason for enabling sparse_dim()/dense_dim() for strided tensors is to produce more meaningful error messages. For instance, compare

```
NotImplementedError: Could not run 'aten::sparse_dim' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::sparse_dim' is only available for these backends: [SparseCPU, SparseCUDA, SparseMeta, SparseCsrCPU, SparseCsrCUDA, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMeta, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradNestedTensor, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher]
```

[master] vs

```
RuntimeError: addmm: matrices expected, got 0D tensor
```

[this PR], where the latter message gives a hint of which function is to blame for mishandling the unexpected inputs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86203
Approved by: https://github.com/cpuhrsch
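A quick sketch of the resulting behavior (assuming a PyTorch build that includes this commit): on a strided tensor, sparse_dim() now returns 0 and dense_dim() returns the tensor's dimensionality, instead of the dispatcher raising NotImplementedError.

```python
import torch

# A strided (dense) tensor; before this change, calling sparse_dim()
# on it raised NotImplementedError from the dispatcher.
t = torch.zeros(2, 3)

# With this change, strided tensors report 0 sparse dimensions
# and dim() dense dimensions.
print(t.sparse_dim())  # 0
print(t.dense_dim())   # 2

# A COO sparse tensor for comparison: all dims sparse by default.
s = t.to_sparse()
print(s.sparse_dim(), s.dense_dim())  # 2 0
```

This lets downstream ops (e.g. addmm) fall through to their own shape checks and report a function-specific error rather than a generic dispatcher failure.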