pytorch
1ea49c68 - Add linalg.vander

Commit · 3 years ago
Add linalg.vander

This PR adds `linalg.vander`, the linalg version of `torch.vander`. We add autograd support and support for batched inputs. We also take this chance to improve the docs (TODO: check that they render correctly!) and add an OpInfo.

**Discussion**: The current default for the `increasing` kwarg is extremely odd, as it is the opposite of the classical definition (see [wiki](https://en.wikipedia.org/wiki/Vandermonde_matrix)). This is reflected in the docs, where I make explicit both the odd default that we use and the classical definition. See also [this stackoverflow post](https://stackoverflow.com/a/71758047/5280578), which shows how people are confused by these defaults.

My take would be to change the default to `increasing=True` and document the divergence with NumPy (as we do for other `linalg` functions), because:
- It is what people expect.
- It gives the correct determinant, the classical "Vandermonde determinant", rather than (-1)^{⌊n/2⌋} times the Vandermonde determinant (ugh).
- [Minor] It is more efficient (no `flip` needed).
- Since it lives under `linalg.vander`, it is strictly not a drop-in replacement for `np.vander`.

We will deprecate `torch.vander` in a PR after this one in this stack (once we settle on the correct default). Thoughts, mruberry?

cc kgryte, rgommers, as they might have some context for the defaults of NumPy.

Fixes https://github.com/pytorch/pytorch/issues/60197
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76303
Approved by: https://github.com/albanD
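To make the default being debated concrete, here is a small NumPy sketch (NumPy's `np.vander` uses the same `increasing=False` default that the commit message calls odd; the determinant values shown are for this specific 3-element input):

```python
import numpy as np

x = np.array([1, 2, 3])

# NumPy's default (increasing=False) puts the highest power first:
# each row is [x_i^2, x_i, 1].
V_dec = np.vander(x)

# increasing=True matches the classical textbook definition:
# each row is [1, x_i, x_i^2].
V_inc = np.vander(x, increasing=True)

print(V_inc)
# [[1 1 1]
#  [1 2 4]
#  [1 3 9]]

# Only the increasing form yields the classical Vandermonde determinant
# prod_{i<j} (x_j - x_i) = (2-1)*(3-1)*(3-2) = 2.
print(np.linalg.det(V_inc))  # ≈  2.0
print(np.linalg.det(V_dec))  # ≈ -2.0 (sign flipped by reversing columns)
```

Reversing the n columns takes ⌊n/2⌋ column swaps, each of which flips the determinant's sign, which is why the decreasing-power default can disagree in sign with the classical Vandermonde determinant.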