[ONNX] Update documentation (#58712) (#60249)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60249
* Add introductory paragraph explaining what ONNX is and what the
torch.onnx module does.
* In "Tracing vs Scripting" and doc-string for torch.onnx.export(),
clarify that exporting always happens on ScriptModules and that
tracing and scripting are the two ways to produce a ScriptModule.
* Remove examples of using Caffe2 to run exported models.
Caffe2's website says it's deprecated, so it's probably best not to
encourage people to use it by including it in examples.
* Remove a lot of content that's redundant:
* The example of how to mix tracing and scripting, and instead
link to Introduction to TorchScript, which includes very similar
content.
* "Type annotations" section. Link to TorchScript docs which explain
that in more detail.
* "Using dictionaries to handle Named Arguments as model inputs"
section. It's redundant with the description of the `args` argument
to `export()`, which appears on the same page once the HTML
is generated.
* Remove the list of supported Tensor indexing patterns. If it's not
in the list of unsupported patterns, users can assume it's
supported, so having both is redundant.
* Remove the list of supported operators and models.
I think the list of supported operators is not very useful.
A list of supported model architectures may be useful, but in
reality it's already very out of date. We should add it back
if/when we have a system for keeping it up to date.
* "Operator Export Type" section. It's redundant with the description
of the `operator_export_type` arg to `export()`, which appears on
the same page once the HTML is generated.
* "Use external data format" section. It's redundant with the
description of the `use_external_data_format` arg to `export()`.
* "Training" section. It's redundant with the
description of the `training` arg to `export()`.
* Move the content about different operator implementations producing
different results from the "Limitations" section into the doc for the
`operator_export_type` arg.
* Document "quantized" -> "caffe2" behavior of
OperatorExportTypes.ONNX_ATEN_FALLBACK.
* Combine the text about using torch.Tensor.item() and the text about
using NumPy types into a section titled
"Avoid NumPy and built-in Python types", since they're both
fundamentally about the same issue (see the sketch after this list).
* Rename "Write PyTorch model in Torch way" to "Avoiding Pitfalls".
* Lots of minor fixes: spelling, grammar, brevity, fixing links, adding
links.
* Clarify limitation on input and output types. Phrasing it in terms of
PyTorch types is much more accessible than in terms of TorchScript
types. Also clarify what actually happens when dict and str are used
as inputs and outputs.
* In the Supported operators section, use torch function and class names
and link to them. This is more user-friendly than using the internal
aten op names.
* Remove references to VariableType.h, which doesn't appear to contain
the information that it once did. Instead refer to the generated
.pyi files.
* Remove the text in the FAQ about appending to lists within loops.
I think this limitation is no longer present
(perhaps since https://github.com/pytorch/pytorch/pull/51577).
* Minor fixes to some code I read along the way.
* Explain the current rationale for the weird ::prim_PythonOp op name.
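
For reference, a minimal sketch of the two ways to produce a ScriptModule
for export, as mentioned above (the model, inputs, and file names are
invented for illustration; `torch.jit.trace`, `torch.jit.script`, and
`torch.onnx.export` are the standard APIs):

```python
import torch

class MyModel(torch.nn.Module):
    def forward(self, x):
        return x.relu() + 1

model = MyModel()
dummy_input = torch.randn(2, 3)

# Tracing: run the model on example inputs and record the ops that execute.
traced = torch.jit.trace(model, dummy_input)
torch.onnx.export(traced, (dummy_input,), "model_traced.onnx")

# Scripting: compile the model's Python source, preserving data-dependent
# control flow.
scripted = torch.jit.script(model)
torch.onnx.export(scripted, (dummy_input,), "model_scripted.onnx")
# (Older PyTorch releases also required `example_outputs` when exporting a
# ScriptModule; newer releases do not.)
```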
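
A minimal sketch of the named-argument handling that the `args` description
covers (the model and argument names are invented for illustration):

```python
import torch

class MyModel(torch.nn.Module):
    def forward(self, x, offset):
        return x + offset

model = MyModel()
x = torch.randn(2, 3)
offset = torch.randn(2, 3)

# If the last element of `args` is a dict, it is treated as named arguments;
# everything before it is passed positionally.
torch.onnx.export(model, (x, {"offset": offset}), "model.onnx")
```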
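
A minimal sketch of selecting the ATen-fallback mode discussed above (the
model and file name are invented for illustration):

```python
import torch

model = torch.nn.Linear(4, 2)
x = torch.randn(1, 4)

# Ops with no native ONNX mapping are emitted as ATen ops rather than
# failing the export; quantized ops get the special handling documented
# in this change.
torch.onnx.export(
    model,
    (x,),
    "model_aten_fallback.onnx",
    operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK,
)
```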
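
A minimal sketch of the pitfall behind "Avoid NumPy and built-in Python
types" (the function names are invented for illustration):

```python
import torch

def forward_bad(x):
    # .item() leaves the graph, so the mean is baked into the exported
    # model as a constant computed from the example input.
    return x - x.mean().item()

def forward_good(x):
    # Staying in torch.Tensor ops keeps the computation in the graph, so it
    # is re-evaluated for every input at inference time.
    return x - x.mean()
```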
Test Plan: Imported from OSS
Reviewed By: zou3519, ZolotukhinM
Differential Revision: D29494912
Pulled By: SplitInfinity
fbshipit-source-id: 7756c010b2320de0692369289604403d28877719
Co-authored-by: Gary Miguel <garymiguel@microsoft.com>