fx quantization: add option to leave graph inputs and/or outputs quantized (#48624)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48624
Before this PR, there was an assumption that all graph inputs
and outputs are in floating point, with some exceptions for
`standalone_module`.
This PR adds an option to mark graph inputs and/or outputs
as quantized, so that conversions at those boundaries can be skipped.
This is useful when incrementally migrating models that currently
use Eager mode quantization.
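As a rough sketch of what this enables: a caller can produce a quantized tensor and hand it to the graph directly, declaring via a custom config that the corresponding input (and/or output) is already quantized. The config key names below (`input_quantized_idxs`, `output_quantized_idxs`) are assumptions for illustration; check the `prepare_fx` documentation for the exact API.

```python
import torch

# A quantized tensor that could serve as an already-quantized graph input.
x = torch.randn(2, 3)
xq = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)
assert xq.is_quantized

# Hypothetical custom config passed to prepare_fx (key names are
# assumptions): input 0 and output 0 are declared quantized, so no
# quant/dequant conversions are inserted at the graph boundary for them.
prepare_custom_config_dict = {
    "input_quantized_idxs": [0],
    "output_quantized_idxs": [0],
}
```

With such a config, the converted model would consume and produce quantized tensors at the listed positions instead of floating-point ones.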
Test Plan: Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D25231833
fbshipit-source-id: 9f9da17be72b614c4c334f5c588458b3e726ed17