237316aa - PNP: early FX numeric suite tool to quantize each layer N times (#80521)

Summary: This PR is an early prototype of a tool to quantize each layer of a model N times, with N qconfigs each. We follow the design agreed upon in https://fburl.com/gdoc/e1gaq3ih .

Current API:

```
m = M().eval()
example_input = (torch.randn(2, 2),)
qconfig_mappings = [
    QConfigMapping().set_global(torch.quantization.default_qconfig),
    QConfigMapping().set_global(torch.quantization.default_dynamic_qconfig),
]
backend_config = get_native_backend_config()

msp = prepare_n_shadows_model(
    m, example_input, qconfig_mappings, backend_config)

for _ in range(2):
    msp(*example_input)

msq = convert_n_shadows_model(msp)
msq(*example_input)

results = extract_results_n_shadows_model(msq)
print_comparisons_n_shadows_model(results)

// example output

subgraph_idx    ref_node_name    best_idx        1        2
--------------  ---------------  --------  -------  -------
subgraph_0      fc1                     2  42.0834  42.6279
subgraph_1      fc2                     2  43.7259  50.0593
```

Test plan:

```
python test/test_quantization.py -k test_n_shadows
```

Differential Revision: [D37650332](https://our.internmc.facebook.com/intern/diff/D37650332)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80521

Approved by: https://github.com/jerryzh168, https://github.com/andrewor14
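For context, the snippet above assumes a toy model `M` that is not defined in the commit message. Below is a minimal sketch of what such a model might look like, inferred from the example output (two linear layers named `fc1` and `fc2`); the exact module structure and layer sizes are assumptions for illustration, not taken from the PR.

```
import torch
import torch.nn as nn


class M(nn.Module):
    # Hypothetical toy model: two linear layers, matching the fc1/fc2
    # rows in the example comparison output above.
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(2, 2)
        self.fc2 = nn.Linear(2, 2)

    def forward(self, x):
        x = self.fc1(x)
        x = self.fc2(x)
        return x
```

With a model like this, the API walkthrough quantizes each matched subgraph (here `fc1` and `fc2`) once per `QConfigMapping` in `qconfig_mappings`, logs results for each candidate, and the printed comparison reports a metric per candidate (the columns labeled `1` and `2`) together with `best_idx`, the candidate that scored best for that subgraph.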