Quantize bias with the quantization parameters (#48749)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/48749
This change reverts D25179863 (https://github.com/pytorch/pytorch/commit/55e225a2dc1529d9c68d5f8b333b155bd5b5b334) because in 1.0.0.14 this behavior was
reintroduced.
We believe this was already working pre-1.0.0.9; Intel then regressed it, which is
why we had to remove this quantization section, and in 1.0.0.14 they fixed it.
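For context, the quantization section being restored follows the common scheme in which the bias is quantized to int32 using the product of the input and weight scales. The sketch below is a generic illustration of that scheme, not the actual backend implementation; the function name `quantize_bias` and its signature are hypothetical.

```python
import numpy as np

def quantize_bias(bias, input_scale, weight_scales):
    """Illustrative int32 bias quantization (hypothetical helper).

    Uses the conventional rule bias_scale = input_scale * weight_scale,
    so the quantized bias can be added directly to the int32 accumulator.
    """
    bias_scale = input_scale * np.asarray(weight_scales, dtype=np.float64)
    q_bias = np.round(bias / bias_scale).astype(np.int32)
    return q_bias, bias_scale

# Example: per-channel weight scales
bias = np.array([0.5, -1.25])
q_bias, bias_scale = quantize_bias(bias, input_scale=0.1,
                                   weight_scales=[0.05, 0.02])
# q_bias -> [100, -625], since 0.5/0.005 = 100 and -1.25/0.002 = -625
```

Quantizing the bias with these exact scales is what makes bitwise matching against a reference implementation feasible, since no rescaling of the bias is needed at accumulation time.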
Test Plan:
We tested ctr_instagram_5x, which now passes with bitwise matching.
hl475 will test the top 6 models; if they match, we will use this point
to lock in any further changes going forward.
Reviewed By: venkatacrc
Differential Revision: D25283605
fbshipit-source-id: 33aa9af008c113d4d61e3461a44932b502bf42ea