[Quant][Eager] Added 4 bit support for eager mode quantization flow (#69806)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69806
Minor modifications were made to support the 4-bit quantized embedding module in the eager-mode quantization flow and to allow testing of the changes.
Test Plan:
In the PyTorch main directory, execute
```
python test_quantization.py TestPostTrainingStatic.test_quantized_embedding
```
to run the series of tests, including the newly added test_embedding_4bit function.
Imported from OSS
Reviewed By: jbschlosser
Differential Revision: D33152675
fbshipit-source-id: 5cdaac5aee9b8850e61c99e74033889bcfec5d9f