quantized embedding: make error message clearer (#66051)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/66051
Makes the error message clearer when a quantized embedding is converted
with an unsupported dtype. This is helpful when debugging quantization
errors on new models.
Test Plan:
```
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(1, 1)

m = M().eval()
# qint8 is an unsupported weight dtype for quantized embeddings,
# so convert() below raises the improved error
m.qconfig = torch.quantization.QConfig(
    activation=torch.quantization.MinMaxObserver.with_args(dtype=torch.qint8),
    weight=torch.quantization.MinMaxObserver.with_args(dtype=torch.qint8))
m.embedding.qconfig = m.qconfig
mp = torch.quantization.prepare(m)
mq = torch.quantization.convert(mp)
# error message now includes the incorrect dtype
```
Imported from OSS
Reviewed By: dagitses
Differential Revision: D31472848
fbshipit-source-id: 86f6d90bc0ad611aa9d1bdae24497bc6f3d2acaa