[quant][fx][graphmode][be] Use is_qat instead of model.training as a flag for qat (#69878)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69878
This switches FX graph mode quantization to use an explicit is_qat flag, rather than model.training, to decide whether QAT handling applies. We will still verify that model.training is True when users call the prepare_qat API.
Fully relaxing that condition might also mean changing the API for methods in fuser_method_mapping
to take an additional QAT flag (currently we just have different fusions for training/eval). I don't think
this is P0; we can revisit it if a need arises in the future.
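As a rough illustration of the direction this takes, the sketch below (all names hypothetical, not the actual fuser_method_mapping API) shows fusion selection keyed by an explicit is_qat flag instead of model.training:

```python
# Hypothetical sketch: select a fuser method from an explicit is_qat flag
# rather than inspecting model.training. Names are illustrative only.

def get_fuser_method(pattern, is_qat):
    # Separate mappings for QAT vs eval fusion, keyed by an op pattern.
    qat_fusions = {("conv", "bn"): "fuse_conv_bn_qat"}
    eval_fusions = {("conv", "bn"): "fuse_conv_bn_eval"}
    mapping = qat_fusions if is_qat else eval_fusions
    return mapping[pattern]

print(get_fuser_method(("conv", "bn"), is_qat=True))   # fuse_conv_bn_qat
print(get_fuser_method(("conv", "bn"), is_qat=False))  # fuse_conv_bn_eval
```

The caller decides QAT vs eval once and passes it down, so fusion behavior no longer silently depends on the model's train/eval mode.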
Test Plan:
```
python test/test_quantization.py TestQuantizeFx
```
Imported from OSS
Reviewed By: supriyar
Differential Revision: D33080988
fbshipit-source-id: b13715b91f10454948199323c5d81ef88bb3517f