Rename matmul_4bits_quantizer.py to matmul_nbits_quantizer.py (#24472)
### Description
* Rename the file and the class, since the quantizer now supports both 4-bit and 8-bit quantization.
* Update `HQQWeightOnlyQuantizer` to support 8-bit quantization.
* Update some comments.
### Motivation and Context
https://github.com/microsoft/onnxruntime/pull/24384 added 8-bit support
to the default weight-only quantizer, so the `4bits` name no longer reflects the module's capabilities.