optimum
9ff5ea8f
- Fix maximum seqlen for gptq quantization (#1748)
Commit
1 year ago
Fix maximum seqlen for gptq quantization (#1748)
fix gptq calibration data
References
#1748 - Fix maximum seqlen for gptq quantization
Author
SunMarc
Parents
d87efb25
Files (1)
optimum/gptq/quantizer.py
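Only the changed file is listed here, so as a minimal sketch of the general idea behind the fix: clamp the calibration sequence length to the model's maximum context size before tokenizing GPTQ calibration samples. The helper name `clamp_calibration_seqlen` and the config attributes it probes are illustrative assumptions, not the actual code from `quantizer.py`.

```python
# Illustrative sketch (not the actual diff): cap the GPTQ calibration seqlen
# at the model's maximum context length before tokenizing calibration data.
from transformers import AutoConfig, AutoTokenizer


def clamp_calibration_seqlen(model_id: str, requested_seqlen: int) -> int:
    """Return a calibration seqlen no larger than the model's max context."""
    config = AutoConfig.from_pretrained(model_id)
    # Common attribute names for the maximum context length; fall back to the
    # requested value if none of them is present on the config.
    for attr in ("max_position_embeddings", "n_positions", "seq_length"):
        model_max = getattr(config, attr, None)
        if isinstance(model_max, int) and model_max > 0:
            return min(requested_seqlen, model_max)
    return requested_seqlen


if __name__ == "__main__":
    model_id = "facebook/opt-125m"  # small model, used here only as an example
    seqlen = clamp_calibration_seqlen(model_id, requested_seqlen=4096)
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # Calibration samples are tokenized with truncation at the clamped length.
    batch = tokenizer(["example calibration text"], truncation=True, max_length=seqlen)
    print(seqlen, len(batch["input_ids"][0]))
```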