text-generation-inference
36dd1601 - Add support for exl2 quantization

Commit
1 year ago
Add support for exl2 quantization

Mostly straightforward; changes to existing code:

* Wrap quantizer parameters in a small wrapper to avoid passing around untyped tuples and needing to repack them as a dict.
* Move scratch space computation to warmup, because we need the maximum input sequence length to avoid allocating huge scratch buffers that OOM.
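The two changes above can be sketched as follows. This is a hypothetical illustration, not the actual text-generation-inference code: the names `Exl2Weight`, `load_layer`, and `scratch_bytes`, and the specific fields, are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Any


@dataclass(frozen=True)
class Exl2Weight:
    """Hypothetical wrapper bundling exl2 quantizer tensors,
    replacing an untyped tuple that callers had to unpack and
    repack as a dict."""
    q_weight: Any  # packed quantized weights
    q_scale: Any   # per-group scales
    q_groups: Any  # group metadata


def load_layer(weights: dict) -> Exl2Weight:
    # Unpack once into a typed wrapper; downstream code uses
    # attribute access instead of positional tuple indices.
    return Exl2Weight(
        q_weight=weights["q_weight"],
        q_scale=weights["q_scale"],
        q_groups=weights["q_groups"],
    )


def scratch_bytes(max_input_len: int, hidden_size: int,
                  dtype_size: int = 2) -> int:
    # Illustrative sizing only: computed at warmup, once the maximum
    # input sequence length is known, instead of allocating a
    # worst-case buffer up front that could OOM.
    return max_input_len * hidden_size * dtype_size
```

A wrapper like this also gives a single place to validate shapes or dtypes when the layer is loaded, rather than at every call site.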
Author
Committer
Daniël de Kok