🚨🚨🚨 [`Quantization`] Store the original dtype in the config as a private attribute 🚨🚨🚨 #26761
Commits:
- 2797129d — First step
- b7e797f3 — fix
- 73d31090 — add adjustments for gptq
- 28d2b27a — change to `_pre_quantization_dtype`
- 316f7766 — Update src/transformers/modeling_utils.py
- 948de7ed — fix serialization
- 06700c73 — Apply suggestions from code review
- 1ebe674d — fixup
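The commits above implement the idea of recording a model's original dtype in its config before quantization overwrites it, under the private attribute `_pre_quantization_dtype`. A minimal sketch of that pattern follows; the `quantize_config` helper and the string dtypes are hypothetical stand-ins used to keep the example dependency-free, not the actual transformers implementation:

```python
from types import SimpleNamespace

def quantize_config(config, quantized_dtype="int8"):
    # Hypothetical helper: before overwriting torch_dtype, stash the
    # original value in a private config attribute so callers can still
    # discover what precision the weights originally had.
    if not hasattr(config, "_pre_quantization_dtype"):
        config._pre_quantization_dtype = config.torch_dtype
    config.torch_dtype = quantized_dtype
    return config

config = SimpleNamespace(torch_dtype="float16")
config = quantize_config(config)
print(config.torch_dtype)              # the new, quantized dtype
print(config._pre_quantization_dtype)  # the original dtype, preserved
```

In the real library the stored value is a `torch.dtype` rather than a string, and (per the "fix serialization" commit) the attribute also has to round-trip correctly when the config is saved and reloaded.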
younesbelkada changed the title from "[`Quantization`] Store the original dtype in the config as a private attribute 🚨🚨🚨" to "🚨🚨🚨 [`Quantization`] Store the original dtype in the config as a private attribute 🚨🚨🚨" (2 years ago).