onnxruntime
d205bb78 - Support mixed precision in quantization for RTN (#24401)

### Description
Support mixed precision in quantization for RTN (round-to-nearest): a per-node configuration can override the bit width used for individual MatMul weights.

### Motivation and Context
Makes quantization more flexible, e.g. sensitive layers can be kept at 8 bits while the rest of the model is quantized to 4 bits.

Usage:
```
customized_weight_config = {}
for i in layers_to_exclude:
    customized_weight_config["/model/layers." + str(i) + "/MatMul"] = {"bits": 8}

algo_config = matmul_4bits_quantizer.RTNWeightOnlyQuantConfig(
    customized_weight_config=customized_weight_config
)
quant = MatMul4BitsQuantizer(
    model=onnx_model,
    block_size=32,
    is_symmetric=False,
    accuracy_level=4,
    nodes_to_exclude=nodes_to_exclude,
    algo_config=algo_config,
)
```
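For reference, a minimal end-to-end sketch of the new option. The model path and layer indices are placeholders, and the final save step assumes the quantizer's usual `process()` driver and `quant.model.model` accessor from recent onnxruntime releases; adapt as needed for your build.

```
import onnx
from onnxruntime.quantization import matmul_4bits_quantizer

# Placeholders: point at your model and pick which layers stay at 8 bits.
onnx_model = onnx.load("model.onnx")
layers_to_exclude = [0, 1]   # layer indices to quantize at 8 bits instead of 4
nodes_to_exclude = []        # node names to skip entirely

# Per-node override: these MatMuls are quantized at 8 bits; all others use 4.
customized_weight_config = {}
for i in layers_to_exclude:
    customized_weight_config["/model/layers." + str(i) + "/MatMul"] = {"bits": 8}

algo_config = matmul_4bits_quantizer.RTNWeightOnlyQuantConfig(
    customized_weight_config=customized_weight_config
)
quant = matmul_4bits_quantizer.MatMul4BitsQuantizer(
    model=onnx_model,
    block_size=32,
    is_symmetric=False,
    accuracy_level=4,
    nodes_to_exclude=nodes_to_exclude,
    algo_config=algo_config,
)
quant.process()
onnx.save(quant.model.model, "model_quantized.onnx")
```

Note that the keys of `customized_weight_config` are node names, so the `/model/layers.N/MatMul` pattern must match the actual node names in your graph.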