fix: caching allocator behaviour for quantization. #12172
Commits:
- 1bc34f55 — fix: caching allocator behaviour for quantization.
- 51588508 — up
- 6deece1c — Update src/diffusers/models/model_loading_utils.py
- 81bef975 — Merge branch 'main' into fix-quantizer-warmup

a-r-r-o-w approved these changes on 2025-08-18.
sayakpaul merged e8246604 into main 219 days ago.
sayakpaul deleted the fix-quantizer-warmup branch 219 days ago.