diffusers @ 263b9734
[LoRA] feat: support loading loras into 4bit quantized Flux models. (#10578)
Committed 175 days ago
[LoRA] feat: support loading loras into 4bit quantized Flux models. (#10578)

* feat: support loading loras into 4bit quantized models.
* updates
* update
* remove weight check.
Author: sayakpaul
Committer: DN6
Parents: a663a67e
Files (4):
- src/diffusers/loaders/lora_pipeline.py
- src/diffusers/utils/__init__.py
- src/diffusers/utils/loading_utils.py
- tests/quantization/bnb/test_4bit.py
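The workflow this commit enables can be sketched as follows: quantize the Flux transformer to 4-bit with bitsandbytes, build the pipeline around it, then call `load_lora_weights` on top of the quantized base. This is a minimal sketch, not code from the commit; the LoRA repo id `some-user/some-flux-lora` is a hypothetical placeholder, and imports are kept inside the function so the sketch can be read without diffusers or bitsandbytes installed.

```python
def load_flux_4bit_with_lora():
    """Sketch: load a LoRA into a 4-bit (bnb) quantized Flux pipeline.

    Assumes diffusers with bitsandbytes support and GPU access; the
    LoRA repo id below is a hypothetical placeholder.
    """
    # Imports live inside the function so defining the sketch does not
    # require diffusers/bitsandbytes to be installed.
    import torch
    from diffusers import BitsAndBytesConfig, FluxPipeline, FluxTransformer2DModel

    # NF4 4-bit quantization config for the transformer weights.
    quant_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )
    transformer = FluxTransformer2DModel.from_pretrained(
        "black-forest-labs/FLUX.1-dev",
        subfolder="transformer",
        quantization_config=quant_config,
        torch_dtype=torch.bfloat16,
    )
    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev",
        transformer=transformer,
        torch_dtype=torch.bfloat16,
    )
    # With this commit, loading a LoRA no longer fails when the base
    # transformer weights are bnb 4-bit quantized.
    pipe.load_lora_weights("some-user/some-flux-lora")  # hypothetical repo id
    return pipe
```

Calling `load_flux_4bit_with_lora()` downloads the model weights, so it is shown here only to illustrate the call sequence the change unlocks.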