diffusers
263b9734 - [LoRA] feat: support loading loras into 4bit quantized Flux models. (#10578)

Commit
175 days ago

[LoRA] feat: support loading loras into 4bit quantized Flux models. (#10578)

* feat: support loading loras into 4bit quantized models.
* updates
* update
* remove weight check.
Author: DN6
Committer: DN6
Files changed:
  • src/diffusers/loaders/lora_pipeline.py
  • src/diffusers/utils/__init__.py
  • src/diffusers/utils/loading_utils.py
  • tests/quantization/bnb/test_4bit.py
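For context, the workflow this commit enables can be sketched roughly as follows: quantize the Flux transformer to 4-bit with bitsandbytes, build the pipeline around it, then load a LoRA on top. This is a hedged sketch of the standard diffusers bitsandbytes flow, not code from the commit itself; the LoRA repo id (`some-user/flux-lora`) is a placeholder, and running it requires a GPU plus the gated FLUX.1-dev weights.

```python
import torch
from diffusers import BitsAndBytesConfig, FluxPipeline, FluxTransformer2DModel

# 4-bit NF4 quantization config for the transformer (bitsandbytes backend).
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load only the transformer in 4-bit; the rest of the pipeline stays in bf16.
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)

# Prior to this commit, loading a LoRA into the 4-bit transformer failed;
# with it, the adapter weights are applied on top of the quantized layers.
pipe.load_lora_weights("some-user/flux-lora")  # placeholder repo id
```

The design point is that the LoRA delta is kept as a small additive low-rank term alongside the frozen 4-bit base weights, so the adapter can be loaded without re-quantizing or materializing the full-precision model.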