sd-scripts
merge dev to main
#1879
Merged
kohya-ss merged 140 commits into main from dev
Add LoRA+ support
f99fe281
Add LoRA-FA for LoRA+
c7691607
Fix default_lr being applied
1933ab4b
Fix default LR, Add overall LoRA+ ratio, Add log
75833e84
Fix unset or invalid LR from making a param_group
68467bdf
Fused backward pass
4f203ce4
add disable_mmap to args
64916a35
Display name of error latent file
feefcf25
Allow negative learning rate
fc374375
passing filtered hyperparameters to accelerate
2c9db5d9
fix train controlnet
4477116a
Cleaned typing to be in line with accelerate hyperparameters type res…
b886d0a3
Update train_util.py
5cb145d1
disable main process check for deepspeed #1247
52652cba
pop weights if available #1247
0540c33a
Regenerate failed file
040e26ff
removed unnecessary `torch` import on line 115
fdbb03c3
Merge pull request #1233 from rockerBOO/lora-plus
834445a1
move loraplus args from args to network_args, simplify log lr desc
969f82ab
Fix caption_separator missing in subset schema
dbb7bb28
Add caption_separator to output for subset
8db0cadc
support block dim/lr for sdxl
58c2d856
add debug log
52e64c69
update loraplus on dylora/lora_fa
7fe81502
fix dylora loraplus
3fd8cdc5
Merge pull request #1259 from 2kpr/fused_backward_pass
2a359e0a
update help message for fused_backward_pass
017b82eb
add experimental option to fuse params to optimizer groups
b56d5f78
fix get_trainable_params in controlnet-llite training
793aeb94
chore: Refactor optimizer group
607e041f
update readme
c1ba0b43
Merge branch 'dev' into fused-backward-pass
6dbc23cf
update README for fused optimizer
f3d2cf22
update README for fused optimizer
bee8cee7
Merge pull request #1319 from kohya-ss/fused-backward-pass
7983d3db
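The fused-backward-pass commits above cut peak memory by stepping the optimizer per parameter as soon as that parameter's gradient has been accumulated (PyTorch exposes this via `Tensor.register_post_accumulate_grad_hook`), instead of holding every gradient until a global `optimizer.step()`. A torch-free simulation of the idea, with hypothetical names, just to show the memory effect:

```python
def fused_sgd_steps(params, grads, lr):
    """Simulate a fused backward pass: apply the SGD update for each
    parameter the moment its gradient arrives, then discard that
    gradient, so at most one gradient is held at a time.

    Returns (updated_params, peak_gradients_held). In an unfused pass,
    peak_gradients_held would be len(params) instead of 1.
    """
    held = 0   # gradients currently resident
    peak = 0   # worst-case number of resident gradients
    out = []
    for p, g in zip(params, grads):
        held += 1                 # gradient for this param materializes
        peak = max(peak, held)
        out.append(p - lr * g)    # optimizer step for this param immediately
        held -= 1                 # gradient freed right away
    return out, peak
```

This is only a sketch of the scheduling; the real implementation hooks into PyTorch's autograd accumulation, and (per a later commit) fusing is incompatible with gradient accumulation across steps.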
Merge branch 'dev' into lora-plus
e9f3a622
Merge branch 'dev' into lora-plus
e01e1487
fix typo
1ffc0b33
Merge branch 'dev' into lora-plus
c6a43705
revert lora+ for lora_fa
3c8193f6
update docs etc.
44190416
Merge pull request #1331 from kohya-ss/lora-plus
02298e3c
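The LoRA+ commits above train the LoRA "up" (B) matrices at a higher learning rate than the "down" (A) matrices, with the ratio passed through `network_args`. A minimal sketch of the parameter-grouping idea; the function name and the `"lora_up"` naming convention are assumptions for illustration, not sd-scripts' actual API:

```python
def build_loraplus_param_groups(named_params, base_lr, loraplus_ratio):
    """Split parameters into two optimizer groups: LoRA 'up' (B)
    matrices get base_lr * loraplus_ratio, everything else gets base_lr.

    named_params: iterable of (name, param) pairs.
    """
    up, rest = [], []
    for name, param in named_params:
        (up if "lora_up" in name else rest).append(param)
    groups = []
    if rest:
        groups.append({"params": rest, "lr": base_lr})
    if up:
        groups.append({"params": up, "lr": base_lr * loraplus_ratio})
    return groups
```

The resulting list is what you would hand to a PyTorch optimizer constructor, which accepts per-group `lr` overrides.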
Merge pull request #1266 from Zovjsra/feature/disable-mmap
8d1b1acd
update readme and help message etc.
9ddb4d7a
Merge pull request #1278 from Cauldrath/catch_latent_error_file
78020936
raise original error if error occurred in checking latents
37015078
update readme
39b82f26
Merge pull request #1291 from frodo821/patch-1
e96a5217
Merge pull request #1312 from rockerBOO/patch-2
1c296f72
Merge pull request #1313 from rockerBOO/patch-3
a384bf21
fix create_network_from_weights doesn't work
16677da0
update README
589c2aa0
add prompt option '--f' for filename
153764a6
support Diffusers' based SDXL LoRA key for inference
146edce6
update README
2f19175d
Merge pull request #1322 from aria1th/patch-1
0640f017
update README and format code
e3ddd1fb
Merge pull request #1285 from ccharest93/main
47187f70
add `--log_config` option to enable/disable output training config
c68baae4
Merge pull request #1284 from sdbds/fix_traincontrolnet
de0e0b94
remove dependency for omegaconf #ref 1284
e4d9e3c8
Merge pull request #1277 from Cauldrath/negative_learning
38e4c602
update README
4c798129
update README
febc5c59
Add option to use the image's alpha channel as the loss mask (#1223)
db675290
revert kwargs to explicit declaration
f2dd43e1
simplify and update alpha mask to work with various cases
da6fea3d
Add LoRA+ LR Ratio info message to logger
00513b9b
Merge pull request #1347 from rockerBOO/lora-plus-log-info
fb12b6d8
fix to work cond mask and alpha mask
e8cfd4ba
Update issue link
d50c1b3c
Merge branch 'dev' into alpha-masked-loss
58cadf47
add doc for mask loss
a4c31551
Merge pull request #1349 from rockerBOO/patch-4
ffce3b50
Update masked_loss_README-ja.md
71ad3c0f
Merge branch 'dev' into alpha-masked-loss
2870be9b
update docs for masked loss
fc85496f
Merge pull request #1339 from kohya-ss/alpha-masked-loss
0d96e10b
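The alpha-masked-loss commits above weight the per-pixel reconstruction loss by the image's alpha channel, so transparent regions contribute nothing. A torch-free sketch of a mask-weighted MSE, using flat lists in place of image tensors (illustrative only, not the repo's exact normalization):

```python
def masked_mse(pred, target, mask):
    """Squared error weighted by an alpha mask in [0, 1], normalized
    by the mask sum so masked-out pixels neither contribute loss nor
    dilute the average.
    """
    weighted = sum(m * (p - t) ** 2 for p, t, m in zip(pred, target, mask))
    denom = sum(mask)
    return weighted / denom if denom > 0 else 0.0
```

A fractional alpha acts as a soft weight, which is why the commits above also had to reconcile it with the conditioning mask and the latent disk cache.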
Final implementation
b2363f10
Skip the final 1 step
3eb27ced
fix alpha mask without disk cache closes #1351, ref #1339
e5bab69e
Merge pull request #1353 from KohakuBlueleaf/train_resume_step
321e24d8
update for corner cases
4dbcef42
Bump crate-ci/typos from 1.19.0 to 1.21.0, fix typos, and updated _ty…
4ecbac13
Merge pull request #1361 from shirayu/update/github_actions/crate-ci/…
5bfe5e41
set static graph flag when DDP ref #1363
58fb6481
make forward/backward paths same ref #1363
1a104dc7
Merge branch 'dev' of https://github.com/kohya-ss/sd-scripts into dev
3259928c
Merge branch 'dev' into train_resume_step
4a441889
update README
18d7597b
Merge pull request #1359 from kohya-ss/train_resume_step
22413a52
add grad_hook after restore state closes #1344
56bb81c9
Merge branch 'main' into dev
9dd1ee45
show file name if error in load_image ref #1385
0b3e4f7a
judge image size for using diff interpolation
87526942
Generate sample images without having CUDA (such as on Macs)
2e67978e
Revert "judge image size for using diff interpolation"
1f16b80e
instead cv2 LANCZOS4 resize to pil resize
9ca7a5b6
correct option name closes #1446
74f91c2f
fix SD1.5 LoRA extraction #1490
afb971f9
Handle args.v_parameterization properly for MinSNR and changed predic…
1e8108fe
Merge pull request #1505 from liesened/patch-2
4ca29edb
update readme
d5c076cf
Merge pull request #1433 from millie-v/sample-image-without-cuda
319e4d98
Merge pull request #1426 from sdbds/resize
16bb5699
update README, format code
0005867b
Merge branch 'main' into dev
62ec3e64
Add New lr scheduler (#1393)
fd68703f
Fix to work PIECEWISE_CONSTANT, update requirement.txt and README #1393
6dbfd47a
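The scheduler commits above (#1393) add new learning-rate schedules, including PIECEWISE_CONSTANT. The schedule itself is simple: a sorted list of step boundaries and one learning rate per interval. A generic sketch of the lookup (not sd-scripts' actual signature):

```python
def piecewise_constant_lr(step, boundaries, values):
    """Return the learning rate for `step`, given ascending step
    `boundaries` and len(boundaries) + 1 `values`: values[i] applies
    before boundaries[i], and the last value applies after all of them.
    """
    for boundary, value in zip(boundaries, values):
        if step < boundary:
            return value
    return values[len(boundaries)]
```

For example, boundaries `[100, 200]` with values `[1e-3, 1e-4, 1e-5]` give 1e-3 for steps 0-99, 1e-4 for 100-199, and 1e-5 afterward.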
fix typo
c7c666b1
Merge branch 'main' into dev
9d286076
Merge branch 'main' into dev
43ad7386
improve OFT implementation closes #944
93d9fbf6
Bug fix: alpha_mask load
e7040669
Merge branch 'main' into dev
d7e14721
Merge pull request #1615 from Maru-mee/patch-1
0b7927e5
make timestep sampling behave in the standard way when huber loss is …
e1f23af1
retain alpha in pil_resize backport #1619
29177d2f
init
ab7b2318
Merge pull request #1628 from recris/huber-timesteps
c1d16a76
update README
e74f5814
delete code for cleaning
1beddd84
fix flip_aug, alpha_mask, random_crop issue in caching
bf91bea2
Merge pull request #1640 from sdbds/ademamix8bit
4296e286
fix to work bitsandbytes optimizers with full path #1640
a94bc84d
update readme
ce49ced6
adjust min/max bucket reso divisible by reso steps #1632
fe2aa324
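The bucket-resolution commit above (#1632) snaps the min/max bucket resolutions to multiples of the bucket step size. A small sketch of one plausible rounding rule, rounding min up and max down so every bucket stays inside the requested range (the helper name and rule are assumptions for illustration):

```python
def adjust_bucket_reso(min_reso, max_reso, reso_steps):
    """Snap bucket resolution bounds to multiples of reso_steps:
    min is rounded up, max is rounded down, so all generated bucket
    resolutions are divisible by the step and inside [min, max].
    """
    adj_min = ((min_reso + reso_steps - 1) // reso_steps) * reso_steps
    adj_max = (max_reso // reso_steps) * reso_steps
    return adj_min, adj_max
```

Already-divisible bounds pass through unchanged; e.g. with a step of 64, bounds of (250, 1030) become (256, 1024).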
update help text #1632
15675492
fix to work linear/cosine scheduler closes #1651 ref #1393
012e7e63
Fix training for V-pred and ztSNR
8fc30f82
Only add warning for deprecated scaling vpred loss function
e1b63c22
Remove scale_v_pred_loss_like_noise_pred deprecation
0e7c5929
Remove v-pred warnings
be14c062
Merge pull request #1715 from catboxanon/vpred-ztsnr-fixes
c632af86
Merge pull request #1717 from catboxanon/fix/remove-vpred-warnings
b8ae745d
update README
b1e65040
Merge branch 'main' into dev
900d551a
Merge branch 'main' into dev
e070bd99
Merge branch 'main' into dev
6adb69be
update README for merging
345daaa9
kohya-ss merged 6e3c1d0b into main 1 year ago