Patch release v1.6.4 #13019
carmocca added this to the 1.6.x milestone 3 years ago
carmocca changed the base branch from master to release/stable 3 years ago
carmocca force-pushed from 5b1fed36 to ca80c122 3 years ago
carmocca force-pushed from ca80c122 to 89b0e52d 3 years ago
carmocca marked this pull request as ready for review 3 years ago
awaelchli approved these changes on 2022-05-31
carmocca force-pushed from adc7214d to 777f5cae 3 years ago
e9f77376 Fix zero division error for empty dataloaders (#12885)
80d2cf0a Merge pull request #12723 from PyTorchLightning/req/strategies
26be3c9a Unpin CUDA docker image for GPU CI (#12373)
4d5bb5be Fix default int values being float (#12989)
b3b8ec6b parse strategies as own extras (#12975)
c6cf5f2f Avoid redundant callback restore warning while tuning (#13026)
3c96219a Fix double precision during evaluation (#12983)
1493eee7 add freeze for development and full range for install (#12994)
26b6b4f9 Use hpu hmp for bf16 alone (#13028)
788dbfd8 Remove twine dependency from requirements (#13050)
13b21865 Fix version freeze comparison (#13057)
3d067e0e GPU CI: Increase timeout from 55min to 65min (#13064)
acf443ca Fix number of references to LightningModule (#12897)
6009936c Fix `materialize_module` recursively setting its child module (#12870)
86f44b4d CI: Azure - multiple configs (#12984)
3978e68a GPU CI: Increase timeout from 65 to 100min (#13104)
db2a36c9 Update trainer profiler typehint to use `Profiler` instead of the dep…
e1da6d62 Avoid firewall message from `find_free_network_port` (#13113)
20353c4f Fix tests failing on a single GPU (#11753)
8fe02b09 Use coverage>=6.4 (#13132)
cb5bf3df Fix torchelastic detection with non-distributed installations (#13142)
9f3a103b Fix doctests
6e64183f Enable all ddp params for hpu parallel strategy (#13067)
97a5fe93 Update mlflow requirement from <=1.24.0,>=1.0.0 to >=1.0.0,<1.27.0 in…
38abc18e Update neptune-client requirement from <=0.15.2,>=0.10.0 to >=0.10.0,…
826e8118 Update matplotlib requirement from <=3.5.1,>3.1 to >3.1,<3.5.3 in /re…
a627ce6c Update jsonargparse[signatures] requirement from <=4.7.1,>=4.7.1 to >…
ee17c7f2 Update tensorboard requirement from <=2.8.0,>=2.2.0 to >=2.2.0,<2.10.…
a66d1d0b Update deepspeed requirement from <0.6.0 to <0.7.0 in /requirements (…
5934e7ab Update typing-extensions requirement from <=4.1.1,>=4.0.0 to >=4.0.0,…
7080ab2d Pin protobuf version (#13177)
62265252 xfail flaky quantization test blocking CI (#13177)
92f3b30e Fix standalone test collection (#13177)
c9d49fe3 Revert "Update deepspeed requirement from <0.6.0 to <0.7.0 in /requir…
4a5b315e Avoid changing the current `cudnn.benchmark` value (#13154)
1051b392 Fix not running test codes (#13089)
1ec73f9f Fix logging's step values when multiple dataloaders are used during e…
carmocca force-pushed from 777f5cae to 1ec73f9f 3 years ago
f4e6630a Specify `Trainer(benchmark=False)` in parity benchmarks (#13182)
5cd763a7 Fix epoch logging on train epoch end (#13025)
acc63398 Fix initialization of optimizers in DDP Strategy (#11952)
carmocca force-pushed from 4b8b10c5 to acc63398 3 years ago
lexierule merged a5f82f5f into release/stable 3 years ago
lexierule deleted the 1.6.4-draft branch 3 years ago