```diff
         else:
             z = posterior.mode()
-        dec = self.decode(z)
+        dec = self.decode(z).sample
```
Otherwise we return a tuple of `DecoderOutput` when `return_dict=False`.
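For context, here is a minimal sketch of the surrounding `forward()` logic, assuming the usual diffusers `AutoencoderKL`-style conventions (the signature and helper names are illustrative, not copied from this file):

```python
# Sketch only: assumes diffusers-style encode()/decode() and DecoderOutput.
def forward(self, sample, sample_posterior=False, return_dict=True, generator=None):
    posterior = self.encode(sample).latent_dist
    if sample_posterior:
        z = posterior.sample(generator=generator)
    else:
        z = posterior.mode()
    # decode() returns a DecoderOutput by default, so unwrap .sample here;
    # otherwise the return_dict=False branch below would return a tuple
    # containing a DecoderOutput instead of a tuple of tensors.
    dec = self.decode(z).sample
    if not return_dict:
        return (dec,)
    return DecoderOutput(sample=dec)
```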
Unused.
```diff
         self, x: torch.Tensor, generator: Optional[torch.Generator] = None, return_dict: bool = True
     ) -> Union[DecoderOutput, Tuple[torch.Tensor]]:
         if self.use_slicing and x.shape[0] > 1:
-            output = [self._tiled_decode(x_slice) if self.use_tiling else self.decoder(x) for x_slice in x.split(1)]
+            output = [
+                self._tiled_decode(x_slice) if self.use_tiling else self.decoder(x_slice) for x_slice in x.split(1)
+            ]
```
Should use `x_slice` and not `x`.
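To make the failure mode concrete, here is a hedged sketch of the sliced-decode path (the `else` branch and the `torch.cat` are assumed from the usual diffusers pattern, not copied from this file). With `x` instead of `x_slice`, every iteration would decode the full batch, so slicing would save no memory and concatenation would produce a batch `n` times too large:

```python
# Sketch of the corrected slicing path; surrounding code is assumed.
if self.use_slicing and x.shape[0] > 1:
    # Decode one sample at a time to cap peak memory.
    output = [
        self._tiled_decode(x_slice) if self.use_tiling else self.decoder(x_slice)
        for x_slice in x.split(1)
    ]
    decoded = torch.cat(output)
else:
    decoded = self._tiled_decode(x) if self.use_tiling else self.decoder(x)
```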
Could maybe further refactor this to match how the current Cog/Mochi implementations handle it with a `_decode` method, as sketched below. The code flow is a bit easier to understand that way.
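Roughly, the suggested split would look like this (a sketch of the CogVideoX/Mochi-style pattern with illustrative names, not the exact source): `decode()` owns slicing and dict packing, while `_decode()` owns the actual, optionally tiled, decoding.

```python
# Sketch only: DecoderOutput and the tiling/slicing flags follow the
# existing diffusers convention but are illustrative here.
def _decode(self, z: torch.Tensor) -> torch.Tensor:
    if self.use_tiling:
        return self._tiled_decode(z)
    return self.decoder(z)

def decode(self, z: torch.Tensor, return_dict: bool = True):
    if self.use_slicing and z.shape[0] > 1:
        decoded = torch.cat([self._decode(z_slice) for z_slice in z.split(1)])
    else:
        decoded = self._decode(z)
    if not return_dict:
        return (decoded,)
    return DecoderOutput(sample=decoded)
```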
Yeah sure, feel free to club those in your PR.
@DN6 a gentle ping.
```diff
                     temb,
                     zq,
-                    conv_cache=conv_cache.get(conv_cache_key),
+                    conv_cache.get(conv_cache_key),
```
Because the `torch.utils.checkpoint.checkpoint()` function doesn't accept a `conv_cache` keyword argument.
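A minimal sketch of the constraint, assuming the usual `create_custom_forward` wrapper pattern used in diffusers (`block`, `temb`, `zq`, and `conv_cache` are illustrative stand-ins): with the legacy reentrant checkpointing path, extra keyword arguments are not forwarded to the wrapped function, so the cache has to ride along positionally.

```python
import torch
import torch.utils.checkpoint


def run_block(block, hidden_states, temb, zq, conv_cache):
    def create_custom_forward(module):
        def custom_forward(*inputs):
            return module(*inputs)

        return custom_forward

    # conv_cache must be positional: with the (legacy) reentrant path,
    # checkpoint() does not forward extra keyword arguments to the wrapped
    # function, so conv_cache=... would fail here.
    return torch.utils.checkpoint.checkpoint(
        create_custom_forward(block),
        hidden_states,
        temb,
        zq,
        conv_cache,
    )
```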
Because these are supported.
@a-r-r-o-w @DN6 a gentle ping.
@a-r-r-o-w merging this to unblock you and will let you add any leftover tests. Hopefully that is okay.
What does this PR do?
Internal thread: https://huggingface.slack.com/archives/C065E480NN9/p1730203711189419.
Tears apart `test_models_vae.py` to break the tests up in accordance with the autoencoder model classes we have under `src/diffusers/models/autoencoders`.
Didn't include Allegro as it's undergoing some refactoring love from Aryan. Discussed internally.
Some comments inline.