pytorch
3efefc40 - [CUDA graphs] Makes sure all graphs tests call empty_cache() at some point before capture (#59233)

Summary:
Graphs tests are sometimes flaky in CI ([example](https://app.circleci.com/pipelines/github/pytorch/pytorch/328930/workflows/0311199b-a0be-4802-a286-cf1e73f96c70/jobs/13793451)) because when the GPU runs near its max memory capacity (not unusual during a long test), the caching allocator may, to satisfy a new allocation that doesn't match any existing unused block, call `synchronize_and_free_events` to wait on block end-of-life events, cudaFree the unused blocks, and re-cudaMalloc a new block. For ungraphed ops this isn't a problem, but synchronizing or calling cudaFree while capturing is illegal, so `synchronize_and_free_events` raises an error if called during capture.

The graphs tests themselves don't use much memory, so calling torch.cuda.empty_cache() at some point before their captures should ensure memory is available and the captures never need `synchronize_and_free_events`. I was already calling empty_cache() near the beginning of several graphs tests; this PR extends that to the ones I forgot.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59233

Reviewed By: mruberry

Differential Revision: D28816691

Pulled By: ngimel

fbshipit-source-id: 5cd83e48e43b1107daed5cfa2efff0fdb4f99dff
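The pattern the commit describes is simply to call torch.cuda.empty_cache() at some point before capture begins, so the allocator never has to synchronize or cudaFree mid-capture. Below is a minimal, self-contained sketch of that pattern; it uses the present-day public torch.cuda.CUDAGraph / torch.cuda.graph API rather than whatever internal capture helpers the tests used at the time of this commit, and the function and tensor names are illustrative, not taken from the PyTorch test suite.

```python
# Sketch only: illustrates "empty_cache() before capture", not the actual test code from #59233.
import torch

def capture_add_one():
    # Release unused cached blocks before capturing. Synchronizing or calling
    # cudaFree during capture is illegal, so freeing the cache up front keeps
    # the caching allocator from ever needing synchronize_and_free_events
    # while the graph is being captured.
    torch.cuda.empty_cache()

    static_input = torch.zeros(8, device="cuda")

    # Warm up on a side stream so lazily initialized state is created before capture.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        static_output = static_input + 1
    torch.cuda.current_stream().wait_stream(s)

    # Capture the work into a CUDA graph.
    graph = torch.cuda.CUDAGraph()
    with torch.cuda.graph(graph):
        static_output = static_input + 1

    # Replay: fill the static input, replay the captured kernels, read the output.
    static_input.fill_(3.0)
    graph.replay()
    return static_output  # tensor of 4.0s

if __name__ == "__main__":
    print(capture_add_one())
```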