pytorch
e3f54d46 - Support Custom Memory Plan in CudaCachingAllocator (#83837)

Support Custom Memory Plan in CudaCachingAllocator (#83837)

Squashed history of the intermediate commits on this branch:

* added sequence number
* added test to ensure the sequence number is logged
* track allocate and free event sequences
* track allocate/free events (iterated through v2, a singleton-based version, comment revisions, a try-finally cleanup, and new flags)
* M2-start, M2
* support-custom-memory-plan
* Update __init__.py
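For context on the allocate/free event tracking this commit refers to, the caching allocator's activity can already be observed from Python through `torch.cuda.memory_stats()`. The sketch below is not part of this commit; it only illustrates, under the assumption of a CUDA-enabled build, how allocation and free counts move as tensors are created and released:

```python
import torch

# Minimal sketch (not part of this commit): watch CUDA caching allocator
# activity via torch.cuda.memory_stats(). Keys follow the documented
# "{metric}.{pool}.{quantity}" scheme, e.g. "allocation.all.allocated".
assert torch.cuda.is_available()

before = torch.cuda.memory_stats()

x = torch.empty(1024, 1024, device="cuda")  # triggers an allocate event
y = torch.empty(512, 512, device="cuda")    # another allocate event
del x                                        # block returned to the allocator (free event)
torch.cuda.synchronize()

after = torch.cuda.memory_stats()
print("new allocations:",
      after["allocation.all.allocated"] - before["allocation.all.allocated"])
print("blocks currently allocated:", after["allocation.all.current"])
```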