onnxruntime
9e4dc084 - training with custom autograd Functions (#7513)

training with custom autograd Functions (#7513)

* Register Torch custom autograd.Function
* Add flag to suppress pybind11 warning
* Avoid unnecessary include in cmake
* Add missing reference
* Add getter for registered functions
* Format to make subsequent changes cleaner
* Fix interop feature build failure
* Forward pass: run PyOp on CPU EP
* Clean up the code
* Fix build
* Define new ops
* Refactor pyop: extract PyOpLibProxy class
* Hacks to run example
* Implement the kernel compute function
* Add back PyOp for comparison experiments
* Debug info: thread ID
* Refine the kernels
* Polish code (cherry-picked from commit 4ed606f9a0833592b325a4b40cf917e219845f6f)
* Fix the tensor address mismatch on the C++ side
* PythonOpGrad compute
* Add distributed test case
* Refine test cases
* Get dist.get_rank() in autograd forward pass
* Add CUDA kernels
* Store float, int, and tuples of them as PythonOp's attributes
* Populate local changes
* Fix bugs
* PythonOp/PythonOpGrad CUDA kernels
* Support non-tensor inputs
* Single-GPU FP16 run passes (cherry-picked from commit e539989e91e18ee997900292d3493b97d3eafa8a)
* Fix segment
* Add basic test cases
* Save progress
* Fix gradient builder for an Add op whose inputs are the same
* Add test cases for the autograd fallback feature
* Fix refcount issue; add thread ID for debugging
* POC: remove interface class
* Remove interface classes
* Clean up a bit
* Coarse-grained cleanup after rebasing on master
* Reset pyop and language_interop_ops to latest master
* Fix parts missed during merge
* Restructure Torch-related language interop files
* Fix build
* Fix tests and build
* Fix build and basic unit tests
* Fix most of the UTs
* Remove unnecessary import
* Clean up and fix build when enabling language_interop_ops
* Fix single-GPU UTs
* Move runner registration into the ORT package
* Update distributed UTs to the new style
* Also fix distributed UTs and the leaf-gradient problem
* Static generation for constant args
* Move arg_positions_ to a static field
* Rename some functions
* Move arg creation into a function
* Clean up output logic in PythonOp
* Move PythonOp's ctor
* Revise PythonOpGrad
* Fix "ORT only supports contiguous tensor for now" for inputs
* Fix evaluation-mode error, add test, and clean up
* Clean up code
* Fix issues introduced by a recent master change (enabled symbolic shape inference)
* Automatically register forward/backward function pointers; clean up
* Fix multi-output case
* Add a test back
* Fix build and clean up
* RAII for function-parameter PyObjects
* Use new exporter
* Clean up full names in the new exporter
* Fix UTs
* Format a file
* Add "inplace" back; remove a legacy comment
* Refine TorchProxy: (1) make TorchProxy a formal singleton class; (2) remove the unused Scope class; (3) simplify the calls to Forward and Backward. The two functions now automatically acquire and release the GIL state, so the user doesn't need any GIL-related calls.
* Format
* Add lock to avoid a race condition when registering Python objects
* Fix Python call parameter reference issues; add RefcountTracker for debug builds; clean up
* Clean up prints
* Resolve some comments; clean up
* Fix a potential bug
* Track PyObjects consistently
* Move kernels to the CPU provider as base classes
* Refactor: (1) extract PythonOpBase/PythonOpGradBase; (2) implement CPU kernels; (3) add test coverage for CPU kernels
* Refine registration code
* Add a missing macro
* Release Python call result objects with PythonObjectPtr; add UnRegisterContext; track PyObjects for debugging; clean up
* Fix random segfault issue: releasing the wrong ctx pointer in inplace cases
* Put refcounting in a debug macro
* Move GIL out
* Refine tests
* Fix memory-leak and forward-output lifecycle issues: (1) unregister the OrtValue PythonObject; currently the OrtValue shares a buffer with PythonOp/PythonOpGrad's output, so after those kernels' outputs are released, the "leaked" OrtValue keeps the shared buffer from being released. (2) In PyTorch's forward+backward execution, the forward outputs (e.g. torch tensors) keep the context/saved variables/dirty inputs, etc. alive for backward execution, so their lifetime must extend past the backward run; this change adds such a dependency of PythonOpGrad on PythonOp (see the lifecycle sketch below).
* Move dlpack->OrtValue conversion into C++ to avoid temporary object registration (see the DLPack sketch below)
* Fix the over-released Py_False/Py_True; refine tests
* Clean up unused functions
* Always assume the first forward output is the context, so we don't need to test unused cases
* Fix a memory leak
* Move instead of copying unique_ptr; avoid C-style casts
* Use the inplace attribute to determine whether input tensors are copied
* Move DlpackCapsuleDestructor to a common place
* Thread-safe TorchProxy
* Use OrtValue instead of OrtValue*
* Only keep checks for debug builds
* Wrap some long lines per comment
* onnx_export_type --> kwargs
* Use requires_grads to create PythonOpGrad's inputs
* Add files missed during master merge
* Fix build issue after merge
* Address two comments: (1) internalize DlpackCapsuleDestructor; (2) change "(" to "]" to describe a closed interval
* Address some comments: (1) "override" -> "overwrite" to avoid a reserved keyword; (2) call DLPack's helper to create OrtValue, avoiding repeated code
* Address comments: (1) pass std::mutex to registration helpers so their callers don't have to lock the mutex explicitly; (2) rename "func_context_pool_mutex_" to "mutex_"; this mutex is the global mutex for OrtTorchFunctionPool
* Add bridging code to make CUDA kernels work with merged master
* Put debug-macro check within RefCountTracker; use the default logger for debug info; remove useless ortvalue_ptr interface; fix typos; revert unnecessary blank-line changes
* Fix some comments
* Resolve more comments
* Capitalize a word
* Use unique_ptr instead of ObjectPointer for PyObject management; add convention
* Support symbolic shapes
* Remove unused variable
* Fix build
* Enable function registration for training only; rectify ToDlpack/FromDlpack merge with master
* Don't add context for non-PythonOp operators (for example AtenOp)
* Fix build error
* Polish frontend: (1) avoid adding kwargs to ORTModule's ctor; (2) use onnx_export_type rather than kwargs for type safety; (3) fix some build bugs
* Resolve simpler comments
* Resolve export-related comments
* Sync master; fix tests; fix non-training build error
* Fix build errors
* Add target link lib
* Fix Windows build error
* Fix orttraining-linux-ci build
* Disable autograd test; clean up
* Fix Linux orttraining CI build
* Try fixing Windows build error
* Revise append calls in runner
* Enable custom function via a function
* Rename to avoid a reserved keyword
* Use a list comprehension
* Set ORT random seed in tests
* Remove print code and fix ctx shape
* [] -> list()
* Move autograd.Function and nn.Module into corresponding functions
* Move test helpers
* Polish dist test a bit; tried moving helpers to a helper file, but it causes a deadlock
* Try fixing undefined reference
* Context is not managed by the global pool
* Polish dist test
* Polish dist test
* Add enable_custom_autograd_function (an end-to-end usage sketch follows this list)
* Remove enable_custom_autograd_function from ctors
* Add docstrings
* Shorter code
* Address comments
* Add one empty line
* Revert a minor, unneeded change
* Address comments
* Back to reference
* Fix Windows builds
* Fix Windows debug build failing to find 'python39_d.lib'
* Fix macOS build error
* Revert _to_contiguous change
* Add debugging tag for orttraining-cpu-ci
* Fix wrong PYTHON_LIBRARIES, which is affected by the PYTHON_LIBRARY given in the build command
* Add debugging info
* Fix the build for this case: PYTHON_LIBDIR: /opt/_internal/cpython-3.7.10/lib, PYTHON_EXECUTABLE: /opt/python/cp37-cp37m/bin/python3, PYTHON_MULTIARCH: x86_64-linux-gnu, PYTHON_LIBRARY_PATH: python3.7m
* Fix build error due to Python lib not found
* Fixes: (1) release PyObjects; (2) don't use deepcopy, because we assume autograd.Function's non-tensor inputs are static (constants), so there should be no side effects from calling any autograd.Function multiple times
* Revert dtoc for decreasing refcount
* Add debugging log
* Add debugging tag
* Fix a small leak
* Remove ONNX_FALLTHROUGH flag
* Debug tag
* Debug tag
* Fix builds
* Remove debug tag
* Fix build
* Fix builds
* Fix build
* Install python3 in CentOS, in case there is no libpython3.xm.so
* Build Python .so for Red Hat
* Add training-CPU-specific Dockerfile; build Python .so inside
* Revert build-cpython change
* Try fixing numpy include issue
* Run install_deps after re-installing CPython
* Fix build; remove debug tag
* Install OpenSSL before CPython
* Let's say: builds pass!
* Add build flag for Torch interop; only enable it when training+Python is enabled
* Skip ComputeBroadcastBackwardAxesDynamic for shared inputs
* Fix build
* Add debug info for padgrad test
* Fix builds
* Split dlpack_converter into C++ and Python interfaces; different builds use them as needed
* Clean up the changes
* Fix Add/Sub gradient builder
* Fix builds
* Clean up
* Clean up
* Address some comments: (1) use a pointer wrapper to avoid calling Py_DECREF; (2) remove unregister_* functions; (3) allow repeated registration by skipping keys that already exist; (4) unregister context in PythonOpGrad
* Fix over-released Py_Boolean

Co-authored-by: Wei-Sheng Chin <wschin@outlook.com>
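For orientation, the list above is the plumbing (PythonOp/PythonOpGrad kernels, TorchProxy, enable_custom_autograd_function) behind one user-facing capability: running a torch.autograd.Function with a hand-written backward under ORTModule. Below is a minimal sketch of that usage; ClipAt and Net are made-up names, and the ORTModule wiring is commented out because the exact enablement entry point added by this PR is an assumption here.

```python
# Minimal sketch of what this PR enables: a torch.autograd.Function with a
# custom backward running inside an ORTModule-wrapped module. The exporter
# turns the Function into PythonOp/PythonOpGrad nodes, which call back into
# Python through TorchProxy for forward/backward.
import torch

class ClipAt(torch.autograd.Function):  # hypothetical example Function
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha                 # non-tensor input kept as a constant attribute
        ctx.save_for_backward(x)
        return x.clamp(min=alpha)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        grad_x = grad_output * (x >= ctx.alpha).type_as(grad_output)
        return grad_x, None               # no gradient for the non-tensor alpha

class Net(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 4)

    def forward(self, x):
        return ClipAt.apply(self.linear(x), 0.0)

# Hypothetical wiring; the switch name/location added by this PR may differ:
# from onnxruntime.training.ortmodule import ORTModule
# model = ORTModule(Net())
model = Net()                             # runs as plain PyTorch in this sketch
loss = model(torch.randn(2, 4)).sum()
loss.backward()
```

The non-tensor input (alpha) matches the item above about storing float/int attributes on PythonOp: such constants are assumed static, which is also why the PR drops deepcopy for them.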
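Several items above concern the DLPack path (moving dlpack->OrtValue conversion into C++, splitting dlpack_converter, DlpackCapsuleDestructor). The mechanism, sketched below with PyTorch's public torch.utils.dlpack API, is zero-copy buffer sharing; this is also why the "leaked" OrtValue in the memory-leak fix kept a kernel-output buffer alive.

```python
# Sketch of the DLPack exchange that PythonOp/PythonOpGrad rely on: tensors
# cross the ORT<->PyTorch boundary as DLPack capsules, so both sides share
# one buffer (zero-copy) instead of copying.
import torch
from torch.utils.dlpack import to_dlpack, from_dlpack

src = torch.arange(6, dtype=torch.float32)
capsule = to_dlpack(src)          # export: a capsule viewing src's storage, no copy
dst = from_dlpack(capsule)        # import: a new tensor aliasing the same storage

dst[0] = 42.0
assert src[0].item() == 42.0      # same buffer: writes are visible on both sides

# A capsule may be consumed only once; from_dlpack marks it as used. On the
# ORT side, the analogous cleanup hook is the DlpackCapsuleDestructor above.
```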
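Finally, the forward-output lifecycle fix rests on a PyTorch invariant worth stating explicitly: a forward output's grad_fn owns the autograd context (saved variables, dirty inputs), so the output must outlive the backward call. A tiny illustration in plain PyTorch:

```python
# Illustrates the lifecycle rule behind the memory-leak fix above: the
# forward output's grad_fn keeps the autograd context (saved tensors) alive,
# so whatever holds the forward output must survive until backward has run.
# Hence the added dependency of PythonOpGrad on PythonOp.
import torch

x = torch.randn(3, requires_grad=True)
y = (x * x).sum()          # y.grad_fn retains x's saved values for backward
assert y.grad_fn is not None

y.backward()               # only after this can the context be released safely
assert x.grad is not None
```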