Adding sparse Lp regularization operator to Caffe2 (#38574)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38574
Adding sparse L1 and L2 regularization operators to Caffe2. These operators work only with run_after_optimize, not run_on_loss: applying the regularization after the optimizer step was easier to implement, particularly for the L1 norm, which is preferable in some cases but is non-differentiable at zero and therefore cannot simply be added to the loss and differentiated.
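As a rough illustration of the run_after_optimize approach (this is a hypothetical NumPy sketch, not the actual Caffe2 operator), applying the regularization after the update reduces to a proximal step on only the parameter rows touched in the current sparse update. The function name, arguments, and update rules below are assumptions for illustration:

```python
import numpy as np

def sparse_lp_regularize(param, indices, reg_lambda, p):
    """Hypothetical sketch: apply Lp regularization after the optimizer step,
    touching only the rows indexed by the sparse gradient (`indices`)."""
    rows = param[indices]
    if p == 1:
        # L1 proximal step (soft-thresholding): handles the
        # non-differentiability at zero directly, pushing small
        # entries exactly to zero.
        param[indices] = np.sign(rows) * np.maximum(np.abs(rows) - reg_lambda, 0.0)
    elif p == 2:
        # L2 proximal step: multiplicative shrinkage toward zero.
        param[indices] = rows / (1.0 + reg_lambda)
    return param
```

Soft-thresholding sidesteps the differentiability issue that makes an L1 penalty awkward to express through run_on_loss.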
Test Plan: Wrote and ran unit tests in operator_test:sparse_lp_regularizer_test.
Differential Revision: D21003029
fbshipit-source-id: 81070a621752560ce03e320d065ce27807a5d278