onnxruntime
47f136e2 - Speed Up Whisper Export (#16504)

### Description

Add a greedy option to the initializer-deduplication step of the Whisper export. To detect shared initializers, ORT currently compares every initializer against every other initializer (O(n^2)). In the comparison operator, if the two initializers store their data differently (e.g. `raw_data` versus typed `int64_data`), both initializers are converted to numpy arrays and the cast results are compared. This cast happens on every such comparison and dominates the runtime of finding shared initializers; it is the bottleneck of the current Whisper export script.

The conversion to a numpy array is useful for detecting equal initializer values across nodes of different data types (e.g. recognizing that a bias value of 0.0 is the same as a slice index of 0), but it is not triggered when comparing initializers of the same data type (e.g. a weight value of 0.6 == a weight value of 0.6). The latter case is where the majority of the benefit lies for Whisper, so by skipping the numpy-array comparison path we save a lot of time at minimal cost. In other words, this PR adds an option that removes the ability to detect shared initializers of different types (e.g. a Slice index and a MatMul constant) while retaining the ability to deduplicate weights.

### Motivation and Context

The current time to export Whisper-large is prohibitive.

Co-authored-by: Peter McAughan <petermca@microsoft.com>
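The trade-off described above can be sketched in a few lines of Python. This is a minimal illustration, not the ORT implementation: `Initializer` is a hypothetical stand-in for `onnx.TensorProto`, and `deduplicate` mimics the O(n^2) pairwise scan. In greedy mode, initializers of different dtypes are never considered equal, so the expensive cast-and-compare path is skipped entirely; in non-greedy mode, both sides are cast to a common type so that, e.g., a float bias of 0.0 matches an int64 slice index of 0.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class Initializer:
    # Hypothetical stand-in for onnx.TensorProto (name + tensor data).
    name: str
    array: np.ndarray


def initializers_equal(a: Initializer, b: Initializer, greedy: bool = True) -> bool:
    if a.array.dtype == b.array.dtype:
        # Same dtype: cheap byte-for-byte value comparison, no casting needed.
        return a.array.shape == b.array.shape and np.array_equal(a.array, b.array)
    if greedy:
        # Greedy mode: give up on cross-dtype matches to avoid the costly cast.
        return False
    # Slow path: cast both sides to a common type and compare values.
    return a.array.shape == b.array.shape and np.array_equal(
        a.array.astype(np.float64), b.array.astype(np.float64)
    )


def deduplicate(initializers, greedy=True):
    """O(n^2) pairwise scan over initializers, as described in the commit message.

    Returns the unique initializers plus a mapping from duplicate names to the
    name of the initializer they alias.
    """
    unique, mapping = [], {}
    for init in initializers:
        for kept in unique:
            if initializers_equal(init, kept, greedy):
                mapping[init.name] = kept.name
                break
        else:
            unique.append(init)
    return unique, mapping
```

For example, with a float32 bias of 0.0 and an int64 index of 0, greedy mode keeps both (different dtypes are never compared by value), while non-greedy mode merges them; identical float32 weights are deduplicated in both modes.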