onnxruntime
a7bc727a - Cache opSupportLimits to improve the performance and update tracing e… (#25589)

Committed 271 days ago
### Description

Cached opSupportLimits in the WebNN backend instead of querying it from the lower layer on every call, to improve performance. Also updated the trace event in data transfer.

### Motivation and Context

In the current implementation, every call to the ensureTensor API to check an input/output tensor invokes the MLContext.opSupportLimits API to query op support capabilities from Chromium, and this call becomes a hotspot. Calling the API once when the session is created and caching the result avoids the frequent lower-layer API calls.
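The caching pattern described above can be sketched as a lazy getter that queries the lower layer once and reuses the result. This is a minimal illustration, not the actual onnxruntime code: `WebNnBackend`, `MLContextLike`, and the mock context are hypothetical names invented here, while `MLContext.opSupportLimits()` is the real WebNN API being cached.

```typescript
// Shape of the opSupportLimits result; kept loose for the sketch.
type OpSupportLimits = Record<string, unknown>;

// Hypothetical stand-in for the WebNN MLContext interface.
interface MLContextLike {
  opSupportLimits(): OpSupportLimits;
}

// Hypothetical backend wrapper demonstrating the caching idea.
class WebNnBackend {
  private cachedOpSupportLimits: OpSupportLimits | undefined;

  constructor(private context: MLContextLike) {}

  // Query the lower layer on first access (e.g. at session
  // creation) and serve all later accesses from the cache.
  get opSupportLimits(): OpSupportLimits {
    if (this.cachedOpSupportLimits === undefined) {
      this.cachedOpSupportLimits = this.context.opSupportLimits();
    }
    return this.cachedOpSupportLimits;
  }
}

// Mock context that counts lower-layer calls, so the sketch is
// self-contained and the caching behavior is observable.
let calls = 0;
const mockContext: MLContextLike = {
  opSupportLimits() {
    calls++;
    return { input: { dataTypes: ["float32", "float16"] } };
  },
};

const backend = new WebNnBackend(mockContext);
backend.opSupportLimits; // first access queries the lower layer
backend.opSupportLimits; // subsequent accesses hit the cache
console.log(calls); // → 1
```

With this structure, a hot path like a per-tensor validity check can read `backend.opSupportLimits` freely without triggering repeated cross-layer queries into Chromium.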