Fix murmurhash3 inclusion in TensorRT shared library (#14221)
### Description
Updates TensorRT and CANN EPs to use murmurhash3 from core/framework via
provider bridge.
### Motivation and Context
A failure in a packaging pipeline required us to temporarily duplicate
murmurhash3 code for the TensorRT EP. This PR removes the duplicated
code. Here is what was happening:
The original version of this code conditionally included a murmurhash
function in the provider bridge for TensorRT only (not CUDA). The
packaging pipeline selectively [copies binaries from two separate
builds](https://github.com/microsoft/onnxruntime/blob/main/tools/ci_build/github/linux/extract_and_bundle_gpu_package.sh)
(a cuda-only build and a tensorrt build) into a single libs directory.
These are the files within the resulting libs directory:
- onnxruntime.so (copied from tensorrt build, implements murmurhash in
provider bridge host)
- onnxruntime_providers_shared.so (copied from tensorrt build)
- onnxruntime_providers_tensorrt.so (copied from tensorrt build)
- onnxruntime_providers_cuda.so (copied from **cuda-only build**,
expects a provider host w/o murmurhash)
The [squeezenet
example](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_cxx/squeezenet)
crashed when onnxruntime_providers_cuda.so was loaded because the CUDA
EP tried to call functions through a `ProviderHost` object that did not
match what was actually implemented by onnxruntime.so.
I've confirmed that we _can_ prevent the crash by modifying the pipeline
to use the onnxruntime_providers_cuda.so file from the tensorrt build
(instead of the file from the cuda-only build). However, I don't think
that is necessarily correct. Instead, I think we should try to make sure
that the provider bridge exposes the same interface to any EP libraries
that can potentially coexist in the same application (like CUDA and
TensorRT). Failing that, there's probably something we can do to
generate a better error message when an EP detects that the Provider
Host implements an unexpected interface.
Note that the above applies to the Windows build in the packaging
pipeline as well. I used the onnxruntime branch
[adrianl/test-trt-cuda-bridge-packaging-pipeline](https://github.com/microsoft/onnxruntime/tree/adrianl/test-trt-cuda-bridge-packaging-pipeline)
along with the onnxruntime-inference-examples branch
[adrianl/squeezenet_ld_debug](https://github.com/microsoft/onnxruntime-inference-examples/tree/adrianl/squeezenet_ld_debug)
to confirm that copying the onnxruntime_providers_cuda.so file from the
tensorrt build eliminates the crash.