[quant] Fix tensorrt config after the backend_config_dict refactor (#76414)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76414
We previously refactored the FX Graph Mode Quantization code base to use a native backend config dict for fbgemm/qnnpack.
Because of this, we need to define the backend config dict for tensorrt properly as well (previously it relied on the
fbgemm/qnnpack configs). This PR adds the configs needed to enable uru10x10 again.
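
For context, a minimal sketch of what an entry in such a backend config dict might look like for an int8, TensorRT-style lowering path. This is not the config added by this PR; the exact keys and import paths varied across PyTorch releases around this change, so treat the module path and argument names below as assumptions.

import torch
import torch.nn as nn
# ObservationType has moved between modules across releases; this path is an assumption.
from torch.ao.quantization.backend_config import ObservationType

tensorrt_style_backend_config_dict = {
    "configs": [
        {
            # Pattern to match in the FX graph.
            "pattern": nn.Conv2d,
            # The output gets its own observer instead of sharing the input's.
            "observation_type": ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,
            # int8 activations and weights with a float bias, the typical int8 setup.
            "dtype_configs": [
                {
                    "input_dtype": torch.qint8,
                    "output_dtype": torch.qint8,
                    "weight_dtype": torch.qint8,
                    "bias_dtype": torch.float,
                },
            ],
        },
    ],
}

# The dict is then passed to prepare_fx / convert_fx via the backend_config_dict
# argument (renamed to backend_config in later releases).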
Test Plan: buck run mode/dev-nosan -c fbcode.split-dwarf=true -c fbcode.platform=platform009 accelerators/workloads/models/uru10x10:uru_10x10_to_trt_eval -- --int8
Reviewed By: vkuzo
Differential Revision: D35939944
fbshipit-source-id: c64ade5074f5a8ee74a833bb990cd7a91c2cb152
(cherry picked from commit 02855a5ef8c196fb5b0defdfff58d6f2b94c693e)