convert : support mixed-precision ModelOpt models with per-tensor NVFP4/FP8 quantization (#20539)
* cleanup
* fallback
---------
Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>