Update optimum-intel version from `1.21.0` to `1.22.0` to avoid the error "Could not load tokenizer using specified model ID or path. OpenVINO tokenizer/detokenizer models won't be generated." (#29397)
### Details:
While testing LLM models on NPU, I followed the configuration from this
[page](https://docs.openvino.ai/nightly/openvino-workflow-generative/inference-with-genai/inference-with-genai-on-npu.html).
However, I hit the following issue while converting the HF tokenizer to an
OpenVINO tokenizer:
```bash
Could not load tokenizer using specified model ID or path. OpenVINO tokenizer/detokenizer models won't be generated.
```
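For context, the error surfaced during the model export step from that guide, which produces the OpenVINO tokenizer/detokenizer models alongside the IR. Below is a minimal sketch of that step; the model ID and quantization flags are illustrative choices in the spirit of the NPU docs page, not taken from this PR:
```bash
# Export an HF model to OpenVINO IR; optimum-cli also generates the
# openvino_tokenizer/openvino_detokenizer models during this step.
# Model ID and weight-compression flags are illustrative assumptions.
optimum-cli export openvino \
    -m TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
    --weight-format int4 --sym --group-size -1 --ratio 1.0 \
    TinyLlama-1.1B-Chat-v1.0-int4-ov
```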
After some testing, we found that the issue can be resolved by updating
optimum-intel to `1.22.0`, which is why I created this PR.
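As a sketch of the workaround on the user side (the exact install line is mine, not part of the PR diff, which only bumps the pinned requirement):
```bash
# Pin optimum-intel to the version where tokenizer export works
pip install "optimum-intel==1.22.0"
```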
Co-authored-by: Karol Blaszczak <karol.blaszczak@intel.com>
Co-authored-by: Sebastian Golebiewski <sebastianx.golebiewski@intel.com>