Adding `InferenceClient.get_recommended_model` (#1770)
* Moved the logger info message into InferenceClient so that get_recommended_model can bypass it
* Added get_recommended_model to InferenceClient
* Ran make style to generate the async client
* Added tests for get_recommended_model
* Update src/huggingface_hub/inference/_client.py
Co-authored-by: Lucain <lucainp@gmail.com>
* Fixed ordering of the logger info call and _get_recommended_model so the model string is populated first
* Removed the private _get_recommended_model function in favor of a public get_recommended_model method on InferenceClient
* Fixed wording of ValueError to use 'model' not 'task'
* Ran make style for AsyncInferenceClient
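
The intent of the change can be sketched as follows. This is an illustrative, self-contained approximation: the task-to-model mapping and the exact error wording below are assumptions, not the library's actual data (the real InferenceClient resolves recommended models from server-side configuration on the Hub):

```python
# Hypothetical stand-in for the recommended-models lookup; the real
# client fetches this information from Hugging Face rather than
# hardcoding it.
_RECOMMENDED_MODELS = {
    "summarization": "facebook/bart-large-cnn",
    "text-classification": "distilbert-base-uncased-finetuned-sst-2-english",
}


def get_recommended_model(task: str) -> str:
    """Return the recommended model id for a task (public API sketch)."""
    model = _RECOMMENDED_MODELS.get(task)
    if model is None:
        # Per the fix above, the error message talks about the missing
        # 'model', not the 'task'.
        raise ValueError(
            f"No recommended model found for task '{task}'."
            " Please specify a model explicitly."
        )
    return model


print(get_recommended_model("summarization"))  # facebook/bart-large-cnn
```

In the real client this lookup is called automatically when no `model` argument is passed to a task method, which is why the logger info had to run after the model string was populated.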
---------
Co-authored-by: Lucain <lucainp@gmail.com>