bf3aefce - community[patch]: Update tongyi.py to support MultimodalConversation in dashscope. (#21249)

Adds support for multimodal conversations in dashscope, so the multimodal models "qwen-vl-v1", "qwen-vl-chat-v1", and "qwen-audio-turbo" can now be used to process images and audio. :)

- **Description:** add multimodal conversation support in dashscope
- **Dependencies:** dashscope≥1.18.0
- **Twitter handle:** none :)
- **How to use it:**

```python
from langchain_community.chat_models import ChatTongyi

tongyi_chat = ChatTongyi(
    top_p=0.5,
    dashscope_api_key=api_key,
    model="qwen-vl-v1",
)
response = tongyi_chat.invoke(
    input=[
        {
            "role": "user",
            "content": [
                {"image": "https://dashscope.oss-cn-beijing.aliyuncs.com/images/dog_and_girl.jpeg"},
                {"text": "这是什么?"},  # "What is this?"
            ],
        }
    ]
)
```

---------

Co-authored-by: Bagatur <baskaryan@gmail.com>
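The commit also lists "qwen-audio-turbo" among the supported models. Below is a minimal sketch of audio usage, assuming the same content-list message format with an `audio` key as used by dashscope's MultimodalConversation API; the audio URL is a placeholder and `api_key` is assumed to be defined:

```python
from langchain_community.chat_models import ChatTongyi

# Sketch: audio input via the same multimodal message format.
audio_chat = ChatTongyi(
    dashscope_api_key=api_key,
    model="qwen-audio-turbo",
)
response = audio_chat.invoke(
    input=[
        {
            "role": "user",
            "content": [
                {"audio": "https://example.com/sample.wav"},  # placeholder URL
                {"text": "What is being said in this recording?"},
            ],
        }
    ]
)
```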