langchain
b9db2048 - Fix wrong token counts from `get_num_tokens` from openai llms (#2952)

Commit
2 years ago
The encoding fetch was out of date. Luckily OpenAI has a nice [`encoding_for_model`](https://github.com/openai/tiktoken/blob/46287bfa493f8ccca4d927386d7ea9cc20487525/tiktoken/model.py) function in `tiktoken` we can use now.