Change method to calculate number of tokens for OpenAIChat (#1457)
Solves https://github.com/hwchase17/langchain/issues/1412
Currently `OpenAIChat` inherits its token-counting method, `get_num_tokens`, from `BaseLLM`, while `OpenAI` inherits it from `BaseOpenAI`. `BaseOpenAI` and `BaseLLM` use different methodologies for this: the former relies on `tiktoken`, while the latter relies on `GPT2TokenizerFast`.
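For illustration, the two methodologies boil down to roughly the following (a minimal sketch; the sample string is made up and the exact encoding names used inside langchain are an assumption):

```python
import tiktoken
from transformers import GPT2TokenizerFast

text = "How many tokens am I?"

# tiktoken approach (BaseOpenAI): looks up the GPT-2 BPE encoding
# without pulling a tokenizer model from the Hugging Face hub.
enc = tiktoken.get_encoding("gpt2")
print(len(enc.encode(text)))

# GPT2TokenizerFast approach (BaseLLM): tokenizer files are downloaded
# from the Hugging Face hub on first use.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
print(len(tokenizer.encode(text)))
```

Both use the GPT-2 BPE vocabulary, so the counts agree; the difference is in the dependency and where the tokenizer data comes from.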
The motivation for this PR is:
1. Bring consistency to how the number of tokens is calculated (`get_num_tokens`) across the `OpenAI` family, regardless of `Chat` vs. non-`Chat` scenarios.
2. Give preference to the `tiktoken` method, as it is serverless friendly: it does not require downloading model files, which can make a deployment incompatible with `readonly` filesystems.
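For reference, a minimal sketch of what a `tiktoken`-based `get_num_tokens` override could look like (the class body is a simplified stand-in; the real `OpenAIChat` in langchain inherits from `BaseLLM`, and the PR's actual diff is authoritative):

```python
import tiktoken

class OpenAIChat:  # simplified stand-in for langchain's OpenAIChat
    def get_num_tokens(self, text: str) -> int:
        # Count tokens with tiktoken's GPT-2 encoding instead of the
        # GPT2TokenizerFast-based default inherited from BaseLLM.
        enc = tiktoken.get_encoding("gpt2")
        return len(enc.encode(text))

print(OpenAIChat().get_num_tokens("Hello, world!"))
```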