Megatron-DeepSpeed
Add LRU cache, add faster tokenization
#37
Merged
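
The commits touch megatron/tokenizer/gpt2_tokenization.py and preprocess_data.py. GPT-2 style BPE tokenizers typically memoize the expensive per-token merge step in an unbounded dict, so a bounded LRU cache is a natural way to keep memory flat on long preprocessing runs while still skipping repeated merge work for frequent tokens. The sketch below illustrates that general idea only; `LRUCache` and `SimpleBPETokenizer` are illustrative names and are not taken from this PR's actual diff.

```python
# Sketch only (assumption): replace an unbounded per-token BPE cache with a
# bounded LRU cache so memory stays flat while repeated tokens stay fast.
from collections import OrderedDict


class LRUCache:
    """Fixed-capacity cache that evicts the least recently used entry."""

    def __init__(self, capacity=100_000):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)          # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.capacity:  # evict the oldest entry
            self._data.popitem(last=False)


class SimpleBPETokenizer:
    """Toy stand-in for a GPT-2 tokenizer: caches the per-token BPE step."""

    def __init__(self):
        self.cache = LRUCache(capacity=100_000)

    def _bpe(self, token):
        # Placeholder for the real byte-pair-encoding merge loop.
        return " ".join(token)

    def bpe(self, token):
        cached = self.cache.get(token)
        if cached is not None:
            return cached                    # cache hit: skip the merge loop
        result = self._bpe(token)
        self.cache.put(token, result)
        return result


if __name__ == "__main__":
    tok = SimpleBPETokenizer()
    print(tok.bpe("hello"))   # computed once
    print(tok.bpe("hello"))   # served from the LRU cache
```

A bounded cache matters most when preprocessing large corpora, where the set of distinct tokens can grow without limit even though a small hot set accounts for most lookups.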

Commits
  • Update gpt2_tokenization.py
    huu4ontocord committed 4 years ago
  • Update gpt2_tokenization.py
    huu4ontocord committed 4 years ago
  • Update gpt2_tokenization.py
    huu4ontocord committed 4 years ago
  • Update preprocess_data.py
    huu4ontocord committed 4 years ago
  • Update gpt2_tokenization.py
    huu4ontocord committed 4 years ago
  • Merge branch 'bigscience-workshop:main' into main
    huu4ontocord committed 4 years ago
  • Update megatron/tokenizer/gpt2_tokenization.py
    huu4ontocord committed 4 years ago
  • Update gpt2_tokenization.py
    huu4ontocord committed 4 years ago
  • Update megatron/tokenizer/gpt2_tokenization.py
    huu4ontocord committed 4 years ago
  • Update gpt2_tokenization.py
    huu4ontocord committed 4 years ago
  • Update gpt2_tokenization.py
    huu4ontocord committed 4 years ago