transformers
a3b402ff - llama fp16 torch.max bug fix (#24561)

llama fp16 torch.max bug fix (#24561)

* open llama fp16 bug fix
* bug fix
* bug fixed
* make style
* Update modeling_llama.py
* apply formatting
* Address Amy's comment

---------

Co-authored-by: Prathik Rao <prathikrao@microsoft.com@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
Co-authored-by: root <root@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
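For context, the title refers to the attention-score clamp in `modeling_llama.py`, where masked scores are floored at `torch.finfo(dtype).min` via `torch.max` so that fp16 values do not stay at `-inf`. A common pitfall with that pattern is building the comparison scalar as a default float32 CPU tensor, which mismatches half-precision weights on GPU. The sketch below is a minimal illustration of that pattern under the assumption that this is the shape of the fix; the helper name `clamp_attn_weights` is hypothetical, and this is not the verified diff from the commit.

```python
# Hedged sketch (assumption, not the commit's verified diff): floor masked
# attention scores at the dtype's minimum, with the comparison tensor created
# on the same dtype/device as the scores to avoid fp16/device mismatches.
import torch

def clamp_attn_weights(attn_weights: torch.Tensor) -> torch.Tensor:
    # torch.finfo(...).min is the most negative finite value for the dtype
    # (-65504.0 for float16). Passing dtype= and device= explicitly avoids
    # comparing a half-precision GPU tensor against a float32 CPU scalar.
    min_value = torch.tensor(
        torch.finfo(attn_weights.dtype).min,
        dtype=attn_weights.dtype,
        device=attn_weights.device,
    )
    # Elementwise maximum: any -inf entries become the finite dtype minimum.
    return torch.max(attn_weights, min_value)

if __name__ == "__main__":
    w = torch.full((2, 4), float("-inf"), dtype=torch.float16)
    print(clamp_attn_weights(w))  # -inf clamped to torch.finfo(float16).min
```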