llm-foundry
0be2ca82 - Allow MPT models to return attention weights (#599)

Commit (2 years ago)
Allow MPT models to return attention weights (#599)

* Allow MPT models to return attention weights
* Update llmfoundry/models/mpt/modeling_mpt.py
* Add unit test
* Update tests/test_model.py
* Update tests/test_model.py

Co-authored-by: Daniel King <43149077+dakinggg@users.noreply.github.com>
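The change lets MPT attention layers hand back their attention probabilities alongside the layer output, instead of discarding them. As a rough illustration of that pattern only (this is not llm-foundry's actual implementation; the `attention` function and its `needs_weights` flag are illustrative names), a minimal single-head sketch in NumPy:

```python
import numpy as np


def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)


def attention(q: np.ndarray, k: np.ndarray, v: np.ndarray,
              needs_weights: bool = False):
    """Scaled dot-product attention.

    Returns (output, weights); `weights` is None unless the caller
    asks for it via `needs_weights`, mirroring the opt-in style of
    returning attention probabilities.
    """
    d = q.shape[-1]
    # (seq_q, seq_k) attention probabilities; each row sums to 1.
    w = softmax(q @ k.swapaxes(-1, -2) / np.sqrt(d))
    out = w @ v
    return (out, w) if needs_weights else (out, None)
```

Returning `None` by default keeps the common path cheap and the call signature stable; callers that want to inspect or visualize the weights opt in explicitly.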