optimum · 2678e74d
Allow `attention_mask=None` for BetterTransformer in the inference batched case for gpt2 & gpt-neo (#1180)
Commit
2 years ago
Allow `attention_mask=None` for BetterTransformer in the inference batched case for gpt2 & gpt-neo (#1180)

Fix the case where the mask is `None`.
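The commit title describes accepting a missing attention mask during batched inference instead of failing on the `None` value. A minimal sketch of that pattern is below; `prepare_attention_mask` is a hypothetical helper for illustration, not the actual optimum code, and the fallback to an all-ones mask (attend to every token) is an assumption about the intended behavior.

```python
import torch

def prepare_attention_mask(input_ids, attention_mask=None):
    # Hypothetical helper: when no mask is supplied for a batched
    # input, fall back to an all-ones mask (attend to every token)
    # rather than raising on the None value.
    if attention_mask is None:
        attention_mask = torch.ones_like(input_ids)
    return attention_mask

batch = torch.tensor([[10, 11, 12], [13, 14, 15]])  # batch of 2 sequences
mask = prepare_attention_mask(batch)  # no mask passed, default is built
print(mask.shape)  # torch.Size([2, 3])
```

An explicitly supplied mask is returned unchanged, so callers that already pass padding masks are unaffected.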
References
#1180 - Allow `attention_mask=None` for BetterTransformer in the inference batched case for gpt2 & gpt-neo
Author
fxmarty
Parents
53e09fe4