optimum
2678e74d - Allow `attention_mask=None` for BetterTransformer in the inference batched case for gpt2 & gpt-neo (#1180)

Commit · 2 years ago
fix if mask is none