transformers
5ea2595e - Add warning for missing attention mask when pad tokens are detected (#25345)

Add warning for missing attention mask when pad tokens are detected (#25345)

* Add an attention mask and pad token warning to many of the models
* Remove changes under examples/research_projects; these files are not maintained by HF
* Skip the warning check during torch.fx or JIT tracing
* Switch the ordering of the warning and the input shape assignment; this ordering is cleaner for some of the cases
* Add a missing line break in one of the files
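A minimal sketch of the kind of check the commit describes, assuming a standalone helper. The helper name `warn_if_pad_without_attention_mask` is hypothetical and not the function added by the commit; `torch.jit.is_tracing()`, `torch.fx.Proxy`, and the standard `logging` module are the only real APIs used.

```python
import logging

import torch
import torch.fx

logger = logging.getLogger(__name__)


def warn_if_pad_without_attention_mask(input_ids, attention_mask, pad_token_id):
    """Warn when pad tokens appear in the input but no attention mask was passed.

    Hypothetical helper for illustration; the actual placement and naming in
    transformers may differ.
    """
    # Skip the check while tracing: branching on tensor values during torch.fx
    # or JIT tracing would bake a data-dependent outcome into the traced graph.
    if torch.jit.is_tracing() or isinstance(input_ids, torch.fx.Proxy):
        return
    # Nothing to check if a mask was provided or no pad token is configured.
    if attention_mask is not None or pad_token_id is None:
        return
    # Only warn if the pad token actually occurs in this batch.
    if (input_ids == pad_token_id).any():
        logger.warning(
            "Pad tokens were detected in `input_ids` but no `attention_mask` was "
            "passed. Padding positions will be attended to, which may give "
            "unexpected results. Pass an `attention_mask` to mask out padding."
        )
```

Consistent with the commit's note about reordering the warning and the input shape assignment, a check like this would be called near the top of a model's `forward`, before the input ids are reshaped or embedded.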