🔴[`Attention`] Bert-based Models Attention Refactor (#38301)
* clean start to bert refactor
* some test fixes
* style
* fix last tests
* be strict on positional embeddings, fix up the corresponding tests
* cache support (usage sketch after this list)
* more cache fixes, new causal API
* simplify masks, fix tests for generation
* flex attention, static cache support, round of fixes (backend selection sketch after this list)
* ?
* this time
* style
* fix flash attention tests; flex attention requires torch 2.7.x to work with multiple classes (recompilation strategies force a size call that was misinterpreted in earlier versions)
* roberta
* fix up SDPA leftovers
* attention split, simplify args and kwargs, better typing (attention sketch after this list)
* fix encoder decoder
* fix test
* modular roberta (modular layout sketch after this list)
* albert
* data2vec text, making it modular tomorrow
* modular data2vec text
* tmp disable
* xmod + cache position fixes
* whoops
* electra + markuplm, small fixes
* remove wrong copy
* xlm_roberta + some embedding fixes
* roberta prelayernorm
* RemBert: remove copy, may do it later
* ernie
* fix roberta offloading
* camembert
* copy fixes
* bert generation + fixes on eager
* xlm roberta xl
* bridgetower (text) + seamlessv2 copy fixes
* rocbert + small fixes
* whoops
* small round of fixups
* NOTE: kernels didn't load with an earlier version, some fixup (needs another look because of cross dependencies)
* the end of the tunnel?
* fixup nllb-moe + style
* we don't need this anymore
* megatron bert is barely used, low priority, skipping for now
* Modernize bert (template for others)
NOTE: trying to push this through, might be overdue if not possible in time
* check inputs for all others (if checkmarked)
* fix bridgetower
* style
* fix encoder decoder (partial, but the cause was found and fixed; it just needs to be done for everything else)
* proper fix for bert to force intermediate dict outputs
* propagate to others
* style
* xlm roberta xl investigation, it's the layernorm...
* mobile bert
* revert this, might cause issues with composed models
* review
* style
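
The "attention split" entry above refers to separating the attention math from the module so that different backends can share one call signature. The sketch below is illustrative only; the function name `eager_attention_forward`, the dispatch table `ATTENTION_BACKENDS`, and the exact signature are placeholders for the pattern, not a claim about the PR's actual code.

```python
# Illustrative sketch of the attention-split pattern; not the PR's actual code.
from typing import Callable, Dict, Optional

import torch
from torch import nn


def eager_attention_forward(
    module: nn.Module,
    query: torch.Tensor,            # (batch, num_heads, seq_len, head_dim)
    key: torch.Tensor,
    value: torch.Tensor,
    attention_mask: Optional[torch.Tensor],  # additive mask, broadcastable to the scores
    scaling: float,
    dropout: float = 0.0,
    **kwargs,                        # backend-specific options are simply forwarded
):
    # Plain softmax attention; SDPA/flash/flex backends would plug in with the same signature.
    attn_weights = torch.matmul(query, key.transpose(-1, -2)) * scaling
    if attention_mask is not None:
        attn_weights = attn_weights + attention_mask
    attn_weights = nn.functional.softmax(attn_weights, dim=-1)
    attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)
    attn_output = torch.matmul(attn_weights, value)
    return attn_output, attn_weights


# A dispatch table lets a module pick a backend by name (kept minimal here).
ATTENTION_BACKENDS: Dict[str, Callable] = {"eager": eager_attention_forward}
```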
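Relatedly, the flex/SDPA/flash entries are about which attention backend a checkpoint loads with. Selecting a backend at load time looks roughly like the snippet below; note that `"flex_attention"` additionally depends on the installed torch version (the log above mentions torch 2.7.x for the multiple-classes case), so treat the exact value as environment-dependent.

```python
from transformers import AutoModel

# "eager" and "sdpa" are the safest choices; "flash_attention_2" needs flash-attn installed,
# and "flex_attention" depends on the torch version (see the torch 2.7.x note above).
model = AutoModel.from_pretrained(
    "google-bert/bert-base-uncased",
    attn_implementation="sdpa",
)
```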
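The cache entries cover running the BERT family through the shared cache/generation machinery. A minimal way to exercise that path is sketched below; it is purely illustrative usage, not the PR's test code, and since BERT is not a trained generator the decoded text itself is meaningless.

```python
import torch
from transformers import AutoTokenizer, BertLMHeadModel

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
# is_decoder=True turns on the causal/decoder code path that the cache support targets.
model = BertLMHeadModel.from_pretrained("google-bert/bert-base-uncased", is_decoder=True)

inputs = tokenizer("Paris is the capital of", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=5, use_cache=True)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```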
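The "modular" entries refer to the modular model layout: a small `modular_<model>.py` declares only what differs from the parent model, and the full modeling file is generated from it. Below is a hedged sketch of the idea using camembert (a straight roberta copy touched in this PR) as the example; the file and class layout here is illustrative, not a claim about the exact generated code.

```python
# modular_camembert.py-style sketch: inherit everything from the roberta implementation
# and only override what actually differs (here: nothing architectural).
from transformers.models.roberta.modeling_roberta import RobertaModel


class CamembertModel(RobertaModel):
    # Same architecture as RobertaModel; the real model differs mainly in config/checkpoint naming.
    pass
```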