T5Attention support for cross-attention (#2654)
* fix AttnProcessor2_0
Fix the use of AttnProcessor2_0 for cross-attention with an attention mask (a mask-handling sketch follows this list)
* added scale_qk and out_bias flags (an illustrative sketch follows this list)
* fixed it for xFormers
* check if it has a scale argument
* Update cross_attention.py
* check the torch version (an availability-check sketch follows this list)
* fix sliced attention
* style
* set scale
* fix test
* fixed the added-KV processor
* revert the AttnProcessor2_0 change
* add a missing if check
* fix inner_dim
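
For reference, a minimal sketch of the mask handling the first fix is about, assuming torch 2.0's `F.scaled_dot_product_attention`: the cross-attention mask has to be reshaped so it broadcasts over the head dimension. Function and variable names are illustrative, not the actual diffusers code.

```python
import torch
import torch.nn.functional as F

def sdpa_cross_attention(query, key, value, attention_mask=None, heads=8):
    # query: (batch, q_len, inner_dim); key/value: (batch, kv_len, inner_dim)
    batch, q_len, inner_dim = query.shape
    head_dim = inner_dim // heads

    # split heads into (batch, heads, seq_len, head_dim), as SDPA expects
    query = query.view(batch, -1, heads, head_dim).transpose(1, 2)
    key = key.view(batch, -1, heads, head_dim).transpose(1, 2)
    value = value.view(batch, -1, heads, head_dim).transpose(1, 2)

    if attention_mask is not None:
        # an additive (batch, q_len, kv_len) mask needs a head axis so it
        # broadcasts to (batch, heads, q_len, kv_len)
        attention_mask = attention_mask.view(batch, 1, *attention_mask.shape[-2:])

    hidden_states = F.scaled_dot_product_attention(
        query, key, value, attn_mask=attention_mask
    )

    # merge heads back to (batch, q_len, inner_dim)
    return hidden_states.transpose(1, 2).reshape(batch, q_len, inner_dim)
```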
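The scale_qk and out_bias flags matter for T5-style attention, which neither scales q @ k^T by 1/sqrt(d) nor puts a bias on the output projection. A hedged sketch of what such flags control (an illustrative layer, not the diffusers Attention API):

```python
import torch.nn as nn

class AttentionSketch(nn.Module):
    # illustrative only; names and defaults do not mirror diffusers exactly
    def __init__(self, query_dim, cross_attention_dim=None, heads=8, dim_head=64,
                 scale_qk=True, out_bias=True):
        super().__init__()
        inner_dim = dim_head * heads
        cross_attention_dim = cross_attention_dim or query_dim

        # T5-style attention: no q/k scaling, no bias on the output projection
        self.scale = dim_head ** -0.5 if scale_qk else 1.0
        self.to_q = nn.Linear(query_dim, inner_dim, bias=False)
        self.to_k = nn.Linear(cross_attention_dim, inner_dim, bias=False)
        self.to_v = nn.Linear(cross_attention_dim, inner_dim, bias=False)
        self.to_out = nn.Linear(inner_dim, query_dim, bias=out_bias)
```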
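The torch-version check presumably gates the fused path on the availability of `F.scaled_dot_product_attention`, which was added in torch 2.0. A sketch of that kind of guard; the import path follows the cross_attention.py module touched here and may have moved in later diffusers releases:

```python
import torch.nn.functional as F

from diffusers.models.cross_attention import AttnProcessor, AttnProcessor2_0

# use the fused SDPA processor only when torch >= 2.0 provides it
if hasattr(F, "scaled_dot_product_attention"):
    processor = AttnProcessor2_0()
else:
    processor = AttnProcessor()
```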
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>