Implement Flux on TPUs with PyTorch/XLA (ptxla) (#10515)
* Implement Flux on TPUs with PyTorch/XLA
* Add an XLA Flux attention processor class to attention_processor.py (a usage sketch follows the trailers below)
* Run make style/quality
* Update src/diffusers/models/attention_processor.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/attention_processor.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Run style and quality
---------
Co-authored-by: Juan Acevedo <jfacevedo@google.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
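
Below is a minimal usage sketch, not part of this PR's diff, of running Flux on a TPU through PyTorch/XLA. The checkpoint id, prompt, and call arguments are illustrative assumptions; the XLA Flux attention processor added in attention_processor.py is not restated here.

```python
import torch
import torch_xla.core.xla_model as xm

from diffusers import FluxPipeline

# Acquire the TPU device via PyTorch/XLA.
device = xm.xla_device()

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # assumed checkpoint; any Flux checkpoint should work
    torch_dtype=torch.bfloat16,      # bf16 is the usual dtype on TPUs
)
pipe.to(device)

image = pipe(
    "A photo of an astronaut riding a horse on Mars",  # illustrative prompt
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("flux_on_tpu.png")
```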