[quant][pt2e] add dropout to executorch backend config (#99585)
The OD model has a dropout layer used during training. To match eager-mode QAT numerics, we also fake-quantize the dropout layer in `prepare_qat_fx`.
To do this, we add the dropout layer to the default_op_configs with an observation type that uses a different observer for the output than for the input.
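A minimal sketch of what such a pattern config could look like, using the public `BackendPatternConfig` API (the exact dtype configs and registration point in the executorch backend config are assumptions here, not copied from this diff):

```python
import torch
from torch.ao.quantization.backend_config import (
    BackendPatternConfig,
    DTypeConfig,
    ObservationType,
)

# Assumed dtype config for illustration: quint8 activations in and out.
quint8_dtype_config = DTypeConfig(
    input_dtype=torch.quint8,
    output_dtype=torch.quint8,
)

# Register nn.Dropout so prepare_qat_fx inserts a fake quantize on its
# output, using a different observer from its input (matching eager-mode QAT).
dropout_config = (
    BackendPatternConfig(torch.nn.Dropout)
    .set_observation_type(ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT)
    .set_dtype_configs([quint8_dtype_config])
)
```

A config like this would then be appended to the list of default op configs that the executorch backend config is built from.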
Differential Revision: [D45095936](https://our.internmc.facebook.com/intern/diff/D45095936/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99585
Approved by: https://github.com/jerryzh168