[JIT] Also fold NaiveSyncBatchNorm when folding batch norm (#57823)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57823
Some models use `NaiveSyncBatchNorm` instead of `BatchNorm2d`, but during inference the two behave identically. This change ensures that `NaiveSyncBatchNorm` layers are also folded into the preceding convolutions during optimization passes, in particular `FoldConvBatchNorm`.
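
For context, a minimal standalone sketch of the conv + batch norm folding arithmetic that makes this valid at inference time (the helper name `fold_conv_bn` is illustrative, not part of this PR or the JIT pass):

```python
import torch
import torch.nn as nn

def fold_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Return a conv whose weight/bias absorb the frozen BN statistics."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      stride=conv.stride, padding=conv.padding,
                      dilation=conv.dilation, groups=conv.groups, bias=True)
    with torch.no_grad():
        # scale = gamma / sqrt(running_var + eps), applied per output channel
        scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
        fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
        conv_bias = conv.bias if conv.bias is not None else torch.zeros(conv.out_channels)
        fused.bias.copy_((conv_bias - bn.running_mean) * scale + bn.bias)
    return fused

conv = nn.Conv2d(3, 8, 3, bias=False).eval()
bn = nn.BatchNorm2d(8).eval()
bn.running_mean.uniform_(-1, 1)
bn.running_var.uniform_(0.5, 1.5)
x = torch.randn(1, 3, 16, 16)
assert torch.allclose(bn(conv(x)), fold_conv_bn(conv, bn)(x), atol=1e-5)
```

Since `NaiveSyncBatchNorm` only differs from `BatchNorm2d` in how statistics are synchronized across workers during training, the same folding applies once the module is in eval mode.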
Test Plan: Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D28291709
Pulled By: SS-JIA
fbshipit-source-id: c494dc7698c3fa536146038808fedbb46c17a63b