Fix accidental specialization with FakeTensor input checks (#121460)
Summary: When fake tensors are passed to a graph module and runtime assertions run on them, we can accidentally trigger specialization guards. It is better to relax these checks for fake tensor inputs.
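A minimal sketch of the idea, not the actual patch: `FakeTensor` and `check_input` below are hypothetical stand-ins (the real class lives in `torch._subclasses`), illustrating how a runtime input check can be relaxed so that asserting on a fake tensor's sizes never installs a guard.

```python
class FakeTensor:
    # Hypothetical stand-in for torch's FakeTensor; carries only metadata,
    # and its sizes may be symbolic, so asserting on them can specialize.
    def __init__(self, shape):
        self.shape = shape

class RealTensor:
    # Hypothetical stand-in for an ordinary tensor with concrete sizes.
    def __init__(self, shape):
        self.shape = shape

def check_input(tensor, expected_dim0):
    # Hypothetical runtime assertion on a graph-module input. For fake
    # tensors we skip the check entirely, since comparing a (possibly
    # symbolic) size against a constant would accidentally specialize.
    if isinstance(tensor, FakeTensor):
        return  # relaxed check: avoid accidental specialization
    assert tensor.shape[0] == expected_dim0, "runtime shape check failed"

check_input(FakeTensor((3, 4)), expected_dim0=7)  # skipped, no error
check_input(RealTensor((3, 4)), expected_dim0=3)  # real tensor, check passes
```

On real tensors the assertion still fires on a mismatch; only fake-tensor inputs bypass it.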
Test Plan: confirmed that the problem in T181400371 is now fixed
Differential Revision: D54658960
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121460
Approved by: https://github.com/angelayi