680d64ea - change GPT2ForSequenceClassification inference accuracy tolerance (#136749)

Summary: Fixes https://github.com/pytorch/pytorch/issues/123503.

https://github.com/pytorch/pytorch/pull/121866 makes GPT2ForSequenceClassification hit SDPA pattern 18, which then triggers the accuracy issue. The issue occurs only with BF16 inference on a single thread. This PR increases the model's tolerance from 4e-3 to 5e-3 so the check passes.

Note that the issue comes from small implementation differences between SDPA backends. For example, the SDPA math backend scales q and k before the matmul for numerical stability, while the flash attention backend, being a different algorithm, diverges more.

X-link: https://github.com/pytorch/pytorch/pull/136749
Approved by: https://github.com/jgong5, https://github.com/jansel
Reviewed By: jovianjaison

Differential Revision: D64290722

fbshipit-source-id: a3e7248f57a97cd767257354d410b3508b5e0325
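As an illustration only (not part of this change), a minimal sketch of how the two mathematically equivalent SDPA scalings can diverge under BF16; the tensor shapes and the printed comparison are assumptions for demonstration:

```python
# Sketch: scaling q@k^T after the matmul vs. scaling q and k before it
# (as the SDPA math backend does) produces slightly different results
# in bfloat16. Shapes below are illustrative, not the benchmark's.
import math
import torch

torch.manual_seed(0)

B, H, S, D = 1, 12, 128, 64  # batch, heads, sequence length, head dim
q = torch.randn(B, H, S, D, dtype=torch.bfloat16)
k = torch.randn(B, H, S, D, dtype=torch.bfloat16)

scale = 1.0 / math.sqrt(D)

# Variant 1: apply the scale after the matmul (the "textbook" form).
scores_after = (q @ k.transpose(-2, -1)) * scale

# Variant 2: apply sqrt(scale) to each operand before the matmul,
# as the math backend does for numerical stability.
s = math.sqrt(scale)
scores_before = (q * s) @ (k * s).transpose(-2, -1)

# In bf16 the two variants can differ by an amount comparable to a
# 4e-3 tolerance, which is why the threshold was relaxed to 5e-3.
diff = (scores_after.float() - scores_before.float()).abs().max().item()
print(f"max abs diff: {diff:.2e}")
```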