[AutoDiff] Cache derivative function types for a 3.5x compile time speedup. (#29590)
Compiling packages that use AutoDiff is currently extremely slow. For example,
compiling tensorflow/swift-apis takes ~5 minutes on a slower machine. This is
largely due to calls to `SILFunctionType::getAutoDiffDerivativeFunctionType` in
the SIL verifier, which spend a long time computing generic signatures with the
GenericSignatureBuilder (GSB).
This PR improves this situation by caching computed derivative function types in
`ASTContext`. This results in a 1.6x speedup in running the AutoDiff test
suite and a 3.5x speedup in compiling tensorflow/swift-apis.
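For context, here is a minimal sketch of the memoization pattern this PR applies. It uses illustrative stand-in types (`ASTContextStub`, `SILFunctionTypeStub`, `DerivativeKey`, `DerivativeKind`) rather than the real compiler classes, and a `std::map` rather than the actual cache data structure; it only shows the look-up-then-compute-and-insert shape of the change.

```cpp
// Sketch only: stand-in types, not the actual Swift compiler code.
#include <iostream>
#include <map>
#include <memory>
#include <string>
#include <tuple>

// Stand-ins for the real SIL types.
struct SILFunctionTypeStub { std::string name; };
using DerivativeFnType = std::shared_ptr<SILFunctionTypeStub>;

// The cache key captures everything the computation depends on: the
// original function type, the differentiability parameter indices, and
// the derivative kind (JVP or VJP).
enum class DerivativeKind { JVP, VJP };
using DerivativeKey =
    std::tuple<const SILFunctionTypeStub *, unsigned /*paramIndices*/,
               DerivativeKind>;

struct ASTContextStub {
  // Cache of previously computed derivative function types.
  std::map<DerivativeKey, DerivativeFnType> derivativeTypeCache;
};

// The expensive computation (in the real compiler this is where generic
// signatures get built with the GenericSignatureBuilder).
static DerivativeFnType computeDerivativeFnType(const SILFunctionTypeStub &fn,
                                                unsigned paramIndices,
                                                DerivativeKind kind) {
  const char *suffix = kind == DerivativeKind::JVP ? "_jvp" : "_vjp";
  return std::make_shared<SILFunctionTypeStub>(
      SILFunctionTypeStub{fn.name + suffix});
}

// Cached wrapper: return a hit if present, otherwise compute and insert.
DerivativeFnType getDerivativeFunctionType(ASTContextStub &ctx,
                                           const SILFunctionTypeStub &fn,
                                           unsigned paramIndices,
                                           DerivativeKind kind) {
  DerivativeKey key{&fn, paramIndices, kind};
  auto it = ctx.derivativeTypeCache.find(key);
  if (it != ctx.derivativeTypeCache.end())
    return it->second;
  auto result = computeDerivativeFnType(fn, paramIndices, kind);
  ctx.derivativeTypeCache.emplace(key, result);
  return result;
}

int main() {
  ASTContextStub ctx;
  SILFunctionTypeStub fn{"foo"};
  // The first call computes; the second call with the same key hits the cache.
  auto a = getDerivativeFunctionType(ctx, fn, /*paramIndices=*/1,
                                     DerivativeKind::VJP);
  auto b = getDerivativeFunctionType(ctx, fn, /*paramIndices=*/1,
                                     DerivativeKind::VJP);
  std::cout << a->name << " cached: " << (a == b) << "\n";
}
```

Because the verifier queries the same derivative function types repeatedly, later lookups become cheap map hits instead of repeated generic-signature computation.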
Future investigations:
- Cache generic signatures computed from
`getAutoDiffDerivativeFunctionGenericSignature`.
- Cache transpose function types computed from
`SILFunctionType::getAutoDiffTransposeFunctionType()`.
--------------------------------------------------------------------------------
Here are the results of building tensorflow/swift-apis with `-j56`.
Before: `swift build 411.45s user 31.20s system 334% cpu 2:12.52 total`
After: `swift build 154.03s user 28.26s system 485% cpu 37.519 total`
Resolves TF-1133.