Use tagged CodeInstances for AbstractInterpreter caching (#52233)
Currently, external `AbstractInterpreter`s all follow the pattern
`CACHE = IdDict{MethodInstance,CodeInstance}`. The `NativeInterpreter`
has the benefit that each `MethodInstance` carries an inline cache of
`CodeInstance`s.
`CodeInstance`s pull triple duty: they contain the validity information
(`min_world`, `max_world`), the inference state cache, and the
compilation state cache.
Currently, when we create a `CodeInstance` we don't record its owner,
and we thus construct detached `CodeInstance`s.
In this PR I introduce the notion of tagging each code instance with
its owner (`nothing` meaning native), thus allowing foreign code
instances to be stored in the inline cache.
GPUCompiler/CUDA would change its caching strategy from
`IdDict{MethodInstance, CodeInstance}` to
`IdDict{CodeInstance, LLVM.Module}` or similar. Ideally we would allow
polymorphism for the compilation state part of the code instance, but
that seems too invasive for now.
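The intended usage could be sketched roughly as follows. This is an
illustrative sketch only: `Core.Compiler.cache_owner`, `MyGPUInterp`,
and `MyGPUOwner` are assumed/hypothetical names standing in for
whatever hook and types an external interpreter would actually define.

```julia
# Sentinel token identifying this interpreter's cache entries
# (the native interpreter's entries use `nothing`).
struct MyGPUOwner end

# A hypothetical external interpreter (state elided).
struct MyGPUInterp <: Core.Compiler.AbstractInterpreter end

# Assumed hook: report the owner tag stamped onto CodeInstances this
# interpreter creates, so cache walks can filter entries by owner.
Core.Compiler.cache_owner(::MyGPUInterp) = MyGPUOwner()

# Downstream, a GPUCompiler-style consumer would then key its compiled
# artifacts by CodeInstance rather than MethodInstance:
const COMPILED = IdDict{Core.CodeInstance, Any}()  # e.g. LLVM.Module values
```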
This has the benefit that `CodeInstance`s are handled by the existing
caching machinery, and it would allow us to cache inference results
from foreign abstract interpreters across sessions.
The downside is that the cache walk is now a bit slower, since we need
to filter on owner. If this turns out to be a large bottleneck, we
could bifurcate the cache on owner.
Co-authored-by: Shuhei Kadowaki <aviatesk@gmail.com>