Avoid using TaskTypes as keys in storage; use hashes instead (#88904)
Save space in the persistent store by recording each TaskType once instead of twice.
## What?
Instead of using an encoded TaskType struct as the key in persistent storage, we just use a 64-bit hash. Lookups now require a cascading read:
* read the task ids that match the hash
* restore each candidate task until we find one whose `CachedTaskType` matches ours
Full cache misses perform the same, and so do cache hits, since we always end up reading the TaskData anyway (in the `connect_child` operation that immediately follows). This only changes _when_ that second read happens, so we shouldn't expect a slowdown.
In the case of a hash collision we do strictly more work (reading and restoring TaskData entries for the wrong task), but that work is cached, and collisions should be _extremely_ rare assuming a good hash function.
From measuring vercel-site, this saves ~231M of data in the persistent cache (the cache goes from 3846M to 3615M, i.e. -231M or about -6%).
Finally, this also fixes a narrow race condition where two racing calls to `get_persistent_task_id` for the same task could push two entries to the `persistent_task_log`.
## Why?
Currently we encode 2 copies of every `CachedTaskType` in the database.
1. as the key of the `TaskType`->`TaskId` map (aka `TaskCache` keyspace)
2. as a part of the `TaskStorage` struct stored in the `TaskData` keyspace
This redundancy is wasteful. Instead we can make the `TaskCache` map much smaller at the cost of a bit of extra complexity in lookups.
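As a back-of-envelope sketch of the per-task saving (the 120-byte average encoded size below is a made-up figure, not a measured one), the hash key trades a full encoded copy for 8 bytes:

```rust
// Bytes stored per task for the CachedTaskType, under each scheme.
fn bytes_per_task(encoded_type_len: usize, hashed_key: bool) -> usize {
    if hashed_key {
        // 8-byte hash key in TaskCache + one encoded copy in TaskData.
        8 + encoded_type_len
    } else {
        // Encoded copy as the TaskCache key + encoded copy in TaskData.
        2 * encoded_type_len
    }
}

fn main() {
    let len = 120; // hypothetical average encoded CachedTaskType size
    let saved = bytes_per_task(len, false) - bytes_per_task(len, true);
    assert_eq!(saved, len - 8);
    println!("saves {} bytes per task", saved);
}
```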
## Future Work
### Better hash functions
Right now, to compute the hashes we simply run `encode` and hash the resulting bytes. This is not optimal, but we do not yet have a hash function suitable for this use case. We should create a new `PersistentHash` trait that TaskInputs implement to support this without encoding, or perhaps a custom _encoder_ that feeds its output directly into a hasher. This will be addressed in https://github.com/vercel/next.js/pull/89059
### New SST file heuristics
Now that the `TaskCache` keyspace is smaller, our heuristic for the 'maximum number of keys in a file' needs to be rethought, since we are now producing lots of 7MB files for the `TaskCache`. This will be addressed in https://github.com/vercel/next.js/pull/89058
### New compaction semantics
Right now we tolerate duplicates in the database, but compaction will delete them. This is not too harmful: it just means that if there is a hash collision, we will tend to lose one of the results over time.
A better option would be to change the compaction semantics for this keyspace to tolerate duplicates, leverage values for comparison, or something wilder where we simply 'recompute' the `TaskCache` instead of compacting it. This will be addressed in https://github.com/vercel/next.js/pull/89075