21fcfb01 - feat: Replace InnerStorage with the generated TaskStorage struct (#88355)

## What

Replace `InnerStorage` with the `TaskStorage` struct generated by the new derive macro from #88338. Extend the `TaskStorage` derive macro to generate `CachedDataItem` adapter methods on the `TaskStorageAccessors` trait so we can maintain API compatibility. Replace the serialization layer so it uses the native `TaskStorage` representations, now that `CachedDataItem` is only used for API access.

## Why

`TaskStorage` will enable more performant serialization and access patterns. From testing on vercel-site, this saves ~1% of persistent cache size. This PR does not improve access-pattern performance on its own (in fact, a small regression is expected due to the adapter layer).

## How

1. **Extended macro attribute parsing** - Added `variant` and `key_field` attributes to specify the `CachedDataItem` variant mapping per field.
2. **Generated adapter methods on the `TaskStorageAccessors` trait**:
   - `insert_kv(item)` - Insert a `CachedDataItem` into typed storage
   - `get(key)` - Look up by `CachedDataItemKey`
   - `remove(key)` - Remove by key
   - `get_mut(key)` - Mutable access (returns `None` for flags/sets/multimaps)
   - `iter(type)` - Iterate over a specific variant type
   - ...

   This keeps the current access patterns working.
3. **Migrated the serialization and deserialization flow** - Use `TaskStorage` throughout the serialization flow, using APIs like `restore_from` and `clone_data` to create targeted copies for lock-free IO.
4. **Removed dead code** - Deleted the `Storage` layer now that it is unused.

On its own, this yields a small savings in serialized data size (saves ~20 MB for vercel-site) and is slightly faster to snapshot/restore. Access performance should be nearly equivalent.
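To make the adapter layer concrete, here is a hand-written sketch of what the generated `CachedDataItem` adapter methods conceptually do. The type names (`TaskStorage`, `CachedDataItem`, `CachedDataItemKey`, `insert_kv`, `get`, `remove`) follow this description, but the field set, variants, and bodies below are illustrative stand-ins, not the real Turbopack types or the actual macro output:

```rust
use std::collections::{HashMap, HashSet};

// Stand-in for the tagged-union API surface kept for compatibility.
#[derive(Debug, Clone, PartialEq)]
enum CachedDataItem {
    Dirty,                                // flag-like variant
    Output(u32),                          // single-value variant
    Child(u64),                           // set-like variant (key only)
    CellData { cell: u32, data: String }, // map-like variant with a key field
}

#[derive(Debug, Clone, PartialEq, Eq, Hash)]
enum CachedDataItemKey {
    Dirty,
    Output,
    Child(u64),
    CellData(u32),
}

// Typed storage: one dedicated field per variant instead of one generic map.
#[derive(Default)]
struct TaskStorage {
    dirty: bool,
    output: Option<u32>,
    children: HashSet<u64>,
    cell_data: HashMap<u32, String>,
}

impl TaskStorage {
    // Adapter: route an untyped item into the matching typed field.
    fn insert_kv(&mut self, item: CachedDataItem) {
        match item {
            CachedDataItem::Dirty => self.dirty = true,
            CachedDataItem::Output(v) => self.output = Some(v),
            CachedDataItem::Child(id) => {
                self.children.insert(id);
            }
            CachedDataItem::CellData { cell, data } => {
                self.cell_data.insert(cell, data);
            }
        }
    }

    // Adapter: look up by the untyped key, reconstructing an item on hit.
    fn get(&self, key: &CachedDataItemKey) -> Option<CachedDataItem> {
        match key {
            CachedDataItemKey::Dirty => self.dirty.then_some(CachedDataItem::Dirty),
            CachedDataItemKey::Output => self.output.map(CachedDataItem::Output),
            CachedDataItemKey::Child(id) => self
                .children
                .contains(id)
                .then_some(CachedDataItem::Child(*id)),
            CachedDataItemKey::CellData(cell) => self.cell_data.get(cell).map(|d| {
                CachedDataItem::CellData { cell: *cell, data: d.clone() }
            }),
        }
    }

    // Adapter: remove by key, reporting whether anything was present.
    fn remove(&mut self, key: &CachedDataItemKey) -> bool {
        match key {
            CachedDataItemKey::Dirty => std::mem::replace(&mut self.dirty, false),
            CachedDataItemKey::Output => self.output.take().is_some(),
            CachedDataItemKey::Child(id) => self.children.remove(id),
            CachedDataItemKey::CellData(cell) => self.cell_data.remove(cell).is_some(),
        }
    }
}
```

The match-per-variant dispatch is why a small access-time regression is plausible, while the typed fields are what allow the denser serialized representation.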
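Step 3's `clone_data`/`restore_from` pattern can also be sketched in miniature. The idea, as described here, is to take a targeted copy of a task's storage while it is briefly locked, then serialize the copy without holding the lock. The struct below is a deliberately tiny hypothetical, with the method names taken from this description and everything else assumed:

```rust
// Hypothetical sketch: a "targeted copy" for lock-free IO. The `dirty`
// field stands in for transient state that should not be persisted.
#[derive(Clone, Default, Debug, PartialEq)]
struct TaskStorage {
    output: Option<u32>, // persistent state
    dirty: bool,         // transient state, reset on snapshot
}

impl TaskStorage {
    // Produce a copy containing only what serialization needs; the caller
    // can release any lock immediately afterward and serialize off-thread.
    fn clone_data(&self) -> TaskStorage {
        TaskStorage { output: self.output, dirty: false }
    }

    // Rebuild persistent state from a deserialized snapshot on restore.
    fn restore_from(&mut self, snapshot: TaskStorage) {
        self.output = snapshot.output;
    }
}
```

The benefit of this shape is that the lock is held only for the cheap field copies, not for the (comparatively slow) encode/IO work.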