Allow also Async Storage implementations (#755)
* Split up Storage definition into a SyncStorage and a MaybeAsync one
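A sketch of what such a split can look like. The names `MaybePromise`, `MaybeAsyncStorage`, `SyncStorage`, and `MemoryStorage` are assumptions for illustration; the actual matter.js interfaces may differ.

```typescript
// Sketch only: real matter.js names/signatures may differ.
type MaybePromise<T> = T | Promise<T>;

// Broad contract: every method may return its value directly or wrapped
// in a promise, so both sync and async backends can implement it.
interface MaybeAsyncStorage {
  get(contexts: string[], key: string): MaybePromise<unknown>;
  set(contexts: string[], key: string, value: unknown): MaybePromise<void>;
}

// Narrow contract for legacy code paths: plain return values only.
interface SyncStorage extends MaybeAsyncStorage {
  get(contexts: string[], key: string): unknown;
  set(contexts: string[], key: string, value: unknown): void;
}

// Minimal in-memory SyncStorage to illustrate the split.
class MemoryStorage implements SyncStorage {
  private data = new Map<string, unknown>();
  get(contexts: string[], key: string): unknown {
    return this.data.get([...contexts, key].join("."));
  }
  set(contexts: string[], key: string, value: unknown): void {
    this.data.set([...contexts, key].join("."), value);
  }
}
```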
* Adjust existing storages ...
This commit adjusts the existing storages to the new structure and adds a new "contexts" method to query the contexts on the current level. It also adds tests for this.
* Add an experimental Async Storage class
... to be used in enhanced tests once issues in the new API are fixed
* Wire in new async storage
This commit adds handling for the new, potentially async storage everywhere it is needed. Some of it can be removed once the legacy API is removed and we can use a purely async storage (if we like).
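One way call sites can consume a possibly-async value without forcing every caller to become async is to branch on whether a promise actually came back. `maybeThen` is a hypothetical helper name used here only to sketch the pattern.

```typescript
// Sketch: stay synchronous when the storage is synchronous, and only go
// through the promise machinery when a promise is actually returned.
type MaybePromise<T> = T | Promise<T>;

function maybeThen<T, U>(
  value: MaybePromise<T>,
  next: (resolved: T) => U
): MaybePromise<U> {
  if (value instanceof Promise) {
    return value.then(next); // async storage: chain on the promise
  }
  return next(value); // sync storage: no event-loop round trip
}
```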
* Legacy API allows only SyncStorage
* Adjust legacy examples
* Adjust new API examples
* Adjust shell examples
* Make sure to return the right type in asyncNew instead of any
* Introduce new isObject method
* Use the new isObject method
Please take an especially close look wherever arrays are potentially allowed, because the new method would forbid them!
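A sketch of a guard matching the caveat above: unlike a bare `typeof value === "object"` check, it rejects `null` and arrays, which is exactly why call sites that accept arrays need review. The exact implementation in matter.js may differ.

```typescript
// Sketch: type guard for plain objects, excluding null and arrays.
function isObject(value: unknown): value is Record<string, unknown> {
  return typeof value === "object" && value !== null && !Array.isArray(value);
}
```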
* Optimize code without isObject where it cannot be used
* Replace __known__ with contexts
* Make sure to correctly return the promise in ClusterClients
... found by chance because tests broke
* Adjust tests for Storage
* Revert isObject in two places
* Changelog
* Revert another isObject place because null could be allowed there
* Ok this is ok
* Try to make the Fabric remove callback maybe-async to allow both
* Address review feedback
* [execute-chiptests-long] make linter happy, use async storage in bridge tests
* Allow Matter event emitters to be synchronous
Rather than awaiting the result of a triggered event, if a promise is returned we stick it into the runtime so it's
tracked independently.
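The pattern above can be sketched as follows. `Runtime`, `Emitter`, and their members are hypothetical names standing in for the actual matter.js types; the point is only that `emit()` stays synchronous while any promise an observer returns is handed off for independent tracking.

```typescript
// Sketch: a synchronous emitter that does not await observers; promises
// returned by async observers are handed to a runtime for tracking.
type Observer = (...args: unknown[]) => void | Promise<void>;

class Runtime {
  private tracked = new Set<Promise<void>>();
  add(promise: Promise<void>) {
    this.tracked.add(promise);
    promise.finally(() => this.tracked.delete(promise));
  }
  get pending() {
    return this.tracked.size;
  }
}

class Emitter {
  private observers: Observer[] = [];
  constructor(private runtime: Runtime) {}
  on(observer: Observer) {
    this.observers.push(observer);
  }
  // emit() stays synchronous; async observers are tracked independently
  emit(...args: unknown[]) {
    for (const observer of this.observers) {
      const result = observer(...args);
      if (result instanceof Promise) {
        this.runtime.add(result);
      }
    }
  }
}
```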
* Remove some unknowns
Also remove some node code that accidentally linked into matter.js
* Add "globals" to matter.js types
Something must have changed, maybe with TS 5.4.3, but the IDE did not know about Symbol.asyncDispose. Adding "globals" to
the types array in src/package.json seems to have fixed it.
---------
Co-authored-by: Greg Lauckhart <greg@lauckhart.com>