CI: 11/12/24 upstream sync #136
add extensibility pointers to `jax.extend` docstring
9e7113e6
Error on numpy array conversion of PRNG key array
83383fc7
jax.device_get: handle generic extended dtypes
58dee3ea
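For context, `jax.device_get` copies a device-resident array back to the host as a NumPy array; the commit above extends it to handle generic extended dtypes internally. A minimal usage sketch with a standard dtype (assuming `jax` is installed):

```python
import numpy as np
import jax.numpy as jnp
from jax import device_get

# device_get transfers a device array to the host as a plain NumPy array.
x = jnp.arange(4)
host = device_get(x)
assert isinstance(host, np.ndarray)
```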
don't warn on unused `type: ignore`
61150607
add about page
0a42bf12
(follow-up of PR #23852) add missing `typename` keyword to work with …
218f7632
Merge pull request #24759 from froystig:docs-about
ddce8670
Skip flaky test on tpuv4
927d7fc2
[mosaic_gpu] Scalar arguments to kernels.
5e43220e
[Mosaic GPU] Implement tiled and swizzled transfers for tiled layouts
6a124ac5
Don't perform size 0 slices into scipy rotations.
834e71bb
Fix argmin docstring to not say "maximum"
9763044d
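The docstring fix above corrects copy-paste wording: `jnp.argmin` returns the index of the *minimum* value, not the maximum. A minimal sketch of the actual behavior:

```python
import jax.numpy as jnp

# argmin returns the index of the smallest element (here 1.0 at index 1),
# which is what the corrected docstring now says.
idx = jnp.argmin(jnp.array([3.0, 1.0, 2.0]))
```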
[Mosaic GPU] Make sure to free the cloned MLIR module when debugging
ce3826d0
Merge pull request #24657 from dymil:patch-1
2b55bd5a
Fix pre-commit to run on all files in CI.
4a365670
Merge pull request #24778 from cainmagi:fix-pr-23852
4d1a1264
[SDY] add JAX lowering to Shardy `ShardingGroupOp` for shard_alike.
afd8239e
Merge pull request #24786 from hawkinsp:scipy
631bcd09
Disable the paged_attention test on TPU v5p.
8f169e7f
Disable lax_test on ARM in Google's internal CI.
7285f10e
Merge pull request #24773 from dfm:fix-ci-lint
ac06d198
Add tests for jnp.einsum in Pallas
0cc17478
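The Pallas tests above exercise `jnp.einsum`; for reference, a minimal sketch of the plain (non-Pallas) `jnp.einsum` call they target, using an explicit subscript spec for matrix multiplication:

```python
import jax.numpy as jnp

# 'ij,jk->ik' contracts over the shared j axis: a (2,3) @ (3,4) matmul.
a = jnp.ones((2, 3))
b = jnp.ones((3, 4))
c = jnp.einsum('ij,jk->ik', a, b)
```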
Remove unused import
aa6adfb9
Update XLA dependency to use revision
c1360f54
Merge pull request #24808 from jakevdp:fix-lint
c6369f21
Merge pull request #24481 from jakevdp:key-array-error
c8f5b2bb
Add typing overloads for jax.extend.ffi.ffi_call() to aid type checkers
7404e0d2
[MOSAIC:GPU] Add `async_load`, `async_store`, and supporting attribut…
d833066a
Merge pull request #24774 from mattjj:dont-warn-on-unused-type-ignore
85dae9e6
Put the set of current spmd axis names in the axis env instead of spe…
d352f4f2
Make GPU work with copy=True and device_put since same device pinned_…
87ce0cbb
[shape_poly] Remove caching for the symbolic shape evaluator
45ae4dfb
Update XLA dependency to use revision
b51187ca
Update XLA dependency to use revision
098d582e
Skip test_jnp_einsum_grad_y_pallas on gpu due to ooms
a041ea15
Fix buggy and confusing logic in the C++/pjit caching path.
763952a6
Merge pull request #24818 from gnecula:poly_no_cache
f5f380b6
Disable for_loop_test on TPU v5p.
7491fdd9
[Mosaic GPU] Add `base_pointer` argument to `InitializeBarrierOp`.
da89c9e3
jnp.bincount: support boolean inputs
93599163
[XLA:GPU] Change `assert` to `CHECK` in Triton sparsity extensions.
1d24630b
[Mosaic GPU] Ensure that lowering `InitializeBarrierOp` preserves the…
8a7bf2e4
[XLA:GPU] Skip small tile sizes for sparse gemms on Ampere as well. E…
f18f62a5
[Mosaic GPU] Only run tests requiring sm90a on Hopper
24af8a67
Bump actions/cache from 4.1.1 to 4.1.2
39e0f486
[pallas:triton] Simplify reshape lowering rule.
034467de
Merge pull request #24839 from andportnoy:aportnoy/mosaic-gpu-hopper-…
a889a95a
Merge pull request #24814 from jakevdp:bincount-bool
422b4edf
Merge pull request #24828 from jax-ml:caching-fix
242ac2b9
[pallas:triton] Simplify lowering code. `BlockInfo` is now always pre…
0995bc23
[Pallas] Add a cost estimator for Pallas/JAX functions.
0e611e5c
Merge pull request #24841 from jax-ml:dependabot/github_actions/actio…
56150286
Update XLA dependency to use revision
6892e628
Allow 64-bit output types from ffi_call regardless of enable_x64 flag.
478ea0dc
[Mosaic TPU] Support dynamic DMA and ref slice on the 2nd minor when …
38d062db
Add wraparound for 2x2x2 v5p
54e72d50
Make sure compilation_cache.is_cache_used always returns a bool
31e42d8e
Cleanup more remnants of the jax.experimental.host_callback
c9250777
[shape_poly] Fix the handling of jvp(lax.sort)
fb68c97a
Merge pull request #24770 from jakevdp:extended-device-get
4363bb65
Merge pull request #24822 from gnecula:delete_outfeed_rewriter
8420e222
Merge pull request #24823 from gnecula:poly_jvp_sort
5ec08767
[pallas:triton] Fix reshape lowering with scalar output shape.
cb82609a
[pallas:mosaic_gpu] `emit_pipeline` now maintains the grid indices
15f30a9e
Explicitly raise an error if more than 65535 channels are created
2582a337
Merge pull request #24854 from hurryabit:is_cache_used-return-bool
e79eca6f
Merge pull request #24772 from dfm:ffi-call-no-canonicalize
837bccce
Allow more output storage types for some dot algorithms.
9bb63667
Fix overflow error in GPU batched linear algebra kernels.
21e98b5c
Remove GPU test with unreasonably large memory footprint.
a99ccd93
Merge branch 'rocm-main' into ci-upstream-sync-16_1
f3f64462
charleshofer deleted the ci-upstream-sync-16_1 branch 1 year ago