For the CI failures, I suspect you need to restrict `pytest.mark.usefixtures("skip_xp_backends")` to the functions/classes you have converted, rather than putting it in `pytestmark`, for now.
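Something like this, roughly (a minimal sketch; `TestConvolve` is just an illustrative target):

```python
import pytest

# A module-level pytestmark applies the fixture to every test in the file:
# pytestmark = [pytest.mark.usefixtures("skip_xp_backends")]

# Restricting the marker to a converted class leaves the
# not-yet-converted tests alone:
@pytest.mark.usefixtures("skip_xp_backends")
class TestConvolve:
    def test_basic(self, xp):
        ...
```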
@mdhaber could you take a look at `_support_alternative_backends.py`, as the author of the equivalent in `special`?
I did a pass through this PR (looking more carefully at the code changes than at the test changes). Overall this looks pretty good to me. I have a number of comments which would be good to address; they are all local things though, and the approach looks good to me.
Some comments; I haven't looked at the test changes yet. No promises on when I'll find the time to go through them, sorry!
On this branch locally, with `SCIPY_DEVICE=cuda python dev.py test -j 8 -b all` I see an additional 1238 failures relative to the `main` branch. It looks like it is all `torch` on GPU doing device transfers and erroring out.
The diff below the fold applies skips a bit aggressively, but it deals with all the failures. We might want to be more selective than that, though, as it may disable a bunch of CuPy testing. That said, for many test classes quite a few of the contained tests fail, so marking each one individually would also be annoying.
```diff
diff --git a/scipy/signal/tests/test_signaltools.py b/scipy/signal/tests/test_signaltools.py
index 750cd0dab2..51530ad784 100644
--- a/scipy/signal/tests/test_signaltools.py
+++ b/scipy/signal/tests/test_signaltools.py
@@ -39,6 +39,7 @@ skip_xp_backends = pytest.mark.skip_xp_backends
pytestmark = [array_api_compatible, pytest.mark.usefixtures("skip_xp_backends")]
+@skip_xp_backends(cpu_only=True)
class TestConvolve:
@skip_xp_backends("jax.numpy",
@@ -280,6 +281,7 @@ class TestConvolve:
+@skip_xp_backends(cpu_only=True)
class TestConvolve2d:
@skip_xp_backends("jax.numpy", reasons=["dtypes do not match"])
@@ -494,6 +496,7 @@ class TestConvolve2d:
+@skip_xp_backends(cpu_only=True)
class TestFFTConvolve:
@skip_xp_backends("torch", reasons=["dtypes do not match"])
@@ -949,7 +952,8 @@ def gen_oa_shapes_eq(sizes):
-@skip_xp_backends("jax.numpy", reasons=["fails all around"])
+@skip_xp_backends("jax.numpy", reasons=["fails all around"],
+ cpu_only=True)
class TestOAConvolve:
@pytest.mark.slow()
@pytest.mark.parametrize('shape_a_0, shape_b_0',
@@ -1184,6 +1188,7 @@ class TestAllFreqConvolves:
assert res.dtype == dtype
+@skip_xp_backends(cpu_only=True)
class TestMedFilt:
IN = [[50, 50, 50, 50, 50, 92, 18, 27, 65, 46],
@@ -1330,7 +1335,7 @@ class TestMedFilt:
class TestWiener:
- @skip_xp_backends("cupy", reasons=["XXX: can_cast in cupy <= 13.2"])
+ @skip_xp_backends("cupy", reasons=["XXX: can_cast in cupy <= 13.2"], cpu_only=True)
def test_basic(self, xp):
g = xp.asarray([[5, 6, 4, 3],
[3, 5, 6, 2],
@@ -2249,7 +2254,8 @@ class _TestCorrelateReal:
"uint64", "int64",
"float32", "float64",
])
-@skip_xp_backends("jax.numpy", reasons=["fails all around"])
+@skip_xp_backends("jax.numpy", reasons=["fails all around"],
+ cpu_only=True)
class TestCorrelateReal(_TestCorrelateReal):
pass
@@ -2297,7 +2303,7 @@ class TestCorrelate:
assert_raises(ValueError, correlate, [1], [[2]])
assert_raises(ValueError, correlate, [3], 2)
- @skip_xp_backends("jax.numpy", reasons=["dtype differs"])
+ @skip_xp_backends("jax.numpy", reasons=["dtype differs"], cpu_only=True)
def test_numpy_fastpath(self, xp):
a = xp.asarray([1, 2, 3])
b = xp.asarray([4, 5])
@@ -2349,6 +2355,7 @@ def test_correlation_lags(mode, behind, input_size, xp):
assert lags.shape == correlation.shape
+@skip_xp_backends(cpu_only=True)
@pytest.mark.parametrize('dt_name', ['complex64', 'complex128'])
class TestCorrelateComplex:
# The decimal precision to be used for comparing results.
@@ -2471,6 +2478,7 @@ class TestCorrelateComplex:
check_shape=False)
+@skip_xp_backends(cpu_only=True)
class TestCorrelate2d:
def test_consistency_correlate_funcs(self, xp):
@@ -3885,6 +3893,7 @@ class TestDeconvolve:
quotient, remainder = signal.deconvolve(recorded, impulse_response)
+@skip_xp_backends(cpu_only=True)
class TestDetrend:
     def test_basic(self, xp):
```
I resolved all open comments that looked resolved to me and spotted one problem.
If you would rather not have small reviews like this and would prefer only a full review at a time, let me know. I may not have time for that soon, though.
I had a look over this, spotted a few things, and probably missed a bunch more! Maybe we should just merge this, since there's a decent period before the next release to iron out any issues.
approving the production code changes (modulo a few observations)!
I'll take a look at the test changes once the existing comments have been addressed.
overall LGTM, thanks Evgeni & reviewers! Just a few comments
Towards gh-20678
What does this implement/fix?
Delegate `signal.convolve` and its ilk to `cupyx.scipy.signal.convolve` if inputs are cupy arrays. For other array types, do the usual convert-to-numpy, run, convert-back dance.
Additional information
CuPy provides a near-complete clone of the scipy API in the `cupyx.scipy` namespace. We can treat these CuPy functions as accelerators: if a scipy function detects that its arguments are cupy-compatible, it can delegate all work to the cupyx function.
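As a rough illustration of that dispatch, here is a minimal sketch (not the actual code in this PR; `_numpy_impl` is a hypothetical stand-in for the existing NumPy implementation):

```python
import numpy as np
from scipy._lib._array_api import array_namespace

def convolve(in1, in2, mode='full'):
    xp = array_namespace(in1, in2)
    if 'cupy' in xp.__name__:
        # CuPy inputs: delegate all the work to the cupyx clone
        import cupyx.scipy.signal
        return cupyx.scipy.signal.convolve(in1, in2, mode=mode)
    # any other array type: convert to numpy, run, convert back
    result = _numpy_impl(np.asarray(in1), np.asarray(in2), mode)
    return xp.asarray(result)
```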