
ENH: add maxlag and lags parameters to correlate/convolve #31261

Open
HoniSanders wants to merge 18 commits into numpy:main from HoniSanders:maxlag2025

Conversation


@HoniSanders HoniSanders commented Apr 16, 2026

PR summary

This PR adds maxlag and lags keyword parameters to np.correlate() and np.convolve(), plus a companion function correlation_lags() that returns the corresponding lag indices for the relevant mode/lags.

What was troubling me is that numpy.correlate() does not have a maxlag feature. This means that even if I only want to see correlations between two time series at lags between -100 and +100 ms, for example, it will still calculate the correlation at every lag between -20000 and +20000 ms (the full length of the time series).

This PR revives #5978, which I opened back in 2015 and never finished pushing through. I had introduced this question as issues on numpy and scipy: #5954, scipy/scipy#4940 and on the scipy-dev list. It also got attention on stackoverflow at the time. The new issue that this resolves is #17286.

Apologies for the very long downtime. I'm excited to be able to come back to this and finally get this much-needed feature included.
Also, I plan on squashing the commits in this branch and naming the single commit the same as the name of this PR. I expect the review process to require more commits, so I pushed my branch as is. I hope this is OK.

Proposed API

np.correlate(a, v, mode=..., *, maxlag=None, lags=None)
np.convolve(a, v, mode=..., *, maxlag=None, lags=None)
np.correlation_lags(a_len, v_len, mode=..., *, maxlag=None, lags=None)

Parameter Design

  • maxlag=M (int): symmetric inclusive window [-M, M] (2M+1 lags). Matches MATLAB's xcorr(x, y, M) convention.
  • lags (range, slice, or array_like): a range, a slice with explicit start/stop, or a 1-D integer array_like containing an arithmetic progression.

Both are keyword-only and mutually exclusive. mode auto-resolves to 'lags' when either is supplied.
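As a rough illustration of the proposed semantics, here is a pure-Python sketch (not the PR's C implementation; the zero-padding convention at the edges is my assumption, matching mode='full'):

```python
def correlate_maxlag(a, v, maxlag):
    """Cross-correlation of two 1-D sequences restricted to lags [-maxlag, maxlag].

    Lag k pairs a[n + k] with v[n]; out-of-range products are treated
    as zero (the 'full'-mode zero-padding convention).
    """
    lags = list(range(-maxlag, maxlag + 1))
    out = []
    for k in lags:
        out.append(sum(a[n + k] * v[n]
                       for n in range(len(v))
                       if 0 <= n + k < len(a)))
    return lags, out

lags, c = correlate_maxlag([1, 2, 3, 4], [1, 1], maxlag=1)
# lags == [-1, 0, 1]; c == [1, 3, 5], the middle slice of the full correlation
```

With maxlag=M this produces the 2M+1 lags described above; the speedup in the PR comes from doing this windowed loop in C instead of computing the full-mode result and slicing it.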

Major file changes

  • Python (numpy/_core/numeric.py): new maxlag/lags kwargs on correlate and convolve; helper functions _lags_from_maxlag, _lags_from_lags, _lags_from_mode, companion function correlation_lags.
  • C (numpy/_core/src/multiarray/multiarraymodule.c): new array_correlate_lags private C function; _pyarray_correlate implements non-mode lags and is now the single normalization site (handles array swap, negative-step normalization, output reversal internally — callers pass any valid form).
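For the companion function, the lag bookkeeping for the classic modes looks roughly like this (a hypothetical pure-Python sketch mirroring scipy.signal.correlation_lags; only 'full' and 'valid' are shown, and the exact contract of the PR's correlation_lags is assumed):

```python
def correlation_lags_sketch(a_len, v_len, mode="full"):
    """Lag indices k (pairing a[n + k] with v[n]) for correlate(a, v, mode)."""
    if mode == "full":
        # a_len + v_len - 1 lags, from -(v_len - 1) through a_len - 1
        return list(range(-(v_len - 1), a_len))
    if mode == "valid":
        # abs(a_len - v_len) + 1 lags where the shorter input fully overlaps
        lo, hi = sorted((0, a_len - v_len))
        return list(range(lo, hi + 1))
    raise ValueError(f"unsupported mode: {mode!r}")
```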

Benchmark

For a user interested in a small number of lags around 0, there is a huge speedup:

| size1 | size2 | full + slice | maxlag=5 | speedup |
|-------|-------|--------------|----------|---------|
| 1000 | 100 | 47.2 µs | 11.1 µs | 4.3× |
| 1000 | 1000 | 572 µs | 20.2 µs | 28× |
| 100000 | 100 | 4.09 ms | 14.0 µs | 290× |
| 100000 | 1000 | 60.0 ms | 23.4 µs | 2500× |

(From benchmarks/benchmarks/bench_core.py::CorrConvLags, included in this PR.)

First-time committer introduction

Hi! I have used NumPy for more than a decade now, both directly and indirectly as the foundation of the wider scientific Python ecosystem (e.g. pandas, PyTorch, etc.). When I started working on this in 2015, I was using NumPy for the first time, building simulations of biological neural networks that were more performant and open source than the MATLAB code I had previously been using. This feature would have helped me quite a bit at the time, as I was cross-correlating very long time series but only needed a relatively small window of time lags around 0. I have wanted to come back to it because of the continued interest from others, even though I myself no longer need this specific functionality.

AI Disclosure

Claude Code is what finally allowed me to bring this PR to completion.

I used Claude as a pair-programming assistant for:

  • chasing down bugs (in particular, it helped me find a buffer overflow that the earlier PR attempt had introduced in _pyarray_correlate);
  • proposing a more intuitive argument list for the Python API, which I approved;
  • refactoring the vector-inversion logic so the C function would be the single normalization site;
  • proposing a more complete structure for the test cases (geometry/equivalence/dtype groupings); the expected values in the tests were computed by hand or by running and sub-slicing the pre-existing implementation;
  • writing some of the docstrings (Parameters/Returns wording).

Brings the maxlag2025 branch up to date with current numpy/numpy main
(843 commits since branch base).

Conflict resolution in numpy/_core/src/multiarray/multiarraymodule.c:
- Adopt upstream's _pyarray_correlate signature change from
  `int typenum` to `PyArray_Descr *typec` (PR numpy#30931).
- Keep our refactor: the function still takes (minlag, maxlag, lagstep)
  instead of (mode, *inverted), and handles array swap, negative-step
  normalization, and output reversal internally.
- Update PyArray_Correlate, PyArray_Correlate2, and PyArray_CorrelateLags
  to use upstream's PyArray_DTypeFromObject + NPY_DT_CALL_ensure_canonical
  pattern for type resolution, with proper Py_DECREF(typec) cleanup.
- Update the new array_correlatelags argument parser to use upstream's
  brace-syntax for npy_parse_arguments.

Add NPY_2_6_API_VERSION 0x00000016 and matching version-string clause
in numpy/_core/include/numpy/numpyconfig.h to support the bumped
C API version that registers PyArray_CorrelateLags at slot 369.

Tests: numpy/_core/tests/test_numeric.py: 512 passed, 1 skipped.
@HoniSanders HoniSanders marked this pull request as draft April 16, 2026 21:14
@HoniSanders HoniSanders marked this pull request as ready for review April 17, 2026 02:24
@ngoldbaum ngoldbaum added 01 - Enhancement 56 - Needs Release Note. Needs an entry in doc/release/upcoming_changes 54 - Needs decision labels Apr 17, 2026
@ngoldbaum
Member

Thanks for the PR, I just enabled CI. If it fails you may need to enable CI on your fork to get CI passing in a timely manner.

I don't have time to go through the backlog of discussion about this and am not registering an opinion about whether we want to add this or not. That said, generally NumPy is pretty conservative about adding new keywords and functions, since the API surface is already very large and complicated.

One of the rules we have is that new features require sending a message to the numpy-discussion mailing list with a proposal to add the new feature. That gives the wider contributor base who don't actively follow github a chance to weigh in.

We're also generally conservative about adding new things to the C API, so your proposal to add a new C API item needs some justification. Keep in mind that basically no one will be able to use the NumPy 2.6 C API until perhaps 2028 due to projects keeping support for older NumPy releases. Is it really worth it to add the new C API? You'll need to explain why we should add the function.

Would you mind sending such a message to the mailing list about this proposal?

@HoniSanders
Author

Thanks. I did post about this on the numpy-discussion list at the time (see https://mail.python.org/archives/list/numpy-discussion@python.org/thread/JMXNOCFQRFJYISFKSRI7MRL7GPK5ZD4S/#JMXNOCFQRFJYISFKSRI7MRL7GPK5ZD4S and especially https://mail.python.org/archives/list/numpy-discussion@python.org/thread/6FC27XPOD4SBH3KJ7VY5DDAPS5Q46GPO/#6FC27XPOD4SBH3KJ7VY5DDAPS5Q46GPO where several people chimed in saying this would be useful). I figured that was sufficient, but can re-post since it's been a while. You can also see several people hopping in to support the feature on the previous PR and closed issue.

As for the C API question, I believe it needs a new C API function because the core vector calculation implementation needs to be in C and the existing arguments are not sufficient to support the new implementation. Feel free to let me know if there is another way for the python function to call a modified version of the existing C functions.

@ngoldbaum
Member

I figured that was sufficient, but can re-post since it's been a while. You can also see several people hopping in to support the feature on the previous PR and closed issue.

Ah, I didn't search the archives. I'll take a look at those discussions and let you know if I think a new mailing list ping is warranted.

As for the C API question, I believe it needs a new C API function because the core vector calculation implementation needs to be in C and the existing arguments are not sufficient to support the new implementation.

But why does the new function need to be publicly exposed in the C API? Can't NumPy just call an internal C function?

Feel free to let me know if there is another way for the python function to call a modified version of the existing C functions.

You could not expose it publicly and instead just use the new C implementation internally.

@HoniSanders
Author

Oh sure, I didn't realize limiting it to internal use was an option. I don't know who uses the public C API; I guess the value of having a public C API function would just be to expose the functionality to whoever uses the C API, as long as we are building the functionality anyway. But I certainly wouldn't want the question of the C API to delay the inclusion of the functionality in the Python API.

@ngoldbaum
Member

Oh great, good it was just a misconception. We definitely add features to the Python API without adding equivalent features in the public C API.

@ngoldbaum
Member

I just took a look and given that it’s been more than ten years since the mailing list discussion, I do think a new ping is warranted. When you send your message, please include context about prior discussions and how this proposal differs from earlier proposals, if it does.

Also, I haven’t checked but it would also be good to see what other array libraries do and what the array API standard says about this case. If adding this makes us more conformant with the array API or otherwise makes it easier to write code that is agnostic to the underlying array API, that would also argue towards including this.

However, if this is an entirely new feature that only NumPy would implement, that makes it a harder sell.

Also please feel free to come to a NumPy community meeting to chat face-to-face about this over Zoom:

https://scientific-python.org/calendars/

@HoniSanders
Author

Ok, I will send an email to the list, and I will try to attend this week's zoom.

What do you mean by "other array libraries" and "array API standard"?

@rkern
Member

rkern commented Apr 20, 2026

Array API Standard. In particular, note the Stakeholders which are those other array libraries.

convolve and correlate are not currently part of the standard, but those other libraries often do implement numpy functions that aren't in the standard yet. Changes to our implementation do have a "blast radius" in that they will either cause those libraries to do more work to follow numpy's choices or to suffer greater divergence from numpy.

You may want to consider the option to implement more specialized functions in scipy.signal instead of modifying np.convolve and np.correlate. In particular, you could avoid needing returns_lagvector: we do not want to use this pattern of optional returns if we can at all avoid it. We will go to great lengths to avoid it.

@HoniSanders
Author

Thanks for the pointer.
As you mention, convolve and correlate are not part of that standard, so it shouldn't break anything. The change I propose is backwards compatible, so it will only cause more work for other libraries if they want to include this additional feature (which uses optional arguments). In that case it will be less work for them than implementing from scratch, as they only have to pass the arguments through. I didn't find array libraries with conflicting implementations.
I found several related implementation ideas:

If you want, we can remove the returns_lagvector argument. That was a suggestion proposed in the first round of getting feedback on this PR ten years ago. We could have a separate function to calculate lags like scipy has if you prefer.

@HoniSanders
Author

I pushed a commit to remove the returns_lagvector argument and instead have a companion function like scipy has.

@rkern
Member

rkern commented Apr 20, 2026

As you mention, convolve and correlate are not part of that standard, so it shouldn't break anything. The change I propose is backwards compatible so it will only cause more work for other libraries if they want to include this additional feature (which uses optional arguments).

That's not quite the logic at play here. It's not like a dependency (e.g. scipy depends on numpy, if a new optional argument is added in a backwards-compatible fashion, that's fine because all of the existing usages of np.convolve() inside of scipy still work). Adding a new optional argument would still affect any of these libraries that provide a convolve() or correlate() that attempt to follow numpy's current signature. That's because if numpy users use the new arguments, that code will not be portable to those array implementations by swapping out np for xp (the placeholder name for the other-array-library namespace). So those other implementations still face the choice to do the work to add the new feature or else live with the growing discrepancy. There's a cost either way, and there's nothing about the optionality or backwards compatibility of the new feature that alleviates that cost.

Comment thread numpy/_core/numeric.pyi Outdated
Comment on lines +1057 to +1059
a: _ArrayLike1D[_AnyNumericScalarT], v: _ArrayLike1D[_AnyNumericScalarT],
mode: _CorrelateMode | L["lags"] = ..., *, maxlag: int | None = ...,
lags: _LagsArg = ...,
Member

nit: I think it'd be more readable if the params were placed on separate lines

Member

And could you fill in the ... defaults?

Comment thread numpy/_core/numeric.pyi Outdated

# NOTE: we ignore UP047 because inlining `_AnyScalarT` would result in a lot of code duplication

type _LagsArg = int | tuple[int, int] | tuple[int, int, int] | range | slice | ArrayLike | None
Member

Can we use a more specific array-like type? numpy._typing._ArrayLikeInt_co, for example?

Member

nit: It can help to move the | None to the use-site, so that you can immediately see that it's the default.

Comment thread numpy/_core/numeric.py Outdated


def _correlate_dispatcher(a, v, mode=None):
_CorrModeDefault = object()
Member

We could use numpy._globals._NoValue instead

Comment thread numpy/_core/numeric.py Outdated
@honi-at-simspace

Thanks for the comments and the meeting today.

With respect to downstream libraries wanting to match numpy's signatures, I realized that it would be trivial for other libraries to accept a new lags/maxlag keyword, compute the same complete correlation they currently do, and then slice it at the end and return the sliced version. That would keep the other libraries' APIs consistent with numpy's very easily. They could then decide later whether to actually implement the performant version of lag subsetting.

Another point from the meeting concerned interest in the feature. The fact that matplotlib has a maxlags keyword in its cross-correlation function (which just slices at the end, with no performance improvement) is evidence of interest in this functionality, and the fact that matplotlib calls numpy.correlate means it would be extremely easy for them to update their call to benefit from the performance improvement immediately.
Additionally, I pointed out that there have been periodic comments on the previous closed PR expressing interest in the feature and asking when it would be implemented.

Tasks for me from the numpy meeting today:

  • Remove PyArray_CorrelateLags from the C API (done, see commits above)
  • Post on the SciPy forum (https://discuss.scientific-python.org/c/contributor/12) to see what they think
  • Post on the JAX repo to see how they would feel about the numpy function signature changing, and the lift necessary to match it

@HoniSanders
Author

Just a few other data points of other libraries having to do weird workarounds to deal with numpy.correlate being extremely inefficient for large arrays where only some lag values are relevant:

@ngoldbaum
Member

Thanks! I think the statsmodels and matplotlib examples are probably sufficient to justify adding a maxlags keyword argument (following matplotlib).

@bashtage hope you don't mind the ping here, in case you want to weigh in from the statsmodels side.

I'd still like to get opinions from SciPy and Jax as well.

@charris
Member

charris commented Apr 23, 2026

This is a common need, I am in favor.

@honi-at-simspace

Great. I posted on scipy here: https://discuss.scientific-python.org/t/custom-lag-parameters-on-correlate-and-convolve-functions/2307

and jax here: jax-ml/jax#37112.

I also posted on matplotlib, and they said that they are not concerned with consistency with numpy for that function: https://discourse.matplotlib.org/t/custom-lag-parameters-on-numpy-correlate-and-convolve-functions/26226

@honi-at-simspace

Does anyone have contacts at jax or scipy? No one responded to either post.

I posted on scipy here: https://discuss.scientific-python.org/t/custom-lag-parameters-on-correlate-and-convolve-functions/2307

and jax here: jax-ml/jax#37112.

@seberg
Member

seberg commented Apr 29, 2026

@mhvk do you think this is useful for astropy? I am a bit doubtful about how common the use-cases are, at least for lags, that shouldn't be using the FFT trick to begin with.
maxlags has the argument in its favor that matplotlib has it and uses it, which seems relevant (but we should check with them).

@mhvk
Contributor

mhvk commented Apr 29, 2026

@seberg - I don't know that it would be much use for astropy (the main reason for our convolve is to deal with missing data...), but I do think I would use this myself. On the keywords, I think you are right that the case for maxlag is stronger, as almost always one will want symmetric lags (and steps other than 1 would be rarer still).

That said, a great advantage of being able to pass in lags is that you have those lags at hand right after, if, e.g., you want to plot the result. Note that I'm not in favour of introducing a dedicated function to return the lags, but it does point to a real problem: it is not always all that obvious what the lags actually are. Obviously, this can (and should!) be solved by mentioning it in the docstring, but, heck, if you pass in lags, you don't have to worry about it at all. I do think that if one is allowed to pass in lags, they should just be any array of int; there is no reason to limit it to a slice.

Anyway, not sure whether or not to go with one, the other, or both...

Implementation-wise, I like that the main function in C now just takes lag information, but wonder if it isn't better to do the creation of that information from the keywords in python. I'd also tend not to change the C API interface at all, just change an internal function and call that from python. (I didn't even know the C version existed, and would agree with the random website I ran into when I googled it; https://runebook.dev/en/docs/numpy/reference/c-api/array/c.PyArray_Correlate2).


Labels

01 - Enhancement 54 - Needs decision 56 - Needs Release Note. Needs an entry in doc/release/upcoming_changes


8 participants