Finding Local Maxima and Minima - ITCodar

# Finding Local Maxima and Minima

## Finding local maxima/minima with Numpy in a 1D numpy array

If you are looking for all entries in the 1d array `a` smaller than their neighbors, you can try

```python
numpy.r_[True, a[1:] < a[:-1]] & numpy.r_[a[:-1] < a[1:], True]
```

You could also smooth your array before this step using `numpy.convolve()`.

I don't think there is a dedicated function for this.
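As a minimal sketch of both ideas (the array `a` and the smoothing kernel here are arbitrary examples):

```python
import numpy as np

a = np.array([1, 3, 9, 5, 2, 5, 6, 9, 7])

# True where an entry is smaller than both neighbors;
# the endpoints only need to beat their single neighbor.
minima = np.r_[True, a[1:] < a[:-1]] & np.r_[a[:-1] < a[1:], True]
print(np.where(minima)[0])  # -> [0 4 8]

# Optionally smooth first with a moving average to suppress noise
smoothed = np.convolve(a, np.ones(3) / 3, mode='same')
```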

## Finding local maxima using find_peaks

You should change the line `peak_pos = numbers[peaks]` to `peak_pos = peaks`, because `find_peaks` gives you the indices of the peaks, which are the actual x coordinates you want to pass to `ax.scatter`.

To get the peak at 422, we can set the threshold to None (so that you aren't constraining yourself by vertical distance to the neighbors), and make the distance larger, such as 10.
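A minimal illustration of `find_peaks` on toy data (values chosen arbitrarily): it returns the peak indices plus a properties dict, which contains the heights when the `height` argument is given.

```python
import numpy as np
from scipy.signal import find_peaks

y = np.array([0, 2, 1, 3, 0, 5, 1])
peaks, props = find_peaks(y, height=1)
print(peaks)                  # -> [1 3 5], the x coordinates of the peaks
print(props['peak_heights'])  # -> [2. 3. 5.]
```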

Then you can add the heights as text annotations:

```python
import pandas as pd
import numpy as np
from scipy.signal import find_peaks
import matplotlib.pyplot as plt

Data = [95,95,95,95,95,95,95,95,94,94,94,94,94,94,94,94,229,444,457,387,280,188,236,181,183,183,185,186,189,190,190,190,179,165,151,151,161,214,213,213,214,213,212,195,179,160,158,155,114,98,164,346,229,39,134,149,194,1,153,171,187,185,104,102,100,90,90,92,92,92,93,93,93,93,93,93,94,94,94,94,94,11,1,11,11,70,182,104,58,60,134,115,99,97,99,98,98,97,97,97,97,97,97,97,97,97,96,96,96,96,96,96,96,96,96,96,96,96,95,95,95,95,95,95,95,95,95,95,95,95,95,95,95,95,95,95,95,95,94,94,94,94,94,94,94,94,94,94,94,94,94,94,94,93,93,152,206,221,286,326,341,360,377,391,392,393,393,393,394,406,418,420,422,422,408,389,345,329,276,224,166,113,-6,91,91,91,442,324,387,389,387,443,393,393,393,393,391,381,379,377,303,174,131,0,115,112,112,111,111,109,107,106,104,104,103,102,101,101,101,101,100,100,1,1,12,13,65,138,87]

df = pd.DataFrame({'Data': Data})

# convert to a 1D array
number_column = df.loc[:, 'Data']
numbers = number_column.values

# find peaks in the 1D array
# peaks, properties = find_peaks(numbers, height=300, threshold=1, distance=5)
peaks, properties = find_peaks(numbers, height=300, threshold=None, distance=10)
height = properties['peak_heights']  # heights of the peaks
peak_pos = peaks                     # indices of the peaks (the x coordinates)
print(peaks)

# plot the peaks
fig = plt.figure()
ax = fig.subplots()
ax.plot(numbers)
ax.scatter(peak_pos, height, color='r', s=25, label='Maxima')
ax.legend()

# add the heights as text annotations
for i, text in enumerate(height):
    if text.is_integer():
        ax.annotate(int(text), (peak_pos[i], height[i]), size=10)
    else:
        ax.annotate(text, (peak_pos[i], height[i]), size=10)
plt.show()
```

## Python finding local maxima/minima for multiple polynomials efficiently

Since you're finding the minimum of a polynomial, you can take advantage of the fact that it's easy to take the derivative of a polynomial, and that there are many good algorithms for finding the roots of a polynomial.

Here's how it works:

1. First, take the derivative. All of the points which are minima will have a derivative of zero.
2. Look for those zeros, i.e., find the roots of the derivative.
3. Once we have the list of candidates, check that the solutions are real.
4. Check that the solutions are within the bounds you set. (I don't know if you added bounds because you actually want the bounds, or to make it go faster. If it's the latter, feel free to remove this step.)
5. Finally, evaluate the candidates with the polynomial and pick the smallest value.

Here's the code:

```python
import numpy as np
from numpy.polynomial import Polynomial

def find_extrema(poly, bounds):
    deriv = poly.deriv()
    extrema = deriv.roots()
    # Filter out complex roots
    extrema = extrema[np.isreal(extrema)]
    # Get the real part of each root
    extrema = np.real(extrema)
    # Apply the bounds check
    lb, ub = bounds
    extrema = extrema[(lb <= extrema) & (extrema <= ub)]
    return extrema

def find_minimum(poly, bounds):
    extrema = find_extrema(poly, bounds)
    # Note: initially I tried taking the 2nd derivative to filter out local maxima.
    # This ended up being slower than just evaluating the function.
    # Either bound could end up being the minimum. Check those too.
    extrema = np.concatenate((extrema, bounds))
    # Check every candidate by evaluating the polynomial at each one,
    # and pick the smallest value.
    value_at_extrema = poly(extrema)
    minimum_index = np.argmin(value_at_extrema)
    return extrema[minimum_index]

# Warning: Polynomial expects coefficients in the opposite order that you use.
poly = Polynomial([5, 1, 3, 6, 4])
print(find_minimum(poly, (-5, 5)))
```

This takes 162 microseconds on my computer, making it about 6x faster than the `scipy.optimize` solution. (The solution shown in the question takes 1.12 ms on my computer.)

### Edit: A faster alternative

Here's a faster approach. However, it abandons bounds checking, uses a deprecated API, and is generally harder to read.

```python
p = np.poly1d([4, 6, 3, 1, 5])  # Note: poly1d coefficients are in the opposite order of Polynomial

def find_minimum2(poly):
    roots = np.real(np.roots(poly.deriv()))
    return roots[np.argmin(poly(roots))]

print(find_minimum2(p))
```

This clocks in at 110 microseconds, making it roughly 10x faster than the original.
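Since the two APIs disagree on coefficient order, a quick sanity check confirms that the two representations used above describe the same quartic:

```python
import numpy as np
from numpy.polynomial import Polynomial

p_new = Polynomial([5, 1, 3, 6, 4])  # low-to-high: 5 + x + 3x^2 + 6x^3 + 4x^4
p_old = np.poly1d([4, 6, 3, 1, 5])   # high-to-low: the same polynomial

xs = np.linspace(-2, 2, 9)
assert np.allclose(p_new(xs), p_old(xs))
```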

## Finding local maxima and minima of user defined functions

> find a list of the stationary points, of their values and locations, and of whether they are minima or maxima.

This is, in general, an unsolvable problem. Method 1 (symbolic) is the right approach for it, but for complicated functions there is no symbolic solution for the stationary points (there is no general method for solving a system of two equations symbolically).

### Symbolic solution with SymPy

For simple functions like your example, SymPy will work fine. Here is a complete example of finding the stationary points and classifying them by the eigenvalues of the Hessian.

```python
import sympy as sym

x, y = sym.symbols("x y")
f = sym.cos(x*10)**2 + sym.sin(y*10)**2
gradient = sym.derive_by_array(f, (x, y))
hessian = sym.Matrix(2, 2, sym.derive_by_array(gradient, (x, y)))
```

So far, `hessian` is a symbolic 2-by-2 matrix: `[[200*sin(10*x)**2 - 200*cos(10*x)**2, 0], [0, -200*sin(10*y)**2 + 200*cos(10*y)**2]]`. Next, we find the stationary points by equating the gradient to zero, and plug them into the Hessian one by one.

```python
stationary_points = sym.solve(gradient, (x, y))
for p in stationary_points:
    # each p is an (x, y) tuple
    subs = {x: p[0], y: p[1]}
    value = f.subs(subs)
    hess = hessian.subs(subs)
    eigenvals = hess.eigenvals()
    if all(ev > 0 for ev in eigenvals):
        print("Local minimum at {} with value {}".format(p, value))
    elif all(ev < 0 for ev in eigenvals):
        print("Local maximum at {} with value {}".format(p, value))
    elif any(ev > 0 for ev in eigenvals) and any(ev < 0 for ev in eigenvals):
        print("Saddle point at {} with value {}".format(p, value))
    else:
        print("Could not classify the stationary point at {} with value {}".format(p, value))
```

The last clause is necessary because when the Hessian is only semidefinite, we cannot tell what kind of stationary point it is (`x**2 + y**4` and `x**2 - y**4` have the same Hessian at (0, 0) but different behavior). The output:

```
Saddle point at (0, 0) with value 1
Local maximum at (0, pi/20) with value 2
Saddle point at (0, pi/10) with value 1
Local maximum at (0, 3*pi/20) with value 2
Local minimum at (pi/20, 0) with value 0
Saddle point at (pi/20, pi/20) with value 1
Local minimum at (pi/20, pi/10) with value 0
Saddle point at (pi/20, 3*pi/20) with value 1
Saddle point at (pi/10, 0) with value 1
Local maximum at (pi/10, pi/20) with value 2
Saddle point at (pi/10, pi/10) with value 1
Local maximum at (pi/10, 3*pi/20) with value 2
Local minimum at (3*pi/20, 0) with value 0
Saddle point at (3*pi/20, pi/20) with value 1
Local minimum at (3*pi/20, pi/10) with value 0
Saddle point at (3*pi/20, 3*pi/20) with value 1
```
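The semidefiniteness caveat is easy to verify; this sketch uses `sym.hessian` (a SymPy helper equivalent to the manual construction earlier) to show that both functions share the same semidefinite Hessian at the origin:

```python
import sympy as sym

x, y = sym.symbols("x y")
h1 = sym.hessian(x**2 + y**4, (x, y)).subs({x: 0, y: 0})
h2 = sym.hessian(x**2 - y**4, (x, y)).subs({x: 0, y: 0})
print(h1)  # Matrix([[2, 0], [0, 0]]) -- identical for both functions
assert h1 == h2
```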

Obviously, `solve` did not find all solutions (there are infinitely many of them). You might consider `solveset` instead of `solve`, but in any case, handling infinitely many solutions is hard.

### Numeric optimization with SciPy

SciPy offers a lot of numerical minimization routines, including brute force (which is your method 2; it is generally very slow). These are powerful methods, but keep the following points in mind.

1. Each run will only find one minimum.
2. By replacing `f` with `-f`, you can also find a maximum.
3. Changing the starting point of the search (the argument `x0` of `minimize`) may yield another maximum or minimum. Still, you will never know whether there are other extrema that you have not seen yet.
4. None of this will find saddle points.
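For illustration, a sketch of points 1–3 with the same function written in plain NumPy (the starting points are chosen arbitrarily):

```python
import numpy as np
from scipy.optimize import minimize

def f(v):
    x, y = v
    return np.cos(x * 10) ** 2 + np.sin(y * 10) ** 2

# One run finds one nearby minimum; the result depends on x0.
res_min = minimize(f, x0=(0.1, 0.0))
# Maximize f by minimizing -f.
res_max = minimize(lambda v: -f(v), x0=(0.0, 0.1))
print(res_min.x, res_min.fun)   # a nearby local minimum, value ~0
print(res_max.x, -res_max.fun)  # a nearby local maximum, value ~2
```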

### Mixed strategy

Using `lambdify` one can turn a symbolic expression into a Python function that can be passed to SciPy numeric solvers.

```python
from scipy.optimize import fsolve

grad = sym.lambdify((x, y), gradient)
fsolve(lambda v: grad(v[0], v[1]), (1, 2))
```

This returns some stationary point, `[0.9424778 , 2.04203522]` in this example. Which point it is depends on the initial guess, which was (1, 2). Typically (but not always) you'll get a solution that's near the initial guess.

This has an advantage over the direct minimization approach in that saddle points can be detected as well. Still, it is going to be hard to find all solutions, as each run of `fsolve` brings up only one.