## Finding local maxima/minima with Numpy in a 1D numpy array

If you are looking for all entries in the 1D array `a` smaller than their neighbors, you can try

`numpy.r_[True, a[1:] < a[:-1]] & numpy.r_[a[:-1] < a[1:], True]`

You could also smooth your array before this step using `numpy.convolve()`. As far as I know, NumPy itself has no dedicated function for this, although `scipy.signal.argrelextrema` covers the same ground.
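For instance, a minimal sketch of the boolean-mask approach with a made-up sample array:

```python
import numpy as np

a = np.array([5, 3, 4, 2, 6, 1, 7])

# True where an entry is strictly smaller than both neighbors; the boundary
# entries count as minima when they are smaller than their single neighbor.
minima = np.r_[True, a[1:] < a[:-1]] & np.r_[a[:-1] < a[1:], True]
print(np.where(minima)[0])  # -> [1 3 5]
```

Note that flat regions (equal neighbors) are excluded by the strict comparisons; use `<=` on one side if you want to keep them.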

## Finding local maxima using find_peaks

You should change the line `peak_pos = numbers[peaks[0]]` to `peak_pos = peaks[0]`, because `peaks[0]` gives you the indices of the peaks, which are the actual x coordinates you want to pass to `ax.scatter`.

To get the peak at 422, we can set `threshold` to `None` (so that you aren't constraining yourself by vertical distance to the neighbors) and make `distance` larger, such as 10.

Then you can add the heights as text annotations:

```python
import pandas as pd
import numpy as np
from scipy.signal import find_peaks
import matplotlib.pyplot as plt

Data = [95,95,95,95,95,95,95,95,94,94,94,94,94,94,94,94,229,444,457,387,280,188,236,181,183,183,185,186,189,190,190,190,179,165,151,151,161,214,213,213,214,213,212,195,179,160,158,155,114,98,164,346,229,39,134,149,194,1,153,171,187,185,104,102,100,90,90,92,92,92,93,93,93,93,93,93,94,94,94,94,94,11,1,11,11,70,182,104,58,60,134,115,99,97,99,98,98,97,97,97,97,97,97,97,97,97,96,96,96,96,96,96,96,96,96,96,96,96,95,95,95,95,95,95,95,95,95,95,95,95,95,95,95,95,95,95,95,95,94,94,94,94,94,94,94,94,94,94,94,94,94,94,94,93,93,152,206,221,286,326,341,360,377,391,392,393,393,393,394,406,418,420,422,422,408,389,345,329,276,224,166,113,-6,91,91,91,442,324,387,389,387,443,393,393,393,393,391,381,379,377,303,174,131,0,115,112,112,111,111,109,107,106,104,104,103,102,101,101,101,101,100,100,1,1,12,13,65,138,87]

df = pd.DataFrame({'Data': Data})

# convert to 1D array
number_column = df.loc[:, 'Data']
numbers = number_column.values

# finding peaks for 1D array
# peaks = find_peaks(numbers, height=300, threshold=1, distance=5)
peaks = find_peaks(numbers, height=300, threshold=None, distance=10)
height = peaks[1]['peak_heights']  # list of heights of peaks
peak_pos = peaks[0]
print(peaks)

# plot the peaks
fig = plt.figure()
ax = fig.subplots()
ax.plot(numbers)
ax.scatter(peak_pos, height, color='r', s=25, label='Maxima')
ax.legend()

# add the heights as text annotations
for i, text in enumerate(height):
    if text.is_integer():
        ax.annotate(int(text), (peak_pos[i], height[i]), size=10)
    else:
        ax.annotate(text, (peak_pos[i], height[i]), size=10)

plt.show()
```

## Python finding local maxima/minima for multiple polynomials efficiently

Since you're finding the minimum of a polynomial, you can take advantage of the fact that it's easy to take the derivative of a polynomial, and that there are many good algorithms for finding the roots of a polynomial.

Here's how it works:

- First, take the derivative. All of the points which are minima will have a derivative of zero.
- Look for those zeros, aka find the roots of the derivative.
- Once we have the list of candidates, check that the solutions are real.
- Check that the solutions are within the bounds you set. (I don't know if you added bounds because you actually want the bounds, or to make it go faster. If it's the latter, feel free to remove this step.)
- Actually evaluate the candidates with the polynomial and find the smallest one.

Here's the code:

```python
import numpy as np
from numpy.polynomial import Polynomial

def find_extrema(poly, bounds):
    deriv = poly.deriv()
    extrema = deriv.roots()
    # Filter out complex roots
    extrema = extrema[np.isreal(extrema)]
    # Get real part of root
    extrema = np.real(extrema)
    # Apply bounds check
    lb, ub = bounds
    extrema = extrema[(lb <= extrema) & (extrema <= ub)]
    return extrema

def find_minimum(poly, bounds):
    extrema = find_extrema(poly, bounds)
    # Note: initially I tried taking the 2nd derivative to filter out local maxima.
    # This ended up being slower than just evaluating the function.

    # Either bound could end up being the minimum. Check those too.
    extrema = np.concatenate((extrema, bounds))

    # Check every candidate by evaluating the polynomial at each possible minimum,
    # and picking the minimum.
    value_at_extrema = poly(extrema)
    minimum_index = np.argmin(value_at_extrema)
    return extrema[minimum_index]

# Warning: Polynomial expects coefficients in the opposite order that you use.
poly = Polynomial([5, 1, 3, 6, 4])
print(find_minimum(poly, (-5, 5)))
```

This takes 162 microseconds on my computer, making it about 6x faster than the scipy.optimize solution. (The solution shown in the question takes 1.12 ms on my computer.)

### Edit: A faster alternative

Here's a faster approach. However, it abandons bounds checking, uses a deprecated API, and is generally harder to read.

```python
p = np.poly1d([4, 6, 3, 1, 5])  # Note: coefficients are in the opposite order of before

def find_minimum2(poly):
    roots = np.real(np.roots(poly.deriv()))
    return roots[np.argmin(poly(roots))]

print(find_minimum2(p))
```

This clocks in at 110 microseconds, making it roughly 10x faster than the original.

## Finding local maxima and minima of user defined functions

> find a list of the stationary points, of their values and locations, and of whether they are minima or maxima

This is, in general, an unsolvable problem. Method 1 (symbolic) is appropriate for it, but for complicated functions there is no symbolic solution for the stationary points (there is no general method for solving a system of two equations symbolically).

### Symbolic solution with SymPy

For simple functions like your example, SymPy will work fine. Here is a complete example of finding the stationary points and classifying them by the eigenvalues of the Hessian.

```python
import sympy as sym

x, y = sym.symbols("x y")
f = sym.cos(x*10)**2 + sym.sin(y*10)**2
gradient = sym.derive_by_array(f, (x, y))
hessian = sym.Matrix(2, 2, sym.derive_by_array(gradient, (x, y)))
```

So far, `hessian` is a symbolic 2-by-2 matrix: `[[200*sin(10*x)**2 - 200*cos(10*x)**2, 0], [0, -200*sin(10*y)**2 + 200*cos(10*y)**2]]`. Next, we find the stationary points by equating `gradient` to zero, and plug them into the Hessian one by one.

```python
stationary_points = sym.solve(gradient, (x, y))

for p in stationary_points:
    value = f.subs({x: p[0], y: p[1]})
    hess = hessian.subs({x: p[0], y: p[1]})
    eigenvals = hess.eigenvals()
    if all(ev > 0 for ev in eigenvals):
        print("Local minimum at {} with value {}".format(p, value))
    elif all(ev < 0 for ev in eigenvals):
        print("Local maximum at {} with value {}".format(p, value))
    elif any(ev > 0 for ev in eigenvals) and any(ev < 0 for ev in eigenvals):
        print("Saddle point at {} with value {}".format(p, value))
    else:
        print("Could not classify the stationary point at {} with value {}".format(p, value))
```

The last clause is necessary because when the Hessian is only **semi**definite, we cannot tell what kind of stationary point it is (`x**2 + y**4` and `x**2 - y**4` have the same Hessian at (0, 0) but different behavior). The output:

```
Saddle point at (0, 0) with value 1
Local maximum at (0, pi/20) with value 2
Saddle point at (0, pi/10) with value 1
Local maximum at (0, 3*pi/20) with value 2
Local minimum at (pi/20, 0) with value 0
Saddle point at (pi/20, pi/20) with value 1
Local minimum at (pi/20, pi/10) with value 0
Saddle point at (pi/20, 3*pi/20) with value 1
Saddle point at (pi/10, 0) with value 1
Local maximum at (pi/10, pi/20) with value 2
Saddle point at (pi/10, pi/10) with value 1
Local maximum at (pi/10, 3*pi/20) with value 2
Local minimum at (3*pi/20, 0) with value 0
Saddle point at (3*pi/20, pi/20) with value 1
Local minimum at (3*pi/20, pi/10) with value 0
Saddle point at (3*pi/20, 3*pi/20) with value 1
```

Obviously, `solve` did not find *all* solutions (there are infinitely many of them). Compare `solve` vs `solveset`, but in any case, handling infinitely many solutions is hard.
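The **semi**definite caveat mentioned above can be checked directly; both example functions have the same Hessian at the origin even though one has a minimum there and the other does not (a small sketch using `sym.hessian`):

```python
import sympy as sym

x, y = sym.symbols("x y")
f1 = x**2 + y**4  # (0, 0) is a local minimum
f2 = x**2 - y**4  # (0, 0) is not a local minimum

for func in (f1, f2):
    hess_at_origin = sym.hessian(func, (x, y)).subs({x: 0, y: 0})
    print(hess_at_origin)  # the same semidefinite matrix in both cases
```

The eigenvalues are 2 and 0, so neither the "all positive" nor the "all negative" test can fire, and the point goes to the "could not classify" branch.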

### Numeric optimization with SciPy

SciPy offers a lot of numerical minimization routines, including brute force (which is your method 2; generally it's very, very slow). These are powerful methods, but consider these points:

- Each run will only find one minimum.
- Replacing `f` with `-f`, you can also find a maximum.
- Changing the starting point of the search (argument `x0` of `minimize`) may yield another maximum or minimum. Still, you'll never know if there are other extrema that you have not seen yet.
- None of this will find saddle points.
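For example, a minimal sketch of the direct-minimization route on a plain-Python version of the example function (the starting point `(0.1, 0.01)` is an arbitrary choice):

```python
import numpy as np
from scipy.optimize import minimize

def f(v):
    x, y = v
    return np.cos(x * 10) ** 2 + np.sin(y * 10) ** 2

# Each run converges to the single local minimum in the basin of the
# starting point; here that is near (pi/20, 0), with value ~0.
result = minimize(f, x0=(0.1, 0.01))
print(result.x, result.fun)
```

Rerunning with a different `x0` (or with `-f` for maxima) explores other basins, but never tells you when you have found them all.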

### Mixed strategy

Using `lambdify`, one can turn a symbolic expression into a Python function that can be passed to SciPy's numeric solvers.

```python
from scipy.optimize import fsolve

grad = sym.lambdify((x, y), gradient)
fsolve(lambda v: grad(v[0], v[1]), (1, 2))
```

This returns some stationary point, `[0.9424778 , 2.04203522]` in this example. Which point it is depends on the initial guess, which was (1, 2). Typically (but not always) you'll get a solution that is near the initial guess.

This has an advantage over the direct minimization approach in that saddle points can be detected as well. Still, it is going to be hard to find all solutions, as each run of `fsolve` brings up only one.
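One common workaround is a multi-start sketch: run `fsolve` from a grid of initial guesses and de-duplicate the converged results (the grid size and rounding tolerance below are arbitrary choices, and the gradient is written out by hand):

```python
import numpy as np
from scipy.optimize import fsolve

def grad(v):
    x, y = v
    # gradient of cos(10*x)**2 + sin(10*y)**2
    return [-10 * np.sin(20 * x), 10 * np.sin(20 * y)]

solutions = set()
for x0 in np.linspace(0.0, 0.3, 4):
    for y0 in np.linspace(0.0, 0.3, 4):
        root, info, ier, msg = fsolve(grad, (x0, y0), full_output=True)
        if ier == 1:  # keep only runs that actually converged
            solutions.add(tuple(np.round(root, 6)))

print(sorted(solutions))
```

This still only finds stationary points near the grid, and classifying them (minimum, maximum, saddle) requires a separate Hessian check as in the symbolic section.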
