Python - How to Check List Monotonicity

It is better to avoid ambiguous terms like "increasing" or "decreasing", since they do not make clear whether equality is acceptable. Always use an explicit term such as "non-increasing" (equality is clearly accepted) or "strictly decreasing" (equality is clearly NOT accepted).

def strictly_increasing(L):
    return all(x < y for x, y in zip(L, L[1:]))

def strictly_decreasing(L):
    return all(x > y for x, y in zip(L, L[1:]))

def non_increasing(L):
    return all(x >= y for x, y in zip(L, L[1:]))

def non_decreasing(L):
    return all(x <= y for x, y in zip(L, L[1:]))

def monotonic(L):
    return non_increasing(L) or non_decreasing(L)
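
As a quick sanity check, the helpers above behave as follows:

print(strictly_increasing([1, 2, 3]))  # True
print(strictly_increasing([1, 2, 2]))  # False (equality is not accepted)
print(non_decreasing([1, 2, 2]))       # True  (equality is accepted)
print(monotonic([3, 3, 2, 1]))         # True  (non-increasing)
print(monotonic([1, 3, 2]))            # False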

How to compute the monotonicity value of a list or array in Python

Try:

np.diff(mylist).mean()

Or skip the mean to get the increment at each step.
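
For instance, with a hypothetical input (assuming NumPy is imported as np):

import numpy as np

mylist = [1, 3, 6, 10]
print(np.diff(mylist))         # [2 3 4] -- the increment on each step
print(np.diff(mylist).mean())  # 3.0    -- the average increment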

How to check whether a sequence is strictly monotonic, or has one turning point with both sides strictly monotonic?

Neither Python's standard library nor NumPy has a specific primitive to solve this task.
However, the traditional way in NumPy is to look at differences using np.diff(), or, even more simply, to evaluate array[1:] against array[:-1].

To investigate turning points, you could use np.argmin() and np.argmax(), respectively.
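
As a small illustration with hypothetical data, the element-wise comparison and the turning-point search look like this:

import numpy as np

arr = np.array([1, 3, 5, 4, 2])  # rises, then falls
print(arr[1:] > arr[:-1])        # [ True  True False False]
print(np.argmax(arr))            # 2 -- index of the single maximum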

The strict monotonicity condition corresponds to np.all(arr[1:] > arr[:-1]) (increasing) or np.all(arr[1:] < arr[:-1]) (decreasing).
The requirement of a single turning point (pivot) is equivalent to finding that turning point and checking that the sequence is separately monotonic on each side of it.
For multiple consecutive minima or maxima, using the first minimum or maximum encountered and checking the left branch for monotonicity (excluding that point) is enough to guarantee that two consecutive minima or maxima are correctly identified.

Hence, a simple implementation follows:

import numpy as np

def find_targeted_seq_np(seq):
    arr = np.asarray(seq)
    incr = arr[1:] > arr[:-1]
    decr = arr[1:] < arr[:-1]
    if np.all(incr) or np.all(decr):
        return True
    maximum = np.argmax(arr)
    if np.all(incr[:maximum]) and np.all(decr[maximum:]):
        return True
    minimum = np.argmin(arr)
    if np.all(decr[:minimum]) and np.all(incr[minimum:]):
        return True
    return False

(This is fundamentally the same idea as in @DaniMesejo's answer).
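
For example:

print(find_targeted_seq_np([1, 3, 5, 4, 2]))  # True  (one maximum turning point)
print(find_targeted_seq_np([1, 2, 1, 2, 1]))  # False (two turning points)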


Another option would be to use a combination of np.diff(), np.sign() and np.count_nonzero() to count the number of times the monotonicity changes. If this count is 0 or 1, the sequence is valid. Rejecting repeated elements is built into the counting of the sign changes (a repeated element produces a zero difference, which registers as a sign change), except when the repeats occur at the very beginning or end of the sequence; that situation must be checked explicitly.
This leads to a very concise solution:

import numpy as np

def find_targeted_seq_np2(seq):
    diffs = np.diff(seq)
    return (diffs[0] != 0 and diffs[-1] != 0
            and np.count_nonzero(np.diff(np.sign(diffs))) < 2)

(This is fundamentally the same idea as in @yonatansc97's answer, but without using np.isin() as suggested in the comments by @DaniMesejo).
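
To see the sign-change counting at work on one of the test inputs:

seq = [3, 2, 1, 0.4, 1, 2, 3]
diffs = np.diff(seq)                     # [-1.  -1.  -0.6  0.6  1.   1. ]
signs = np.sign(diffs)                   # [-1. -1. -1.  1.  1.  1.]
print(np.count_nonzero(np.diff(signs)))  # 1 -- a single change in monotonicity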


Alternatively, one can consider using explicit looping.
This has the advantage of being considerably more memory-efficient and having much better short-circuiting properties:

def find_targeted_seq(seq):
    n = len(seq)
    changes = 0
    x = seq[1]
    last_x = seq[0]
    if x > last_x:
        monotonic = 1
    elif x < last_x:
        monotonic = -1
    else:  # x == last_x
        return False
    for i in range(1, n):
        x = seq[i]
        if x == last_x:
            return False
        elif (x > last_x and monotonic == -1) or (x < last_x and monotonic == 1):
            changes += 1
            monotonic = -monotonic
            if changes > 1:
                return False
        last_x = x
    return True
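
Because it returns as soon as a repeated element or a second trend change is found, invalid inputs are rejected without scanning the whole sequence:

print(find_targeted_seq([1, 2, 10, 4, 8, 9, 2]))   # False (second trend change detected early)
print(find_targeted_seq([3, 2, 1, 0.4, 1, 2, 3]))  # True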

Additionally, if the type stability of the elements of the sequence can be guaranteed, then it can be easily accelerated via Numba:

import numpy as np
import numba as nb

_find_targeted_seq_nb = nb.njit(find_targeted_seq)

def find_targeted_seq_nb(seq):
    return _find_targeted_seq_nb(np.array(seq))
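
The first call triggers JIT compilation; subsequent calls run the compiled machine code. For example:

print(find_targeted_seq_nb((1, 2, 3, 4.1, 3, 2)))  # True (compiled on first call)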

For comparison, here is an implementation using pandas (which provides some primitives for monotonicity checks) and scipy.signal.argrelmin()/scipy.signal.argrelmax() for finding turning points (this code is substantially the same as in @DaniMesejo's answer):

from scipy.signal import argrelmin, argrelmax
import pandas as pd

def is_strictly_monotonic_increasing(s):
    return s.is_unique and s.is_monotonic_increasing

def is_strictly_monotonic_decreasing(s):
    return s.is_unique and s.is_monotonic_decreasing

def find_targeted_seq_pd(lst):
    ser = pd.Series(lst)
    if is_strictly_monotonic_increasing(ser) or is_strictly_monotonic_decreasing(ser):
        return True
    minima, *_ = argrelmin(ser.values)
    if len(minima) == 1:  # only one minimum turning point
        idx = minima[0]
        return is_strictly_monotonic_decreasing(ser[:idx]) and is_strictly_monotonic_increasing(ser[idx:])
    maxima, *_ = argrelmax(ser.values)
    if len(maxima) == 1:  # only one maximum turning point
        idx = maxima[0]
        return is_strictly_monotonic_increasing(ser[:idx]) and is_strictly_monotonic_decreasing(ser[idx:])
    return False

Applied to the given inputs, all of these solutions give the correct results:

data = (
    ((1, 3, 5, 6, 7), True),     # l1
    ((1, 2, 2, 3, 4), False),    # l2
    ((5, 4, 3, 2, 1), True),     # l3
    ((5, 5, 3, 2, 1), False),    # l4
    ((1, 2, 3, 4.1, 3, 2), True),     # l5
    ((3, 2, 1, 0.5, 1, 2), True),     # added in addition to the existing ones
    ((3, 2, 1, 0.4, 1, 2, 3), True),  # l6
    ((1, 2, 10, 4, 8, 9, 2), False),  # l7
    ((1, 2, 3, 4, 4, 3, 2, 1), False),  # l8
    ((-0.05701686, 0.57707936, -0.34602634, -0.02599778), False),  # l9
    ((0.13556905, 0.45859, -0.34602634, -0.09178798, 0.03044908), False),  # l10
    ((-0.38643975, -0.09178798, 0.57707936, -0.05701686, 0.00649252), False),  # l11
)

funcs = (find_targeted_seq_np, find_targeted_seq_np2, find_targeted_seq_pd,
         find_targeted_seq, find_targeted_seq_nb)

for func in funcs:
    print(func.__name__, all(func(seq) == result for seq, result in data))
# find_targeted_seq_np True
# find_targeted_seq_np2 True
# find_targeted_seq_pd True
# find_targeted_seq True
# find_targeted_seq_nb True

Timewise, some simple benchmarks on the proposed data clearly indicate that direct looping (with or without Numba acceleration) is the fastest.
The second NumPy approach is significantly faster than the first, while the pandas-based approach is the slowest:

for func in funcs:
    print(func.__name__)
    %timeit [func(seq) == result for seq, result in data]
    print()

# find_targeted_seq_np
# 1000 loops, best of 3: 530 µs per loop

# find_targeted_seq_np2
# 10000 loops, best of 3: 187 µs per loop

# find_targeted_seq_pd
# 100 loops, best of 3: 4.68 ms per loop

# find_targeted_seq
# 100000 loops, best of 3: 14.6 µs per loop

# find_targeted_seq_nb
# 10000 loops, best of 3: 19.9 µs per loop

While direct looping is faster than the NumPy-based approaches on the given input, the latter should scale better with larger input sizes. The Numba approach is likely to be faster than the NumPy approaches at all scales.

Count monotonic items in a python list

It is much easier to tackle this problem by breaking it down into smaller steps. First, group the items by their trend (increasing, equal, or decreasing), keeping track of the previous and current values as you iterate.

In a second step, break those groups down into the smaller sublists needed for the desired output.

I would encourage you to follow through the comments in code and try to implement the steps yourself.

x = [1, 2, 3, 3, 2, 0]
prev = x[0]
curr = x[1]  # keep track of two items together during iteration, previous and current
result = {"increasing": [],
          "equal": [],
          "decreasing": [],
          }

def two_item_relation(prev, curr):  # compare two items; effectively a 3-way flag
    if prev < curr:
        return "increasing"
    elif prev == curr:
        return "equal"
    else:
        return "decreasing"

prev_state = two_item_relation(prev, curr)  # keep track of previous state
result[prev_state].append([prev])  # handle first item of list

x_shifted = iter(x)
next(x_shifted)  # x_shifted is now similar to x[1:]

for curr in x_shifted:
    curr_state = two_item_relation(prev, curr)
    if prev_state == curr_state:  # current and previous states are the same trend
        result[curr_state][-1].append(curr)
    else:  # states differ, i.e. a change in trend
        result[curr_state].append([])
        result[curr_state][-1].extend([prev, curr])
    prev = curr
    prev_state = curr_state

def all_subcombinations(lst):  # given a list, get all "sublists" using sliding windows
    if len(lst) < 3:
        return [lst]
    else:
        result = []
        for i in range(2, len(lst) + 1):
            for j in range(len(lst) - i + 1):
                result.extend([lst[j:j + i]])
        return result

print(" all Outputs ")
result_all_combinations = {}

for k, v in result.items():
result_all_combinations[k] = []
for item in v:
result_all_combinations[k].extend(all_subcombinations(item))

print(result_all_combinations)
#Output:
{'increasing': [[1, 2], [2, 3], [1, 2, 3]],
'equal': [[3, 3]],
'decreasing': [[3, 2], [2, 0], [3, 2, 0]]}

How do you split a list into monotonically increasing/decreasing lists?

The starting entry of each sublist can be detected by looking at the locations where the difference from the previous element is positive. We can then split the array at these locations; since np.diff shrinks the array size by 1, we add 1 to its output to get indices with respect to the original array:

>>> sub_lists = np.split(A, np.where(np.diff(A) > 0)[0] + 1)
>>> sub_lists
[array([100, 83, 82, 51, 45, 29]),
 array([100, 100, 88, 88, 76, 76, 76, 59, 10]),
 array([12]),
 array([36]),
 array([100, 100, 86, 81, 79, 65, 65, 9]),
 array([10, 8])]

Two kinds of filtering need to be done on this list of arrays: first, discard any array with a single item; second, discard those whose first entry is less than 80. Therefore,

>>> result = [sub for sub in sub_lists if sub.size > 1 and sub[0] > 80]
>>> result
[array([100, 83, 82, 51, 45, 29]),
 array([100, 100, 88, 88, 76, 76, 76, 59, 10]),
 array([100, 100, 86, 81, 79, 65, 65, 9])]

We can wrap these in a function:

import numpy as np

def split_decreasing(arr, thre=80):
    """
    Splits the given array `arr` into monotonically decreasing subarrays
    of size at least 2 whose first entries are at least `thre`.
    """
    split_points = np.where(np.diff(arr) > 0)[0] + 1
    sub_lists = np.split(arr, split_points)
    result = [sub for sub in sub_lists if sub.size > 1 and sub[0] > thre]
    return result

Sample runs:

>>> split_decreasing([63, 44, 43, 37, 30, 30, 27, 95, 91, 70, 65, 62, 62, 56, 56])

[array([95, 91, 70, 65, 62, 62, 56, 56])]

>>> split_decreasing(np.arange(10))
[]

>>> split_decreasing([12, 11, 7, 9, 7], thre=80)
[]

>>> split_decreasing([12, 11, 7, 9, 7], thre=10)
[array([12, 11, 7])]

>>> split_decreasing([12, 11, 7, 9, 7], thre=5)
[array([12, 11, 7]), array([9, 7])]

>>> split_decreasing([100, 83, 82, 51, 45, 29, 100, 100, 88, 88, 76, 76, 76,
...                   59, 10, 12, 36, 100, 100, 86, 81, 79, 65, 65, 9, 10, 8])
[array([100, 83, 82, 51, 45, 29]),
 array([100, 100, 88, 88, 76, 76, 76, 59, 10]),
 array([100, 100, 86, 81, 79, 65, 65, 9])]

Find monotonic sequences in a list?

No loop!
Well at least, no explicit looping...

import itertools

def process(lst):
    # Guard clause against empty lists
    if len(lst) < 1:
        return lst

    # use a dictionary here to work around closure limitations
    state = {'prev': lst[0], 'n': 0}

    def grouper(x):
        if x < state['prev']:
            state['n'] += 1

        state['prev'] = x
        return state['n']

    return [list(g) for k, g in itertools.groupby(lst, grouper)]

Usage (works with both Python 2 and Python 3):

>>> data = [45,78,120,47,58,50,32,34]
>>> print (list(process(data)))
[[45, 78, 120], [47, 58], [50], [32, 34]]

Joking apart, if you need to group items in a list, itertools.groupby deserves a little attention. It is not always the easiest or best answer, but it is worth a try...
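
If groupby is unfamiliar: it collects consecutive elements whose key-function values are equal, so a key that increments at every descent yields exactly the monotonic runs. A minimal illustration of the grouping behaviour:

import itertools

# consecutive equal keys end up in the same group:
print([list(g) for _, g in itertools.groupby([1, 1, 2, 2, 2, 1])])
# [[1, 1], [2, 2, 2], [1]]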


EDIT: If you don't like closures -- and prefer using an object to hold the state, here is an alternative:

class process:
    def __call__(self, lst):
        if len(lst) < 1:
            return lst

        self.prev = lst[0]
        self.n = 0

        return [list(g) for k, g in itertools.groupby(lst, self._grouper)]

    def _grouper(self, x):
        if x < self.prev:
            self.n += 1

        self.prev = x
        return self.n

data = [45, 78, 120, 47, 58, 50, 32, 34]
print(list(process()(data)))

EDIT 2: Since I prefer closures, but @torek doesn't like the dictionary syntax, here is a third variation on the same solution:

import itertools

def process(lst):
    # Guard clause against empty lists
    if len(lst) < 1:
        return lst

    # use an object here to work around closure limitations
    state = type('State', (object,), dict(prev=lst[0], n=0))

    def grouper(x):
        if x < state.prev:
            state.n += 1

        state.prev = x
        return state.n

    return [list(g) for k, g in itertools.groupby(lst, grouper)]

data = [45, 78, 120, 47, 58, 50, 32, 34]
print(list(process(data)))

Determine if the input list is strictly increasing

def almost_increasing_sequence(sequence):
    if len(sequence) < 3:
        return True

    a, b, *sequence = sequence
    skipped = 0
    for c in sequence:
        if a < b < c:  # XXX: all three in order
            a, b = b, c
            continue
        elif b < c:    # !XX: drop `a`
            a, b = b, c
        elif a < c:    # X!X: drop `b`
            a, b = a, c
        skipped += 1
        if skipped == 2:
            return False
    return a < b

if __name__ == '__main__':
    assert almost_increasing_sequence([]) is True
    assert almost_increasing_sequence([1]) is True
    assert almost_increasing_sequence([1, 2]) is True
    assert almost_increasing_sequence([1, 2, 3]) is True
    assert almost_increasing_sequence([3, 1, 2]) is True
    assert almost_increasing_sequence([1, 2, 3, 0, 4, 5, 6]) is True
    assert almost_increasing_sequence([1, 2, 3, 0]) is True
    assert almost_increasing_sequence([1, 2, 0, 3]) is True
    assert almost_increasing_sequence([10, 1, 2, 3, 4, 5]) is True
    assert almost_increasing_sequence([1, 2, 10, 3, 4]) is True
    assert almost_increasing_sequence([1, 2, 3, 12, 4, 5]) is True

    assert almost_increasing_sequence([3, 2, 1]) is False
    assert almost_increasing_sequence([1, 2, 0, -1]) is False
    assert almost_increasing_sequence([5, 6, 1, 2]) is False
    assert almost_increasing_sequence([1, 2, 3, 0, -1]) is False
    assert almost_increasing_sequence([10, 11, 12, 2, 3, 4, 5]) is False

check if elements of an array are monotonic

You can compare the array created by numpy.diff with a zero timedelta:

import datetime
import numpy as np

b = np.diff(realtime) > datetime.timedelta(0)
print(b)
# [ True  True  True  True  True]

In pandas you could convert to a pd.Series object and use diff:

import pandas as pd

b = pd.Series(realtime).diff()
# replace the first NaT value with 1
b.iat[0] = 1

print(b > pd.Timedelta(0))
# 0    True
# 1    True
# 2    True
# 3    True
# 4    True
# 5    True
# dtype: bool

realtime is automatically cast to np.datetime64, from which diff produces Timedelta objects.
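
To reduce either element-wise result to a single yes/no monotonicity verdict, wrap it in np.all(), or use the built-in pandas property; a minimal sketch, assuming the realtime array from the timings below:

print(np.all(np.diff(realtime) > datetime.timedelta(0)))  # strictly increasing?
print(pd.Series(realtime).is_monotonic_increasing)        # non-decreasing?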

Timings:

realtime = np.array([datetime.datetime(2017, 11, 3, 20, 25, 10, 724000),
                     datetime.datetime(2017, 11, 3, 20, 25, 10, 744000),
                     datetime.datetime(2017, 11, 3, 20, 25, 10, 764000),
                     datetime.datetime(2017, 11, 4, 2, 13, 44, 704000),
                     datetime.datetime(2017, 11, 4, 2, 13, 44, 724000),
                     datetime.datetime(2017, 11, 4, 2, 13, 44, 744000)], dtype=object)

realtime = np.random.choice(realtime, size=1045702)

In [256]: %timeit[x.total_seconds() > 0 for x in np.diff(realtime)]
1 loop, best of 3: 382 ms per loop

In [257]: %timeit np.diff(realtime) > datetime.timedelta(0)
10 loops, best of 3: 88.2 ms per loop

In [258]: %timeit (pd.Series(realtime).diff() > pd.Timedelta(0))
10 loops, best of 3: 147 ms per loop

In [259]: %%timeit
...: b = pd.Series(realtime).diff()
...: b.iat[0] = 1
...:
...: b > pd.Timedelta(0)
...:
10 loops, best of 3: 149 ms per loop

