How to Check If a Sequence of Numbers Is Monotonically Increasing (Or Decreasing)

How to check if a sequence of numbers is monotonically increasing (or decreasing)?

Another approach: check if

all(x == cummax(x))

or

all(x == cummin(x))

for monotonically increasing or decreasing, respectively. It seems that cummax is a lot faster than diff and also uses less memory:

> x <- seq_len(1e7)
> system.time(all(x == cummax(x)))
user system elapsed
0.11 0.00 0.11
> system.time(all(diff(x) >= 0))
user system elapsed
0.47 0.13 0.59

> x <- seq_len(1e8)
> system.time(all(x == cummax(x)))
user system elapsed
1.06 0.09 1.16
> system.time(all(diff(x) >= 0))
Error: cannot allocate vector of size 381.5 Mb
In addition: Warning messages:
1: Reached total allocation of 1535Mb: see help(memory.size)
2: Reached total allocation of 1535Mb: see help(memory.size)
3: Reached total allocation of 1535Mb: see help(memory.size)
4: Reached total allocation of 1535Mb: see help(memory.size)
Timing stopped at: 1.96 0.38 2.33

My guess as to why cummax is faster than diff is that it only has to compare numbers, which is cheaper than computing differences.

Edit: at your (Ali's) request, here are additional tests including your answer. (Note that I am now running on a different machine, so the following results should not be compared with the ones above.)

> x <- seq_len(1e7)
> system.time(x == cummax(x))
user system elapsed
0.316 0.096 0.416
> system.time(all(diff(x) >= 0))
user system elapsed
4.364 0.240 4.632
> system.time(x[-1] - x[-length(x)] >= 0)
user system elapsed
3.828 0.380 4.227
> system.time(all(x[-1] >= x[-length(x)]))
user system elapsed
2.572 0.288 2.865
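
As a side note, the cumulative-maximum trick translates directly to other languages. Below is a rough C++ sketch of the same check (my own translation, not part of the answer above): every element is compared against the running maximum seen so far, so the test performs only comparisons and never computes differences.

#include <algorithm>
#include <iostream>
#include <limits>
#include <vector>

// Sketch of the cummax idea: a sequence is non-decreasing exactly when every
// element equals the running maximum of the elements seen so far.
bool is_non_decreasing(const std::vector<double>& x) {
    double running_max = -std::numeric_limits<double>::infinity();
    for (double v : x) {
        running_max = std::max(running_max, v);
        if (v != running_max)   // v dropped below an earlier value
            return false;
    }
    return true;
}

int main() {
    std::cout << std::boolalpha
              << is_non_decreasing({1, 2, 2, 5}) << "\n"   // true
              << is_non_decreasing({1, 3, 2}) << "\n";     // false
}

Like the cummax version, this allocates no intermediate vector, which mirrors the memory advantage seen in the benchmark above.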

Code that checks if the sequence is monotonically increasing

You should make the following changes; I have added comments where required. This assumes that you wish to read the entire input.

#include <stdio.h>

int main(void) // see the link below on what main() should return
{
    int i = 0, numCurr, numPrev = 0; // current number and previous number
    int flag = 0;
    printf("Enter input: ");

    do {
        scanf("%d", &numCurr);

        // start comparing from the second element and ignore the -1 terminator
        if (i != 0 && numCurr != -1 && numPrev > numCurr)
            flag = 1;

        numPrev = numCurr;
        i++;
    } while (numCurr != -1);

    if (flag == 0)
        printf("The sequence is monotonically increasing\n");
    else
        printf("The sequence is not monotonically increasing\n");

    return 0;
}

Also have a look at: What should main() return in C and C++?

How to check whether a sequence is strictly monotonic, or has exactly one turning point with both sides strictly monotonic?

Neither Python's standard library nor NumPy has a specific primitive to solve this task.
However, the traditional way in NumPy is to look at differences using np.diff(), or even more simply to compare array[1:] against array[:-1].

To investigate turning points, you could use np.argmin() and np.argmax(), respectively.

The strict monotonicity condition corresponds to np.all(arr[1:] > arr[:-1]) (increasing) or np.all(arr[1:] < arr[:-1]) (decreasing).
The requirement of one turning point (pivot) is equivalent to finding that turning point and checking that the sequence is monotonic on each side of it.
For multiple consecutive minima or maxima, using the first minimum or maximum encountered and checking the left branch for monotonicity excluding that point is enough to guarantee that consecutive minima or maxima are identified correctly.

Hence, a simple implementation follows:

import numpy as np

def find_targeted_seq_np(seq):
    arr = np.asarray(seq)
    incr = arr[1:] > arr[:-1]
    decr = arr[1:] < arr[:-1]
    if np.all(incr) or np.all(decr):
        return True
    maximum = np.argmax(arr)
    if np.all(incr[:maximum]) and np.all(decr[maximum:]):
        return True
    minimum = np.argmin(arr)
    if np.all(decr[:minimum]) and np.all(incr[minimum:]):
        return True
    return False

(This is fundamentally the same idea as in @DaniMesejo's answer).


Another option would be to use a combination of np.diff(), np.sign() and np.count_nonzero() to count the number of times the monotonicity changes. If this count is 0 or 1, the sequence is valid. Rejecting repeated elements is built into the counting of the sign changes, except when the repeated elements are at the beginning or the end of the sequence, and that situation must be checked explicitly.
This leads to a very concise solution:

import numpy as np

def find_targeted_seq_np2(seq):
    diffs = np.diff(seq)
    return \
        diffs[0] != 0 and diffs[-1] != 0 \
        and np.count_nonzero(np.diff(np.sign(diffs))) < 2

(This is fundamentally the same idea as in @yonatansc97's answer, but without using np.isin() as suggested in the comments by @DaniMesejo).


Alternatively, one can consider using explicit looping.
This has the advantage of being considerably more memory efficient and has much better short-circuiting properties:

def find_targeted_seq(seq):
    n = len(seq)
    changes = 0
    x = seq[1]
    last_x = seq[0]
    if x > last_x:
        monotonic = 1
    elif x < last_x:
        monotonic = -1
    else:  # x == last_x
        return False
    for i in range(1, n):
        x = seq[i]
        if x == last_x:
            return False
        elif (x > last_x and monotonic == -1) or (x < last_x and monotonic == 1):
            changes += 1
            monotonic = -monotonic
            if changes > 1:
                return False
        last_x = x
    return True

Additionally, if the type stability of the elements of the sequence can be guaranteed, then it can be easily accelerated via Numba:

import numba as nb

_find_targeted_seq_nb = nb.njit(find_targeted_seq)

def find_targeted_seq_nb(seq):
    return _find_targeted_seq_nb(np.array(seq))

For comparison, here is an implementation using pandas (which provides some primitives for monotonicity checks) and scipy.signal.argrelmin()/scipy.signal.argrelmax() for finding turning points (this code is substantially the same as found in @DaniMesejo's answer):

from scipy.signal import argrelmin, argrelmax
import pandas as pd

def is_strictly_monotonic_increasing(s):
    return s.is_unique and s.is_monotonic_increasing

def is_strictly_monotonic_decreasing(s):
    return s.is_unique and s.is_monotonic_decreasing

def find_targeted_seq_pd(lst):
    ser = pd.Series(lst)
    if is_strictly_monotonic_increasing(ser) or is_strictly_monotonic_decreasing(ser):
        return True
    minima, *_ = argrelmin(ser.values)
    if len(minima) == 1:  # only one minimum turning point
        idx = minima[0]
        return is_strictly_monotonic_decreasing(ser[:idx]) and is_strictly_monotonic_increasing(ser[idx:])
    maxima, *_ = argrelmax(ser.values)
    if len(maxima) == 1:  # only one maximum turning point
        idx = maxima[0]
        return is_strictly_monotonic_increasing(ser[:idx]) and is_strictly_monotonic_decreasing(ser[idx:])
    return False

Applied to the given inputs, these solutions all give the correct results:

data = (
    ((1, 3, 5, 6, 7), True),  # l1
    ((1, 2, 2, 3, 4), False),  # l2
    ((5, 4, 3, 2, 1), True),  # l3
    ((5, 5, 3, 2, 1), False),  # l4
    ((1, 2, 3, 4.1, 3, 2), True),  # l5
    ((3, 2, 1, 0.5, 1, 2), True),  # this value was added in addition to the existing ones
    ((3, 2, 1, 0.4, 1, 2, 3), True),  # l6
    ((1, 2, 10, 4, 8, 9, 2), False),  # l7
    ((1, 2, 3, 4, 4, 3, 2, 1), False),  # l8
    ((-0.05701686, 0.57707936, -0.34602634, -0.02599778), False),  # l9
    ((0.13556905, 0.45859, -0.34602634, -0.09178798, 0.03044908), False),  # l10
    ((-0.38643975, -0.09178798, 0.57707936, -0.05701686, 0.00649252), False),  # l11
)

funcs = find_targeted_seq_np, find_targeted_seq_np2, find_targeted_seq_pd, find_targeted_seq, find_targeted_seq_nb

for func in funcs:
    print(func.__name__, all(func(seq) == result for seq, result in data))
# find_targeted_seq_np True
# find_targeted_seq_np2 True
# find_targeted_seq_pd True
# find_targeted_seq True
# find_targeted_seq_nb True

Timewise, some simple benchmarks on the proposed data clearly indicate that direct looping (with or without Numba acceleration) is the fastest.
The second NumPy approach is significantly faster than the first NumPy approach, while the pandas-based approach is the slowest:

for func in funcs:
    print(func.__name__)
    %timeit [func(seq) == result for seq, result in data]
    print()

# find_targeted_seq_np
# 1000 loops, best of 3: 530 µs per loop

# find_targeted_seq_np2
# 10000 loops, best of 3: 187 µs per loop

# find_targeted_seq_pd
# 100 loops, best of 3: 4.68 ms per loop

# find_targeted_seq
# 100000 loops, best of 3: 14.6 µs per loop

# find_targeted_seq_nb
# 10000 loops, best of 3: 19.9 µs per loop

While direct looping is faster than the NumPy-based approaches on the given input, the latter should scale better with larger input sizes. The Numba approach is likely to be faster than the NumPy approaches at all scales.

Python - How to check list monotonicity

It's better to avoid ambiguous terms like "increasing" or "decreasing", as it's not clear whether equality is acceptable. Instead, always use terms such as "non-increasing" (equality is clearly accepted) or "strictly decreasing" (equality is clearly NOT accepted).

def strictly_increasing(L):
    return all(x < y for x, y in zip(L, L[1:]))

def strictly_decreasing(L):
    return all(x > y for x, y in zip(L, L[1:]))

def non_increasing(L):
    return all(x >= y for x, y in zip(L, L[1:]))

def non_decreasing(L):
    return all(x <= y for x, y in zip(L, L[1:]))

def monotonic(L):
    return non_increasing(L) or non_decreasing(L)

Determine whether each level is monotonically increasing

We can do this with a group by operation

library(dplyr)
df %>%
  group_by(Serial_number) %>%
  summarise(index = all(sign(Amplification -
    lag(Amplification, default = first(Amplification))) >= 0))

Or with by from base R. As we are passing the complete dataset, x (the argument of the anonymous function) is the per-group subset of the data, from which we can extract the column of interest with $ or [[:

by(df, list(df$Serial_number), FUN = function(x) all(sign(diff(x$Amplification))>=0))

Or using data.table

library(data.table)
setDT(df)[, .(index = all(sign(Amplification - shift(Amplification,
    fill = first(Amplification))) >= 0)), .(Serial_number)]

data

df <- structure(list(Serial_number = c(608004648L, 608004648L, 608004648L, 
608004648L, 608004648L, 608004648L, 608004649L, 608004649L, 608004649L,
608004649L, 608004649L, 608004649L, 608004650L, 608004650L, 608004650L,
608004650L, 608004650L, 608004650L), Amplification = c(111.997,
123.673, 137.701, 154.514, 175.331, 201.379, 118.753, 131.739,
147.294, 166.238, 189.841, 220.072, 115.474, 127.838, 142.602,
160.452, 182.732, 211.035), Voltage = c(379.98, 381.968, 383.979,
385.973, 387.98, 389.968, 378.08, 380.085, 382.082, 384.077,
386.074, 388.073, 382.066, 384.063, 386.064, 388.056, 390.06,
392.065)), class = "data.frame", row.names = c("1", "2", "3",
"4", "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15",
"16", "17", "18"))

How to check if a sequence of numbers has an increasing/decreasing trend in C++

I would count the number of increases versus the number of decreases, which should give you an idea of whether there is an overall increasing or decreasing trend.
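
A minimal sketch of that counting idea is below (the trend function and its return strings are my own naming and are not from the answer above; treat it as a starting point rather than a complete solution):

#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// Tally how often consecutive pairs go up versus down and report the
// dominant direction; a tie in the tallies means there is no clear trend.
std::string trend(const std::vector<double>& v) {
    int ups = 0, downs = 0;
    for (std::size_t i = 1; i < v.size(); ++i) {
        if (v[i] > v[i - 1]) ++ups;
        else if (v[i] < v[i - 1]) ++downs;
    }
    if (ups > downs) return "overall increasing trend";
    if (downs > ups) return "overall decreasing trend";
    return "no clear trend";
}

int main() {
    std::cout << trend({1, 3, 2, 4, 5, 6}) << "\n";  // overall increasing trend
    std::cout << trend({6, 5, 7, 4, 3, 2}) << "\n";  // overall decreasing trend
}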

Count monotonic items in a python list

It is much easier to tackle this problem by breaking it down into smaller steps. First, group the items by their trend (increasing, equal, or decreasing) by keeping track of the previous and current values.

In the second step, gather all the results and break them down into the smaller lists needed for the desired output.

I would encourage you to follow through the comments in code and try to implement the steps yourself.

x = [1, 2, 3, 3, 2, 0]
prev = x[0]
curr = x[1]  # keep track of two items together during iteration, previous and current
result = {"increasing": [],
          "equal": [],
          "decreasing": [],
          }

def two_item_relation(prev, curr):  # compare two items in list; effectively a 3-way flag
    if prev < curr:
        return "increasing"
    elif prev == curr:
        return "equal"
    else:
        return "decreasing"

prev_state = two_item_relation(prev, curr)  # keep track of previous state
result[prev_state].append([prev])  # handle first item of list

x_shifted = iter(x)
next(x_shifted)  # x_shifted is now similar to x[1:]

for curr in x_shifted:
    curr_state = two_item_relation(prev, curr)
    if prev_state == curr_state:  # current and previous states are the same
        result[curr_state][-1].append(curr)
    else:  # states differ, i.e. a change in trend
        result[curr_state].append([])
        result[curr_state][-1].extend([prev, curr])
    prev = curr
    prev_state = curr_state

def all_subcombinations(lst):  # given a list, get all "sublists" using sliding windows
    if len(lst) < 3:
        return [lst]
    else:
        result = []
        for i in range(2, len(lst) + 1):
            for j in range(len(lst) - i + 1):
                result.extend([lst[j:j + i]])
        return result

print(" all Outputs ")
result_all_combinations = {}

for k, v in result.items():
result_all_combinations[k] = []
for item in v:
result_all_combinations[k].extend(all_subcombinations(item))

print(result_all_combinations)
# Output:
# {'increasing': [[1, 2], [2, 3], [1, 2, 3]],
#  'equal': [[3, 3]],
#  'decreasing': [[3, 2], [2, 0], [3, 2, 0]]}

