Peak-finding algorithm for Python/SciPy

The function scipy.signal.find_peaks is, as its name suggests, useful for this. But to get a good peak extraction, it is important to understand its parameters width, threshold, distance, and above all prominence.

According to my tests and the documentation, prominence is the key concept for keeping the good peaks and discarding the noisy ones.

What is (topographic) prominence? It is "the minimum height necessary to descend to get from the summit to any higher terrain", as illustrated here:

[Image: illustration of topographic prominence]

The idea is:

The higher the prominence, the more "important" the peak is.
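As a quick illustration on a made-up toy signal (scipy.signal.peak_prominences lets you inspect the prominences directly):

import numpy as np
from scipy.signal import find_peaks, peak_prominences

x = np.array([0, 1, 0, 3, 0.5, 2, 0])        # toy signal with three local maxima
peaks, _ = find_peaks(x)                      # -> array([1, 3, 5])
prominences = peak_prominences(x, peaks)[0]   # -> array([1. , 3. , 1.5])
# the tall, isolated peak at index 3 has by far the largest prominence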

Test:

[Image: four-panel comparison of the distance, prominence, width, and threshold parameters on a noisy frequency-varying sinusoid]

I used a (noisy) frequency-varying sinusoid on purpose because it exposes many of the difficulties. The width parameter is not very useful here: if you set the minimum width too high, it cannot track the very close peaks in the high-frequency part, and if you set it too low, you get many unwanted peaks in the left part of the signal. distance has the same problem. threshold only compares a sample with its direct neighbours, which does not help here. prominence is the one that gives the best solution. Note that you can combine several of these parameters!

Code:

import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import find_peaks

# Noisy sinusoid whose frequency sweeps upward (a chirp): hard for fixed width/distance settings
x = np.sin(2*np.pi*(2**np.linspace(2, 10, 1000))*np.arange(1000)/48000) + np.random.normal(0, 1, 1000) * 0.15

peaks, _ = find_peaks(x, distance=20)     # minimum spacing of 20 samples between peaks
peaks2, _ = find_peaks(x, prominence=1)   # BEST!
peaks3, _ = find_peaks(x, width=20)       # minimum peak width of 20 samples
peaks4, _ = find_peaks(x, threshold=0.4)  # required vertical distance to the direct neighbouring samples; pretty useless here
plt.subplot(2, 2, 1)
plt.plot(peaks, x[peaks], "xr"); plt.plot(x); plt.legend(['distance'])
plt.subplot(2, 2, 2)
plt.plot(peaks2, x[peaks2], "ob"); plt.plot(x); plt.legend(['prominence'])
plt.subplot(2, 2, 3)
plt.plot(peaks3, x[peaks3], "vg"); plt.plot(x); plt.legend(['width'])
plt.subplot(2, 2, 4)
plt.plot(peaks4, x[peaks4], "xk"); plt.plot(x); plt.legend(['threshold'])
plt.show()
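A note on combining parameters: find_peaks applies all the conditions you pass at once, and its second return value is a dict of per-peak measurements for those conditions. For example:

peaks5, props = find_peaks(x, prominence=1, distance=10)  # conditions combined
print(props["prominences"])  # prominence of each surviving peak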

How to find maximum peak location, index with scipy?

If you want to find the highest of the peaks identified by scipy.signal.find_peaks then you can do the following:

import numpy as np
from scipy.signal import find_peaks
import matplotlib.pyplot as plt

# Example data
x = np.linspace(-4000, 4000) # equal spacing needed for find_peaks
y = np.sin(x / 1000) + 0.1 * np.random.rand(*x.shape)

# Find peaks
i_peaks, _ = find_peaks(y)

# Find the index from the maximum peak
i_max_peak = i_peaks[np.argmax(y[i_peaks])]

# Find the x value from that index
x_max = x[i_max_peak]

# Plot the figure
plt.plot(x, y)
plt.plot(x[i_peaks], y[i_peaks], 'x')
plt.axvline(x=x_max, ls='--', color="k")
plt.show()

[Image: signal with all detected peaks marked and a dashed vertical line at the highest peak]

If you just want the highest point overall, then just use np.argmax, as Sembei suggests.
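For completeness, that simpler case is just:

i_max = np.argmax(y)                  # index of the highest sample overall
x_at_max, y_max = x[i_max], y[i_max]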

Issues with scipy find_peaks function when used on an inverted dataset

You complicate your task by trying to find all valleys. That will always be difficult, because valleys do not stand out from the surrounding data as well as your peaks do. Whatever parameters you pass to find_peaks, it will sometimes identify two valleys after a peak and sometimes none. Instead, just identify the local minimum after each peak:

import pandas as pd
import matplotlib.pyplot as plt
from scipy.signal import find_peaks

#sample data
from scipy.datasets import electrocardiogram  # scipy.misc.electrocardiogram on SciPy < 1.10
x = electrocardiogram()[2000:4000]
date_range = pd.date_range("20210116", periods=x.size, freq="10ms")
df = pd.DataFrame({"Timestamp": date_range, "Mean_values": x})

x = df['Timestamp']
y = df['Mean_values']

fig, (ax1, ax2, ax3) = plt.subplots(3, figsize=(12, 8))

#peak finding
peaks, _ = find_peaks(y, prominence=1)

ax1.plot(x[peaks], y[peaks], "ob")
ax1.plot(x, y)
ax1.legend(['Prominence'])

#valley finder general
valleys, _ = find_peaks(-y, prominence=1)

ax2.plot(x[valleys], y[valleys], "vg")
ax2.plot(x, y)
ax2.legend(['Valleys without filtering'])

#valley finding restricted to a short time period after a peak
#set time window, e.g., for 200 ms
time_window_size = pd.Timedelta(200, unit="ms")
time_of_peaks = x[peaks]
peak_end = x.searchsorted(time_of_peaks + time_window_size)
#in case of evenly spaced data points, this can be simplified
#and you just add n data points to your peak index array
#peak_end = peaks + n
true_valleys = peaks.copy()
for i, (start, stop) in enumerate(zip(peaks, peak_end)):
    true_valleys[i] = start + y[start:stop].argmin()

ax3.plot(x[true_valleys], y[true_valleys], "sr")
ax3.plot(x, y)
ax3.legend(['Valleys after events'])

plt.show()

Sample output:
[Image: three panels showing peaks, unfiltered valleys, and valleys restricted to the window after each peak]

I am not sure what you intend to do with these minima, but if you are only interested in baseline shifts, you can directly calculate the peak-wise baseline values like

baseline_per_peak = peaks.copy().astype(float)
for i, (start, stop) in enumerate(zip(peaks, peak_end)):
    baseline_per_peak[i] = y[start:stop].mean()

print(baseline_per_peak)

Sample output:

[-0.71125 -0.203    0.29225  0.72825  0.6835   0.79125  0.51225  0.23
0.0345 -0.3945 -0.48125 -0.4675 ]

This can, of course, also easily be adapted to the period before the peak:

#valley in the short time period before a peak
#set time window, e.g., for 200 ms
time_window_size = pd.Timedelta(200, unit="ms")
time_of_peaks = x[peaks]
peak_start = x.searchsorted(time_of_peaks - time_window_size)
#in case of evenly spaced data points, this can be simplified
#and you just add n data points to your peak index array
#peak_start = peaks - n
true_valleys = peaks.copy()
for i, (start, stop) in enumerate(zip(peak_start, peaks)):
    true_valleys[i] = start + y[start:stop].argmin()

[Image: valleys detected in the window before each peak]

Problem with plotting peaks using find_peaks from SciPy to detect drastic up/down turns or global outliers

  1. You need to specify height in the same domain (units) as your data.
  2. The upper threshold is not missing; it is on the plot, but all those lines sit close to 0 and clutter together at the bottom.
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import find_peaks

# t, x: the timestamps and values from the question's data, as arrays
thresh_top = np.median(x) + 1 * np.std(x)
thresh_bottom = np.median(x) - 1 * np.std(x)
# (you may want to compute the std on the 10-90 percentile range of the data, without outliers)

# Find indices of peaks
peak_idx, _ = find_peaks(x, height=thresh_top)

# Find indices of valleys (from inverting the signal)
valley_idx, _ = find_peaks(-x, height=-thresh_bottom)

# Plot signal
plt.figure(figsize=(14, 12))
plt.plot(t, x, color='b', label='data')
plt.scatter(t, x, s=10, c='b', label='value')

# Plot thresholds
plt.plot([min(t), max(t)], [thresh_top, thresh_top], '--', color='r', label='peaks-threshold')
plt.plot([min(t), max(t)], [thresh_bottom, thresh_bottom], '--', color='g', label='valleys-threshold')

# Plot peaks (red) and valleys (green)
plt.plot(t[peak_idx], x[peak_idx], "x", color='r', label='peaks')
plt.plot(t[valley_idx], x[valley_idx], "x", color='g', label='valleys')

plt.xticks(rotation=45)
plt.ylabel('value')
plt.xlabel('timestamp')
plt.title('data over time for username=target')
plt.legend(loc='upper left')
plt.gcf().autofmt_xdate()
plt.show()

[Image: data over time with both thresholds drawn and the detected peaks and valleys marked]

Intelligent Peak Detection Method

This is a pragmatic solution. As I see it (please correct me if I'm wrong), you want to find each peak after/before a 'smooth' or zero period.

You can do this by simply checking for such periods and reporting their start and stop.

Here is a very basic implementation that lets you specify what qualifies as a smooth period (I use a change of less than 0.001 as the condition here):

dy_lim = 0.001
targets = []
in_lock = False
i_l, d_l = 0, data[0]
for i, d in enumerate(data[1:]):
    if abs(d_l - d) > dy_lim:
        if in_lock:
            targets.append(i_l)
            targets.append(i + 1)
            in_lock = False
        i_l, d_l = i, d
    else:
        in_lock = True

And then plotting the targets:

plt.plot(range(len(data)), data)
plt.scatter(targets, [data[t] for t in targets], c='red')
plt.show()

[Image: data with the detected targets marked in red]

Nothing very elaborate, but it finds the peak you indicated.

Increasing the value of dy_lim will let you find more peaks. You might also want to specify a minimal length for what counts as a smooth period; here is how that might look (again, just a crude implementation):

dy_lim = 0.001
di_lim = 50
targets = []
in_lock = False
i_l, d_l = 0, data[0]
for i, d in enumerate(data[1:]):
    if abs(d_l - d) > dy_lim:
        if in_lock:
            in_lock = False
            if i - i_l > di_lim:
                targets.append(i_l)
                targets.append(i + 1)
        i_l, d_l = i, d
    else:
        in_lock = True

With this setting you would not get the first point, as the gap between the first and second points does not satisfy the di_lim=50 condition (only smooth periods longer than di_lim are reported).


Update for the 2nd dataset:

This gets a bit trickier: there are now gradual decreases after a peak, leading to a slow accumulation of difference, which is enough to exceed dy_lim and make the algorithm falsely report a new target. So you need to test whether this target really is a peak, and only report it if so.

Here is a crude implementation of how to achieve this:

dy_lim = 0.1
di_lim = 5
targets = []
in_lock = False
i_l, d_l = 0, data[0]
for i, d in enumerate(data[1:]):
    if abs(d_l - d) > dy_lim:
        if in_lock:
            in_lock = False
            if i - i_l > di_lim:
                # here we check whether the start of the period was a peak
                if abs(d_l - data[i_l]) > dy_lim:
                    # assure a minimal distance if a previous target exists
                    if targets:
                        if i_l - targets[-1] > di_lim:
                            targets.append(i_l)
                    else:
                        targets.append(i_l)
                # and here whether the end is a peak
                if abs(d - data[i]) > dy_lim:
                    targets.append(i + 1)
        i_l, d_l = i, d
    else:
        in_lock = True

What you'll end up with is this:
[Image: detected peaks on the second dataset]


General note: we are following a bottom-up approach here. You have a specific feature you want to detect, so you write a specific algorithm to do so.

This can be very effective for simple tasks. However, as even this simple example shows, whenever there is a new feature the algorithm must cope with, it needs to be adapted. If the current complexity is all there is, you are fine. But if the data presents yet other patterns, you will again need to add further conditions, and the algorithm will grow more and more complicated as it deals with the additional complexity. If you end up in that situation, you might want to consider switching gears and adopting a more general approach. There are many options; one would be to work with the difference between the original data and a Savitzky-Golay-filtered version of it, but that is just one of many suggestions one could make here.
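To make that last suggestion concrete, here is a minimal sketch (the window length and polynomial order are illustrative values, not tuned to any particular dataset): peaks then stand out as large residuals between the raw data and the smoothed curve.

import numpy as np
from scipy.signal import savgol_filter, find_peaks

data_arr = np.asarray(data, dtype=float)  # `data` as in the snippets above
smooth = savgol_filter(data_arr, window_length=51, polyorder=3)
residual = data_arr - smooth
# points that deviate sharply from the local trend show up as large residuals
spikes, _ = find_peaks(np.abs(residual), height=3 * residual.std())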

Peak signal detection in realtime timeseries data

Robust peak detection algorithm (using z-scores)

I came up with an algorithm that works very well for these types of datasets. It is based on the principle of dispersion: if a new datapoint is a given x number of standard deviations away from a moving mean, the algorithm signals (this is also called the z-score). The algorithm is very robust because it constructs a separate moving mean and deviation, such that signals do not corrupt the threshold. Future signals are therefore identified with approximately the same accuracy, regardless of the number of previous signals.

The algorithm takes 3 inputs:

  • lag: the lag of the moving window. For example, a lag of 5 will use the last 5 observations to smooth the data.

  • threshold: the z-score at which the algorithm signals. For example, a threshold of 3.5 will signal if a datapoint is 3.5 standard deviations away from the moving mean.

  • influence: the influence (between 0 and 1) of new signals on the mean and standard deviation. For example, an influence of 0.5 gives signals half the influence that normal datapoints have. Likewise, an influence of 0 ignores signals completely when recalculating the new threshold. An influence of 0 is therefore the most robust option (but assumes stationarity); putting the influence at 1 is the least robust. For non-stationary data, the influence should therefore be put somewhere between 0 and 1.

It works as follows:

Pseudocode

# Let y be a vector of timeseries data of at least length lag+2
# Let mean() be a function that calculates the mean
# Let std() be a function that calculates the standard deviation
# Let absolute() be the absolute value function

# Settings (these are examples: choose what is best for your data!)
set lag to 5; # average and std. are based on past 5 observations
set threshold to 3.5; # signal when data point is 3.5 std. away from average
set influence to 0.5; # between 0 (no influence) and 1 (full influence)

# Initialize variables
set signals to vector 0,...,0 of length of y; # Initialize signal results
set filteredY to y(1),...,y(lag) # Initialize filtered series
set avgFilter to null; # Initialize average filter
set stdFilter to null; # Initialize std. filter
set avgFilter(lag) to mean(y(1),...,y(lag)); # Initialize first value average
set stdFilter(lag) to std(y(1),...,y(lag)); # Initialize first value std.

for i=lag+1,...,t do
  if absolute(y(i) - avgFilter(i-1)) > threshold*stdFilter(i-1) then
    if y(i) > avgFilter(i-1) then
      set signals(i) to +1;          # Positive signal
    else
      set signals(i) to -1;          # Negative signal
    end
    set filteredY(i) to influence*y(i) + (1-influence)*filteredY(i-1);
  else
    set signals(i) to 0;             # No signal
    set filteredY(i) to y(i);
  end
  # Adjust the filters
  set avgFilter(i) to mean(filteredY(i-lag+1),...,filteredY(i));
  set stdFilter(i) to std(filteredY(i-lag+1),...,filteredY(i));
end
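For reference, here is a direct NumPy transcription of this pseudocode (a plain sketch; the Python implementations linked below are the polished versions):

import numpy as np

def thresholding_algo(y, lag, threshold, influence):
    signals = np.zeros(len(y))
    filteredY = np.array(y, dtype=float)
    avgFilter = np.zeros(len(y))
    stdFilter = np.zeros(len(y))
    avgFilter[lag - 1] = np.mean(y[0:lag])
    stdFilter[lag - 1] = np.std(y[0:lag])
    for i in range(lag, len(y)):
        if abs(y[i] - avgFilter[i - 1]) > threshold * stdFilter[i - 1]:
            signals[i] = 1 if y[i] > avgFilter[i - 1] else -1
            filteredY[i] = influence * y[i] + (1 - influence) * filteredY[i - 1]
        else:
            signals[i] = 0
            filteredY[i] = y[i]
        # adjust the filters using the signal-corrected series
        avgFilter[i] = np.mean(filteredY[i - lag + 1:i + 1])
        stdFilter[i] = np.std(filteredY[i - lag + 1:i + 1])
    return signals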

Rules of thumb for selecting good parameters for your data can be found below.



Demo

[Animation: demonstration of the robust thresholding algorithm]

The Matlab code for this demo can be found here. To use the demo, simply run it and create a time series yourself by clicking on the upper chart. The algorithm starts working after drawing lag number of observations.



Result

For the original question, this algorithm will give the following output when using the following settings: lag = 30, threshold = 5, influence = 0:

[Image: thresholding algorithm output with lag = 30, threshold = 5, influence = 0]



Implementations in different programming languages:

  • Matlab (me)

  • R (me)

  • Golang (Xeoncross)

  • Golang [efficient version] (Micah Parks)

  • Python (R Kiselev)

  • Python [efficient version] (delica)

  • Swift (me)

  • Groovy (JoshuaCWebDeveloper)

  • C++ (brad)

  • C++ (Animesh Pandey)

  • Rust (swizard)

  • Scala (Mike Roberts)

  • Kotlin (leoderprofi)

  • Ruby (Kimmo Lehto)

  • Fortran [for resonance detection] (THo)

  • Julia (Matt Camp)

  • C# (Ocean Airdrop)

  • C (DavidC)

  • Java (takanuva15)

  • JavaScript (Dirk Lüsebrink)

  • TypeScript (Jerry Gamble)

  • Perl (Alen)

  • PHP (radhoo)

  • PHP (gtjamesa)

  • Dart (Sga)



Rules of thumb for configuring the algorithm

lag: the lag parameter determines how much your data will be smoothed and how adaptive the algorithm is to changes in the long-term average of the data. The more stationary your data is, the more lags you should include (this should improve the robustness of the algorithm). If your data contains time-varying trends, you should consider how quickly you want the algorithm to adapt to these trends. I.e., if you put lag at 10, it takes 10 'periods' before the algorithm's threshold is adjusted to any systematic changes in the long-term average. So choose the lag parameter based on the trending behavior of your data and how adaptive you want the algorithm to be.

influence: this parameter determines the influence of signals on the algorithm's detection threshold. If put at 0, signals have no influence on the threshold, such that future signals are detected based on a threshold that is calculated with a mean and standard deviation that is not influenced by past signals. If put at 0.5, signals have half the influence of normal data points. Another way to think about this is that if you put the influence at 0, you implicitly assume stationarity (i.e. no matter how many signals there are, you always expect the time series to return to the same average over the long term). If this is not the case, you should put the influence parameter somewhere between 0 and 1, depending on the extent to which signals can systematically influence the time-varying trend of the data. E.g., if signals lead to a structural break of the long-term average of the time series, the influence parameter should be put high (close to 1) so the threshold can react to structural breaks quickly.

threshold: the threshold parameter is the number of standard deviations from the moving mean above which the algorithm will classify a new datapoint as being a signal. For example, if a new datapoint is 4.0 standard deviations above the moving mean and the threshold parameter is set as 3.5, the algorithm will identify the datapoint as a signal. This parameter should be set based on how many signals you expect. For example, if your data is normally distributed, a threshold (or: z-score) of 3.5 corresponds to a signaling probability of 0.00047 (from this table), which implies that you expect a signal once every 2128 datapoints (1/0.00047). The threshold therefore directly influences how sensitive the algorithm is and thereby also determines how often the algorithm signals. Examine your own data and choose a sensible threshold that makes the algorithm signal when you want it to (some trial-and-error might be needed here to get to a good threshold for your purpose).
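For reference, the quoted 0.00047 appears to be the two-sided normal tail probability at 3.5 standard deviations; a quick sanity check with scipy.stats:

from scipy.stats import norm
p = 2 * norm.sf(3.5)  # two-sided tail probability at 3.5 sigma, ~0.00047
print(p, 1 / p)       # i.e. roughly one signal every ~2100 datapoints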


WARNING: The code above loops over all datapoints every time it runs. When implementing this code, make sure to split the calculation of the signal into a separate function (without the loop). Then, when a new datapoint arrives, update filteredY, avgFilter and stdFilter once. Do not recalculate the signals for all data every time a new datapoint arrives (as in the example above); that would be extremely inefficient and slow in real-time applications.
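As a sketch of what that separation might look like (a hypothetical helper, not one of the linked implementations): keep only the last lag filtered values and do O(lag) work per incoming point.

from collections import deque
import statistics

class StreamingZScore:
    # Hypothetical streaming variant of the algorithm above: each new
    # datapoint triggers one threshold check and one filter update.
    def __init__(self, lag, threshold, influence):
        self.lag, self.threshold, self.influence = lag, threshold, influence
        self.filtered = deque(maxlen=lag)  # last `lag` filtered values

    def update(self, value):
        if len(self.filtered) < self.lag:   # warm-up: collect the first lag points
            self.filtered.append(value)
            return 0
        avg = statistics.fmean(self.filtered)
        std = statistics.pstdev(self.filtered)
        if abs(value - avg) > self.threshold * std:
            signal = 1 if value > avg else -1
            # dampen the stored value so the signal does not corrupt the filters
            value = self.influence * value + (1 - self.influence) * self.filtered[-1]
        else:
            signal = 0
        self.filtered.append(value)
        return signal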

Other ways to modify the algorithm (for potential improvements) are:

  1. Use median instead of mean
  2. Use a robust measure of scale, such as the median absolute deviation (MAD), instead of the standard deviation
  3. Use a signalling margin, so the signal doesn't switch too often
  4. Change the way the influence parameter works
  5. Treat up and down signals differently (asymmetric treatment)
  6. Create a separate influence parameter for the mean and std (as in this Swift translation)
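For instance, modifications 1 and 2 could be sketched by swapping the mean/std estimates for a median and a scaled median absolute deviation (MAD):

import numpy as np

def robust_center_scale(window):
    # the median replaces the mean; 1.4826 * MAD estimates the standard
    # deviation for Gaussian data but is insensitive to outliers
    med = np.median(window)
    mad = np.median(np.abs(np.asarray(window) - med))
    return med, 1.4826 * mad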


(Known) academic citations to this answer:

  • Cai, Y., Wang, X., Joos, G., & Kamwa, I. (2022). An Online Data-Driven Method to Locate Forced Oscillation Sources from Power Plants Based on Sparse Identification of Nonlinear Dynamics (SINDy). IEEE Transactions on Power Systems.

  • Yang, S., Yim, J., Kim, J., & Shin, H. V. (2022). CatchLive: Real-time Summarization of Live Streams with Stream Content and Interaction Data. CHI Conference on Human Factors in Computing Systems, 1-20.

  • Feng, D., Tan, Z., Engwirda, D., Liao, C., Xu, D., Bisht, G., ... & Leung, R. (2022). Investigating coastal backwater effects and flooding in the coastal zone using a global river transport model on an unstructured mesh. Hydrology and Earth System Sciences Discussions, 1-31 [preprint].

  • Link, J., Perst, T., Stoeve, M., & Eskofier, B. M. (2022). Wearable sensors for activity recognition in ultimate frisbee using convolutional neural networks and transfer learning. Sensors, 22(7), 2560.

  • Romeiro, J. M. N., Torres, F. T. P., & Pirotti, F. (2021). Evaluation of Effect of Prescribed Fires Using Spectral Indices and SAR Data. Bollettino della società italiana di fotogrammetria e topografia, (2), 36-56.

  • Moore, J., Goffin, P., Wiese, J., & Meyer, M. (2021). An Interview Method for Engaging Personal Data. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 5(4), 1-28.

  • Rykov, Y., Thach, T. Q., Bojic, I., Christopoulos, G., & Car, J. (2021). Digital Biomarkers for Depression Screening With Wearable Devices: Cross-sectional Study With Machine Learning Modeling. JMIR mHealth and uHealth, 9(10), e24872.

  • Hong, Y., Xin, Y., Martin, H., Bucher, D., & Raubal, M. (2021). A Clustering-Based Framework for Individual Travel Behaviour Change Detection. In 11th International Conference on Geographic Information Science (GIScience 2021)-Part II.

  • Grammenos, A., Kalyvianaki, E., & Pietzuch, P. (2021). Pronto: Federated Task Scheduling. arXiv preprint arXiv:2104.13429.

  • Courtial, N. (2020). Fusion d’images multimodales pour l’assistance de procédures d’électrophysiologie cardiaque. Doctoral dissertation, Université Rennes.

  • Beckman, W. F., Jiménez, M. Á. L., Moerland, P. D., Westerhoff, H. V., & Verschure, P. J. (2020). 4sUDRB-sequencing for genome-wide transcription bursting quantification in breast cancer cells. bioRxiv.

  • Olkhovskiy, M., Müllerová, E., & Martínek, P. (2020). Impulse signals classification using one dimensional convolutional neural network. Journal of Electrical Engineering, 71(6), 397-405.

  • Gao, S., & Calderon, D. P. (2020). Robust alternative to the righting reflex to assess arousal in rodents. Scientific reports, 10(1), 1-11.

  • Chen, G. & Dong, W. (2020). Reactive Jamming and Attack Mitigation over Cross-Technology Communication Links. ACM Transactions on Sensor Networks, 17(1).

  • Takahashi, R., Fukumoto, M., Han, C., Sasatani, T., Narusue, Y., & Kawahara, Y. (2020). TelemetRing: A Batteryless and Wireless Ring-shaped Keyboard using Passive Inductive Telemetry. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (pp. 1161-1168).

  • Negus, M. J., Moore, M. R., Oliver, J. M., Cimpeanu, R. (2020). Droplet impact onto a spring-supported plate: analysis and simulations. Journal of Engineering Mathematics, 128(3).

  • Yin, C. (2020). Dinucleotide repeats in coronavirus SARS-CoV-2 genome: evolutionary implications. ArXiv e-print, accessible from: https://arxiv.org/pdf/2006.00280.pdf

  • Esnaola-Gonzalez, I., Gómez-Omella, M., Ferreiro, S., Fernandez, I., Lázaro, I., & García, E. (2020). An IoT Platform Towards the Enhancement of Poultry Production Chains. Sensors, 20(6), 1549.

  • Gao, S., & Calderon, D. P. (2020). Continuous regimens of cortico-motor integration calibrate levels of arousal during emergence from anesthesia. bioRxiv.

  • Cloud, B., Tarien, B., Liu, A., Shedd, T., Lin, X., Hubbard, M., ... & Moore, J. K. (2019). Adaptive smartphone-based sensor fusion for estimating competitive rowing kinematic metrics. PloS one, 14(12).

  • Ceyssens, F., Carmona, M. B., Kil, D., Deprez, M., Tooten, E., Nuttin, B., ... & Puers, R. (2019). Chronic neural recording with probes of subcellular cross-section using 0.06 mm² dissolving microneedles as insertion device. Sensors and Actuators B: Chemical, 284, pp. 369-376.

  • Dons, E., Laeremans, M., Orjuela, J. P., Avila-Palencia, I., de Nazelle, A., Nieuwenhuijsen, M., ... & Nawrot, T. (2019). Transport Most Likely to Cause Air Pollution Peak Exposures in Everyday Life: Evidence from over 2000 Days of Personal Monitoring. Atmospheric Environment, 213, 424-432.

  • Schaible B.J., Snook K.R., Yin J., et al. (2019). Twitter conversations and English news media reports on poliomyelitis in five different countries, January 2014 to April 2015. The Permanente Journal, 23, 18-181.

  • Lima, B. (2019). Object Surface Exploration Using a Tactile-Enabled Robotic Fingertip (Doctoral dissertation, Université d'Ottawa/University of Ottawa).

  • Lima, B. M. R., Ramos, L. C. S., de Oliveira, T. E. A., da Fonseca, V. P., & Petriu, E. M. (2019). Heart Rate Detection Using a Multimodal Tactile Sensor and a Z-score Based Peak Detection Algorithm. CMBES Proceedings, 42.

  • Lima, B. M. R., de Oliveira, T. E. A., da Fonseca, V. P., Zhu, Q., Goubran, M., Groza, V. Z., & Petriu, E. M. (2019, June). Heart Rate Detection Using a Miniaturized Multimodal Tactile Sensor. In 2019 IEEE International Symposium on Medical Measurements and Applications (MeMeA) (pp. 1-6). IEEE.

  • Ting, C., Field, R., Quach, T., Bauer, T. (2019). Generalized Boundary Detection Using Compression-based Analytics. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, United Kingdom, pp. 3522-3526.

  • Carrier, E. E. (2019). Exploiting compression in solving discretized linear systems. Doctoral dissertation, University of Illinois at Urbana-Champaign.

  • Khandakar, A., Chowdhury, M. E., Ahmed, R., Dhib, A., Mohammed, M., Al-Emadi, N. A., & Michelson, D. (2019). Portable system for monitoring and controlling driver behavior and the use of a mobile phone while driving. Sensors, 19(7), 1563.

  • Baskozos, G., Dawes, J. M., Austin, J. S., Antunes-Martins, A., McDermott, L., Clark, A. J., ... & Orengo, C. (2019). Comprehensive analysis of long noncoding RNA expression in dorsal root ganglion reveals cell-type specificity and dysregulation after nerve injury. Pain, 160(2), 463.

  • Cloud, B., Tarien, B., Crawford, R., & Moore, J. (2018). Adaptive smartphone-based sensor fusion for estimating competitive rowing kinematic metrics. engrXiv Preprints.

  • Zajdel, T. J. (2018). Electronic Interfaces for Bacteria-Based Biosensing. Doctoral dissertation, UC Berkeley.

  • Perkins, P., Heber, S. (2018). Identification of Ribosome Pause Sites Using a Z-Score Based Peak Detection Algorithm. IEEE 8th International Conference on Computational Advances in Bio and Medical Sciences (ICCABS), ISBN: 978-1-5386-8520-4.

  • Moore, J., Goffin, P., Meyer, M., Lundrigan, P., Patwari, N., Sward, K., & Wiese, J. (2018). Managing In-home Environments through Sensing, Annotating, and Visualizing Air Quality Data. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2(3), 128.

  • Lo, O., Buchanan, W. J., Griffiths, P., and Macfarlane, R. (2018), Distance Measurement Methods for Improved Insider Threat Detection, Security and Communication Networks, Vol. 2018, Article ID 5906368.

  • Apurupa, N. V., Singh, P., Chakravarthy, S., & Buduru, A. B. (2018). A critical study of power consumption patterns in Indian Apartments. Doctoral dissertation, IIIT-Delhi.

  • Scirea, M. (2017). Affective Music Generation and its effect on player experience. Doctoral dissertation, IT University of Copenhagen, Digital Design.

  • Scirea, M., Eklund, P., Togelius, J., & Risi, S. (2017). Primal-improv: Towards co-evolutionary musical improvisation. Computer Science and Electronic Engineering (CEEC), 2017 (pp. 172-177). IEEE.

  • Catalbas, M. C., Cegovnik, T., Sodnik, J. and Gulten, A. (2017). Driver fatigue detection based on saccadic eye movements, 10th International Conference on Electrical and Electronics Engineering (ELECO), pp. 913-917.



