Bayesian Autoencoder for Trigger-less Anomaly Detection

Authors: DI PRIMA, GIACOMO

Abstract

In the realm of time series analysis, accurate anomaly detection is crucial for a variety of applications, ranging from industrial process monitoring to financial fraud detection. Traditional methods often struggle with the complexity and high dimensionality inherent in time series data. This work explores the integration of an autoencoder with Long Short-Term Memory (LSTM) layers and a Convolutional Neural Network (CNN) used as a predictor, within a Bayesian inference framework, to detect anomalies in time series data. The aim is to enhance detection accuracy and robustness by deriving confidence intervals that allow normal behavior to be distinguished from anomalies. The proposed model leverages the strengths of both LSTMs and CNNs to capture temporal dependencies and to extract complex features from time series data, respectively. The autoencoder architecture, designed to learn efficient representations of the input data, consists of an encoder that compresses the input into a lower-dimensional space and a decoder that reconstructs the input from this compressed representation. By incorporating LSTM layers, the model effectively retains long-term dependencies within the sequential data, while the CNN layers facilitate the extraction of localized patterns. This thesis contributes to the field of time series analysis by presenting a novel hybrid model that combines the temporal learning capabilities of LSTMs with the spatial feature extraction prowess of CNNs, encapsulated within an autoencoder architecture. The findings highlight the model's effectiveness in identifying anomalies, paving the way for future research and applications in various domains requiring reliable anomaly detection.
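
The sketch below illustrates, in PyTorch, the kind of hybrid model the abstract describes: an LSTM encoder-decoder with a 1-D CNN head, using Monte Carlo dropout as a common approximation to Bayesian inference in order to obtain confidence intervals on the reconstruction error. The layer sizes, the dropout-based approximation, and the mean-plus-k-standard-deviations band are illustrative assumptions for exposition, not the thesis's actual implementation.

import torch
import torch.nn as nn


class LSTMCNNAutoencoder(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64, latent: int = 16, p_drop: float = 0.2):
        super().__init__()
        # Encoder: LSTM compresses the input window into a latent vector.
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.to_latent = nn.Linear(hidden, latent)
        # Decoder: LSTM reconstructs the window from the repeated latent vector.
        self.decoder = nn.LSTM(latent, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)
        # CNN head: extracts localized patterns from the decoded sequence.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, n_features, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Dropout(p_drop),  # kept active at test time for MC dropout
            nn.Conv1d(n_features, n_features, kernel_size=3, padding=1),
        )

    def forward(self, x):                          # x: (batch, time, features)
        _, (h, _) = self.encoder(x)
        z = self.to_latent(h[-1])                  # (batch, latent)
        z_seq = z.unsqueeze(1).repeat(1, x.size(1), 1)
        dec, _ = self.decoder(z_seq)
        recon = self.out(dec)                      # (batch, time, features)
        refined = self.cnn(recon.transpose(1, 2)).transpose(1, 2)
        return refined


@torch.no_grad()
def mc_dropout_interval(model, x, n_samples: int = 50, k: float = 3.0):
    """Run several stochastic forward passes with dropout active and return the
    mean reconstruction error per window together with an upper confidence bound
    (mean + k * std over the samples); errors above the bound suggest anomalies."""
    model.train()                                  # keep dropout layers stochastic
    errors = torch.stack([
        ((model(x) - x) ** 2).mean(dim=(1, 2)) for _ in range(n_samples)
    ])                                             # (n_samples, batch)
    mean, std = errors.mean(0), errors.std(0)
    return mean, mean + k * std


if __name__ == "__main__":
    model = LSTMCNNAutoencoder(n_features=4)
    windows = torch.randn(8, 32, 4)                # 8 windows, 32 time steps, 4 sensors
    score, upper = mc_dropout_interval(model, windows)
    print(score.shape, upper.shape)

In this kind of scheme, the model is trained on windows assumed to be normal; at inference time, a window whose reconstruction error exceeds the learned confidence band is classified as anomalous, which is what allows detection to run without an explicit trigger.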

Country: Italy