Detecting Anomalies in Large Datasets

Create a job that detects anomalies using asynchronous detection.

You can use asynchronous detection to detect anomalies in both univariate and multivariate datasets. Typical use cases suited to asynchronous detection are:

Detecting anomalies in very large datasets

The synchronous detectAnomalies REST API supports a maximum of 30,000 data points per request. This limit can be restrictive in anomaly detection scenarios where a large number of data points, typically in the millions, must be analyzed. With asynchronous detection, you can analyze and detect anomalies in very large datasets, upwards of 10 million data points.
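
A minimal sketch with the OCI Python SDK follows. The client and model names (AnomalyDetectionClient, CreateDetectAnomalyJobDetails, ObjectListInputDetails, ObjectStoreOutputDetails, ObjectLocation) reflect the SDK's asynchronous detection support, and all OCIDs, namespace, bucket, and object names are placeholders; verify the exact field names against the current SDK reference.

```python
import oci
from oci.ai_anomaly_detection import AnomalyDetectionClient
from oci.ai_anomaly_detection.models import (
    CreateDetectAnomalyJobDetails,
    ObjectListInputDetails,
    ObjectLocation,
    ObjectStoreOutputDetails,
)

config = oci.config.from_file()  # reads ~/.oci/config
client = AnomalyDetectionClient(config)

# Input: one or more data files already staged in an Object Storage bucket.
input_details = ObjectListInputDetails(
    object_locations=[
        ObjectLocation(
            namespace_name="my-namespace",          # placeholder
            bucket_name="input-bucket",             # placeholder
            object_names=["sensor-data.csv"],       # placeholder; field name assumed
        )
    ]
)

# Output: detected anomalies are written back to Object Storage.
output_details = ObjectStoreOutputDetails(
    namespace_name="my-namespace",
    bucket_name="output-bucket",
    prefix="detection-results",
)

job = client.create_detect_anomaly_job(
    CreateDetectAnomalyJobDetails(
        compartment_id="ocid1.compartment.oc1..example",   # placeholder
        model_id="ocid1.aianomalydetectionmodel.example",  # placeholder: trained model OCID
        input_details=input_details,
        output_details=output_details,
    )
).data
print(job.id, job.lifecycle_state)
```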

Automating detection workflows

In IoT use cases, time-series data is typically collected from a large number of sensors and devices and stored in a persistent data store such as a database or a file system. Often, this raw data must be preprocessed (enriched) with PaaS services such as Data Flow before inferencing can be performed. You can integrate the asynchronous detection APIs into data processing pipelines to automate detection workflows, as sketched in the example that follows.
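
As one way a pipeline step might drive such a workflow, the sketch below polls a submitted job until it reaches a terminal state. The job OCID and polling interval are placeholders, and the lifecycle-state names assume the usual OCI SUCCEEDED/FAILED/CANCELED convention; verify them against the service documentation.

```python
import time

import oci
from oci.ai_anomaly_detection import AnomalyDetectionClient

config = oci.config.from_file()
client = AnomalyDetectionClient(config)

job_id = "ocid1.aianomalydetectionjob.oc1..example"  # placeholder: OCID returned at submission

# Poll until the asynchronous job reaches a terminal state.
while True:
    job = client.get_detect_anomaly_job(job_id).data
    if job.lifecycle_state in ("SUCCEEDED", "FAILED", "CANCELED"):
        break
    time.sleep(30)  # polling interval; tune for your pipeline

if job.lifecycle_state != "SUCCEEDED":
    raise RuntimeError(f"Detection job ended in state {job.lifecycle_state}")

# Downstream steps (for example, a Data Flow application) can now read the
# results from the Object Storage output location configured on the job.
```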

Postprocessing anomalous events

In certain anomaly detection scenarios, the detection output (the detected anomalies) might need to be transformed or enriched before downstream applications can consume it. With asynchronous detection, detected anomalies are saved to an Object Storage bucket. You can use PaaS services such as Data Flow to analyze, process, and enrich the anomalous events, and render the anomalies as visualizations in Oracle Analytics Cloud to monitor target systems and take corrective action.
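
As an illustrative postprocessing sketch, the snippet below downloads a result object and filters records by anomaly score. The namespace, bucket, object name, score threshold, and the assumed result schema (a JSON document with a detectionResults list whose entries carry a score) are all placeholders; check the actual output format your job writes before relying on this.

```python
import json

import oci

config = oci.config.from_file()
object_storage = oci.object_storage.ObjectStorageClient(config)

# Placeholders: the output location configured when the job was created.
namespace = "my-namespace"
bucket = "output-bucket"
result_object = "detection-results/sensor-data-results.json"

response = object_storage.get_object(namespace, bucket, result_object)
results = json.loads(response.data.content.decode("utf-8"))

# Assumed schema: a top-level "detectionResults" list whose entries carry an
# anomaly "score". Adjust the keys to the actual output format.
high_severity = [
    record
    for record in results.get("detectionResults", [])
    if record.get("score", 0.0) > 0.8  # example severity threshold
]
print(f"{len(high_severity)} high-severity anomalies")
```

From here, the filtered events could be enriched in Data Flow or loaded into Oracle Analytics Cloud for visualization.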
