Logging Analytics Terms and Concepts

Here are some of the common terms and basic concepts for Oracle Logging Analytics.

Source

A source defines where the log files are located, how to collect them, how to mask data using Data Masks, how to parse log entries using Parsers, how to extract additional data using Extended Field Definitions, how to enrich the data using Labels and enrichment function definitions, and how to extract metric data from a log file. A log source can be used to collect logs continuously through an Oracle Cloud Infrastructure Management Agent, and can also be provided when you perform an on-demand upload of a log file to Oracle Logging Analytics or configure Oracle Cloud Infrastructure Object Store collection rules. Whenever logs are collected or sent to Oracle Logging Analytics, a source must be provided to give the context of how to process the logs.

Oracle Logging Analytics ships with hundreds of Oracle-defined sources covering a large variety of Oracle and non-Oracle products, and more sources are added to the list continuously.

Entity

When working with on-premises assets, for example, a Fusion Middleware Server instance, you can define an entity in Oracle Logging Analytics that references that real asset on your on-premises host. To enable log collection through the Oracle Cloud Infrastructure Management Agent, you associate a log source with an entity that you have already created. This starts the continuous collection of logs through the agent. For continuous log collection through an agent, an entity definition is required. When uploading logs to Oracle Logging Analytics through a REST API, specifying the entity is optional. However, it is recommended that you use the entity model to define where the logs are coming from; this makes the analytics experience more powerful.

An entity must have an entity type. Nearly 100 Oracle-defined entity types are already available. You can also create custom entity types.

An entity type defines the properties that must be provided when an entity of that type is created. These properties are used to locate the log files. For example, for an Oracle Database entity, you must provide path values for properties such as ADR_HOME, ORACLE_HOME, and INSTALL_HOME.

Log Group

When collecting logs by using any of the available methods, you must specify the log group in which to store the logs. The log group defines who has access to query the logs in the Log Explorer or Dashboards, and who can purge logs. For example, your organization may decide to have separate log groups for secure logs and non-secure logs. These can be placed in separate OCI compartments, and policies can be written to grant different levels of access to different user groups.
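As a sketch, compartment-scoped IAM policy statements along these lines could grant different groups different levels of access. The group names, compartment names, and the resource-type name are illustrative assumptions here; check the OCI policy reference for the exact resource types that apply to Logging Analytics log groups.

```
Allow group SecurityAdmins to manage loganalytics-log-group in compartment secure-logs
Allow group AppDevelopers to read loganalytics-log-group in compartment app-logs
```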

Source-Entity Association

The association of a log source with an entity starts the continuous log collection process through the Oracle Cloud Infrastructure Management Agent. If the source and entity are properly defined, the association metadata is sent to the agent to perform the log collection. The logs are then sent to the cloud for indexing and enrichment before they are made available for search.

The source-entity association is applicable only for continuous log collection through the Oracle Cloud Infrastructure Management Agent. When you perform the association, any parameterized file paths in the log source are replaced by the actual property values for that entity instance. For example, if you monitor the Database Alert Logs source against myDatabaseInstance1, the Database Alert Logs source definition looks for logs under a path such as {ADR_HOME}/alert/log*.log. The entity definition for myDatabaseInstance1 has a value for ADR_HOME provided by you. When the association is performed, the ADR_HOME variable in the path is replaced to produce the absolute path of the log files. This model allows a single log source to monitor logs from several entity instances where the file paths are different for each entity.
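The substitution described above can be sketched in Python. The property name and path pattern come from the ADR_HOME example; the function itself is purely illustrative and is not part of any Oracle API.

```python
# Illustrative sketch: resolve a parameterized source path against an
# entity's properties, as in the {ADR_HOME} example above.
import re

def resolve_path(pattern: str, entity_properties: dict) -> str:
    """Replace each {PROPERTY} placeholder with the entity's value."""
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        if name not in entity_properties:
            raise KeyError(f"Entity is missing required property: {name}")
        return entity_properties[name]
    return re.sub(r"\{(\w+)\}", substitute, pattern)

# myDatabaseInstance1's entity definition supplies ADR_HOME
# (the path value below is a made-up example):
props = {"ADR_HOME": "/u01/app/oracle/diag/rdbms/db1"}
print(resolve_path("{ADR_HOME}/alert/log*.log", props))
# → /u01/app/oracle/diag/rdbms/db1/alert/log*.log
```

Because the entity supplies the value, the same source definition works unchanged against many entity instances with different paths.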

Parser

A parser defines how to split a log file into log entries and how to parse each log entry into fields. Parsers for semi-structured or unstructured logs can be written using regular expressions. JSON and XML parsers can be written for logs in those formats.
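Conceptually, a regular-expression parser maps each log entry to named fields. A minimal sketch in Python follows; the log format, timestamp pattern, and field names are hypothetical and not taken from any shipped Oracle parser.

```python
# Minimal illustration of regex-based parsing: split a semi-structured
# log entry into named fields. The log format here is hypothetical.
import re

ENTRY_PATTERN = re.compile(
    r"(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"(?P<severity>\w+) "
    r"(?P<message>.*)"
)

def parse_entry(line: str) -> dict:
    match = ENTRY_PATTERN.match(line)
    if match is None:
        return {}  # unparsed lines could fall back to a catch-all field
    return match.groupdict()

fields = parse_entry("2024-05-01 12:30:45 ERROR ORA-00600: internal error")
print(fields["severity"])   # → ERROR
print(fields["time"])       # → 2024-05-01 12:30:45
```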

Log Partitioning

Oracle Logging Analytics can ingest very large amounts of log data per tenant on a daily basis. If you expect to persistently ingest more than 6 TB of log data per day for a single tenant in a single region, then Oracle recommends that you use the log partitioning feature. Partitioning physically segments log data that shares a common characteristic, enabling parallel ingestion and parallel query of the data and thus optimizing search performance.

The partitioning feature depends on a key that you specify while ingesting the log data, called the log set. A log set can be any logical string value that aligns with how the logs are typically used, based on your infrastructure, application architecture, or organizational structure.
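As a sketch, a log set value could be derived from attributes you already track when ingesting logs. The mapping below is entirely hypothetical; any stable, meaningful string that aligns with how the logs are queried works.

```python
# Hypothetical way to derive a log set value per ingestion: combine
# attributes that match how the logs are typically queried, such as
# application name and region.
def make_log_set(application: str, region: str) -> str:
    # A stable, human-readable key; queries scoped to one log set
    # touch only that partition's data.
    return f"{application}-{region}".lower()

print(make_log_set("PayrollApp", "us-phoenix-1"))
# → payrollapp-us-phoenix-1
```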

When querying the data, the best performance is achieved when queries are limited to a small number of log sets. The logs in each log set are isolated and independent from an IT operations and business perspective, so most searches target a single log set, leading to improved search performance. While you can certainly query across all log sets, such queries take more time to return data.

See Examples of Leveraging Log Partitioning Feature.