Create a Source

Sources define the location of your entity's logs and how to enrich the log entries. To start continuous log collection through the OCI management agents, a source needs to be associated with one or more entities.

  1. Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.

    The administration resources are listed in the left-hand navigation pane under Resources. Click Sources.

    The Sources page opens. Click Create Source.

  2. In the Name field, enter the name of the source.

    Optionally, add a description.

  3. From the Source Type list, select the type for the log source.
    Oracle Logging Analytics supports the following log source types for custom sources:
    • File: Use this type for collecting most types of logs, such as Database, Application, and Infrastructure logs.

    • Oracle Diagnostic Logging (ODL): Use this type for logs that follow the Oracle Diagnostics Logs format. These are typically used for diagnostic logs for Oracle Fusion Middleware and Oracle Applications.

    • Syslog Listener: This type is typically used for network devices such as an Intrusion Detection Appliance, Firewall, or other device on which a management agent cannot be installed.

    • Microsoft Windows: Use this type for collecting Windows Event messages. Oracle Logging Analytics can collect all historic Windows Event Log entries. It supports standard Windows event channels as well as custom event channels.

      Note

      This source type does not require the field Log Parser.

    • Database: Use this source type to collect logs stored in tables inside an on-premises database. With this source type, a SQL query is run periodically to collect the table data as log entries.

    • REST API: Use this source type to set up continuous REST API based log collection from endpoint URLs that respond with log messages. With this source type, a GET or POST API call is made to the endpoint URL that you provide to get the logs.

  4. Click the Entity Type field and select the type of entity for this log source. Later, when you associate this source with an entity to enable log collection through the management agent, only entities of this type will be available for association. A source can have one or more entity types.
    • If you selected File, REST API, or Oracle Diagnostic Logging (ODL), then it's recommended that you select the entity type that most closely matches what you are going to monitor. Avoid selecting composite entity types like Database Cluster; instead, select the entity type Database Instance, because the logs are generated at the instance level.

    • If you selected the source type Syslog Listener, then select one of the variants of Host.

    • If you selected the source type Database, then the entity type is limited to the eligible database types.

    • If you selected the Windows Event System source type, then the default entity type Host (Windows) is automatically selected and cannot be changed.

  5. Click the Parser field and select the relevant parser name such as Database Audit Log Entries Format.
    You can select multiple file parsers for the log files. This is particularly helpful when a log file has entries with different syntax and can’t be parsed by a single parser.

    The order in which you add the parsers is important. When Oracle Logging Analytics reads a log file, it tries the first parser and moves to the second parser if the first one does not match, continuing until a working parser is found. Select the most common parser first for this source (see the sketch at the end of this step).

    For ODL source type, the only parser available is Oracle Diagnostic Logging Format.

    For Syslog source type, typically one of the variant parsers such as Syslog Standard Format or Syslog RFC5424 Format is used. You can also select from the Oracle-defined syslog parsers for specific network devices.

    The File Parser field isn’t available for Windows Event System and REST API source types. For the Windows Event System source type, Oracle Logging Analytics retrieves already parsed log data.

    To parse only the time information from the log entries, you can select the automatic time parser. See Use the Automatic Time Parser.
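
    Conceptually, this fallback is an ordered loop over the parsers. The following Python sketch is illustrative only; the parser names and log formats in it are invented, and the agent's real parsers are richer than plain regular expressions.

        import re

        # Ordered parser list: the first parser that matches a log entry wins,
        # which is why the most common format should come first.
        parsers = [
            ("Most common format", re.compile(r"^(?P<time>\S+\s\S+)\s(?P<level>\w+)\s(?P<msg>.*)$")),
            ("Fallback format", re.compile(r"^(?P<time>\S+)\s(?P<msg>.*)$")),
        ]

        def parse_entry(line):
            for name, rx in parsers:
                m = rx.match(line)
                if m:
                    return name, m.groupdict()  # first working parser is used
            return None, {}                     # no parser matched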

  6. Enter the following information depending on the source type:
    • Syslog source type: Specify Listener Port.

    • Windows source type: Specify an event service channel name. The channel name must match the name of the Windows event channel so that the agent can form the association to pick up logs.

    • Database source type: Specify SQL Statements and click Configure. Map the SQL table columns to the fields available in the menu. To create a new field for mapping, click the Add icon.

    • REST API source type: Click Add log endpoint to provide a single log endpoint URL, or click Add log list endpoint for multiple logs to provide a log list endpoint URL from which multiple logs can be collected periodically, based on the time configuration in the UI. For more information on setting up REST API log collection, see Set Up REST API Log Collection.
    • File and ODL source types: Use the Included Patterns and Excluded Patterns tabs:

      • In the Included Patterns tab, click Add to specify file name patterns for this source.

        Enter the file name pattern and description.

        You can enter parameters within braces {}, such as {AdrHome}, as part of the file name pattern. Oracle Logging Analytics replaces these parameters in the include pattern with entity properties when the source is associated with an entity. The list of possible parameters is defined by the entity type. If you create your own entity types, you can define your own properties. When you create an entity, you are prompted to give a value for each property of that entity. You can also add your own custom properties per entity, if required. Any of these properties can be used as parameters here in the Included Patterns.

        For example, for a given entity where the {AdrHome} property is set to /u01/oracle/database/, the include pattern {AdrHome}/admin/logs/*.log is resolved to /u01/oracle/database/admin/logs/*.log for that specific entity. Every other entity on the same host can have a different value for {AdrHome}, resulting in a completely different set of log files being collected for each entity. (A sketch of this substitution appears at the end of this step.)

        You can associate a source with an entity only if the parameters that the source requires in its patterns have values for the given entity.

        You can configure warnings in the log collection for your patterns. In the Send Warning drop-down list, select the situation in which the warning must be issued:

        • For each pattern that has an issue: When you have set multiple include patterns, a log collection warning will be sent for each file name pattern which doesn't match.

        • Only if all patterns have issues: When you have set multiple include patterns, a log collection warning will be sent only if all the file name patterns don't match.

      • You can use an excluded pattern when there are files in the same location that you don’t want to include in the source definition. In the Excluded Patterns tab, click Add to define patterns of log file names that must be excluded from this log source.

        For example, there’s a file with the name audit.aud in the directory that you configured as an include source (/u01/app/oracle/admin/rdbms/diag/trace/). In the same location, there’s another file with the name audit-1.aud. You can exclude any files with the pattern audit-*.aud.
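
    The parameter substitution and exclude filtering described above can be pictured with the following Python sketch. It is illustrative only: the paths and property values are hypothetical, and the real agent resolves patterns internally.

        import fnmatch
        import glob
        import os

        entity_properties = {"AdrHome": "/u01/oracle/database"}  # hypothetical entity
        include_pattern = "{AdrHome}/admin/logs/*.log"
        exclude_patterns = ["audit-*.aud"]

        # Replace {AdrHome} with this entity's property value.
        resolved = include_pattern.format(**entity_properties)

        collected = [
            path for path in glob.glob(resolved)
            if not any(fnmatch.fnmatch(os.path.basename(path), pat)
                       for pat in exclude_patterns)
        ]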

  7. Add Data Filters. See Use Data Filters in Sources.
  8. Add Extended Fields. See Use Extended Fields in Sources.
  9. Configure Field Enrichment options. See Configure Field Enrichment Options.
  10. Add Labels. See Use Labels in Sources.
  11. Click Save.

Use Data Filters in Sources

Oracle Logging Analytics lets you mask and hide sensitive information from your log entries as well as hide entire log entries before the log data is uploaded to the cloud.

Using the Data Filters tab when creating or editing a source, you can mask IP addresses, user IDs, host names, and other sensitive information with replacement strings, drop specific keywords and values from a log entry, and even hide an entire log entry.

You can add data filters when creating a log source, or when editing an existing source. See Customize an Oracle-Defined Source to learn about editing existing log sources.

If the log data is sent to Oracle Logging Analytics using On-demand Upload or collection from object store, then the masking will happen on the cloud side before the data is indexed. If you are collecting logs using the Management Agent, then the logs are masked before the content leaves your premises.

Masking Log Data

Masking is the process of taking a set of existing text and replacing it with other static text to hide the original content.

If you want to mask any information such as the user name and the host name from the log entries:

  1. Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.

  2. The administration resources are listed in the left-hand navigation pane under Resources. Click Sources.

  3. Click the name of the source that you want to edit. The source details page opens. Click Edit to edit the source.

  4. Click the Data Filters tab and click Add.

  5. Enter the mask Name, select Mask as the Type, enter the Find Expression value, and its associated Replace Expression value.

    The Find Expression value can be a plain text search or a standard regular expression. The value that will be replaced by the Replace Expression must be captured in parentheses ( ).

    Name           Find Expression   Replace Expression
    mask username  User=(\S+)\s+     confidential
    mask host      Host=(\S+)\s+     mask_host
    Note

    The syntax of the replace string should match the syntax of the string that’s being replaced. For example, a number shouldn’t be replaced with a string. An IP address of the form 123.45.67.89 should be replaced with 000.000.000.000 and not with 000.000. If the syntaxes don’t match, then the parsers may break.

  6. Click Save.

When you view the masked log entries for this log source, you’ll find that Oracle Logging Analytics has masked the values of the fields that you’ve specified.

  • User = confidential

  • Host = mask_host
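
The find/replace semantics can be pictured in a few lines of Python. This is a sketch of the documented behavior, not the agent's internal code: the text captured by the parenthesized group is swapped for the static replacement, and the rest of the match is kept.

    import re

    def mask(find_expr, replacement, text):
        def repl(m):
            # Replace only the captured group; keep the surrounding match.
            return m.group(0).replace(m.group(1), replacement, 1)
        return re.sub(find_expr, repl, text)

    line = "User=jsmith Host=myhost01 action=login"
    line = mask(r"User=(\S+)\s+", "confidential", line)
    line = mask(r"Host=(\S+)\s+", "mask_host", line)
    print(line)  # User=confidential Host=mask_host action=login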

Hash Masking the Log Data

When you mask the log data using the mask as described in the previous section, the masked information is replaced by a static string provided in the Replace Expression. For example, when the user name is masked with the string confidential, then the user name is always replaced with the expression confidential in the log records for every occurrence. By using hash mask, you can hash the found value with a unique hash. For example, if the log records contain multiple user names, then each user name is hashed to a unique value. So, if the string user1 is replaced with the text hash ebdkromluceaqie for every occurrence, then the hash can still be used to identify that these log entries are for the same user. However, the actual user name will not be visible.

Risk Associated: Because this is a hash, there is no way to recover the actual value of the masked original text. However, hashing the same string always produces the same hash. Ensure that you consider this risk while hash masking the log data. For example, the string oracle has the MD5 hash a189c633d9995e11bf8607170ec9a4b8. Every time someone creates an MD5 hash of the string oracle, the result is the same value. Although you cannot reverse this MD5 hash to get back the original string oracle, if someone guesses and forward hashes the value oracle, they will see that the hash matches the one in the log entry.
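
The determinism is easy to demonstrate. Two lines of Python reproduce the MD5 value quoted above:

    import hashlib

    # The same input always yields the same digest, which is what makes
    # guess-and-forward-hash attacks on masked values possible.
    print(hashlib.md5("oracle".encode()).hexdigest())
    # a189c633d9995e11bf8607170ec9a4b8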

To apply the hash mask data filter on your log data:

  1. Go to the Create Source page. For steps, see Create a Source.

  2. Alternatively, you can edit a source that already exists. For steps to open the Edit Source page, see Edit Source.

  3. Click the Data Filters tab and click Add.

  4. Enter the mask Name, select Hash Mask as the Type, enter the Find Expression value, and its associated Replace Expression value.

    Name            Find Expression   Replace Expression
    Mask User Name  User=(\S+)\s+     Text Hash
    Mask Port       Port=(\d+)\s+     Numeric Hash
  5. Click Save.

If the field is string based, you can use either a Text or a Numeric hash. But if the data field is numeric, such as an integer, long, or floating point, then you must use a Numeric hash. If you do not, the replacement text will break any regular expressions that expect this value to be a number, and the value will not be stored.

This replacement happens before the data is parsed. Typically, when the data must be masked, it's not clear if it is always numeric. Therefore, you must decide the type of hash while creating the mask definition.

As the result of the above example hash masking, each user name is replaced by a unique text hash, and each port number is replaced by a unique numeric hash.
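
The difference between the two hash types can be sketched as follows. The hashing scheme below is invented for illustration (the product's actual hash function is not documented here); the point is that a text hash yields a string token while a numeric hash yields only digits, so the replacement stays type-compatible with parsing.

    import hashlib

    def text_hash(value):
        # String token: suitable for string-based fields.
        return hashlib.sha256(value.encode()).hexdigest()[:16]

    def numeric_hash(value):
        # Digits only: required for numeric fields, so regular expressions
        # that expect a number still match the replacement.
        return str(int(hashlib.sha256(value.encode()).hexdigest(), 16) % 10**8)

    print(text_hash("user1"))     # same token for every occurrence of user1
    print(numeric_hash("44796"))  # same number for every occurrence of 44796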

You can utilize the hash mask when filtering or analyzing your log data. See Filter Logs by Hash Mask.

Dropping Specific Keywords or Values from Your Log Records

Oracle Logging Analytics lets you search for a specific keyword or value in log records and drop the matched keyword or value if that keyword exists in the log records.

Consider the following log record:

ns5xt_119131: NetScreen device_id=ns5xt_119131  [Root]system-notification-00257(traffic): start_time="2017-02-07 05:00:03" duration=4 policy_id=2 service=smtp proto=6 src zone=Untrust dst zone=mail_servers action=Permit sent=756 rcvd=756 src=192.0.2.1 dst=203.0.113.1 src_port=44796 dst_port=25 src-xlated ip=192.0.2.1 port=44796 dst-xlated ip=203.0.113.1 port=25 session_id=18738

If you want to hide the keyword device_id and its value from the log record:

  1. Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.

  2. The administration resources are listed in the left-hand navigation pane under Resources. Click Sources.

  3. Click the name of the source that you want to edit. The source details page opens. Click Edit to edit the source.

  4. Click the Data Filters tab and click Add.

  5. Enter the filter Name, select Drop String as the Type, and enter the Find Expression value, such as device_id=\S*

  6. Click Save.

When you view the log records for this source, you’ll find that Oracle Logging Analytics has dropped the keywords or values that you’ve specified.
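
The effect of this filter on the sample record can be reproduced with a single substitution; this sketch only mirrors the documented behavior:

    import re

    record = ("ns5xt_119131: NetScreen device_id=ns5xt_119131  "
              "[Root]system-notification-00257(traffic): duration=4 policy_id=2")
    print(re.sub(r"device_id=\S*", "", record))
    # The keyword and its value are removed; the rest of the record is kept.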

Note

Ensure that your parser regular expression matches the log record pattern, otherwise Oracle Logging Analytics may not parse the records properly after dropping the keyword.

Note

Apart from adding data filters when creating a source, you can also edit an existing source to add data filters. See Customize an Oracle-Defined Source to learn about editing existing sources.

Dropping an Entire Log Entry Based on Specific Keywords

Oracle Logging Analytics lets you search for a specific keyword or value in log records and drop an entire log entry in a log record if that keyword exists.

Consider the following log record:

ns5xt_119131: NetScreen device_id=ns5xt_119131  [Root]system-notification-00257(traffic): start_time="2017-02-07 05:00:03" duration=4 policy_id=2 service=smtp proto=6 src zone=Untrust dst zone=mail_servers action=Permit sent=756 rcvd=756 src=198.51.100.1 dst=203.0.113.254 src_port=44796 dst_port=25 src-xlated ip=198.51.100.1 port=44796 dst-xlated ip=203.0.113.254 port=25 session_id=18738

Let’s say that you want to drop an entire log entry if the keyword device_id exists in it:

  1. Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.

  2. The administration resources are listed in the left-hand navigation pane under Resources. Click Sources.

  3. Click the name of the source that you want to edit. The source details page opens. Click Edit to edit the source.

  4. Click the Data Filters tab and click Add.

  5. Enter the filter Name, select Drop Log Entry as the Type, and enter the Find Expression value, such as .*device_id=.*

    The regular expression must match the entire log entry. Using .* at the beginning and the end of the regular expression ensures that it matches all the other text in the log entry.

  6. Click Save.

When you view the log entries for this log source, you’ll find that Oracle Logging Analytics has dropped all those log entries that contain the string device_id in them.
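
The full-match requirement is why the leading and trailing .* matter. A minimal sketch of the documented behavior:

    import re

    drop_expr = re.compile(r".*device_id=.*")

    def keep(entry):
        # The entry is dropped only when the expression matches it entirely.
        return drop_expr.fullmatch(entry) is None

    print(keep("NetScreen device_id=ns5xt_119131 duration=4"))  # False: dropped
    print(keep("NetScreen session_id=18738 duration=4"))        # True: kept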

Note

Apart from adding data filters when creating a source, you can also edit an existing source to add data filters. See Customize an Oracle-Defined Source to learn about editing existing sources.

Use Extended Fields in Sources

The Extended Fields feature in Oracle Logging Analytics lets you extract additional fields from a log record in addition to any fields that the parser parsed.

In the source definition, a parser is chosen that can break a log file into log entries and each log entry into a set of base fields. These base fields need to be consistent across all log entries. A base parser extracts the common fields from a log record. If you need to extract additional fields from the log entry content, you can use an extended field definition. For example, the parser may be defined so that all the text at the end of the common fields of a log entry is parsed and stored in a field named Message.

When you search for logs using the updated source, values of the extended fields are displayed along with the fields extracted by the base parser.

Note

To add the Log Group as the input field, provide its OCID for the value instead of the name.

  1. Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.

    The administration resources are listed in the left-hand navigation pane under Resources. Click Sources.

  2. Click the name of the source that you want to edit. The source details page opens. Click Edit to edit the source.
  3. Click the Extended Fields tab and then click Add.
  4. A condition can be specified so the field extraction occurs only if the log entry being evaluated matches a predefined condition. To add a condition to the extended field, expand the Conditions section.
    • Reuse Existing: If required, to reuse a condition that's already defined for the log source, select the Reuse Existing radio button, and select the previously defined condition from the Condition menu.
    • Create New Condition: Select this option if you want to define a new condition. Specify the Condition Field, Operator, and Value.

      For example, the extended field definition that extracts the value of the field Security Resource Name from the value of the field Message, only if the field Service has one of the given values NetworkManager, dhclient, or dhcpd, is as follows:

      • Base Field: Message
      • Example Base Field Content: DHCPDISCOVER from b8:6b:23:b5:c1:bd (HOST1-LAP) via eth0
      • Extract Expression: ^DHCPDISCOVER\s+from\s+{Security Resource Name:\S+}\s+.+

      The condition for this extended field definition should be defined as follows:

      • Condition Field: service
      • Condition Operator: IN
      • Condition Value: NetworkManager,dhclient,dhcpd

      In the above example, the extracted value of the field Security Resource Name is b8:6b:23:b5:c1:bd.

      To provide multiple values for the field Condition Value, key in the value and press Enter for each value.

    By adding a condition, you can reduce the regular expression processing on a log entry that is not likely to have the value that you are trying to extract. This can effectively reduce the processing time and the delay in the availability of your log entries in the Log Explorer.

  5. Select the Base Field where the value is the one that you want to further extract into the fields.

    The fields shown in the base field list are those parsed by the base parser, plus some default fields populated by log collection, such as Log Entity (the file name, database table, or other original location that the log entry came from) and Original Log Content.

  6. In the Example Base Field Content box, enter a typical example value of the Base Field that you chose to extract into additional fields. This is used during the test phase to show that the extended field definition is working properly.
  7. Enter the extraction expression in the Extraction Expression field and select the Enabled check box.

    An extraction expression follows the normal regular expression syntax, except that to specify an extraction element, you use a macro indicated by curly brackets { and }. The two values inside the curly brackets are separated by a colon (:). The first value is the field in which to store the extracted data. The second value is the regular expression that should match the value to capture from the base field. (A sketch of this macro expansion appears after the note below.)

    Note

    When you want to extract multiple values from a field using the Extended Fields:

    1. First create a field for the log content that can have multiple values for a field, for example Error IDs. See Create a Field.

    2. In the Add Extended Field Definition dialog box, for the Base Field, select a base field that is extracted from a parser and has multiple-value data, for example, Message or Original Log Content.

    3. Enter Example Base Field Content which has multiple values of a field that you want to extract.

    4. Under Extract Expression, provide the regular expression to extract each value from the field. Click Add.


    Figure: Extended field definition for multiple values of a field
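
    The macro syntax can be understood as an expansion into ordinary named capture groups. The following Python sketch is illustrative rather than Oracle's implementation; it expands each {Field:regex} macro and applies the result to the DHCPDISCOVER example above.

        import re

        def compile_extract_expression(expr):
            """Expand {Field Name:regex} macros into named capture groups."""
            fields = []
            def to_group(m):
                fields.append(m.group(1))  # remember the field name
                return "(?P<f%d>%s)" % (len(fields) - 1, m.group(2))
            pattern = re.sub(r"\{([^:{}]+):([^{}]+)\}", to_group, expr)
            return re.compile(pattern), fields

        rx, fields = compile_extract_expression(
            r"^DHCPDISCOVER\s+from\s+{Security Resource Name:\S+}\s+.+")
        m = rx.search("DHCPDISCOVER from b8:6b:23:b5:c1:bd (HOST1-LAP) via eth0")
        print(dict(zip(fields, m.groups())))
        # {'Security Resource Name': 'b8:6b:23:b5:c1:bd'}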

  8. Click Test Definition to validate that the extract expression can successfully extract the desired fields from the base field example content that you provided. On a successful match, the Step Count is displayed, which is a good measure of the effectiveness of the extract expression. If the expression is inefficient, then the extraction may time out, and the field will not be populated.
    Note

    Keep the step count under 1000 for best performance. The higher this number, the longer it takes to process your logs and make them available in the Log Explorer.
  9. Click Save.

If you use the Automatically parse time only option in your source definition instead of creating a parser, then the only field available for creating Extended Field Definitions is the Original Log Content field, because no other fields are populated by the parser. See Use the Automatic Time Parser.

Oracle Logging Analytics enables you to search for the extended field definitions that you’re looking for. You can search based on how a definition was created, the type of base field, or some example content of the field. Enter the example content in the Search field, or click the down arrow for the search dialog box. In the search dialog box, under Creation Type, select whether the extended fields that you’re looking for are Oracle-defined or user-defined. Under Base Field, you can select from the options available. You can also specify the example content or the extraction field expression to be used for the search. Click Apply Filters.

Table 8-1 Sample Example Content and Extended Field Extraction Expressions

Description: Extract the endpoint file extension from the URI field of a Fusion Middleware Access log file
Base Field: URI
Example Content: /service/myservice1/endpoint/file1.jpg
Extraction Expression: {Content Type:\.(jpg|html|png|ico|jsp|htm|jspx)}
This extracts the file suffix, such as jpg or html, and stores the value in the field Content Type. It extracts only the suffixes listed in the expression.

Description: Extract the user name from the file path of a log entity
Base Field: Log Entity
Example Content: /u01/oracle/john/audit/134fa.xml
Extraction Expression: /\w+/\w+/{User Name:\w+}/\w+/\w+

Description: Extract the start time from the Message field
Base Field: Message
Example Content: Backup transaction finished. Start=1542111920
Extraction Expression: Start={Event Start Time:\d+}
Note: Event Start Time is a Timestamp data type field. If this were a numeric data type field, then the start time would be stored simply as a number, and not as a timestamp.

Description: Source: /var/log/messages; Parser Name: Linux Syslog Format
Base Field: Message
Example Content: authenticated mount request from 10.245.251.222:735 for /scratch (/scratch)
Extraction Expression: authenticated {Action:\w+} request from {Address:[\d\.]+}:{Port:\d+} for {Directory:\S+}\s\(

Description: Source: /var/log/yum.log; Parser Name: Yum Format
Base Field: Message
Example Content: Updated: kernel-headers-2.6.18-371.0.0.0.1.el5.x86_64
Extraction Expression: {Action:\w+}: {Package:.*}

Description: Source: Database Alert Log; Parser Name: Database Alert Log Format (Oracle DB 11.1+)
Base Field: Message
Example Content: Errors in file /scratch/cs113/db12101/diag/rdbms/pteintg/pteintg/trace/pteintg_smon_3088.trc (incident=4921): ORA-07445: exception encountered: core dump [semtimedop()+10] [SIGSEGV] [ADDR:0x16F9E00000B1C] [PC:0x7FC6DF02421A] [unknown code] []
Extraction Expression: Errors in file {Trace File:\S+} (incident={Incident:\d+}): {Error ID:ORA-\d+}: exception encountered: core dump [semtimedop()+10] [SIGSEGV] [ADDR:{Address:[\w\d]+}] [PC:{Program Counter:[\w\d]+}] [unknown code] []

Description: Source: FMW WLS Server Log; Parser Name: WLS Server Log Format
Base Field: Message
Example Content: Server state changed to STARTING
Extraction Expression: Server state changed to {Status:\w+}

Configure Field Enrichment Options

Oracle Logging Analytics lets you configure Field Enrichment options so you can further extract and display meaningful information from your extended fields data.

One of the Field Enrichment options is Geolocation, which converts IP addresses or location coordinates present in the log records to a country or country code. This can be used in log sources like Web Access Logs that have external client IP addresses.

Using the Lookup Field Enrichment option, you can match field-value combinations from logs to an external lookup table.

Include additional information in your log entries by using the Additional Fields option. This information gets added to each log entry at processing time.

Note

  • For a source, you can define a maximum of three field enrichments, each of different type.

  • To add the Log Group as the input field, provide its OCID for the value instead of the name.

Use Ingest-Time Lookups in the Source

Oracle Logging Analytics lets you enrich log data with additional field-value combinations from lookups by setting up Lookup Field Enrichment option in the source. Oracle Logging Analytics matches the specified field's value to an external lookup table, and if matched, appends other field-value combinations from the matched lookup record to the log data. See Manage Lookups.

You can add data from multiple lookups by setting up the Lookup Field Enrichment option multiple times. The Lookup Field Enrichments are processed in the same order as they are created. So, if you have related lookups where the keys overlap, and processing each lookup adds fields that help with further enrichment, then ensure that you include the overlapping keys in the input and output selections of the Lookup Field Enrichment definitions. For an example of using multiple related lookups to enrich log data, see Example of Adding Multiple Lookup Field Enrichments.

Steps to Add Lookup Field Enrichment

  1. Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.

    The administration resources are listed in the left-hand navigation pane under Resources. Click Sources.

    The Sources page opens. Click Create Source.

    Alternatively, click the Actions menu icon next to the source entry that you want to edit and select Edit. The Edit Source page is displayed.

    Note

    Make sure that a parser is selected in the source definition to enable the Add button for field enrichment.

  2. Click the Field Enrichment tab and then click Add.

    The Add Field Enrichment dialog box opens.

  3. In the Add Field Enrichment dialog box:

    1. Select Lookup as the Function.
    2. Select the Lookup Table Name from the drop down menu.
    3. Under Input Fields, select the Lookup Table Column and the Log Source Field to which it must be mapped. This maps the key from the lookup table to a field that is populated by your parser. For example, the errid column in the lookup table can be mapped to the Error ID field in the logs.

      The list for the input fields in Log Source Field will be limited to the fields that your log source populates.

    4. Under Actions, select the new log source field and the field value in the lookup table column to which it must be mapped. When a matching record is found in the specified lookup table based on the input mapping above, the output field specified in Log Source Field is added to the log with the value of the output lookup column specified in Field value. For example, the erraction column in the lookup table can be mapped to the Action field.

      Optionally, click + Another item to map more output fields.

    5. Click Add field enrichment.

    The lookup is now added to the Field Enrichment table.

  4. Keep the Enabled check box selected.

  5. To add more lookups, repeat steps 3 and 4.

When you display the log records of the log source for which you created the ingest-time lookup field enrichment, you can see that the Output Field displays values that are populated against the log entries because of the lookup table reference you used in creating the field enrichment. See Manage Lookups.

Example of Adding Multiple Lookup Field Enrichments

You can add up to three Lookup Field Enrichments to a source. The individual lookups may or may not be related to one another.

The following example illustrates how three related lookups can be set up such that the log data can be enriched with information from all three lookups. Consider the following three related lookups that have information about multiple hosts:

Lookup1: SystemConfigLookup

Serial Number  Manufacturer  Operating System  Memory  Processor Type  Disk Drive         Host ID
SER-NUM-01     Manuf1        OS1               256TB   Proc1           Hard Drive         1001
SER-NUM-02     Manuf2        OS2               7.5TB   Proc3           Solid State Drive  1002
SER-NUM-03     Manuf2        OS3               16TB    Proc2           Solid State Drive  1003
SER-NUM-04     Manuf3        OS1               512TB   Proc5           Hard Drive         1004
SER-NUM-05     Manuf1        OS1               128TB   Proc4           Hard Drive         1001

Lookup2: GeneralHostConfigLookup

Host ID  Host Owner  Host Location  Host Description             Host IP Address
1001     Jack        San Francisco  Description for Jack host    192.0.2.76
1002     Alexis      Denver         Description for Alexis host  203.0.113.58
1003     John        Seattle        Description for John host    198.51.100.11
1004     Jane        San Jose       Description for Jane host    198.51.100.164

Lookup3: NetworkConfigLookup

IP Address      Subnet Mask      Gateway       DNS Server
192.0.2.76      255.255.255.252  192.0.2.1     Recursive server
203.0.113.58    255.255.255.0    203.0.113.1   Authoritative server
198.51.100.11   255.255.255.224  198.51.100.1  Root server
198.51.100.164  255.255.255.192  198.51.100.1  Recursive server

Between the lookups Lookup1 and Lookup2, Host ID is the common key: it is selected as an output in the first lookup field enrichment and as the input in the second. Similarly, between the lookups Lookup2 and Lookup3, IP Address is the common key: Host IP Address is selected as an output in the second lookup field enrichment and as the input in the third.

With the above setting, let the lookup field enrichments be configured in the order 1, 2, and 3:

Lookup Field Enrichment 1: SystemConfigLookup
  Input Fields:
  • Log Source Field: Serial Number; Lookup Table Column: Serial Number
  Actions:
  • New Log Source Field: Operating System; Field Value: Operating System
  • New Log Source Field: Memory; Field Value: Memory
  • New Log Source Field: Host ID; Field Value: Host ID

Lookup Field Enrichment 2: GeneralHostConfigLookup
  Input Fields:
  • Log Source Field: Host ID; Lookup Table Column: Host ID
  Actions:
  • New Log Source Field: Host Owner; Field Value: Host Owner
  • New Log Source Field: Host IP Address; Field Value: Host IP Address

Lookup Field Enrichment 3: NetworkConfigLookup
  Input Fields:
  • Log Source Field: Host IP Address; Lookup Table Column: IP Address
  Actions:
  • New Log Source Field: Gateway; Field Value: Gateway
  • New Log Source Field: DNS Server; Field Value: DNS Server

After the above enrichment configuration is complete, when the Serial Number field is detected in the log data, the log entry is further enriched with Operating System, Memory, Host ID, Host Owner, Host IP Address, Gateway, and DNS Server from the three lookups. So, a log entry with the serial number SER-NUM-01 is enriched with the additional information OS1, 256TB, 1001, Jack, 192.0.2.76, 192.0.2.1, and Recursive server.
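
The chaining can be sketched with plain dictionaries standing in for the three lookup tables above; this is illustrative only, not how the ingest pipeline is implemented:

    # Lookups run in creation order; each can key off fields that the
    # previous lookup added (Host ID, then Host IP Address).
    system_config = {"SER-NUM-01": {"Operating System": "OS1",
                                    "Memory": "256TB", "Host ID": "1001"}}
    host_config = {"1001": {"Host Owner": "Jack",
                            "Host IP Address": "192.0.2.76"}}
    network_config = {"192.0.2.76": {"Gateway": "192.0.2.1",
                                     "DNS Server": "Recursive server"}}

    entry = {"Serial Number": "SER-NUM-01"}
    entry.update(system_config.get(entry.get("Serial Number"), {}))
    entry.update(host_config.get(entry.get("Host ID"), {}))
    entry.update(network_config.get(entry.get("Host IP Address"), {}))

    print(entry["Host Owner"], entry["Gateway"])  # Jack 192.0.2.1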

Use the Geolocation Field for Grouping Logs

After you set up the Geolocation field enrichment, you can view log records grouped by country or country code. This is useful when you analyze logs that have crucial location information such as IP address or location coordinates, for example, access logs, trace logs, or application transport logs.

  1. Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.

    The administration resources are listed in the left-hand navigation pane under Resources. Click Sources.

    The Sources page opens. Click Create Source.

    Alternatively, click the Actions menu icon next to the source entry that you want to edit and select Edit. The Edit Source page is displayed.

  2. Add the Extended Fields definition for the base field that contains the country-specific IP address or host name records, such as Host IP Address.
  3. Click the Field Enrichment tab and then click Add.
  4. In the Add Field Enrichment dialog box, select Geolocation as the Function.
  5. Under the Input Fields section, select the IP Field, which is the geolocation field name that is extracted by the parser from the logs, for example, Client Coordinates or Host IP Address (Client).

    To detect threats using the geolocation information, select the Threat intelligence enrichment check box. During the ingestion of the log data, if the IP address value associated with the Source Address input field in the log content is flagged as a threat, then it is added to the Threat IPs field. You can then use that field to filter the logs that have threats associated with them. Additionally, those log records will also have the Threat IP label with the problem priority High. You can use the label in your search.

    The log records that have problem priority High associated with them have a red dot in the row. This makes those log records more prominent in their appearance in the table, making it easy for you to spot them and analyze them. You can then open the Threat IPs in the Oracle Threat Intelligence console and obtain more information about the threat.

  6. Click Add.

Add More Data to Your Log Entries at Processing Time

You might want to include more information in each of your log entries as additional metadata. This information is not part of the log entry itself but is added at processing time, for example, Container ID or Node. For an example of adding metadata while uploading logs on demand, see Upload Logs on Demand.

The information thus added might not be directly visible in the Log Explorer. Complete the following steps to make it visible in the Log Explorer for your log analysis:

  1. Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.

    The administration resources are listed in the left-hand navigation pane under Resources. Click Sources.

    The Sources page opens. Click the Actions menu icon next to the source entry that you want to edit and select Edit. The Edit Source page is displayed.

    Note

    Make sure that a parser is selected in the source definition to enable the Add button for field enrichment.

  2. Click the Field Enrichment tab and then click Add.

    The Add Field Enrichment dialog box opens.

  3. In the Add Field Enrichment dialog box:

    1. Select Additional Fields as the Function.
    2. Under Map Fields, select the fields that you want to map to the source. The fields that are selected in the parsers associated with this source are not available here.
    3. Click Add.

After you specify the additional fields, they are visible in the Log Explorer for log analysis. They can also be selected while configuring the Extended Fields or Labels for sources.

Use Labels in Sources

Oracle Logging Analytics lets you add labels or tags to log records, based on defined conditions.

When a log entry matches the condition that you have defined, the label is added to that log entry. The label is then available in your Log Explorer visualizations, as well as for searching and filtering log entries.

You can use Oracle-defined or user-created labels in sources. To create a custom label to tag a specific log entry, see Create a Label.

  1. To use labels in an existing source, edit that source. For steps to open an Edit Source page, see Edit Source.

  2. Click the Labels tab.

  3. To add a conditional label, click Add conditional label.

    In the Conditions section:

    1. Select the log field on which you want to apply the condition from the Input Field list.

    2. Select the operator from the Operator list.

    3. In the Condition Value field, specify the value of the condition to be matched for applying the label.

      Note

      To add the Log Group as the input field, provide its OCID for the value instead of the name.

    4. To add more conditions, click the Add Condition icon, and repeat steps 3a through 3c. Select the logical operation to apply to the multiple conditions: AND, OR, NOT AND, or NOT OR.

      To add a group of conditions, click the Group Condition icon, and repeat steps 3a through 3c to add each condition. A group of conditions must have more than one condition. Select the logical operation to apply to the group of conditions: AND, OR, NOT AND, or NOT OR.

      To remove a condition, click the Remove Condition icon.

      To view the list of conditions in the form of a statement, click Show Condition Summary.

  4. Under Actions, select from the available Oracle-defined or user-created labels. If required, you can create a new label by clicking Create Label.

    Select the Enabled check box.

  5. Click Add.

Oracle Logging Analytics enables you to search for the labels that you’re looking for in the Log Explorer. You can search based on any of the parameters defined for the labels. Enter the search string in the Search field. You can specify the search criteria in the search dialog box. Under Creation Type, select if the labels that you’re looking for are Oracle-defined or user-defined. Under the fields Input Field, Operator, and Output Field, you can select from the options available. You can also specify the condition value or the output value that can be used for the search. Click Apply Filters.

You can now search log data based on the labels that you’ve created. See Filter Logs by Labels.

Use the Conditional Fields to Enrich the Data Set

Optionally, if you want to select an arbitrary field and write a value to it, you can use conditional fields. Populating a value in an arbitrary field using conditional fields is very similar to using lookups. However, conditional fields provide more flexibility in your matching conditions and are ideal when you are dealing with a small number of condition and field-population definitions. For example, if you have only a few conditions for populating a field, then you can avoid creating and managing a lookup by using conditional fields.

The steps to add the conditional fields are similar to those in the workflow above for adding conditional labels.

  • In step 3, instead of clicking Add conditional label, click Add conditional field. The rest of step 3, selecting the conditions, remains the same as in the workflow above.

  • In step 4 above,

    1. For the Output Field, select from the available Oracle-defined or user-created fields in the menu. If required, you can create a new field by clicking Create New Field.

    2. Enter an Output Value to write for the output field when the input condition is true.

      For example, the source can be configured to attach the authentication.login output value to the Security Category output field when the log record contains the input field Method set to the value CONNECT. (A sketch of this rule follows this list.)

      Select the Enabled check box.
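
The CONNECT example above reduces to a simple condition-action rule. A minimal Python sketch of that behavior:

    def apply_conditional_field(entry):
        # Condition: the input field Method equals CONNECT.
        if entry.get("Method") == "CONNECT":
            # Action: write the output value to the output field.
            entry["Security Category"] = "authentication.login"
        return entry

    print(apply_conditional_field({"Method": "CONNECT"}))
    # {'Method': 'CONNECT', 'Security Category': 'authentication.login'}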

Use the Automatic Time Parser

Oracle Logging Analytics lets you configure your source to use a generic parser instead of creating a parser for your logs. When you do this, only the log time is parsed from the log entries, provided the time can be identified by Oracle Logging Analytics.

This is particularly helpful when you’re not sure about how to parse your logs or how to write regular expressions to parse your logs, and you just want to pass the raw log data to perform analysis. Typically, a parser defines how the fields are extracted from a log entry for a given type of log file. However, the generic parser in Oracle Logging Analytics can:

  • Detect the time stamp and the time zone from log entries.

  • Create a time stamp using the current time if the log entries don’t have any time stamp.

  • Detect whether the log entries are multiple lined or single lined.

  1. Open the navigation menu and click Observability & Management. Under Logging Analytics, click Administration. The Administration Overview page opens.

    The administration resources are listed in the left-hand navigation pane under Resources. Click Sources.

  2. In the Sources page, click Create Source.
    This displays the Create Source dialog box.
  3. In the Name field, enter the name for the source.
  4. In the Source Type field, select File.
  5. Click Entity Type and select the type of entity for this source.
  6. Select Automatically parse time only. Oracle Logging Analytics automatically applies the generic parser type.
  7. Click Save.
When you access the log records of the newly created source, Oracle Logging Analytics extracts and displays the following information from the log entries:
  • Time stamp:

    • When a log entry doesn’t have a time stamp, then the generic parser creates and displays the time stamp based on the time when the log data was collected.

    • When a log record contains a time stamp, but the time zone isn’t defined, then the generic parser uses the management agent’s time zone.

      When using Management Agent, if the timezone is not detected properly, then you can manually set the timezone in the agent configuration files. See Manually Specify Time Zone and Character Encoding for Files.

      When uploading logs using on-demand upload, you can specify the timezone along with your upload to force the timezone if we cannot properly detect it. If you're using CLI, see Command Line Reference: Logging Analytics - Upload. If you're using REST API, then see Logging Analytics API - Upload.

    • When a log file has log records with multiple time zones, the generic parser can support up to 11 time zones.

    • When a log file has some log entries with a time zone and some without, the generic parser uses the previously found time zone for the entries that are missing one.

    • When you ingest logs using the management agent, if the time zone or the time zone offset is not indicated in the log records, then Oracle Logging Analytics compares the last modified time of the file, as recorded by the operating system, with the timestamp of the last log entry to determine the proper time zone.

  • Multiple lines: When a log entry spans multiple lines, the generic parser captures the multiline content correctly.
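
The timestamp fallbacks described above follow a simple precedence, sketched below in Python. This is illustrative only; the real generic parser detects many more timestamp formats and tracks time zones per file:

    from datetime import datetime, timezone

    last_zone = timezone.utc  # reused when an entry omits its time zone

    def entry_time(raw):
        global last_zone
        if raw is None:
            # No timestamp in the entry: stamp it with the collection time.
            return datetime.now(tz=last_zone)
        ts = datetime.fromisoformat(raw)
        if ts.tzinfo is None:
            ts = ts.replace(tzinfo=last_zone)  # borrow the last-seen zone
        else:
            last_zone = ts.tzinfo              # remember the new zone
        return ts

    print(entry_time("2017-02-07T05:00:03+00:00"))
    print(entry_time("2017-02-07T05:00:04"))  # inherits the +00:00 zone
    print(entry_time(None))                   # collection-time fallback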