The interactive data visualizations in Oracle Logging Analytics enable you to get deeper insights into your log data. Based
on what you want to achieve with your data set, you can select the visualization type that
best suits your application.
Here are some of the things you can do with visualizations:
Compare and Contrast the Data Set Using One or Two Parameters 🔗
Use these simple graphs to visualize your data set and compare the log records based on one or two key parameters:
For each visualization type below, the description covers what you input, what output you get, and what you can do with it.
Pie: A pie chart shows the overall composition of a data set by encoding the percentage values in angles.
Default Group By field: Log Source. Optionally, you can change this parameter.
A circular representation of the count of the log records that are grouped using the input parameter.
Compare the broad groups in the circle that indicate percentages of the whole data
set. For example, compare the percentages of the counts of the log
records from various sources.
Bar: The count of the log records is displayed as segmented columns against the selected x-axis field.
Default X-axis field: Log Source. Optionally, you can change this parameter.
Additionally, provide a second parameter in the Group by section to view a colored and stacked bar graph.
Bar graph: The input parameter is represented along the x-axis as segmented columns, with the height of each column denoting the count.
Stacked bar graph: The key input parameter is grouped by the second parameter, and is represented as a stacked bar graph along the x-axis. The overall height of the column denotes the count. The colored stack represents the grouping.
Bar graph: Compare the sizes of the segmented columns to compare the count of
the log records based on the input parameter. For example, compare
the count of log records from each source.
Stacked bar graph: Here, you can compare not only the count of the values of
the input parameter, but also how they are grouped by the second
parameter. For example, the overall height of each segmented column
indicates the count of the log records from a given source, and the
log records in each column are grouped based on the severity of the
errors found in them.
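As a rough sketch, a stacked bar chart like this typically corresponds to a query that counts log records grouped by two fields. The example below assumes the standard stats command syntax and uses Severity as the second grouping field; substitute field names that exist in your data:
* | stats count by 'Log Source', 'Severity'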
Horizontal bar: The count of the log records is displayed as segmented rows against the selected y-axis field.
Default Y-axis field: Log Source. Optionally, you can change this parameter.
Additionally, provide a second parameter in the Group by section to view a colored and stacked horizontal bar graph.
Horizontal bar graph: The input parameter is represented along the y-axis as segmented rows, with the width of each row denoting the count.
Stacked horizontal bar graph: The key input parameter is grouped by the second parameter, and is represented as a stacked bar graph along the y-axis. The overall width of the row denotes the count. The colored stack represents the grouping.
Horizontal bar graph: Compare the sizes of the segmented rows to compare the
count of the log records based on the input parameter. For example,
compare the count of log records from each source.
Stacked horizontal bar graph: Here, you can compare not only the count of the
values of the input parameter, but also how they are grouped by
the second parameter. For example, the overall width of each
segmented row indicates the count of the log records from a given
source, and the log records in each row are grouped based on the
entity type.
Map: The geographical distribution of the log records is
displayed on the world map based on the location the log records are
collected from.
Default fields referenced: Client Host
City, Client Host Region,
Client Host Country, Client Host
Continent, and Client Coordinates.
The geographical distribution of the count of log
records based on the input geographical parameter.
Compare the count of the log records based on their
geographical distribution.
Line: The count of the log records is plotted against time, with a line tracing the counts.
Default Group By field: Log Source. Optionally, you can change this parameter.
A plotted line that presents the count of the input parameter along the y-axis tracked on the timeline along the x-axis.
Compare the count of the log records based on the input parameter, with each value represented by a separate line plotted against time. For example, the count of log records from each log source is plotted against time as a separate line.
Word Cloud: The data set is represented by a set of word tiles, whose sizes indicate the count of log records in each group and whose colors indicate the grouping.
Default Group By field: Log Source. Optionally, you can change this parameter.
Additionally, provide a second parameter in the Color section to further group the data set. For example, Entity Type.
A word cloud where the size of the word tile represents the count. Additionally, when you provide a second input parameter, you can see a colored word cloud where the words are grouped by the second parameter. The groups are represented by colors.
Compare the count of the log records based on the size of the word tiles that
represent the input parameter. If you provided the second parameter,
then you can also view the color grouping of the word tiles. For
example, the size of each word tile represents the count of
the log records from a source, and the color of the word tile
indicates the entity type of its group.
Heat Map: A heat map makes it easier to visualize a large number of values, such as
counts or utilization, against time.
Default Group By field: Log
Source. Optionally, you can change this parameter.
Time is plotted along the y-axis of the chart. Along the x-axis, the
field that is the input to the timestats command
is plotted, for example, Log Source. Each rectangle along a row
represents the count of log records for a slot of time. The color of
the rectangle represents the range to which the count belongs. The
ranges are displayed at the top of the chart.
The rectangles of various colors represent values over time so you
can quickly spot areas that might be of interest or concern.
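For reference, the heat map described above is driven by the timestats command. A minimal illustrative query, assuming the default count aggregation, is:
* | timestats count by 'Log Source'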
View these charts to get detailed information about the data set:
Each chart below is described in terms of what you input, what output you get, and what you can do with it.
Summary table
Default Value: count
Optionally, you can select a different math function to perform on the data set. For example, Percentile, Median, or Average.
Default Group by field: Log Source
Optionally, you can select more input parameters for the Group by section that will enable further grouping of the data set.
A table that displays the following:
Each column of the table represents a display field or a field that you selected for grouping the data set.
The number of rows in the table indicates the number of groups.
Summary table is the most versatile visualization chart that can perform statistical analysis on any type of input data. It also permits multiple input parameters in the Group by section, thus enabling more complex deductions from the analysis.
Perform statistical analysis on the entire data set.
Select the fields for statistical analysis that can help you understand the data set.
Group your statistical analysis to correlate the results.
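As an illustration, a summary table that applies a math function and groups by two fields could be generated by a stats query like the one below. Here, 'Response Time' is only a hypothetical numeric field used for the example; replace it with a numeric field from your own log sources:
* | stats avg('Response Time') by 'Log Source', 'Entity Type'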
Records
Default Value: Entity, Entity Type, Log Source, Host Name (Server), Problem Priority, and Label.
Optionally, you can select more input parameters that will display in the chart.
A chart of log records that contains the following:
The time when the log was collected
Original log content and the selected display fields
View the original log content to understand and correlate the values of the display fields.
View the log content corresponding to a specific log collection time.
Table
Default Value: Entity, Entity
Type, Log Source, Host Name
(Server), Problem Priority, and
Label.
Optionally, you can select more input parameters that will display in the table.
A table that displays the following:
Each column of the table represents a display field that you selected
Each row of the table represents a log record
Prioritize and select the fields that you want to view in the table to help you make decisions.
Filter the log content and view only the data in each log record that is of interest to you.
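As a sketch, you can also narrow the displayed columns in the query itself, assuming the fields command is available in your query language version; the same selection can be made from the display fields panel:
* | fields 'Entity', 'Entity Type', 'Log Source', 'Problem Priority'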
Distinct
Default Value: Log Source
Optionally, you can select more input parameters that will display in the table.
A table that lists the unique values of the default field. If you included more fields, then the table displays the following:
Each column of the table represents a display field that you selected.
The number of rows in the table indicates the number of groups.
Each row indicates a unique group of the display fields that are available in the log data.
Identify the unique values of the fields in your log data.
Identify unique groups of fields in the log data.
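A minimal sketch of a corresponding query, assuming the distinct command accepts one or more field names:
* | distinct 'Log Source', 'Entity Type'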
Alternatively, use the Tile visualization to summarize the data set. By default, the tile visualization
summarizes the overall count of the log records. Identify the fields to group the log
records in order to refine the summary. For example, you can group the log records by
source. This is a sample summary output of the grouping: 8 Distinct
values of Log Source.
Group and Drill Down to the Specific Data Set 🔗
Use these simple graph and chart visualizations to group the log records based on a parameter, and then drill down to the individual log records to investigate further.
Records with histogram
A histogram is a graph that lets you view the underlying frequency distribution or shape of a continuous data set. It shows the dispersion of log records over a specific time period with segmented columns. You can optionally select a field for the Group by section to group the log records for the histogram visualization.
Reduce the size of the data set for analysis by grouping the log records in the histogram, and then drilling down to specific log records. You can click a segment in the histogram to drill down to a specific set of log records and view the original log content.
The combination of the histogram graph and records chart enables you to drill down to the specific log content faster.
Table with histogram
Use an appropriate field to group the log records in the histogram visualization. From the histogram graph, identify the data set that you want to view the field details of, and view it in the table.
The combination of the histogram graph and table enables you to drill down to the specific data set faster.
Analyze the Data Set Using Multiple Key Parameters 🔗
Use these complex graph visualizations to determine the hierarchical and fractional relationships of the fields in the whole data set:
Each visualization type below is described in terms of what you input, what output you get, and what you can do with it.
Sunburst
Default Value: count
Optionally, you can select a different field whose count can help to generate the sunburst.
Default Group by field: Log Source
Optionally, you can select more input parameters for the Group by section that will enable further grouping of the data set. For example, Entity Type and Entity.
By default, a sunburst that represents the log records grouped by the default parameter. The size of a sector in the circle indicates the count of the log records in the specific data set. If you specified more fields for grouping, you’ll see a concentric sunburst, with the innermost ring representing the first computation of the grouping, and the subsequent rings representing the following computations, in that order.
Use the sunburst visualization to analyze hierarchical data from multiple fields. The hierarchy is represented in the form of concentric rings, with the innermost ring representing the top of the hierarchy.
For example, the log records are grouped using the fields Log Source, Entity Type, and Entity. Click a segment to view the Records with Histogram visualization for the specific data set. The records chart lists the original log content, emphasizing the default display fields.
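As an illustrative sketch, the three-level grouping in this example corresponds to a count grouped by the same three fields, rendered as a sunburst:
* | stats count by 'Log Source', 'Entity Type', 'Entity'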
Treemap
Default Value: count
Optionally, you can select a different field whose count can help to generate the treemap.
Default Group by field: Log Source
Optionally, you can select more input parameters for the Group by section that will enable further grouping of the data set.
A treemap that represents the log records grouped by the default parameter. The sizes of the rectangles indicate the count of the log records in the specific data set. If you specified more fields for grouping, you’ll see a nested treemap that groups the log records based on all the parameters that you specified. The nested treemap also shows the fractional relationship of the fields in each data set.
Use the treemap visualization to analyze the data from multiple fields that are both hierarchical and fractional, with the help of interactive nested rectangles.
For example, the log records are grouped using the Log Source field. Click a rectangle to view the Records with Histogram visualization for the specific data set. The records chart lists the original log content, emphasizing the default display fields.
Perform Advanced Analysis of the Data Set 🔗
Use these visualizations to perform advanced analysis of a large data set to figure out the root cause of an issue, to identify potential issues, to view trends, or to detect an anomaly.
Each visualization type below is described in terms of what you input, what output you get, and what you can do with it.
Cluster
The cluster visualization works on the entire data set and isn’t based on a specific parameter.
The Cluster view displays a summary banner at the top showing the following tabs:
Total Clusters: Total number of clusters for the selected log records.
Potential Issues: Number of clusters that have potential issues based on log records containing words such as error, fatal, exception, and so on.
Outliers: Number of clusters that occurred only once during a given time period.
Trends: Number of unique trends during the time period. Many clusters may have
the same trend. Therefore, clicking this panel shows a
cluster from each of the trends.
Clustering uses machine learning to identify the pattern of log records, and then to group the logs that have similar patterns. You can investigate further from each of the tabs based on your requirement. When you click any of the tabs, the histogram view of the cluster changes to display the records for the selected tab.
Clustering helps significantly reduce the total number of log entries that you have
to explore, and points out the outliers. See Clusters Visualization.
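A minimal sketch of a query that produces the cluster visualization, assuming the cluster command is applied to the full set of search results:
* | cluster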
Link
Default Group By field: Log Source.
Optionally, you can select more input parameters for the Group
By section for more relevant grouping of the log
data. You can also select additional parameters for the
Value section.
The Groups tab displays a bubble chart that represents the groups formed with
the fields used for linking in the commonly seen ranges. The
Group By field is plotted along the x-axis, and the group
duration is plotted along the y-axis. The size of each
bubble in the graph is determined by the number of groups
contained in that bubble.
Trends: Project the time series data using
the Link Trend feature.
The histogram tab displays the log records or groups in the histogram visualization.
The groups table lists parameters like Log Source, Entity Type, Entity, Count, Start Time, End Time, and Group Duration for each group. If you specified more display fields, they’re included in the table too.
Use the link visualization to perform advanced analysis of log records by combining
individual log records from across the sources into groups, based on
the fields you selected for linking.
The bubble chart shows the anomalies in the patterns based on the analysis of the
groups. You can further examine the anomalies by clicking an
individual bubble or selecting multiple bubbles. To view the details of
the groups that correspond to the anomaly, select the anomaly bubble
in the chart. You can investigate the anomaly to identify and
rectify issues. See Link Visualization.
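For example, a basic link query that groups log records by source might look like the following sketch; append more fields to refine the grouping:
* | link 'Log Source'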
Select cluster() in the query section to group the log data, and provide the input parameter for the Group By section for more relevant grouping of the log data.
The Groups tab displays a bubble chart that represents the groups formed with
the selected field and the clusters used for linking in the commonly
seen ranges. The Group By field is plotted along the x-axis, and the
group duration is plotted along the y-axis.
The groups table lists parameters like Entity Type, Cluster Sample, Count, Start Time, End Time, and Group Duration for each group. If you specified more display fields, they’re included in the table too.
Use the combination of link and cluster visualizations to perform this analysis. The machine learning capability of the cluster visualization to identify clusters and potential issues, and the ability of link visualization to group the log records based on the selection of fields are combined to narrow down your analysis to small anomaly groups or potential issues.
You can refine your query and be specific about the output required on the bubble chart. The analysis generates clusters that are grouped based on your selection of the field for analysis. You can investigate the anomalies further to arrive at conclusive findings.
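A hedged sketch of combining the two techniques, assuming cluster() can be used as a grouping function of the link command as described above:
* | link cluster()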
Issues
Select the baseline time range that best represents the typical set of logs that your system generates. Select the time range for analysis that contains the logs that you want to analyze.
The output is the set of new issues identified in your logs in
the selected time range which are not present in the baseline time
range.
Additionally, the visualization displays New Outliers and
summarizes the number of log records used in the analysis, total
number of unique clusters identified, and the number of log sources
in which issues were detected.
The Issues visualization uses the clustercompare
utility to group the logs in the specified time ranges, remove the
common clusters, and then generate a unique set of clusters from which
the new issues are identified.
This visualization is ideal if you have a select baseline set of logs
that you want to compare against other logs to be able to detect new
issues.
For the baseline time range, select the time range that captures the
entire cycle of log generation. A longer baseline range may result in
a longer query run time.
Select the time range for analysis such that the query runs quickly
and issues are easy to identify.
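A minimal sketch of the underlying query, assuming the clustercompare command is applied to the search results while the baseline and analysis time ranges are chosen in the time selector as described above; the exact parameters, if any, may differ in your release:
* | clustercompare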