The Insights reports offer developer-oriented analytics that pinpoint issues with skills so that you can address them before they cause problems for your users.

You can track metrics at both the chat session (or user session) level and at the conversation level. A chat session begins when a user contacts a skill and ends either when the user closes the chat window or when the chat session times out after a period of inactivity. A chat session can contain multiple conversations. You can toggle between conversation and session reporting using the Metric filter in the Overview report.
Description of skill_session_conversation_toggle.png follows


Session metrics do not apply to Q&A skills.

Chat Session Insights

Insights tallies the total number of chat sessions that were initiated for the skill and then breaks this total down by:
  • Ended Sessions – The number of chat sessions that ended explicitly by users closing the chat window, or that have expired after the session expiration specified by the channel configuration. Any in-progress chat sessions will be expired after the release of 21.12.

    Chat Sessions initiated through the skill tester are expired after 24 hours of inactivity. Currently, the functionality for ending a session by closing the chat window is supported by the Oracle Digital Assistant Native Client SDK for Web.
  • Active Sessions – The chat sessions that remain active because the chat window remains open or because they haven't yet timed out.
  • Average User Responses per Session – The total number of user responses averaged across all sessions initiated with the skill. A response is counted each time a user interacts with the skill by asking a question or replying to a skill message.
  • Average Duration – The amount of time that users remained connected to this skill, averaged across all sessions.
    Description of skills_session_metrics.png follows

  • Session Trends – A comparison of the active, ended, and initiated chat sessions presented in two different views:
    • As a donut chart, which contrasts the total number of sessions that have been initiated against the sessions that have ended or remain active. You can find out the actual count by clicking the arcs.
    • As a trend line that plots the count of active, ended, and initiated sessions against dates.
      Description of skill_session_trends_pie_chart.png follows

  • Channel usage breakdown – To find consumption data about the channels through which users initiated sessions with this skill, compare the arcs of the chart and hover over them to get the actual total.
    Description of skill_session_channels.png follows
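The per-session averages described above can be sketched as simple aggregates over session records. This is a minimal illustration only; the record fields are assumed for the example and are not an Oracle Digital Assistant API:

```python
# Sketch: compute Average User Responses per Session and Average Duration
# from a list of session records (field names are illustrative).
def session_averages(sessions):
    total = len(sessions)
    if total == 0:
        return 0.0, 0.0
    avg_responses = sum(s["user_responses"] for s in sessions) / total
    avg_duration = sum(s["duration_seconds"] for s in sessions) / total
    return avg_responses, avg_duration

sessions = [
    {"user_responses": 4, "duration_seconds": 120},
    {"user_responses": 2, "duration_seconds": 60},
]
print(session_averages(sessions))  # (3.0, 90.0)
```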


The Skills filter is disabled for sessions reporting.

Conversation Insights for Skills

The conversation reports for skills, which track voice and text conversations by time period and by channel, enable you to identify execution paths, determine the accuracy of your intent resolutions, and access entire conversation transcripts. Voice Insights are tracked for skills routed to chat clients that have been configured for voice recognition and are running on Version 20.8 or higher of the Oracle Web, iOS, or Android SDKs.

Report Types

  • Overview – Use this dashboard to quickly find out the total number of voice and text conversations by channel and by time period. The report's metrics break this total down by the number of complete, incomplete, and in-progress conversations. In addition, this report tells you how the skill completed, or failed to complete, conversations by ranking the usage of the skill's transactional and answer intents in bar charts and word clouds.
  • Custom Metrics – Enables you to measure the custom dimensions that have been applied to the skill.
  • Intents – Provides intent-specific data and information for the execution metrics (states, conversation duration, and most- and least-popular paths).
  • Paths – Shows a visual representation of the conversation flow for an intent.
  • Conversations – Displays the actual transcript of the skill-user dialog, viewed in the context of the dialog flow and the chat window.
  • Retrainer – Where you use the live data and obtained insights to improve your skill through moderated self-learning.
  • Export – Lets you download a CSV file of the Insights data collected by Oracle Digital Assistant. You can create a custom Insights report from the CSV.

Review the Summary Metrics and Graphs

The Overview report's metrics, graphs, charts, and word clouds depict overall usage. When the skill has handled both text and voice conversations, the default view of this dashboard includes both (the rendering enabled by the All option). Otherwise, the default is text only or voice only.
Description of select_overview_mode.png follows

You can adjust this view by toggling between the Voice and Text modes, or you can compare the two by enabling Compare text and voice conversations.
Description of compare_text_voice.png follows

When you select Text, the report displays a set of common metrics. When you select Voice, the report includes additional voice-specific metrics. These metrics only apply to voice conversations, so they do not appear when you choose Compare text and voice conversations.

The Mode options depend on the presence of voice or text messages. If there are only text messages, for example, then only the Text option appears.
Common Metrics

The Overview report includes the following KPIs for both text and voice conversations:
  • Total number of conversations – The total number of conversations, which consists of completed, incomplete, and in-progress conversations. Regardless of status, a conversation can consist of one or more dialog turns. Each turn is a single exchange between the user and the skill.

    Conversations are not the same as metered requests. To find out more about metering, refer to Oracle PaaS and IaaS Universal Credits Service Descriptions.
  • Completed conversations – Conversations that have ended by answering a user's query successfully. Conversations are counted as complete when the traversal through the dialog flow ends with a return transition or at a state with the insightsEndConversation property.
  • Incomplete conversations – Conversations that users didn't complete, because they abandoned the skill, or couldn't complete it because of system-level errors, timeouts, or infinite loops.
  • In progress conversations – "In-flight" conversations (conversations that have neither completed nor timed out). This metric tracks multi-turn conversations. An in-progress conversation becomes a timeout after its session expires.
  • Average time spent on conversations – The average length for all of the skill’s conversations.
  • Total number of users and Number of unique users – User base metrics that indicate how many users a skill has and how many of these users are returning users.

Description of common_metrics.png follows
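In a YAML dialog flow, for example, a completed conversation can be marked by ending at a return transition. This is a minimal sketch; the state name is illustrative:

```yaml
  # Ending traversal with a return transition counts the conversation
  # as complete. Alternatively, a state can set the insightsEndConversation
  # property to mark the end of a conversation for Insights purposes.
  thankYou:
    component: "System.Output"
    properties:
      text: "Thanks, your order is on its way!"
    transitions:
      return: "done"
```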

Voice Metrics

Any conversation that begins with a voice interaction is considered a voice conversation. Any conversation that started in voice but was completed in text is considered a switched conversation. All other conversations are considered text. In addition to the standard metrics, the Overview report includes the following metrics that are specific to voice and switched conversations.

These metrics are for informational purposes only; you cannot act upon them.
To view these metrics, disable Compare text and voice conversations and select either All or Voice as the mode.
  • Average time spent on conversations – The average length of time of the voice conversations.
  • Average Real Time Factor (RTF) – The ratio of the CPU time taken to process the audio input to the duration of that audio. For example, if it takes one second of CPU time to process one second of audio, then the RTF is 1 (1/1). If it takes 500 milliseconds to process one second of audio, the RTF is 0.5. Ideally, RTF should be below 1 to ensure that the processing does not lag behind the audio input. If the RTF is above 1, contact Oracle Support.
  • Average Voice Latency – The delay, in milliseconds, between detecting the end of the utterance and the generation of the final result (or transcription). If you observe latency, contact Oracle Support.
  • Average Audio Time – The average duration, in seconds, for all voice conversations.
  • Switched Conversations – The percentage of the skill's conversations that began with voice commands, but needed to be switched to text to complete the interaction. This metric indicates that there were multiple execution paths involved in switching from voice to text.
    Description of voice_metrics.png follows
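The RTF computation described above reduces to a simple ratio, sketched here for illustration:

```python
# Real Time Factor: CPU time spent processing the audio divided by
# the duration of that audio. Values below 1 mean processing keeps
# pace with (or outpaces) the incoming audio.
def real_time_factor(cpu_seconds, audio_seconds):
    return cpu_seconds / audio_seconds

print(real_time_factor(1.0, 1.0))  # 1.0 (keeping pace with the audio)
print(real_time_factor(0.5, 1.0))  # 0.5 (processing faster than real time)
```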

Incomplete Conversation Breakdown
If there are any incomplete conversations during the selected period, the total number is broken down by the following error categories:
  • Timeouts – Timeouts are triggered when an in-progress conversation is idle for more than an hour, causing the session to expire.
  • System-Handled Errors – Errors that are handled by the system, not the skill. These errors occur when the dialog flow definition is not equipped with error handling, either globally in the defaultTransitions node or at the state level with error transitions.
  • Infinite Loop – Infinite loops can occur because of flaws in the dialog flow definition, such as incorrectly defined transitions.
  • Canceled - The number of times that users exited a skill by explicitly canceling the conversation.

Description of incomplete_conversation_count.png follows

By clicking an error category in the table, or one of the arcs in the graph, you can drill down to the Conversations report to see these errors in the context of incomplete conversations. When you access the Conversations report from here, its Outcome and Errors filters are set to Incomplete and the selected error category. For example, if you click Infinite Loop, the Conversations report is filtered by Incomplete and Infinite Loop. The report's Intents filter is set to Show All and the Sort by field is set to Latest.

Description of incomplete_conversations_report.png follows

User Metrics
You can find out the number of users a skill has for a selected point in time through the following metrics. You can compare them to the running total shown in the Total number of conversations metric while filtering the report by channel and time period. For live agent integrations, you can weigh the number of unique users who were transferred to an agent against a total conversation count that includes live agent transfers and skill-handled conversations.
  • Number of users – A running total of all types of users who have interacted with the skill: users with channel-assigned IDs that persist across sessions (the unique users), and users whose automatically assigned IDs last for only one session.
  • Number of unique users – The number of users who have accessed the skill as identified by their unique user IDs. Each channel has a different method of assigning an ID to a user: users chatting with the skill through the Web channel are identified by the value defined for the userId field, for example. The Skill Tester's test channel assigns you a new user ID each time you end a chat session by clicking Reset.
    Once assigned, these unique IDs persist across chat sessions so that the unique user count tallied by this metric does not increase when a user revisits the skill. The count only increases when another user assigned with a unique ID is added to the user pool.


    Because the user IDs are only unique within a channel (a user with identical IDs on two different channels will be counted as two users, not one), you can get a better idea of the user base by filtering the report by channel.
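Because IDs are unique only within a channel, the unique-user tally effectively counts (channel, user ID) pairs. A minimal sketch of that counting rule, with illustrative field names:

```python
# Sketch: the same user ID seen on two different channels counts as
# two unique users; a revisit on the same channel does not add to the count.
def unique_user_count(events):
    return len({(e["channel"], e["user_id"]) for e in events})

events = [
    {"channel": "web", "user_id": "u1"},
    {"channel": "web", "user_id": "u1"},     # revisit: not counted again
    {"channel": "twilio", "user_id": "u1"},  # same ID, different channel
]
print(unique_user_count(events))  # 2
```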
Enable New User Tracking
To track users who have never before interacted with a skill or digital assistant, switch on New User Tracking in Settings > Configuration. Before you switch this feature on for a skill, make sure that the channels routed to it assign some type of user ID. Otherwise, leave this feature switched off (its default mode). Whenever channels don't provide user IDs, Digital Assistant assigns a new user ID to each chat session. Enabling this feature when these types of channels are in use skews the reporting because new users are added for each new chat session and, consequently, the user table becomes bloated with new entries. The new user data does not get purged from storage automatically, so you need to purge it using the Oracle Digital Assistant API: include "purgeUserData": true in the payload of the Start Export Task POST request.
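For reference, the Start Export Task request body would carry the purge flag like this (a minimal fragment; any other payload fields are omitted here):

```json
{
  "purgeUserData": true
}
```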

Collection of new user data begins only from the date this feature shipped, with Release 23.10.
Review Conversation Trends Insights
The Conversation Trends chart plots the following for transactional intents (including agent transfer intents) and answer intents:
  • Completed – The conversations that users have successfully completed. These conversations include the ones where traversal through the dialog flow ended with the triggering of a return action, or ended at a state with the insightsEndConversation property.
  • Incomplete – Conversations that users didn't complete, because they abandoned the skill, or couldn't complete because of system-level errors, timeouts, or flaws in the skill's design.
  • In Progress – "In-flight" conversations (conversations that have not yet completed nor timed out). This metric tracks multi-turn conversations.

Description of insights_trends_overview.png follows

View Intent Usage
The Intents bar chart enables you to spot not only the transactional and answer intents that completed conversations, but also the ones that caused incomplete conversations. You can also use this chart to find out if the overall usage of these intents bears out your use case. For example, does the number of completed conversations for an intent that serves a secondary purpose outpace the number of completed conversations for your primary intent? To put this in more practical terms, has your pizza ordering skill become a "file complaint" skill that routes most users to a live agent?

Description of insights_overview_stacked_bar.png follows


Not all conversations resolve to an intent. When No Intent displays in the Intents bar chart and word cloud, it indicates that the intent was not resolved from user input, but instead through a transition action, a skill-initiated conversation, or routing from a digital assistant.

You can filter the Intents bar chart and the word cloud using the bar chart's All Intents, Answer Intents, and Transaction Intents options.
Description of all_intents.png follows

These options enable you to quickly break down usage. For example, for mixed skills – ones that have both transactional and answer intents – you can view usage for these two types of intents using the Answer Intents and Transaction Intents options.
Description of transactional_intents.png follows

The key phrases rendered in the word cloud reflect the selected option. For example, only the key phrases associated with answer intents display when you select Answer Intents.
Description of answer_intents.png follows

Review Intents and Retrain Using Key Phrase Clouds
The Most Popular Intents word cloud provides a companion view to the Intents bar chart by displaying the number of completed and incomplete conversations for an intent. It weights the most frequently invoked intents by size and by color. The size represents the number of invocations for the given period.
Description of popular_intent_phrase_cloud.png follows

The color represents the level of success for the intent resolution:
  • Green represents a high average of resolving requests at, or exceeding, the Confidence Win Margin threshold within the given period.
  • Yellow represents intent resolutions that, on average, don't meet the Confidence Win Margin threshold within the given period. This color is a good indication that the intent needs retraining.
  • Red is reserved for unresolvedIntent. This is the collection of user requests that couldn't be matched to any intent but could potentially be incorporated into the corpus.
The Most Popular Intents word cloud is the gateway to more detailed views of how the intents resolve user messages. The topics that follow describe how you can drill down from the Most Popular Intents word cloud to find out more about usage, user interactions, and retraining.

Beyond that, it gives you a more granular view of intent usage through key phrases, which are representations of actual user input, and, for English-language phrases (the behavior differs when non-English phrases are resolved to an intent), access to the Retrainer.

Review Key Phrases

By clicking an intent, you can drill down to a set of key phrases. These phrases are abstractions of the original user message that preserve its original intent. For example, the key phrase cancel my order is rendered from the original message, I want to cancel my order. Similar messages can be grouped within a single key phrase. The phrases I want to cancel my order, can you cancel my order, and cancel my order please can be grouped within the cancel my order key phrase, for example. Like the intents, size represents the prominence for the time period in question and color reflects the confidence level.
Description of key_phrases_for_intent.png follows

You can see the actual user message (or the messages grouped within a key phrase) within the context of a conversation when you click a phrase and then choose View Conversations from the context menu.
Description of view_conversations_option.png follows

This option opens the Conversations Report.
Description of key_phrases_conversation_report.png follows

Anonymized values display in the phrase cloud when you enable PII Anonymization.
Description of pii_skill_phrase_cloud.png follows

Retrain from the Word Cloud
In addition to viewing the message represented by the phrase in context, you can also add the message (or the messages grouped within a key phrase) to the training corpus by clicking Retrain.
Description of unresolved_phrases_with_menu.png follows

This option opens the Retrainer, where you can add the actual phrase to the training corpus.

Description of unresolved_phrase_retrainer.png follows

Review Native Language Phrases

The behavior of the key phrase cloud differs for skills with native language support in that you can't access the Retrainer for non-English phrases. When phrases in different languages have been resolved to an intent, languages, not key phrases, display in the cloud when you click an intent. For example, if French and English display after you click unresolvedIntent, then that means that there are phrases in both English and French that could not be resolved to any intent.
Description of ml_phrase_cloud.png follows

If English is among the languages, then you can drill down to the key phrase cloud by clicking English. From the key phrase cloud, you can use the context menu's View Conversations and Retrain options to drill down to the Conversations report and the Retrainer. But when you drill down from a non-English language, you drill down to the Conversations report, filtered by the intent and language. There is no direct access to the Retrainer. So going back to the unresolvedIntent example, if you clicked English, you would drill down to the key phrase cloud. If you clicked French, you'd drill down to the Conversations report, filtered by unresolvedIntent and French.
Description of ml_conversation_report.png follows

If you want to incorporate or reassign a phrase after reviewing it within the context of the conversation, you'll have to do so directly from the Retrainer by filtering on the intent and the language (and any other criteria).

Review Language Usage

For a multi-lingual skill, you can compare the usage of its supported languages through the segments of the Languages chart. Each segment represents a language currently in use.
Description of languages_chart_overview_skill.png follows

If you want to review the conversations represented by a language in the chart, you can click either a segment or the legend to drill down to the Conversations report, which is filtered by the selected language.
Description of conversations_report_filtered_by_language.png follows

Review User Feedback and Ratings
The User Rating donut chart and User Feedback word cloud track the direct feedback and scores collected by the System.Feedback component. When the dialog transitions to a System.Feedback state, the skill presents users with a rating system and, optionally, the ability to provide feedback. By default, users can rate their interaction with the skill by choosing a rating from one to five. For ODA Version 21.10 and higher, the feedback component is, by default, a star rating system. For prior versions, the feedback component displays as a list.
Description of feedback_rating_in_chat_widget.png follows

The average customer satisfaction score, which is proportional to the number of conversations for each of the ratings, is rendered at the center of the donut chart. The individual per-conversation totals for each rating on the range are graphed as arcs of the User Rating donut chart, which vary in length according to occurrence. Clicking one of these arcs opens the Conversations report filtered by the score.
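The conversation-weighted average at the center of the donut can be sketched as follows (an illustration of the arithmetic only, not an ODA API):

```python
# Sketch: average customer satisfaction score as the conversation-weighted
# mean of the 1-5 ratings. counts maps rating -> number of conversations.
def average_rating(counts):
    total = sum(counts.values())
    return sum(rating * n for rating, n in counts.items()) / total

print(average_rating({5: 6, 4: 2, 2: 1, 1: 1}))  # 4.1
```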

If your skill runs on a platform prior to Release 21.12, you need to switch Enable Masking off to see the user rating in the conversation transcript. To retain the actual user rating in the transcripts for skills running on Platform 21.12 and higher (where Enable Masking is deprecated), you need to delete the NUMBER entity from the list of entities treated as PII when enabling PII anonymization.

Description of user_rating_user_feedback_overview.png follows

By default, the System.Feedback component's threshold for determining a positive or negative reaction is set at two (Dissatisfied). If user feedback is enabled for the System.Feedback component, the User Feedback word cloud displays the user comments that accompany negative ratings and sizes them according to their frequency. You can see these comments in the context of the overall interaction by clicking the arc on the User Rating chart that represents a below-the-threshold rating (a one or two, per the component's default settings) to drill down to the Conversations report, which is filtered by the selected score.
Description of conversation_report_user_feedback.png follows

How to Add the Feedback Component to the Dialog Flow

To capture data for the User Rating graph and User Feedback word cloud, you need to add a sequence of states to your dialog flow. The first of these states is a System.Feedback state. In the following snippet, this state is called getUserFeedback. To add the template for this state, choose User Messaging > Solicit User Feedback > Ask User Feedback from the Add Component dialog.

In addition to the System.Feedback state, you need to add the states for its above, below, and cancel transitions. These states accommodate the high and low range of the rating as determined by the threshold property and also allow users to skip giving a rating altogether. In this snippet, these states display simple text messages, with one of them using a system variable, system.userFeedbackRating, in a value expression (${system.userFeedbackRating.value}) to output the user's rating. Each of these states terminates the conversation with a return: "done" transition.

The System.Feedback component does not allow out-of-order input, so users can't change their ratings or responses after they've sent them.
Your dialog flow can transition to a System.Feedback sequence whenever you want to gauge a user's reaction. This could be, as illustrated by the following snippet, after a user has either completed or canceled a transaction. When adding System.Feedback:
  • The flow must explicitly transition to the System.Feedback state using a next transition.
  • The final state in the transactional flow must include keepTurn: true.

    The hard-coded strings for output text in the following snippet are for illustrative purposes only. Per best practices, resource bundle references, not string literals, should be used for output text.
    # State names other than getUserFeedback are illustrative.
    orderCompleted:
      component: "System.CommonResponse"
      properties:
        keepTurn: true
        metadata:
          responseItems:
          - text: "Thank you for your order. Your pizza will arrive in 30 minutes!"
            type: "text"
          - type: "attachment"
            attachmentType: "image"
            name: "image"
            attachmentUrl: "${pizzaCardInfo.value[pizza.value.Type].image}"
        processUserMessage: false
      transitions:
        next: "getUserFeedback"
    orderCanceled:
      component: "System.Output"
      properties:
        text: "Your order is canceled"
        keepTurn: true
      transitions:
        next: "getUserFeedback"
    getUserFeedback:
      component: "System.Feedback"
      properties:
        threshold: 2
        maxRating: 5
        enableTextFeedback: true
      transitions:
        actions:
          above: "positiveFeedback"
          below: "negativeFeedback"
          cancel: "cancelFeedback"
    positiveFeedback:
      component: "System.Output"
      properties:
        text: "Thank you for your rating of ${system.userFeedbackRating.value}."
      transitions:
        return: "done"
    negativeFeedback:
      component: "System.Output"
      properties:
        text: "You entered ${system.userFeedbackText.value}. We appreciate your feedback."
      transitions:
        return: "done"
    cancelFeedback:
      component: "System.Output"
      properties:
        text: "Feedback canceled."
      transitions:
        return: "done"


You can customize the prompts output by the System.Feedback component by editing the Feedback-related resource bundles accessed through the Resource Bundle Configuration page, or by editing the systemComponent_Feedback_ keys in a resource bundle CSV file.
Using Custom Metrics to Measure Feedback
You can augment the feedback reporting with a high-level view of positive, negative, and skipped feedback by setting a System.SetCustomMetrics state for each of the states named by the System.Feedback state's above, below, and cancel transition actions.
Description of custom_metrics_feedback_type.png follows

The System.SetCustomMetrics states in the following snippet segment the feedback for the Feedback Type dimension in the Custom Metrics report.
    # State names are illustrative.
    getUserFeedback:
      component: "System.Feedback"
      properties:
        threshold: 2
        maxRating: 5
        enableTextFeedback: true
      transitions:
        actions:
          above: "PositiveFeedbackMetrics"
          below: "NegativeFeedbackMetrics"
          cancel: "CancelFeedbackMetrics"
    PositiveFeedbackMetrics:
      component: "System.SetCustomMetrics"
      properties:
        dimensions:
        - name: "Feedback Type"
          value: "Positive"
      transitions:
        next: "positiveFeedback"
    positiveFeedback:
      component: "System.Output"
      properties:
        text: "Thank you for the ${system.userFeedbackRating.value}-star rating."
      transitions:
        return: "done"
    NegativeFeedbackMetrics:
      component: "System.SetCustomMetrics"
      properties:
        dimensions:
        - name: "Feedback Type"
          value: "Negative"
      transitions:
        next: "negativeFeedback"
    negativeFeedback:
      component: "System.Output"
      properties:
        text: "Thank you for your feedback."
      transitions:
        return: "done"
    CancelFeedbackMetrics:
      component: "System.SetCustomMetrics"
      properties:
        dimensions:
        - name: "Feedback Type"
          value: "Canceled"
      transitions:
        next: "cancelFeedback"
    cancelFeedback:
      component: "System.Output"
      properties:
        text: "Maybe next time."
      transitions:
        return: "done"

Review Custom Metrics

The Custom Metrics report gives you added perspectives on the Insights data by tracking conversation data for skill-specific dimensions. The dimensions tracked by this report are created in the dialog flow definition using the System.SetCustomMetrics component. Using this component, you can create dimensions to explore business and development needs that are particular to your skill. For example, you can build dimensions that report the consumption of a product or service (the most requested pizza dough or the type of expense report that's most commonly filed), or track when the skill fails users by forcing them to exit or by passing them to live agents.

Description of custom_metrics_report_first_view.png follows

The Custom Metrics report graphs the dimensions defined on the conversation data as both a donut chart and a line trend graph. Each dimension has its own conversation total. This tally includes conversations that have completed, are incomplete, or are in progress. The dimension's values (or categories) are represented as segments on the donut chart and as points on the line trend chart. You can use these values to filter the report view (and also the custom metric data that you can download into a CSV file).

Description of filter_custom_metrics_by_dimensions.png follows

On the donut chart, the lengths of the arcs represent the occurrences of the dimension value as a percentage of the total number of conversations. The actual count for the dimension values is tracked by the line chart. Both the arcs and the trend lines are access points to the Conversations report. Clicking either opens the Conversations report filtered by the selected dimension value.
Description of from_custom_metric_to_conversation_report.png follows
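The arc sizing described above is a simple percentage share of the dimension's conversation total, sketched here for illustration (the dimension values are hypothetical):

```python
# Sketch: each donut arc shows a dimension value's share of the
# dimension's conversation total, as a percentage.
def value_shares(counts):
    total = sum(counts.values())
    return {value: 100 * n / total for value, n in counts.items()}

print(value_shares({"Thin": 30, "Thick": 10, "Stuffed": 10}))
# {'Thin': 60.0, 'Thick': 20.0, 'Stuffed': 20.0}
```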


Dimensions and categories appear in the report only when the conversations measured by them have occurred.
Instrument the Skill for Custom Metrics

To generate the Custom Metrics report, you need to define one or more dimensions using the System.SetCustomMetrics component (accessed by clicking Variables > Set Insights Custom Metrics in the Add Component dialog for YAML dialogs or Variables > Set Custom Metrics in Visual Flow Dialog mode).
Description of set_insights_component_dialog.png follows

If the Custom Metrics report has no data, then it's likely that no System.SetCustomMetrics states have been defined, or that the transitions to these states have not been set correctly.

You can add System.SetCustomMetrics states wherever you want to track an entity value or an activity within an execution flow.

You can define up to six dimensions for each skill.
Depending on the structure of the dialog flow definition and your use case, you can define multiple dimensions within a single System.SetCustomMetrics state, or with several System.SetCustomMetrics states throughout the dialog flow definition.
Creating Dimensions for Variable Values
You can track entity values by setting a transition to a System.SetCustomMetrics state from a state that sets the entity value that you want to track, or from a state that, like the setPizzaDough state, ends a series of value-setting states that you want to track. The setInsightsCustomMetrics state in the following snippet, for example, follows the value-setting resolveEntities and setPizzaDough states that resolve the items in a composite bag entity.
    resolveEntities:
      component: "System.ResolveEntities"
      properties:
        variable: "pizza"
        nlpResultVariable: "iResult"
        maxPrompts: 5
        headerText: "<#list system.entityToResolve.value.updatedEntities>I have updated the <#items as ent>${ent.description}<#sep> and </#items>. </#list>"
        cancelPolicy: "immediate"
      transitions:
        actions:
          cancel: "maxError"
        next: "setInsightsCustomMetrics"
    setInsightsCustomMetrics:
      component: "System.SetCustomMetrics"
      properties:
        dimensions:
        - name: "Dough Preference"
          value: "${pizza.value.PizzaDough}"
        - name: "Pizza Sizes Ordered"
          value: "${pizza.value.PizzaSize}"
        - name: "Pizza Types Ordered"
          value: "${pizza.value.PizzaTopping}"
      transitions:
        next: "showPizzaOrder"
The dimensions and filters in the Custom Metrics report are rendered from the name-value pairs defined for the dimensions attribute. The value properties' Apache FreeMarker expressions reference the bag items. In this case, the bag items are all value list entities, which means that their individual values can be applied as filters and data segments in the Custom Metrics report. The resulting report for this pizza skill breaks down pizza orders by size, type, and pizza dough, supplementing the metrics already reported for the Order Pizza intent.
Description of custom_metrics_example.png follows

Entity value-based dimensions are only recorded in the Custom Metrics report after an entity value has been set. When no value has been set, or when the value-setting state does not transition to a System.SetCustomMetrics state, the report's graphs note the missing data as <not set>. Depending on the composition and complexity of the dialog flow definition, the entity values that you want to track may not all be resolved within a single flow like the one illustrated in the above snippet. In these situations, you may not be able to define all the dimensions with a single System.SetCustomMetrics state. Instead, you'll need to add System.SetCustomMetrics states to different parts of the dialog flow definition.

Creating Dimensions that Track Skill Usage

In addition to dimensions based on variable values, you can create dimensions that track not only how users interact with the skill, but its overall effectiveness as well. You can, for example, add a dimension that tells you how often, and why, users are transferred to live agents.
Description of custom_metrics_agent_transfer_example.png follows

You can create dimensions like these, which inform you about the user experience, using text strings such as value: "No Agent Needed" in the following snippet, which illustrates how to create a single dimension (Agent Transfer) from a series of System.SetCustomMetrics states.
    intent:
      component: "System.Intent"
      properties:
        variable: "iResult"
        optionsPrompt: "Do you want to"
      transitions:
        actions:
          OrderPizza: "startOrderPizza"
          WelcomePizza: "startWelcome"
          LiveChat: "setInsightsCustomMetrics3"
          unresolvedIntent: "startUnresolved"


    setInsightsCustomMetrics:
      component: "System.SetCustomMetrics"
      properties:
        dimensions:
        - name: "Pizza Size"
          value: "${pizza.value.PizzaSize}"
        - name: "Pizza Type"
          value: "${pizza.value.PizzaTopping}"
        - name: "Pizza Crust"
          value: "${pizza.value.PizzaDough}"
        - name: "Agent Transfer"
          value: "No Agent Needed"
      transitions:
        next: "showPizzaOrder"

    startUnresolved:
      component: "System.Output"
      properties:
        text: "I didn't get that. Let me connect you with support."
        keepTurn: true
      transitions:
        next: "setInsightsCustomMetrics1"

    ### Transfer because of unresolved input ####
    setInsightsCustomMetrics1:
      component: "System.SetCustomMetrics"
      properties:
        dimensions:
        - name: "Agent Transfer"
          value: "Bad Input"
      transitions:
        next: "getAgent"

    maxError:
      component: "System.Output"
      properties:
        text: "OK, let's connect you with someone to help"
        keepTurn: true
      transitions:
        next: "setInsightsCustomMetrics2"

    ### Transfer because of Max Errors ####
    setInsightsCustomMetrics2:
      component: "System.SetCustomMetrics"
      properties:
        dimensions:
        - name: "Agent Transfer"
          value: "Max Errors"
      transitions:
        next: "getAgent"
    ### Transfer because of direct request ####
    setInsightsCustomMetrics3:
      component: "System.SetCustomMetrics"
      properties:
        dimensions:
        - name: "Agent Transfer"
          value: "Agent Requested"
      transitions:
        next: "getAgent"

    getAgent:
      component: "System.AgentInitiation"


Each System.SetCustomMetrics state defines a different category for the Agent Transfer dimension. The Custom Metrics report records data for these metrics when these states are included in an execution flow and, as illustrated by the above sample, are named in the transitions.

Custom Metric State    Agent Transfer Dimension Value    Use
setInsightsCustomMetrics    No Agent Needed    Reflects the number of successful conversations where orders were placed without assistance.
setInsightsCustomMetrics1    Bad Input    Reflects the number of conversations where unresolved input resulted in users getting transferred to a live agent.
setInsightsCustomMetrics2    Max Errors    Reflects the number of conversations where users were directed to live agents because they reached the maximum number of errors.
setInsightsCustomMetrics3    Agent Requested    Reflects the number of conversations where users requested a live agent.
Export Custom Metrics Data
Clicking Export downloads the custom metrics data in a CSV file that you can use for your own offline analysis and reporting. You can filter the data downloaded to the CSV by the dimension values. This downloaded CSV has the following fields.
Description of download_filters.png follows

Column Description
CREATED_ON The date of the data export.
USER_ID The ID of the skill user.
SESSION_ID An identifier for the current session. This is a random GUID, which makes this ID different from the USER_ID.
BOT_ID The skill ID which is assigned to the skill when it was created.
CUSTOM_METRICS A JSON array that contains an object for each custom metric dimension. name is a dimension name and value is the dimension value captured from the conversation. [{"name":"Custom Metric Name 1","value":"Custom Metric Value"},{"name":"Custom Metric Name 2","value":"Custom Metric Value"},...] For example: [{"name":"Pizza Size","value":"Large"},{"name":"Pizza Type","value":"Hot and Spicy"},{"name":"Pizza Crust","value":"regular"},{"name":"Agent Transfer","value":"No Agent Needed"}].
QUERY The user utterance or the skill response that contains a custom metric value.
CHOICES The menu choices in UI components.
COMPONENT The dialog component, System.SetCustomMetrics, that executes the custom metrics.
CHANNEL The channel that conducted the session.
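For offline analysis of the exported file, the CUSTOM_METRICS column can be unpacked with any JSON-aware tool. The following Python sketch, using an invented two-row sample, tallies how often each dimension name/value pair occurs; only the column names come from the table above, everything else is illustrative.

```python
import csv
import io
import json
from collections import Counter

def tally_dimensions(csv_text):
    """Count each (dimension name, value) pair across all exported rows."""
    counts = Counter()
    for row in csv.DictReader(io.StringIO(csv_text)):
        # CUSTOM_METRICS holds a JSON array of {"name": ..., "value": ...} objects.
        for metric in json.loads(row["CUSTOM_METRICS"]):
            counts[(metric["name"], metric["value"])] += 1
    return counts

# Invented sample rows, trimmed to the columns used here.
sample = (
    "CREATED_ON,CUSTOM_METRICS\n"
    '2023-01-05,"[{""name"":""Pizza Size"",""value"":""Large""},'
    '{""name"":""Agent Transfer"",""value"":""No Agent Needed""}]"\n'
    '2023-01-06,"[{""name"":""Pizza Size"",""value"":""Large""}]"\n'
)

counts = tally_dimensions(sample)
print(counts[("Pizza Size", "Large")])                # 2
print(counts[("Agent Transfer", "No Agent Needed")])  # 1
```

A tally like this reproduces, per dimension, the breakdowns that the Custom Metrics report renders online.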

Review Intents Insights

You can find out the total number of complete and incomplete conversations for each intent in the Overview report. Using the Intents report, you can find out how the user traffic flowed along the intents' execution paths and where it was blocked by malfunctioning states.

This report returns the intents defined for a skill over a given time period, so its contents may change to reflect the intents that have been added, renamed, or removed from the skill at various points in time.

Description of intents_report.png follows

Completed Paths
For completed conversations, the report tells you the number of execution paths that users traversed to complete these conversations with statistics on the time spent and the number of states visited.

Description of completed_paths.png follows

You can use these statistics as indicators of the user experience. For example, you can use this report to ascertain whether the time spent is appropriate to the task, or whether the shortest paths still result in an attenuated user experience, one that may encourage users to drop off. Could you, for example, usher a user more quickly through the skill by slotting values with composite bag entities instead of prompts and value-setting components?

For more context on completed conversations:
  • You can trace the execution path for a selected intent by clicking View Path, which opens the Paths report filtered by completed conversations for the intent. To improve focus on the execution paths, you can filter out the states that you're not interested in.

    Description of initial_completed_intent_path.png follows

  • You can read transcripts of the completed conversations for an intent by clicking View Conversations, which opens the Conversations report filtered by completed conversations for the intent.

    Description of completed_conversations_report_from_intents.png follows

Incomplete Paths
For the incomplete conversations, you can identify the states along the intent's execution path where these conversations ended using the Incomplete States horizontal bar chart. This chart, which renders for the transactional intents listed in the left navbar, plots the distribution of incomplete conversations by state, which can be a state defined in the dialog flow or an internal state that marks the end of a conversation, such as System.DefaultErrorHandler. Using this chart, you can find out whether a dialog flow state is a continual point of failure and why (errors, timeouts, or bad user input). This report doesn't show paths or velocity for unresolved input because they don't apply. Instead, the bar chart ranks each intent by the number of messages that either couldn't be resolved to any intent, or had the potential of getting resolved (meaning the system could guess an intent) but were prevented from doing so by low confidence scores.

The Incomplete States chart doesn't render for static intents (Answer Intents) because their outcomes are supported by the System.Intent component state alone, not by a series of states in a dialog flow definition.

Description of incomplete_conversations_bar_chart.png follows
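Conceptually, the Incomplete States chart is a count of terminal states across incomplete conversations. The following Python sketch illustrates that tally with invented conversation records (this is not an Insights API):

```python
from collections import Counter

# Hypothetical incomplete conversations: each records the state where it ended.
incomplete = [
    {"intent": "OrderPizza", "last_state": "System.DefaultErrorHandler"},
    {"intent": "OrderPizza", "last_state": "resolvePizza"},
    {"intent": "OrderPizza", "last_state": "System.DefaultErrorHandler"},
]

# Distribution of incomplete conversations by terminal state, as in the chart.
distribution = Counter(c["last_state"] for c in incomplete)
print(distribution.most_common())
# [('System.DefaultErrorHandler', 2), ('resolvePizza', 1)]
```

A state that dominates this distribution, like System.DefaultErrorHandler here, is the kind of continual point of failure the chart is meant to surface.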

For more context on the incomplete conversations for an intent:
  • Clicking View Path opens the Paths report filtered for incomplete conversations for the selected intent. The terminal states on this path may include states defined in the dialog or an internal state that marks the end of a conversation, such as System.EndSession, System.ExpiredSession, System.MaxStatesExceededHandler, and System.DefaultErrorHandler.
    Description of incomplete_conversations_path_from_intent.png follows

  • You can access transcripts of conversations that led to the failure by clicking View Conversations. This option opens the Conversations report filtered for incomplete conversations for the selected intent. You can narrow the results further by applying a filter. For example, you can filter the report by error conditions.

    Description of incomplete_conversations_from_intents_report.png follows


In addition to the duration and routes for task-oriented intents, the Intents report also returns the messages that couldn’t get resolved. To see these messages, click unresolvedIntent in the left navbar. Clicking an intent in the Closest Predictions bar chart updates the Unresolved Message window with the unresolved messages for that intent sorted by a probability score.

Description of insights_unresolved_intents.png follows

You can view the path and conversations for these unresolved messages by clicking View Path and View Conversations, but you can also access the unresolved messages through the Retrainer report, where you can evaluate them as possible additions to the training data. Clicking Retrain opens the Retrainer report filtered by unresolved messages.

Description of retrainer_accessed_from_intents_report.png follows

Review Path Insights

The Paths report lets you find out how many conversations flowed through the intents' execution paths for any given period. This report renders a path that's similar to a transit map, where the stops can represent intents, the states defined in the dialog flow definition, and the internal states that mark the beginning and end of every conversation that is not classified as in-progress.
Description of path_report.png follows

You can scroll through this path to see where the values slotted from the user input propelled the conversation forward, and where it stalled because of incorrect user input, timeouts resulting from no user input, system errors, or other problems. While the last stop in a completed path is green, for incomplete paths where these problems have arisen, it’s red. Through this report, you can find out where the number of conversations remained constant through each state and pinpoint where the conversations branched because of values getting set (or not set), or dead-ended because of some other problem like a malfunctioning custom component or a timeout.

Query the Paths Report
The Paths report renders an intent execution path according to your query parameters. You can query this report for both the complete and incomplete execution paths for any or all intents, set the length of the path by choosing a final state, and isolate portions of the execution paths by excluding states that are of secondary importance. For example, you may consider states that set variables or instrument the skill for custom metrics as "filler" states that detract from the focus of your investigation.
Description of query_intent_and_state.png follows

All of the execution flows render by default after you enter your query. The green Begin arrow This is an image of the Begin pathing icon. represents System.BeginSession, the system state that starts each conversation. The getIntent icon This is an image of the intent path icon. can represent different intents, depending on the filter. It can refer to a specific intent that you've chosen as a filter, or it can represent every intent defined for your skill when you filter the report by All (the default setting).
Description of path_with_state.png follows

For incomplete conversations, the path may conclude with an internal state Image of the path error icon such as System.ExpiredSession, System.MaxStatesExceededHandler, or System.DefaultErrorHandler that represents the error that terminated the conversation.


Use the Filter States filter to search for, and remove, the states that you're not interested in from the path rendering.
Clicking the final state opens the details panel, which displays statistics, errors, warnings and the final user messages.
Description of insights_last_state.png follows

The report displays Null Response for any customer message that's blank (or otherwise not in plain text) or contains unexpected input. For non-text responses that are postback actions, it displays the payload of the most recent action.
Clicking View Conversations opens the Conversations report queried by the path so that you can review the messages that concluded the conversation within the context of a transcript.

Description of conversations_report_from_paths_report.png follows

Scenario: Querying the Pathing Report

Looking at the Overview report for a financial skill, you notice that there is a sudden uptick in incomplete conversations. By adding up the values represented by the orange "incomplete" segments of the stacked bar charts, you deduce that conversations are failing on the execution paths for the skill's Send Money and Balances intents.

To investigate the intent failures further, you open the pathing report and enter your first query: filter for all intents that have an incomplete outcome. The path renders with two branches: one that begins with startPayments and ends with System.DefaultErrorHandler, and a second that starts with startBalances and also ends with System.DefaultErrorHandler This is an image of the SystemDefaultErrorHandler icon.. Clicking the final node in either path opens the details pane, which notes the number of errors and displays snippets of the user messages received by the skill before these errors occurred. To see these snippets in context, you then click View Conversations in the details panel to see the transcript. In all of the conversations, the skill was forced to respond with the Unexpected Error Prompt (Oops! I'm encountering a spot of trouble…) because system errors prevented it from processing the user request.

To find out more about the states leading up to these errors (and their possible roles in causing these failures), you then refer to the dialog flow definition to identify the states that begin the execution paths for each of the intents.
    intent:
      component: "System.Intent"
      properties:
        variable: "iResult"
      transitions:
        actions:
          Balances: "startBalances"
          Transactions: "startTxns"
          "Send Money": "startPayments"
          "Track Spending": "startTrackSpending"
          Dispute: "setDate"
          unresolvedIntent: "unresolved"
These states (referenced as transition actions for the System.Intent component) are startBalances, startTxns, startPayments, startTrackSpending, and setDate.

Comparing the paths to the dialog flow definition, you notice that in both the startPayments and the startBalances flows, the last state rendered in the path precedes a state that uses a custom component. After checking the Components This is an image of the Components icon. page in the left navbar, you notice that the service has been disabled, preventing the skill from retrieving the account information needed to complete conversations.

Review the Skill Conversation Insights

Using the Conversations report, you can examine the actual transcripts of the conversations to review how the user input completed the intent-related paths, or why it didn’t. You can filter the conversations by channel, by mode (Voice, Text, All), and by time period.

You can review conversation transcripts by filtering this report by intents. You can add dimensions such as conversation length and outcome, which is noted as completed, incomplete, or in progress. To find out which error type contributed to incomplete conversations, you can filter Outcome by Incomplete, and then select one of the error categories (Timeouts, Infinite Loops, and System-Handled Errors) for the Errors filter. For conversations with messages that began as voice but ended up as text, you can also filter by Switched Conversations.

Description of conversations_filtered_by_voice.png follows
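The filters described above can be thought of as stacked predicates over conversation records. The following Python sketch, with invented records and field names that merely mirror the report's Outcome and Errors filters (this is not an Insights API), shows the combination of Outcome by Incomplete with an error category:

```python
# Invented conversation records; "outcome" and "error" mirror the report's filters.
conversations = [
    {"id": 1, "outcome": "completed", "error": None},
    {"id": 2, "outcome": "incomplete", "error": "Timeouts"},
    {"id": 3, "outcome": "incomplete", "error": "System-Handled Errors"},
]

def filter_conversations(records, outcome=None, error=None):
    """Apply the Outcome filter, then narrow further by error category."""
    result = records
    if outcome is not None:
        result = [r for r in result if r["outcome"] == outcome]
    if error is not None:
        result = [r for r in result if r["error"] == error]
    return result

matches = filter_conversations(conversations, outcome="incomplete", error="Timeouts")
print([r["id"] for r in matches])  # [2]
```

Each additional filter narrows the previous result, which is why combining Outcome with an error category isolates exactly the failing transcripts you want to read.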

View Conversation Transcripts

Clicking View Conversation opens the conversation in the context of a chat window. Clicking the bar chart icon displays the voice metrics for that interaction.
Description of view_conversation_window.png follows

View Voice Metrics
Clicking View Voice Metrics displays a subset of the voice metrics that are averaged across the entire conversation. To view these metrics broken down by the individual voice interactions, click the bar chart icon in the transcript view that's accessed by clicking View Conversations.

Description of voice_metrics_per_message.png follows

How the Insights Reports Handle return Transitions
For a single intent, the Conversations report lists the different conversations that have completed. However, complete can mean different things depending on the user message and the return transition, which ends the conversation and destroys the conversation context. For an OrderPizza intent, for example, the Conversations report might show two successfully completed conversations. Only one of them culminates in a completed order. The other conversation ends successfully as well, but instead of fulfilling an order, it handles incorrect user input.
    # Illustrative state name; the return transition ends the conversation.
    handleUnresolved:
      component: "System.Output"
      properties:
        text: "I can only order pizza for you today. Let me know what kind of pizza you'd like."
        keepTurn: false
      transitions:
        return: "startUnresolved"
You can find out the different outcomes for the same intent using the Final State filter in the Paths report.
How the Insights Reports Handle Empty Transitions

A skill throws an exception when the final state in a flow either lacks a transition, or uses an empty transition (transitions: {}). Insights considers these conversations as incomplete, even when they've handled a transaction successfully. In the paths, these final states get classified as System.DefaultErrorHandler.

PII Anonymization

User messages may contain Personally Identifiable Information (PII), information like first and last names, phone numbers, and e-mail addresses. To protect user privacy, but preserve the context of the message, you can anonymize the PII values with an equivalent value, an anonym, before they're persisted to the database. These anonyms are used consistently within a session. For example, all occurrences of "John Smith" in a conversation would be replaced by the anonym, "davis". In this case, davis, not John Smith, is stored in the database and appears throughout the export logs and the Insights reports, such as the Conversations report, the Retrainer, and the key word phrase cloud.
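The consistency rule (the same PII value always maps to the same anonym within a session) can be illustrated with a per-session lookup table. The pattern, messages, and anonym pool in this Python sketch are invented for illustration; the actual replacement mechanism is internal to the service.

```python
import re

# Hypothetical pool of replacement names; the real anonyms are chosen by the service.
ANONYM_POOL = ["davis", "miller", "garcia"]

def anonymize_session(messages, pii_pattern):
    """Replace each distinct PII value with the same anonym throughout a session."""
    mapping = {}
    def replace(match):
        value = match.group(0)
        if value not in mapping:
            mapping[value] = ANONYM_POOL[len(mapping) % len(ANONYM_POOL)]
        return mapping[value]
    return [re.sub(pii_pattern, replace, m) for m in messages]

msgs = ["John Smith wants a pizza", "Deliver to John Smith"]
print(anonymize_session(msgs, r"John Smith"))
# ['davis wants a pizza', 'Deliver to davis']
```

Because the mapping is kept for the whole session, both occurrences of "John Smith" become "davis", which preserves the message context while hiding the actual value.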

CURRENCY and DATE_TIME values are not anonymized, even though they contain numbers. Also, the "one" in the default prompt for a composite bag entity ("Please select one value for...") gets anonymized as a numeric value. To avoid this, add a custom prompt ("Select a value for...", for example).

Description of anonyms_in_client_ui_logs.png follows

You can anonymize the values recognized by the following system entities:
  • URL

Enable Masking is deprecated in Release 21.12. Use PII anonymization instead to mask numeric values in the Insights reports and export logs. You cannot apply anonymization to conversations logged prior to the 21.12 release.
Enable PII Anonymization
  1. Click Settings > General.
  2. Switch on Enable PII Anonymization.
  3. Click Add Entity to select the entity values that you want to anonymize in the Insights reports and the logs.

    Anonymized values are persisted to the database only after you enable anonymization for PII values for the selected entities. They are not applied to prior conversations. Depending on the date range selected for the Insights reports or export files, the PII values might appear in both their actual and anonymized forms. You can apply anonymization to any non-anonymized PII value (including those in conversations that occurred before you enabled anonymization in the skill or digital assistant settings) when you create an export task. These anonyms apply only to the exported file and are not persisted in the database.
    If you want to discontinue the anonymization for a PII value, or if you don't want an anonym to be used at all, select the corresponding entity and then click Delete Entity. Once you delete an entity, the actual PII value appears throughout the Insights reports for subsequent conversations. Its anonymized form, however, will remain for prior conversations.

    Anonymization is permanent (the export task-applied anonymization notwithstanding). You can't recover PII values after you enable anonymization.

Description of pii_settings_configuration.png follows

PII Anonymization in the Export File

Anonymization in an exported Insights file depends on whether (and when) you've enabled PII anonymization for the skill or digital assistant in Settings.

When you enable PII anonymization settings for the skill or digital assistant:
  • The PII values recognized for the selected entities are replaced with anonyms. These anonyms get persisted to the database and replace the PII values in the logs and Insights reports. This anonymization is applied to the conversations that occur after – not prior to – your enabling of anonymization in Settings.
  • The Enable PII anonymization for the file option for the export task is enabled by default to ensure that the PII values recognized for the entities selected in Settings are applied to conversations that occurred before PII anonymization had been set. The anonyms applied during the export to conversations that predate the PII anonymization exist in the export file only. The original PII values remain in the database, the Insights logs, and the Insights reports.
  • If you switch off Enable PII anonymization for the file, only the PII values recognized for the entities that were selected in Settings will be anonymized. The log files will contain the anonyms for conversations that occurred after anonymization settings have been enabled for the skill or digital assistant. Prior conversations will appear as original, unmodified utterances with their PII values intact. Consequently, the export file may include both anonymized and non-anonymized conversations if part of the export task's date range predates anonymization.

    If your export task includes anonymized conversations that occurred prior to Release 22.04, the anonyms applied to the pre-22.04 conversations will be changed, or re-anonymized, in the export files when you select Enable PII anonymization for the file for the export task. The anonyms in the exported file will not match either the anonyms in pre-22.04 export files or the anonyms that appear in the Insights reports.
When you disable, or don't configure, PII anonymization settings for a skill or digital assistant:
  • The Enable PII anonymization for the file option will be disabled by default for the export task so that the exported file will contain all the original unmodified utterances, including the PII values.
  • If you select Enable PII anonymization for the file, the PII values will be anonymized in the exported file only for the default entities, PERSON, EMAIL, URL, and NUMBER. The PII values will remain in the database, logs, and Insights reports.

Model the Dialog Flow

By default, Insights tracks all of the states in a conversation, but you may not want to include all of them in the reports. To focus on certain transactions, or exclude the states from the reporting entirely, you can model the dialog flow using the insightsInclude and insightsEndConversation properties. These properties, which you can add to any component, provide a finer level of control over the Insights reporting.

These properties are only supported on Oracle Digital Assistant instances provisioned on Oracle Cloud Infrastructure (sometimes referred to as the Generation 2 cloud infrastructure). They are not supported on instances provisioned on the Oracle Cloud Platform (as are all version 19.4.1 instances of Oracle Digital Assistant).
Mark the End of a Conversation
Instead of depending on the return transition to mark the end of a complete conversation, you can instead mark where you want to stop recording the conversation for insights reporting using the insightsEndConversation property. This property enables you to focus only on the aspects of the dialog flow that you're interested in. For example, you may only need to record a conversation to the point where a customer cancels an order, but no further (no subsequent confirmation messages or options that branch the conversation). By default, this property is set to false, meaning that Insights continues recording until a return transition, or until the insightsEndConversation property is set to true (insightsEndConversation: true).
    cancelOrder:
      component: "System.Output"
      properties:
        text: "Your order is canceled."
        insightsEndConversation: true
      transitions:
        next: "intent"
Because this flag changes how the insights reporting views a completed conversation, conversation counts tallied after the introduction of this flag in the dialog flow may not be comparable to the conversation counts for previous versions of the skill.

The insightsEndConversation marker is not used in the Visual Flow Designer because the modular flows already delineate the conversation. A conversation ends when the last state of a top-level flow has been reached.
Streamline the Data Collected by Insights
Use the insightsInclude property to exclude states that you consider extraneous from being recorded in the reports. To exclude a state from the Insights reporting, set this property to false:

    setCrust:
      component: "System.SetVariable"
      properties:
        variable: "crust"
        value: "${iResult.value.entityMatches['PizzaSize'][0]}"
        insightsInclude: false
This property is specific to Insights reporting only. It does not prevent states from being rendered in the Tester.

insightsInclude is not supported by the Visual Flow Designer.
Use Cases for Insights Markers

These typical use cases illustrate the best practices for making the reports easier to read by adding the conversation marker properties to the dialog flow.

Use Case 1: You Want to Separate Conversations by Intents or Transitions

Use the insightsEndConversation: true property to view the user interactions that occur within a single chat session as separate conversations. You can, for example, apply this property to a state that begins the execution path for a specific intent, yet branches the dialog flow.

The CrcPizzaBot skill's ShowMenu state, with its pizza, pasta, and textReceived transitions is such a state:
    ShowMenu:
      component: "System.CommonResponse"
      properties:
        processUserMessage: true
        metadata:
          responseItems:
          - type: "text"
            text: "Hello ${profile.firstName}, this is our menu today:"
            footerText: "${(textOnly.value=='true')?then('Enter number to make your choice','')}"
            name: "hello"
            separateBubbles: true
            actions:
            - label: "Pizzas"
              type: "postback"
              keyword: "${numberKeywords.value[0].keywords}"
              payload:
                action: "pizza"
              name: "Pizzas"
            - label: "Pastas"
              keyword: "${numberKeywords.value[1].keywords}"
              type: "postback"
              payload:
                action: "pasta"
              name: "Pastas"
      transitions:
        actions:
          pizza: "OrderPizza"
          pasta: "OrderPasta"
          textReceived: "Intent"
By adding the insightsEndConversation: true property to the ShowMenu state, you can break down the reporting by these transitions:
    ShowMenu:
      component: "System.CommonResponse"
      properties:
        processUserMessage: true
        insightsEndConversation: true
Because of the insightsEndConversation: true property, Insights considers any further interaction enabled by the pizza, pasta, or textReceived transitions to be a separate conversation. This means that two conversations, rather than one, are tallied in the Overview page's Conversations metric and, likewise, two separate entries are created in the Conversations report.

Keep in mind that conversation counts will be inconsistent with those tallied prior to adding this property.
The first entry, for the ShowMenu intent execution path, is where the conversation ends with the ShowMenu state.
Description of conversation_details_state_w_end_conversation.png follows

The second is the transition-specific entry that names an intent when the textReceived action has been triggered, or notes No Intent when there's no second intent in play. When you choose either Pizzas or Pastas from the list menu rendered for the ShowMenu state, the Conversation report contains a ShowMenu entry and a No Intent entry for the transition conversation because the user did not enter any text that needed to be resolved to an intent.
Description of no_intent_action_transition.png follows

However, when you trigger the textReceived transition by entering text, the Conversation report names the resolved intent (OrderPizza, OrderPasta).
Description of two_intents_textreceived_transition.png follows

Use Case 2: You Want to Exclude Supporting States from the Insights Pathing Reports
The states node of the CrcPizzaBot skill begins with a series of System.SetVariable states. Because these states are positioned at the start of the definition, they begin each path rendering when you haven't excluded them with the Filter States option. You may consider supporting states like these as clutter if your focus is instead on the transactional aspects of the path. You can simplify the path rendering manually using the Filter States menu, or by adding the insightsInclude: false property to the dialog flow definition.
Description of pathing_set_variable_included.png follows

You can add the insightsInclude: false property to any state that you don't wish to see in the Paths report.
    # State names are illustrative.
    setTextOnly:
      component: "System.SetVariable"
      properties:
        insightsInclude: false
        variable: "textOnly"
        value: "${(system.channelType=='webhook')?then('true','false')}"
    setAutoNumberPostbackActions:
      component: "System.SetVariable"
      properties:
        insightsInclude: false
        variable: "autoNumberPostbackActions"
        value: "${textOnly}"
    setCardsRangeStart:
      component: "System.SetVariable"
      properties:
        insightsInclude: false
        variable: "cardsRangeStart"
        value: 0
For the CrcPizzaBot skill, adding the insightsInclude: false property to each of the System.SetVariable states places the transactional states at the start of the path.
Description of pathing_set_variable_excluded.png follows


Adding the insightsInclude: false property not only changes how the paths are rendered, but also affects the value reported for the Average States metric.
Tutorial: Optimize Insights Reports with Conversation Markers

You can practice with conversation markers using the following tutorial: Optimize Insights Reports with Conversation Markers.

Apply the Retrainer

Customers can use different phrases to ask for the same request. When this user input can't be resolved to an intent (or is resolved to the wrong intent), you can direct it to the correct intent using the Retrainer. To help you out, the Retrainer suggests an intent for the user input. Because you're adding actual user input, you can improve the skill's performance with each new version.

Description of retrainer_filters.png follows

You can filter the conversation history using one or more of the following:
  • time period
  • language – For multi-lingual capability that's enabled through either native language support or translation services. By default, the report filters by the primary language.
  • intents – Filter by matching the names of the two top-ranking intents, and by using comparison operators for their resolution-related properties, confidence and Win Margin.
  • channels – Includes the Agent Channel that's created for Oracle Service Cloud integrations.
  • text or voice modes – Includes switched conversations.
The report returns the top two intents for each returned utterance along with the Win Margin that separates them and, through a horizontal bar chart, their contrasting confidence scores. Hovering over the bars reveals the actual scores.

Description of intent_score.png follows

The horizontal line that intersects with the chart marks where the score exceeded, or fell short of, the skill's confidence threshold.

Description of resolution_confidence_threshold.png follows

Update Intents with the Retrainer
There are some things to keep in mind when you add user messages to your training corpus:
  • You can only add user input to the training corpus that belongs to a draft version of a skill, not a published version.
  • You can’t add any user input that’s already present as an utterance in the training corpus, or that you have already added using the Retrainer.
To update a transactional intent or an answer intent using the Retrainer:
  1. Because you cannot update a published skill, you must create a draft version before you can add new data to the corpus.


    Click Compare All Versions This is an image of the Compare All Versions icon. or switch off the Show Only Latest toggle to access both the draft and published versions of the skill.
    If you're reviewing a published version of the skill, select the draft version of the skill.
    This is an image of the Select Version drop down menu.

  2. In the draft version of the skill, apply a filter, if needed, then click Search.
  3. Select the user message, then choose the target intent from the Select Intent menu. If your skill supports more than one native language, then you can add it to the language-appropriate training set by choosing from among the languages in the Select Language menu.


    You can add utterances to an intent on an individual basis, or you can select multiple utterances and then choose the target intent and, if needed, a language from the Add To menus located at the upper left of the table. If you want to add all of the returned requests to an intent, select Utterances (located at the upper right of the table) and then choose the intent and language from the Add To menu.
  4. Click Add Example.
  5. Retrain the skill.
  6. Republish the skill.
  7. Update the digital assistant with the new skill.
  8. Monitor the Overview report for changes to the metrics over time and also compare different versions of the skill to find out if new versions have actually added to the skill's overall success. Repeating the retraining process improves the skill's responsiveness for each new version. For skills integrated with Oracle Service Cloud Chat, for example, retraining should result in a downward trend in escalations, which is indicated by a downward trend in the usage of agent handoff intents.
Moderated Self-Learning

By setting the Top Confidence filter below the confidence threshold set for the skill, or through the default filter, Intent Matches unresolvedIntent, you can update your training corpus using the confidence ranking made by the intent processing framework. For example, if the unresolvedIntent search returns "someone used my credit card," you can assign it to an intent called Dispute. This is moderated self-learning – enhancing the intent resolution while preserving the integrity of the skill.

For instance, the default search criteria for the report shows you the random user input that can’t be resolved because it’s inappropriate, off-topic, or contains misspellings. By referring to the bar chart, you can decide where to assign the user input: you can strengthen the skill’s handling of unresolved intents by assigning input that’s made up of gibberish, or you can add misspelled entries to the appropriate task-oriented intent (“send moneey” to a Send Money intent, for example). If your skill has a Welcome intent, you can assign irreverent, off-topic messages to it, so that your skill returns a rejoinder like, “I don’t know about that, but I can help you order some flowers.”

Support for Translation Services

If your skill uses a translation service, then the Retrainer displays the user messages in the target language. However, the Retrainer does not add translated messages to the training corpus. It instead adds them in English, the accepted language of the training model. Clicking This is an image of the show translation icon. reveals the English version that can potentially be added to the corpus. For example, clicking this icon for contester (French), reveals dispute (English).

Create Data Manufacturing Jobs

Instead of assigning utterances to intents yourself, you can crowdsource this task by creating Intent Annotation and Intent Validation jobs. You don't need to compile the conversation logs into a CSV to create these jobs. Instead, you click Create then Data Manufacturing Job.
An image of the Create option in the Retrainer report.
You then choose the job type for the user input that's filtered in the Retrainer report. For example, you can create an Intent Annotation job from a report filtered by the top intent matching unresolvedIntent, or you can create an Intent Validation job from a report filtered on utterances that have matched an intent.
Description of retrainer_data_manufacturing_job_dialog.png follows


Using the Select utterances options, you can choose all of the results returned by the filter applied to the Retrainer for the data manufacturing job, or create a job from a subset of these results which can include a random sampling of utterances. Selecting Exclude utterances from previous jobs means that utterances selected for a previous data manufacturing job will no longer be available for subsequent jobs: the utterances included in one Intent Annotation job, for example, won't be available for a later Intent Annotation job. Use this option when you're creating multiple jobs to review a large set of results.
After you create the job, it appears in the Data Manufacturing Jobs page, where you can distribute it to crowd workers by sharing the link.
Description of retrainer_job_in_data_manufacturing.png follows

Create a Test Suite

Similar to creating data manufacturing jobs from the results queried in the Retrainer report, you can also create test cases from the utterances returned by your query. You can add a suite of these test cases to the Utterance Tester by clicking Create, then Test Suite.
This is an image of the Test Suite option.
You can filter the utterances for the test suite using the Select utterances options in the Create Test Suite dialog. You can include all of the utterances returned by the filter applied to the Retrainer in the test suite, or a subset of these results which can include a random sampling of the utterances. Select Include language tag to ensure that the language that's associated with a test case remains the same throughout testing.
Description of create_test_suite_dialog_insights.png follows

You can access the completed test suite by clicking Go to Test Cases in the Utterance Tester.

Review Language Usage

For a multi-lingual skill, you can compare the usage of its supported languages through the segments of the Languages chart. Each segment represents a language currently in use.
Description of languages_chart_overview_skill.png follows

If you want to review the conversations represented by a language in the chart, you can click either a segment or the legend to drill down to the Conversations report, which is filtered by the selected language.
Description of conversations_report_filtered_by_language.png follows

Export Insights Data

The various Insights reports provide you with different perspectives, but if you need to view this data in another way, then you can create your own report from a CSV file of exported Insights data.

The CSVs contain fields for user and skill messages, component types, and states, which are described in The Export Log Fields. You can write a processing script to filter this content, or just use a spreadsheet app. Review the Export Logs describes some common approaches to filtering the files.

The data may be spread across a series of CSVs when the export task returns more than 1,048,000 rows. In such cases, the ZIP file will contain a series of ZIP files, each containing a CSV.
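As a sketch of the processing-script approach, the following Python reads every row out of such an export, whether the download is a single ZIP of CSVs or a ZIP of nested ZIPs from a large export. The function names are illustrative, not part of the product.

```python
import csv
import io
import zipfile

def combined_rows(export_zip_path):
    """Yield every row from the CSVs inside an Insights export ZIP,
    including CSVs nested one level down in inner ZIPs (large exports)."""
    with zipfile.ZipFile(export_zip_path) as outer:
        for name in outer.namelist():
            if name.endswith(".zip"):
                # Large export: each inner ZIP holds one CSV chunk.
                with zipfile.ZipFile(io.BytesIO(outer.read(name))) as inner:
                    for csv_name in inner.namelist():
                        yield from _read_csv(inner.read(csv_name))
            elif name.endswith(".csv"):
                yield from _read_csv(outer.read(name))

def _read_csv(raw_bytes):
    """Decode one CSV chunk and yield its rows as dictionaries."""
    text = io.TextIOWrapper(io.BytesIO(raw_bytes), encoding="utf-8")
    yield from csv.DictReader(text)
```

From here, the rows can be filtered in the script or loaded into a spreadsheet app as a single data set.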
The Exports page lists the tasks by:
  • Name: The name of the export task.
  • Last Run: The date when the task was most recently run.
  • Created By: The name of the user who created the task.
  • Export Status: Submitted, In Progress, Failed, No Data (when there's no data to export within the date range defined for the task), or Completed, a hyperlink that lets you download the exported data as a CSV file. Hovering over the Failed status displays an explanatory message.

An export task applies to the current version of the skill.

Description of insights_export_page.png follows

Create an Export Task
  1. Open the Exports page and then click + Export.
  2. Enter a name for the report and then enter a date range.
  3. Click Enable PII anonymization for the exported file to replace Personally Identifiable Information (PII) values with anonyms in the exported file. If PII anonymization is not enabled in the skill settings, these anonyms exist only in the exported file; the PII values themselves, not their anonym equivalents, still get stored in the database and appear throughout the Insights reports, including the Conversations report, the Retrainer, and the key phrases in the word cloud. If PII anonymization has been enabled in the skill settings, then the logs and Insights reports contain anonyms.

    The PII anonymization that's enabled in the skill or digital assistant settings factors into which PII values get anonymized in the export file and also contributes to the consistency of the anonymization in the export file.
  4. Click Export.
  5. When the task succeeds, click Completed to download a ZIP of the CSV (or CSVs for large exports). The name of the skill-level export CSV begins with B_. File names for digital assistant-level exports begin with D_.
Description of insights_export_dialog.png follows
Review the Export Logs
Here are some of the fields that you're likely to focus on most often. The Export Log Fields describes all of the fields. Filter the Exported Insights Data describes some approaches for sorting the data.
  • BOT_NAME contains the name of the skill or the name of the digital assistant. You can use this column to see how the dialog is routed from the digital assistant to the skills (and between the skills).
  • CHANNEL_SESSION_ID stores the channel session ID. You can use that ID, in conjunction with the third column, CHANNEL_ID, to create a kind of unique identifier for the session. Because sessions can expire or get terminated, you can use this identifier to find out if the session has changed.
  • TIMESTAMP indicates the chronology or sequence in which the events happened. Typically, you would sort by this column.
  • USER_UTTERANCE and BOT_RESPONSE contain the actual conversation between the skill and its user. These two fields make the interleaving of the user and skill messages easily visible when you sort by the TIMESTAMP.

    There may be duplicate utterances in the USER_UTTERANCE column. This can happen when user testing runs on the same instance, but more likely it's because the utterance is used in different parts of the conversation.

  • You can use the COMPONENT_NAME, CURR_STATE, and NEXT_STATE fields to debug the dialog flow.
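As an illustration of sorting by TIMESTAMP to interleave the messages, this sketch returns the dialog as (speaker, message) pairs. The helper name is hypothetical, and it assumes the TIMESTAMP values sort chronologically as plain strings; for other formats, parse them into datetimes first.

```python
import csv

def conversation_turns(csv_path):
    """Return the user-skill dialog as (speaker, message) pairs,
    ordered by the TIMESTAMP column."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = sorted(csv.DictReader(f), key=lambda r: r["TIMESTAMP"])
    turns = []
    for row in rows:
        # A row may carry a user message, a skill message, or neither.
        if row.get("USER_UTTERANCE"):
            turns.append(("user", row["USER_UTTERANCE"]))
        if row.get("BOT_RESPONSE"):
            turns.append(("skill", row["BOT_RESPONSE"]))
    return turns
```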
Filter the Exported Insights Data
Typically, you would sort the logs by the TIMESTAMP column to view the sequence of events. For other perspectives, such as the skill-user conversation, you can filter the columns by the system-generated internal states. Some of the filtering techniques you'll use most often include:
  • Sorting out the skill and digital assistant conversation – When an export contains data from both a digital assistant and its registered skills, the contents of the BOT_NAME field might seem confusing, as the conversation appears to jump arbitrarily between the different skills and between the skills and the digital assistant. To see the dialog in the correct sequence (and context), sort the TIMESTAMP column in ascending order.
  • Finding the conversation boundaries – Use the System.BeginSession state and one of the terminal states to find the beginning and end of a conversation. Conversations start with a System.BeginSession state. They can end with any of the following terminal states:
    • System.EndSession
    • System.ExpiredSession
    • System.MaxStatesExceededHandler
    • System.DefaultErrorHandler
  • Reviewing the actual user-skill conversation – To isolate the contents of the USER_UTTERANCE and BOT_RESPONSE columns, filter the CURR_STATE column by the system-generated states System.MsgReceived and System.MsgSent.

    For a non-text message response, such as those from UI components like System.CommonResponse and System.List, the skill output consists of partial responses joined by a newline character.
    Sometimes parts of the user-skill dialog may be repeated in the USER_UTTERANCE and BOT_RESPONSE columns. The user text is repeated when there is an automatic transition that does not require user input. The skill responses get repeated if the next state is one of the terminal states, such as System.EndSession or System.DefaultErrorHandler.
  • Reviewing just the dialog flow execution with the user-skill dialog – To view internal transactions or display only the non-text messages, filter out the System.MsgReceived and System.MsgSent states from the CURR_STATE column (the opposite approach to viewing just the dialog).
  • Identifying a session – Compare the values in the CHANNEL_SESSION_ID and SESSION_ID columns (which are next to each other).
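The boundary and message filters above can be sketched in Python. The helper names are hypothetical; rows are assumed to be dictionaries keyed by column name and already sorted by TIMESTAMP.

```python
# Terminal states listed above: any of these ends a conversation.
TERMINAL_STATES = {
    "System.EndSession",
    "System.ExpiredSession",
    "System.MaxStatesExceededHandler",
    "System.DefaultErrorHandler",
}

def split_conversations(rows):
    """Group rows into conversations bounded by System.BeginSession
    and one of the terminal states."""
    conversations, current = [], None
    for row in rows:
        state = row.get("CURR_STATE")
        if state == "System.BeginSession":
            current = []
        if current is not None:
            current.append(row)
            if state in TERMINAL_STATES:
                conversations.append(current)
                current = None
    return conversations

def message_rows(rows):
    """Keep only the rows that carry the user-skill dialog."""
    return [r for r in rows
            if r.get("CURR_STATE") in ("System.MsgReceived", "System.MsgSent")]
```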
The Export Log Fields
The exported CSV for a skill includes the following fields.
Column Name Description Sample Value
BOT_NAME The name of the skill PizzaBot
CHANNEL_SESSION_ID The ID for a user for the session. This value identifies a new session. A change in this value indicates that the session expired or was reset for the channel. 2e62fb24-8585-40c7-91a9-8adf0509acd6
SESSION_ID An identifier for the current session. This is a random GUID, which makes this ID different from the CHANNEL_SESSION_ID or the USER_ID. A session comprises one or more intent execution paths that have been terminated by an explicit return transition in a state definition, or by an implicit return injected by the Dialog Engine. 00cbecbb-0c2e-4749-bfa9-c1b222182e12
TIMESTAMP The "created on" timestamp. Used for chronological ordering or sequencing of events. 14-SEP-20 PM
USER_ID The user ID 2880806
DOMAIN_USERID Refers to the USER_ID. 2880806
PARENT_BOT_ID The ID of the skill or digital assistant. When a conversation is triggered by a digital assistant, this refers to the ID of the digital assistant. 9148117F-D9B8-4E99-9CA9-3C8BA56CE7D5
ENTITY_MATCHES Identifies the composite bag item values that are matched in the first utterance that's resolved to an intent. If a user's first message is "Order a large pizza", this column will contain the match for the PizzaSize item within the composite bag entity, Pizza:
Any other item values in subsequent user messages are not tracked, so if a user's next message includes a PizzaType value, it won't be included in the export file. If a user first enters "Order a pizza" and then, after the intent has been resolved, adds a follow-up message with an entity value for the PizzaSize item ("make it a large"), a null value is recorded in the ENTITY_MATCHES column, because the initial message that was resolved to the intent did not contain any item values.

An empty object ({}) is returned when you enable PII anonymization.

{"Pizza":[{"entityName":"Pizza","PizzaType":["CHEESE BASIC"],"PizzaSize":["Large"]}]}
PHRASE The ODA interpretation of the user input large thin pizza
INTENT_LIST A ranking of the candidate intents, expressed as a JSON object. [{"INTENT_NAME":"OrderPizza","INTENT_SCORE":0.4063},{"INTENT_NAME":"OrderPasta","INTENT_SCORE":0.1986}]

For digital assistant exports, this is a ranking of skills that were called through the digital assistant. For example: [{"INTENT_NAME":"Pizza_For_DA_Starter-1.2","INTENT_SCORE":0.931},{"INTENT_NAME":"Retail_for_DA_Starter-1.1","INTENT_SCORE":0.0996},{"INTENT_NAME":"Finance_for_DA_Starter-1.1-DA","INTENT_SCORE":0.0925}]

BOT_RESPONSE The skill's responses to the user's utterances. How old are you?
USER_UTTERANCE The user input. 18
INTENT The intent selected by the skill to process the conversation. This is the top intent out of the list of candidate intents that were considered for the conversation. OrderPizza
LOCALE The user's locale en-US
COMPONENT_NAME The component (either system or custom) that's executed in the current state. You can use this field along with CURR_STATE and NEXT_STATE to debug the dialog flow. There are other values in the COMPONENT_NAME column that are not components:
  • ODA.Routing – Notes that an event is being recorded.
  • __NO_COMPONENT__ – No component has been defined for the state. In this case, the column may also contain no value at all.
CURR_STATE The current state for the conversation, which you use to determine the source of the message. This field contains the names of the states defined in the dialog flow definition along with system-generated states. You can filter the CSV by these states, which include System.MsgReceived for user messages and System.MsgSent for messages sent by the skill or agents for customer service integrations. checkage

NEXT_STATE The next state in the execution path, as indicated by the state transitions in the dialog flow definition. crust
Language The language used during the session. fr
SKILL_VERSION The version of the skill 1.2
INTENT_TYPE Whether the intent is transactional (TRANS) or an answer intent (STATIC) STATIC
CHANNEL_ID Identifies the channel on which the conversation was conducted. This field, along with CHANNEL_SESSION_ID, identifies a session. AF5D45A0EF4C02D4E053060013AC71BD
ERROR_MESSAGE The returned error message. Session expired due to inactivity.
INTENT_QUERY_TEXT The input that's sent to the intent server for classification. The content of INTENT_QUERY_TEXT and USER_UTTERANCE is the same when the user input is in one of the native languages, but differs when the user input is in a language that's not natively supported and is therefore handled by a translation service. In this case, the INTENT_QUERY_TEXT is in English.
TRANSLATE_ENABLED Whether a translation service is used. NO
SKILL_SESSION_ID The session ID 6e2ea3dc-10e2-401a-a621-85e123213d48
ASR_REQUEST_ID A unique key field that identifies each voice input, in other words, the Speech Request ID. Presence of this value indicates the input is a voice input. cb18bc1edd1cda16ac567f26ff0ce8f0
ASR_EE_DURATION The duration for a single voice utterance within a conversation window. 3376
ASR_LATENCY The voice latency, measured in milliseconds. Voice recognition demands a large number of computations while memory bandwidth and battery capacity are limited, which introduces latency from the time a voice input is received to when it is transcribed. Server-based implementations add further latency due to the round trip. 50
ASR_RTF A standard metric of performance in the voice recognition system: the real time factor (RTF), the ratio of the time taken to process an input to the input's duration. If it takes time P to process an input of duration I, then RTF = P/I. For example, if it takes one second of CPU time to process one second of audio, then the RTF is 1 (1/1); if it takes 500 milliseconds to process one second of audio, the RTF is 0.5. 0.330567
CONVERSATION_ID The conversation ID 906ed6bd-de6d-4f59-a2af-3b633d6c7c06
CUSTOM_METRICS A JSON array that contains an object for each custom metric dimension. name is a dimension name and value is the returned value. This column is available for Versions 22.02 and higher.
[{"name":"Order Sizes","value":"a box of 3 bottles"},{"name":"Wine Types","value":"red wine"},{"name":"Most Popular","value":"Pinot noir"}]
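Because INTENT_LIST is a JSON array, a short script can recover the top intent and the win margin between the two highest-ranked candidates, the same margin surfaced in the Retrainer. A minimal sketch (the helper name is hypothetical):

```python
import json

def win_margin(intent_list_cell):
    """Return (top_intent_name, margin) for one INTENT_LIST value.

    The margin is the score gap between the two highest-ranked
    candidates; it equals the top score when only one is returned."""
    candidates = sorted(json.loads(intent_list_cell),
                        key=lambda c: c["INTENT_SCORE"], reverse=True)
    top = candidates[0]
    runner_up = candidates[1]["INTENT_SCORE"] if len(candidates) > 1 else 0.0
    return top["INTENT_NAME"], round(top["INTENT_SCORE"] - runner_up, 4)
```

For the INTENT_LIST sample value shown above, this returns ("OrderPizza", 0.2077).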
Internal States
State Name Description
System.MsgReceived A message received event that's triggered to Insights when a skill receives a text message from an external source, such as a user or another skill.
System.MsgSent A message sent event that's triggered to Insights when a skill responds to an external source, such as a user or another skill.

For each System.MsgReceived event, there can be zero, one, or more than one, corresponding System.MsgSent events.

System.BeginSession A System.BeginSession event is sent as a marker for starting the session when:
  • No dialog state has been executed yet.
  • The first dialog state is about to be triggered.
System.EndSession A System.EndSession event is captured as a marker for session termination when the current state has not generated any unhandled errors and it has a return transition, which indicates that there won't be another dialog state to execute. The System.EndSession event may also be recorded when the current state has:
  • An error transition for handling an error.
  • The insightsEndConversation: true conversation marker.
System.ExpiredSession (Error type: "systemHandled") A session time out. The default timeout is one hour.

When a conversation stops for more than one hour, the expiration of the session is triggered. The session expiration is captured as two separate events in Insights. The first event is the idle state, the state in the dialog flow where user communication stopped. The second is the internal System.ExpiredSession event.

System.DefaultErrorHandler The default error handler is executed when there is no error handling defined in the dialog flow, either globally in the defaultTransitions node, or at the state level with error transitions. When the dialog flow includes error transitions, a System.EndSession event is triggered.
System.ExpiredSessionHandler The System.ExpiredSessionHandler event is raised if a message is sent from an external system, or user, to the skill after the session has expired. For example, this event is triggered when a user stops chatting with the skill in mid-conversation, but then sends a message after leaving the chat window open for more than one hour.
System.MaxStatesExceededHandler This event is raised if there are more than 100 dialog states triggered as part of a single user message.

Tutorial: Use Oracle Digital Assistant Insights

Apply Insights reporting (including the Retrainer) with this tutorial: Use Oracle Digital Assistant Insights.

Live Agent Insights for Skills

If your skill is configured for live agent transfer, you can compare the number of conversations that it routed to its agent hand off flow (the sequence of System.AgentInitiation and System.AgentConversation states that initiate the agent channel hand off and manage the skill-agent conversation, respectively) to the conversations that were handled by its other flows. Depending on the dialog flow definition, live agent chats can either be explicitly requested by the user, or requested by the skill on the user's behalf (or both).

Insights begins its live agent reporting after the first traversal of the agent hand off flow. Once this happens, the Insights reports include the Handler filter and along with it, charts and metrics for comparing the skill and live agent conversation handling. The Handler filter only displays when you filter the report on dates during which an agent hand off was attempted.

Insights reporting, through its Skill and Live Agent handlers, covers all of the communication between the end user, the skill, and the live agent. This is not the case for DA as Agent conversations, where Insights only covers the conversation up until the chat has been transferred to the live agent. For full reporting on DA as Agent conversations, use Oracle Fusion Service Analytics.

The Handler filter.


Instrument your skill with custom metrics to add detail to the live agent reporting.

Review the Deflection Rate

From the Overview report, you can access the Deflection Rate charts by selecting Skill from the Handler menu. In this section of the Overview report, Insights tracks the conversations that the skill deflected from the live agent as a donut chart that's segmented by skill- and agent-handled conversations and as a trend line chart that plots the conversations over time. Clicking an arc on the donut chart opens the Conversations report filtered by agent or skill.
Description of live_agent_deflection_graphs.png follows

Live Agent Conversation Metrics for Skills

You can access these metrics by selecting Live Agent from the Handler filter (which only displays when you filter the report by a date or date range that includes live agent transfer conversations).
Description of live_agent_option_skill_insights.png follows

Live Agent Conversation Metrics
These metrics reflect how well the skill has been off-loading tasks for live agents.
  • Total number of conversations – The total number of conversations for the selected time period and channel, including both conversations that requested a live agent and conversations where no live agent was requested.
  • Conversations handled by live agent – The total number of conversations with live agent requests.
  • Conversations handled by skills – The total number of conversations (either complete or incomplete) with no live agent requests.
  • Conversations resolved by skill – The number of conversations that completed (that is, the dialog traversed to the exit state) with no live agent requests.
  • Conversations abandoned while waiting for live agent - The number of conversations where users were never handed off to a live agent, despite having requested one. Conversations can be considered abandoned when users never connect with live agents, possibly because they've left the conversation or were timed out.
  • Deflection Rate – The percentage of conversations resolved by the skill, calculated as the tally of Conversations Resolved by Skill divided by the tally for the Total Number of Conversations.
  • Number of users that were transferred to a human agent – The total number of users (unique and otherwise) who were transferred to a human agent.
  • Number of unique users that were transferred to a human agent – The total number of unique users (a group that may include returning users) who were transferred to a live agent. To gauge skill usability, you can compare this metric, which may include returning users, to the number tallied by the Total number of conversations.
    Description of live_agent_metrics_skill.png follows
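As a minimal sketch of the Deflection Rate calculation described above (the tallies are hypothetical):

```python
def deflection_rate(resolved_by_skill, total_conversations):
    """Deflection rate: conversations resolved by the skill,
    expressed as a percentage of the total number of conversations."""
    return 100.0 * resolved_by_skill / total_conversations

# Hypothetical tallies: 80 of 200 conversations completed
# with no live agent request.
rate = deflection_rate(80, 200)  # 40.0 percent
```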

Live Agent Handle/Wait Times
Use these metrics to assess the user experience for live agent chats.
Description of live_agent_handle_wait_time_skill.png follows

  • Average Duration of Skill Conversations – The average number of seconds that users have spent having conversations, calculated by adding up the total amount of time from the start to the end of each conversation and dividing by the total number of conversations.
  • Average Duration of Live Agent – The average number of seconds that users spent on conversations that were routed to a live agent. This amount of time, which is typically longer than the Average Duration of Skill Conversations, is calculated by adding up the total amount of time spent on all live agent conversations and dividing by the Conversations Handled by Live Agent tally.
  • Average Wait Time for the Live Agent – The average number of seconds that the users had to wait in the queue before they were eventually connected to an agent.