The Skill Tester

The Skill Tester lets you simulate conversations with your skill so that you can test the dialog flow, intent resolution, entity matching, and Q&A responses, and review the Intent Engine responses in the conversation.json file. You can also use it to find out how conversations render in different channels.

You can test the various functions of your skill both in an ad hoc manner and by creating test cases from recorded conversations. You can create an entire suite of test cases that are specific to the skill. When developers extend the skill, they can reference the test cases to preserve its core functionality. You open the Skill Tester by clicking Preview.

Typically, you’d use the Skill Tester after you’ve created intents and defined a dialog flow. It’s where you actually chat with your skill or digital assistant to see how it functions as a whole, not where you build Q&A or intents.

As you are creating, testing, and refining intents, you may prefer to use the Utterance Tester.

Tip:

You should test each skill in your target channels early in the development cycle to make sure that your components render as intended.

Track Conversations

In the Conversation tab, the Skill Tester tracks the current response in terms of the current state in the dialog flow. Depending on where you are in the dialog flow, the window shows you the postback actions or any context and system variables that have been set by a previous postback action. It also shows you any URL, call, or global actions.
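For example, a postback action that sets a variable and a transition state is displayed in a form like the following (the variable, action, and state names are illustrative; the structure matches the postback message format shown later in Create a Test Case from a JSON Object):

    {
        "source": "user",
        "type": "postback",
        "payload": {
            "variables": {
                "accountType": "credit card"
            },
            "action": "credit card",
            "state": "askBalancesAccountType"
        }
    }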


In the Intent/Q&A tab, you can see the resolved intent that triggered the current path in the conversation.


When the user input gets resolved to Q&A, you can find out the ranking for the returned answers. If the skill uses answer intents for FAQs, then only the resolved answer intent is displayed.
Finally, the View JSON tab enables you to review the conversation.json file, which has the complete details for the conversation: the entities that match the user input and the values returned from the backend. You can search this JSON object or download it.
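The exact structure of conversation.json can vary by platform version, so treat the following as a purely hypothetical excerpt that only sketches the kind of information recorded there (resolved intents with their scores and matched entity values); the actual field names may differ:

    {
        "intentMatches": {
            "summary": [
                { "intent": "OrderPizza", "score": 0.92 },
                { "intent": "CancelOrder", "score": 0.11 }
            ]
        },
        "matchedEntities": {
            "PizzaSize": "small"
        }
    }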


Test Suites and Test Cases

You can create a test case for each of your skill's use cases, either from JSON or by recording conversations in the Skill Tester. These test cases are part of the skill's metadata, so they persist across versions. Because of this, you can run them to ensure that any extensions made to the skill haven't broken its basic functionality. Test cases aren't limited to preserving the core functions; you can also use them to test new scenarios. As your skill evolves, you can retire test cases that continually fail because of changes introduced through extensions.

All test cases belong to test suites, containers that enable you to partition your testing. We provide a test suite called Default Test Suite, but you can create your own as well. The Test Suites page lists all of the test suites and the test cases that belong to them. The test suites listed on this page may be ones that you have created, or they may have been inherited from a skill that you've extended or cloned. You can use this page to create and manage test suites and test cases, and to compile test cases into test runs.

Add Test Cases

Whether you're creating a skill from scratch or extending one, you can create a test case for each use case. For example, you can create a test case for each payload type. You can build an entire suite of test cases for a skill simply by recording conversations or by creating JSON files that define message objects.

Create a Test Case from a Conversation
Recording conversations is quicker and less error-prone than defining a JSON file. To create a test case from a conversation:
  1. Open the skill or digital assistant that you want to create the test for.
  2. In the toolbar for the bot at the top of the page, click the Tester icon.
  3. Click Bot Tester.
  4. Select the channel.
    Note

    Test cases are channel-specific: the test conversation, as it is handled by the selected channel, is what is recorded for a test case. For example, test cases recorded using one of the Skill Tester's text-based channels cannot be used to test the same conversation on the Oracle Web Channel.
  5. Enter the utterances that are specific to the behavior or output that you want to test.
  6. Click Save As Test.
  7. Complete the Save Conversation as Test Case dialog:
    • If needed, disable the test case by switching off Enabled.
    • If you're running a test case for conversations or messages that have postback actions, you can switch on Ignore Postback Variables so that differences between the expected and actual messages at the postback variable level are ignored and don't cause the test case to fail (see the sketch after these steps).
    • Enter a name and a display name that describe the test.
    • As an optional step, provide details in the Description field that help developers understand how the test validates the expected behavior by describing a scenario or a use case from a design document.
    • If needed, select a different test suite from the Test Suite list.
    • The variable placeholders that you create are listed in the Variables field. For newly created test cases, the Variables field also notes the SYSTEM_BOT_ID placeholder that's substituted for the system.botId values, which change when the skill has been imported from another instance or cloned.

      The responses from a skill or digital assistant can include dynamic information, like timestamps, that will cause test cases to fail when the test run compares the actual value to the expected value. You can exclude dynamic information from the comparison by substituting a placeholder that's formatted as ${MY_VARIABLE_NAME}. For example, a temporal value, such as one returned by the ${.now?string.full} Apache FreeMarker date operation, will cause test cases to continually fail because of the mismatch between the time when the test case was recorded and the time when the test case was run. To enable these test cases to pass, replace the clashing time value in the JSON definition in the Conversation pane with a placeholder.

      For example, ${ORDER_TIME} replaces a date string like Monday, December 4, 2023 9:02:08 PM UTC in the following object:
      {
          "source": "bot",
          "type": "text",
          "payload": {
              "message": "You placed your order on ${ORDER_TIME} for a small Meat Lovers pizza. Your pizza is on the way."
          }
      }
  8. Click Add to Suite.
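To illustrate the Ignore Postback Variables option from step 7, here is a hypothetical pair of postback messages that differ only at the postback variable level. With the option switched on, this difference alone won't cause the test case to fail:

    // Expected postback, as recorded in the test case:
    { "source": "user", "type": "postback", "payload": { "variables": { "size": "small" }, "action": "order", "state": "orderPizza" } }

    // Actual postback during the test run (differs only in payload.variables):
    { "source": "user", "type": "postback", "payload": { "variables": { "size": "large" }, "action": "order", "state": "orderPizza" } }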

Create a Test Case from a JSON Object
You create a test case from an array of message objects by first clicking + Test Case in the Test Suites page and then completing the New Test Case dialog. The properties are the same as those for a recorded test case, except that you must complete the array ([]) in the Conversation window with the message objects. Here is a template for the different payload types:
    [
        {
            source: "user",             // text-only message; the format is kept simple yet extensible
            type: "text",
            payload: {
                message: "order pizza"
            }
        },
        {
            source: "bot",
            type: "text",
            payload: {
                message: "how old are you?",
                actions: [...],         // action types: postback, url, call, share. Bot messages can have actions and globalActions, which, when clicked by the user, send specific JSON back to the bot.
                globalActions: [...]
            }
        },
        {
            source: "user",
            type: "postback",
            payload: {                  // the payload object represents the postback JSON sent from the user to the bot when the button is clicked
                variables: {
                    accountType: "credit card"
                },
                action: "credit card",
                state: "askBalancesAccountType"
            }
        },
        {
            source: "bot",
            type: "cards",
            payload: {
                message: "label",
                layout: "horizontal|vertical",
                // In test files, cards can be strings that are matched with button labels...
                cards: ["Thick", "Thin", "Stuffed", "Pan"],
                // ...or JSON objects that are matched field by field:
                cards: [{
                    title: "...",
                    description: "...",
                    imageUrl: "...",
                    url: "...",
                    actions: [...]      // actions can be specific to a card or global
                }],
                actions: [...],
                globalActions: [...]
            }
        },
        {
            source: "bot|user",         // an attachment message can be either a bot message or a user message
            type: "attachment",
            payload: {
                attachmentType: "image|video|audio|file",
                url: "https://images.app.goo.gl/FADBknkmvsmfVzax9",
                title: "Title for Attachment"
            }
        },
        {
            source: "bot",
            type: "location",
            payload: {
                message: "optional label here",
                latitude: 52.2968189,
                longitude: 4.8638949
            }
        },
        {
            source: "user",
            type: "raw",
            payload: {
                ...                     // free-form, application-specific JSON for custom use cases. Exact JSON matching.
            }
        }
        // ... multiple bot messages per user message are possible
    ]
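
Because the template uses placeholders and comments, it isn't valid JSON as written. For reference, a minimal, syntactically valid conversation array (with illustrative utterances) might look like this:

    [
        {
            "source": "user",
            "type": "text",
            "payload": {
                "message": "order pizza"
            }
        },
        {
            "source": "bot",
            "type": "text",
            "payload": {
                "message": "What kind of pizza would you like to order?"
            }
        }
    ]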
 

Run Test Cases

You can create test runs for a single test case, a subset of test cases, or the entire set of test cases listed in the Test Suites page. As your skill evolves, you may need to retire test cases that are bound to fail because of changes that were deliberately made to the skill. You can also temporarily disable a test case during ongoing development.
Note

You can't delete an inherited test case; you can only disable it.
After the test run completes, click the Test Run Results tab to find out which of the test cases passed or failed.

View Test Run Results

The Test Run Results page lists the recently executed test runs and their results. The test cases compiled into the test run either pass or fail according to a comparison of the expected output that's recorded in the test case definition and the actual output. If the two match, the test case passes. If they don't, it fails. When test cases fail, you can find out why by clicking View Differences.
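For example, if a bot prompt was reworded after the test case was recorded, the comparison flags a mismatch like the following (a minimal sketch using the text message format from the test case template):

    // Expected output, as recorded in the test case:
    { "source": "bot", "type": "text", "payload": { "message": "How big of a pizza do you want?" } }

    // Actual output from the revised skill:
    { "source": "bot", "type": "text", "payload": { "message": "What pizza size?" } }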

Note

The test run results for each skill are maintained for 14 days. They are deleted after this time.
Review Failed Test Cases

The report lists the points of failure at the message level, with the Message Element column noting the position of the skill message within the test case conversation. For each message, the report provides a high-level comparison of the expected and actual payloads. To drill down to see this comparison in detail – and to reconcile the differences to allow this test case to pass in future test runs – click the Actions menu.

Fix Failed Test Cases
When needed, you can use the Apply Actual Value, Ignore Difference, and Add actions to fix a test case (or portions of a test case) to prevent it from failing the next time it's run. The options in the Actions menu are node-specific, so the actions at the message level differ from those at lower points on the traversal.
  • Expand All – Expands the message object nodes.
  • View Difference – Provides a side-by-side comparison of the actual and expected output. The view varies depending on the node. For example, you can view a single action, or the entire actions array. You can use this action before you reconcile the actual and expected output.

  • Ignore Difference – Choose this action when clashing values don't affect the functionality, or when you have multiple differences and don't want to go through them one by one. At the postback level, for example, you can apply actual values individually, or you can ignore differences for the whole postback object.
  • Apply Actual Value – Some changes, however small, can cause many test cases to fail within the same run. This is often the case with changes to text strings such as prompts or labels. For example, changing a text prompt from "How big of a pizza do you want?" to "What pizza size?" will cause any test case that includes this prompt to fail, even though the skill's functionality remains unaffected. While you could accommodate this change by re-recording the test case entirely, you can instead quickly update the test case definition with the revised prompt by clicking Apply Actual Value. Because the test case is then in step with the new skill definition, it will pass (or at least not fail because of the changed wording) in future test runs.
    Note

    While you can apply string values, such as prompts and URLs, you can't use Apply Actual Value to fix a test case when a change to an entity's values or its behavior (disabling the Out of Order Extraction function, for example) causes the values provided by the test case to become invalid. The test case will fail because the skill will continually prompt for a value that it will never receive, causing its responses to fall out of step with the sequence defined by the test case.
  • Add Regex – You can substitute a regular expression to resolve clashing text values. For example, you can add user* to resolve conflicting user and user1 strings (see the sketch after this list).
  • Add – At the postback level of the traversal, Add actions appear when a revised skill includes postback actions that were not present in the test case. To prevent the test case from failing because of the new postback action, you can click Add to include it in the test case. (Add is similar to Apply Actual Value, but at the postback level.)
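As a hypothetical sketch of the Add Regex action, suppose the expected and actual messages differ only in a user name. Substituting the user* pattern in the test case definition lets both variants match:

    // Expected message, as recorded in the test case:
    { "source": "bot", "type": "text", "payload": { "message": "Welcome back, user" } }

    // Actual message from the revised skill:
    { "source": "bot", "type": "text", "payload": { "message": "Welcome back, user1" } }

    // Test case definition after substituting the regex:
    { "source": "bot", "type": "text", "payload": { "message": "Welcome back, user*" } }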

Import Test Cases

You can import a test case when you're developing parallel versions of the same skill or working with clones. To import a test case:
  1. Choose Export Tests from the menu in the skill's tile.
  2. Save the DefaultTestSuite.zip file to your local system, then extract it.
  3. In the extracted ZIP, navigate to the testSuites directory, then open the JSON file of the test case in an editor.
  4. Open the cloned or versioned skill.
  5. Manually create the test case by first clicking + Test Case in the Test Suites page.
  6. Complete the test case properties in the New Test Case dialog.
  7. Delete the array ([]) in the Conversation window.

  8. In the JSON file:
    • Copy the array of conversation objects defined for the conversation object:
         "conversation" : [ {
            "source" : "user",
            "type" : "text",
            "payload" : {
              "message" : "I want to order a pizza"
            }
          }, {
            "source" : "bot",
            "type" : "text",
            "payload" : {
              "message" : "What kind of pizza would you like to order?"
            }
          }, {
            "source" : "bot",
            "type" : "cards",
            "payload" : {
              "layout" : "horizontal",
              "cards" : [ {
                "title" : "CHEESE BASIC",
                "description" : "Classic marinara sauce topped with whole milk mozzarella cheese.",
                "imageUrl" : "https://cdn.pixabay.com/photo/2017/09/03/10/35/pizza-2709845__340.jpg",
                "actions" : [ {
                  "type" : "postback",
                  "label" : "Order Now",
                  "postback" : {
                    "variables" : {
                      "pizza" : "CHEESE BASIC"
                    },
                    "system.botId" : "${SYSTEM_BOT_ID}",
                    "system.state" : "orderPizza"
                  }
      ...
      {
            "source" : "bot",
            "type" : "attachment",
            "payload" : {
              "type" : "image",
              "url" : "https://cdn.pixabay.com/photo/2017/09/03/10/35/pizza-2709845__340.jpg"
            }
          } ]
      Note

      Include only the array of conversation objects. Do not include the comma separator after this array, the variables array definition (if one exists), or the closing curly bracket because they will make your test case definition syntactically invalid.
      {
            "source" : "bot",
            "type" : "attachment",
            "payload" : {
              "type" : "image",
              "url" : "https://cdn.pixabay.com/photo/2017/09/03/10/35/pizza-2709845__340.jpg"
            }
          } ],
          "variables" : [ "SYSTEM_BOT_ID" ]
        } 
    • Copy it into the Conversation window.

      If you included the comma separator after the conversation array, the variables array definition, or the closing curly bracket, delete them to avoid syntax errors.

  9. Click Add to Suite.