Entities

While intents map words and phrases to a specific action, entities add context to the intent itself. They help to describe the intent more fully and enable your bot to complete a user request.

The OrderPizza intent, for example, describes a user request, but only in general terms. To fill in the specifics, this intent is augmented by the PizzaSize entity, which identifies values like large, medium, and small from the user input. There are two types of entities, both of which you can declare as variables in the dialog flow: built-in entities that we provide for you and custom entities, which you can add on your own.

Built-In Entities

We provide entities that identify objective information from the user input, like time, date, and addresses.

[Image: system_entities_01.eps]

These built-in entities extract primitive values like strings and integers, but can also extract more complicated values from the user input using groups of properties.
Note

Whenever you define a variable as an entity in a YAML-based dialog flow, be sure to match the entity name and letter case exactly. For example, you’ll get a validation error if you enter confirm: "YESNO" instead of confirm: "YES_NO".

Built-In Entities and Their Properties

Entities extract content using properties, each of which recognizes a specific value. You can see these properties in the JSON output that’s returned by the NLU Engine. In this output, the matched entities display along with the value that they’ve identified from the user input. Within your dialog flow definition, you can use these properties to isolate a specific facet of an entity value. While each entity has its specific properties, all entities have the following properties:
Property Description
beginOffset The zero-based offset at which this slotted entity value begins in the user input.
endOffset The zero-based offset at which this slotted entity value ends.
originalString The original string that was extracted from the query for this entity slot or the response to the prompt.
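In a YAML-based dialog flow, you can reference these common properties with Apache FreeMarker value expressions. Here's a minimal sketch, assuming a variable named confirmation that's declared as the YES_NO entity (the variable and state names are illustrative):

```yaml
context:
  variables:
    iResult: "nlpresult"
    confirmation: "YES_NO"
states:
  echoMatch:
    component: "System.Output"
    properties:
      # originalString holds the exact text that was matched from the user input
      text: "You entered: ${confirmation.value.originalString}"
    transitions:
      return: "done"
```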
Note

The DATE, TIME, and DURATION entities are deprecated in Release 22.08. These entities are not available to skills created on this version of the platform; such skills use the DATE_TIME entity instead. Existing skills upgraded to 22.08 will continue to support these legacy system entities, though there may be some behavior changes.
Entity Name Content Extracted Examples Properties (Referenced in Value Expressions) Example NLU Engine Response
ADDRESS The city, house number, and road

This entity is English-only.

500 Smith Road, Smithville
  • city

  • houseNumber

  • road

{
"road": "smith road",
"city": "smithville",
"entityName": "ADDRESS",
"houseNumber": "500",
"originalString": "500 Smith Road, Smithville"
}
CURRENCY Representations of money. The detected locale of the user can be used to disambiguate $ and ¥ currencies.
  • $67

  • 75 dollars

  • amount

  • currency

  • totalCurrency

"CURRENCY": [
  {
    "amount": 550,
    "currency": "usd",
    "totalCurrency": "550.0 usd",
    "entityName": "CURRENCY"
  }
]
DATE An absolute or relative date.

This entity is deprecated in Version 22.08 and is unavailable to skills created on this version of the platform. For skills created using prior versions, consider using the DATE_TIME entity instead.

Note: When the user input names a day, but provides no other temporal context, the system considers this a future date. For example, it considers Wednesday in the following input as next Wednesday, not the current Wednesday or the prior Wednesday.
  • Book me a ticket for Wednesday.

  • I want to file an expense report for Wednesday.

You can override this behavior by applying an ambiguity resolution rule. While the DATE entity resolves to the format of several supported locales, you can also opt to ignore the format of the detected locale and impose a default format and a tense (future, past, nearest, etc.) through such a rule.
  • November 9, 2016

  • Today

date
{
  "entityName": "Meeting",
  "DATE_TIME": [
    {
      "originalString": "Monday, October 16th",
      "bagItem": "Meeting:DateTime",
      "subType": "DATE",
      "timeZone": "UTC",
      "movableDateValue": "--10-16",
      "relativeRepresentation": "--10-16",
      "entityName": "DATE_TIME",
      "value": "2022-10-16"
    }
  ]
}
DATE_TIME Extracts various kinds of time-related information through the following subtypes: a date, a time, a date and time, a recurring event, an interval, or a duration.
  • Date: January 1, 2023
  • Time: 10am
  • Date and Time: January 1, 2023 at 10am
  • Interval: January 1 2023 from 10 am for 2 hours
  • Duration: 2 hours
  For "Schedule a meeting for every Tuesday from 10:00 am to 1 pm starting on January 23, 2022 and ending February 23":
   "entityMatches": {
      "Meeting": [
        {
          "entityName": "Meeting",
          "DATE_TIME": [
            {
              "originalString": "February 23",
              "bagItem": "Meeting:DateTime",
              "subType": "DATE",
              "timeZone": "UTC",
              "role": "end",
              "movableDateValue": "--02-23",
              "relativeRepresentation": "--02-23",
              "entityName": "DATE_TIME",
              "value": "2023-02-23"
            },
            {
              "originalString": "January 23, 2022",
              "bagItem": "Meeting:DateTime",
              "subType": "INTERVAL",
              "startDate": {
                "originalString": "January 23, 2022",
                "subType": "DATE",
                "timeZone": "UTC",
                "entityName": "DATE_TIME",
                "value": "2022-01-23"
              },
              "entityName": "DATE_TIME"
            },
            {
              "originalString": "every Tuesday from 10:00 am to 1 pm",
              "bagItem": "Meeting:DateTime",
              "subType": "RECURRING",
              "timeZone": "UTC",
              "recurrenceFrequency": {
                "originalString": "every Tuesday from 10:00 am to 1 pm",
                "subType": "DURATION",
                "timeZone": "UTC",
                "entityName": "DATE_TIME",
                "value": "P1W"
              },
              "startInterval": {
                "originalString": "Tuesday from 10:00 am to 1 pm",
                "subType": "INTERVAL",
                "timeZone": "UTC",
                "startDate": {
                  "originalString": "Tuesday",
                  "subType": "DATE",
                  "timeZone": "UTC",
                  "weekday": "TU",
                  "relativeReference": "weekday",
                  "entityName": "DATE_TIME",
                  "value": "2022-10-18"
                },
                "startTime": {
                  "originalString": "10:00 am",
                  "subType": "TIME",
                  "timeZone": "UTC",
                  "entityName": "DATE_TIME",
                  "value": "10:00:00"
                },
                "endTime": {
                  "originalString": "1 pm",
                  "subType": "TIME",
                  "timeZone": "UTC",
                  "entityName": "DATE_TIME",
                  "value": "13:00:00"
                },
                "entityName": "DATE_TIME"
              },
              "entityName": "DATE_TIME"
            }
          ]
        }
      ]
    }
  • Interpretation of February 23 per the time resolution rules. Because the use case is scheduling a meeting, the date is always interpreted as forward-looking:
    "movableDateValue": "--02-23",
    "relativeRepresentation": "--02-23"
  • "value": "P1W": An ISO 8601 interchange standard representation of weekly (once a week), where P is the duration designator and W is the week designator.
 
DURATION The amount of time between the two endpoints of a time interval

This entity is deprecated in Version 22.08 and is unavailable to skills created on this version of the platform. For skills created using prior versions, consider using the DATE_TIME entity instead.

  • 4 years

  • two weeks

  • startDate

  • endDate

[
  {
    "originalString": "2 hours",
    "bagItem": "Meeting:DateTime",
    "subType": "DURATION",
    "timeZone": "UTC",
    "entityName": "DATE_TIME",
    "value": "PT2H"
  }
]
EMAIL An email address. The NLU system can recognize email addresses that have a combination of the following:
  • part before the at (@) symbol:
    • uppercase and lowercase letters in the Latin alphabet (A-Z and a-z)
    • digits (0-9)
    • the following printable characters: !#$%&'*+-/=?^_`{}~
    • dot (.)
  • part after the at (@) symbol:
    • uppercase and lowercase letters in the Latin alphabet (A-Z and a-z)
    • digits (0-9)
    • hyphen (-)
ragnar.smith@example.com    
LOCATION Extracts cities, states, and countries from the user's input.
  • Redwood City
  • CA
  • USA
  • city
  • state
  • country
"LOCATION": [
  {
    "originalString": "Redwood City, CA, USA",
    "name": "redwood city, ca, usa",
    "country": "usa",
    "state": "ca",
    "city": "redwood city",
    "entityName": "LOCATION"
  }
]
NUMBER Matches ordinal and cardinal numbers. You can resolve an entity as the locale-specific format (grouping of thousands by full stops, commas, spaces, etc.).
  • 1st

  • first

  • 1

  • one

   
PERSON Recognizes a string as the name of a person.

The PERSON entity can't match names that are also locations (for example, Virginia North).

To expand the PERSON entity to always match the people in your organization, you can associate it with a Value List Entity.
  • John J. Jones
  • Ashok Kumar
  • Gabriele D'Annunzio
  • Jones, David
  • Cantiflas
  • Zhang San
  • Virginia Jones
name
"PERSON": [
  {
    "originalString": "John J. Johnson",
    "name": "john j. johnson",
    "entityName": "PERSON"
  }
]
PHONE NUMBER A phone number—The NLU Engine recognizes phone numbers that have seven or more digits (it can’t recognize any phone number with fewer digits). All country codes need to be prefixed with a plus sign (+), except for the United States of America (where the plus sign is optional). The various parts of the phone number (the area code, prefix, and line number), can be separated by dots (.), dashes (-), or spaces. If there are multiple phone numbers entered in the user input, then the NLU Engine can recognize them when they’re separated by commas. It can’t recognize different phone numbers if they’re separated by dots, dashes or spaces.
  • (650)-555–5555

  • 16505555555

  • +61.3.5555.5555

  • phoneNumber

  • completeNumber

{
  "phone_number": "(650)-555-5555",
  "complete_number": "(650)-555-5555",
  "entityName": "PHONE_NUMBER"
}
TIME A specific time.

This entity is deprecated in Version 22.08 and is unavailable to skills created on this version of the platform. For skills created using prior versions, consider using the DATE_TIME entity instead.

In some cases, for example, when the input is ambiguous, you may need the TIME entity to resolve input consistently as a past or future time, or approximate it by the nearest time. To do this, apply an ambiguity resolution rule.

2:30 pm
  • hrs

  • mins

  • secs

  • "hourFormat":"PM"

"startTime": {
  "date": 1613653200000,
  "zoneOffset": "0",
  "endOffset": 4,
  "mins": 0,
  "zone": "UTC",
  "entityName": "TIME",
  "secs": 0,
  "hrs": 1,
  "originalString": "1 pm",
  "type": "TIME",
  "hourFormat": "PM",
  "beginOffset": 0
}
URL A URL—This entity can extract IPv4 addresses, Web URLs, deep links (http://example.com/path/page), file paths, and mailto URIs. If the user input specifies login credentials, then it must also include the protocol. Otherwise, the protocol isn’t required. http://example.com
  • protocol

  • domain

  • fullPath

{
  "protocol": "http",
  "domain": "example.com"
}
YES_NO Detects a "yes" or a "no".
"YES_NO": [
  {
    "beginOffset": 0,
    "endOffset": 4,
    "originalString": "Yeah",
    "yesno": "YES",
    "entityName": "YES_NO",
    "type": "YES_NO"
  }
]

The DATE_TIME Entity

There are many ways that your skill might need to get date and time input. For example, you may need a simple date or time, a date and a time, or a one-time or recurring period. You can use the DATE_TIME entity to gather information for all of these scenarios.

With the DATE_TIME entity, you choose a specific subtype to define what information to gather. The following table shows which subtype to use for each possible scenario and links to information about the attributes for each subtype.

Scenario DATE_TIME Subtype Reference
A date. Date DATE Subtype Attributes
A time. Time TIME Subtype Attributes
A date and a time. Date Time DATETIME Subtype Attributes
A span of time. For example, 1 hour or 4 days. Duration DURATION Subtype Attributes
A single occurrence of a period defined by a beginning and ending date or a beginning and ending date and time. Interval INTERVAL Subtype Attributes
A regularly recurring period defined by, for example, the start and end of the first period, the interval between the recurring periods, and when the periods stop recurring. Recurring RECURRING Subtype Attributes
Note

The DATE_TIME entity supersedes the DATE, TIME, DURATION, and SET system entities, which have been deprecated and are not available in skills created in Release 22.08 and later. Existing skills upgraded to 22.08 will continue to support these deprecated system entities, though there may be some behavior changes.
You can use the Date, Time, and Duration subtypes as standalone entities in the dialog flow (where you declare separate variables for each), but you can use the Interval and Recurring subtypes only by incorporating them into a composite bag entity.
Note

We recommend that all DATE_TIME subtypes be managed within a composite bag entity.
If you use the Date, Time, and Duration subtypes as standalone entities in a YAML-based dialog flow, specify the subtype using dot notation: DATE_TIME.DATE, DATE_TIME.TIME, and DATE_TIME.DURATION (and, for SET, DATE_TIME.RECURRING). For example:
context:
  variables:
    iResult: "nlpresult"
    Startdate: "DATE_TIME.DATE"
    duration: "DATE_TIME.DURATION" 
In the states node, you reference these variables using a System.ResolveEntities component.
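For example, a state that resolves one of the variables declared above might look like the following sketch (the state and transition names are illustrative):

```yaml
states:
  resolveStartDate:
    component: "System.ResolveEntities"
    properties:
      # Prompts for and resolves the Startdate variable declared in the context node
      variable: "Startdate"
    transitions:
      next: "resolveDuration"
```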

In visual dialog mode, reference the DATE_TIME subtype using Resolve Entity and Resolve Declarative Entity states.

DATE_TIME values are represented in ISO 8601 format. For user-friendly output, use the Apache FreeMarker .xs built-in. For example, the Time subtype is extracted using .value.value?time.xs?string['hh:mm a'] in the following resource bundle reference:
${rb('pizzaDeliveryMessage','time',deliveryTime.value.value?time.xs?string['hh:mm a'])}
The first value gets the content of the variable as an object. The second value is an attribute of the DATE_TIME object that holds the time value.
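The same pattern applies to the other subtypes. As a sketch, a Date subtype's ISO 8601 value can be formatted with the ?date coercion (the state name and the startDate variable are illustrative; the format string follows FreeMarker's date formatting):

```yaml
printStartDate:
  component: "System.Output"
  properties:
    # Formats the DATE subtype's ISO 8601 value (for example, 2022-09-07)
    text: "Your start date is ${startDate.value.value?date.xs?string['MMM d, yyyy']}"
  transitions:
    return: "done"
```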
Attributes for Each DATE_TIME Subtype

Here are the attributes for each DATE_TIME subtype.

Note that, just like every other system entity, the subtypes also include the beginOffset, endOffset, and originalString properties.

DATE Subtype Attributes

The DATE subtype contains these attributes about a specific date:

Attribute Type Explanation
entityName String DATE_TIME
month Integer

When DATE is an attribute of the RECURRING subtype, and the original string includes the name of a month, such as "every Monday of July", this represents the numeric representation ("7" in this example) of the explicitly-specified month value.

movableDateValue String

When DATE is an attribute of RECURRING and the slotted date doesn't represent a specific date (that is, it is a movable date such as July 4), this represents the explicitly-specified movable date value that's used by the RECURRING subtype's DATE attribute to differentiate between the resolved movable date and the resolved non-movable date. For example, if the slotted date is July 4, then this value is --07-04.

ordinal Integer

When DATE is an attribute of the RECURRING subtype, and the original string specifies an ordinal value, such as first in "every first Monday", this represents the numeric value of the ordinal (in this example, "1").

ordinalReference Enum

When DATE is an attribute of the RECURRING subtype, and the original string includes an ordinal that is qualified by the name of a month, such as July in "every first Monday of July", this represents the explicitly-specified qualifier ('M' for month).

subType String DATE
timezone String The time zone offset. For example: +07:00.
type String DATE_TIME
value String The resolved value in ISO 8601 format. For example 2022-08-05.
weekday Enum

When DATE is an attribute of the RECURRING subtype, and the original string includes the name of a day, such as "every Monday", this represents the explicitly-specified weekday value using the iCalendar format, such as MO, TU, and WE.

year Integer

When DATE is an attribute of the RECURRING subtype, and the original string includes the year, such as "every Monday of 2023", this represents the explicitly-specified year value.

Here's an example of the NLU response for the DATE subtype:

          "aDate": {
            "endOffset": 8,
            "entityName": "DATE_TIME",
            "timeZone": "-10:00",
            "originalString": "tomorrow",
            "subType": "DATE",
            "type": "DATE_TIME",
            "value": "2022-09-07",
            "beginOffset": 0
          }
TIME Subtype Attributes

The TIME subtype contains these attributes about a specific time:

Attribute Type Explanation
entityName String DATE_TIME
subType String TIME
timezone String The time zone offset. For example: +07:00.
type String DATE_TIME
value String The resolved value in ISO 8601 format. For example 12:00:00.

Here's an example of the NLU response for the TIME entity:

          "aTime": {
            "endOffset": 4,
            "entityName": "DATE_TIME",
            "timeZone": "-10:00",
            "originalString": "2 pm",
            "subType": "TIME",
            "type": "DATE_TIME",
            "value": "14:00:00",
            "beginOffset": 0
          }
DATETIME Subtype Attributes

The DATETIME subtype contains these attributes about a specific date and time:

Attribute Type Explanation
date DATE This object contains the attributes described in DATE Subtype Attributes.
entityName String DATE_TIME
subType String DATETIME
time TIME This object contains the attributes described in TIME Subtype Attributes.

Here's an example of the NLU response for the DATETIME subtype:


          "aDateAndTime": {
            "date": {
              "endOffset": 5,
              "entityName": "DATE_TIME",
              "timeZone": "-10:00",
              "originalString": "today",
              "subType": "DATE",
              "type": "DATE_TIME",
              "value": "2022-09-06",
              "beginOffset": 0
            },
            "entityName": "DATE_TIME",
            "subType": "DATETIME",
            "time": {
              "endOffset": 13,
              "entityName": "DATE_TIME",
              "timeZone": "-10:00",
              "originalString": "noon",
              "subType": "TIME",
              "type": "DATE_TIME",
              "value": "12:00:00",
              "beginOffset": 9
            }
          }
DURATION Subtype Attributes

The DURATION subtype contains these attributes about a day or time duration, such as 1 week:

Attribute Type Explanation
entityName String DATE_TIME
subType String DURATION
timezone String The time zone offset. For example: +07:00.
type String DATE_TIME
value String Duration in ISO 8601 format. Examples: PT1H for 1 hour, P4D for 4 days, P1W for 1 week, P2M for 2 months.

Here's an example of the NLU response for the DURATION subtype:

          "aDuration": {
            "endOffset": 7,
            "entityName": "DATE_TIME",
            "timeZone": "-10:00",
            "originalString": "3 hours",
            "subType": "DURATION",
            "type": "DATE_TIME",
            "value": "PT3H",
            "beginOffset": 0
          }
INTERVAL Subtype Attributes

The INTERVAL subtype contains these attributes about a period that's defined by a beginning and ending date and time, or is defined by a date, start time, and length, such as 2 hours.

Attribute Type Explanation
duration DURATION This object contains the attributes described in DURATION Subtype Attributes.
endDate DATE This object contains the attributes described in DATE Subtype Attributes.

Included for Date and Time and Date Only prompt types.

endTime TIME This object contains the attributes described in TIME Subtype Attributes.

Included for Date and Time and Time Only prompt types.

entityName String DATE_TIME
startDate DATE This object contains the attributes described in DATE Subtype Attributes.

Included for Date and Time and Date Only prompt types.

startTime TIME This object contains the attributes described in TIME Subtype Attributes.

Included for Date and Time and Time Only prompt types.

subType String INTERVAL

Here's an example of the NLU response for the INTERVAL entity with the Date and Time prompt type:

          "anInterval": {
            "duration": {
              "entityName": "DATE_TIME",
              "subType": "DURATION",
              "value": "P1D"
            },
            "endDate": {
              "endOffset": 8,
              "entityName": "DATE_TIME",
              "timeZone": "-10:00",
              "originalString": "tomorrow",
              "subType": "DATE",
              "type": "DATE_TIME",
              "value": "2022-09-07",
              "beginOffset": 0
            },
            "entityName": "DATE_TIME",
            "subType": "INTERVAL",
            "startTime": {
              "endOffset": 4,
              "entityName": "DATE_TIME",
              "timeZone": "-10:00",
              "originalString": "noon",
              "subType": "TIME",
              "type": "DATE_TIME",
              "value": "12:00:00",
              "beginOffset": 0
            },
            "endTime": {
              "endOffset": 4,
              "entityName": "DATE_TIME",
              "timeZone": "-10:00",
              "originalString": "noon",
              "subType": "TIME",
              "type": "DATE_TIME",
              "value": "12:00:00",
              "beginOffset": 0
            },
            "startDate": {
              "endOffset": 5,
              "entityName": "DATE_TIME",
              "timeZone": "-10:00",
              "originalString": "today",
              "subType": "DATE",
              "type": "DATE_TIME",
              "value": "2022-09-06",
              "beginOffset": 0
            }
          }
RECURRING Subtype Attributes

The RECURRING subtype contains these attributes about a regularly recurring period:

Attribute Type Explanation
entityName String DATE_TIME
recurrenceDates Array of DATE Included when multiple recurring dates are given. This object contains an array of DATE objects with the attributes described in DATE Subtype Attributes.
recurrenceFrequency DURATION This object contains the attributes described in DURATION Subtype Attributes.
recurrenceTimes Array of TIME Included when multiple recurring times are given. This object contains an array of TIME objects with the attributes described in TIME Subtype Attributes.
recurrenceUntil INTERVAL Specifies the bounds of the repetition. Typically, only the end date is specified. This object contains the attributes described in INTERVAL Subtype Attributes.
startDate DATE This object contains the attributes described in DATE Subtype Attributes.

Note that for RECURRING entities, the DATE object may include the month, movableDateValue, ordinal, ordinalReference, weekday, and year attributes.

Included for Date Only prompt type.

startDateTime DATETIME This object contains the attributes described in DATETIME Subtype Attributes.

Note that for RECURRING entities, the DATETIME's DATE sub-object may include the month, movableDateValue, ordinal, ordinalReference, weekday, and year attributes.

Included for Date and Time prompt type.

startInterval INTERVAL This object contains the attributes described in INTERVAL Subtype Attributes.
startTime TIME This object contains the attributes described in TIME Subtype Attributes.

Included for Time Only prompt type.

subType String RECURRING

Here's an example of the NLU response for the RECURRING subtype with the Date and Time prompt type:

          "aRecurringPeriod": {
            "startInterval": {
              "duration": {
                "entityName": "DATE_TIME",
                "subType": "DURATION",
                "value": "PT1H"
              },
              "endDate": {
                "entityName": "DATE_TIME",
                "timeZone": "-10:00",
                "subType": "DATE",
                "value": "2022-07-28"
              },
              "entityName": "DATE_TIME",
              "subType": "INTERVAL",
              "startTime": {
                "endOffset": 7,
                "entityName": "DATE_TIME",
                "timeZone": "-10:00",
                "originalString": "12 noon",
                "subType": "TIME",
                "bagItem": "Meeting:DateTime",
                "type": "DATE_TIME",
                "value": "12:00:00",
                "beginOffset": 0
              },
              "endTime": {
                "entityName": "DATE_TIME",
                "timeZone": "-10:00",
                "subType": "TIME",
                "value": "13:00:00"
              },
              "startDate": {
                "endOffset": 8,
                "entityName": "DATE_TIME",
                "timeZone": "-10:00",
                "originalString": "tomorrow",
                "subType": "DATE",
                "bagItem": "Meeting:DateTime",
                "type": "DATE_TIME",
                "value": "2022-07-28",
                "beginOffset": 0
              }
            },
            "recurrenceFrequency": {
              "endOffset": 10,
              "entityName": "DATE_TIME",
              "timeZone": "-10:00",
              "originalString": "every week",
              "subType": "DURATION",
              "type": "DATE_TIME",
              "bagItem": "Meeting:DateTime",
              "value": "P1W",
              "beginOffset": 0
            },
            "entityName": "DATE_TIME",
            "subType": "RECURRING",
            "recurrenceUntil": {
              "endDate": {
                "endOffset": 6,
                "entityName": "DATE_TIME",
                "timeZone": "-10:00",
                "originalString": "Sept 1",
                "subType": "DATE",
                "bagItem": "Meeting:DateTime",
                "type": "DATE_TIME",
                "value": "2022-09-01",
                "beginOffset": 0
              },
              "entityName": "DATE_TIME",
              "subType": "INTERVAL"
            }
          }

Ambiguity Resolution Rules for Time and Date Matches

Users can enter partial dates where the time is implied. For example:
  • "Schedule a meeting for Monday"
  • "Create an expense report for 7/11"
  • "Create an expense report for the 11th"
Some situations, like scheduling a meeting, imply a future time. Others, like creating an expense report, refer to a time in the past. To ensure that the DATE_TIME entity's Date and Time subtypes resolve ambiguous input as a past or future time, or as the closest approximation, you can apply ambiguity resolution rules. To set the temporal context for the time resolution, for example, click the DATE_TIME entity and then apply a rule.
Note

The ambiguity resolution rules do not validate the user input. You can validate the user input with a custom validator that uses Apache FreeMarker (not recommended) or with an Entity Event Handler (recommended). This validator returns false (validation fails) if a past date is given for a forward-looking use case (for example, a meeting scheduler). For a backward-looking use case like expense reporting, the validator returns false if the user enters a future date.

[Image: time_entity_configuration.png]

Note

If you're referencing the same entity with two or more items within the same composite bag, or if two or more composite bags reference the same entity and are also associated with the same intent, upgrade to Release 21.12 to ensure that the ambiguity resolution rules specific to each entity reference are handled separately and not overwritten by the rules set for a previously resolved entity.
Resolution Rules for Matches to the Date Subtype
Date resolves to the UTC date, not the server's or the browser's date. For example, "today" uttered at 8 pm on July 8th from the Hawaii–Aleutian Time Zone (UTC−10:00) is resolved as July 9th.
Rule How it works Examples
Past Resolves the ambiguous input as the nearest day of the week in the past.
  • If the utterance includes "Monday" and the current day is also Monday, then "Monday" is resolved as today.
  • If the utterance includes "Monday" and the current day is Wednesday, then "Monday" is resolved as the previous Monday.
Future Resolves the ambiguous input as the nearest day in the future.
  • If the utterance includes "Monday" and the current day is also Monday, then "Monday" is resolved as today.
  • If the utterance includes "Monday", and the current day is Tuesday, then "Monday" is resolved as the following Monday.
  • If the utterance includes "Tuesday", and the current day is Monday, then "Tuesday" is resolved as this Tuesday.
Nearest Resolves the ambiguous input as the nearest day.
  • If the utterance includes "Monday" and the current day is also Monday, then "Monday" is resolved as today.
  • If the utterance includes "Monday" and the current day is Tuesday, then "Monday" resolves as yesterday.
  • If the utterance includes "Monday" and the current day is Sunday, then "Monday" resolves as tomorrow.
Default Resolves the ambiguous input as a future date. For example, if the input includes Wednesday, the day is interpreted as next Wednesday, not the prior Wednesday or the current day (if it's a Wednesday, that is).
Resolution Rules for Matches to the Time Subtype
Rule How it works Examples
Past Resolves the input to the nearest time in the past relative to the current time in the UTC time zone.
  • If the utterance includes "9 am" and the current time is 10:00 am, then the time is resolved as 9:00 am today.
  • If the utterance includes "9 pm" and the current time is 10:00 am, then the time is resolved as 9:00 pm yesterday.
  • If the utterance includes "9" and the current time is 10:00 am, then the time is resolved as 9:00 am today.
Future Resolves the input to the nearest time in the future relative to the current time in the UTC time zone.
  • If the utterance includes "9 am" and the current time right now is 10:00 am, then the time is resolved as 9:00 am tomorrow.
  • If the utterance includes "9 pm" and the current time is 10:00 am, the time is resolved as 9 pm today.
  • If the utterance includes "9" and the current time is 10:00 am, then the time is resolved as 9:00 pm today.
Nearest Resolves the input as the nearest time relative to the current time in the UTC time zone.
  • If the utterance includes "9 am" and the current time is 10:00 am, then the time is resolved as today 9:00 am.
  • If the utterance includes "9 pm" and the current time is 10:00 am, then the time is resolved as 9:00 pm today.
  • If the utterance includes "9" and the current time is 10:00 am, then the time is resolved as 9:00 am today.
  • If the utterance includes "10:00" and the current time is 1:00 am, then the time is resolved as 10:00 pm yesterday.
Default Resolves the input by the method used in the pre-21.06 releases of Oracle Digital Assistant.
  • If the utterance includes "9 am" and the current time is 10 am, then the time is resolved as 9 am today.
  • If the utterance includes "9 pm" and the current time is 10 am, then the time is resolved as 9 pm today.
  • If the utterance includes "9" and the current time is 10 am, then the time is resolved as 9 am today.
  • If the utterance includes "1:00 am" and the current time is 2 pm, then the time is resolved as 1 am tomorrow.
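As an illustration, the Past, Future, and Nearest rules above can be sketched in Python. This is a hypothetical helper that mirrors the table's examples, not the actual Oracle Digital Assistant implementation:

```python
from datetime import datetime, timedelta

def resolve_time(hour_24, now, rule):
    """Resolve a bare hour match per the Past/Future/Nearest rules.
    hour_24 is the matched hour in 24-hour form; now is the current UTC time."""
    today = now.replace(hour=hour_24, minute=0, second=0, microsecond=0)
    if rule == "Past":
        # Nearest time in the past (or exactly now)
        return today if today <= now else today - timedelta(days=1)
    if rule == "Future":
        # Nearest time strictly in the future
        return today if today > now else today + timedelta(days=1)
    # Nearest: whichever candidate is closest to the current time
    candidates = [today - timedelta(days=1), today, today + timedelta(days=1)]
    return min(candidates, key=lambda t: abs(t - now))

now = datetime(2021, 5, 25, 10, 0)   # current time 10:00 am
resolve_time(21, now, "Past")        # "9 pm" resolves to 9:00 pm yesterday
resolve_time(9, now, "Future")       # "9 am" resolves to 9:00 am tomorrow
```

Running the table's examples through this sketch reproduces its resolutions, including "10:00" resolving to 10:00 pm yesterday under Nearest when the current time is 1:00 am.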

Locale-Based Entity Resolution

You can enable the CURRENCY, DATE, and NUMBER entities to resolve to the user locale by switching on Consider End User Locale.
Description of currency_entity_customization.png follows

Depending on the entity, this option has different applications:
  • DATE resolves to the locale-specific format: it can resolve 11/7 as November 7 for en-US or July 11 for en-AU, for example. For non-supported locales, you can apply a format and a temporal context, such as past or future.
  • NUMBER resolves to the country-specific numeric format -- that is, the comma, period, or space used to separate groups of thousands, and the comma or period used as the decimal separator. For example, the U.K. and U.S. both use a comma to separate groups of thousands and a period as the decimal separator.
    Note

    When Consider End User Locale is switched off, the NUMBER entity resolves as COMMA_DOT (1,000.00).
  • CURRENCY uses locale to resolve to a specific $ or ¥ currency. When no locale is detected, you can set the input to resolve as the $ or ¥ currency that's set by the Ambiguity Resolution Rule.
Note

If you're referencing the same entity with two or more items within the same composite bag, or if two or more composite bags reference the same entity and are also associated with the same intent, upgrade to Release 21.12 to ensure that the locale customization specific to each entity reference is handled separately and not overwritten by the locale configuration of a previously resolved entity.
Locale-Based Date Resolution
When the user's locale cannot be detected, the date is resolved as the selected default date format.
For this Locale... This input... ...Resolves as... Format (Date-Month Sequence)
United States (en_US) 11/7 November 7 MONTH_DAY
Great Britain (en_GB) 11/7 July 11 DAY_MONTH
Canada - English (en_CA) 11/7 November 7 MONTH_DAY
Canada - French (fr_CA) 11/7 November 7 MONTH_DAY
Australia (en_AU) 11/7 July 11 DAY_MONTH
Mexico (es_MX) 11/7 July 11 DAY_MONTH
Singapore (zh_SG) 11/7 July 11 DAY_MONTH
United Arab Emirates (ar_AE) 11/7 November 7 MONTH_DAY
Arabic (ar_AR) 11/7 November 7 MONTH_DAY
France (fr_FR) 11/7 July 11 DAY_MONTH
Netherlands (nl_NL) 11/7 July 11 DAY_MONTH
Germany (de_DE) 11/7 July 11 DAY_MONTH
Italy (it_IT) 11/7 July 11 DAY_MONTH
Portugal (pt_PT) 11/7 July 11 DAY_MONTH
Spain (es_ES) 11/7 July 11 DAY_MONTH
China (zh_CN) 11/7 November 7 MONTH_DAY
Japan (ja_JP) 11/7 November 7 MONTH_DAY
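A minimal sketch of locale-based date resolution, assuming a hypothetical lookup table built from a few rows above (not the product's actual code):

```python
from datetime import datetime

# Hypothetical format lookup distilled from the table; only a few locales shown.
DATE_FORMAT = {
    "en_US": "MONTH_DAY", "en_GB": "DAY_MONTH",
    "en_AU": "DAY_MONTH", "ja_JP": "MONTH_DAY",
}

def resolve_date(text, locale, year=2021):
    """Interpret an ambiguous d/m or m/d string per the locale's format."""
    first, second = (int(part) for part in text.split("/"))
    if DATE_FORMAT[locale] == "MONTH_DAY":
        month, day = first, second
    else:
        day, month = first, second
    return datetime(year, month, day)

resolve_date("11/7", "en_US")   # November 7
resolve_date("11/7", "en_AU")   # July 11
```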
Locale-Based Currency Resolution
For this Locale... This input... ...Resolves as (Dollar Ambiguity) This input... ...Resolves as (Yen Ambiguity)
United States (en_US) 20 dollars 20.0 USD 20 ¥ 20.0 JPY
Great Britain (en_GB) 20 dollars 20.0 USD 20 ¥ 20.0 JPY
Canada - English (en_CA) 20 dollars 20.0 CAD 20 ¥ 20.0 JPY
Canada - French (fr_CA) 20 dollars 20.0 CAD 20 ¥ 20.0 JPY
Australia (en_AU) 20 dollars 20.0 AUD 20 ¥ 20.0 JPY
Mexico (es_MX) 20 dollars 20.0 MXN 20 ¥ 20.0 CNY
Singapore (zh_SG) 20 dollars 20.0 SGD 20 ¥ 20.0 JPY
United Arab Emirates (ar_AE) 20 dollars 20.0 USD 20 ¥ 20.0 JPY
Arabic (ar_AR) 20 dollars 20.0 USD 20 ¥ 20.0 JPY
France (fr_FR) 20 dollars 20.0 USD 20 ¥ 20.0 JPY
Netherlands (nl_NL) 20 dollars 20.0 USD 20 ¥ 20.0 JPY
Germany (de_DE) 20 dollars 20.0 USD 20 ¥ 20.0 JPY
Italy (it_IT) 20 dollars 20.0 USD 20 ¥ 20.0 JPY
Portugal (pt_PT) 20 dollars 20.0 USD 20 ¥ 20.0 JPY
Spain (es_ES) 20 dollars 20.0 USD 20 ¥ 20.0 JPY
China (zh_CN) 20 dollars 20.0 USD 20 ¥ 20.0 CNY
Japan (ja_JP) 20 dollars 20.0 USD 20 ¥ 20.0 JPY
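The dollar and yen disambiguation above boils down to a locale lookup with a fallback. This sketch uses a hypothetical mapping distilled from a few table rows, not the product's actual logic:

```python
# Locales that resolve "$" or "dollars" to something other than USD,
# and "¥" to something other than JPY, per the table (subset only).
DOLLAR_BY_LOCALE = {"en_CA": "CAD", "fr_CA": "CAD", "en_AU": "AUD",
                    "es_MX": "MXN", "zh_SG": "SGD"}
YEN_BY_LOCALE = {"es_MX": "CNY", "zh_CN": "CNY"}

def resolve_currency(amount, symbol, locale):
    if symbol in ("$", "dollars"):
        code = DOLLAR_BY_LOCALE.get(locale, "USD")  # USD for all other rows
    else:
        code = YEN_BY_LOCALE.get(locale, "JPY")     # JPY for all other rows
    return f"{amount:.1f} {code}"

resolve_currency(20, "dollars", "en_AU")   # '20.0 AUD'
resolve_currency(20, "¥", "zh_CN")         # '20.0 CNY'
```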
Locale-Based Number Resolution
When Consider End User Locale is switched off, the number format defaults to COMMA_DOT (1,000.00).
When the locale is enabled for... ...The recognized format is … Example
United States (en_US) COMMA_DOT 1,000,000.00
Great Britain (en_GB) COMMA_DOT 1,000,000.00
Canada - English (en_CA) COMMA_DOT 1,000,000.00
Canada - French (fr_CA) DOT_COMMA 1.000.000,00
Australia (en_AU) COMMA_DOT 1,000,000.00
Mexico (es_MX) COMMA_DOT 1,000,000.00
Singapore (zh_SG) COMMA_DOT 1,000,000.00
United Arab Emirates (ar_AE) DOT_COMMA 1.000.000,00
Arabic (ar_AR) DOT_COMMA 1.000.000,00
France (fr_FR) SPACE_COMMA 1 000 000,00
Netherlands (nl_NL) DOT_COMMA 1.000.000,00
Germany (de_DE) DOT_COMMA 1.000.000,00
Italy (it_IT) DOT_COMMA 1.000.000,00
Portugal (pt_PT) COMMA_DOT 1,000,000.00
Spain (es_ES) DOT_COMMA 1.000.000,00
China (zh_CN) COMMA_DOT 1,000,000.00
Japan (ja_JP) COMMA_DOT 1,000,000.00
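To see what the three format names imply for parsing, here is a sketch that normalizes a locale-formatted number string to a numeric value (parse_number is an illustrative helper, not a product API):

```python
def parse_number(text, fmt):
    """Normalize a locale-formatted number string to a float.
    fmt names follow the table: COMMA_DOT, DOT_COMMA, SPACE_COMMA."""
    if fmt == "COMMA_DOT":                  # 1,000,000.00
        return float(text.replace(",", ""))
    if fmt == "DOT_COMMA":                  # 1.000.000,00
        return float(text.replace(".", "").replace(",", "."))
    if fmt == "SPACE_COMMA":                # 1 000 000,00
        return float(text.replace(" ", "").replace("\u00a0", "").replace(",", "."))
    raise ValueError(f"unknown format: {fmt}")

parse_number("1,000,000.00", "COMMA_DOT")    # 1000000.0
parse_number("1 000 000,00", "SPACE_COMMA")  # 1000000.0
```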

Custom Entities

Because the built-in entities extract generic information, they can be used in a wide variety of bots. Custom entities, on the other hand, have a narrower application. Like the FinancialBot’s AccountType entity that enables various banking transactions by checking the user input for keywords like checking, savings, and credit cards, they’re tailored to the particular actions that your bot performs.

Composite Bag

A composite bag is a grouping of related entities that can be treated as a whole within a conversation. Using composite bags enables a skill to extract values for multiple entities in one user utterance, which allows a conversation to flow more naturally. Early on in the designing of your skill, you should identify these groups of related entities, which often reflect clear business domains, and build composite bags for them.

For example, a composite bag for a pizza might include entities for type, size, crust, and extra toppings. If a user enters "I'd like a large pepperoni pizza with a gluten-free crust", the skill could extract "large", "pepperoni", and "gluten-free" from that input and not need to prompt the user for those values individually.
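The pizza example can be pictured as simple slot filling: each member entity tries to match the utterance, and only unfilled slots would be prompted for. The entity names, values, and matching logic below are illustrative, not a real skill definition:

```python
# Hypothetical composite bag: entity name -> accepted values.
PIZZA_BAG = {
    "PizzaSize": ["small", "medium", "large"],
    "PizzaTopping": ["pepperoni", "mushroom", "sausage"],
    "PizzaCrust": ["thin", "regular", "gluten-free"],
}

def fill_bag(utterance, bag):
    """Fill each slot from the utterance; report slots still needing a prompt."""
    text = utterance.lower()
    slots = {name: next((v for v in values if v in text), None)
             for name, values in bag.items()}
    missing = [name for name, value in slots.items() if value is None]
    return slots, missing

slots, missing = fill_bag(
    "I'd like a large pepperoni pizza with a gluten-free crust", PIZZA_BAG)
# All three slots fill from one utterance, so nothing is left to prompt for.
```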

You can configure the composite bag entity to resolve its constituent items in different ways: it can prompt for individual entity values when they're missing from the user input, for example, or it can use the value extracted by one of its entities to resolve a second entity.

Composite bags can also include other types of items, such as those that store location and accept free text and attachments.

Composite bag entities allow you to write much shorter, more compact dialog flow definitions because they can be resolved using just one component. See Composite Bag Entities for details on creating and configuring composite bags.

ML Entities

An ML (machine learning) entity uses a model to identify the entity values in a user message. You build this model from training utterances with annotations: labeled text that corresponds to an entity. In the following utterances, Flo's and SFO can be annotated for an entity that identifies vendors for an expense reporting skill:
  • Reimburse me $100 for dinner at Flo's
  • SFO charged $2.75 for parking on May 25th
You can start off by providing your own annotated utterances, but you can bulk up the training data by sourcing Entity Annotation Jobs through Data Manufacturing. After you train the entity, it can interpret the context of a message and generalize entity values. This flexible "fill-in-the-blanks" approach allows an ML entity to recognize values even when they're not included in the training set.

Because anticipating the format or wording of user messages is challenging, especially for multi-lingual skills, you may want to use an ML entity in place of the less flexible Value List and Regular Expression entities. Despite fuzzy matching, Value List entities (both static and dynamic) can often detect entity values only when they match their values or synonyms. "Computer engineer" might not match "computer engineering", for example. Regular Expression entities restrict the user input to matching a predetermined pattern or the wording that precedes or follows an entity value. ML entities, on the other hand, are adaptable and can be made more so through robust training data.

Value List Entities

An entity based on a list of predetermined values, like menu items that are output by a Common Response component. You can optimize the entity’s ability to extract user input by defining synonyms. These can include abbreviations, slang terms, and common misspellings. Synonym values are not case-sensitive: USA and usa, for example, are considered the same value.
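Conceptually, value-list matching is a case-insensitive lookup across values and their synonyms. A minimal sketch with invented example data:

```python
# Hypothetical value list: canonical value -> synonyms (abbreviations,
# slang, common misspellings).
SYNONYMS = {"USA": ["usa", "u.s.a.", "united states"], "UK": ["uk", "britain"]}

def match_value(token, synonyms=SYNONYMS):
    """Return the canonical value for a token, ignoring letter case."""
    token = token.lower()
    for value, syns in synonyms.items():
        if token == value.lower() or token in (s.lower() for s in syns):
            return value
    return None

match_value("usa")       # 'USA' -- case-insensitive, as described above
match_value("Britain")   # 'UK'  -- matched via a synonym
```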

Dynamic Entities

Dynamic entities are entities whose values can be updated even after a skill has been published.
Note

Dynamic entities are only supported on instances of Oracle Digital Assistant that were provisioned on Oracle Cloud Infrastructure (sometimes referred to as the Generation 2 cloud infrastructure). If your instance is provisioned on the Oracle Cloud Platform (as are all version 19.4.1 instances), then you can't use this feature.
Like value list entities, dynamic entities are enum types. However, dynamic entities differ from value list entities in that their values are not static; they may be subject to frequent change. Because of this – and also because dynamic entities can contain thousands of values and synonyms – the values are not usually managed in the UI. They are instead managed by the Dynamic Entities API (described in REST API for Oracle Digital Assistant).
Note

Enhanced speech models created for dynamic entity values are currently trained only after a finalized push request is made from the Dynamic Entity API, so if you change dynamic entity values through the UI, the change won't be included in the enhanced speech models after you retrain the skill. Your changes can only be included after the next update from the API. To preserve your changes, the request's copy parameter must be set to TRUE.

Regular Expression

Resolves an entity using a regular expression (regex), such as (?<=one\s).*(?=\sthree). Regular expressions allow your skill to identify pre-defined patterns in user input, like ticket numbers. Unlike the other entity types, regex-based entities don’t use NLP because the matching is strictly pattern-based.
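The example pattern above is a lookbehind/lookahead pair that captures whatever sits between "one " and " three". You can verify it with Python's re module; the ticket-number pattern at the end is an illustrative assumption, not one supplied by the product:

```python
import re

# The pattern from the text: capture the text between "one " and " three".
pattern = re.compile(r"(?<=one\s).*(?=\sthree)")

match = pattern.search("one two three")
# match.group(0) is "two"

# A more practical regex entity might capture ticket numbers like "ABC-12345"
# (hypothetical format for illustration).
ticket = re.compile(r"[A-Z]{3}-\d{5}")
```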

Entity List

A superset of entities. Using a travel skill as an example, you could fold the entities that you’ve already defined that extract values like airport codes, cities, and airport names into a single entity called Destination. By doing so, you would enable your skill to respond to user input that uses airport codes, airport names, and cities interchangeably. So when a user enters “I want to go from JFK to San Francisco,” the Destination entity detects the departure point using the airport code entities and the destination using the cities entity.

Derived

A derived entity is the child of a built-in entity or another entity that you define. You base this relationship on prepositional phrases (the "to" and "from" in utterances like I want to go from Boston to Dallas or Transfer money from checking to savings). Derived entities can’t be parent entities. And because the NLU Engine detects derived entities only after it detects all of the other types of entities, you can’t add derived entities as members of an entities list.

Create Entities

To create an entity:
  1. Click Entities (This is an image of the Entities icon.) in the side navbar.
  2. Click Add Entity, then enter the name and select the type. The dialog's fields reflect the entity type. For example, for regular expression entities, you can add the expression. For value list entities, you add the values and synonyms.
    If your skill supports multiple languages through Digital Assistant's native language support, then you need to add the foreign-language counterparts for the Value List entity's values and synonyms.
    Description of multilingual_entity_values.png follows

    Because these values need to map to the corresponding value from the primary language (the Primary Language Value), you need to select the primary value before you add its secondary-language counterpart. For example, if you've added French as a secondary language to a skill whose primary language is English, you first select small as the Primary Language Value and then add petite.
    Description of add_ml_entity_value.png follows

  3. As an optional step, enter a description. You might use the description to spell out the entity, like the pizza toppings for a PizzaTopping entity. This description is not retained when you add the entity to a composite bag.
  4. You can add the following functions, which are optional. They can be overwritten if you add the entity to a composite bag.
    • If a value list entity has a long list of values, but you want to show users only a few options at a time, you can set the pagination for these values by entering a number in the Enumeration Range Size field, or by defining an Apache FreeMarker expression that evaluates to this number. For example, you can define an expression that returns enum values based on the channel.

      When you set this property to 0, the skill won't output a list at all, but will instead match the user input against an entity value.

      If you set this number to one that's lower than the total number of values defined for this entity, then the System.ResolveEntities component displays a Show More button to accompany each full set of values. If you use the System.CommonResponse component to resolve the entity, then you can configure the Show More button yourself.
      This is an image of the Show More button.
      You can change the Show More button text using the showMoreLabel property that belongs to the System.ResolveEntities and System.CommonResponse components.

    • Add an error message for invalid user input. Use an Apache FreeMarker expression that includes the system.entityToResolve.value.userInput property. For example, ${system.entityToResolve.value.userInput!'This'} is not a valid pizza type.
    • To allow users to pick more than one value from a value list entity, switch on Multiple Values. When you switch this on, the values display as a numbered list.
      This is an image of the numbered multi-value list.
      Switching this option off displays the values as a list of options, which allows only a single choice.
    • Switching on Fuzzy Match increases the chances of the user input matching a value, particularly when your values don’t have a lot of synonyms. Fuzzy matching uses word stemming to identify matches from the user input. Switching off fuzzy matching enforces strict matching, meaning that the user input must be an exact match to the values and synonyms; "cars" won’t match a value called "car", nor will "manager" match a "development manager" value.
    • For skills that are configured with a translation service, entity matching is based on the translation of the input. If you switch on Match Original Value, the original input is also considered in entity matching, which could be useful for matching values that are untranslatable.
    • To force a user to select a single value, switch on Prompt for Disambiguation and add a disambiguation prompt. By default, this message is Please select one value of <item name>, but you can replace this with one made up solely of text (You can only order one pizza at a time. Which pizza do you want to order?) or a combination of text and FreeMarker expressions. For example:
      "I found multiple dates: <#list system.entityToResolve.value.disambiguationValues.Date as date>${date.date?number_to_date}<#sep> and </#list>. Which date should I use as expense date?"
    • Define a validation rule using a FreeMarker expression.
      Note

      You can only add prompts, disambiguation, and validation for built-in entities when they belong to a composite bag.
  5. Click Create.
  6. Next steps:
    1. Add the entity to an intent. This informs the skill of the values that it needs to extract from the user input during the language processing. See Add Entities to Intents.
    2. In the dialog flow, declare a context variable for the entity.
    3. Access the variable values using Apache FreeMarker expressions. See Built-In FreeMarker Array Operations.
    4. Click Validate and review the validation messages for errors related to entity event handlers (if used), potential problems like multiple values in a value list entity sharing the same synonym, and for guidance on applying best practices such as adding multiple prompts to make the skill more engaging.
Value List Entities for Multiple Languages
When you have a skill that is targeted to multiple languages and which uses Digital Assistant's native language support, you can set values for each language in the skill. For each entity value in a skill's primary language, you should designate a corresponding value in each additional language.

Tip:

To ensure that your skill consistently outputs responses in the detected language, always include useFullEntityMatches: true in System.CommonResponse, System.ResolveEntities, and System.MatchEntities states. As described in Add Natively-Supported Languages to a Skill, setting this property to true (the default) returns the entity value as an object whose properties differentiate the primary language from the detected language. When referenced in Apache FreeMarker expressions, these properties ensure that the appropriate language displays in the skill's message text and labels.
Word Stemming Support in Fuzzy Match

Starting with Release 22.10, fuzzy matching for value list entities is based on word stemming, where a value match is based on the lexical root of the word. In previous versions, fuzzy matching was enabled through partial matching and auto correct. While this approach was tolerant of typos in the user input, including transposed words, it could also result in matches to more than one value within the value list entity. With stemming, this scatter is eliminated: matches are based on the word order of the user input, so either a single match is made, or none at all. For example, "Lovers Veggie" would not result in any matches, but "Veggie Lover" would match to the Veggie Lovers value of a pizza type entity. (Note that "Lover" is stemmed.) Stop words, such as articles and prepositions, are ignored in extracted values, as are special characters. For example, both "Veggie the Lover" and "Veggie////Lover" would match the Veggie Lovers value.
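The behavior described above can be approximated in a few lines. Crude suffix stripping stands in for the real stemmer here, and the stop-word list is a tiny illustrative subset; this is not the product's matching algorithm:

```python
import re

STOP_WORDS = {"the", "a", "an", "of", "with"}

def stem(word):
    # Toy stemmer: treat "lovers" and "lover" as the same root.
    return word[:-1] if word.endswith("s") else word

def normalize(text):
    # Keep only letters (drops special characters like "////"),
    # lowercase, drop stop words, and stem each remaining word.
    words = re.findall(r"[a-z]+", text.lower())
    return [stem(w) for w in words if w not in STOP_WORDS]

def fuzzy_match(user_input, value):
    # Word order matters: the normalized sequences must be identical.
    return normalize(user_input) == normalize(value)

fuzzy_match("Veggie the Lover", "Veggie Lovers")   # matches
fuzzy_match("Lovers Veggie", "Veggie Lovers")      # no match (word order)
```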

Create ML Entities

ML Entities are a model-driven approach to entity extraction. Like intents, you create ML Entities from training utterances – likely the same training utterances that you used to build your intents. For ML Entities, however, you annotate the words in the training utterances that correspond to an entity.

To get started, you can annotate some of the training data yourself, but as is the case for intents, you can develop a more varied (and therefore robust) training set by crowd sourcing it. As noted in the training guidelines, robust entity detection requires anywhere from 600 to 5,000 occurrences of each ML entity throughout the training set. Also, if the intent training data is already expansive, then you may want to crowd source it rather than annotate each utterance yourself. In either case, you should analyze your training data to find out if the entities are evenly represented and if the entity values are sufficiently varied. With the annotations complete, you train the model and then test it. After reviewing the entities detected in the test runs, you can continue to update the corpus and retrain to improve the accuracy.

To create an ML Entity:
  1. Click + Add Entity.
  2. Complete the Create Entity dialog. Keep in mind that the Name and Description appear in the crowd worker pages for Entity Annotation Jobs.
    • Enter a name that identifies the annotated content. A unique name helps crowd workers.
    • Enter a description. Although this is an optional property, crowd workers use it, along with the Name property, to differentiate entities.
    • Choose ML Entity from the list.
  3. Switch on Exclude System Entity Matches when the training annotations contain names, locations, numbers, or other content that could potentially clash with system entity values. Setting this option prevents the model from extracting system entity values that are within the input that's resolved to this ML entity. It enforces a boundary around this input so that the model recognizes it only as an ML entity value and does not parse it further for system entity values. You can set this option for composite bag entities that reference ML entities.
  4. Click Create.
  5. Click +Value List Entities to associate this entity with up to five Value List Entities. This is optional, but associating an ML Entity with a Value List Entity combines the contextual extraction of the ML Entity and the context-agnostic extraction of the Value List Entity.
  6. Click the DataSet tab. This page lists all the utterances for each ML Entity in your skill, including the utterances that you've added yourself to bootstrap the entity, those submitted from crowd-sourcing jobs, and those imported as JSON objects. From this page, you can add utterances manually or in bulk by uploading a JSON file. You can also manage the utterances from this page by editing them (including annotating or re-annotating them), or by deleting, importing, and exporting them.
    • Add utterances manually:
      • Click Add Utterance. After you've added the utterance, click Edit Annotations to open the Entity List.
        Note

        You can only add one utterance at a time. If you want to add utterances in bulk, you can either add them through an Entity Annotation job, or you can upload a JSON file.
      • Highlight the text relevant to the ML Entity, then complete the labeling by selecting the ML Entity from the Entity List. You can remove an annotation by clicking x in the label.
        This is an image of the Delete icon on an annotation.

    • Add utterances from a JSON file. This JSON file contains a list of utterance objects.
      [
        {
          "Utterance": {
            "utterance": "I expensed $35.64 for group lunch at Joe's on 4/7/21",
            "languageTag": "en",
            "entities": [
              {
                "entityValue": "Joe's",
                "entityName": "VendorName",
                "beginOffset": 37,
                "endOffset": 42
              }
            ]
          }
        },
        {
          "Utterance": {
            "utterance": "Give me my $30 for Coffee Klatch on 7/20",
            "languageTag": "en",
            "entities": [
              {
                "entityName": "VendorName",
                "beginOffset": 19,
                "endOffset": 32
              }
            ]
          }
        }
      ]
      You can upload it by clicking More > Import to retrieve it from your local system.
      The entities object describes the ML entities that have been identified within the utterance. Although the preceding example illustrates a single entities object for each utterance, an utterance may contain multiple ML entities, which means multiple entities objects:
      [
        {
          "Utterance": {
            "utterance": "I want this and that",
            "languageTag": "en",
            "entities": [
              {
                "entityName": "ML_This",
                "beginOffset": 7,
                "endOffset": 11
              },
              {
                "entityName": "ML_That",
                "beginOffset": 16,
                "endOffset": 20
              }
            ]
          }
        },
        {
          "Utterance": {
            "utterance": "I want less of this and none of that",
            "languageTag": "en",
            "entities": [
              {
                "entityName": "ML_This",
                "beginOffset": 15,
                "endOffset": 19
              },
              {
                "entityName": "ML_That",
                "beginOffset": 32,
                "endOffset": 36
              }
            ]
          }
        }
      ]
      entityName identifies the ML Entity itself and entityValue identifies the text labeled for the entity. entityValue is an optional key that you can use to validate the labeled text against changes made to the utterance. The label itself is identified by the beginOffset and endOffset properties, which represent the offsets for the characters that begin and end the label. These offsets are determined by character, not by word, and are calculated from the first character of the utterance, starting at 0.
      Note

      You can't create the ML Entities from this JSON. They must exist before you upload the file.
      If you don't want to determine the offsets, you can leave the entities object undefined and then apply the labels after you upload the JSON file.
      [
        {
          "Utterance": {
            "utterance": "I expensed $35.64 for group lunch at Joe's on 4/7/21",
            "languageTag": "en",
            "entities": []
          }
        },
        {
          "Utterance": {
            "utterance": "Give me my $30 for Coffee Klatch on 7/20",
            "languageTag": "en",
            "entities": []
          }
        }
      ]
      The system checks for duplicates to prevent redundant entries. Only changes made to the entities definition in the JSON file are applied. If an utterance has been changed in the JSON file, then it's considered a new utterance.
    • Edit an annotated utterance:
      • Click Edit (This is an image of the Edit ML Entity icon.) to remove the annotation.
        Note

        A modified utterance is considered a new (unannotated) utterance.
      • Click Edit Annotations to open the Entity List.
      • Highlight the text, then select an ML Entity from the Entity List.
      • If you need to remove an annotation, click x in the label.
  7. When you've completed annotating the utterances, click Train to update both Trainer Tm and the entity model.
  8. Test the recognition by entering a test phrase in the Utterance Tester, ideally one with a value not found in any training data. Check the results to find out if the model detected the correct ML Entity and if the text has been labeled correctly and completely.
  9. Associate the ML Entity with an intent.
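The character offsets used in the annotation JSON shown above can be computed programmatically rather than counted by hand. A sketch (annotate is a hypothetical helper, not part of the product):

```python
def annotate(utterance, value, entity_name):
    """Build an entities entry by locating the labeled text in the utterance.
    Offsets are character-based: beginOffset is the index of the first
    character (starting at 0) and endOffset the index just past the last."""
    begin = utterance.find(value)
    if begin < 0:
        raise ValueError(f"{value!r} not found in utterance")
    return {"entityName": entity_name, "entityValue": value,
            "beginOffset": begin, "endOffset": begin + len(value)}

annotate("I expensed $35.64 for group lunch at Joe's on 4/7/21",
         "Joe's", "VendorName")
# Reproduces the offsets from the sample file: beginOffset 37, endOffset 42.
```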
Exclude System Entity Matches

Switching on Exclude System Entity Matches prevents the model from replacing previously extracted system entity values with competing values found within the boundaries of an ML entity. With this option enabled, "Create a meeting on Monday to discuss the Tuesday deliverable" keeps the DATE_TIME and ML entity values separate by resolving the applicable DATE_TIME entity (Monday) and ignoring "Tuesday" in the text that's recognized as the ML entity ("discuss the Tuesday deliverable").

When this option is disabled, the skill instead resolves two DATE_TIME entities values, Monday and Tuesday. Clashing values like these diminish the user experience by updating a previously slotted entity value with an unintended value or by interjecting a disambiguation prompt that interrupts the flow of the conversation.
Note

You can set the Exclude System Entity Matches option for composite bag entities that reference an ML entity.
Import Value List Entities from a CSV File

Rather than creating your entities one at a time, you can create entire sets of them when you import a CSV file containing the entity definitions.

This CSV file contains columns for the entity name (entity), the entity value (value), and any synonyms (synonyms). You can create this file from scratch, or you can reuse or repurpose a CSV that has been created from an export.

Whether you're starting anew or using an exported file, you need to be mindful of the version of the skill that you're importing to because of the format and content changes for native language support that were introduced in Version 20.12. Although you can import a CSV from a prior release into a 20.12 skill without incident in most cases, there are still some compatibility issues that you may need to address. But before that, let's take a look at the format of a pre-20.12 file. This file is divided into the following columns: entity, value, and synonyms. For example:
entity,value,synonyms
PizzaSize,Large,lrg:lrge:big
PizzaSize,Medium,med
PizzaSize,Small,little
For skills created with, or upgraded to, Version 20.12, the import files have language tags appended to the value and synonyms column headers. For example, if the skill's primary native language is English (en), then the value and synonyms columns are en:value and en:synonyms:
entity,en:value,en:synonyms
PizzaSize,Large,lrg:lrge:big
PizzaSize,Medium,med
PizzaSize,Small,
PizzaSize,Extra Large,XL
CSVs that support multiple native languages require additional sets of value and synonyms columns for each secondary language. If a native English language skill's secondary language is French (fr), then the CSV has fr:value and fr:synonyms columns as counterparts to the en columns:
entity,en:value,en:synonyms,fr:value,fr:synonyms
PizzaSize,Large,lrg:lrge:big,grande,grde:g
PizzaSize,Medium,med,moyenne,moy
PizzaSize,Small,,petite,p
PizzaSize,Extra Large,XL,pizza extra large,
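The language-tagged layout above can be processed generically: split each column header on the first colon to get the language and the kind of column, and split synonym cells on colons. load_value_lists below is an illustrative parser, not a product tool:

```python
import csv
import io

CSV_TEXT = """entity,en:value,en:synonyms,fr:value,fr:synonyms
PizzaSize,Large,lrg:lrge:big,grande,grde:g
PizzaSize,Small,,petite,p
"""

def load_value_lists(text):
    """Parse a 20.12-style value-list CSV with 'lang:value'/'lang:synonyms'
    columns into {entity: [{lang: {"value": ..., "synonyms": [...]}}]}."""
    entities = {}
    for row in csv.DictReader(io.StringIO(text)):
        values = {}
        for col, cell in row.items():
            if ":" not in col:          # the plain "entity" column
                continue
            lang, kind = col.split(":", 1)
            if kind == "value":
                values.setdefault(lang, {})["value"] = cell
            else:                       # synonyms are colon-separated
                values.setdefault(lang, {})["synonyms"] = cell.split(":") if cell else []
        entities.setdefault(row["entity"], []).append(values)
    return entities

entities = load_value_lists(CSV_TEXT)
entities["PizzaSize"][0]["fr"]["value"]   # 'grande'
```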
Here are some things to note if you plan to import CSVs across versions:
  • If you import a pre-20.12 CSV into a 20.12 skill (including those that support native languages or use translation services), the values and synonyms are imported as primary-language values and synonyms.
  • All entity values for both the primary and secondary languages must be unique within an entity, so you can't import a CSV if the same value has been defined more than once for a single entity. Duplicate values may occur in pre-20.12 versions, where values can be considered unique because of variations in letter casing. This is not true for 20.12, where casing is more strictly enforced. For example, you can't import a CSV if it has both PizzaSize, Small and PizzaSize, SMALL. If you plan to upgrade to Version 20.12, you must first resolve all entity values that are identical except for letter casing.
  • Primary language support applies to skills created using Version 20.12 and higher, so you must first remove language tags and any secondary language entries before you can import a Version 20.12 CSV into a skill created with a prior version.
When you import a 20.12 CSV into a 20.12 skill:
  • You can import a multi-lingual CSV into skills that do not use native language support, including those that use translation services.
  • If you import a multi-lingual CSV into a skill that supports native languages or uses translation services, then only rows that provide a valid value for the primary language are imported. The rest are ignored.
With these caveats in mind, here's how you create entities through an import:
  1. Click Entities (This is an image of the Entities icon.) in the side navbar.

  2. Click More, choose Import Value list entities, and then select the .csv file from your local system.
    Description of import_entities.png follows

  3. Add the entity or entities to an intent (or to an entity list and then to an intent).

Export Value List Entities to a CSV File
You can export the values and synonyms in a CSV file for reuse in another skill. The exported CSVs share the same format as the CSVs used for creating entities through imports in that they contain entity, value, and synonyms columns. These CSVs have release-specific requirements that can impact their reuse.
  • The CSVs exported from skills created with, or upgraded to, Version 20.12 are equipped for native language support though the primary (and sometimes secondary) language tags that are appended to the value and synonyms columns. For example, the CSV in the following snippet has a set of value and synonyms columns for the skill's primary language, English (en) and another set for its secondary language, French (fr):
    entity,en:value,en:synonyms,fr:value,fr:synonyms
    The primary language tags are included in all 20.12 CSVs regardless of native language support. They are present in skills that are not intended to perform any type of translation (native or through a translation service) and in skills that use translation services.
  • The CSVs exported from skills running on versions prior to 20.12 have the entity, value, and synonyms columns, but no language tags.
To export value list entities:
  1. Click Entities (This is an image of the Entities icon.) in the side navbar.

  2. Click More, choose Export Value list entities and then save the file.
    Description of export_entities.png follows

    The exported .csv file is named for your skill. If you're going to reuse this file as an import, then you may need to perform some of the edits described in Import Intents from a CSV File when you import it to, or export it from, Version 20.12 skills and prior versions.

Composite Bag Entities
Composite bag entities allow you to write much shorter, more compact dialog flow definitions because they can be resolved using just one component (either System.ResolveEntities or System.CommonResponse). We recommend this approach because you don't need components like System.Switch or System.setVariable to capture all of the user input that's required to perform a business transaction. Instead, a single component can prompt users to provide values for each item in the bag. The prompts themselves are condition-specific because they're based on the individual configuration for each bag item. Using the composite bag entity, an entity event handler or Apache FreeMarker, and either the System.CommonResponse or System.ResolveEntities component, your skill can:
  • Capture all free text, allow file uploads, and collect the user's current location with the STRING, ATTACHMENT, and LOCATION items.

  • Execute individual behavior for each member entity in the bag–You can add value-specific prompts and error messages for individual entities within the composite bag (which includes custom entities, system entities, and the STRING, ATTACHMENT, and LOCATION items). You can also control which entities should (or shouldn't) match the user input. Because you can create a prompt sequence, the skill can output different prompts for each user attempt.

  • Present multi-select pick lists.

  • Validate value matches based on validation rules.

  • Support the unhappy flow–Users can correct prior entries.

  • Execute temporary, match-based transitions–The dialog flow can temporarily exit from the component when an entity has been matched, so that another state can perform a supporting function like a REST call. After the function completes, the dialog flow transitions back to the component so that the value matching can continue. For example:
    • After a user uploads a receipt, the receipt itself needs to be scanned so that values like expense date, amount, and expense type can be extracted from it for the other entities in the bag. This allows the component to fill the rest of the values from the receipt, not from any user input.

    • The skill outputs a message like, “Almost there, just a few more questions” in between matching sets of entities in the bag.

    • The user input must be validated through a backend REST call. The validation might be required immediately, because it determines which of the bag items must prompt for further user input. Alternatively, the call might return information that needs to be shared with the user, like an out-of-policy warning.

  • Disambiguate values–You can isolate a value from the user input through entity-specific prompts and component properties. These include support for corrections to prior input (the “unhappy” flow) and for prompting user input for specific built-in entity properties.

Explore the CbPizzaBot Skill
The CbPizzaBot skill gives you a taste of how a composite bag entity and the System.CommonResponse component can work together to output responses based on input values.
  • Customized Messages–Each value for the PizzaType entity is rendered as a card.

  • Global Actions–Whenever you enter an invalid value, the skill adds a value-specific error message to the card and a Cancel button, which lets you exit the dialog.

  • Multi-Value Pick List–The Toppings entity is rendered as a paginated list of values. Entering 7 (Extra Cheese) triggers a conditional message, which is a single-value list.

  • Location–The skill prompts for, and collects, the user’s coordinates (longitude and latitude).

This skill doesn’t use any custom components for this functionality. Instead, it's all created declaratively.
Create a Composite Bag Entity
  1. Click Entities (This is an image of the Entities icon.) in the side navbar.

  2. Click Add Entities.

  3. Choose Composite Bag as the entity type.

  4. Enter the name and description.
  5. Click + Event Handler if you want to execute the composite bag's prompting and logic programmatically using entity event handlers.
  6. Click + Bag Item to open the Add Bag Item dialog. If you’re adding a built-in entity or an existing custom entity, you can create a bag-specific name for it and add a description of its role within the context of the composite bag.

  7. You can fill the bag with custom entities, built-in entities, and the following:
    • STRING—Captures free text from the user.

    • LOCATION—Captures the user’s location.

    • ATTACHMENT—Accepts audio, video, image, or other files uploaded by the user. The composite bag entity stores the URL where the attachment is hosted.

    Note

    You are prompted for a subtype when you add the DATE_TIME entity.
    The items get resolved in the order that you add them. However, the sequence can be affected by how you configure individual members of the composite bag.
  8. Clicking Close returns you to the Entities page, but you can add other bag-specific capabilities to the item first (or update it later by clicking This is an image of the Edit icon. in the Entities page).

  9. Next steps:
    • Add individual error messages, disambiguation prompts, or conditional prompting for the bag items.
      Note

      These will be overwritten if you add the entity to a composite bag.
    • Add the entity to an intent. See Add Entities to Intents.

    • Configure the dialog flow to use the composite bag entity. See Configure the Dialog Flow for Composite Bag Entities and use the CbPizzaBot as a reference if you’re using the System.CommonResponse component.

Enhanced Slot Filling
When you enable enhanced slot filling by switching on Use Enhanced Slot Filling in Settings > Configuration:
  • Only the currently resolving item will be updated. When a match applies to more than one bag item, the currently resolving bag item takes precedence over other items. If you switch off enhanced slot filling, then all items are updated with the same value.
  • If the currently resolving item is a STRING bag item, then no other bag items are ever updated.
  • If an entity match applies to multiple (non-resolving) bag items, a disambiguation dialog displays, allowing the user to choose which item should be updated instead of updating all bag items.
  • The entity's Prompt for Disambiguation switch is ignored. We recommend that you implement custom disambiguation with an entity event handler.
Note

The Use Enhanced Slot Filling toggle is switched on by default for skills created using Version 22.08 of the platform. It's switched off for skills that have been upgraded to this version.
Add Prompts
You can add a single prompt, or create a sequence of prompts, each providing increasingly specific information for each time the user enters an invalid value. By default, prompting is enabled. To add these prompts:
  1. If you want to enable prompting, leave the Prompt for Value field blank (its default state). Entering false in the Prompt for Value field prevents prompting. To prompt for a conditional value, add a boolean FreeMarker expression that evaluates to either true (for prompting) or false.

    Tip:

    When you set Prompt for Value to false, the item can still be resolved as part of another item that’s being prompted for when you enable Out of Order Extraction.
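As an illustration, here's a hedged example of a conditional expression for the Prompt for Value field (the order variable, OrderType item, and DeliveryDate scenario are all hypothetical): prompt for a DeliveryDate item only when the order is for delivery:

```
${(order.value.OrderType == 'Delivery')?then('true', 'false')}
```

Because the field expects a boolean result, the ?then built-in renders the expression as the literal strings true or false.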
  2. Click Add Prompt to build the prompt sequence. You can reorder it by shuffling the fields through drag and drop gestures, or by renumbering them. You can randomize the output of the prompts when you give two or more prompts the same number.
    Note

    You can only add prompts for built-in entities when you add them to a composite bag.
    You can store prompts in resource bundles (for example, ${rb.askCheese}), or write them as combinations of text and FreeMarker expressions.
Updating Slotted Values with Apache FreeMarker Expressions

In the Updatable field, enter an Apache FreeMarker expression that evaluates to true to allow the value slotted for a composite bag item to be updated.
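For example, here's a hedged sketch (the expense variable and Status item are hypothetical): allow the item's value to be updated only while the expense hasn't yet been submitted:

```
${(expense.value.Status != 'Submitted')?then('true', 'false')}
```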

Enable Out-of-Order Extraction
Out-of-order extraction enables value slotting and updating for a composite bag item at any point in the conversation, regardless of whether the composite bag has prompted the user for the value. Using the following rules, you can set how, when, or if a value can be slotted or changed at any point in the conversation for any item or item subtype (such as the DATE_TIME subtypes).
  • Always – The default option. When you choose this option for an item, its value can be slotted with no prompting. For example, the PizzaSize entity might be resolved when a customer enters I want a large pizza. This option also enables the item value to be changed at any point, provided that the expression in the Updatable property does not evaluate to false. For example, when the composite bag prompts for the PizzaType entity, the customer might then reply Veggie please, but make it a medium. The skill can update the PizzaSize entity value with medium without restarting the conversation because Always is enabled for the bag's PizzaSize and PizzaType items.
    Note

    Although this option is the default behavior, it may not always be appropriate for STRING items. If you choose this option for a STRING item, for example, the first user message would be stored by the STRING item instead of getting matched by the intended entity (which might be designated as the first item in the bag to get resolved).
  • Never – When you choose this option, the item is only slotted after it's been prompted for, even when other user messages contain valid values. Choose Never to prevent inadvertent matches.
  • Only when resolving the intent utterance – Restricts the out-of-order value slotting to the first user utterance that has been resolved to the intent that's associated with the composite bag entity.
Here are examples of the out-of-order extraction rules as they're applied to a PizzaToppings composite bag item.
Out of Order Extraction Rule Initial User Utterance Value Slotted Notes
Always Order pizza with tuna Tuna The value for the PizzaToppings item can be matched whenever the user message contains a valid value ("Mushrooms instead!"). It can be slotted or updated at any point in the conversation without prompting.
Never Order pizza with tuna None The value for the PizzaToppings item cannot be slotted out of order or updated ad hoc. It can only be matched when it's prompted for.
Only when resolving the intent utterance Order pizza with tuna Tuna. However, if the user entered "Order large pizza", the composite bag would have to prompt for the PizzaToppings value. The PizzaToppings item can be slotted out of order only when the first user utterance that resolves to an intent has a matching value. Otherwise, this value must be prompted for. The composite bag will not allow ad hoc updating or slotting of this item.
Enable Extract With
Use the Extract With option to enable your skill to resolve one bag item using the input entered for a second item in the bag. This option, which allows your skill to handle related values, provides greater flexibility for user input. Users can enter home instead of a full address, for example. Here's how:
  • The composite bag has two address-related entities: NamedAddress, a list value entity with values like home and office, and DeliveryAddress, an ADDRESS entity.
  • The DeliveryAddress entity's prompt is Where do you want that delivered?
  • The NamedAddress entity does not prompt for input (false is entered in the Prompt for Value field).
  • The NamedAddress entity can be extracted with DeliveryAddress (DeliveryAddress is selected from the Extract With menu).

When the composite bag prompts for the DeliveryAddress entity, it can resolve the entity using either a physical address, or one of the NamedAddress list values ( home or office).

Add Validation Rules
Each item in the bag can have its own validation rules. You can add a validation rule by first clicking +Validation Rule and then adding a FreeMarker expression and a text prompt. The expression uses the following pattern to reference the item value, where varName is the name of the composite bag entity that’s declared as a context variable in the dialog flow definition:
${varName.value.itemName}
If this expression evaluates to false, then the user input is not valid.
The following example of a validation expression is for an item called Amount. It’s a built-in entity, CURRENCY. To return a number amount for the comparison, the expression adds the CURRENCY entity’s amount property:
${expense.value.Amount.amount > 4}
The corresponding validation message can also reflect the user input through a FreeMarker expression. For example, the following message uses the type of currency extracted from the user's input as part of the validation message:
Amounts below 5 ${expense.value.Amount.currency} cannot be expensed. Enter a higher amount or type 'cancel'.
To find out about other CURRENCY properties (and the other built-in entity properties as well), see Built-In Entities and Their Properties.
Configure the Dialog Flow for Composite Bag Entities
  1. In the context node, declare the composite bag entity as a variable:
    ...
    metadata:
      platformVersion: "1.1"
    main: true
    name: "ExpenseBot"
    context:
      variables:
        expense: "Expense"
        iResult: "nlpresult"
  2. You can use System.ResolveEntities or System.CommonResponse. Both components let you leverage the composite bag entity, and each provides its own benefits. System.ResolveEntities is the simpler of the two, with a small set of properties, while System.CommonResponse provides more control over the UI that’s used to resolve the entities in the bag. For example, you can add conditional logic to determine prompts and value-related global actions.

    Tip:

    Because the metadata for the System.CommonResponse component can become very complex when you use composite bag entities, we recommend that you use the System.ResolveEntities component instead and use entity event handlers for any UI customizations.
  3. Reference the composite bag entity context variable in the component’s variable property and then define the other properties as needed. System.ResolveEntities and The Component Properties describe them and provide further examples.

    Here’s an example of the System.ResolveEntities component:
    createExpense:
        component: "System.ResolveEntities"
        properties:
          variable: "expense"
          useFullEntityMatches: true
          nlpResultVariable: "iResult"
          cancelPolicy: "immediate"
        transitions:
          actions:
            cancel: "cancelExpense"
          return: "done"          
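For comparison, here's a minimal, hypothetical sketch of a similar state using the System.CommonResponse component. The state name, property values, and the single text response item are illustrative; real metadata would typically define richer response items and actions:

```yaml
    createExpenseCR:
        component: "System.CommonResponse"
        properties:
          variable: "expense"
          nlpResultVariable: "iResult"
          processUserMessage: true
          metadata:
            responseItems:
            - type: "text"
              text: "${system.entityToResolve.value.prompt}"
        transitions:
          actions:
            cancel: "cancelExpense"
          return: "done"
```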
The system.entityToResolve Variable
The system.entityToResolve variable provides information on the current status of the entity resolution process as performed by the System.ResolveEntities and System.CommonResponse components. You typically reference the properties of this variable in the System.CommonResponse metadata when you want to customize messages. You can use it to define the logic for an entity's error message, or for various properties that belong to the System.ResolveEntities and System.CommonResponse components. Append the following properties to return the current entity value:
  • userInput
  • prompt
  • promptCount
  • updatedEntities
  • outOfOrderMatches
  • disambiguationValues
  • enumValues
  • needShowMoreButton
  • rangeStartVar
  • nextRangeStart
You can also reference these properties in FreeMarker expressions used in bag item properties like prompt, errorMessage, and validation rules.
Here's an example of using this variable to return the current user input in an entity's error message:
Sorry, '${system.entityToResolve.value.userInput!'this'}' is not a valid pizza size.
Here's an example of using various system.entityToResolve definitions. Among these is a message defined for the text property, which confirms an update made to a previously set entity value using an Apache FreeMarker list directive and the updatedEntities property.
    metadata:
      responseItems:        
      - type: "text" 
        text: "<#list system.entityToResolve.value.updatedEntities>I have updated <#items as ent>${ent.description}<#sep> and </#items>. </#list><#list system.entityToResolve.value.outOfOrderMatches>I got <#items as ent>${ent.description}<#sep> and </#items>. </#list>"
      - type: "text" 
        text: "${system.entityToResolve.value.prompt}"
        actions:
        - label: "${enumValue}"
          type: "postback"
          iteratorVariable: "system.entityToResolve.value.enumValues"
For global actions, this variable controls the Show More global action with the needShowMoreButton, rangeStartVar, and the nextRangeStart properties:
        globalActions: 
        - label: "Show More"
          type: "postback" 
          visible:
            expression: "${system.entityToResolve.value.needShowMoreButton}"
          payload:
            action: "system.showMore"
            variables: 
              ${system.entityToResolve.value.rangeStartVar}: ${system.entityToResolve.value.nextRangeStart} 
        - label: "Cancel"
          type: "postback" 
          visible:
            onInvalidUserInput: true
          payload:
            action: "cancel"
The Show More label must include a system.showMore action (action: "system.showMore"). Otherwise, it won't function.
entityToResolve Expressions
Expression Description
${system.entityToResolve.value.resolvingField} Returns the name of the bag item.
${system.entityToResolve.value.allMatches[0].entityName} Returns the entity name that's referenced by the bag item. The allMatches array contains all of the entities whose values could potentially be updated by the user's message.
${<variable>.value[system.entityToResolve.value.resolvingField]} Returns the bag item value that users enter or select.
${system.entityToResolve.value.userInput} Returns the text entered by the user. You can use this expression to log the user input or display it in the chat, for example, when a user enters an invalid value.
${system.entityToResolve.value.outOfOrderMatches[n].entityName} Returns the name(s) of the entities that are extracted out-of-order. Along with the values that the System.ResolveEntities or the System.CommonResponse components prompt for, users may provide additional values that trigger out-of-order value extraction and updates to other entities in the composite bag.
${system.entityToResolve.value.outOfOrderMatches[n].name} Returns the name of the composite bag item.
${system.entityToResolve.value.outOfOrderMatches?has_content?then(…,…)} Returns the value of an entity that has been matched out of order. Because it's possible that no entity has been matched out of order, this expression uses the has_content built-in.
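For example, here's a hedged use of the has_content pattern from the table above (the acknowledgement text is illustrative): prefix the next prompt with an acknowledgement only when something was matched out of order:

```
${system.entityToResolve.value.outOfOrderMatches?has_content?then("Got that as well. ", "")}${system.entityToResolve.value.prompt}
```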
Entity Event Handlers
You can execute validation, prompting, and disambiguation for the composite bag entity items programmatically using Entity Event Handlers. An Entity Event Handler (EEH) is a JavaScript (or TypeScript) implementation that's created for a composite bag entity and deployed as a custom code service.
Note

You can manage the service deployed for the EEH from the Components page This is an image of the Components icon in the left navbar.
You can control the resolution behavior for both individual bag items and for the entity itself by defining the event handler functions provided by the bots-node-sdk. For example, the following snippet illustrates defining a validate event on a bag item called ExpenseDate that prevents users from entering a future date when filing an expense report.
ExpenseDate: {
  validate: async (event, context) => {
    if (new Date(event.newValue.date) > new Date()) {
      context.addValidationError("ExpenseDate", context.translate('ExpenseDate.text'));
      return false;
    }
  }
}
The bots-node-sdk’s Writing Entity Event Handlers documentation describes the overall structure of the event handler code, the item- and entity-level events, and the EntityResolutionContext methods like addValidationError and translate in the above snippet.

Because Entity Event Handlers are written in JavaScript, you can use advanced logic that isn’t easily achieved – or even feasible – with the FreeMarker expressions that you can use to define the validation, errors, and prompts in the edit bag item page and the dialog flow. They’re also easier to debug. That said, you don't have to choose Entity Event Handlers over FreeMarker expressions. You can combine the two. For example, you can use FreeMarker expressions for simple validations and prompts and reserve an EEH for more complicated functions like calling a REST API when all of the bag items have been resolved.

Create Entity Event Handlers with the Event Handler Code Editor

You can build the EEH using the Event Handler Code editor that's accessed from the composite bag properties page or with the IDE of your choice. While the Event Handler Code editor has some advantages over a third-party tool, you may want to alternate with a third-party IDE depending on the size of the task and the libraries that you need. To weigh the pros and cons, refer to Which IDE Should I Use?

To access the Event Handler Code editor:
  1. Click + Event Handler.
  2. Complete the Create Event Handler dialog by adding a service name and a handler name.

After you've created the handler, you can open the editor by clicking This is an image of the Edit icon..

The editor is populated with starter code. Its handlers object contains entity, items, and custom objects. Within these objects, you define the entity-level events, which are triggered for the entire composite bag, the item-level events, which control the resolution of the individual bag items, and the custom events, which are fired on postback actions. By default, the handlers object has an entity object defined. The items and custom objects get populated when you add an item-level or custom event template.
Description of eeh_default_template.png follows

The events themselves are asynchronous JavaScript functions that take two arguments:
  • event: A JSON object of the event-specific properties.
  • context: A reference to the EntityResolutionContext class, whose methods (such as addValidationError in the following snippet) provide the event handler logic.
items: {
  Amount: {
    validate: async (event, context) => {
      let amount = event.newValue.amount;
      if (amount < 5) {
        context.addValidationError("Amount", `Amounts below 5 ${event.newValue.currency} cannot be expensed. Enter a higher amount or type 'cancel'.`);
      }
    }
  }
}
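To see how a handler like this behaves in isolation, here's a hedged, self-contained sketch that runs the same validate logic against a stub context object. The stub stands in for the real bots-node-sdk EntityResolutionContext, which the platform supplies at runtime:

```javascript
// Hypothetical stand-alone exercise of the Amount validate handler.
// The stub context below only records validation errors; it is not
// part of the platform.
const handlers = {
  items: {
    Amount: {
      validate: async (event, context) => {
        let amount = event.newValue.amount;
        if (amount < 5) {
          context.addValidationError("Amount", `Amounts below 5 ${event.newValue.currency} cannot be expensed. Enter a higher amount or type 'cancel'.`);
        }
      }
    }
  }
};

const errors = {};
const stubContext = { addValidationError: (item, msg) => { errors[item] = msg; } };

// An amount of 3 USD should trigger the validation error:
handlers.items.Amount.validate({ newValue: { amount: 3, currency: "USD" } }, stubContext)
  .then(() => console.log(errors.Amount)); // logs the "Amounts below 5 USD ..." message
```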
You access the templates by clicking + Add Event.
Note

Refer to the bots-node-sdk’s Writing Entity Event Handlers documentation for further information on the EEH starter code, item- and entity-level events, EntityResolutionContext, and code samples.
Add Events
Clicking + Add Event enables you to add the templates for entity-level, item-level, and custom events.
Description of eeh_select_event_type_top_menu.png follows

For example, adding a validate event template populates the editor with the following code:
validate: async (event, context) => {
        
      },
You can then update this template with your own code:
validate: async (event, context) => {
  if (event.newValue.value === 'PEPPERONI') {
    context.addValidationError('Type', "Sorry, no pepperoni pizzas today!");
  }
},
Clicking Validate checks your code for design time issues, so you should click this option regularly. You can’t add further events if the code is invalid, nor can you save invalid code. Because saving code also deploys it, you can’t deploy invalid code either.
Description of eeh_edit_event_handler_code_validate.png follows

When your code is valid, clicking Save automatically deploys it and packages it in a TGZ file. You can monitor the status of the deployment and download the TGZ file for reuse in other skills from the Components page.
Description of eeh_deployed_event_components_page.png follows

Tip:

To check for runtime errors, switch on Enable Component Logging and then review the logs (accessed by clicking Diagnostics > View Logs) to find out about the parameters that invoked the events.
In the composite bag page, a Ready status This is an image of the Ready status icon. and an Edit icon for revising your code become available after you’ve deployed the service.
Description of eeh_deployed_event_confirmation.png follows

Add Entity-Level Event Handlers
For entity-level events, you can update the templates for the validate, publishMessage, maxPromptsReached, resolved, attachmentReceived, and locationReceived events.
Description of eeh_entity_level_templates.png follows

Event Description
validate A handler for entity-level validations that's called when the value for at least one of the bag items has been changed.
publishMessage A generic fallback handler that's called whenever a bag item lacks a prompt message or disambiguation handling.
maxPromptsReached A generic fallback handler that's called when an item-specific handler for reaching the maximum number of prompts has not been specified.
resolved This function gets called when the composite bag entity has been resolved. You would typically add a resolved event to call a backend API that completes a transaction related to the values collected by the composite bag entity. If the API call returns errors because some of the values collected by the composite bag are not valid, then you can clear these values.
attachmentReceived This handler is called when the user sends an attachment.
locationReceived This handler gets called when the user sends a location.
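As a hedged illustration of the resolved event: the sketch below submits the collected values to a backend and clears any that the backend rejects, so they can be prompted for again. The submitExpense function, the item names, and the stub context are all hypothetical; the real context is the bots-node-sdk EntityResolutionContext.

```javascript
// Hypothetical backend call; in a real skill this would be a REST request.
async function submitExpense(values) {
  const invalid = [];
  if (values.Amount && values.Amount.amount < 5) {
    invalid.push('Amount');
  }
  return { invalidItems: invalid };
}

const entityHandlers = {
  resolved: async (event, context) => {
    const result = await submitExpense(context.getItemValues());
    for (const item of result.invalidItems) {
      context.clearItemValue(item); // assumed context method; cleared items are prompted for again
    }
  }
};

// Exercising the handler with a stub context:
const cleared = [];
const stubCtx = {
  getItemValues: () => ({ Amount: { amount: 3 } }),
  clearItemValue: (name) => cleared.push(name)
};
entityHandlers.resolved({}, stubCtx).then(() => console.log(cleared)); // logs [ 'Amount' ]
```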
By default, the template is populated with an entity-level event, publishMessage. Through the updatedItemsMessage and outOfOrderItemsMessage functions (which are also defined in the default template), this event enables the skill to output messages that confirm that a previously resolved bag item value has been updated, or that it has accepted valid input for a bag item other than the one that the entity is currently prompting for (out-of-order input).
Description of eeh_publishmessage.png follows

This event is optional. You can delete it, leave it as is, or add functionality to it. For example, you can add a cancel button when a user’s attempts at entering a valid value have exceeded the maximum number of prompts.
publishMessage: async (event, context) => {
  updatedItemsMessage(context);
  outOfOrderItemsMessage(context);
  // Add a Cancel button for invalid values entered by users
  let message = context.getCandidateMessageList()[0];
  …
  message.addGlobalAction(context.getMessageFactory().createPostbackAction('Cancel', {action: 'cancel'}));
  context.addMessage(message);
}
Add Item-Level Handlers
For the bag items listed in the dialog, you can add templates for the item-level events: shouldPrompt, validate, publishPromptMessage, publishDisambiguateMessage, and maxPromptsReached.
Description of eeh_choose_item_level_event_type.png follows

Event Description
shouldPrompt Determines whether to prompt for an item based on the values of the other items in the bag. This handler takes precedence over the prompting configured through the Prompt for Value field.
validate This handler is called only when a value has been set for a bag item. If the validity of the value depends on other bag items, then you should implement the entity-level validate event instead.
publishPromptMessage Use this function to replace or extend the message that's generated by the System.CommonResponse and System.ResolveEntities components to prompt for the item.
publishDisambiguateMessage Use this function to replace or extend the disambiguation prompt message generated by the System.CommonResponse and System.ResolveEntities components.
maxPromptsReached This function gets called when the maximum number of prompts for this item, which is specified by Maximum User Input Attempts in the composite bag item screen, has been reached.

Adding an item-level event generates the items object.
Description of eeh_items_block.png follows
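As a hedged sketch of an item-level shouldPrompt handler: the example below only prompts for a Tip item once an Amount item has been slotted. The item names are illustrative, and getItemValue is assumed to be available on the context (it's stubbed out here; the real context is the bots-node-sdk EntityResolutionContext):

```javascript
// Hypothetical item-level handler: skip the Tip prompt until Amount is known.
const Tip = {
  shouldPrompt: async (event, context) => {
    return context.getItemValue('Amount') !== undefined;
  }
};

// Exercising the handler with a stub context:
const stubContext = { getItemValue: (name) => (name === 'Amount' ? { amount: 12 } : undefined) };
Tip.shouldPrompt({}, stubContext).then((prompt) => console.log(prompt)); // logs true
```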

Add Custom Events
You can create custom events that are called from postback actions (buttons or list items) using the custom event template.
Description of eeh_custom_event_template.png follows

Adding a custom template adds a custom object with the basic event code. Refer to the bots-node-sdk’s Writing Entity Event Handlers documentation for examples of implementing a custom event.
someCustomEvent: async (event, context) => {
  
}
Replace or Remove an Entity Event Handler
If you need to replace or remove an EEH:
  1. Select an empty line from the Event Handler menu to reactivate the + Event Handler button.
    Description of select_blank_line_delete_eeh.png follows

  2. Open the Components page This is an image of the Components icon in the left navbar. Switch off Service Enabled or delete the service.
    Note

    You can't delete or disable a service if the EEH is still associated with the composite bag entity.
  3. If needed, add a new EEH to the composite bag, or if you're not opting for a new EEH, you can add the resolution logic with FreeMarker expressions.

Tip:

Deleting the composite bag entity will also delete the service deployed for the EEH.
Which IDE Should I Use?
You can create an EEH using the IDE of your choice and then deploy the code as a TGZ file that you packaged manually with bots-node-sdk pack, or you can use the Event Handler Code editor that we provide. When you use our editor, you don’t have to set up a development environment or package and deploy your code: the code is deployed automatically after you save it. You can also revise the code directly without having to redeploy it, something that you can’t do when you package and deploy a handler created with your own IDE. However, you can't add additional NPM packages using the Event Handler Code editor; for that, you need another IDE. For example, if you want to use Moment.js to work with dates, then you must download the TGZ, add the library using the IDE of your choice, and then repackage and deploy the TGZ. After that, you can continue using the Event Handler Code editor.

Tip:

The Event Handler Code editor might be a better option for small changes. If you need to make bigger changes, or add additional NPM packages, then you can download the TGZ from the Components page, unzip it, and then use your favorite editor to modify the code before repackaging and deploying it.
Simplify Dialog Flows with Entity Event Handlers

Entity event handlers can simplify your dialog flow definition because they work with composite bag entities, themselves a dialog-shortening best practice. They also make your dialog flow definition less complicated when it comes to backend services, because you don’t need to write a separate state for a custom component that calls them.

Event handlers simplify the dialog flow definition in another way: they enable you to modify the messages that are generated by the System.ResolveEntities component. For example, you can create a carousel of card messages without using the complex structure of the System.CommonResponse component's metadata property. You can instead add the carousel through simple code, which means you can also add card responses to the System.ResolveEntities component. For example, this code enables the System.ResolveEntities component to output a horizontally scrolling carousel of cards for pizza type, with each card having a cancel button:
Type: {

        publishPromptMessage: async (event, context) => {
          const candidateMessage = context.getCandidateMessageList()[0];
          const mf = context.getMessageFactory();
          const message = mf.createCardMessage()
            .setLayout('horizontal')
            .setCards(context.getEnumValues().map(p =>
              mf.createCard(p.value)
                .setDescription(pizzaInfo[p.value].description)
                .setImageUrl(pizzaInfo[p.value].image)
                .addAction(mf.createPostbackAction('Order', {variables: {pizza: p.value}}))
            ))
            .setGlobalActions(candidateMessage.getGlobalActions());
          context.addMessage(message);
        }
}
Entity Event Handler Tutorials

Follow this tutorial to get acquainted with entity event handlers by creating one using the editor. Then check out this advanced tutorial for creating an entity event handler with an external IDE and bots-node-sdk.

Disambiguate Nested Bag Items and Subtypes
The composite bag will always prompt for values per the item order that's dictated by the hierarchical structure of a nested bag item. It will not blindly slot values for multiple items. It instead attempts to match the value in the user message only to the item that it's currently prompting for. When the user input doesn't match the current item, or could potentially match more than one item, as might be the case for the startTime and endTime for an INTERVAL subtype, it presents users with the value defined for the Label property to clarify the requested input.
Description of nested_bag_items_prompt.png follows

Tip:

As with all strings, we recommend that you define the Label value as a resource bundle.
Add the DATE_TIME Entity to a Composite Bag
To enable your skill to handle complex scenarios that require multiple user prompts like scheduling a meeting, or setting a recurring event, you need to create a DATE_TIME composite bag item and then configure the attributes of the Interval, Recurring, and Date and Time subtypes and their respective nested bag items.
Note

While you can use Date, Time, and Duration as standalone entities, we recommend that you use them within composite bag entities.
  1. Before you create a DATE_TIME bag item, configure the date and time ambiguity resolution rules appropriate for your use case. For example, if you're creating an expense reporting skill, select Past. If the skill is a meeting scheduler, select Future.
  2. Within the composite bag entity, click Add item.
  3. Select Entity from the Type menu.
  4. Select DATE_TIME from the Entity Name menu.
  5. Choose a DATE_TIME subtype from the Subtype menu.
    Description of select_date_time_subtype.png follows

    The configuration options on the Add Bag Item page change depending on the subtype that you select. For example, if you select the Recurring subtype, then you can access configuration options for the nested bag items that are specific to setting a repeating event, such as the Date and Time object for the initial starting date and time and the Duration object for setting the event frequency.
    Description of edit_date_time_bag_item.png follows

  6. If you selected the Recurring or Interval subtypes:
    • Set the subtype values that the composite bag prompts for from the Prompt for menu.
    • Because meetings typically start and end on the same day, switch on Default end date to start date for the startDate subtype. This sets the end date equal to the start date when the user message does not mention the end date (or when the end date is not extracted out of order).
      This is an image of the Default start date to end date toggle.

  7. Optionally add a disambiguation label if the user input can match more than one subtype.

    Tip:

    You can also configure the properties that are not DATE_TIME-specific, such as enhanced slot filling, updating slotting values with Apache FreeMarker, custom prompts, and error messages.
  8. You can access subtype-level configuration by clicking a subtype. Use the traversal to return to the item-level configuration.
    This is an image of the Edit Bag Item traversal.

  9. Next steps:
    • Associate the composite bag entity with the intent.
    • Declare a variable for the entity in the dialog flow.
    • In the dialog flow, reference the composite bag entity with the DATE_TIME item using a Resolve Composite Bag state.
    • The DATE_TIME values are represented as ISO 8601. For user-friendly output, use the Apache FreeMarker .xs built-in. In the following snippet, the value for the Time subtype is formatted using .value?time.xs?string['hh:mm a'].
      Your pizza will be delivered at ${pizza.value.deliveryTime.value?time.xs?string['hh:mm a']}.
      
      Instead of referencing the DATE_TIME item as a string, you can follow the best-practice approach of referencing it in a resource bundle, such as DeliveryMessage in the following example.
      ${rb('DeliveryMessage','time',pizza.value.deliveryTime.value?time.xs?string['hh:mm a'])}
      
      For the DeliveryMessage resource bundle message, the value is rendered through the {time} parameter:
      Your pizza will be delivered at {time}.
      
Tutorial: Real-World Entity Extraction with Composite Bag Entities

You can get a hands-on look at creating a composite bag through this tutorial: Enable Real-World Entity Extraction with Composite Bag Entities.

Create Dynamic Entities

Dynamic entity values are managed through the endpoints of the Dynamic Entities API that are described in the REST API for Oracle Digital Assistant. To add, modify, and delete the entity values and synonyms, you must first create a dynamic entity to generate the entityId that's used in the REST calls.

To create the dynamic entity:
  1. Click + Entity.
  2. Choose Dynamic Entities from the Type list.
  3. If the backend service is unavailable or hasn't yet pushed any values, or if you do not maintain the service, click + Value to add mock values that you can use for testing purposes. Typically, you would add these static values before the dynamic entity infrastructure is in place. These values are lost when you clone, version, or export a skill. After you provision the entity values through the API, you can overwrite, or retain, these values (though in most cases you would overwrite them).
  4. Click Create.

Tip:

If the API refreshes the entity values as you're testing the conversation, click Reset to restart the conversation.
A couple of notes for service developers:
  • You can query for the dynamic entities configured for a skill using the generated entityId with the botId. You include these values in the calls to create the push requests and objects that update the entity values.
  • An entity cannot have more than 150,000 values. To reduce the likelihood of exceeding this limit when you're dealing with large amounts of data, send PATCH requests with your deletions before you send PATCH requests with your additions.
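The deletions-before-additions guidance can be encoded in the client code that prepares your push requests. The sketch below is illustrative only: the buildPatchBatches helper and the payload field names (op, values, canonicalName) are assumptions for illustration, not the documented API shape, so check the REST API for Oracle Digital Assistant for the actual contract.

```javascript
'use strict';

// Illustrative helper: order dynamic-entity PATCH batches so that
// deletions are sent before additions, reducing the chance of
// exceeding the 150,000-value limit mid-update.
// The field names ('op', 'values', 'canonicalName') are assumptions.
function buildPatchBatches(deletions, additions) {
  const batches = [];
  if (deletions.length > 0) {
    batches.push({ op: 'delete', values: deletions }); // send first
  }
  if (additions.length > 0) {
    batches.push({ op: 'add', values: additions });    // send second
  }
  return batches;
}

// Example: retire two values, then introduce one new value.
const batches = buildPatchBatches(
  [{ canonicalName: 'OldStore' }, { canonicalName: 'ClosedCafe' }],
  [{ canonicalName: 'NewBakery', synonyms: ['the bakery'] }]
);
// batches[0] holds the deletions, batches[1] the additions
```

Each batch would then be sent as its own PATCH call, deletions first, using the entityId and botId described above.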
Note

Dynamic entities are only supported on instances of Oracle Digital Assistant that were provisioned on Oracle Cloud Infrastructure (sometimes referred to as the Generation 2 cloud infrastructure). If your instance is provisioned on the Oracle Cloud Platform (as are all version 19.4.1 instances), then you can't use this feature.
Guidelines for Creating ML Entities
Here's a general approach to creating an ML Entity.
  1. Create concise ML Entities. The ML Entity definition is at the base of a useful training set, so clarity is key: its name and description help crowd workers annotate utterances.

    Because crowd workers rely on the ML Entity descriptions and names, you must ensure that your ML Entities are easily distinguishable from each other, especially when there's potential overlap. If the differences are not clear to you, it's likely that crowd workers will be confused. For example, the Merchant and Account Type entities may be difficult to differentiate in some cases. In "Transfer $100 from my savings account to Pacific Gas and Electric," you can clearly label "savings" as Account Type and "Pacific Gas and Electric" as Merchant. However, the boundary between the two can be blurred in sentences like "Need to send money to John, transfer $100 from my savings to his checking account." Is "checking account" an Account Type, or a Merchant name? In this case, you may decide that any recipient should always be a merchant name rather than an account type.

  2. In preparation for crowdsourcing the training utterances, consider the typical user input for different entity extraction contexts. For example, can the value be extracted in the user's initial message (initial utterance context), or is it extracted from responses to the skill's prompts (slot utterance context)?
    Context Description Example Utterances (detected ML Entity values in bold)
    Initial utterance context A message that's usually well-structured and includes ML Entity values. For an expense reporting skill, for example, the utterance would include a value that the model can detect for an ML Entity called Merchant. Create an expense for team dinner at John's Pasta Shop for $85 on May 3
    Slot utterance context A user message that provides the ML Entity in response to a prompt, either because of conversation design (the skill prompts with "Who is the merchant?") or to slot a value because it hasn't been provided by a previously submitted response.

    In other circumstances, the ML Entity value may have already been provided, but may be included in other user messages in the same conversation. For example, the skill might prompt users to provide additional expense details or describe the image of an uploaded receipt.

    • Merchant is John's Pasta Shop.
    • Team dinner. Amount $85. John's Pasta Shop.
    • Description is TurboTaxi from home to CMH airport.
    • Grandiose Shack Hotel receipt for cloud symposium
  3. Gather your training and testing data.
    • If you already have a sufficient collection of utterances, you may want to assess them for entity distribution and entity value diversity before you launch an Entity Annotation job.
    • If you don't have enough training data, or if you're starting from scratch, launch an Intent Paraphrasing Job. To gather viable (and abundant) utterances for training and testing, integrate the entity context into the job by creating tasks for each intent. To gather diverse phrases, consider breaking down each intent by conversation context.
    • For the task's prompt, provide crowd workers context and ask them, "How would you respond?" or "What would you say?" Use the accompanying hints to provide examples and to illustrate different contexts. For example:
      Prompt Hint
      You're talking to an expense reporting bot, and you want to create an expense. What would be the first thing you would say? Ensure that the merchant name is in the utterance. You might say something like, "Create an expense for team dinner at John's Pasta Shop for $85 on May 3."
      This task asks for phrases that not only initiate the conversation, but also include a merchant name. You might also want utterances that reflect responses prompted by the skill when the user doesn't provide a value. For example, "Merchant is John's Pasta Shop" in response to the skill's "Who is the merchant?" prompt.
      Prompt Hint
      You've submitted an expense to an expense reporting bot, but didn't provide a merchant name. How would you respond? Identify the merchant. For example, "Merchant is John's Pasta Shop."
      You've uploaded an image of a receipt to an expense reporting bot. It's now asking you to describe the receipt. How would you respond? Identify the merchant's name on the receipt. For example: "Grandiose Shack Hotel receipt for cloud symposium."
      To test for false positives (words and phrases that the model should not identify as ML Entities), you may also want to collect "negative examples". These utterances do not include an ML Entity value.
      Context Example Utterances
      Initial utterance context Pay me back for Tuesday's dinner
      Slot utterance context
      • Pos presentation dinner. Amount $50. 4 people.
      • Description xerox lunch for 5
      • Hotel receipt for interview stay
    • Gather a large training set by setting an appropriate number of paraphrases per intent. For the model to generalize successfully, your data set must contain somewhere between 500 and 5000 occurrences for each ML entity. Ideally, you should avoid the low end of this range.
  4. Once the crowd workers have completed the job (or have completed enough utterances that you can cancel the job), you can either add the utterances, or launch an Intent Validation job to verify them. You can also download the results to your local system for additional review.
  5. Reserve about 20% of the utterances for testing. To create CSVs for the Utterance Tester from the downloaded CSVs for Intent Paraphrasing and Intent Validation jobs:
    • For Intent Paraphrasing jobs: transfer the contents in the result column (the utterances provided by crowd workers) to the utterance column in the Utterance Tester CSV. Transfer the contents of the intentName column to the expectedIntent column in the Utterance Tester CSV.
    • For Intent Validation jobs: transfer the contents in the prompt column (the utterances provided by crowd workers) to the utterance column in the Utterance Tester CSV. Transfer the contents of the intentName column to the expectedIntent column in the Utterance Tester CSV.
  6. Add the remaining utterances to a CSV file with a single column, utterance. Create an Entity Annotation Job by uploading this CSV. Because workers are labeling the entity values, they will likely classify negative utterances as "I'm not sure" or "None of the entities apply."
  7. After the Entity Annotation job is complete, you can add the results, or you can launch an Entity Validation job to verify the labeling. Only the utterances that workers deem correct in an Entity Validation job can be added to the corpus.

    Tip:

    You can add, remove, or adjust the annotation labels in the Dataset tab of the Entities page.
  8. Train the entity by selecting Entity.
  9. Run test cases to evaluate entity recognition using the utterances that you reserved from the Intent Paraphrasing job. You can divide up these utterances into different test suites to test different behaviors (unknown values, punctuation that may not be present in the training data, false positives, and so on). Because there may be a large number of these utterances, you can create test suites by uploading a CSV into the Utterance Tester.
    Description of ml_test_suites.png follows

    Note

    The Utterance Tester only displays entity labels for passing test cases. Use a Quick Test instead to view the labels for utterances that resolve below the confidence threshold.
  10. Use the results to refine the data set. Iteratively add, remove, or edit the training utterances until test run results indicate the model is effectively identifying ML Entities.
    Note

    To prevent inadvertent entity matches that degrade the user experience, switch on Exclude System Entity Matches if the training data contains names, locations, or numbers.
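The column mapping described in step 5 above (result or prompt to utterance, and intentName to expectedIntent) is mechanical, so you may prefer to script it when the downloaded CSVs are large. A minimal sketch, using no third-party CSV library and assuming simple values without embedded commas or quotes:

```javascript
'use strict';

// Map a downloaded job CSV to an Utterance Tester CSV:
// the source column ('result' for Intent Paraphrasing jobs,
// 'prompt' for Intent Validation jobs) becomes 'utterance',
// and 'intentName' becomes 'expectedIntent'.
// Assumes simple comma-separated values with no embedded commas.
function toUtteranceTesterCsv(jobCsv, sourceColumn) {
  const [headerLine, ...rows] = jobCsv.trim().split('\n');
  const headers = headerLine.split(',');
  const srcIdx = headers.indexOf(sourceColumn);
  const intentIdx = headers.indexOf('intentName');
  const out = ['utterance,expectedIntent'];
  for (const row of rows) {
    const cols = row.split(',');
    out.push(`${cols[srcIdx]},${cols[intentIdx]}`);
  }
  return out.join('\n');
}

// Example with an Intent Paraphrasing job export:
const jobCsv = 'result,intentName\nMerchant is Acme,CreateExpense';
const testerCsv = toUtteranceTesterCsv(jobCsv, 'result');
// testerCsv === 'utterance,expectedIntent\nMerchant is Acme,CreateExpense'
```

For real exports with quoted fields, a proper CSV parser would be safer than this split-on-comma sketch.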
ML Entity Training Guidelines

The model generalizes an entity using both the context around a word (or words) and the lexical information about the word itself. For the model to generalize effectively, we recommend that the number of annotations per entity range somewhere between 500 and 5000. You may already have a training set that’s both large enough and has the variation of entity values that you’d expect from end users. If this is the case, you can launch an Entity Annotation job and then incorporate the results into the training data. However, if you don’t have enough training data, or if the data that you do have lacks sufficient coverage for all the ML entities, then you can collect utterances from crowd-sourced Intent Paraphrasing jobs.

Whatever the source, the distribution of entity values should reflect your general idea of the values that the model may encounter. To adequately train the model:
  • Do not overuse the same entity values in your training data. Repetitive entity values in your training data prevent the model from generalizing on unknown values. For example, you expect the ML Entity to recognize a variety of values, but the entity is represented by only 10-20 different values in your training set. In this case, the model will not generalize, even if there are two or three thousand annotations.
  • Vary the number of words for each entity value. If you expect users to input entity values that are three-to-five words long, but your training data is annotated with one- or two-word entity values, then the model may fail to identify the entity as the number of words increases. In some cases, it may only partially identify the entity. The model assumes the entity boundary from the utterances that you've provided. If you've trained the model on values with one or two words, then it assumes the entity boundary is only one or two words long. Adding entities with more words enables the model to recognize longer entity boundaries.
  • Utterance length should reflect your use case and the anticipated user input. You can train the model to detect entities for messages of varying lengths by collecting both short and long utterances. The utterances can even have multiple phrases. If you expect short utterances that reflect the slot-filling context, then gather your sample data accordingly. Likewise, if you're anticipating utterances for the initial context scenario, then the training set should contain complete phrases.
  • Include punctuation. If entity names require special characters, such as '-' and '/', include them in the entity values in the training data.
  • Ensure that all ML Entities are equally represented in your training data. An unbalanced training set has too many instances of one entity and too few of another. The models produced from unbalanced training sets sometimes fail to detect the entity with too few instances and over-predict for the entities with disproportionately high instances. This leads to false positives.
ML Entity Testing Guidelines
Before you train your skill, you should reserve about 20% of unannotated utterances to find out how the model generalizes when presented with utterances or entity values that are not part of its training data. This set of utterances may not be your only testing set, depending on the behaviors you want to evaluate. For example:
  • Use only slot context utterances to find out how well the model predicts entities with less context.
  • Use utterances with "unknown" values to find out how well the model generalizes with values that are not present in the training data.
  • Use utterances without ML Entities to find out if the model detects any false positives.
  • Use utterances that contain ML Entity values with punctuation to find out how well the model performs with unusual entity values.
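The 20% holdout described above can be produced with a simple shuffle-and-split over your collected utterances. A sketch, assuming each utterance is an independent string (a seeded shuffle would make the split reproducible; Math.random-based sorting is only a rough shuffle):

```javascript
'use strict';

// Shuffle utterances and hold out roughly 20% for testing, per the
// guideline of reserving unannotated utterances before training.
function splitTrainTest(utterances, testFraction = 0.2) {
  // Rough shuffle; replace with a seeded Fisher-Yates for reproducibility.
  const shuffled = [...utterances].sort(() => Math.random() - 0.5);
  const testSize = Math.round(shuffled.length * testFraction);
  return {
    test: shuffled.slice(0, testSize),   // reserved for the Utterance Tester
    train: shuffled.slice(testSize)      // used to train the entity
  };
}

const { train, test } = splitTrainTest(
  ['u1', 'u2', 'u3', 'u4', 'u5', 'u6', 'u7', 'u8', 'u9', 'u10']
);
// train holds 8 utterances, test holds 2
```

You can then further partition the test set into the behavior-specific suites listed above (slot context, unknown values, false positives, punctuation).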

Query Entities

You can create SQL Dialogs skills that let users query databases using natural language. You start by importing information about the data service's physical model into the skill. During the import, the skill adds query entities to the logical model, where each query entity represents a physical table.

You next build your SQL Dialogs skill around these query entities. To learn more, see SQL Dialog Skills.