Conversational Design

Users implicitly expect software applications to have human qualities. When an application is considered "user-friendly", it's often because it exhibits human characteristics such as courtesy and common sense.

With conversational interfaces (digital assistants), the expectations are even higher. Since digital assistants are designed around the concept of human conversation, it is particularly important to design the digital assistant well so that it meets user expectations, both conscious and unconscious.

Conversational design removes the technical aspects from the interaction model of an underlying task or process and replaces them with a natural-sounding conversation that users find easy to understand and engaging.

Good conversation design aims for efficiency, has an understanding of context, reflects back to the user, is emotionally engaging, and builds great dialogs. As an engineer, you might think good conversation design is easy to achieve, but keep in mind that not every great singer knows how to write good lyrics. And not every author of good lyrics can sing.

Your digital assistant isn't human, but if it uses conversational techniques and cues and exhibits human consideration, it can make the conversations seem more natural (and pleasant) and minimize potential for irritating users. These qualities in a digital assistant can give your users confidence that it is capable of addressing their real concerns.

Here are some conversational techniques that can help make your digital assistant more engaging to users.

Orient Users

A basic but important part of designing a digital assistant is making sure that users can easily discover how to use it effectively.


To get off on the right foot with your users, put some thought into the way your digital assistant greets users. You should:

  • Provide a positive and welcoming introduction.
  • Indicate what the digital assistant can do and/or what is expected next from the user.
  • Vary the greetings, especially for digital assistants that get repeated use.

Digital assistants come with a default welcome implementation, but you can also provide your own implementation.
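
If you do provide your own welcome implementation, the varied-greeting idea can be sketched as follows. This is illustrative Python, not a platform API; the greeting texts and function name are made up for the example.

```python
import random

# Hypothetical greeting pool: each variant both welcomes the user and hints
# at what the digital assistant can do.
GREETINGS = [
    "Hi! I can help you track orders, change deliveries, and answer billing questions.",
    "Welcome back! Ask me about orders, deliveries, or billing.",
    "Hello! Need help with an order, a delivery, or a bill?",
]

def welcome_message() -> str:
    # Pick a variant at random so repeat users don't see the same greeting
    # every time.
    return random.choice(GREETINGS)
```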


An important capability of any digital assistant is being able to tell users what it can do and to help them get unstuck if the conversation isn't going as they expect. You should:

  • Ensure the digital assistant can handle a request for help at any point in the conversation, whether it is an explicit request for help or a more subtle inquiry like "what can you do?"
  • As you do when welcoming users, indicate what the digital assistant can do and/or what is expected next from the user.

Digital assistants come with a default help implementation, but you may wish to design the help experience yourself.

Letting Users Exit

Forcing a user to complete a conversation thread they have erroneously initiated makes for a bad user experience. When a user is in a conversation, they should always have a way of exiting, whether it's because the conversation has taken a turn that they don't want or that they simply don't wish to complete the conversation at that time.

You can achieve this in a few ways. For example, you can make sure all choice lists have an option to exit the current conversation. Or you can use the digital assistant's built-in exit intent to explicitly handle any specific requests to end the current conversation.
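
The pattern of honoring an exit request at any turn can be sketched like this. It is a simplified illustration: a real digital assistant would use its built-in exit intent rather than literal string matching, and the phrases and function names here are invented for the example.

```python
# Hypothetical exit phrases; an intent model would recognize many more variants.
EXIT_PHRASES = {"exit", "cancel", "stop", "quit", "never mind"}

def handle_turn(user_message: str, continue_conversation) -> str:
    # Check for an exit request before doing anything else, so the user can
    # leave the current conversation at any point.
    if user_message.strip().lower() in EXIT_PHRASES:
        return "No problem, I've cancelled that. Anything else I can help with?"
    return continue_conversation(user_message)
```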

Hints and Cues

Depending on the complexity of your skills and digital assistant, it might be useful for you to provide various forms of guidance and visual cues to suggest to users what they can do. This can take the form of things like:

  • Hints within messages (such as "… or just tell me to exit this conversation if you don’t want to go on").
  • Information that tells the user what they can expect next after they have completed an action.
  • Buttons for the most common actions.
  • Reminders interspersed throughout the conversation describing how to do things like launch a menu, exit a conversation, ask for help, and speak to an agent.

    On some channels, you can also take advantage of features specific to the messaging platform to provide buttons for common actions like exiting and displaying the menu.

Show Quick Responses as Action Buttons

When prompting users for information, you may be able to anticipate the user's choice based on what people usually select.

For example, if you are asking for a date for a calendar entry, common options are "today" or "tomorrow". So, below the prompt, you could add two buttons that say Today and Tomorrow. When a user selects one of the two buttons or enters the label of one of the buttons, the current date or tomorrow's date is assigned to the underlying variable.

Another example is the case where the bot needs a delivery address. If there’s a home address on file, you can ask for a delivery address and also include a home delivery button.

When using quick replies, make sure users understand that the buttons are not their only choice and that a button's action can also be triggered by a typed message. Always remember that conversation also means speech and that users who operate the chatbot by voice will not have an opportunity to press a button. Most likely, they will say the button label instead.
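
Because tapping a button typically sends its label as the message, taps and typed (or spoken) text can resolve through the same code path. A minimal sketch of the Today/Tomorrow example, with invented names:

```python
from datetime import date, timedelta

def resolve_date_reply(user_input: str):
    # Hypothetical quick replies for a date prompt: the same lookup handles
    # a button tap (which sends the label) and free text matching a label.
    choices = {
        "today": date.today(),
        "tomorrow": date.today() + timedelta(days=1),
    }
    # Returns None for anything else, so the skill can fall back to full
    # date parsing or re-prompt.
    return choices.get(user_input.strip().lower())
```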

Ensure Mutual Understanding

For conversations to go well, you need to make sure that the digital assistant understands users and that the user understands the digital assistant. Here are some techniques that help in that regard.

Use Plain Language

No user speaks the way your product database was designed. Make sure your bot uses the language of your target audience; the persona you have defined should guide you here. For example:

  • "user account" instead of "user Id"

  • "where can I deliver this to" instead of "shipping address"

  • "I tried my best but could not find what you were looking for" instead of "the query did not return a result"

Also, you can make messages less robotic sounding by providing context and guidance.

So, instead of "What is your order Id?", you could provide a more helpful message like "I can help you find your order. If you can tell me your order number, that would be great. If not, no problem. I can also search by product or date."

Don't Expect Users to Know the Magic Words

Imagine a sales bot that sales reps could use to request a graphical representation of the revenue they generated and how that revenue met their forecasted goals.

Charts can be configured in many ways, including the type of chart, whether or not to add labels, and how many y-axes to show. A digital assistant where the sales representative has to request her statistics by saying things like "Show me my sales for Q2 / 2021 as a pie with no_label double_y_axis in linear_plot" will not work in practice.

Make the options clear, keep them conversational, and use the language of your target audience. If you offer funnels, Gantt charts, scatter charts, bubble charts, and Pareto charts, keep this information to yourself and present it differently to the user. Here are some examples of what a user of that bot would be more likely to ask:

  • "Show me an overview of my sales for Q2 / 2021"

  • "Show me my sales for Q2 / 2021 in comparison to last year"

For both queries, the response could be rendered with different types of graphs without the user having to understand the magic words that give them what they want. Not only does this make your digital assistant intuitive to use, it also reduces choice, which is a good thing, given that conversational tasks should be kept short, as we discuss in the next section.

Give Feedback Within the Conversation

Make sure your bot is not designed as an escape game and that you provide enough pointers, feedback, confirmation, signposting, and help so that users always understand what is expected of them now, what is next, and how to get unstuck. Here are some examples of those techniques:

  • Confirmation: "OK. I got your order."

  • Signposting: "OK. I got your order. Next, I need to know where to send this to."

  • Prompt: "OK. I got your order. Next, I need to know where to send this to. So, let me know the address I should deliver it to."

  • Help: "OK. I got your order. Next, I need to know where to send this to. So, let me know the address I should deliver it to or use the button below to ship to your home address."

There may be situations in which the user does not know what information to provide or perhaps has even lost interest in completing a task. One way to help in these cases is to display additional controls (like buttons) for the user to cancel a task or navigate to a help state. Also, by using the maxPrompts setting on input components, you can even automate the navigation to a help state when the user provides incorrect information multiple times.
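
A maxPrompts-style policy can be sketched as a small state decision. This is not the platform implementation, just the logic it expresses; the state names are illustrative.

```python
def next_state(input_valid: bool, attempts: int, max_prompts: int = 3) -> str:
    # Valid input moves the flow on; invalid input re-prompts until the
    # limit is reached, then the flow navigates to a help state instead of
    # asking the same question yet again.
    if input_valid:
        return "next_step"
    if attempts >= max_prompts:
        return "help_state"
    return "re_prompt"
```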

Disambiguate User Input

Don't be afraid to have your skill verify its understanding of user input. Language can be ambiguous, even in person.

For example, if it is Thursday and a user says "I want to book an appointment next Saturday", one could interpret that date as being either two or nine days in the future. In this case, you would want your skill to verify the date, perhaps with wording like "OK, just to make sure I've understood you correctly, is that Saturday the 10th or Saturday the 17th?"

In dialog flows, we recommend that you use entities as variable types when collecting user input. Using entities as variable types validates user input and automatically detects ambiguity, which means that all you need to do is to find the right wording when prompting users.
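
The "next Saturday" ambiguity can be made concrete with a short date calculation. This sketch (invented function name) computes both candidate Saturdays so the skill can ask which one the user meant:

```python
from datetime import date, timedelta

def candidate_saturdays(today: date):
    # "Next Saturday" could mean either of the two upcoming Saturdays.
    # weekday(): Monday=0 ... Saturday=5; if today is Saturday, take the
    # one a full week out.
    days_ahead = (5 - today.weekday()) % 7 or 7
    first = today + timedelta(days=days_ahead)
    return first, first + timedelta(days=7)
```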

Provide Alternating Prompts

As mentioned earlier, it’s important to write messages in a conversational style. But what happens if that message needs to be repeated because the user didn’t respond correctly? For example:

"Cool. So, tell me where to ship this to"

"Cool. So, tell me where to ship this to"

"Cool. So, tell me where to ship this to"

Even messages that are written conversationally will sound robotic and unengaging if they are repeated. Therefore, you should write multiple versions of each prompt so that a user sees different text if re-prompted (or if she repeats the conversation).

You can use multiple prompts defined on entities to show alternating prompts automatically. So, if the user information is validated by an entity-type variable, you can use the entity's prompt property to define as many prompts as you like. When using the Resolve Entities component, all you then need to do is associate the variable with it.
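
Conceptually, alternating prompts just rotate through a pool of variants. A sketch in plain Python, with hypothetical prompt texts (the platform's entity prompts do this for you):

```python
import itertools

# Hypothetical prompt variants, analogous to defining multiple prompts on
# an entity: the user sees different wording each time they are re-prompted.
SHIPPING_PROMPTS = [
    "Cool. So, tell me where to ship this to.",
    "Where should I send your order?",
    "What delivery address should I use?",
]
prompt_cycle = itertools.cycle(SHIPPING_PROMPTS)

def next_prompt() -> str:
    return next(prompt_cycle)
```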

Gradually Disclose Additional Information

Having alternating prompts is great. But if a user does not understand, rephrasing alone will make things only slightly better. In such cases, you want to progressively disclose more information, or escalate when needed.

  • Bot: "Cool. So, tell me where to ship this to"
  • User: "to me"
  • Bot: "I am sorry, but 'to me' doesn't seem to work for me. If you can give me a street name, a house number and a city name then I can ship this to you"
  • User: "send it to me"
  • Bot: "Tried, that did not work for me either. I’d really like to help you out here. Maybe you want to talk to a human colleague of mine. If so, just ask me to connect you to a human agent. Or, you can give me an address I can ship this to."

In that set of prompts, notice how the messages gradually reveal more information to help the user. Using entities to define the prompts makes it easy to implement such a conversation. Just add a sequence number to the prompts.

The sequence number of a prompt indicates when it is displayed. The above example contains messages with sequence numbers from 1 to 3. If you then configure the maxPrompts property of the Resolve Entities component to 3, a third failed user input attempt triggers navigation to, for example, a help state or a human agent state.

Prompts with the same sequence number alternate, as described in the previous section. This way you can achieve both gradual disclosure of additional information and alternating prompts.
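
The combination of sequence numbers, alternation, and a maxPrompts limit can be sketched like this. The structure mirrors the entity-prompt behavior described above but is not the platform's implementation; prompt texts and names are illustrative.

```python
import random

# Prompts keyed by sequence number: later attempts reveal more information,
# and prompts sharing a number would alternate at random.
PROMPTS = {
    1: ["Cool. So, tell me where to ship this to."],
    2: ["I am sorry, but that doesn't seem to work for me. If you can give me "
        "a street name, a house number and a city name, I can ship this to you."],
    3: ["Tried, that did not work for me either. Ask me to connect you to a "
        "human agent, or give me an address I can ship this to."],
}
MAX_PROMPTS = 3

def prompt_for_attempt(attempt: int):
    # attempt is 1-based; beyond MAX_PROMPTS the flow should navigate to a
    # help or human-agent state instead of prompting again.
    if attempt > MAX_PROMPTS:
        return None
    return random.choice(PROMPTS[attempt])
```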

Varied Responses and Progressive Disclosure

Have multiple responses for various points of the conversation. Varied responses enhance the skill's credibility with the user (it doesn't sound like it is stuck in a loop). You can also use them to progressively disclose more information to help a user get unstuck.

For example, if the user provides invalid input to the question "What size would you like?", you could follow up that prompt with something like "OK, let's try again to find a size for you. Select small, medium, or large."

Confirmation and Reflective Listening

Your skill's responses should use reflective listening (restating the user's input, but with different wording) to demonstrate that the skill understands the user's request before moving on to the next step. For example, if the user says “I want to order a pizza”, the skill could respond with “OK, let's get your pizza order started" before continuing with the next question (such as "what size can we get you?”). Also notice that this acknowledgement can be expressed implicitly and in a natural human tone ("let's get your pizza order started") as opposed to something more literal and less natural sounding (like "request to order pizza confirmed").

Also consider the situation when a user has entered information but the skill needs a little time to execute something in the backend. Instead of waiting for the process to complete before responding, you might want to confirm that the request is in process. For example, the skill could respond with the following after payment details have been submitted but before they have been processed: “OK, I’ve got all the payment information. Let me check those details with your bank.”

Similarly, use the typing indicator to show when the digital assistant is working on a response.
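
The acknowledge-then-confirm pattern for slow backend work can be sketched as follows. The function and message texts are invented, and the sleep stands in for a real (hypothetical) bank check:

```python
import asyncio

async def process_payment(send):
    # Acknowledge right away so the user isn't left waiting in silence while
    # the backend call runs; confirm once it completes.
    send("OK, I've got all the payment information. "
         "Let me check those details with your bank.")
    await asyncio.sleep(0.1)  # stand-in for the real backend call
    send("All set! Your payment went through.")
```

In practice, `send` would be whatever function delivers a message to the user's channel, e.g. `asyncio.run(process_payment(print))`.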

Close the Gap that Exists Between AI and Human Understanding

The human brain is by far the slowest but best computer in the world, largely because of its ability to detect and maintain context. Despite all the improvements in conversational AI, you will encounter situations in which the chatbot cannot determine what a user wants or what a given piece of information is for. This is where your conversation design needs to step up to help both your chatbot and the user.

For example, consider the following three messages:

  • "block my diary from 10 a.m. to 12 p.m. tomorrow"
  • "set a marker in my calendar for tomorrow at 10 a.m. for 2 hours"
  • "for 2 hours tomorrow, create an entry in my schedule at 10 a.m."

All three messages say the same thing, and the human brain immediately gets what the user wants, what the event date is, and what the start and end times are.

Conversational AI, when trained well, will understand that "block my diary", "set a marker in my calendar" and "create an entry in my schedule" have the same meaning, which is to create an event in the user's calendar.

However, as far as the information goes, conversational AI extracts "tomorrow" as the event date, "10 a.m." and "12 p.m." as times, and "2 hours" as a duration. By itself, it might have trouble understanding which value is the start time of the meeting and which is the end time, especially when the end time needs to be computed from a duration. And what does "tomorrow" mean from the perspective of a bot if you live in Australia as opposed to (for example) Jordan?

Whatever you cannot handle in your implementation, your design needs to handle, even if that means admitting that the bot did not understand and re-prompting for the information.
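
The duration and timezone parts of this example can be made explicit in code. A minimal sketch (invented function name) that computes "tomorrow at 10 a.m. for 2 hours" relative to the user's timezone, deriving the end time from the duration:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def event_window(now: datetime, start_hour: int, duration_hours: int):
    # "Tomorrow" depends on the current time in the *user's* timezone, not
    # the server's, and the end time must be computed from start + duration
    # when no explicit end time was given.
    start = (now + timedelta(days=1)).replace(
        hour=start_hour, minute=0, second=0, microsecond=0)
    return start, start + timedelta(hours=duration_hours)
```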

Good Manners

A digital assistant should show consideration for the user's time and concerns. Here are a few aspects of good manners to build into your digital assistant.

Small Talk

Not only is small talk a natural part of human conversation, but people also initiate small talk with digital assistants. With digital assistants it actually has practical uses, such as:

  • Verifying that it is a bot behind the chat interface and not a human.
  • Discovering what the digital assistant can do.
  • Expressing frustration.

For example, if the user enters expletives, this may be a cue that the digital assistant can use to apologize, connect the user with a human agent, or otherwise try to remediate the problem.

At the very least, you should be able to handle small talk on a basic level. If you handle it well, it makes your digital assistant appear smarter, which helps user confidence in the digital assistant.

Don't Assign Blame

Be careful not to assign blame to users (whether explicitly or implicitly) when they enter something incorrectly or do something else to interrupt progress in the conversation. In such cases, the phrasing should focus on where the digital assistant is having difficulties, not on what the user did incorrectly.

For example, the response "That is an incorrect Order ID" subtly implies that the problem is the user's fault, which might cause irritation or offense (and might not even be true). A better response would be "I couldn't find an order with that number".

Use of Empathy

You can use empathy and humor to make the digital assistant more personal, but be judicious and don't overdo it. The costs of misunderstandings are much greater than any benefits.

For example, if a user of a conference registration digital assistant enters "I won't be able to make it to the conference", the following might seem like a reasonable beginning of a response: “I’m sorry to hear that”. But if the user instead says "I won't be able to make it to the conference because my daughter is due to deliver a baby", the response won't seem empathetic at all!


Keep messages short and to the point. (Be considerate of the user's time and screen real estate.)

If your channel supports links, you may want to provide links to external content.

Keep Interactions Short

To get things done, think about the shortest path from the start to the end of a conversation. Use whatever options you have to skip a stop in a conversation. Here are two options to consider:

  • Use entity slotting and guide users on how to include some, if not all, of the information needed for a task in the first message.

    In dialog flows designed in Visual mode, Common Response and Resolve Entity components automatically extract entity values provided by users in their initial message and don't prompt for those provided values.

    In dialog flows designed in YAML mode, you can use the nlpResultVariable property on input components to enable this automatic slotting.

  • Allow users to provide additional information when prompted. For example, in a pizza order bot, when the user is asked for the pizza size, why not also accept the pizza type and toppings? Out-of-order information extraction can easily be implemented with composite bag entities.
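
The slotting idea (fill every value the user already provided, then prompt only for what is missing) can be sketched like this. This is a toy illustration: a real skill would use trained entities such as a composite bag, not regular expressions, and the slot names and patterns are made up.

```python
import re

# Hypothetical slot patterns for a pizza bot.
SLOTS = {
    "size": r"\b(small|medium|large)\b",
    "type": r"\b(margherita|pepperoni|veggie)\b",
}

def fill_slots(message: str) -> dict:
    # Pick up every slot value present in the message, in any order.
    found = {}
    for name, pattern in SLOTS.items():
        match = re.search(pattern, message.lower())
        if match:
            found[name] = match.group(1)
    return found

def missing_slots(filled: dict) -> list:
    # Only prompt for values the user has not already provided.
    return [name for name in SLOTS if name not in filled]
```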

Don't Design Like It's a Web App

It's likely that your team has experience with developing web applications and thus will also be likely to apply web application paradigms to the digital assistant, whether consciously or subconsciously. Try to avoid this! The point of a digital assistant is for a user to complete a task with natural language, not to put a web app in a smaller window.

Here are a couple of things to consider:

  • Don't use terms that sound like database field names in a response.

    For example, instead of responding with "invalid Order ID", say something like "I couldn't find that order number."

  • If the user is making a request and there are hundreds of possible solutions, don't respond with hundreds of rows of data for them to scroll through. Think of ways you can help the user narrow down their request before presenting them with a more concise list.

    For example, if you enter a wine shop and ask for a bottle of wine, the merchant won't name every bottle that she has. She'll ask you about your preferences (e.g. red vs. white, regions, and various qualities of the wine) before listing some specific options. Your skill can operate in the same manner.

  • Find ways to collect information for the conversation without querying the user about every detail. For example, you may have a way to determine a user's location without asking. Another example might be asking the user to submit an image (such as a receipt) that provides the required information.

Consider Multi-Language Support

Have you ever wondered why installation instructions for products that are imported from abroad are sometimes translated so poorly? One likely reason is that a translation service was used and the translator was not familiar with the subject or product. Another reason is that certain idioms do not exist or are expressed differently in the target language. Just to give you some examples of what would work in the United States but probably not elsewhere:

  • "under the weather"
  • "hang in there"
  • "we’ll cross that bridge when we get to it"
  • "go Dutch"
  • "call it a day"

To ensure that conversations that are defined for your digital assistant also work when translated, you have a couple of options: annotate any idioms in a resource bundle so that the translator knows the meaning of a message, or don’t use idioms at all. Naturally, machine translation services will not serve you well when translating your bot responses to a foreign language.

Checklist for Conversational Design

  • ☑ Make sure that your conversation design guides users in using the chatbot, regardless of their current experience.
  • ☑ If you can, check with people from the target user group to see if the persona is working or not.
  • ☑ Review your bot messages for technical terms that don’t make much sense to users.

Learn More