Here are the features that you can configure in the Oracle iOS SDK.
Absolute and Relative Timestamps
Feature flag: enableTimestamp
Feature flag: timestampMode
You can enable absolute or relative timestamps for chat messages. Absolute
timestamps display the exact time for each message. Relative timestamps display only on
the latest message and express the time in terms of the seconds, days, hours, months, or
years ago relative to the previous message. The precision afforded by absolute timestamps
makes them ideal for archival tasks, but within the limited context of a chat session,
this precision detracts from the user experience because users must compare timestamps
to find out the passage of time between messages. Relative timestamps allow users to
track the conversation easily through terms like Just Now and A few moments
ago that can be immediately understood. Relative timestamps improve the user
experience in another way while also simplifying your development tasks: because
relative timestamps mark the messages in terms of seconds, days, hours, months, or years
ago, you don't need to convert them for timezones.
Configure Relative Timestamps
To add a relative timestamp, enableTimestamp must be
enabled (true) and timestampMode, which controls the
style of timestamp, must be set to timestampMode.relative. When you set
timestampMode.relative, an absolute timestamp displays before the
first message of the day as a header. This header displays when the conversation has not
been cleared and older messages are still available in the history.
The relative timestamp on the latest message is then updated at the following regular intervals
until a new message is received:
- For the first 10 seconds
- Between 10 and 60 seconds
- Every minute between 1 and 60 minutes
- Every hour between 1 and 24 hours
- Every day between 1 and 30 days
- Every month between 1 and 12 months
- Every year after the first year
When a new message is loaded into the chat, the relative timestamp on the previous
message is removed and a new timestamp appears on the new message displaying the time
relative to the previous message. At that point, the relative timestamp updates until
the next message arrives.
Actions Layout
Use the BotsProperties.actionsLayout configuration settings to
display the action buttons in horizontal or vertical layouts. The layout can be set for
local actions, global actions, card actions, and form actions. The default value is
horizontal for all action types.
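As a sketch of this configuration, the layout for each action type might be set as follows (the ActionsLayout initializer and its parameter names are assumptions drawn from the description above; verify them against the SDK's API docs):

```swift
// Hypothetical sketch: keep local and card actions horizontal (the default)
// while stacking global and form actions vertically.
BotsProperties.actionsLayout = ActionsLayout(local: .horizontal,
                                             global: .vertical,
                                             card: .horizontal,
                                             form: .vertical)
```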
For skills integrated with live agent support, the agentAvatar
setting enables the display of an avatar icon for the messages sent by the agents. You
configure this with the URL of the icon that displays alongside the agent messages.
Dynamically Update Avatars and Agent Details
You can enable the user and agent avatars to be dynamically updated at runtime using the
setUserAvatar(avatarAsset : String),
setAgentDetails(agentDetails: AgentDetails), and
getAgentDetails() APIs.
Set the User Avatar
The setUserAvatar(avatarAsset : String) method enables the dynamic
updating of the user avatar at runtime. This method sets the user avatar for all the
messages, including previous messages. The avatarAsset can be:
- The name of the asset from the project Assets folder.
- An external link to the image source.
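A minimal sketch of both cases, assuming setUserAvatar is exposed on the same shared BotsUIManager singleton used for setAgentDetails below (the asset name and URL are placeholders):

```swift
// Pass the name of an asset from the project Assets folder...
BotsUIManager.shared().setUserAvatar(avatarAsset: "userIcon")

// ...or pass an external link to the image source.
BotsUIManager.shared().setUserAvatar(avatarAsset: "https://picsum.photos/200")
```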
You can customize the agent details using the setAgentDetails(agentDetails:
AgentDetails) API. Along with the agent name, you can use this API to
customize the text color and the avatar. If no agent avatar has been
configured, the avatar can be configured dynamically with the agent name initials. You
can also customize the color of these initials and the background color. The
getAgentDetails() API retrieves the current agent details.
Although these APIs can be called at any time, we recommend using them
with either the onReceiveMessage() or beforeDisplay()
events.
setAgentDetails(agentDetails: AgentDetails)
To override the agent details received from the server, use this API as follows:
Note
All of the parameters of the AgentDetails object are optional.

// To override the avatar, name, and name text color
let agentDetails = AgentDetails(name: "Bob", avatarImage: "https://picsum.photos/200/300", nameTextColor: .red)

// To override the avatar and name
let agentDetails = AgentDetails(name: "Bob", avatarImage: "https://picsum.photos/200/300")

// To override the name, name text color, avatar initials color, and avatar background color
let agentDetails = AgentDetails(name: "Bob", nameTextColor: .red, avatarTextColor: .blue, avatarBackgroundColor: .green)

BotsUIManager.shared().setAgentDetails(agentDetails: agentDetails)
Additionally, each property of the AgentDetails object can be modified.
For example:

let agentDetails = BotsUIManager.shared().getAgentDetails()
Attachment Filtering
Feature flag: shareMenuConfiguration
Use shareMenuConfiguration to restrict, or filter, the item types
that are available in the share menu popup, set the file size limit in KB for uploads,
and customize the menu's icons and labels. The
default and the maximum limit is 25 MB.
Note
Before you can configure shareMenuConfiguration, you must set
enableAttachment to true.
For the types, you have to use the CFString for the corresponding file type
and convert it to String. Any other string is not valid. You can
allow users to upload all file types by setting the types as
String(kUTTypeItem).
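For illustration, here is a sketch of a configuration that restricts the menu to visual media plus a custom PDF item capped at 1024 KB. The ShareMenuCustomItem initializer, its parameter names, and the ShareMenuItem case are assumptions, not confirmed API; check the SDK's API docs for the exact shapes:

```swift
import MobileCoreServices

// Hypothetical sketch: one built-in menu item and one custom item.
// Passing empty arrays for both elements would allow all file types.
let pdfItem = ShareMenuCustomItem(types: [String(kUTTypePDF)],
                                  label: "Upload PDF",
                                  maxSize: 1024) // size limit in KB
botsConfiguration.shareMenuConfiguration = ([ShareMenuItem.visual], [pdfItem])
```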
public func shareMenuItems(shareMenuItems: ([ShareMenuItem], [ShareMenuCustomItem]))
You can dynamically update the share menu items popup by calling the
BotsManager.shared().shareMenuItems(shareMenuItems: ([ShareMenuItem],
[ShareMenuCustomItem])) API.
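A sketch of such a dynamic update (the ShareMenuCustomItem parameter names are assumptions; String(kUTTypeItem) for all file types follows the note above):

```swift
// Hypothetical sketch: replace the share menu with a single custom item
// that accepts any file type.
let allFiles = ShareMenuCustomItem(types: [String(kUTTypeItem)], label: "All Files")
BotsManager.shared().shareMenuItems(shareMenuItems: ([], [allFiles]))
```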
public func shareMenuItems() -> ([ShareMenuItem], [ShareMenuCustomItem])
You can get the share menu items list by calling the
BotsManager.shared().shareMenuItems() API.

BotsManager.shared().shareMenuItems()
Auto-Submitting a Field
When a field has the autoSubmit property set to
true, the client sends a
FormSubmissionMessagePayload with the
submittedField map containing the valid field values that
have been entered so far. Any fields that are not yet set (regardless of whether they
are required), or fields that violate a client-side validation, are not included in the
submittedField map. If the auto-submitted field itself contains a
value that's not valid, then the submission message is not sent and the client error
message displays for that particular field. When an auto-submit succeeds, the
partialSubmitField in the form submission message is set to
the id of the autoSubmit field.
Connect, Disconnect, and Destroy Methods
The skill can be connected or disconnected, or the SDK can be destroyed, using the
public func destroy(), public func
disconnect(), and the public func
connect() methods.
public func destroy()
Destroys the SDK by closing any active connection, stopping voice recognition, speech
synthesis, and file uploads, and removing the SDK view controller. Once called, none of
the public API methods can be called. They become active again only after the
initialize(botsConfiguration: BotsConfiguration, completionHandler:
@escaping (ConnectionStatus, Error?) -> ()) method is called again to
initialize the SDK.
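Like the other lifecycle methods in this section, it is called on the shared manager:

```swift
// Tear down the SDK; re-initialize before calling any other public API.
BotsManager.shared().destroy()
```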
public func disconnect()
All network connections are closed after calling the disconnect
method.
BotsManager.shared().disconnect()
public func connect()
The web socket connection is established if the skill was in a disconnected
state.
BotsManager.shared().connect()
public func connect(botsConfiguration: BotsConfiguration)
When this method is called with a new BotsConfiguration, the
existing web socket connection is closed, and a new connection is established using the
new channel properties. Other properties set in BotsConfiguration
remain as is.
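For example, to reconnect with updated channel properties (newBotsConfiguration is a placeholder for a configuration object you have already built):

```swift
// newBotsConfiguration is assumed to be a freshly built BotsConfiguration
// carrying the updated channel properties.
BotsManager.shared().connect(botsConfiguration: newBotsConfiguration)
```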
Default Client Responses
Feature flag: enableDefaultClientResponse
Use enableDefaultClientResponse: true to provide default
client-side responses accompanied by a typing indicator when the skill response has been
delayed, or when there's no skill response at all. If the user sends out the first
message or query, but the skill does not respond within the number of seconds set by
defaultGreetingTimeout, the skill can display a greeting message
that's configured using the odais_default_greeting_message translation
string. Next, the client checks again for the skill's response. The client displays the
skill's response if it has been received, but if it hasn't, then the client displays a
wait message (configured with the odais_default_wait_message
translation string) at intervals set by the defaultWaitMessageInterval
flag. When the wait for the skill response exceeds the threshold set by the
typingStatusTimeout flag, the client displays a sorry response to
the user and stops the typing indicator. You can configure the sorry response using the
odais_default_sorry_message translation string.
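A sketch of these flags set together on the configuration object (the timeout values are illustrative, and the exact placement of each property on BotsConfiguration should be verified against the SDK's API docs):

```swift
// Hypothetical sketch of the default-client-response flags.
botsConfiguration.enableDefaultClientResponse = true
botsConfiguration.defaultGreetingTimeout = 5.0      // seconds before the greeting shows
botsConfiguration.defaultWaitMessageInterval = 5.0  // seconds between wait messages
botsConfiguration.typingStatusTimeout = 30.0        // seconds before the sorry message
```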
Delegation
The delegation feature lets you set a delegate to receive callbacks before certain
events in the conversation. To set a delegate, a class must conform to the
BotsMessageServiceDelegate protocol and implement the following
methods:
public func beforeDisplay(message: [String: Any]?) -> [String: Any]?
This method allows a skill's message payload to be modified before it is
displayed in the conversation. The message payload returned by the method is
used to display the message. If it returns nil, then the
message is not displayed.
public func beforeSend(message: [String: Any]?) -> [String: Any]?
This method allows a user message payload to be modified before it is sent to the
chat server. The message payload returned by the method is sent to the skill. If it
returns nil, then the message is not sent.
public func beforeSendPostback(action: [String: Any]?) -> [String: Any]?
This method allows a postback action payload to be modified before it is sent to
the chat server. The action payload returned by the method is sent to the skill. If it
returns nil, then the postback action selected by the user is not sent
to the chat server.
public class ViewController: UIViewController, BotsMessageServiceDelegate {
    func beforeSend(message: [String : Any]?) -> [String : Any]? {
        // Handle before send delegate here
        return message // return the (possibly modified) payload, or nil to suppress it
    }

    func beforeDisplay(message: [String : Any]?) -> [String : Any]? {
        // Handle before display delegate here
        return message
    }

    func beforeSendPostback(action: [String : Any]?) -> [String : Any]? {
        // Handle before send postback action delegate here
        return action
    }
}
The instance, which conforms to the BotsMessageServiceDelegate protocol,
should be assigned to the BotsManager.shared().delegate property as
shown in the following code snippet for initializing the SDK:

BotsManager.shared().delegate = self
End the Chat Session
Feature flag: enableEndConversation
enableEndConversation, when set to true, adds a
close button to the header view that enables users to explicitly end the current chat
session. A confirmation prompt dialog opens when users click this close button. When
they confirm the close action, the SDK sends an event message to the skill that marks
the end of the chat session. The SDK then disconnects the skill from the instance,
collapses the chat widget, and erases the current user's conversation history. The SDK
also raises a chatend event in the BotsEventListener
protocol that you can implement.
Opening the chat widget afterward starts a new chat session.
Tip:
The conversation can also be ended by
calling the BotsManager.shared().endChat() method, which you can use
when the SDK is initialized in headless mode.
Headless SDK
The SDK can be used without its UI. The SDK maintains the connection to the server and
provides APIs to send messages, receive messages, and get updates for the network status
and for other services. You can use the APIs to interact with the SDK and update the
UI.
You can send a message using any of the send() APIs
available in the BotsManager class. For example, public func
send(message: UserMessage) sends a text message to the skill or digital
assistant.
public func send(message: UserMessage)
This function sends a message to the skill. Its message
parameter is an instance of a class which conforms to the UserMessage
class. In this case, it is UserTextMessage.

BotsManager.shared().send(message: UserTextMessage(text: "I want to order a pizza", type: .text))
BotsEventListener
To listen for connection status changes, messages received from the skill, and
attachment upload status events, a class can implement the
BotsEventListener protocol, which includes the following methods:
- onStatusChange(connectionStatus: ConnectionStatus) - This method is called when the WebSocket connection status changes. Its connectionStatus parameter is the current status of the connection. Refer to the API docs included in the SDK for more details about the ConnectionStatus enum.
- onReceiveMessage(message: BotsMessage) - This method is called when a new message is received from the skill. Its message parameter is the message received from the skill. Refer to the API docs included in the SDK for more details about the BotsMessage class.
- onUploadAttachment(message: BotsAttachmentMessage) - This method is called when an attachment upload has completed. Its message parameter is the BotsAttachmentMessage object for the uploaded attachment.
- onDestroy() - This method is called when the destroy() method is called.
- onInitialize() - This method is called when the initialize(botsConfiguration: BotsConfiguration, completionHandler: @escaping (ConnectionStatus, Error?) -> ()) method is called.
- onChatLanguageChange(newLanguage: SupportedLanguage) - This method is called when the chat language changes. Its newLanguage parameter is the SupportedLanguage object for the newly set chat language.
- beforeEndConversation() - This method is called when the end conversation session is initiated.
- chatEnd() - A callback method triggered after the conversation has ended successfully.
extension ViewController: BotsEventListener {
    func onReceiveMessage(message: BotsMessage) {
        // Handle the messages received from the skill or digital assistant
    }

    func onUploadAttachment(message: BotsAttachmentMessage) {
        // Handle the post attachment upload actions
    }

    func onStatusChange(connectionStatus: ConnectionStatus) {
        // Handle the connection status change
    }

    func onInitialize() {
        // Handle initialization
    }

    func onDestroy() {
        // Handle destroy
    }

    func onChatLanguageChange(newLanguage: SupportedLanguage) {
        // Handle the language change
    }

    func beforeEndConversation(completionHandler: @escaping (EndConversationStatus) -> Void) {
        // Do the desired cleanup before the session is closed, then report the
        // outcome: pass .success when cleanup succeeded, or the error status if not.
        completionHandler(.success)
    }

    func chatEnd() {
        // Handle successful session end from the server before the SDK is destroyed
    }
}
The instance which conforms to the BotsEventListener protocol should be
assigned to the BotsManager.shared().botsEventListener property as
illustrated in the following code snippet for initializing the SDK:

BotsManager.shared().botsEventListener = self
In-Widget Webview
UI Property: LinkHandler
You can configure the link behavior in chat messages to allow users to
access web pages from within the chat widget. Instead of having to switch from the
conversation to view a page in a tab or separate browser window, a user can remain in
the chat because the chat widget opens the link within a webview.
Configure the In-Widget Webview
UI Property: WebViewConfig
You can set the webview configuration by setting the LinkHandler
property to LinkHandlerType.webview.
WebViewConfig can be set to a WebViewConfiguration
struct instance.
BotsProperties.LinkHandler = LinkHandlerType.webview
//Set the properties which you want changed from the default values.
BotsProperties.WebViewConfig.webViewSize = WebViewSize.full
BotsProperties.WebViewConfig.clearButtonLabelColor = UIColor.black
As illustrated in this code snippet, you can set the following attributes for the
webview:
- webViewSize - Sets the screen size of the in-widget webview window with the WebviewSize attribute, which has two values: partial (WebviewSize.partial) and full (WebviewSize.full).
- clearButtonLabel - Sets the text used for the clear/close button in the top right corner of the webview. The default text is taken from the string set to odais_done in the Localizable.strings file.
- clearButtonIcon - Sets an icon for the clear button, which appears left-aligned inside the button. By default, there's no icon set for the clear button; it's an empty string.
- clearButtonLabelColor - Sets the color of the text of the clear button label. The default color is UIColor.white.
- clearButtonColor - Sets the background color for the clear button. The default color is UIColor.clear.
- webviewHeaderColor - Sets the background color for the webview header.
- webviewTitleColor - Sets the color of the title in the header. The title is the URL of the web link that has been opened.
Message Timestamp Formatting
The timestampFormat flag formats timestamps that display in the
messages. It can accept a string consisting of format tokens like
"hh:mm:ss" and other formats supported by the Swift DateFormatter.
Multi-Lingual Chat
Feature flag: multiLangChat
The iOS SDK's native language support enables the chat widget to detect a user's
language or allow users to select the conversation language. Users can switch between
languages, but only between conversations, not during a conversation, because the
conversation is reset whenever a user selects a new language.
Enable the Language Menu
You can enable a menu that allows users to select a preferred language from a
dropdown menu by defining the multiLangChat property with an object
containing the supportedLanguages array, which is comprised of language
tags (lang) and optional display labels (label).
Outside of this array, you can optionally set the default language with the
primaryLanguage variable (MultiLangChat(primaryLanguage: String)).
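A sketch of this configuration (the SupportedLanguage and MultiLangChat initializers shown here are assumptions drawn from the property names above; verify them against the SDK's API docs):

```swift
// Hypothetical sketch: a dropdown with English and French, defaulting to English.
botsConfiguration.multiLangChat = MultiLangChat(
    supportedLanguages: [
        SupportedLanguage(lang: "en", label: "English"),
        SupportedLanguage(lang: "fr", label: "French")
    ],
    primaryLanguage: "en"
)
```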
To properly format language and region codes in localizable .lproj (localization
project) files, use a dash (-) as the separator, not an underscore
(_). For example, use fr-CA, not fr_CA. This aligns with how the
.lproj files are created in the app. When the SDK searches for
an .lproj file, it first tries to locate one with the exact
languageCode-Region.lproj format. If it can't find such a file,
the SDK searches for a languageCode.lproj file. If that is also not
found, the SDK searches for a base.lproj file. When none of these
can be located, the SDK defaults to using English (en).
The chat widget displays the passed-in supported languages in a dropdown menu that's located in the
header. In addition to the available languages, the menu also includes a
Detect Language option. When a user selects a language from
this menu, the current conversation is reset, and a new conversation is started with the
selected language. The language selected by the user persists across sessions in the
same browser, so the user's previous language is automatically selected when the user
revisits the skill through the page containing the chat widget.
You can add an event listener for the onChatLanguageChange event, which
is triggered when a chat language has been selected from the dropdown menu or has been
changed.
Here are some things to keep in mind when configuring multi-language
support:
You need to define a minimum of two languages to enable the
dropdown menu to display.
If you omit the primaryLanguage attribute, the
widget automatically detects the language in the user profile and selects
the Detect Language option in the menu.
The label key is optional for the natively
supported languages: fr displays as
French in the menu, es displays as
Spanish, and so on.
While label is optional, if you've added a language
that's not one of the natively supported languages, then you should add a label
to identify the tag. For example, if you don't define label:
'हिंदी' for the lang: "hi", then the dropdown
menu displays hi instead, contributing to a suboptimal
user experience.
Disable Language Menu
Starting with Version 21.12, you can also configure and update the chat language
without also having to configure the language selection dropdown menu by passing
MultiLangChat(primaryLanguage: String).
Language Detection
In addition to the passed-in languages, the chat widget displays a
Detect Language option in the dropdown menu.
Selecting this option tells the skill to automatically detect the
conversation language from the user's message and, when possible, to respond
in the same language.
You can dynamically update the selected language by calling the
BotsManager.shared().setPrimaryLanguage(primaryLanguage:
String) API. If the passed lang matches
one of the supported languages, then that language is selected. When no
match can be found, Detect Language is activated. You
can also activate the Detect Language option by
calling the BotsManager.shared().setPrimaryLanguage(primaryLanguage:
"und") API, where "und" indicates
undetermined, or by passing primaryLanguage: nil.
You can update the chat language dynamically using the
setPrimaryLanguage(primaryLanguage: String) API
even when the dropdown menu has not been configured.
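For instance, the calls described above look like this:

```swift
// Switch the chat language to French.
BotsManager.shared().setPrimaryLanguage(primaryLanguage: "fr")

// Activate language detection ("und" indicates undetermined).
BotsManager.shared().setPrimaryLanguage(primaryLanguage: "und")
```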
Multi-Lingual Chat Quick Reference
To do this... / ...Do this:
- Display the language selection dropdown menu to end users: Define the multiLangChat property with the supportedLanguages array (a minimum of two languages).
- Activate the Detect Language option: Pass primaryLanguage:nil or primaryLanguage:"und".
- Dynamically update the chat language: Call the setPrimaryLanguage(primaryLanguage: String) API.
Replacing a Previous Input Form
When the end user submits the form (for example, because a field has
autoSubmit set to true), the skill can send a new
EditFormMessagePayload. That message should replace the previous
input form message. By setting the replaceMessage channel extension
property to true, you enable the SDK to replace the previous input form
message with the current input form message.
Share Menu Options
By default, the share menu displays options for the following file types:
- visual media files (images and videos)
- audio files
- general files like documents, PDFs, and spreadsheets
- location
The shareMenuConfiguration setting allows you to restrict the
items that display in the share menu. By passing a tuple of arrays to
shareMenuConfiguration -- shareMenuConfiguration =
([ShareMenuItem], [ShareMenuCustomItem]) -- you can restrict, or filter,
the type of items that are available in the menu, customize the menu's icons and labels,
and limit the upload file size. The tuple has an array of share menu options of type
ShareMenuItem and an array of share menu options of type
ShareMenuCustomItem. Pass either as an empty array to allow all file
types.
public func shareMenuItems(shareMenuItems: ([ShareMenuItem], [ShareMenuCustomItem]))
You can enable dynamic updating of the menu using this API.
Speech Recognition
Feature flag: enableSpeechRecognition
Setting the enableSpeechRecognition feature flag to
true enables the microphone button to display in place of the send
button whenever the user input field is empty. The speech is converted to text and sent
to the skill or digital assistant. If the speech is partially recognized, then the
partial result is displayed in a popup that's opened by clicking the microphone
button.
Setting this property to true also supports the
functionality enabled by the enableSpeechRecognitionAutoSend property,
which, when also set to true, enables the user's speech response to be
sent to the chat server automatically while displaying the response as a sent message in
the chat window. You can allow users to first edit (or delete) their dictated messages
before they send them manually by setting
enableSpeechRecognitionAutoSend to false.
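A sketch of these flags set together on the configuration object (property placement on BotsConfiguration follows the snippets elsewhere in this chapter):

```swift
// Show the microphone button when the input field is empty.
botsConfiguration.enableSpeechRecognition = true
// Let users review dictated text before sending it themselves.
botsConfiguration.enableSpeechRecognitionAutoSend = false
```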
Speech recognition is utilized through the following methods:
- startRecording() - Starts recording the user's voice message.
- stopRecording() - Stops recording the user's voice message.
- isRecording() - Checks whether the voice recording has started or not. Returns true if the recording has started. Otherwise, it returns false.
The onSpeechResponseReceived(data: String, final: Bool) function from
the BotsEventListener protocol can be used to handle all the responses
from the speech server.

BotsManager.shared().startRecording()
if (BotsManager.shared().isRecording()) {
    BotsManager.shared().stopRecording() // Stop voice recording
}
Speech Synthesis
Feature flag: enableSpeechSynthesis
The SDK has been integrated with speech synthesis to read the skill's message aloud
when a new message is received from the skill.
You enable this feature by setting the
enableSpeechSynthesis feature flag to true.
You can set the preferred language that reads the skill's messages
aloud with the speechSynthesisVoicePreferences property. This
property enables a fallback when the device doesn't support the preferred
language or voice. If the device does not support the preferred voice, then the
default voice for the preferred language is used instead. When neither the
preferred voice nor the language is supported, then the default voice and language
are used.
public func speak(text: String)
Starts reading the skill's response aloud. Its text parameter is
the text for the skill's message that's read aloud.
BotsManager.shared().speak(text: "What kind of crust do you want?")
public func stopSpeech()
Stops reading the skill's response aloud.
BotsManager.shared().stopSpeech()
Speech Service Injection
Feature flag: ttsService
The ttsService feature flag allows you to inject any text-to-speech
(TTS) service -- your own, or one provided by a third-party vendor -- into the SDK. To
inject a TTS service, you must first set the enableSpeechSynthesis
feature flag to true and then pass an instance of the
TTSService interface to the ttsService flag.
The TTSService Protocol
You create an instance of a class that's an implementation of the
TTSService interface. It implements the following methods:
speak(text: String) - This method adds the text
that's to be spoken to the utterance queue. Its text parameter
is the text to be spoken.
isSpeaking() - This method checks whether or not
the audio response is being spoken. It returns false if no
audio response is being spoken.
stopSpeech() - This method stops any ongoing
speech synthesis.
class CustomTTSService: TTSService {
    func speak(text: String) {
        // Adds text to the utterance queue to be spoken
    }

    func stopSpeech() {
        // Stops any ongoing speech synthesis
    }

    func isSpeaking() -> Bool {
        // Checks whether the bot audio response is being spoken or not.
        // Placeholder so the stub compiles; return the real playback state here.
        return false
    }
}
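Once implemented, the custom service is handed to the SDK through the two flags described above (this wiring is a sketch based on those flag names; confirm the property placement against the SDK's API docs):

```swift
// Speech synthesis must be on before a TTS service can be injected.
botsConfiguration.enableSpeechSynthesis = true
botsConfiguration.ttsService = CustomTTSService()
```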
Typing Indicator for User-Agent Conversations
Feature flag: enableSendTypingStatus
When this flag is enabled, the SDK sends a RESPONDING typing
event along with the text that's currently being typed by the user to Oracle B2C
Service or Oracle Fusion
Service. This shows a typing indicator on the agent console. When the user has finished
typing, the SDK sends a LISTENING event to the service. This hides the
typing indicator on the agent console.
Similarly, when the agent is typing, the SDK receives a
RESPONDING event from the service. On receiving this event, the SDK
shows a typing indicator to the user. When the agent is idle, the SDK receives
LISTENING event from the service. On receiving this event, the SDK
hides the typing indicator that's shown to the user.
The sendUserTypingStatus API enables the same behavior for
headless mode.

public func sendUserTypingStatus(status: TypingStatus, text: String? = nil)
To show the typing indicator on the agent
console:
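The calls below sketch this (the TypingStatus case names follow the RESPONDING and LISTENING events described above):

```swift
// Notify the agent console that the user is typing, along with the draft text.
BotsManager.shared().sendUserTypingStatus(status: .RESPONDING, text: "Hello")

// Notify the agent console that the user has stopped typing.
BotsManager.shared().sendUserTypingStatus(status: .LISTENING)
```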
To control the user-side typing indicator, use the
onReceiveMessage() event. For example:

public func onReceiveMessage(message: BotsMessage) {
    if message is AgentStatusMessage {
        if let status = message.payload["status"] as? String {
            switch status {
            case TypingStatus.LISTENING.rawValue:
                hideTypingIndicator()
            case TypingStatus.RESPONDING.rawValue:
                showTypingIndicator()
            default:
                break
            }
        }
    }
}
There are two more settings in BotsConfiguration that
provide additional control:
- typingStatusInterval - By default, the SDK sends the RESPONDING typing event every three seconds to Oracle B2C Service. Use this flag to throttle this event. The minimum value that can be set is three seconds.
- enableAgentSneakPreview - Oracle B2C Service supports showing the user text as it's being entered. If this flag is set to true (the default is false), then the SDK sends the actual text. To protect user privacy, the SDK sends "…" instead of the text to Oracle B2C Service when the flag is set to false.
Voice Visualizer
When voice support is enabled (botsConfiguration.enableSpeechRecognition =
true), the footer of the chat widget displays a voice visualizer, a dynamic
visualizer graph that indicates the frequency level of the voice input. The visualizer
responds to the modulation of the user's voice by indicating whether the user is
speaking too softly or too loudly. This visualizer is created using Swift's
AVAudioEngine, which is exposed in the
onAudioReceived method in the SpeechEventListener
protocol for use in headless mode.
The chat widget displays a voice visualizer when users click the voice icon.
It's an indicator of whether the audio level is sufficiently high for the SDK to
capture the user's voice. The user's message, as it is recognized as text, displays
below the visualizer.
Note
Voice mode is indicated when the keyboard icon appears.
When botsConfiguration.enableSpeechRecognitionAutoSend =
true, the recognized text is automatically sent to the skill after the user
has finished dictating the message. The mode then reverts to text input. When
botsConfiguration.enableSpeechRecognitionAutoSend = false, the
mode also reverts to text input, but in this case, users can modify the recognized text
before sending the message to the skill.