Lab 03 Watson Assistant
Cognitive Solutions Application Development
Table of Contents
Overview................................................................................................................................................ 3
Objectives.............................................................................................................................................. 3
Prerequisites.......................................................................................................................................... 3
Section 1: Create a Conversation Dialog in IBM Cloud..........................................................................4
Create a Conversation Service in IBM Cloud....................................................................................4
Create a Watson Assistant Skill........................................................................................................ 5
Create Intents................................................................................................................................... 8
Create entities................................................................................................................................. 11
Create dialogs................................................................................................................................. 12
Build your CSAD Assistant dialog................................................................................................... 15
Section 2: Backend Integration with Cloud Functions..........................................................................29
Section 3: Create an Assistant from our Skill.......................................................................................36
Section 4: Test your Conversation using Postman (REST Client) – Watson Assistant API V1............38
Section 5: Create a Node.js Express application.................................................................................43
Running the application locally....................................................................................................... 43
Test the application......................................................................................................................... 50
Process user input to detect intents and perform app actions........................................................52
(Optional) Deploy the application to IBM Cloud..............................................................................55
Overview
The IBM Watson Developer Cloud (WDC) offers a variety of services for developing
cognitive applications. Each Watson service provides a Representational State
Transfer (REST) Application Programming Interface (API) for interacting with the
service. Software Development Kits (SDKs) are also available and provide high-level
wrappers for the underlying REST API. Using these SDKs will allow you to speed up
and simplify the process of integrating cognitive services into your applications.
The Watson Assistant (formerly Conversation) service combines a number of
cognitive techniques to help you build and train a bot - defining intents and entities
and crafting dialog to simulate conversation. The system can then be further refined
with supplementary technologies to make the system more human-like or to give it a
higher chance of returning the right answer. Watson Assistant allows you to deploy a
range of bots via many channels, from simple, narrowly focused bots to much more
sophisticated, full-blown virtual agents across mobile devices, messaging platforms
like Slack, or even through a physical robot.
Examples of where Watson Assistant could be used include:
• Add a chat bot to your website that automatically responds to customers’
questions
• Build messaging platform chat bots that interact instantly with channel users
• Allow customers to control your mobile app using natural language virtual
agents
• And more!
Objectives
• Learn how to provision a Watson Assistant service and utilize the web tool
interface
• Learn how to train your chat bot to answer common questions
• Learn how to utilize the Watson Assistant service APIs in Node.js
Prerequisites
Before you start the exercises in this guide, you will need to complete the following
prerequisite tasks:
• Guide – Getting Started with IBM Watson APIs & SDKs
• Create an IBM Cloud account
You need to have a workstation with the following programs installed:
1. Node.js
2. Express – A Node.js framework
npm install -g express-generator
IBM Cloud offers services, or cloud extensions, that provide additional functionality
that is ready to use by your application’s running code.
You have two options for working with applications and services in IBM Cloud. You
can use the IBM Cloud web user interface or the Cloud Foundry command-line
interface. (See Lab 01 on how to use the Cloud Foundry CLI).
Step 1 In a web browser, navigate to https://cloud.ibm.com.
Step 2 Log in with your IBM Cloud credentials. This should be your IBMid.
Step 3 You should start on your dashboard which shows a list of your applications
and services. Scroll down to the All Services section and
click Create Resource.
Step 4 On the left, under Services, click on AI to filter the list and only show the
cognitive services.
Step 6 Review the details for this service. At the top, there will be a description of
the service. At the bottom, you can review the pricing plans. The Lite plan
for this service provides no-cost monthly allowances for workspaces,
intents, and API calls. Enjoy your demo!
COPYRIGHT IBM CORPORATION 2019. ALL RIGHTS RESERVED. LAB 03
Page 4 of 56
Watson Services Workshop
Step 9 In the Credentials section click . You should see the API Key
for your service. Later in this exercise, you will enter this value into a JSON
configuration file for your Node.js application. Feel free to copy them to your
clipboard, to a text file, or just return to this section of the IBM Cloud web in-
terface when the credentials are needed.
Before using the Assistant instance, you will need to train it with the intents,
entities, and/or dialog nodes relevant to your application’s use case. You will
create these items in a Skill in the following steps. A Skill is a container for
the artifacts that define the behavior of your service instance.
The resulting Skill is also available as a JSON backup file on GitHub. In a
Terminal you can download it with the following command:
wget https://raw.githubusercontent.com/iic-dach/csadConversation/lab3_csadConversation/Lab3_skill-CSAD-Demo.json
On the Add Dialog Skill page, on the Import skill tab, you can import this
JSON file.
Step 10 In this guide, you will build a simple Conversation app to demonstrate the
Natural Language Processing capabilities of IBM Watson in a simple chat
interface. You will also learn how to identify a user’s intent and then perform
an action from a third-party application.
Step 13 Open the Skills tab. Here you can create new Skills.
Step 14 First, you will need to create a new workspace. Workspaces enable you to
maintain separate intents, user examples, entities, and dialogs for each
application. Click
Step 15 On the Create skill page select Dialog skill and click
Step 16 On the Create Dialog Skill page, on the Create skill tab, enter the following
values and click .
Field Value
Name CSAD Demo
Description For demos only
Language We use English(U.S.) for this demo
Step 17 Once the Skill has been created, you will be redirected to it. However,
before proceeding, you will need to know how to identify the Skill so that it
can be referenced by future applications. At the left, click Skills.
Step 18 On the tile for your new Skill, click Options → View API Details.
Step 19 Locate the Workspace ID. You will need this value in future steps when
creating a JSON configuration file for your demo application. Feel free to
copy the value or just return to this section of the Conversation tooling web
interface when the ID is needed.
Step 21 Click on the Skill tile to be taken back to the new Skill.
Create Intents
Before using the new conversation, you will need to train it with the intents,
entities, and/or dialog nodes relevant to your application’s use case. An
intent is the purpose or goal of a user’s input.
Step 22 First, you will need to define some intents to help control the flow of your
dialog. An intent is the purpose or goal of a user’s input. In other words,
Watson will use natural language processing to recognize the intent of the
user’s question/statement to help it select the corresponding dialog branch
for your automated conversation. If not already there, click the Intents tab
at the left of your workspace.
Field Value
Intent name hello
Description Greeting for app
Step 24 The user examples are phrases that will help Watson recognize the new
intent. (Enter multiple examples by pressing “Enter” or by clicking
Add example.) When finished, click at the top of the page.
Field Value
User example Good morning
Greetings
Hello
Hi
Howdy
Step 25 You should see your new intent listed on the Intents page. The number you
see will indicate the total number of user examples that exist for that intent.
Step 26 Repeat the previous steps to create a new intent that helps the user end the
conversation.
Field Value
Intent name goodbye
Description Goodbye
User example Bye
Goodbye
Farewell
I am leaving
See you later
Step 27 Repeat the previous steps to create a new intent that helps the user ask for
help. In the programming section of this guide, you will learn how to identify
the user’s intent (in a third-party application) so that you can perform the
requested action.
Field Value
Intent name Help-Misc
Description Help
User example I have a request
I would like some assistance
I need information
I have a problem
I need help
Step 28 Repeat the previous steps to create a new intent that helps the user issue
commands to turn on a device. In this example, you will assume the user is
interacting with a home/business automation system. The purpose of this
intent is to show you (in the next section) how to associate entities with an
intent when building your dialog tree. Additionally, this intent demonstrates
that the Conversation service can be used for more than just chat bots. It
can be used to provide a natural language interface to any type of system!
Field Value
Intent name turn_on
Description Turn on
User example Arm the security system
Lock the doors
I need lights
Turn on the lights
Start recording
Step 29 At this point, you have defined some intents for the application along with
the example utterances that will help train Watson to recognize and control
the conversation flow.
Create entities
Before using the new Assistant, you will need to train it with the intents,
entities, and/or dialog nodes relevant to your application’s use case. An
entity is the portion of the user’s input that you can use to provide a
different response or action to an intent.
Step 30 Next, you will need to create some entities. An entity is the portion of the
user’s input that you can use to provide a different response or action to an
intent. These entities can be used to help clarify a user’s questions/phrases.
You should only create entities for things that matter and might alter the way
a bot responds to an intent. If not already there, click the Entities tab at the
left of your Skill.
Step 31 On the Entities tab, click . In the dialog window, enter the
following information. In this example, the #turn_on intent will indicate that
the user wants to turn on a device. So, you will need to create a new entity
representing a device. Enter device and click . You will then
provide values (and possibly synonyms) for the various types of devices
that can be turned on. (Enter multiple examples by pressing “Enter” or by
clicking the plus sign at the end of the line.)
Step 32 You should see your new entity listed on the Entities page.
Step 33 Repeat the previous steps to create the following new entity for lights
controlled by the system.
Step 34 Repeat the previous steps to create the following new entity to request the
current time.
Note: You can get the time by using one of the predefined system entities. In
this exercise, go ahead and enter it manually so it can be used to
demonstrate future concepts.
Step 35 At this point, you have defined intents and the associated entities to help
Watson determine the appropriate response to a user’s natural language
input. Your application will be able to:
Create dialogs
Before using the new conversation, you will need to train it with the intents,
entities, and/or dialog nodes relevant to your application’s use case. A dialog
uses the intent and entity that have been identified, plus context from the
application, to interact with the user and provide a response.
Step 36 Next, you will need to create some dialogs. A dialog is a conversational set
of nodes that are contained in a workspace. Together, each set of nodes
creates the overall dialog tree. Every branch on this tree is a different part of
the conversation with a user. If not already there, click the Dialog tab at the
left of your Skill.
Step 37 Review the documentation for creating dialog nodes and for defining
conditions.
Step 38 On the Dialog tab, two default nodes are created for you, named Welcome
and Anything else. The Welcome node is the starting point for this
application’s conversation. That is, if an API query is made without a context
defined, this node will be returned. The Anything else node will be used if
the user input does not match any of the defined nodes.
Field Value
Name Welcome (should be the default)
If bot recognizes: welcome (should be the default)
Then respond with: Welcome to the CSAD Demo!
Step 41 Add two variables and click to close the properties view of
the node.
Variable Value
Alarmonoff off
app_action
Step 42 Expand the Anything else node and review its default values. Close the
view by clicking .
Field Value
Name Anything else (should be the default)
If bot recognizes: anything_else (should be the default)
Then respond with: 1. I didn’t understand. You can try
rephrasing
2. Can you reword your statement? I'm
not understanding.
3. I didn't get your meaning.
(all should be default)
Response variations are sequential.* (Set to random)
* Sequential means 1st response presented at first hit of anything_else node, and so on. Random means any of
the responses is presented randomly.
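The two policies can be sketched in a few lines of JavaScript (an illustration of the behavior, not the service’s actual implementation): sequential cycles through the variations in order on each visit to the node, random picks any one.

```javascript
// Illustration of the two response selection policies described above.
// "sequential": cycle through the variations in order, wrapping around.
// "random": pick any variation on each call.
function makeSelector(variations, policy) {
  let next = 0;
  return function select() {
    if (policy === 'sequential') {
      return variations[next++ % variations.length];
    }
    return variations[Math.floor(Math.random() * variations.length)];
  };
}
```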
Step 44 A test user interface will immediately launch and, based on the Welcome
node, provides a greeting to the end user. (You may see a message that
Watson is being trained.)
Step 45 Since you have not yet defined any other dialog nodes (associated with
your intents and entities), everything typed by the user will get routed to the
Anything else node. For example, type “turn on the lights”.
Although we have defined this phrase in our intents and entities, the bot
does not yet understand it, because we have not yet defined a dialog node
to catch them; the input falls through to the Anything else node.
Step 46 Did you notice the drop-down menu that appeared for your invalid input?
You can optionally assign this phrase to an existing intent (or verify the
correct intent was used). You can use this functionality in the future to keep
Watson trained on new user inputs and to ensure the correct response is
returned. Cool! For now, just proceed to the next step.
Step 47 Click in the top right corner to close the chat pane. Proceed to the next
section.
In this section, you will continue building your demo bot utilizing the intents,
entities, and dialog nodes that you created in previous steps. You will do this
entirely with the web interface (no programming or text/XML file hacking
required!)
Step 48 You should create a dialog branch for each of the intents you identified as
well as the start and end of the conversation. Determining the most efficient
order in which to check conditions is an important skill in building dialog
trees. If you find a branch is becoming very complex, check the conditions
to see whether you can simplify your dialog by reordering them.
Note: It's often best to process the most specific conditions first.
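The note above can be illustrated with a minimal sketch of top-down condition matching: the first node whose condition matches wins, which is why specific conditions must come before a catch-all true node. The node list and the detected object below are simplified stand-ins for what the service actually evaluates.

```javascript
// Dialog nodes are evaluated top-down; the first matching condition wins.
function firstMatchingNode(nodes, detected) {
  return nodes.find(node => node.condition(detected));
}

// Specific condition first, catch-all `true` node last.
const branch = [
  { name: 'Lights', condition: d => d.entities.device === 'lights' },
  { name: 'Invalid Device', condition: () => true }, // catch-all: keep last
];
```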
Step 49 Click the menu on the Welcome node and then click Add node below.
In this step you are creating a new branch in your dialog tree that
represents an alternative conversation.
Step 50 In this new node, enter the following values. By setting the condition to an
intent, you are indicating that this node will be triggered by any input that
matches the specified intent. Then click to close the dialog.
Field Value
Name this node… Hello
If bot recognizes: #hello
Then respond with: Hi! What can I do for you?
Step 51 Click the menu on the Hello node and then click Add node below, with
the following values. Then click to close the dialog.
Field Value
Name this node… Goodbye
If bot recognizes: #goodbye
Then respond with: Until our next meeting.
Step 52 Using the same steps (Step 40 ff) as before, test the conversation by typing
the following chat line(s):
Hello
Farewell
You can clear previous tests by clicking Clear at the top of the dialog.
Step 53 Next, you should create a new node below Hello (Add node below) for the
#turn_on intent. As you’ll recall, you have multiple devices that you might
want to turn on. In earlier steps, you documented these devices using a new
@device entity. This dialog branch will require multiple nodes to represent
each of those devices. In this new node, enter the following values. In this
example, the dialog branch will need additional information to determine
which device needs to be turned on. So, leave the “Responses” field blank.
Click to close.
Field Value
Name this node… Turn on
If bot recognizes: #turn_on
Then respond with:
In the Context Editor (click )
app_action on
Step 54 Click the menu on the Turn On node and click Add child node.
Step 55 In this new node, enter the following values. In this example, the only way
this node will be reached is if the user specifies the @device entity “lights”
(or one of its synonyms). Then click to close the dialog.
Field Value
Name this node… Lights
If bot recognizes: @device:lights
Then respond with: OK! Which light would you like to turn on?
Step 56 In this scenario, you want to automatically move from the Turn On node to
the Lights node without waiting for additional user input.
a) Click on the Turn on node and select Jump to. Now the Turn on node is
selected. Click the Lights node and select If assistant recognizes (condi-
tion).
Step 57 At this point, Watson will ask you which lights to turn on. As you will recall,
in earlier steps you created a @lights entity to define the available lights in
the system. Click the menu on the Lights node, click Add child node. In
this new node, enter the following values. By setting the condition to an
entity, you are indicating that this node will be triggered by any input that
matches the specified entity.
Field Value
Name this node… Light On
If bot recognizes: @lights
Then respond with: OK! Turning on @lights light.
Step 58 Next, you will want Watson to respond if it does not recognize a light, or
entity, provided by the user. Click the menu on the Light On node and
click Add node below to create a new peer node. In this new node, enter
the following values.
Note: The true keyword in If bot recognizes means this node always
executes when it is reached, i.e., whenever none of the sibling conditions
above it (Light On) matched.
Now your tree for the lights should look like the following:
Step 59 You will need to add nodes to deal with the other devices that can be turned
on. As you’ll recall, you defined those devices with the @device entity. The
Turn On node automatically jumps to the Lights node. So, click the
menu on the Lights node and click Add node below to create a new peer
node. Enter the following values.
Field Value
Name this node… Device
If bot recognizes: @device
Step 60 On the Device node click and click Add a child node. Add the following
values
Field Value
Name this node… Device On Off Check
If bot recognizes: true
Step 62 Now add the following three answers in Then respond with: Use the to
customize and enter the values interactively and click . See the
screenshots of each response:
If assistant recognizes:
@device:(security system) AND $app_action=="on" AND $Alarmonoff=="on"
Then respond with:
It looks like the @device is already on.
If assistant recognizes:
@device:(security system) AND $app_action=="on"
Then set context (open the context editor)
$Alarmonoff "on"
And respond with:
I’ll turn on the @device for you.
If assistant recognizes:
@device AND $app_action=="on"
Then respond with:
I’ll switch on the @device for you.
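The three responses above behave like ordered if/else branches over the context variables, with the middle branch also updating the context. A plain-JavaScript sketch of that logic (ctx stands in for the $context variables, the device string for @device; this is an illustration, not the service’s evaluation engine):

```javascript
// First matching branch wins, mirroring the three conditions above.
function deviceResponse(ctx, device) {
  if (device === 'security system' && ctx.app_action === 'on' && ctx.Alarmonoff === 'on') {
    return `It looks like the ${device} is already on.`;
  }
  if (device === 'security system' && ctx.app_action === 'on') {
    ctx.Alarmonoff = 'on'; // the "Then set context" step
    return `I'll turn on the ${device} for you.`;
  }
  if (ctx.app_action === 'on') {
    return `I'll switch on the ${device} for you.`;
  }
  return null;
}
```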
Step 63 Now click on the Device node and click Jump to, then select the
Device On Off Check (If recognizes condition).
Step 64 Next, you will want Watson to respond if it does not recognize a device, or
entity, provided by the user. Click the menu of the Device node and click
Add node below to create another peer node with the following values:
Field Value
Name this node… Invalid Device
If bot recognizes: true
Then respond with: I’m sorry, I don’t know how to do that. I
can turn on lights, radio, or security
systems.
Step 65 Using the same steps as before, click to test the conversation by
typing the following chat line(s):
Hello
The headlights
fog lamp
Note: You could add a context variable for each device/light and control the
on/off behavior as for the security system.
Step 66 Finally, you will want to enable the Conversation service to identify your
#Help-Misc intent. You will make this a part of the overall “Help” system for
the bot. This intent will be used in the upcoming programming exercises to
show you how to perform specific application actions based on a detected
intent and entity. Click the menu on the Turn On node and
click Add node below to create a new peer node. Enter the following
values.
Field Value
Name this node… Help
If bot recognizes: #Help-Misc
Then respond with: How can I help you?
Step 67 Click Add child node on the menu of the Help node. In this new node, enter
the following values. In this example, the only way this node will be reached
is if the user specifies the @help:time entity (or one of its synonyms).
Note: You will provide your own custom response to the query in the
programming exercises.
Field Value
Name this node… Time
If bot recognizes: @help:time
Then respond with: The time will be provided by the server
created later in this tutorial.
Step 68 Click Add node below on the menu of the Time node. In this new node,
enter the following values. By setting the condition to true, you ensure that
this node catches any input that did not match the nodes above it.
Field Value
Name this node… Additional Help
If bot recognizes: true
Then respond with: I’m sorry, I can’t help with that. For
additional help, please call 555-2368.
Step 69 Click on the Help node and select Jump to. Click the Time node and
select Jump to and… Wait for user input.
Step 70 This is already an epic conversation, but feel free to experiment with more
functionality:
Field Value
Intent name book_table
Description Book a table in one of the restaurants
User example I’d like to make a reservation
I want to reserve a table for dinner
Can 3 of us get a table for lunch?
Do you have openings for next Wednesday at 7?
Is there availability for 4 on Tuesday night?
I'd like to come in for brunch tomorrow
Can I reserve a table?
Field Value
Name this node… Book a Table
If bot recognizes: #book_table
g) Now you can enter the slots (Then check for:) for more lines.
j) You can try this (see Step 40 ff) with a few sample inputs, for example:
I want to book a table for 3 on Monday 5pm at the First Street location.
Step 72 Make sure you are logged in with the IBM Cloud CLI (ibmcloud) to your
account and the respective region.
Step 75 Create Credentials for this service with the following command
Step 76 In the cloud console open the Dashboard of your Cloudant service
Step 79 With the following command you can automatically create Cloud Functions
from certain IBM services such as Cloudant, the Weather Service, and the
Watson services, to name a few.
ibmcloud wsk package refresh
Step 80 In your cloud console go to Functions. Then click Actions in the left
menu. You should see the functions that the previous command has
created from some deployed services.
Note: Make sure you have selected, at the top, the same ORG and SPACE
that you are connected to from the Command Line Interface.
Step 82 Select Endpoints and copy the URL and the API-KEY for later use in our
Assistant Skill. (click API-KEY to get the value).
Step 83 In your Assistant Skill, open the Welcome node and click Open JSON
editor. Replace the content with the following:
{
"output": {
"generic": [
{
"values": [
{
"text": "Welcome to CSAD Demo!"
}
],
"response_type": "text",
"selection_policy": "sequential"
}
]
},
"context": {
"private": {
"my_credentials": {
"api_key": "<yourCloudFunctionsAPIKey>"
}
},
"Alarmonoff": "off",
"app_action": ""
}
}
Step 84 Click on the Book a table node and select Add child node.
Field Value
Name this node… Respond to cloud
If bot recognizes: true
Then respond with (Pause) 5000
Then respond with (Text) Great, I have a table for $number people,
(click + to add a response type) on $date at $time at our $locations store.
($my_result.id)
And finally Wait for user input
Step 85 Click on the Book a table node and select Jump to (recognizes
condition). Select the Respond to cloud node.
Step 86 Open the Book a table node and on Open JSON editor and replace the
content with the following:
{
"output": {
"text": {
"values": [],
"selection_policy": "sequential"
}
},
"actions": [
{
"name": "/kpschxxxx.com_dev_gb/actions/Bluemix_kps-cloudin-
troDb_newkey/create-document",
"type": "server",
"parameters": {
"doc": {
"date": "$date",
"time": "$time",
"number": "$number",
"locations": "$locations"
},
"dbname": "reservations"
},
"credentials": "$private.my_credentials",
"result_variable": "$my_result"
}
]
}
The action’s name is the part of the URL from Step 80 after .../namespaces.
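That rule can be expressed as a one-line helper; the endpoint URL below is a hypothetical example, not your own namespace.

```javascript
// The action name is everything in the Functions endpoint URL
// after ".../namespaces".
function actionNameFromUrl(url) {
  return url.split('/namespaces')[1];
}
```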
Note: In case you get an Error Code 400, you do not have legacy
credentials (containing password and port) for your Cloudant service. Add
the following parameters from your Cloudant credentials to the code above:
"dbname": "reservations",
"iamApiKey": "<apikey from your Cloudant credentials>",
"url": "<url from your Cloudant credentials>"
I want to book a table for 3 on Monday 5pm at the First Street location.
Step 88 Go back to your Cloudant dashboard. There you should see the document
that was created.
Step 91 Click . and select our CSAD Demo Skill. With the Integrations
function you can now directly integrate this Assistant into Facebook, Slack
and other applications.
Step 97 In the API reference of the IBM Cloud Watson Assistant service you can
find an overview of the available API calls. The most basic call lists your
workspaces.
See your service credentials for the URL of your region, e.g. gateway-fra for Frankfurt.
Note: The services in the IBM Cloud platform are currently being migrated to
IAM authentication. So if your service was created with an apikey instead
of a userid and password, you specify the string apikey as the userid
and the generated apikey as the password.
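In other words, the request uses ordinary HTTP Basic Auth with the literal username apikey. The Authorization header Postman builds can be reproduced in Node.js like this (the key value is a placeholder):

```javascript
// Basic Auth: base64-encode "username:password" and prefix with "Basic ".
// For IAM-based Watson services, the username is the literal string "apikey".
function basicAuthHeader(username, password) {
  const token = Buffer.from(`${username}:${password}`).toString('base64');
  return `Basic ${token}`;
}

// e.g. basicAuthHeader('apikey', '<your generated apikey>')
```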
Note: You can save the request definition for further use by clicking Save
As… Request Name: Conv_GetWorkspaces, Collection: Watson Services.
Authorization: as in Step 93
Headers: as in Step 93
Step 100 Send (POST request) an initial message to start our conversation
Authorization: as in Step 93
Headers: as in Step 93
Body: (raw)
{"input":
{"text": ""},
"context": {
"conversation_id": "",
"system": {
"dialog_stack":[{"dialog_node":"root"}],
"dialog_turn_counter": 0,
"dialog_request_counter": 0
}
}
}
Url: as in Step 93
Authorization: as in Step 93
Headers: as in Step 93
{"input":
{"text": "turn on the lights"},
"context": {
"conversation_id": "",
"system": {
"dialog_stack": [
{
"dialog_node": "root"
}
],
"dialog_turn_counter": 1,
"dialog_request_counter": 1,
"_node_output_map": {
"Welcome": [
0
]
},
"branch_exited": true,
"branch_exited_reason": "completed"
}
}
}
Note: The context object is the memory of your conversation. For any
further request, always use the context object of the previous response. This
enables the multi-session capability of the Watson Assistant service.
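The pattern looks like this in JavaScript; sendToWatson is a placeholder for the actual POST to the .../message endpoint, so this is a sketch of the request/response loop rather than working client code.

```javascript
// V1 message loop: each request after the first sends back the context
// object received in the previous response.
async function converse(sendToWatson, texts) {
  let context = {};
  const outputs = [];
  for (const text of texts) {
    const response = await sendToWatson({ input: { text }, context });
    context = response.context; // carry the conversation memory forward
    outputs.push(response.output);
  }
  return outputs;
}
```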
Step 102 You can proceed testing the service with other API calls.
Step 104 Move into this folder and execute the following command
npm init
Specify a name, description and author when prompted, keep the other
defaults. The result should be similar to the following in package.json:
{
"name": "csadconversation",
"version": "1.0.0",
"description": "A simple Watson Assistant example",
"main": "app.js",
"scripts": {
"start": "node app.js",
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "Klaus-Peter Schlotter",
"license": "ISC"
}
Step 105 Adjust your package.json file so that it matches the highlighted entries shown above (main and scripts):
Step 106 Open your project folder in Visual Studio Code, or start Visual Studio
Code with the command “code .” from within the project folder. In the
menu select View → Integrated Terminal
Step 107 In the Integrated Terminal or another terminal, execute the following
commands to install the additional node modules needed for our project
Step 108 Create config.js in the project root folder and enter your service credentials.
You can create files and folders for your project in the Visual Studio Code
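A sketch of what config.js might look like, matching the fields that controllers/watson.js reads later in this lab. All values are placeholders; substitute your own service credentials.

```javascript
// config.js sketch: the property names mirror what controllers/watson.js
// reads (config.watson.assistant.version, .iam_apikey, .url, .assistantId).
// All values below are placeholders.
const config = {
  watson: {
    assistant: {
      version: '2019-02-28',          // API version date; adjust as needed
      iam_apikey: '<your apikey>',
      url: 'https://gateway.watsonplatform.net/assistant/api',
      assistantId: '<your assistant id>',
    },
  },
};

module.exports = config;
```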
Step 110 In app.js enter the following code to import the dependencies and
instantiate the variables. Save and close the file.
app.listen(port, () => {
console.log('Express app started on port ' + port);
})
Basic Node.js setup with bodyParser for JSON payloads and ejs defined as
the template engine.
Step 111 In the project root create a folder named routes and in there create a file
named watson.js with the following content: → (routes/watson.js). Save and
close the file.
const express = require('express');
const router = express.Router();
const watsonController = require('../controllers/watson');

router.get('/', watsonController.getIndex);
router.post('/', watsonController.postMessage);
module.exports = router;
The router defines the URLs we accept from the web, each mapped to a
handler function defined in the controller file: getIndex() renders the index
page, and postMessage() is a pure API function that just returns JSON.
Step 112 In the project root create a folder named controllers and in there create a
file named watson.js with the following content: → (controllers/watson.js)
const AssistantV2 = require('ibm-watson/assistant/v2');
const { IamAuthenticator } = require('ibm-watson/auth');
const config = require('../config');
const watsonAssistant = new AssistantV2({
version: config.watson.assistant.version,
authenticator: new IamAuthenticator({
apikey: config.watson.assistant.iam_apikey
}),
url: config.watson.assistant.url
});
let assistantSessionId = null;
exports.getIndex = (req, res, next) => {
watsonAssistant.createSession({
assistantId: config.watson.assistant.assistantId
})
.then(result => {
assistantSessionId = result.result.session_id;
res.render('index');
})
.catch(err => {
console.log(err);
})
}
exports.postMessage = (req, res, next) => {
let text = "";
if (req.body.input) {
text = req.body.input.text;
}
watsonAssistant.message({
assistantId: config.watson.assistant.assistantId,
sessionId: assistantSessionId,
input: {
'message_type': 'text',
'text': text
}
})
.then(result => {
console.log(JSON.stringify(result, null, 2));
res.json(result); // just returns what is received from assistant
})
.catch(err => {
console.log(err);
})
}
getIndex() creates a session with the assistant and returns the compiled
index.ejs (see next steps) to the client. All client logic is defined as AJAX
functions in the public/js/scripts.js file.
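Those AJAX functions can be as small as a single fetch call. A sketch of what public/js/scripts.js might contain (the fetchFn parameter is an assumption of this sketch, there only so the function can be exercised outside a browser; in the browser just call sendMessage(text)):

```javascript
// POST the user's text to our Express route (postMessage in the controller)
// and return the assistant's JSON reply.
async function sendMessage(text, fetchFn = fetch) {
  const res = await fetchFn('/', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ input: { text } }),
  });
  return res.json();
}
```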
Note: The session object has a timeout defined in the assistant’s settings.
The lite version of the assistant has a maximum of 5 minutes. When the chat
is inactive for more than five minutes you get an error in postMessage().
The postMessage() function just returns the information from the Assistant
service to the client (see the console log), without any interaction. Our help
request for the time will not yet be answered.
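As a sketch of how that timeout could be handled (this is an assumption, not part of the lab code; check the exact error object the ibm-watson SDK raises in your environment), the catch block in postMessage() could detect the expired session and create a new one before retrying:

```javascript
// Hypothetical helper: decide whether an error from the ibm-watson SDK
// means the session expired. The SDK typically reports an HTTP 404 with an
// "Invalid Session" message, but the exact shape is an assumption here.
function isSessionExpired(err) {
  return !!err && err.code === 404 && /session/i.test(err.message || '');
}

// In the catch block of postMessage() you could then call createSession()
// again instead of just logging the error.
console.log(isSessionExpired({ code: 404, message: 'Invalid Session' })); // true
console.log(isSessionExpired({ code: 500, message: 'Server error' }));    // false
```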
Note: The ejs view engine controls the layout with code indentation. For a
correct layout you have to adjust this manually or copy the code from the
code snippets file.
Step 114 In the views folder create a header.ejs file with the following content:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>EAG Watson Assistant Lab</title>
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" crossorigin="anonymous">
<link rel='stylesheet' href='/stylesheets/styles.css' />
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js" crossorigin="anonymous"></script>
<script src="/js/scripts.js"></script>
</head>
<body class="container">
<h2>
IBM EAG Watson Assistant Lab
</h2>
Step 115 In the views folder create a footer.ejs file with the following content:
Step 116 In the views folder create a 404.ejs file with the following content:
Step 117 In the views folder create an index.ejs file with the following content:
Step 118 To add some styling and additional JavaScript files to the webpage, add a
folder named public to the project root folder.
In app.js this folder was defined as static and is therefore available to the
rendered HTML page.
Step 119 In this public folder add the following two folders:
stylesheets → public/stylesheets
js → public/js
Step 120 In the stylesheets folder add a file styles.css with the following content:
body {
  padding: 50px;
  font: 14px "Lucida Grande", Helvetica, Arial, sans-serif;
}

a {
  color: #00B7FF;
}

.form-control {
  margin-right: 5px;
}
Step 121 In the js folder add a file scripts.js with the following content:
The context variable was used to handle Assistant API V1 requests.
(See previous versions of this document).
// context is only needed for Assistant API V1 requests (see note above);
// declare it so the check below does not throw a ReferenceError.
var context = null;

// Appends a line to the chat history. Note: the element id "chatlog" is an
// assumption here; adjust it to match the markup in your index.ejs.
function updateChatLog(sender, text) {
  var log = document.getElementById("chatlog");
  if (log) {
    log.innerHTML += "<p><strong>" + sender + ":</strong> " + text + "</p>";
  }
}

function sendMessage() {
  var text = document.getElementById("text").value;
  updateChatLog("You", text);
  var payload = {};
  if (text) {
    payload.input = { "text": text };
  }
  if (context) {
    payload.context = context;
  }
  var xhr = new XMLHttpRequest();
  xhr.onreadystatechange = function () {
    if (xhr.readyState == 4) {
      if (xhr.status == 200) {
        var json = JSON.parse(xhr.responseText);
        context = json.context;
        updateChatLog("Watson", json.result.output.generic[0].text);
      }
    }
  };
  xhr.open("POST", "./", true);
  xhr.setRequestHeader("Content-type", "application/json");
  xhr.send(JSON.stringify(payload));
}

function init() {
  document.getElementById("text").addEventListener("keydown", function (e) {
    if (!e) {
      e = window.event;
    }
    if (e.keyCode == 13) {
      sendMessage();
    }
  }, false);
  sendMessage();
}
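The response handling in sendMessage() boils down to pulling the first generic text reply out of the JSON that our POST route forwards (the raw Assistant V2 response returned by res.json(result)). As a small, testable sketch of that extraction (field names follow the V2 response shape used above):

```javascript
// Extracts the first text reply from the JSON returned by our POST route.
// Returns an empty string when no reply is present.
function extractReply(json) {
  var generic = json && json.result && json.result.output && json.result.output.generic;
  return (generic && generic.length > 0) ? generic[0].text : '';
}

var sample = { result: { output: { generic: [{ response_type: 'text', text: 'Hi! How can I help you?' }] } } };
console.log(extractReply(sample)); // "Hi! How can I help you?"
console.log(extractReply({}));     // ""
```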
Step 123 Return to the terminal/command window. If the server is still running, enter
Ctrl-C to kill the application so that your latest code changes will be loaded.
Step 124 Test the application on your local workstation with the following
command(s).
npm start
Step 125 In a web browser, navigate to the following URL. If necessary, refresh the
existing page to pick up the changes.
http://localhost:3000/
Step 126 You should see your IBM Watson Conversation application running! Notice
that your client application has already contacted Watson and initiated a
new conversation. As you’ll recall, you can review the server console to see
the JSON returned by the Conversation service.
Step 127 Enter the following messages and observe Watson’s responses in the
Conversation History. The responses will be the same as those you saw earlier
when performing tests in the Conversation tool web application. The
difference this time is that you are using the API to embed this conversation
into an external application!
Hello
The headlights
Farewell
Step 128 Feel free to experiment with other phrases to further explore the dialog
branches that you created in earlier steps.
Step 129 At this point, you should review the Node.js server console and the
developer tools of the browser to see the full JSON object that was
returned. There’s more data there than we displayed in this demo ☺
For example, try the book_table intent and see in the Node.js console what
type of return messages you get from the assistant.
Step 130 The Node.js application we have created so far does nothing more than
receive user input, send it to the Watson Assistant service, and return the
received text to the user. In the next section, and also in Lab 5, we will see
how further backend integration can be achieved.
This is basically the same functionality the assistant’s preview link
provides.
Step 131 The simple application (created in this guide) passes the user-supplied input
to the Watson Assistant service. Once Watson analyzes the input and
returns a JSON response, that response is passed back to the client
application. In this section, you will modify the code to detect specific
intents so that you can perform various actions in your own system(s).
Step 132 Locate the following file and open it with a text/code editor.
…/csadConversation/controllers/watson.js
Step 133 If you accurately followed the steps in this guide, lines 41 – 44 should
contain the application logic that is executed after Watson returns a
response. In the .then() block, insert a blank line so that line 42 is now
blank. This will be the insertion point for your new code snippet. You should
also comment out line 17, since you will be using the server console to
display updates from your custom app actions. Your code should now look as
follows:
Step 134 At line 42, enter the following code to identify the intent and entity returned
by Watson and perform some custom actions. You will also update the
JSON response object to include a new message that should be sent to the
end user with your custom information.
Step 135 Refer to the following table for a description of the code:
Lines    Description
42 – 43  Return immediately when there is no input text.
45       The user input that was passed to the Assistant service.
46 – 50  If the Assistant service identified an intent from the input…
         48 – 49: Get the intent and print its name and confidence level.
51 – 59  If the Assistant service identified an entity from the input…
         52 – 54: Get the entity and print its name and value.
         55: Check for a specific entity so that your app can respond accordingly.
         56 – 57: Get and print the current system time.
         58: Modify the Watson output with your new custom app response.
62       Send the modified JSON response to the client application.
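The snippet itself is distributed with the lab’s code snippets file and is not reproduced here; based on the table above, the logic might be sketched as follows (the entity name 'time', the response wording, and the function name are assumptions, not the lab’s exact code):

```javascript
// Hypothetical sketch of the custom app actions the table describes.
// "output" is the output object of an Assistant V2 message response
// (result.result.output in the controller). We inspect intents and
// entities, and append a custom reply for the time request mentioned
// earlier in this lab.
function handleAssistantOutput(output) {
  if (!output || !output.generic) return output; // nothing to modify

  if (output.intents && output.intents.length > 0) {
    var intent = output.intents[0];
    console.log('Intent: ' + intent.intent + ' (confidence ' + intent.confidence + ')');
  }

  (output.entities || []).forEach(function (entity) {
    console.log('Entity: ' + entity.entity + ' = ' + entity.value);
    if (entity.entity === 'time') { // assumed entity name
      var now = new Date().toLocaleTimeString();
      output.generic.push({ response_type: 'text', text: 'The current time is ' + now + '.' });
    }
  });

  return output;
}
```

In the controller, you would call something like this on result.result.output inside the .then() block, before passing the (now modified) result to res.json().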
Step 136 On line 49, you obtain and print IBM Watson’s confidence level. This value
is intended to be an indicator of how well Watson was able to understand
and respond to the user’s current query. If this value is lower than you are
comfortable with (let’s say < 60%), you could abort the operation and ask
the user for clarification. Watson will rarely provide an answer with a 100%
confidence level.
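A minimal sketch of such a confidence gate (the 60% threshold is just the example value from the text, and the function name is an assumption):

```javascript
// Returns true when the top-ranked intent meets the confidence threshold.
// Below the threshold, the app could abort and ask the user to rephrase.
function confidentEnough(intents, threshold) {
  threshold = threshold === undefined ? 0.6 : threshold;
  return !!intents && intents.length > 0 && intents[0].confidence >= threshold;
}

console.log(confidentEnough([{ intent: 'turn_on', confidence: 0.95 }])); // true
console.log(confidentEnough([{ intent: 'turn_on', confidence: 0.40 }])); // false
```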
Step 138 Return to the terminal/command window. If the server is still running, enter
Ctrl-C to kill the application so that your latest code changes will be loaded.
Step 139 Test the application on your local workstation with the following
command(s).
npm start
Step 140 You should see some output indicating the application is running. In a web
browser, navigate to the following URL. If necessary, refresh the existing
page to pick up the changes.
http://localhost:3000/
Step 141 Test the application using the following inputs. After typing each line, review
the results on the server console to see your custom app actions based off
specific intents and entities.
Hello
Step 142 In the screenshots below, the custom application will output messages to
the server console based on the returned intent. For example:
In this demo exercise, you have been working locally and running your
Node.js server on your own workstation. You can very easily push your
application to the IBM Cloud environment and make it publicly accessible to
customers, partners, friends, and/or family!
Step 143 Create a manifest.yml file in the project folder with the following content:
Under services, use the name of your Conversation service created in Step
7. As name, you should specify a unique name (or use the random-route
option below the name statement). Use memory: 256M if you use a Lite account.
applications:
- name: csadConversation-xxx
  random-route: true
  path: .
  buildpack: sdk-for-nodejs
  command: node app.js
  memory: 256M
Step 145 Stop any of your running cloud applications so that you do not exceed the
memory limit of the IBM Cloud “Lite” account.
Step 146 Make sure you are logged in to your IBM Cloud account.
(See Workstation Setup document Step 9)
Step 147 In a terminal window, in the project folder, execute the command
After completion you should see a message that the application is running
on the IBM Cloud.
Step 148 You will also see the application in your IBM Cloud console.